SaaS Platform: 8 Seconds to 200ms
A B2B SaaS product was losing trials to a slow dashboard: 47 N+1 queries, exhausted connection pools, and missing indexes, all fixed in 4 days with no downtime.

Background
The Client
A B2B SaaS company serving 50,000 users across 1,200+ business tenants. Their platform — a project management and analytics tool — ran on PostgreSQL hosted on AWS RDS. The engineering team was eight people strong, none of whom had dedicated database expertise.
The main dashboard was the first thing every user saw after login. When load times climbed past 8 seconds, free trial conversion dropped sharply. Support tickets about slowness doubled week over week. The CTO estimated they were losing 15–20% of trials before users ever reached their second session.
The Problem
What Was Slowing Them Down
A DharmOps EXPLAIN ANALYZE audit of the dashboard endpoint uncovered four problems, each of which would have been painful on its own; together they were catastrophic:
- 47 N+1 queries on the main dashboard render. Each dashboard widget was querying the database once per row returned. A tenant with 500 projects triggered 500 individual SELECT statements on every page load. What should have been 5 queries was firing 500+.
- Connection pool exhaustion under concurrent load. With 200+ concurrent users, the application was opening new database connections faster than PostgreSQL could serve them. During peak hours, requests queued for up to 4 seconds waiting for a connection before a query even ran.
- Missing indexes on tenant_id foreign key columns across eight tables. Every multi-tenant query was performing full table scans to filter by tenant. On tables with 5M+ rows shared across all tenants, this was the single most expensive scan pattern in the system; the sketch after this list shows the pattern.
- Read-heavy dashboard traffic hitting the primary instance. All application traffic, reads and writes alike, was routed to the primary RDS instance. Dashboard queries were competing with write operations for I/O and locks, compounding latency on both sides.
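To make the index finding concrete, here is a minimal sketch of the kind of check the audit ran. The projects table, its columns, and the plan numbers are hypothetical stand-ins, not the client's schema:

```sql
-- Reproduce the tenant-filter access pattern and look at the plan.
EXPLAIN ANALYZE
SELECT id, name, status
FROM projects
WHERE tenant_id = 42
ORDER BY created_at DESC
LIMIT 20;

-- With no index on tenant_id, the interesting node is a full scan
-- (sort and limit nodes omitted; numbers are illustrative):
--   Seq Scan on projects  (actual time=0.021..1843.502 rows=500 loops=1)
--     Filter: (tenant_id = 42)
--     Rows Removed by Filter: 4999500
```

Every tenant-scoped widget pays for reading and discarding millions of other tenants' rows on each render.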
The Solution
What We Did
DharmOps completed a full turnaround in 4 working days. All index changes used PostgreSQL's CREATE INDEX CONCURRENTLY, so there were no table locks and no downtime. PgBouncer was deployed alongside the existing infrastructure with no application code changes.
- Eliminated 47 N+1 patterns through batch query rewrites. Used EXPLAIN ANALYZE to identify every N+1 loop across the dashboard. Rewrote all 47 using IN clauses and JOIN-based batch loading; a before/after sketch follows this list. The dashboard endpoint dropped from 500+ queries per render to 6.
- Deployed PgBouncer in transaction pooling mode. Installed PgBouncer as a connection proxy between the application and RDS (see the pooling sketch after this list). Transaction pooling reduced peak database connections from 800+ to under 50, eliminating the connection queuing and timeouts.
- Created partial composite indexes on (tenant_id, created_at DESC) across all eight tables. Designed indexes that matched the exact query access pattern for multi-tenant filtering; an example of the DDL follows this list. Index-only scans replaced full table scans on every dashboard widget. Scan cost dropped 99.6% on the largest tables.
- Provisioned a read replica and routed all dashboard traffic to it. Added an RDS read replica in the same availability zone. Updated the application's database routing layer to send all SELECT queries from the dashboard service to the replica, freeing the primary for writes only (a replication-lag check is sketched after this list).
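A minimal sketch of the batch rewrite pattern from the first item above, assuming hypothetical projects and tasks tables; ANY with an array parameter is the bind-friendly form of the IN clause:

```sql
-- Before (the N+1 shape): the app looped over projects and fired this
-- once per row, so 500 projects meant 500 round trips.
SELECT count(*) FROM tasks WHERE project_id = $1;

-- After: one batch query covering every project on the dashboard.
SELECT project_id, count(*) AS open_tasks
FROM tasks
WHERE project_id = ANY($1)   -- $1 binds the full array of project ids
GROUP BY project_id;
```

Projects with no tasks simply don't appear in the result, so the application fills in zeros for any missing ids.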
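For the pooling change, the decisive setting is pool_mode = transaction, which returns a server connection to the pool at every transaction boundary instead of pinning it to a client session. The configuration values below are illustrative, not the client's; the verification commands are PgBouncer's admin console:

```sql
-- pgbouncer.ini essentials (illustrative values, reproduced as comments):
--   [pgbouncer]
--   listen_port       = 6432
--   pool_mode         = transaction
--   max_client_conn   = 1000  ; client connections PgBouncer will accept
--   default_pool_size = 20    ; server connections per database/user pair

-- Verify from the admin console: psql -p 6432 -U pgbouncer pgbouncer
SHOW POOLS;   -- cl_active vs sv_active: many clients, few server connections
SHOW STATS;   -- per-database totals, including time spent waiting
```

One caveat worth checking before adopting transaction pooling: session-level state such as SET, advisory locks, and session-scoped prepared statements does not carry across transactions.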
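The index work, sketched for one of the eight tables. The case study fixes the (tenant_id, created_at DESC) key and the concurrent build; the index name and the partial WHERE predicate are assumptions added for illustration:

```sql
-- CONCURRENTLY builds the index without an exclusive lock, so reads and
-- writes continue throughout. It cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY IF NOT EXISTS projects_tenant_created_idx
    ON projects (tenant_id, created_at DESC)
    WHERE deleted_at IS NULL;   -- hypothetical partial predicate
```

Matching the index column order and sort direction to the query's WHERE and ORDER BY is what lets the planner serve a dashboard widget straight from the index.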
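Routing reads to a replica trades lock and I/O contention for replication lag, so the new setup is worth monitoring. A standard lag check, run on the replica itself:

```sql
-- Approximate replication lag: time since the last replayed transaction.
-- Note: on an idle primary this grows even when the replica is caught up.
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
```

If any dashboard widget needs read-your-own-writes behavior, those specific reads can stay on the primary while everything else uses the replica.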
The Results
Measurable Outcomes
Dashboard load went from 8 seconds to 200 milliseconds. Free trial conversion recovered within the first week. The engineering team was able to remove a significant portion of the caching layer they had added as a workaround — simplifying their infrastructure considerably.
"We were losing trials because people assumed our product was just slow. That assumption was costing us real money. DharmOps identified and fixed the root causes in 4 days — problems our team had been fighting for 6 months without progress."
Sound Familiar?
Slow dashboards, N+1 queries, connection pool issues — we've seen them all. Tell us what you're dealing with and we'll get you answers fast.