Postgres Is All You Need

Every modern web application seems to follow the same pattern: PostgreSQL for relational data, Redis for caching and queues, MongoDB for document storage, ElasticSearch for search. Before you know it, you're managing four or five different data systems, each with its own deployment, monitoring, and failure modes.

But what if you didn't need all of that? Pat Spizzo makes a compelling case in this talk that PostgreSQL alone can handle most of what these specialized tools do - and often well enough for the majority of applications.

What PostgreSQL Can Replace

Queues (replacing Redis/RabbitMQ): PostgreSQL's LISTEN/NOTIFY mechanism lets producers wake idle workers without polling, and SELECT ... FOR UPDATE SKIP LOCKED lets multiple workers claim jobs concurrently without blocking each other - together a solid foundation for job queues. For most applications that process a few hundred jobs per second, a Postgres-backed queue is more than sufficient. Laravel even supports database-backed queues out of the box.
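A minimal sketch of that pattern might look like this (the jobs table and channel name are illustrative, not from the talk):

```sql
-- Hypothetical jobs table; column names are for illustration only.
CREATE TABLE jobs (
    id         bigserial PRIMARY KEY,
    payload    jsonb NOT NULL,
    status     text NOT NULL DEFAULT 'pending',
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Each worker atomically claims one pending job. SKIP LOCKED makes
-- concurrent workers skip rows another worker already holds, so no
-- two workers ever process the same job.
UPDATE jobs
SET status = 'running'
WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending'
    ORDER BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, payload;

-- Producers can wake idle workers instead of forcing them to poll:
NOTIFY job_queue;  -- workers have previously run: LISTEN job_queue;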

Document storage (replacing MongoDB): The jsonb data type in PostgreSQL gives you document-style storage with full indexing support using GIN indexes. You can query nested JSON fields, create partial indexes on specific keys, and still benefit from ACID transactions - multi-document transactions are something MongoDB only added in version 4.0.
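As a sketch of what that looks like in practice (table and key names are assumed for illustration):

```sql
-- Illustrative schema: whole documents stored as jsonb.
CREATE TABLE events (
    id  bigserial PRIMARY KEY,
    doc jsonb NOT NULL
);

-- A GIN index accelerates containment queries over the document.
CREATE INDEX events_doc_idx ON events USING GIN (doc);

-- Containment query: every event whose doc has user.country = 'DE'.
SELECT id FROM events
WHERE doc @> '{"user": {"country": "DE"}}';

-- Partial expression index on one hot nested key:
CREATE INDEX events_type_idx ON events ((doc->>'type'))
WHERE doc ? 'type';
```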

Full-text search (replacing ElasticSearch): PostgreSQL has built-in full-text search with tsvector and tsquery. It supports ranking, stemming, multiple languages, and phrase matching. For most applications that aren't search-first products, this is plenty.
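A sketch of ranked search using those primitives (schema and query terms are illustrative; the generated column requires PostgreSQL 12 or newer):

```sql
CREATE TABLE articles (
    id    bigserial PRIMARY KEY,
    title text,
    body  text,
    -- Generated tsvector column, kept in sync with title/body automatically.
    search tsvector GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
    ) STORED
);

CREATE INDEX articles_search_idx ON articles USING GIN (search);

-- Ranked search with stemming: 'scaling' also matches 'scale', 'scaled'.
SELECT id, ts_rank(search, query) AS rank
FROM articles, to_tsquery('english', 'postgres & scaling') AS query
WHERE search @@ query
ORDER BY rank DESC;
```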

Caching (replacing Redis): With UNLOGGED tables, you get fast write performance because nothing is written to the WAL - the trade-off is that an unlogged table is truncated after a crash, which is exactly the durability contract you already accept from a cache. Combine that with Postgres's shared-buffer cache keeping hot rows in memory and you have a caching layer without an extra service.
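A minimal cache table along those lines might look like this (names and the TTL scheme are illustrative assumptions):

```sql
-- UNLOGGED skips the WAL: fast writes, but the table is emptied
-- after a crash - acceptable for cache data.
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb NOT NULL,
    expires_at timestamptz NOT NULL
);

-- Upsert an entry with a 5-minute TTL:
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42:profile', '{"name": "Ada"}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- Reads filter out expired entries; a periodic job can DELETE them in bulk.
SELECT value FROM cache
WHERE key = 'user:42:profile' AND expires_at > now();
```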

Time-series data (replacing TimescaleDB/InfluxDB): Declarative table partitioning by date range (native since PostgreSQL 10), combined with BRIN indexes that stay tiny on append-mostly, time-ordered data, makes PostgreSQL a viable option for time-series workloads.
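Sketched out, with an assumed metrics schema for illustration:

```sql
-- Declarative range partitioning by month (PostgreSQL 10+).
CREATE TABLE metrics (
    recorded_at timestamptz NOT NULL,
    sensor_id   int NOT NULL,
    value       double precision
) PARTITION BY RANGE (recorded_at);

CREATE TABLE metrics_2024_01 PARTITION OF metrics
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- BRIN indexes suit append-only, time-ordered data: tiny on disk,
-- effective because recorded_at correlates with physical row order.
CREATE INDEX metrics_time_idx ON metrics USING BRIN (recorded_at);

-- Retiring old data is an instant partition drop, not a bulk DELETE:
DROP TABLE metrics_2024_01;
```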

When Does This Approach Break Down?

This isn't about PostgreSQL being the best at everything. ElasticSearch will outperform Postgres for complex search queries across billions of documents. Redis will be faster for sub-millisecond caching. The question is whether your application actually needs that level of performance.

Most applications don't. Most applications have tens of thousands of rows, not billions. Most search needs are "find users by name" rather than "relevance-ranked full-text across terabytes." For those cases, adding another service to your infrastructure creates complexity that costs more than the marginal performance gain.

The Real Benefit

Running fewer services means fewer things that can break at 3 AM. It means one backup strategy, one monitoring setup, one connection pool to manage, and one technology for your team to master. That operational simplicity compounds over time - especially for small teams.

Start with Postgres. Add specialized tools only when you can measure a specific bottleneck that Postgres can't solve.