
Beyond the API: Why Critical Infrastructure is Going Streaming

Ben Papillon
Ryan Echternacht
03/04/2026

The Request-Response Era

For the past 15 years, APIs have been the foundation of how software systems talk to each other. RESTful APIs gave us a clean mental model: your application asks a question, a service answers. The pattern was straightforward and it worked. We've built many products this way.

But then Google Docs came along and demonstrated a new model for web applications. Multiple people editing the same document simultaneously, seeing each other's cursors move. Changes appearing instantly: no refresh button, no "save and reload," no conflicts to manually resolve. It felt like magic.

That experience redefined how we thought about building software. If a document editor could work like that, why couldn't everything else?

A generation of infrastructure companies took that question seriously. Firebase built a database that synced state across every connected client automatically, worked offline and caught up seamlessly when it reconnected. Pusher made real-time pub/sub something any developer could use in an afternoon instead of a months-long infrastructure project. LaunchDarkly architected feature flags so thoroughly that your application keeps running even if their entire service goes offline. Convex made databases reactive, with queries that automatically stay up-to-date as data changes, like React components rerendering.

These products fundamentally changed what's possible to build. Real-time collaboration stopped being a feature only Google-scale companies could afford. Live updates became the default instead of the exception. Data flows to where it's needed, state lives locally, and checks happen instantly. The infrastructure just works, even when networks don't.

What else could we build this way?

The Pattern: Stream State, Evaluate Locally

Here's what these tools actually do differently.

Traditional infrastructure waits for you to ask. You make an API call when you need data and the service responds. You cache the result if you're smart about it and you invalidate the cache when... well, hopefully you get that right. You write retry logic for failures, handle timeouts, and deal with stale data. It's a lot of work to make something that should be simple actually reliable.

Streaming infrastructure inverts this. The service maintains an open connection to your application, usually through WebSockets. When data changes on the server, it pushes updates to your client. Your application keeps a local copy of the state it needs. When you need to check something, you're reading from memory, not making a network request.
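The inversion can be sketched in a few lines. This is an illustrative toy, not any vendor's SDK: the transport layer (say, a WebSocket handler) feeds pushed updates into a local store, and every read is a synchronous memory lookup.

```typescript
// Minimal sketch of the streaming pattern: the server pushes updates into
// local state; reads never touch the network.

type Update = { key: string; value: unknown };

class StreamedState {
  private state = new Map<string, unknown>();

  // Called by the transport layer for each update the server pushes.
  applyUpdate(update: Update): void {
    this.state.set(update.key, update.value);
  }

  // Reads return the last state the server pushed -- no request, no await.
  get(key: string): unknown {
    return this.state.get(key);
  }
}

// Usage: once an update arrives, subsequent reads see it instantly.
const cache = new StreamedState();
cache.applyUpdate({ key: "plan", value: "pro" });
cache.get("plan"); // → "pro", straight from memory
```

The important property is the synchronous `get`: application code can call it in hot paths without budgeting for network latency or failure.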

LaunchDarkly streams all your feature flags to your application on startup and keeps them updated. When you check `if (flags.newFeature)`, you're reading from memory, not making an HTTP request. It's fast and can fall back to the last known value if LaunchDarkly's API is unreachable.
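A hypothetical sketch of that flag-store shape (the names here are illustrative, not LaunchDarkly's actual SDK API): the streaming connection keeps a local flag map in sync, and every check is a memory read with a safe default.

```typescript
// Illustrative in-memory flag store: synced by a streaming connection,
// read synchronously, with a fallback when no sync has ever succeeded.

class FlagStore {
  private flags: Record<string, boolean> = {};
  private hasSynced = false;

  // The initial payload and every later update both land here.
  sync(flags: Record<string, boolean>): void {
    this.flags = { ...flags };
    this.hasSynced = true;
  }

  // A flag check is a local lookup; unknown flags use the caller's fallback.
  isEnabled(name: string, fallback = false): boolean {
    if (!this.hasSynced) return fallback;
    return this.flags[name] ?? fallback;
  }
}

const store = new FlagStore();
store.isEnabled("newFeature");  // false: nothing synced yet, fallback applies
store.sync({ newFeature: true });
store.isEnabled("newFeature");  // true: read from local memory
```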

Firebase does this with your entire database. Query for data once, and Firebase keeps it up-to-date automatically. When another user changes something, the server pushes the update to you. You write code as if the data is just always current, because it is.

Convex takes it further. Your queries are TypeScript functions that run in the database. The database tracks what data each query reads. When that data changes, the query reruns automatically and pushes the new result to your client. You no longer need to maintain your own refresh and caching logic.

In each case, your code works with up-to-date state locally, while the service handles the difficult (and often error-prone) work of keeping that state fresh.

Why This Wins for Critical Infrastructure

Developer experience. You outsource the hard parts—connection management, state synchronization, retry logic, cache invalidation—to the service. Your code gets simpler. You're not maintaining WebSocket reconnection logic or figuring out when to invalidate cached data. The infrastructure handles it, and you focus on building your product.

Reliability. The service being down doesn't mean your application stops working. You have the last-known state and can continue operating with it. LaunchDarkly explicitly designs for this, ensuring your feature flags work even during complete outages. You're not writing code to handle "what if the billing check fails"; you're using the last known billing state stored locally. The trade-off is that state might be slightly stale during an outage, but you can provide service instead of showing errors.
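That trade-off can be sketched directly (the shape of the billing state here is made up for illustration): checks read the last state the stream delivered, so an outage degrades to "possibly stale" rather than "unavailable."

```typescript
// Illustrative last-known-state fallback: the stream refreshes a local copy,
// and checks keep answering even when no new updates can arrive.

interface BillingState { plan: string }

let lastKnownBilling: BillingState | null = null;

// The streaming connection calls this on every pushed update.
function onBillingUpdate(state: BillingState): void {
  lastKnownBilling = state;
}

// During an outage no pushes arrive, but this check still works.
function hasProPlan(): boolean {
  if (lastKnownBilling === null) return false; // never synced: fail closed
  return lastKnownBilling.plan === "pro";      // possibly stale, still usable
}
```

The only policy decision left to the application is the cold-start case: before the first sync there is no last-known state, so you choose a default (failing closed, above).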

Performance. Because state lives locally, you avoid API calls in hot paths. Checking a feature flag or permission doesn't add 50-100ms to your request. You can check things in tight loops, on every request, without worrying about the performance impact.

What's now possible. Real-time collaboration used to require significant infrastructure investment. Now it's a weekend project with Firebase or Convex. Instant feature rollouts across your entire user base? LaunchDarkly makes it trivial. It's interesting to see what becomes buildable when the infrastructure gets out of the way.


The Engineering Challenge

Building streaming architecture means taking on significantly more surface area than traditional request-response services. In addition to the normal challenges of building a service, you're now responsible for persistent connection management at scale, WebSocket reconnection logic, backpressure handling, and ordered delivery guarantees. State synchronization becomes your problem—initial loads, incremental updates, consistency during network partitions, detecting and recovering from stale state.

With a traditional API, a lot of this complexity lives on the client side. Developers handle their own caching, write their own retry logic, manage their own state invalidation. It's work, but it's their work. With streaming, you're taking that work on yourself. You're promising the data will always be current, connections will stay alive, updates will arrive in order.
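Reconnection alone illustrates the surface area the provider takes on. A common sketch is exponential backoff with jitter; the base and cap values below are arbitrary illustrations, not any vendor's defaults.

```typescript
// Exponential backoff with jitter: delays grow per attempt, are capped,
// and are randomized so a fleet of clients doesn't reconnect in lockstep.

function backoffDelays(attempts: number, baseMs = 500, maxMs = 30_000): number[] {
  const delays: number[] = [];
  for (let i = 0; i < attempts; i++) {
    const exp = Math.min(baseMs * 2 ** i, maxMs);      // exponential growth, capped
    delays.push(exp / 2 + Math.random() * (exp / 2));  // jitter: [exp/2, exp)
  }
  return delays;
}
```

And this is just one concern: the same care has to go into resuming the stream without dropping or reordering updates after the connection comes back.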

The companies that get this right change what's possible to build. These are the tools the next generation will learn from.

The Future of Infrastructure

It makes sense when you think about it. The infrastructure developers use most, the stuff that gets checked constantly, lives in hot paths, and needs to be fast and reliable, benefits from being local. Streaming state instead of fetching it solves real problems: better performance, simpler failure modes, and less code to maintain.

Request-response still has its place. Mutations, writes, and operations you do once in a while. But for the things you check all the time, streaming is starting to look like the better default.

More infrastructure will probably head this direction. The engineering is hard, but the companies doing it well make that complexity disappear for their users. When infrastructure gets out of the way like that, you can focus on building what actually matters.