RSC Streaming: The Reverse Proxy Gotcha
Your RSC streaming might be silently buffered by Nginx defaults—here's how to fix it
You've built a React Server Components app with streaming. Suspense boundaries progressively render. Locally, content flows in chunks. Then you deploy behind Nginx, and everything arrives in one massive blob. Your streaming is silently broken.
The Symptom: Streaming That Isn't
RSC streaming sends HTML in chunks over a long-lived HTTP response. Suspense boundaries resolve and flush progressively—users see content as it becomes ready instead of waiting for everything. But reverse proxies like Nginx buffer responses by default.
// What you expect (streaming):
// Chunk 1: Header + nav → 50ms
// Chunk 2: Main content → 200ms
// Chunk 3: Sidebar data → 400ms
// What happens behind buffering proxy:
// ... silence for 400ms ...
// Everything at once → 400ms

The total load time is the same, but Time to First Byte (TTFB) degrades to match it: the first chunk now arrives with the last. Perceived performance tanks because users stare at a blank screen instead of seeing progressive content.
Why Proxies Buffer
Nginx's proxy_buffering is on by default. It's a sensible optimization: collect the entire upstream response, then forward it efficiently to slow clients. This frees upstream connections quickly and absorbs client/server speed mismatches.
# Nginx default behavior (implicit)
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;

For traditional request/response cycles, buffering is great. For streaming responses (RSC, SSE, long-polling), it's catastrophic. The proxy waits for the upstream to finish before forwarding anything, defeating the entire point of streaming.
The Fix: Disable Buffering
Option 1: Nginx Configuration
# For specific streaming routes
location /app {
proxy_pass http://nextjs:3000;
proxy_buffering off; # Critical for streaming
proxy_cache off;
# Also relevant for HTTP/1.1 chunked encoding
proxy_http_version 1.1;
chunked_transfer_encoding on;
}

Turning off buffering globally might impact performance for non-streaming routes. Apply it selectively to the routes that use RSC streaming.
Option 2: X-Accel-Buffering Header
Let your application control buffering per-response. Next.js (and other frameworks) can send this header to tell Nginx to skip buffering for specific responses:
// In your Next.js response (middleware or API route)
headers.set('X-Accel-Buffering', 'no');
// Or in next.config.js for all pages
module.exports = {
async headers() {
return [{
source: '/:path*',
headers: [{ key: 'X-Accel-Buffering', value: 'no' }]
}]
}
}
This is cleaner than changing the Nginx config because the application declares its own streaming intent. Note, however, that proxy_ignore_headers X-Accel-Buffering in your Nginx config would override it, so check your proxy settings.
Other Proxies Have the Same Problem
AWS ALB/ELB
Application Load Balancers don't buffer by default, but idle timeout can kill long-lived streaming connections. Default is 60 seconds—increase it for slow-streaming scenarios.
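For example, the idle timeout can be raised with the AWS CLI (the ARN is a placeholder and 300 seconds is an illustrative value; set it above your slowest streaming response):

```shell
# Raise the ALB idle timeout so long-lived streaming responses
# aren't cut off mid-stream (the default is 60 seconds).
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn "$YOUR_ALB_ARN" \
  --attributes Key=idle_timeout.timeout_seconds,Value=300
```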
Cloudflare
Cloudflare can buffer responses when its optimization features rewrite them. Send a Cache-Control: no-transform response header to opt out of transformations, and review HTML-rewriting features (like auto-minification) and caching settings for streaming routes. Enterprise plans offer more granular control.
Apache (mod_proxy)
# Disable buffering for streaming
SetEnv proxy-sendchunked 1
SetEnv proxy-sendcl 0

Debugging: Is Streaming Working?
DevTools Network tab shows chunked responses, but it's hard to see when chunks arrived. Use curl to verify actual streaming behavior:
# Watch chunks arrive as they stream (-N disables curl's output buffering)
curl -N -w "\n%{time_total}s\n" https://your-app.com/page

# Or compare first-byte time against total time
curl -so /dev/null -w "TTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" https://your-app.com/page

If TTFB roughly equals total time, streaming isn't reaching the client: something is buffering. If TTFB is significantly less than the total, chunks are flowing through.
Edge Cases Within the Edge Case
- gzip compression can introduce buffering: Nginx needs enough data to compress efficiently. Consider gzip_min_length settings or disabling compression for streaming routes.
- HTTP/2 multiplexing changes chunk delivery semantics. Test with both HTTP/1.1 and HTTP/2 to ensure streaming works across protocols.
- CDN caching fundamentally conflicts with streaming—you can't cache a response that's still being generated. Mark streaming routes as uncacheable.
- Connection keep-alive settings can prematurely close streaming connections. Ensure timeouts exceed your slowest streaming response.
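Several of these knobs end up in the same place. A consolidated Nginx location block for a streaming route might look like this (a sketch; the upstream name and timeout values are illustrative):

```nginx
location /stream {
    proxy_pass http://nextjs:3000;

    proxy_buffering off;       # forward chunks as they arrive
    proxy_cache off;           # in-progress responses can't be cached
    gzip off;                  # avoid compression-induced buffering

    proxy_http_version 1.1;
    chunked_transfer_encoding on;

    proxy_read_timeout 300s;   # must exceed your slowest stream
    proxy_send_timeout 300s;
}
```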
The Takeaway
RSC streaming only works end-to-end. Any proxy, CDN, or load balancer between your app and the browser can silently buffer responses. Test streaming behavior in production-like environments, not just next dev. When streaming breaks, the symptom is subtle: same content, just worse UX.