Server-Side Rendering: A Conspiracy Theory

2025-12-29

I have been using React since a former colleague shoved it down my throat towards the end of 2013. After the obligatory initial criticism, I eventually did see the benefits over my way of building user interfaces with jQuery and Backbone, and converted. Our initial naive attempts at pure client-side rendering and server pre-rendering with hydration were not that great[1]. Next.js was not yet a thing, or was at least in its infancy, and we suffered through a lot of hydration errors and hard-to-solve caching riddles in the years that followed.

At that point I had been working mostly on CMS-based web publishing solutions, and when Gatsby came along, it solved most of our issues. I was enthralled by the elegance of sidestepping the problem: just pre-render your pages when they change and serve static files. No caching, no database bottlenecks, trivial horizontal scaling. Unfortunately, Gatsby failed to find an economic foothold that could support the project in the long run and faced its fair share of cultural problems, which ultimately led to its demise in the form of an acquisition by Netlify (followed by a gutting and a quiet sweep under the carpet).

Tin-hat mode: engaged

Here is the thing: I still believe this is the right approach in most cases. But, much like the light bulb industry at the beginning of the last century, hosting providers realized that there is not a lot of business in websites that are trivially easy to run and maintain.

In the last year, my focus has moved away from publishing and towards dashboard-y projects and web applications[2]. And even though it is not in vogue any more (or maybe because of that), I strongly believe React is the right choice. The core team has pulled off an incredible amount of reinvention and improvement while largely keeping backwards compatibility. And the fact that it has reached such critical mass that AI agents can write it really well should not be neglected either.

But which framework should I build on? All the major projects are heavily biased towards server-side rendering. And if I were prone to conspiracy theories, I could connect that to the fact that most of them are sponsored or even managed by hosting platforms like Vercel and Netlify. And only a fool would think that standardizing on a rendering model that is hard to put into production yourself is not in their best interest!

The Acronym Soup

To make an informed decision, I need to look deeper into the up- and downsides of the different approaches. Let's chart the landscape first:

  • CSR (Client-Side Rendering): The browser downloads a minimal HTML shell, then JavaScript fetches data and renders the UI. Simple to implement, but with a slower initial load and poor SEO, since crawlers initially see an empty page. Google claims that their crawler does execute JavaScript, but they also suggest not relying on that and delivering content as HTML. And in the age of agents, where content is read more by machines than by humans, this is a no-brainer.
  • SSR (Server-Side Rendering): The server renders the full HTML for each request, sends it to the browser, then React "hydrates" it to make it interactive. Better SEO and faster First Contentful Paint, but higher server load and TTFB (time to first byte), since every request triggers a render. Next.js uses this with getServerSideProps or the App Router's default server components.
  • SSG (Static Site Generation): Pages are pre-rendered at build time into static HTML files. Extremely fast (served from a CDN), great SEO, but content is fixed until the next build. Ideal for blogs, docs, or marketing sites. Next.js does this with getStaticProps (see the sketch after this list), Gatsby is built around this model. The tricky part here is how to handle large content volumes, since full rebuilds and CDN uploads can take a long time. Gatsby has a way of partially updating the website based on content changes. Next.js does not.
  • ISR (Incremental Static Regeneration): Pages are statically generated but can be revalidated in the background after a set interval or on-demand. Combines SSG speed with fresher content without full rebuilds, but requires a server runtime.
  • Streaming SSR: A React 18+ feature where the server sends HTML in chunks as components resolve. Users see content progressively rather than waiting for the entire page. Works with Suspense boundaries. Essentially a performance optimization that mitigates the TTFB problem of naive SSR.
  • PPR (Partial Prerendering): An emerging Next.js pattern combining static shells with streaming dynamic content. The static parts are served instantly from cache while dynamic holes stream in.
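
To make the SSG/SSR split concrete, here is a minimal sketch in Next.js Pages Router terms, with a made-up fetchArticle standing in for a CMS or database call. The page component is identical in both cases; only the exported data-fetching function decides whether it renders at build time or per request.

```tsx
// pages/article.tsx (hypothetical page; fetchArticle stands in for a CMS or database call)
import type { GetStaticProps } from "next";

type Props = { title: string; body: string };

async function fetchArticle(): Promise<Props> {
  return { title: "Hello", body: "Rendered ahead of time or per request." };
}

// SSG: runs once at build time; the result is written out as static HTML.
export const getStaticProps: GetStaticProps<Props> = async () => {
  return { props: await fetchArticle() };
};

// SSR alternative: swap the export above for getServerSideProps and the exact
// same page is rendered on every request instead: always fresh, but every hit
// pays the render cost (higher TTFB).

export default function Article({ title, body }: Props) {
  return (
    <article>
      <h1>{title}</h1>
      <p>{body}</p>
    </article>
  );
}
```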

That's a lot! But hosting on Vercel is not an option in my case due to data regulations, and the fact that OpenNext even has to exist proves that scaling Next.js in the wild is not easy, which makes me disregard ISR and PPR. I am on a greenfield project, so we can focus on modern React features, and since Streaming SSR is simply a better version of traditional SSR, let's collapse the two. That leaves the three fundamental approaches: CSR, SSR and SSG[3].

The Numbers Don't Lie (But They Don't Shout Either)

There is a great breakdown on Developer Way that summarizes the performance impact of the different rendering strategies. And the results are not as clear-cut as the Twitterverse would like us to think. There is no doubt that the LCP (largest contentful paint) of uncached pages goes down the more technology is added to the server side, and in certain scenarios this has a tremendous impact. But according to the study results, the TTI (time to interactive) improves by only roughly 10%, which means users very quickly see something that does not immediately work. That can actually be worse, which is why frameworks like Remix go a long way to provide graceful degradation based on standard HTTP interactions until the client-side application is properly hydrated.
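
For illustration, here is a minimal sketch of that graceful degradation in Remix terms, with a made-up newsletter route: the markup is a plain HTML form that posts to the server, and JavaScript merely upgrades it once hydration has happened.

```tsx
// app/routes/newsletter.tsx (hypothetical route; the actual subscribe step is omitted)
import { json, type ActionFunctionArgs } from "@remix-run/node";
import { Form, useActionData } from "@remix-run/react";

// The action handles a plain HTTP POST, so the form already works before
// (or entirely without) the client bundle being loaded and hydrated.
export async function action({ request }: ActionFunctionArgs) {
  const formData = await request.formData();
  const email = formData.get("email");
  return json({ ok: typeof email === "string" && email.includes("@") });
}

export default function Newsletter() {
  const result = useActionData<typeof action>();
  return (
    // <Form> renders a regular <form method="post">; once hydrated, Remix
    // upgrades it to a fetch-based submission without a full page reload.
    <Form method="post">
      <input type="email" name="email" required />
      <button type="submit">Subscribe</button>
      {result?.ok === false && <p>Please enter a valid email address.</p>}
    </Form>
  );
}
```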

But is this really necessary? All the performance benchmarks and case studies I looked at compare the extreme opposites: pure client-side applications against rendering everything on the server. We can pre-render the static parts of a page ahead of time (SSG) and post-load the dynamic parts. This can bring down the largest contentful paint significantly, without requiring complex, proprietary hosting technologies (a sketch of this pattern follows the list below). How applicable this approach is depends on where on the spectrum of interactivity and personalisation the website lives. Let's look at three common examples:

  • Content focused: Information websites (like this blog). Low interactivity, low personalisation. Every user sees the same content. In my opinion there is no reason to involve a server-side process here. Yes, managing statically generated pages with large content volumes can be tricky. But it is exactly the same problem as cache invalidation in a real-time environment, with the difference that if it breaks, it breaks at build time, not in production. And it only delays updates of the content, not its consumption.
  • E-Commerce: Maybe the hardest case to optimize, since it often demands targeted content and extreme perceived performance at the same time. That's also the reason why Shopify bought Remix. If you have the size and audience of Walmart, there is no doubt that it's worth throwing a lot of money and complexity at the problem to squeeze out every ounce of revenue. But for smaller ventures, statically generated, SEO-optimized product pages combined with a fast search index might be good enough for a start.
  • SaaS/Dashboards/Social: The type of web application users have to authenticate to and interact with regularly and for extended periods of time (like Google Docs). Each user gets their own experience and expects interactivity without any delays. Most content is private or at least access-restricted, so search engine optimization is irrelevant. The high LCP of client-side rendering only applies on first access, whereas the increased TTFB of server-side rendering slows down every single interaction.
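
As promised above, here is a sketch of that split in plain React, with a made-up recommendations endpoint: the shell is pre-rendered at build time, and only the personalised part is fetched on the client after the page is already visible.

```tsx
// A sketch of the "static shell, dynamic holes" idea without any framework magic.
// The /api/recommendations endpoint and its shape are assumptions for illustration.
import { useEffect, useState } from "react";

type Recommendation = { id: string; title: string };

// The shell (title, description) is known at build time and pre-rendered as
// static HTML, so it counts towards LCP immediately.
export default function ProductPage({ title, description }: { title: string; description: string }) {
  const [recs, setRecs] = useState<Recommendation[] | null>(null);

  useEffect(() => {
    // The personalised part is post-loaded on the client; it never blocks the
    // initial paint and does not require a server render per request.
    fetch("/api/recommendations")
      .then((res) => res.json())
      .then(setRecs)
      .catch(() => setRecs([]));
  }, []);

  return (
    <main>
      <h1>{title}</h1>
      <p>{description}</p>
      {recs === null ? (
        <p>Loading recommendations…</p>
      ) : (
        <ul>
          {recs.map((r) => (
            <li key={r.id}>{r.title}</li>
          ))}
        </ul>
      )}
    </main>
  );
}
```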

The Escape Hatch

From a functional perspective, everything can be implemented in an API/client model while maintaining SEO compatibility by statically pre-rendering public content. Server-side rendering is an optimization that can have a huge impact in certain scenarios, but it comes at a significant cost. And I think that cost is too high an investment for an initial project phase. A provider like Vercel will allow you to outsource it, but at the point where the project has matured, gained mass, and a migration has become too risky, they will eventually want their piece of the cake.

The technology is great, especially with React Server Components, but it should be a selective upgrade, not the default starting point for everything.

Waku has become a great alternative that achieves exactly that by building on top of pure React Server Components, Vite and Hono. In contrast to other frameworks, it has no cemented opinion on how data loading has to happen, but relies purely on React's new ability to execute promises within server components and serialize the resulting UI state. And it can do that either at request time in SSR environments or ahead of time in SSG scenarios, and it is not bound to a specific hosting provider.
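
In practice that is just an async component. Here is a minimal sketch with a made-up endpoint, nothing Waku-specific beyond what plain React Server Components provide:

```tsx
// A minimal sketch of the server component pattern Waku builds on: an async
// component simply awaits a promise before rendering. The endpoint and Post
// type are placeholders, not part of Waku's API.
type Post = { id: number; title: string };

export default async function PostList() {
  // Runs on the server at request time (SSR) or at build time (SSG); the
  // resolved UI is serialized and sent to the client, no client-side fetch needed.
  const res = await fetch("https://example.com/api/posts");
  const posts: Post[] = await res.json();

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```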

In general it shares the signature laser focus Daishi Kato also brought to libraries like Zustand, Jotai and Valtio. And the codebase is small and concise enough that I was able to understand it and contribute to it with limited resources. Exactly that is the most important "feature" in my opinion: Waku is so small and lean that it (hopefully) does not need a VC-backed startup to support and maintain it. So we do not fall into the open source monetization trap again.