Client-side Rendering

This project is a case study of CSR. It aims to explore the potential of client-side rendered apps compared to server-side rendering.

Legend

CSR: Client-side Rendering
SSR: Server-side Rendering
SSG: Static Site Generation
UX: User Experience
DX: Developer Experience

Motivation

Over the last few years, server-side rendering has started to (re)gain popularity in the form of frameworks such as Next.js and Remix.
While SSR has some advantages, these frameworks keep emphasizing how fast they are ("Performance as a default"), implying client-side rendering is slow.
In addition, it is a common misconception that great SEO can only be achieved by using SSR, and that there's nothing we can do to improve the way search engines crawl CSR apps.

This project implements a basic CSR app with some tweaks, such as code splitting, the goal being that as the app scales, the loading time of a single page remains largely unaffected. The objective is to simulate the number of packages used in a production-grade app and to decrease its loading time as much as possible, mostly by parallelizing requests.

It is important to note that improving performance should not come at the expense of the developer experience, so the way this project is architected varies only slightly from a "normal" React project, and it is not nearly as opinionated as Next.js (or as limiting as SSR is in general).

This case study will cover two major aspects: performance and SEO. We will see how we can achieve great scores in both of them.

Note: while this project is implemented with React, the majority of its tweaks are not tied to any framework and are purely browser-based.

Performance

We will assume a standard Webpack 5 setup and add the required customizations as we progress.
The vast majority of code changes that we'll go through will be inside the webpack.config.js configuration file and the index.js HTML template.

Bundle Size

The first rule of thumb is to use as few dependencies as possible and, among those, to pick the ones with the smallest file size.

For example:
We can use day.js instead of moment, zustand instead of Redux Toolkit, etc.

This is crucial not only for CSR apps but also for SSR (and SSG) apps, since the bigger our bundle is, the longer it takes for the page to become interactive (whether through hydration or normal rendering).
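As a minimal illustration of such a swap (sizes are approximate), replacing moment with day.js keeps an almost identical call site:

// Before: moment (~72kb minzipped)
// import moment from 'moment'
// const formatted = moment().format('DD/MM/YYYY')

// After: day.js (~2kb minzipped), with a very similar API
import dayjs from 'dayjs'

const formatted = dayjs().format('DD/MM/YYYY')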

Caching

Ideally, every hashed file should be cached, and index.html should never be cached.
It means that the browser would initially cache main.[hash].js and would have to redownload it only if its hash (content) changes.
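In practice, this translates to response headers along these lines (an illustrative policy, independent of any specific hosting platform):

# index.html - always revalidated against the server
Cache-Control: no-cache

# hashed assets (main.[hash].js, vendors.[hash].js, ...) - cached "forever", since the hash changes whenever the content does
Cache-Control: public, max-age=31536000, immutable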

However, since main.js includes the entire bundle, the slightest change in code would cause its cache to expire, meaning the browser would have to download it again.
Now, what part of our bundle comprises most of its weight? The answer is the dependencies, also called vendors.

So if we split the vendors into their own hashed chunk, we separate our code from the vendors' code, leading to fewer cache invalidations.

Let's add the following part to our webpack.config.js file:

optimization: {
  runtimeChunk: 'single',
  splitChunks: {
    chunks: 'initial',
    cacheGroups: {
      vendor: {
        test: /[\\/]node_modules[\\/]/,
        name: 'vendors'
      }
    }
  }
}

This will create a vendors.[hash].js file.

Although this is a substantial improvement, what would happen if we updated a very small dependency?
In such a case, the entire vendors chunk's cache would be invalidated.

So, in order to make this even better, we will split each dependency into its own hashed chunk:

- name: 'vendors'
+ name: ({ context }) => (context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1]

This will create files like react-dom.[hash].js, react-router-dom.[hash].js etc.

More info about the default configurations (such as the split threshold size) can be found here:
https://webpack.js.org/plugins/split-chunks-plugin/#defaults

Code Splitting

A lot of the features we write end up being used only in a few of our pages, so we would like them to be downloaded only when the user visits the page they are being used in.

For example, we wouldn't want users to download the react-big-calendar package if they just loaded the home page. We would only want that to happen when they visit the calendar page.

The way we achieve this is (preferably) by route-based code splitting:

const Home = lazy(() => import(/* webpackChunkName: "index" */ 'pages/Home'))
const LoremIpsum = lazy(() => import(/* webpackChunkName: "lorem-ipsum" */ 'pages/LoremIpsum'))
const Pokemon = lazy(() => import(/* webpackChunkName: "pokemon" */ 'pages/Pokemon'))

So when the user visits the Lorem Ipsum page, they only download the main chunk script (which includes all shared dependencies such as the framework) and the lorem-ipsum.[hash].js chunk.

Note: I believe it is completely fine (and even encouraged) to have the user download your entire site (so they can have a smooth, app-like navigation experience). But it is very wrong to have all of the assets downloaded up front, delaying the first render of the page.
These assets should be downloaded after the user-requested page has finished rendering and is visible to the user.

Preloading Async Chunks

Code splitting has one major flaw - the runtime doesn't know these async chunks are needed until the main script executes, so they are fetched with a significant delay:

Without Async Preload

The way we can solve this issue is by implementing a script in the document that is responsible for preloading assets:

plugins: [
  new HtmlPlugin({
    scriptLoading: 'module',
    templateContent: ({ compilation }) => {
      const assets = compilation.getAssets().map(({ name }) => name)

      const pages = pagesManifest.map(({ chunk, path }) => {
        const script = assets.find(name => name.includes(`/${chunk}.`) && name.endsWith('.js'))

        return { path, script }
      })

      return htmlTemplate(pages)
    }
  })
]

module.exports = pages => `
  <!DOCTYPE html>
  <html lang="en">
    <head>
      <title>CSR</title>

      <script>
        let { pathname } = window.location

        if (pathname !== '/') pathname = pathname.replace(/\\/$/, '')

        const pages = ${JSON.stringify(pages)}

        for (const { path, script } of pages) {
          if (pathname !== path) continue

          document.head.appendChild(
            Object.assign(document.createElement('link'), { rel: 'preload', href: '/' + script, as: 'script' })
          )

          break
        }
      </script>
    </head>
    <body>
      <div id="root"></div>
    </body>
  </html>
`

The imported pages-manifest.json file can be found here.
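For reference, each manifest entry is expected to have roughly the following shape (an illustrative sketch based on the fields read above; the actual file lives in the repository):

[
  { "chunk": "index", "path": "/" },
  { "chunk": "lorem-ipsum", "path": "/lorem-ipsum" },
  { "chunk": "pokemon", "path": "/pokemon" }
]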

Please note that other types of assets can be preloaded the same way (like stylesheets).

This way, the browser is able to fetch the page-related script chunk in parallel with render-critical assets:

With Async Preload

Generating Static Data

I like the idea of SSG: we create a cacheable HTML file and inject static data into it.
This can be useful for data that is not highly dynamic, such as content from a CMS.

But how can we create static data?
We will execute the following script during build time:

import { mkdir, writeFile } from 'fs/promises'
import axios from 'axios'

const path = 'public/json'
const axiosOptions = { transformResponse: res => res }

mkdir(path, { recursive: true })

const fetchLoremIpsum = async () => {
  const { data } = await axios.get('https://loripsum.net/api/200/long/plaintext', axiosOptions)

  writeFile(`${path}/lorem-ipsum.json`, JSON.stringify(data))
}

fetchLoremIpsum()

That would create a json/lorem-ipsum.json file to be stored in the CDN.

And now we simply fetch our static data:

fetch('json/lorem-ipsum.json')

There are numerous advantages to this approach:

  • We generate static data so we won't bother our server or CMS for every user request.
  • The data will be fetched a lot faster from a nearby CDN edge than from a remote server.
  • Since this script runs on our server at build time, we can authenticate with services however we want; there is no limit to what can be sent (secret tokens, for example).

Whenever we need to update the static data we simply rebuild the app or, better yet, just rerun the script.
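One possible way to wire this into the build (the script path and commands here are illustrative, not necessarily the project's exact setup):

{
  "scripts": {
    "fetch-static": "node scripts/fetch-static.mjs",
    "build": "npm run fetch-static && webpack --mode=production"
  }
}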

Preloading Data

One of the disadvantages of CSR over SSR is that data will be fetched only after JS has been downloaded, parsed and executed in the browser:

Without Data Preload

To overcome this, we will use preloading once again, this time for the data itself:

plugins: [
  new HtmlPlugin({
    scriptLoading: 'module',
    templateContent: ({ compilation }) => {
      const assets = compilation.getAssets().map(({ name }) => name)

-     const pages = pagesManifest.map(({ chunk, path }) => {
+     const pages = pagesManifest.map(({ chunk, path, data }) => {
        const script = assets.find(name => name.includes(`/${chunk}.`) && name.endsWith('.js'))

+       if (data && !Array.isArray(data)) data = [data]

-       return { path, script }
+       return { path, script, data }
      })

      return htmlTemplate(pages)
    }
  })
]

module.exports = pages => `
  <!DOCTYPE html>
  <html lang="en">
    <head>
      <title>CSR</title>

      <script>
+       const isStructureEqual = (pathname, path) => {
+         pathname = pathname.split('/')
+         path = path.split('/')
+
+         if (pathname.length !== path.length) return false
+
+         return pathname.every((segment, ind) => segment === path[ind] || path[ind].includes(':'))
+       }

        let { pathname } = window.location

        if (pathname !== '/') pathname = pathname.replace(/\\/$/, '')

        const pages = ${JSON.stringify(pages)}

-       for (const { path, script } of pages) {
+       for (const { path, script, data } of pages) {
-         if (pathname !== path) continue
+         const match = pathname === path || (path.includes(':') && isStructureEqual(pathname, path))
+
+         if (!match) continue

          document.head.appendChild(
            Object.assign(document.createElement('link'), { rel: 'preload', href: '/' + script, as: 'script' })
          )

+         if (!data) break
+
+          data.forEach(({ url, dynamicPathIndexes, crossorigin }) => {
+           let fullURL = url
+
+           if (dynamicPathIndexes) {
+             const pathnameArr = pathname.split('/')
+             const dynamics = dynamicPathIndexes.map(index => pathnameArr[index])
+
+             let counter = 0
+
+             fullURL = url.replace(/\\$/g, match => dynamics[counter++])
+           }
+
+           document.head.appendChild(
+             Object.assign(document.createElement('link'), { rel: 'preload', href: fullURL, as: 'fetch', crossOrigin: crossorigin })
+           )
          })

          break
        }
      </script>
    </head>
    <body>
      <div id="root"></div>
    </body>
  </html>
`

Now we can see that the data is being fetched right away:

With Data Preload

With the above script, we can even preload dynamic routes data (such as pokemon/:name).

The only limitation is that we can only preload GET resources, but this would not be a problem when the backend is well-architected.
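To make this concrete, a dynamic-route manifest entry might look like the following (an illustrative sketch; the field names match the template code above, while the chunk name and API URL are hypothetical):

{
  "chunk": "pokemon-info",
  "path": "/pokemon/:name",
  "data": {
    "url": "https://pokeapi.co/api/v2/pokemon/$",
    "dynamicPathIndexes": [2],
    "crossorigin": "anonymous"
  }
}

For /pokemon/pikachu, the script above replaces the $ placeholder with the third pathname segment, preloading https://pokeapi.co/api/v2/pokemon/pikachu.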

Tweaking Further

Splitting Vendors From Async Chunks

Code splitting introduced us to a new problem: vendor duplication.

Say we have two async chunks: lorem-ipsum.[hash].js and pokemon.[hash].js. If they both include the same dependency that is not part of the main chunk, that means the user will download that dependency twice.

So if that dependency is moment, which weighs 72kb minzipped, then each of these async chunks will weigh at least 72kb.

We need to split this dependency out of these async chunks so that it can be shared between them:

optimization: {
  runtimeChunk: 'single',
  splitChunks: {
    chunks: 'initial',
    cacheGroups: {
      vendor: {
        test: /[\\/]node_modules[\\/]/,
+       chunks: 'all',
        name: ({ context }) => (context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1]
      }
    }
  }
}

Now both lorem-ipsum.[hash].js and pokemon.[hash].js will use the extracted moment.[hash].js chunk, sparing the user a lot of network traffic (and giving these assets better cache persistence).

However, we have no way of telling, before building the application, which vendor chunks will be split out of which async chunks, so we wouldn't know which async vendor chunks to preload (refer to the "Preloading Async Chunks" section).

Without Async Vendor Preload

That's why we will append the parent chunks' names to each async vendor chunk's name:

optimization: {
  runtimeChunk: 'single',
  splitChunks: {
    chunks: 'initial',
    cacheGroups: {
      vendor: {
        test: /[\\/]node_modules[\\/]/,
        chunks: 'all',
-       name: ({ context }) => (context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1]
+       name: (module, chunks) => {
+         const allChunksNames = chunks.map(({ name }) => name).join('.')
+         const moduleName = (module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/) || [])[1]

+         return `${moduleName}.${allChunksNames}`
+       }
      }
    }
  }
}
plugins: [
  new HtmlPlugin({
    scriptLoading: 'module',
    templateContent: ({ compilation }) => {
      const assets = compilation.getAssets().map(({ name }) => name)

      const pages = pagesManifest.map(({ chunk, path, data }) => {
-       const script = assets.find(name => name.includes(`/${chunk}.`) && name.endsWith('.js'))
+       const scripts = assets.filter(name => new RegExp(`[/.]${chunk}\\.(.+)\\.js$`).test(name))

        if (data && !Array.isArray(data)) data = [data]

-       return { path, script, data }
+       return { path, scripts, data }
      })

      return htmlTemplate(pages)
    }
  })
]

module.exports = pages => `
  <!DOCTYPE html>
  <html lang="en">
    <head>
      <title>CSR</title>

      <script>
        const isStructureEqual = (pathname, path) => {
          pathname = pathname.split('/')
          path = path.split('/')

          if (pathname.length !== path.length) return false

          return pathname.every((segment, ind) => segment === path[ind] || path[ind].includes(':'))
        }

        let { pathname } = window.location

        if (pathname !== '/') pathname = pathname.replace(/\\/$/, '')

        const pages = ${JSON.stringify(pages)}

-       for (const { path, script, data } of pages) {
+       for (const { path, scripts, data } of pages) {
          const match = pathname === path || (path.includes(':') && isStructureEqual(pathname, path))

          if (!match) continue

+         scripts.forEach(script => {
            document.head.appendChild(
              Object.assign(document.createElement('link'), { rel: 'preload', href: '/' + script, as: 'script' })
            )
+         })

          if (!data) break

           data.forEach(({ url, dynamicPathIndexes, crossorigin }) => {
            let fullURL = url

            if (dynamicPathIndexes) {
              const pathnameArr = pathname.split('/')
              const dynamics = dynamicPathIndexes.map(index => pathnameArr[index])

              let counter = 0

              fullURL = url.replace(/\\$/g, match => dynamics[counter++])
            }

            document.head.appendChild(
              Object.assign(document.createElement('link'), { rel: 'preload', href: fullURL, as: 'fetch', crossOrigin: crossorigin })
            )
          })

          break
        }
      </script>
    </head>
    <body>
      <div id="root"></div>
    </body>
  </html>
`

Now all async vendor chunks will be fetched in parallel with their parent async chunk:

With Async Vendor Preload

Preloading Other Pages Data

We can preload data when hovering over links (desktop) or when links enter the viewport (mobile):

const createPreload = url => {
  if (document.head.querySelector(`link[href="${url}"]`)) return

  document.head.appendChild(
    Object.assign(document.createElement('link'), {
      rel: 'preload',
      href: url,
      as: 'fetch'
    })
  )
}
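Here is a minimal sketch of how createPreload could be wired up (the component, its props and the import path are made up for illustration, assuming react-router-dom):

import { useEffect, useRef } from 'react'
import { NavLink } from 'react-router-dom'

import createPreload from 'utils/create-preload' // hypothetical path to the function above

const PreloadingLink = ({ to, dataURL, children }) => {
  const ref = useRef()

  useEffect(() => {
    // Mobile: preload once the link enters the viewport
    const observer = new IntersectionObserver(([entry]) => {
      if (!entry.isIntersecting) return

      createPreload(dataURL)
      observer.disconnect()
    })

    observer.observe(ref.current)

    return () => observer.disconnect()
  }, [dataURL])

  // Desktop: preload on hover
  return (
    <NavLink ref={ref} to={to} onMouseEnter={() => createPreload(dataURL)}>
      {children}
    </NavLink>
  )
}

export default PreloadingLink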

Preventing Sequenced Rendering

When we split a page from the main app, we separate its render phase, meaning the app will render before the page renders:

Before Page Render

After Page Render

This happens due to the common approach of wrapping routes with Suspense:

const App = () => {
  return (
    <>
      <Navigation />

      <Suspense>
        <Routes>{routes}</Routes>
      </Suspense>
    </>
  )
}

There is a lot of sense to this approach:
We would prefer the app to be visually complete in a single render, but we would never want to stall the entire app's render until an async chunk finishes downloading.

However, since we preload all async chunks (and their vendors), this won't be a problem for us. So we should suspend the entire app until the async chunk finishes downloading (which, in our case, happens in parallel with all the render-critical assets):

createRoot(document.getElementById('root')).render(
  <BrowserRouter>
    <Suspense>
      <App />
    </Suspense>
  </BrowserRouter>
)

This would make our app and the async page visually render at the same time.

Transitioning Async Pages

Note: this technique requires React 18

We will see a similar effect when we move to another async page: a blank space that remains until the page's script finishes downloading.

React 18 introduced us to the useTransition hook, which allows us to delay a render until some criteria are met.
We will use this hook to delay the page's navigation until it is ready:

import { useTransition } from 'react'
import { useNavigate } from 'react-router-dom'

const useDelayedNavigate = () => {
  const [, startTransition] = useTransition()
  const navigate = useNavigate()

  return to => startTransition(() => navigate(to))
}

export default useDelayedNavigate

const NavigationLink = ({ to, onClick, children }) => {
  const navigate = useDelayedNavigate()

  const onLinkClick = event => {
    event.preventDefault()
    navigate(to)
    onClick?.()
  }

  return (
    <NavLink to={to} onClick={onLinkClick}>
      {children}
    </NavLink>
  )
}

export default NavigationLink

Now async pages will feel like they were never split from the main app.

Prefetching Async Pages

Users should have a smooth navigation experience in our app.
However, splitting every page causes a noticeable delay in navigation, since every page has to be downloaded before it can be rendered on screen.

That's why I think all pages should be prefetched ahead of time.

We can do this by writing a wrapper function around React's lazy function:

import { lazy } from 'react'

const lazyPrefetch = chunk => {
  window.addEventListener('load', () => setTimeout(chunk, 200), { once: true })

  return lazy(chunk)
}

export default lazyPrefetch

- const Home = lazy(() => import(/* webpackChunkName: "index" */ 'pages/Home'))
- const LoremIpsum = lazy(() => import(/* webpackChunkName: "lorem-ipsum" */ 'pages/LoremIpsum'))
- const Pokemon = lazy(() => import(/* webpackChunkName: "pokemon" */ 'pages/Pokemon'))

+ const Home = lazyPrefetch(() => import(/* webpackChunkName: "index" */ 'pages/Home'))
+ const LoremIpsum = lazyPrefetch(() => import(/* webpackChunkName: "lorem-ipsum" */ 'pages/LoremIpsum'))
+ const Pokemon = lazyPrefetch(() => import(/* webpackChunkName: "pokemon" */ 'pages/Pokemon'))

Now all pages will be prefetched (but not executed) 200ms after the browser's load event.

Deploying

The biggest advantage of a static app is that it can be served entirely from a CDN.
A CDN has many PoPs (Points of Presence), also called 'Edge Networks'. These PoPs are distributed around the globe and thus are able to serve files to every region much faster than a remote server.

The fastest CDN to date is Cloudflare, which has more than 250 PoPs (and counting):

Cloudflare PoPs

https://speed.cloudflare.com

https://blog.cloudflare.com/benchmarking-edge-network-performance

We can easily deploy our app using Cloudflare Pages:
https://pages.cloudflare.com

Benchmark

To conclude this section, we will perform a benchmark of our app compared to Next.js's documentation site (which is entirely SSG).
We will choose a minimalistic page (Accessibility) and compare it to our Lorem Ipsum page.
You can click on each link to perform a live benchmark.

Accessibility | Next.js
Lorem Ipsum | Client-side Rendering

I performed Google's PageSpeed Insights benchmark (simulating a slow 4G network) several times and picked the highest score of each page:

Next.js Benchmark

Client-side Rendering Benchmark

As it turns out, performance is not a default in Next.js.

Areas for Improvement

  • Compress assets using Brotli level 11 (Cloudflare only uses level 4 to save on computing resources).
  • Use the paid Cloudflare Argo service for even better response times.

Module Federation

Applying the same preloading principles in a Module Federation project should be relatively simple:

  1. We generate a pages-manifest.json file in every micro-frontend.
  2. When deploying a micro-frontend, we extract the asset-injected pages constant:
plugins: [
  new HtmlPlugin({
    scriptLoading: 'module',
    templateContent: ({ compilation }) => {
      const assets = compilation.getAssets().map(({ name }) => name)

-     const pages = pagesManifest.map(({ chunk, path, vendors, data }) => {
+     const pages = pagesManifest.map(({ chunk, path, scripts, vendors, data }) => {
+       if (scripts) return { path, scripts, data }

        const script = assets.find(name => name.includes(`/${chunk}.`) && name.endsWith('.js'))
        const vendorScripts = vendors
          ? assets.filter(name => vendors.find(vendor => name.includes(`/${vendor}.`) && name.endsWith('.js')))
          : []

        if (data && !Array.isArray(data)) data = [data]

        return { path, scripts: [script, ...vendorScripts], data }
      })

+     axios.post('https://...', pages)
+     // OR
+     fs.writeFileSync('.../some-path', JSON.stringify(pages))

      return htmlTemplate(pages)
    }
  })
]
  3. We merge the pages array with the shell's pages-manifest.json file.
  4. We deploy the shell.

We will also have to write additional code to preload the remoteEntry.js files.

Using this method, every time a micro-frontend is deployed, the shell has to be deployed as well.
However, if we have more control over the build files in production, we could spare the shell's rebuild and deployment by manually editing its index.html file and merging the micro-frontend's pages array with the pages constant.

Sitemaps

In order to make all of our app pages discoverable to search engines, we need to create a sitemap.xml file which specifies all of our website routes.

Since we already have a centralized pages-manifest.json file, we can easily generate a sitemap during build time:

import { Readable } from 'stream'
import { writeFile } from 'fs/promises'
import { SitemapStream, streamToPromise } from 'sitemap'

import pagesManifest from '../src/pages-manifest.json' assert { type: 'json' }

const stream = new SitemapStream({ hostname: 'https://client-side-rendering.pages.dev' })
const links = pagesManifest.map(({ path }) => ({ url: path, changefreq: 'daily' }))

streamToPromise(Readable.from(links).pipe(stream))
  .then(data => data.toString())
  .then(res => writeFile('public/sitemap.xml', res))
  .catch(console.log)

This will emit the following sitemap:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:image="http://www.google.com/schemas/sitemap-image/1.1" xmlns:news="http://www.google.com/schemas/sitemap-news/0.9" xmlns:video="http://www.google.com/schemas/sitemap-video/1.1" xmlns:xhtml="http://www.w3.org/1999/xhtml">
   <url>
      <loc>https://client-side-rendering.pages.dev/</loc>
      <changefreq>daily</changefreq>
   </url>
   <url>
      <loc>https://client-side-rendering.pages.dev/lorem-ipsum</loc>
      <changefreq>daily</changefreq>
   </url>
   <url>
      <loc>https://client-side-rendering.pages.dev/pokemon</loc>
      <changefreq>daily</changefreq>
   </url>
</urlset>

We can manually submit our sitemap to Google Search Console and Bing Webmaster Tools.
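We can also point crawlers to the sitemap from a robots.txt file served alongside it (an illustrative public/robots.txt sketch):

User-agent: *
Allow: /

Sitemap: https://client-side-rendering.pages.dev/sitemap.xml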

Indexing

Google

It is often said that Google has trouble correctly indexing CSR (JS) apps.
That might have been the case in 2018, but as of 2022, Google can index CSR apps very well.
The indexed pages will have a title, description and even content, as long as we remember to dynamically set them (either manually or using something like react-helmet).
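With react-helmet, for example, this boils down to declaring the tags inside the page component (an illustrative sketch; this project sets them manually, as shown in the "Social Media Share Previews" section below):

import { Helmet } from 'react-helmet'

const LoremIpsum = () => (
  <>
    <Helmet>
      <title>Lorem Ipsum</title>
      <meta name="description" content="A page filled with Lorem Ipsum paragraphs." />
    </Helmet>

    {/* page content */}
  </>
)

export default LoremIpsum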

Google Search Results

Google Lorem Ipsum Search Results

The following video explains how the new Googlebot renders JS apps:
https://www.youtube.com/watch?v=Ey0N1Ry0BPM

However, since Googlebot tries to save on computing power, there might be cases where it crawls a page before it has finished loading.
So we'd better not rely on its ability to crawl JS apps, and serve it prerendered pages instead.

Prerendering

Other search engines such as Bing cannot render JS (despite claiming they can). So in order to have them index our app correctly, we will serve them prerendered versions of our pages.
Prerendering is the act of crawling web apps in production (using headless Chromium) and generating a complete HTML file (with data) for each page.

We have two options for prerendering our app:

  1. We can use a dedicated service such as prerender.io or seo4ajax.

Prerender.io Table
  2. We can build our own prerender server using free open-source tools such as Prerender and Rendertron.

Then we redirect web crawlers (identified by their User-Agent header string) to our prerendered pages using Cloudflare Workers: public/_worker.js.
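A simplified sketch of such a worker (the bot list, prerender endpoint and token handling are illustrative, not the project's actual public/_worker.js):

// public/_worker.js (illustrative)
const BOT_AGENTS = ['googlebot', 'bingbot', 'yandexbot', 'facebookexternalhit', 'twitterbot']

export default {
  async fetch(request, env) {
    const userAgent = (request.headers.get('User-Agent') || '').toLowerCase()
    const isBot = BOT_AGENTS.some(bot => userAgent.includes(bot))

    // Regular users get the static assets, crawlers get a prerendered snapshot
    if (!isBot) return env.ASSETS.fetch(request)

    return fetch(`https://service.prerender.io/${request.url}`, {
      headers: { 'X-Prerender-Token': env.PRERENDER_TOKEN }
    })
  }
}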

Prerendering, also called Dynamic Rendering, is encouraged by Google and Microsoft.

Using prerendering produces the exact same SEO results as using SSR in all search engines.

Social Media Share Previews

When we share a CSR app link in social media, we can see that no matter what page we link to, the preview will remain the same.
This happens because most CSR apps have only one HTML file, and social share previews do not render JS.
This is where prerendering comes to our aid once again; we only need to make sure to set the correct meta tags dynamically:

export const setMetaTags = ({ title, description, image }) => {
  if (title) {
    document.title = title
    document.head.querySelector('meta[property="og:title"]').setAttribute('content', title)
  }
  if (description) document.head.querySelector('meta[name="description"]').setAttribute('content', description)

  document.head.querySelector('meta[property="og:url"]').setAttribute('content', window.location.href)
  document.head
    .querySelector('meta[property="og:image"]')
    .setAttribute('content', image || `${window.location.origin}/icons/og-icon.png`)
}

useEffect(() => {
  const page = pagesManifest.find(({ path }) => pathname === path || isStructureEqual(pathname, path)) || {}

  setMetaTags(page)
}, [pathname])

This, after going through prerendering, gives us the correct preview for every page:

Facebook Preview Home


Facebook Preview Pokemon


Facebook Preview Pokemon Info

CSR vs. SSR

SSR Disadvantages

Here's a list of some SSR cons that should not be taken lightly:

  • When moving to client-side data fetching, SSR will always be slower than CSR, since its document is always bigger and takes longer to download.
  • SSR apps are always heavier than CSR apps, since every page is composed of both a fully-constructed HTML document and its scripts (used for hydration).
  • Since all images are initially included in the document, scripts and images will compete for bandwidth, causing delayed interactivity on slow networks.
  • Since accessing browser-related objects during the server render phase throws an error, some very helpful tools become unusable, while others (such as react-media) require SSR-specific customizations.
  • SSR page responses mostly don't return a 304 Not Modified status.

Let's elaborate on the first point in the list:
Fetching data on the server is usually a bad idea, since some queries may take several hundred milliseconds to return (and many will exceed that), and while they are pending, the user sees absolutely nothing in their browser.
It's hard to understand why developers choose to couple server performance with the initial page load time; it seems like a poor choice.
We can even see that Next.js's own documentation pushes you away from server-side fetching and towards client-side fetching.
And by doing so, we fall behind a CSR app's performance (as mentioned above).

That's why choosing SSR for its "server-side data fetching" ability is a mistake - you may never know how much of the data fetching will end up in the client because of poor server performance (or inevitably large queries).

A quick reminder that since we preload the data in our CSR app, we benefit in both first painting and data arrival.

Inlining CSS

When we talk about SSR's render flow, we paint the following picture in our minds:

Browser request ===> initial HTML arrives (page is visible) ===> JS arrives (page is interactive).

But in reality, most SSR websites do not inline critical CSS.
So the actual render flow is as follows:

Browser request ===> initial HTML arrives ===> CSS arrives (page is visible) ===> JS arrives (page is interactive).

This makes the SSR flow nearly identical to the CSR flow; the only difference is that in CSR, the browser also has to wait for the JS to finish loading before it can paint the screen.
That's why the FCP differences between the two are marginal and sometimes even nonexistent (especially under fast internet connections).

Both the Next.js and Remix websites demonstrate this absence of critical CSS inlining.

The main reasons for not extracting critical CSS are that the HTML becomes larger (delaying interactivity) and that the CSS becomes uncacheable.

Why Not SSG?

We have seen the advantages of static files: they are cacheable; a 304 Not Modified status can be returned for them; they can be served from a nearby CDN and serving them doesn't require a Node.js server.

This may lead us to believe that SSG combines both CSR and SSR advantages: we can make our app visually appear very fast (FCP) and it will even be interactive very quickly.

However, in reality, SSG has a major limitation:
Since JS isn't active during the first moments, everything that relies on JS to be presented simply won't be visible, or it will be visible in its incorrect state (like components which rely on the window.matchMedia function to be displayed).
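For instance, a component like the following (a contrived sketch) has no meaningful static HTML representation, since window.matchMedia can only be evaluated in the browser:

const ResponsiveGreeting = () => {
  // Evaluated only after JS loads, so statically generated HTML
  // cannot know which variant to render
  const isDesktop = window.matchMedia('(min-width: 1024px)').matches

  return <p>{isDesktop ? 'Hello, desktop user!' : 'Hello, mobile user!'}</p>
}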

A classic example of this problem is demonstrated by the following website:
https://death-to-ie11.com

Notice how the timer isn't available right away? That's because it is generated by JS, which takes time to download and execute.
There are a lot of examples of how this delayed functionality hurts the UX, like the way some websites only show the navigation bar after JS has been loaded.

The Cost of Hydration

It is a fact that under a fast internet connection, both CSR and SSR perform great (as long as they are both optimized). And the higher the connection speed, the closer they get in terms of loading times.

However, when dealing with slow connections (such as mobile networks), it seems that SSR has an edge over CSR regarding loading times.
Since SSR apps are rendered on the server, the browser receives the fully-constructed HTML file and can show the page to the user without waiting for JS to download. When JS is eventually downloaded and parsed, the framework "hydrates" the DOM with functionality (without having to reconstruct it).

Although it seems like a big advantage, this behaviour has one major flaw on slow connections - until JS is loaded, users can click wherever they desire, but the app won't react to them.
It might be somewhat of an inconvenience when buttons don't respond to click events, but it becomes a much larger problem when default events are not being prevented.

This is a comparison between Next.js's website and this Client-side Rendering app on a fast 3G connection:

SSR Load 3G

CSR Load 3G

What happened here?
Since JS hadn't been loaded yet, Next.js's website could not prevent the anchor tags' default behaviour of navigating to another page, resulting in every click on them triggering a full page reload.
And the slower the connection is - the more severe this issue becomes.
In other words, where SSR should have had a performance edge over CSR, we see a very "dangerous" behavior that might only degrade the UX.

It is impossible for this issue to occur in CSR apps, since the moment they render - JS has already been fully loaded.

Conclusion

We saw that client-side rendering performance is on par with, and sometimes even better than, SSR in terms of loading times.
We also learned that prerendering gives perfect SEO results, and that we don't even need to think about it once it is set up.
And above all, we have achieved all this mainly by modifying two files (the Webpack config and the HTML template) and using a prerender service, so every existing CSR app should be able to quickly and easily implement these modifications and benefit from them.

These facts lead to the conclusion that there is just no reason to use SSR anymore; it would only add a lot of complexity and limitations to our project and degrade the developer experience.

