Contrary to BE considerations, performance may not be the main thing to optimize for on the FE. Instead, most frontend optimizations lean toward speed of iteration. There are of course exceptions, such as low-latency systems (pro trading) or a wide variety of clients (slow mobile connections).
Here are some general requirements that will help you architect a solution:
What is the expected response time for various events? For example, for low-latency systems such as trading terminals, you will want to use sockets or gRPC, which don't have much of the overhead of REST and GraphQL. On the flip side, such protocols aren't as advantageous for APIs that will evolve very quickly
What is the expected time to first load? For example, are we dealing with a blog, where users are impatient and may navigate away after more than a second or two of load time?
Webpack allows "chunking" the app into segments, so your preferences screen may be asynchronously loaded only when you navigate to it. This is a great approach to reduce initial load time. However, it will add load time in between pages (as opposed to loading up your app all at once)
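As a rough sketch, route-level chunking boils down to loading each screen's module on demand and caching it. The route names and loaders below are made up for illustration; in a real webpack app each loader would be a dynamic `import()`, which webpack splits into its own chunk:

```javascript
// Illustrative route-to-loader map. In a real webpack app each loader
// would be `() => import('./PreferencesScreen')`, emitting a separate chunk.
const routeLoaders = {
  '/home': async () => ({ render: () => 'home screen' }),
  '/preferences': async () => ({ render: () => 'preferences screen' }),
};

const moduleCache = new Map();

// Load a route's module on first visit; revisits hit the cache,
// so the between-pages delay is only paid once per route.
async function loadRoute(path) {
  if (!moduleCache.has(path)) {
    moduleCache.set(path, await routeLoaders[path]());
  }
  return moduleCache.get(path);
}
```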
With the Node ecosystem, it's very easy to end up with bundle sizes in the megabytes, which are no problem on gigabit LAN connections but will cause issues on unreliable cellular connections. If smaller page sizes are needed, you may want to opt for smaller libraries, or a few other strategies I will suggest below. I will discuss the tradeoffs of each strategy
Oftentimes there will be significant variation between the mobile and desktop experience. While it's very convenient to maintain them as one app, sometimes the mobile experience can benefit from a smaller package size due to the cellular connection. There are many strategies to get the best of both worlds, such as sharing a common code library with tree shaking (ex: buttons, style guide, API handling) and chunking by route
Many mainstream libraries (React) have great features, a strong ecosystem, and support for a wide range of scenarios, but they come with a large package size. You may want to investigate alternatives if package size is an issue, for example Preact over React. Note that you may lose the advantages of reliability, ecosystem, and support if you opt for these libraries, so don't make this tradeoff prematurely.
You may also opt to use modern JS browser APIs in lieu of legacy libraries. For example, for date and currency formatting you might get away with the built-in functionality (like toLocaleString or Intl.NumberFormat) as opposed to moment.js or currency.js. But note that these APIs offer a much smaller range of supported options.
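For instance, the built-ins cover common formatting needs without any dependency (assuming a modern browser or a Node build with full ICU):

```javascript
// Currency formatting without currency.js:
const usd = new Intl.NumberFormat('en-US', {
  style: 'currency',
  currency: 'USD',
});
const price = usd.format(1234.5); // "$1,234.50"

// Date formatting without moment.js:
const date = new Date(Date.UTC(2024, 0, 15)).toLocaleDateString('en-US', {
  timeZone: 'UTC',
  year: 'numeric',
  month: 'long',
  day: 'numeric',
}); // "January 15, 2024"
```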
In general, you should rarely roll your own library unless you have a very specific use case. The React community has optimized for every major use case of rendering standard components. You can most likely build on top of it rather than rewriting it.
However there are a few scenarios where standard libraries are not enough.
- Special Requirements: At HOVER, we built a custom 3D Rendering library (on top of ThreeJS), because we had extremely specific needs, and there simply was nothing out there that covered those needs (for example ray tracing in the browser). Yet it still leaned heavily on ThreeJS when possible.
- Performance: There may be some very high performance scenarios where standard libraries aren't sufficient. For example, charting and graphing come to mind. Standard tables will eat up all your memory when dealing with thousands of rows that need to be rendered at once. Likewise for charting. HighCharts does a great job, but may not be feasible if you have hundreds of charts on a dashboard. Remember to consider these requirements up front.
Many libraries offer "tree shaking" support, meaning webpack will identify which functions you are using and include only those pieces of the code. This is a somewhat experimental feature, as many libraries are architected in ways where functionality may break. Generally speaking, functional (as opposed to class-based) libraries support tree shaking better. If you are building a lot of common libraries, you may want to read the Webpack docs on how to enable tree shaking.
Oftentimes you can import just a portion of a package rather than the whole thing.
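For example (using lodash purely as an illustration), the import style determines how much of the package ends up in your bundle:

```javascript
// Pulls in the entire package:
import _ from 'lodash';

// Pulls in a single function via its submodule path:
import debounce from 'lodash/debounce';

// ESM build of the same library, which bundlers can tree-shake:
import { debounce as debounceEs } from 'lodash-es';
```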
There is a lot of debate over which coding style is "better", and there certainly are pros and cons to both, which I will go into later. However, when it comes to bundle sizes, functional will usually win out. It forces a style of few dependencies and small objects, and you will usually export individual functions as opposed to huge classes, which enables tree shaking
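A sketch of what that looks like in practice (the function names here are made up for illustration):

```javascript
// Small standalone functions, one concern each. In an ESM library these
// would each be `export function` declarations; a bundler can then drop
// `clampPercent` entirely if a consumer imports only `formatCents`.
// With one class holding both as methods, everything ships together.
function formatCents(cents) {
  return (cents / 100).toFixed(2);
}

function clampPercent(value) {
  return Math.min(100, Math.max(0, value));
}
```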
Server Side Rendering is a great way to speed up that first load.
- Many common frameworks such as React have tooling to help you do SSR (ex: Next.js with Vercel). Note that there are downsides to SSR, such as making sure that all of your components and pages render properly on the server side; it definitely takes quite a lot more checking.
- An additional benefit of SSR is better SEO, since crawlers like Google and Bing receive fully rendered HTML
- SSG (Static Site Generation) includes tools like Gatsby, which pre-generate static HTML pages. This may not be applicable to scenarios that require generation on the fly, but for blogs and other more or less "static" pages it is perfect
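Stripped of framework details, the core SSR idea is just producing the page's HTML as a string on the server, so the browser paints content before any JS bundle loads. Real React apps use `ReactDOMServer.renderToString` or a framework like Next.js; this toy version only illustrates the shape:

```javascript
// A "component" here is just a function from props to markup.
const Greeting = ({ name }) => `<h1>Hello, ${name}</h1>`;

// Server-side: build the full document up front, so the first response
// already contains visible content (the client JS then "hydrates" it).
function renderPage(component, props) {
  const body = component(props);
  return `<!doctype html><html><body><div id="root">${body}</div></body></html>`;
}
```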
Your site will likely include many "marketing" pages, such as the home page, feature pages, help, contact, blog, etc. These pages tend to change very frequently in terms of content and design, but require no real "JS" functionality. This means that by having your core app team work on marketing pages you are taking away from their time to work on the app, and you are also bloating your bundle size. Meanwhile, a lot of FE tooling is optimized specifically for static content: I recommend tools like Webflow and Squarespace, where non-technical users can easily make copy changes. Additionally, a Webflow engineer is significantly more affordable than an application engineer
One way to speed up the time to first load is to render elements as the data becomes available, as opposed to waiting for the whole dataset to load. This does require some forethought to make sure there's no "chunky" render experience; it usually means defining loading states and set heights for containers.
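A sketch of the pattern, assuming a hypothetical page-by-page `fetchPage` API: each batch renders as soon as it arrives, instead of one big render after the full dataset loads:

```javascript
// Render each page of results as it resolves. `fetchPage` and `render`
// are placeholders for your data layer and UI layer.
async function renderProgressively(fetchPage, render, pageCount) {
  for (let page = 0; page < pageCount; page++) {
    const rows = await fetchPage(page); // first rows paint long before the last
    render(rows); // containers need set heights to avoid layout jumps
  }
}
```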
- Do we anticipate any huge datasets to be rendered (thousands of history rows, for example)? Ex: progressive tables, Google Photos
- The browser Performance API can capture timing data
- Or just log timings manually
- Key metric: time to first page load
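A minimal sketch of capturing that metric with the standard `performance.mark`/`performance.measure` API (the marker names are made up; `performance` is a global in modern browsers and in Node 16+):

```javascript
// Mark the interesting moments during boot.
performance.mark('app-start');
// ... expensive boot work would happen here ...
performance.mark('first-render');

// Record the span between the two marks as a named metric,
// which can then be logged or shipped to an analytics backend.
performance.measure('time-to-first-render', 'app-start', 'first-render');

const [metric] = performance.getEntriesByName('time-to-first-render');
console.log(`first render: ${metric.duration.toFixed(1)}ms`);
```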
- CDN: something like CloudFront or Akamai? Do we need multiple edge locations? Where are our users located?
- Caching: do we need the latest info each time, or is slightly outdated OK? Always-latest means huge network usage; caching risks slightly stale data. Consider browser caching with cache busting, LocalStorage, and cookies vs. LocalStorage for auth
- Image optimization: different sizes via srcset, Retina display (2x) variants, alt text
- Lazy load images
- Serving over HTTP/2 vs HTTP/1.1: compatibility, fallback
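The "slightly outdated is OK" caching option above can be sketched as a small TTL cache in front of the network call. Storage is injected here so the sketch is self-contained; in the browser it would be `localStorage`, and `cachedFetch` is a made-up name:

```javascript
// Return a cached value if it is younger than ttlMs; otherwise fetch
// and store a fresh copy. `storage` needs getItem/setItem (localStorage-like).
async function cachedFetch(key, ttlMs, fetcher, storage, now = Date.now) {
  const raw = storage.getItem(key);
  if (raw) {
    const { value, savedAt } = JSON.parse(raw);
    if (now() - savedAt < ttlMs) return value; // slightly stale but instant
  }
  const value = await fetcher();
  storage.setItem(key, JSON.stringify({ value, savedAt: now() }));
  return value;
}
```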
Load on Server
- If doing SSR, there's more load on the server; are we using a separate server or the same one?
- Load balancing for failover; a K8s cluster for the server app?
- For SSR app?
- Lambda @ Edge?
- Do lots of API requests put too much load on the server?
- Or do we want the latest info all the time?
- Micro-frontends (microservices for the FE), so that one app doesn't affect another
- Progressive Web App?
- Offline mode: what happens with no internet connection?