When we hear of UI performance, what exactly comes to our mind?
- Higher Load Speed
- Smooth Rendering
- No Delay with 3rd Parties
and a lot more. Actually, developers’ brains work slightly differently from those of business teams, whose questions may be as follows:
- Why are users not retained for more than 10 seconds on the page?
- Why is the CTR so low (maybe users never reach the page)?
- What if I were in a customer’s position?
So many questions, right? This happened to me as well. I am a front-end developer, and one day, sitting in my room with a cup of coffee, I found myself wondering: we had tried multiple ways to improve our Chrome Lighthouse report, but nothing seemed to help much, and most of the improvements were framework-specific. So what next? How should I improve?
- What should I change in a codebase of thousands of lines?
- Which logic should I restructure to improve the score?
- Are there any improvements still pending?
These questions come to everyone's mind, and there is a sea of articles, but no single approach is a perfect solution. Well, there can’t be one, as every approach works differently. So, first, let us see how we started.
How We Approached Finding the Loopholes
Instead of going framework-specific, we dived a little deeper and analyzed how our project actually functions and where there is scope for improvement. Below are the activities we carried out.
1. Carefully Analyzed the Network Calls
We all know that the front end and back end go hand in hand, but as I mentioned above, we were unable to find any solution. So we suspected that a long response time alone might be increasing the time it takes for items to get painted on the page, and guess what: the response time from the backend was around 800 ms–1200 ms. That is approximately 1 second, which is already too long. This was the first problem we noted down. Before moving on to the solutions, let us see what other loopholes we found.
2. The Same API Call Every Time for the Same Search Request
Assume you are a student preparing for your exam. The first time, you study in detail; but the next time, will you refer to your notes or go for a detailed study again? If you are smart, hopefully you will have made notes. We noticed something similar happening: every time a user searches, a new request is made, even if the search sequence looks like search-term1 => search-term2 => search-term1. Notice that if the user has already searched term1, we do not have to make a new request for a fixed period of time or session. This concept is known as memoization. We will look into the details later in the article. Let us now move on to our last finding.
3. Size of DOM
“HTML and CSS are static; they will not take much space or time to execute.” This is the mindset of most developers, and it was ours too, until we noticed the DOM optimizations flagged by Lighthouse. Deep nesting of HTML elements is not recommended, and the DOM should stay within the ranges Lighthouse suggests.
I guess now you must be wondering: these are theoretical approaches, so how are we going to address them in code?
Solutions to the Above Problems
1. Splitting the API Calls
Before deciding to split the calls, we looked at two metrics highlighted in the Lighthouse report: First Contentful Paint (the time at which the first text or image is painted) and Time to Interactive (the total time the page takes to become fully interactive). Basically, these two metrics measure how fast users can see and interact with the HTML elements. In our case, we have two major components to load: products and filters.
In the image above you can see that on the left-hand side and at the top we have filters, and on the right we have products. Before splitting the API calls, we used to make one call to load both components and hence had to wait for both to load. That means the First Contentful Paint happened later, and hence the Time to Interactive was longer, which simply means the page kept loading until the complete response arrived.
After splitting the API calls, both components load independently and asynchronously. Whichever response, products or filters, arrives first is painted on the page, showing the user actual HTML elements instead of a loading page. First Contentful Paint and Time to Interactive both decrease, and hence the Lighthouse score increases.
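The splitting idea can be sketched in a few lines of JavaScript. Note that the endpoint shape, the fetcher functions, and the render callbacks here are illustrative assumptions, not our actual blibli.com code:

```javascript
// Fire both requests at once and paint each section as soon as its own
// response arrives, instead of waiting for one combined payload.
function loadSearchPage(fetchProducts, fetchFilters, renderProducts, renderFilters) {
  const products = fetchProducts().then(renderProducts);
  const filters = fetchFilters().then(renderFilters);
  // Resolves once both sections are on screen (handy for metrics/tests).
  return Promise.all([products, filters]);
}
```

In a real page, `fetchProducts` and `fetchFilters` would be thin wrappers around `fetch()` against the two split endpoints; the key point is that neither render callback waits for the other request.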
Page Load Speed is directly proportional to user retention rate.
2. Memoization
Before getting into how we used memoization, let us understand why we need it. As an application grows and performs heavy computation, it becomes necessary to start reusing whatever parts of it can be reused; in technical terms, we call that caching. Now, have you ever tried caching URLs and API responses in the front end? We tried this and, guess what, we achieved a tremendous jump in performance.
Note: Lighthouse generates its report for the first load, so memoization does not affect the Lighthouse score.
For every repeat visit to a particular search term, no API call happens the second time; the result is picked from the cache, so there is no page load. The page loads in the blink of an eye.
You can use memoization in case of:
a. Expensive Function Calls.
b. Functions with recurring inputs.
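A minimal sketch of how such a cache can look, assuming a simple in-memory `Map` with a time-to-live; the wrapper name, the TTL value, and the commented-out search call are illustrative, not the exact code we shipped:

```javascript
// Wrap any single-argument async (or sync) function so that repeated
// calls with the same key within `ttlMs` reuse the stored result.
function memoizeWithTtl(fn, ttlMs) {
  const cache = new Map(); // key -> { value, expires }
  return function (key) {
    const hit = cache.get(key);
    if (hit && hit.expires > Date.now()) {
      return hit.value;            // cache hit: no new request
    }
    const value = fn(key);         // miss or expired: make the real call
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}

// Usage sketch: wrap the search API so repeated terms within five
// minutes are served from the cache instead of hitting the backend.
// const searchProducts = memoizeWithTtl(
//   term => fetch(`/api/search?q=${encodeURIComponent(term)}`).then(r => r.json()),
//   5 * 60 * 1000
// );
```

Because the wrapper stores the promise itself, even two rapid calls for the same term before the first response arrives share a single request.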
3. Lazy Loading and Infinite Scroll
When given a UX design, most front-end developers rush to build it without keeping the DOM size in mind. As per https://web.dev/dom-size/, the recommended limits are about 1,500 DOM elements per page, a depth of no more than 32 nodes, and no parent node with more than 60 child nodes. Now you must be wondering how to refactor existing code that is already huge. First, you have to skim through your files and remove unnecessary nodes; there is no other solution.
We came across the same issue. After skimming through all the files, we were able to reduce the DOM by only about 200 elements, which was not enough to improve our Lighthouse score. Later we realized we were loading around 48 products on a page, and each product itself has many elements that cannot be reduced. So lazy loading and infinite scroll were the solution that hit us. Together, these two approaches load only a limited number of products at a time and load more as the user scrolls down. This way, the initial load on the page is reduced, and as the load is reduced, the time to load the page is also reduced.
Using these two approaches, we were able to reduce the DOM from 3,000 elements to 1,200 and gained a proportional increase in our Lighthouse score.
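The infinite-scroll idea can be sketched with an `IntersectionObserver` watching a sentinel element at the end of the list; the page size, element names, and helper functions below are illustrative assumptions, not our production code:

```javascript
// Assumed page size for illustration; in our case roughly a dozen of the
// 48 products would render on the first paint instead of all of them.
const PAGE_SIZE = 12;

// Pure helper: which slice of `items` belongs to 0-based page `n`?
function pageSlice(items, n, size = PAGE_SIZE) {
  return items.slice(n * size, (n + 1) * size);
}

// Browser wiring (needs a DOM, not runnable in Node): render the first
// page only, then append the next page whenever the sentinel element at
// the bottom of the list scrolls into view.
function setupInfiniteScroll(allProducts, listEl, sentinelEl, renderItem) {
  let page = 0;
  const appendNext = () => {
    pageSlice(allProducts, page).forEach(p => listEl.appendChild(renderItem(p)));
    page += 1;
  };
  appendNext(); // first page only: far fewer DOM nodes on initial load
  const observer = new IntersectionObserver(entries => {
    if (entries.some(e => e.isIntersecting)) appendNext();
  });
  observer.observe(sentinelEl);
}
```

Keeping the pagination logic in a pure helper like `pageSlice` makes the DOM-heavy part thin and the slicing easy to unit-test.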
So, these are the three approaches we used to enhance our Lighthouse report, and now it is time to show you the score.
Note: Lighthouse Report may vary according to the device and the running threads.
- https://web.dev/lcp/ — Largest Contentful Paint
- https://web.dev/fid/ — First Input Delay
- https://web.dev/cls/ — Cumulative Layout Shift
- https://web.dev/vitals/ — Web Vitals
Finally, I would like to thank the entire Discovery team at blibli.com for supporting us in achieving this.