All you really need to do to get a 100 perf score is cut out any unused markup, load only the CSS you need, load any JS you need as late as possible (`defer` your <script> tags at a minimum), and optimize your fonts and images. If you're using a decent host with a point of presence fairly close to where Lighthouse is running, you should get a decent score. My own website has a 4x100 Lighthouse score.
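Put together, that checklist amounts to a page head and body shaped roughly like this (a sketch only; the file names are placeholders, not from any real site):

```html
<!-- Illustrative only: all paths and file names are made up. -->
<head>
  <!-- Preload the one font actually used above the fold -->
  <link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>
  <!-- Inline a small critical-CSS block; the shared stylesheet loads and caches normally -->
  <style>/* critical above-the-fold rules */</style>
  <link rel="stylesheet" href="/css/site.css">
  <!-- defer: download in parallel, execute only after HTML parsing finishes -->
  <script defer src="/js/site.js"></script>
</head>
<body>
  <!-- Give images explicit dimensions (avoids layout shift) and lazy-load offscreen ones -->
  <img src="/img/hero.avif" width="1200" height="600" alt="Hero image" loading="eager">
  <img src="/img/footer.jpg" width="600" height="300" alt="Footer image" loading="lazy">
</body>
```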
If you're interested in this sort of optimization then Zach Leatherman's "Speedlify" is a good tool for doing continuous monitoring - https://www.zachleat.com/web/speedlify/
“Only load the CSS you need” is the piece I don’t have a go-to tool for. Coverage in Chrome DevTools provides a quick but manual way to identify above-the-fold CSS. Critical, criticalCSS, and Penthouse all look like good options for a build process. [1] [2]
The problem with being a slave to these Lighthouse scores is that the test is only concerned with the one page you are testing. If you assume your visitor will only ever visit a single page on your site, that's fine. But if they visit even a second page, nearly everything it suggests you do will make that second page load slower than if you had built the site the traditional way: external CSS/JS files that apply to your entire site and can be cached.
The only reason the second page loads slower is because the user hasn't downloaded all the things they need to display it yet - in other words, you're suggesting downloading unused data on the first page in order to make the second page load faster. I'd argue that's not optimal because what you're gaining on the second page you're losing on the first page.
Also, you always have to remember that users won't necessarily hit the homepage first. If they click a link from a search result or a social media post, they might end up on any page of the site. Everything you do to optimize the homepage should be done for every page. You end up with cacheable assets that are shared between pages, and everything else is only downloaded when the user hits a page that actually needs it. That's the way to a fast site.
With HTTP/2 request parallelisation, having lots of small files is not a bad thing. You can also add preload/prefetch headers for big things like fonts. There are lots of ways to make sites load better.
The Opportunities and Diagnostics sections don't contribute to your Performance score. Your overall Performance score is a weighted average of the Performance metrics [1]. The Opportunities and Diagnostics sections are just potential ideas on changes that may help you. It's up to you to decide what's best for your site.
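The weighted-average calculation is simple enough to sketch. The weights below are illustrative only; the real weights Lighthouse ships vary by version:

```javascript
// Sketch of how an overall Performance score is derived from per-metric
// scores (each in [0, 1]). Weights here are illustrative; Lighthouse's
// actual weights differ between versions.
function performanceScore(metricScores, weights) {
  let total = 0;
  let weightSum = 0;
  for (const [metric, weight] of Object.entries(weights)) {
    total += (metricScores[metric] ?? 0) * weight;
    weightSum += weight;
  }
  return Math.round((total / weightSum) * 100);
}

// Hypothetical weights and per-metric scores:
const weights = { fcp: 0.1, si: 0.1, lcp: 0.25, tbt: 0.3, cls: 0.15, tti: 0.1 };
const scores = { fcp: 0.9, si: 0.8, lcp: 0.7, tbt: 1.0, cls: 1.0, tti: 0.95 };
console.log(performanceScore(scores, weights)); // → 89
```

Note that a heavily weighted metric like LCP or TBT drags the overall score far more than a light one like CLS, which is why chasing the biggest-weight metrics first usually pays off.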
Webpack 5's federated modules have the potential to be a game-changer for addressing precisely this. Exciting times at the leading edge of webperf optimization.
> …the test is only concerned with the one page you are testing
I use a Lighthouse score for troubleshooting, but at the end of the article there is a chart of the change in Core Web Vitals, which tracks performance of _all_ pages indexed by Google over time.
> "…nearly everything it suggests you do will make that second page load slower…"
This is just not true for _any_ of the suggestions in the article. I guess you could make this argument for breaking one CSS bundle into four, but their contents were picked based on analytics of where people go most often. We make one extra request but load much less cruft, and it applies to any page of the website.
> the test is only concerned with the one page you are testing
A common practice is to use Lighthouse CI to test a set of representative pages across your site (such as your homepage, product search pages, and product detail pages if you're an e-commerce site). Every time that someone submits a pull request, Lighthouse CI runs Lighthouse against all of the representative pages to help you gain confidence that you have not introduced weird regressions on the rest of your site. https://web.dev/lighthouse-ci
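A minimal Lighthouse CI config along those lines might look like the following (the URLs and thresholds here are made up for illustration):

```json
{
  "ci": {
    "collect": {
      "url": [
        "https://example.com/",
        "https://example.com/search?q=shoes",
        "https://example.com/product/123"
      ],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["warn", { "minScore": 0.9 }]
      }
    }
  }
}
```

Running several passes per URL (`numberOfRuns`) smooths out the run-to-run variance that single Lighthouse runs are prone to.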
> nearly everything it suggests you do will make that second page load slower
Which audits exactly are you referring to? To be frank, I highly doubt that claim.
I haven't read this entire blog post in detail, but I've worked extensively in the past with improving page load times. The approach that works best:
1. Have your initial landing page load and render as fast as possible, and cut stuff out to make this happen.
2. That initial page will usually take the user at least a couple of seconds to read. While they are doing that, you can load stuff in the background that is needed for other parts of your site, like larger JS bundles. There are a number of ways to do this.
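One common way to do step 2 is to add prefetch hints once the landing page has rendered. A sketch (the function name and URL list are hypothetical; in a real page you'd pass the global `document`, and you might wrap the call in `requestIdleCallback` so it waits for an idle moment):

```javascript
// Sketch: after the landing page renders, hint the browser to fetch
// assets for likely next pages at low priority. `doc` is passed in
// rather than using the global `document` so the logic is testable
// outside a browser.
function prefetchNextPageAssets(urls, doc) {
  for (const href of urls) {
    const link = doc.createElement('link');
    link.rel = 'prefetch'; // low-priority background fetch into the HTTP cache
    link.href = href;
    doc.head.appendChild(link);
  }
}

// In the browser you might defer this until the main thread is idle:
// requestIdleCallback(() => prefetchNextPageAssets(['/js/app.bundle.js'], document));
```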
I don't get why 100 lighthouse is important -- maybe someone could clarify it for me.
My main pet peeve with Lighthouse is the "Eliminate render-blocking resources" audit. It suggests inlining critical CSS and deferring the rest. However, in my experience with a limited testing group, there is no perceivable difference at all.
I also have way too many websites where the origin summary and field data show all green, but the lab data is almost always yellow or red.
It helps push general performance and best practice forward. And it's communicated in a way non-techies understand.
Almost too easily - I've had some 'consultants' run 'reports' that are just Lighthouse re-badged, coupled with no technical understanding of what the action points mean, or why they may not be suitable for the platform/workflow in use.
But for the most part it's a positive tool, thanks!
>I don't get why 100 lighthouse is important -- maybe someone could clarify it for me.
It's not as important as your page rendering correctly, or finding the cure for cancer. It is, however, a nice goal to strive for regarding performance, encapsulated in an easy-to-use scoring tool.
I apologise, I'm going straight to the least helpful, because I've been trying to get perfect scores* so these are fresh in my mind.
LCP. Oh boy, it's hard to track down what will make a difference. Is it the font, the CSS sizing of the element, the images in it... help on diagnosing what's causing the LCP time would be great.
But by far the least helpful metric is when I'm told that analytics.js could be optimised further. A (I assume) Google-sponsored tool is telling me I should improve the result from another Google tool. I appreciate why it happens, but it's frustrating :)
* Only to learn earlier this week from Paul Irish that some sites can't reach 100 due to the way the scoring curves are set.
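The reason some sites can't reach 100 is that each metric value is mapped to a score via a log-normal curve anchored at control points (roughly: the median value scores 0.5 and the 10th-percentile value scores 0.9), and that curve only approaches 1 asymptotically. A sketch, assuming made-up control points rather than Lighthouse's real calibration:

```javascript
// Complementary error function via the Abramowitz & Stegun 7.1.26
// approximation (accurate to ~1.5e-7), extended to negative inputs.
function erfc(x) {
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * x);
  const y = t * (0.254829592 + t * (-0.284496736 +
    t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  const erf = 1 - y * Math.exp(-x * x);
  return 1 - sign * erf;
}

// Sketch of a log-normal scoring curve: a metric value equal to `median`
// scores 0.5 and one equal to `p10` scores 0.9 (lower values are better).
// Control points below are illustrative, not Lighthouse's real ones.
function lognormalScore(value, median, p10) {
  // -1.2816 is the standard-normal z-score of the 10th percentile.
  const sigma = (Math.log(p10) - Math.log(median)) / -1.2816;
  const z = (Math.log(value) - Math.log(median)) / sigma;
  return 0.5 * erfc(z / Math.SQRT2); // score in (0, 1)
}
```

Because `erfc` never quite reaches 0 for any finite input, the score never quite reaches 1 either - which is exactly the "some sites can't reach 100" effect.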
I think the other thing is I find it hard to remember what the core web vitals mean, and to relate them to what's going on.
When we used to measure TTFB, DomContentLoaded and page fully loaded, I understood what those meant and could relate them to what's going on on-screen.
The core web vitals are a bit abstract to reason about - and how they're linked. FCP vs LCP, how they interplay...
I'd like a video where a page loads and PING! a vital happens, the page pauses and someone talks about that vital, what's led up to it, what's affected it and what would change it. Then the page unpauses until the next vital is reached. Something to make it relatable.
I've worked on a number of these projects where clients were chasing Lighthouse scores without any actual measure of effort, reward, or impact.
Google Analytics offers many site speed metrics that are tied to real-world visits and can be correlated with other metrics, behaviors, and conversions on the site itself. These numbers also give you visibility on the entire site and not just a single page that you've run through the Lighthouse tool.
I don't want to discourage people from making the web faster but Lighthouse scores are about as helpful as domain authority and Alexa rank when you can take a detailed look at your users through Google Analytics and get more granular performance analysis from WebPageTest.org (which also provides Lighthouse scores, can be run privately, simulates various devices and locations, and much more).
The Lighthouse score is just one detailed and useful metric for troubleshooting, not the only one we monitor. I included a chart of our Core Web Vitals at the end, which monitors the performance of all pages indexed by Google and uses actual usage data from a wide sample of users (the Chrome User Experience Report).
Hi, author here. Accessibility score went from 87 to 96. The second screenshot with a score of 86 was a preliminary result after introducing the fake widget. (Best Practices went from 86 to 100.)