Getting Postmark’s Lighthouse Performance Score to 100 (wildbit.com)
66 points by lkbm on Oct 1, 2020 | 37 comments



All you really need to do to get a 100 perf score is cut out any unused elements, only load the CSS you need, load any JS you need as late as possible (defer the <script> at least), and optimize your fonts and images. If you're using a decent host with a point of presence fairly close to where Lighthouse is running, you should get a good score. My own website has a 4x100 Lighthouse score.
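
For example, one way to push non-critical JS past first render is to inject it after the load event (a sketch; the script path is hypothetical):

  // Load non-critical JS only once the page has finished loading;
  // dynamically injected scripts don't block rendering
  window.addEventListener('load', () => {
    const s = document.createElement('script');
    s.src = '/js/analytics.js'; // hypothetical non-critical script
    document.head.appendChild(s);
  });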

If you're interested in this sort of optimization then Zach Leatherman's "Speedlify" is a good tool for doing continuous monitoring - https://www.zachleat.com/web/speedlify/


“Only load the CSS you need” is the piece I don’t have a go-to tool for. Coverage in Chrome DevTools provides a quick but manual way to identify above-the-fold CSS. Critical, criticalCSS, and Penthouse all look like good options for a build process; a Penthouse sketch follows the links. [1] [2]

[1]: https://web.dev/defer-non-critical-css/

[2]: https://web.dev/extract-critical-css/
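
As a rough example, a Penthouse-based build step might look something like this (the URL and paths are placeholders):

  // Extract above-the-fold CSS with Penthouse (npm i penthouse)
  const penthouse = require('penthouse');
  const fs = require('fs');

  penthouse({
    url: 'http://localhost:8080/', // page to analyse
    css: 'dist/css/main.css',      // full stylesheet to reduce
    width: 1300,                   // viewport that defines "above the fold"
    height: 900,
  }).then((criticalCss) => {
    // Inline this in a <style> tag and defer the rest of main.css
    fs.writeFileSync('dist/css/critical.css', criticalCss);
  });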


The problem with being a slave to these Lighthouse scores is that the test is only concerned with the one page you are testing. If you assume your visitor will only visit a single page on your site, that's fine, but even if they visit a second page, nearly everything it suggests you do will make that second page load slower than if you had developed the page using the traditional method: external CSS/JS files that apply to your entire site and can be cached.


The only reason the second page loads slower is because the user hasn't downloaded all the things they need to display it yet - in other words, you're suggesting downloading unused data on the first page in order to make the second page load faster. I'd argue that's not optimal because what you're gaining on the second page you're losing on the first page.

Also you always have to remember that users won't necessarily hit the homepage first. If they click a link from a search or a social media post they might end up on any page in the site. Everything that you do to optimize the homepage should be done for every page. You end up with cacheable assets that are shared between pages and everything else is only downloaded when the user hits a page that actually needs it. That's the way to a fast site.

With HTTP/2 request parallelisation, having lots of small files is not a bad thing. You can also add prefetch headers for big things like fonts. There are lots of ways to make sites load better.
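
For fonts specifically, the usual hint is a preload Link header. A minimal sketch in Express (the font path is hypothetical):

  // Tell the browser to fetch a shared font early, before the CSS
  // that references it has even been parsed
  const express = require('express');
  const app = express();

  app.use((req, res, next) => {
    res.set('Link', '</fonts/inter-var.woff2>; rel=preload; as=font; crossorigin');
    next();
  });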


Lighthouse doesn't care about HTTP/2; it'll ding you for loading external files.

Also you're not just gaining for the second page, you're gaining for all pages visited on the site going forward.


> it'll ding you for loading external files

The Opportunities and Diagnostics sections don't contribute to your Performance score. Your overall Performance score is a weighted average of the Performance metrics [1], as illustrated below. The Opportunities and Diagnostics sections are just potential ideas for changes that may help you. It's up to you to decide what's best for your site.

[1] https://web.dev/performance-scoring/
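
For illustration, the Lighthouse 6 weighting works out to something like this (weights taken from [1]; the metric scores here are made up):

  // Performance score = weighted average of individual metric scores
  const weights = { FCP: 0.15, SI: 0.15, LCP: 0.25, TTI: 0.15, TBT: 0.25, CLS: 0.05 };
  const metricScores = { FCP: 0.98, SI: 0.95, LCP: 0.9, TTI: 1, TBT: 1, CLS: 1 }; // hypothetical
  const performance = Object.keys(weights)
    .reduce((sum, m) => sum + weights[m] * metricScores[m], 0);
  console.log(Math.round(performance * 100)); // 96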

Disclosure: I work on Lighthouse and web.dev


The Opportunities and Diagnostics sections may not, but your Performance score is affected by loading additional external files.


I've learned a ton about web perf from the DevTools and Lighthouse docs, so thanks for your work on those. :)


Webpack 5's federated modules have the potential to be a game-changer for addressing precisely this. Exciting times at the leading edge of webperf optimization.
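
For context, Module Federation lets a page pull shared bundles from a separate build at runtime, so each page only downloads what it actually needs. A minimal webpack 5 sketch (the names and URL are hypothetical):

  // webpack.config.js
  const { ModuleFederationPlugin } = require('webpack').container;

  module.exports = {
    plugins: [
      new ModuleFederationPlugin({
        name: 'shell',
        remotes: {
          // Resolved at runtime, so checkout code isn't in this bundle
          checkout: 'checkout@https://example.com/remoteEntry.js',
        },
        shared: ['react', 'react-dom'], // de-duplicated across builds
      }),
    ],
  };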


> …the test is only concerned with the one page you are testing

I use a Lighthouse score for troubleshooting, but at the end of the article there is a chart of the change in Core Web Vitals, which tracks performance of _all_ pages indexed by Google over time.

> "…nearly everything it suggests you do will make that second page load slower…"

This is just not true for _any_ of the suggestions in the article. I guess you can make this argument for breaking down one CSS bundle into four, but their contents were picked based on analytics of where people go most often. We make one extra request but load much less cruft, and it applies to any page of the website.


> the test is only concerned with the one page you are testing

A common practice is to use Lighthouse CI to test a set of representative pages across your site (such as your homepage, product search pages, and product detail pages if you're an e-commerce site). Every time that someone submits a pull request, Lighthouse CI runs Lighthouse against all of the representative pages to help you gain confidence that you have not introduced weird regressions on the rest of your site. https://web.dev/lighthouse-ci
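
A minimal lighthouserc.js for that setup might look like this (the URLs and threshold are placeholders):

  module.exports = {
    ci: {
      collect: {
        url: [
          'http://localhost:8080/',
          'http://localhost:8080/search',
          'http://localhost:8080/product/example',
        ],
        numberOfRuns: 3, // median out run-to-run variance
      },
      assert: {
        assertions: {
          // Fail the PR if the Performance score regresses below 90
          'categories:performance': ['error', { minScore: 0.9 }],
        },
      },
    },
  };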

> nearly everything it suggests you do will make that second page load slower

Which audits exactly are you referring to? To be frank, I highly doubt that claim.

Disclosure: I work on Lighthouse and web.dev


I haven't read this entire blog post in detail, but I've worked extensively in the past on improving page load times. The approach that works best:

1. Have your initial landing page load and render as fast as possible, and cut stuff out to make this happen.

2. That initial page will usually take the user at least a couple of seconds to read. While they are doing that, you can load stuff in the background that is needed for other parts of your site, like larger JS bundles. There are a number of ways to do this; one is sketched below.
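
For instance, a low-risk option is to drop prefetch hints once the landing page has settled (a sketch; the bundle path is hypothetical):

  // After the landing page is done, ask the browser to fetch the
  // bundle the next page will need, at idle/low priority
  window.addEventListener('load', () => {
    const hint = document.createElement('link');
    hint.rel = 'prefetch';
    hint.href = '/js/app-bundle.js'; // hypothetical next-page bundle
    document.head.appendChild(hint);
  });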


I don't get why a 100 Lighthouse score is important -- maybe someone could clarify it for me.

My main pet peeve with Lighthouse is "Eliminate render-blocking resources". It suggests inlining critical CSS and deferring the rest. However, in my experience with a limited testing group, there is no perceivable difference at all.
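
For reference, the "defer the rest" half of that suggestion usually boils down to something like this media-swap pattern (a sketch; the filename is hypothetical):

  // Non-critical CSS loads without blocking render: media="print"
  // keeps it inert until the file has actually arrived
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = '/css/non-critical.css'; // hypothetical
  link.media = 'print';
  link.onload = () => { link.media = 'all'; };
  document.head.appendChild(link);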

I also have way too many websites where the origin summary and field data show all green, but the lab data is almost always either yellow or red.


Lighthouse is a lab tool [1] for testing the performance of web pages. It specifically helps you measure your Core Web Vitals [2].

There is a lot of research, and many case studies [3], correlating improved website performance with improved business metrics (such as conversions).

Google Search has also indicated that the Core Web Vitals will become a ranking signal [4].

Disclosure: I work on Lighthouse and web.dev

[1] https://web.dev/how-to-measure-speed/#lab-data-vs-field-data

[2] https://web.dev/vitals/#core-web-vitals

[3] https://wpostats.com/

[4] https://webmasters.googleblog.com/2020/05/evaluating-page-ex...


Thanks for your work; Lighthouse is great!

It helps push general performance and best practice forward, and it's communicated in a way non-techies understand.

Almost too easily - we've had some 'consultants' run 'reports' that are just Lighthouse re-badged, coupled with no technical understanding of what the action points mean or why they may not be suitable for the platform/workflow in use.

But for the most part it's a positive tool, thanks!


>I don't get why a 100 Lighthouse score is important -- maybe someone could clarify it for me.

It's not as important as your page rendering correctly, or finding the cure for cancer. It is, however, a nice performance goal to strive for, encapsulated in an easy-to-use scoring tool.


The short of it is that Lighthouse is a tool produced by Google to indicate the sort of things they might penalize you for in search results.

Its results might be arbitrary, but for some industries, when Google says "jump", you say "how high?".


Informal survey: which Lighthouse opportunities/diagnostics/suggestions/metrics have been most helpful for you? Least helpful?

Suggestions/feedback on the Lighthouse guides is also welcome (the documentation that you see after clicking those "Learn more" links).

Disclosure: I work on Lighthouse and web.dev


I apologise, I'm going straight to least helpful, because I've been chasing perfect scores* so these are fresh in my mind.

LCP. Oh boy, it's hard to track down what will make a difference. Is it the font, the CSS sizing of the element, the images in it... Help diagnosing what's causing the LCP time would be great.
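
In the meantime, one way to see which element is being picked is a PerformanceObserver in the console (a sketch using the standard API):

  // Log each LCP candidate as the browser reports it, with the element
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('LCP candidate at', entry.startTime, 'ms:', entry.element);
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });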

But by far the least helpful metric is when I'm told that analytics.js could be optimised further. A (I assume) Google-sponsored tool is telling me I should improve the result from another Google tool. I appreciate why it happens, but it's frustrating :)

* Only to learn earlier this week from Paul Irish that some sites can't reach 100 due to the way the scoring curves are set.


> I apologise I'm going straight to Least helpful

No problem, I should have led off with that because it's usually the most actionable feedback. Thanks for the feedback.


I think the other thing is I find it hard to remember what the core web vitals mean, and to relate them to what's going on.

When we used to measure TTFB, DomContentLoaded and page fully loaded, I understood what those meant and could relate them to what's going on on-screen.

The Core Web Vitals are a bit abstract to reason about, as is how they're linked: FCP vs. LCP, how they interplay...

I'd like a video where a page loads and PING! a vital happens, the page pauses and someone talks about that vital, what's led up to it, what's affected it and what would change it. Then the page unpauses until the next vital is reached. Something to make it relatable.
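
A rough version of that idea, logging each vital as it lands, using Google's web-vitals library (the callback-per-metric API as of 2020):

  import { getCLS, getFID, getLCP } from 'web-vitals';

  getLCP((metric) => console.log('PING! LCP', metric.value, metric.entries));
  getFID((metric) => console.log('PING! FID', metric.value));
  getCLS((metric) => console.log('PING! CLS', metric.value), true); // log every shift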


Ironically I’m having a hard time getting this blog post to load.

Images aren’t appearing either.



https://developers.google.com/speed/pagespeed/insights/?url=... shows 99-100 every time I load it ¯\_(ツ)_/¯

Our mobile score is lower (86 according to PageSpeed), but I mention that in a blog post.


Right, now it says 89, which is around that 86. Web.dev is a bit volatile, and depends on the server connection at that moment I guess.


It's down! Archive link: https://web.archive.org/web/20201001142018/https://wildbit.c...

Edit: seems to be back up but slow


Does a blank page also get a 100 perf score? If so, a 100 perf score doesn't mean the best user experience.


Spot on. I've no doubt that many sites will measure the trade-offs to reach 100 and find it reduces the metrics they care about.

A quick examination of the sites on https://www.11ty.dev/speedlify/ suggests there's a distinct look/set of trade-offs made to get this score.


I am not sure whether the 20 development days invested in Postmark's score was time well spent.


I've worked on a number of these projects where clients were chasing Lighthouse scores without any actual measure of effort, reward, or impact.

Google Analytics offers many site speed metrics that are tied to real-world visits and can be correlated with other metrics, behaviors, and conversions on the site itself. These numbers also give you visibility into the entire site, not just a single page that you've run through the Lighthouse tool.

I don't want to discourage people from making the web faster, but Lighthouse scores are about as helpful as domain authority and Alexa rank when you can take a detailed look at your users through Google Analytics and get more granular performance analysis from WebPageTest.org (which also provides Lighthouse scores, can be run privately, simulates various devices and locations, and much more).


The Lighthouse score is just a detailed and useful metric for troubleshooting, and not the only one we monitor. I included a chart of our Core Web Vitals at the end, which monitors the performance of all pages indexed by Google and uses actual usage data from a wide sample of users (the Chrome User Experience Report).


GA is also moving to CWV reporting, as is Lighthouse. Personally, I think GSC (Google Search Console) is more useful.

I have seen improvements from speeding up pages, particularly on mobile. Postmark is unusual in having so many desktop viewers.


It looks like the accessibility score went down, too.


Hi, author here. Accessibility score went from 87 to 96. The second screenshot with a score of 86 was a preliminary result after introducing the fake widget. (Best Practices went from 86 to 100.)


Sorry, I had just skimmed the article until I came across two sets of scores. That's great!


Seems like the HN hug is lowering their FCP score by bogging down the server.


Great article! I used the same technique for the Facebook Messenger widget.



