We started seeing reports about it in GSC in early July, when over a single day all our scores turned to crap with no explanation.
We are in the yellow, but the biggest culprits for blocking time are... Google Tag Manager, GAds (and GAnalytics where we still have it). So yeah, thanks Google, can't wait to lose out on SEO due to your own products. And also, thanks for releasing this without the proper analysis tooling. (https://web.dev/debug-performance-in-the-field/#inp : this is not tooling, this is undue burden on developers. Making people bundle an extra ["light" library](https://www.npmjs.com/package/web-vitals) with their clients and forcing them to build their own analytics servers to understand what web-vitals complains about... which is often wrong anyway.)
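For reference, this is roughly what that "field data" setup boils down to — a minimal sketch, assuming the web-vitals v3+ `onINP` API and a hypothetical `/analytics/vitals` endpoint that you have to host and aggregate yourself:

```ts
// Minimal field-data reporting sketch. The /analytics/vitals endpoint is made
// up for the example; you still need a backend to store and aggregate it.
import { onCLS, onINP, onLCP } from 'web-vitals';

function sendToAnalytics(metric: { name: string; value: number; id: string }) {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!navigator.sendBeacon('/analytics/vitals', body)) {
    fetch('/analytics/vitals', { method: 'POST', body, keepalive: true });
  }
}

onINP(sendToAnalytics);
onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
```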
It's hilarious how AMP pages have become a cesspool of JS dark patterns trying to bombard you with as many ads as possible and keep you from escaping their site.
>We are in the yellow, but the biggest culprits for blocking time are...Google Tag Manager, GAds and GAnalytics.
This has been the case for over a decade with Google's "Lighthouse" analysis tool as well. I used to use it as part of a site analysis suite for my clients - a good portion of the time, my smaller clients would end up deciding to replace Google Analytics entirely with a different product because of it.
With SEO it's entirely a case of Google being Google and just doing whatever they fancy. Core changes aren't often to the benefit of the scene these days, and the little space that isn't paid ads is often useless.
I don't think the generative AI results they're going to do will be much better either.
Yes, a lot of analytics use cases for smaller clients boil down to a server report of page requests. Obviously with Google Ads you're using it to monetize, so that's a different story, but the client-side analytics that Google provides are bloated and usually overkill for most sites.
If they can't give up Google Analytics or Google Ads, but still want the perf, give Cloudflare Zaraz a try. I am a Product Specialist there; if you need an intro in person, happy to do it. Just reach out to me on LinkedIn / Twitter.
Thanks for this, you just solved my problem! I got an email about this INP thing this morning, but Google had zero explanation for why our INP timings were so high for a page that does practically nothing. I realised that I had removed analytics from our site last year, but I'd forgotten about this particular page, so that probably explains it.
This may be controversial but I think this has the potential to be a brilliant metric because it measures some part of web UX that’s often neglected. It’s time-consuming to make every single interaction display some sort of loading message, but it really helps make the site feel responsive.
As long as they avoid the pattern of adding a global loading spinner that covers the whole screen. That’s just the worst possible loading screen. I suppose it would still pass this metric.
Also I’m not sure if I totally understand the metric - I think it’s simply when the next frame is rendered post-interaction, which should easily be under 200ms unless you’re:
1. doing some insane amount of client side computation
2. talking over the network far away from your service or your API call is slow / massive
and both of these are mitigated by having any loading indication, so I don’t understand how this metric will be difficult to fix.
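To make the mitigation for case 1 concrete, here’s a minimal sketch (the element ids and the `expensiveFilter`/`renderResults` helpers are placeholders I made up): paint a cheap loading state first, then yield so the browser can render that frame before the heavy work starts.

```ts
// Placeholders for the page's own heavy work and rendering; not from the thread.
declare function expensiveFilter(rows: unknown[]): unknown[];
declare function renderResults(rows: unknown[]): void;
declare const allRows: unknown[];

const button = document.querySelector<HTMLButtonElement>('#filter')!;
const statusEl = document.querySelector<HTMLElement>('#status')!;

button.addEventListener('click', () => {
  statusEl.textContent = 'Filtering…';        // cheap DOM update the next frame can paint
  setTimeout(() => {                          // yield so that frame actually gets presented
    renderResults(expensiveFilter(allRows));  // heavy synchronous work runs afterwards
  }, 0);
});
```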
> This may be controversial but I think this has the potential to be a brilliant metric because it measures some part of web UX that’s often neglected.
It also seems to be a metric that is very easily gamed.
If all that matters is instant feedback, then just draw that loader as soon as the user clicks add to cart; do not wait for the request to start. It does not matter that it will take X or Y seconds.
> It also seems to be a metric that is very easily gamed.
Fun fact: the current JS-specific metric (which is being phased out) is First Input Delay, and it was explicitly designed to avoid this gaming:
> FID only measures the "delay" in event processing. It does not measure the event processing time itself nor the time it takes the browser to update the UI after running event handlers. While this time does affect the user experience, including it as part of FID would incentivize developers to respond to events asynchronously—which would improve the metric but likely make the experience worse.
> - https://web.dev/fid/
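To make that trade-off concrete, this is the kind of deferral FID deliberately avoids rewarding — a hedged sketch, not taken from the docs: the handler returns almost immediately, so a metric that counted handler processing time would look great, while the user waits just as long and gets no feedback.

```ts
// Anti-pattern sketch: the click handler itself does almost nothing, so a
// metric that only counted time inside the handler would look tiny, while the
// real work (and the visible UI update) is simply pushed to a later task.
declare function doAllTheActualWork(): void;  // placeholder for the slow part

const buyButton = document.querySelector<HTMLButtonElement>('#buy')!;
buyButton.addEventListener('click', () => {
  setTimeout(doAllTheActualWork, 0);  // defer everything; nothing useful is painted yet
});
```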
I wonder why they decided to reconsider this trade-off when designing INP.
But what you're describing as "gaming" is precisely the behavior this is supposed to incentivize.
Of course you should be setting a visual "in progress" state before you send out a request. And yes that's supposed to be instantaneous, not measured in "X or Y seconds". That's the entire point, to acknowledge that the user did something so they know they clicked in the right place, that another app hadn't stolen keyboard focus, etc.
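Something like this, as a rough sketch (the endpoint and element names are invented): acknowledge the click immediately, then do the slow network work.

```ts
// Instant visual acknowledgement before the slow part. /cart/items and the
// element id are made up for the example.
const addToCart = document.querySelector<HTMLButtonElement>('#add-to-cart')!;

addToCart.addEventListener('click', async () => {
  addToCart.disabled = true;               // feedback in the very next frame
  addToCart.textContent = 'Adding…';
  try {
    await fetch('/cart/items', { method: 'POST', body: JSON.stringify({ qty: 1 }) });
    addToCart.textContent = 'Added ✓';
  } catch {
    addToCart.textContent = 'Try again';
  } finally {
    addToCart.disabled = false;
  }
});
```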
You will be pleased to know that isn’t actually the case; it’s instead a single metric taken from a larger suite of metrics known collectively as Core Web Vitals, which is what is actually used: https://web.dev/vitals/
You’d need some pretty inefficient code for there to be a delay between the user clicking a button and even starting a request…
But even in that case, instant feedback is probably better for the user. It lets them know the website isn’t broken and they don’t need to click again, and it also makes the experience feel snappier.
Honestly, displaying a loader as soon as I click add to cart would be an improvement on many sites. I'd welcome it.
A site that genuinely responds quickly is best. But for a slow site, I'd always prefer one that at least gives me instant feedback that I clicked something over one that doesn't.
I'm lacking lots of context obviously but:
What good is a sophisticated metric when the pages they index are mostly blogspam, SO clones, etc.? I'm not interested in the "most responsive" SO clone. Seems out of touch with what Google search is struggling with these days.
NYTimes feels like an SSG when browsing (Chrome) from Europe after the initial payload, but that's as an unauthenticated user with uBlock. The sad part is that I can't read most of the articles due to the paywall.
At first glance, 200ms INP is a pretty high latency for a "good" rating. As a comparison, I believe 200ms is an average HTTPS round trip. I'd expect most interactions to be much lower than that.
I guess it depends on how your interactions are implemented. If it’s an SPA then 200ms is absurdly slow. But if it’s a more traditional form submit or something then it would take a lot longer for your next set of pixels to come through.
Based on the fact that they're hooking into events like onclick, I'd say that they are not looking at traditional form submits, because then the metric would just effectively be First Contentful Paint or something. My interpretation is that they are indeed looking at first paint after an event handler has been fired on the same page.
This is correct (source: working in web perf for 5 years).
INP is the time between when you click/press a key/etc. and the moment the next paint happens. It’s only measured for on-page interactions, not for navigations.
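For anyone curious, the underlying data comes from the Event Timing API; this is a rough sketch of observing it directly (the 40ms threshold and the aggregation are illustrative, not the exact spec):

```ts
// Each PerformanceEventTiming duration spans input delay + handler time + the
// presentation of the next frame. INP is roughly a high percentile (close to
// the worst) of these per-interaction durations.
const worstByInteraction = new Map<number, number>();

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    if (!entry.interactionId) continue;  // only discrete interactions count
    const worst = worstByInteraction.get(entry.interactionId) ?? 0;
    worstByInteraction.set(entry.interactionId, Math.max(worst, entry.duration));
  }
}).observe({ type: 'event', buffered: true, durationThreshold: 40 });
```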
0.2s is slow for a redraw. Just because it might take you 200ms to click after something happens doesn't mean you can't see/notice when things take that long.
I’m not saying it’s fast. I’m saying that based on the goal of what it is measuring (user input responsiveness) it’s fast enough. For the purposes of the metric anyhow.
Plus, speaking from far too long of a career dealing with user testing, respond too fast and users think you didn’t actually do anything.
So you’re kind of boned either way. This is just measuring programmatic delay.
I've generally had no gripes about this or web vitals in general except for one thing: group population[0]. It's unfair to create a blast radius on a small or medium-sized business's website simply because enough data doesn't exist to determine the true extent of the user experience impact.
The most recent example I've observed this on was a website with a heavy interactive location finder experience that lived on a single page. Fine, penalize that page. There's a chance users won't initially navigate there anyway. However, because a (very minimal, practically irrelevant) amount of similar content from that page was present on 18 other pages, the impact was huge.
The reality of the web today makes this pretty dire in my mind. Many businesses choose to run websites that are generally fast, but they have to engage with third-party services because they don't have the means to build their own map, event scheduler, or form experience. The punishment doesn't fit the crime.
INP feels like a pretty problematic way to compare sites, because INP is going to be way lower on a site that doesn't do client-side rendering even though client-side rendering makes interaction with a site faster!
> client-side rendering makes interaction with a site faster!
I am going to have to disagree. Final HTML from the server is just that. It's final. The client displays it and it's done. No XHR, no web sockets, no JS eval. It's done. You can immediately use the webpage and the webserver doesn't care who you are anymore. With an SPA, that is the best case. You maybe even start with SSR from the server and try to incrementally move from there. Regardless, the added complexity of SSR->SPA and other various hybrid schemes can quickly eat into your technical bullshit budget, and before you know it that ancient forms app feels like lightning compared to the mess you proposed.
Reaching for SPA because you think this will make the site "faster" is pretty hilarious to me. I've never once seen a non-trivial (i.e. requires server-side state throughout) SPA that felt better to me than SSR.
I completely disagree. Client side has the potential to be very fast, even faster. However, most people are more interested in writing a complex, Turing-complete type system under their client than in making fast, easy-to-use applications.
Has anyone been able to demonstrate to their satisfaction that improving Web Vitals scores actually improves their search engine placement? We send web vitals field data to our own analytics servers to track P75, but Google changes its algorithm so much we can't quite prove that our various LCP/CLS/FID/INP changes are actually making any difference.
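For context, the P75 side of that is simple enough to do yourself — a sketch with made-up sample values; the hard part is isolating its effect on rankings:

```ts
// Nearest-rank percentile over collected field samples (values in ms).
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

const inpSamples = [120, 90, 310, 180, 75, 240, 160, 95]; // hypothetical field data
console.log(percentile(inpSamples, 75)); // 180 — the 75th percentile CWV judges against
```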
Funnily enough I clearly remember typing Largest Contentful Paint. But it turns out I typed First :)
My point still stands.
"The First Contentful Paint (FCP) metric measures the time from when the page starts loading to when any part of the page's content is rendered on the screen." [1]
Unless your server is overwhelmed and can't send back data fast enough, there's literally no way to call "1 second before anything is rendered on screen" fast.
In the context of the tweet this is even more egregious. They were talking about Reddit's yet another redesign, and how it was fast. Reddit is a website that displays text and images. Their server responds in 200 milliseconds max. And yet, they were talking about how spending 0.9 seconds to display some info (menu on the left?) and 2.4 seconds to display actual content is fast.
And that comes from an "engineering leader at Chrome". We are at a point in time where people literally don't understand what fast is.
He’s referring to the P75 from Chrome’s field data. Now, Reddit definitely could do more here and get that LCP at the same time as the FCP (eliminate load and render delay). But a big purpose for these metrics is to make the web more accessible/usable, and the reality is most of the world doesn’t have iPhones or fast networks[1].
And yet these metrics literally optimize for Reddit's 109 Javascript files to render a page of text with images in 2.4 seconds. And they call it fast.
However you slice, dice, or interpret these metrics, none of these shenanigans make it fast.
What these metrics and "P75" show is that the modern web sucks for everyone, and they have to pretend that 2.4s to render a page is fast because everyone else is even slower.
Or, to put in context: Reddit's servers serve that page in under 200 milliseconds. There's literally no justification in the world that make it ok to say "2.4 seconds to render that on screen is fast". None.
Yep, and this also is referring to the P75 LCP. Notice how he’s referencing the field data from Page Speed Insights, which is reporting Core Web Vitals. Anytime these metrics are discussed it’s always the 75th percentile.
Or to put it another way, people waste 10% of their time on the web waiting for slow, bloated garbage to load (or more, when you take into account pages that do more loading when you touch them afterwards).
This just shows that they don't even understand what they are measuring.
With their engineering leader [1] arguing that 2.4s to display text and images is fast, no wonder they present "people still spend time on websites after they have spent an eternity loading" as a surprising find.
Less obvious than you think. How long do you spend on the HN front page? When you open, say, GitLab, how often do you stay there and not immediately click through to something else?