Interaction to Next Paint (INP) (web.dev)
142 points by 42droids on July 11, 2023 | hide | past | favorite | 72 comments



We started seeing reports about it in GSC early July, when over a single day all our scores turned to crap with no explanation.

We are in the yellow, but the biggest culprits for blocking time are...Google Tag Manager, GAds (and GAnalytics where we still have it). So yeah, thanks Google, can't wait to lose on SEO due to your own products. And also, thanks for releasing this without the proper analysis tooling. (https://web.dev/debug-performance-in-the-field/#inp : this is not tooling, this is an undue burden on developers. Making people bundle an extra ["light" library](https://www.npmjs.com/package/web-vitals) with their clients, forcing them to build their own analytics servers to understand what web-vitals complains about...or is often wrong about)


I for one commend Google's efforts to improve the web's chances of long-term survival by eliminating themselves and hopefully tracking in general.


It's hilarious how AMP pages have become a cesspool of JS dark patterns trying to bombard you with as many ads as possible and keep you from escaping their site.


Meh, with Google's resources they could do so much good. This is a drop in the bucket compared to what they could be doing.

My company is scrambling to account for the changes here, and ultimately it will be users who suffer until we have the proper data available.

This should’ve been a standard that all major browsers have implemented and agreed to, before being rolled out generally.


Oh come on, it's not like their mission is to "do good" or even "don't be evil" any more.. it's to benefit shareholders.

They don't care if their web tendrils are a net positive or not


>We are in the yellow, but the biggest culprits for blocking time are...Google Tag Manager, GAds and GAnalytics.

This has been the case for over a decade with Google's "Lighthouse" analysis tool as well. I used to use it as part of a site analysis suite for my clients - a good portion of the time, my smaller clients would end up deciding to replace Google Analytics entirely with a different product because of it.


> > We are in the yellow, but the biggest culprits for blocking time are...Google Tag Manager, GAds and GAnalytics.

> ...a good portion of the time, my smaller clients would end up deciding to replace Google Analytics entirely with a different product because of it.

This seems like a good outcome, then? Market pressure may be the only way to get Google analytics to finally cut their footprint.


With SEO it's entirely a case of Google being Google and just doing whatever they fancy. Core changes aren't often to the benefit of the scene these days, and the little space that isn't paid ads is often useless.

I don't think the generative AI results they're planning will be much better either.


Yes, a lot of analytics use cases for smaller clients boil down to a server report of page requests. Obviously with Google Ads you're using it to monetize, so that's a different story, but the client-side analytics that Google provides are bloated and usually overkill for most sites.


If they can't give up Google Analytics or Google Ads, but still want the perf, give Cloudflare Zaraz a try. I am a Product Specialist there; if you need an intro in person, happy to do it. Just reach out to me on LinkedIn / Twitter.


has anyone had any luck with partytown?

https://partytown.builder.io/


Thanks for this, you just solved my problem! I got an email about this INP thing this morning, but google had zero explanation why our INP timings were so high for a page that does practically nothing. I realised that I had removed analytics from our site last year, but I'd forgotten about this particular page, so that probably explains it.


Easy solution, remove all that adware crap.


This may be controversial but I think this has the potential to be a brilliant metric because it measures some part of web UX that’s often neglected. It’s time consuming to make every single interaction display some sort of loading message but it really helps make the site feel responsive.

As long as they avoid the pattern of adding a global loading spinner that covers the whole screen. That’s just the worst possible loading screen. I suppose it would still pass this metric.

Also I’m not sure if I totally understand the metric - I think it’s simply when the next frame is rendered post interaction, which should easily be under 200ms unless you’re

1. doing some insane amount of client side computation

2. talking over the network far away from your service or your API call is slow / massive

and both of these are mitigated by having any loading indication so I don’t understand how this metric will be difficult to fix.


> This may be controversial but I think this has the potential to be a brilliant metric because it measures some part of web UX that’s often neglected.

It also seems to be a metric that is very easily gamed.

If all that matters is instant feedback, then just draw that loader as soon as the user clicks add to cart; do not wait for the request to start. It does not matter that it will take X or Y seconds.
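To make the "gaming" concrete, here's a sketch of the handler pattern being described (all names like `makeAddToCartHandler` and the `"spinner-shown"` states are illustrative, not from any real site):

```javascript
// Hypothetical add-to-cart handler: update the UI synchronously,
// then fire the network request. Since INP only measures the time
// until the next paint, the spinner counts as the "response" even
// if the request itself takes seconds.
function makeAddToCartHandler(ui, sendRequest) {
  return async function onClick() {
    ui.push("spinner-shown"); // instant feedback, before any await
    try {
      await sendRequest();    // may take X or Y seconds
      ui.push("added-to-cart");
    } catch {
      ui.push("error");
    }
  };
}
```

The spinner state lands before the first `await`, so the next paint happens almost immediately regardless of how slow the request is.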


> It also seems to be a metric that is very easily gamed.

Fun fact: the current JS-specific metric (which is being phased out) is First Input Delay, and it was explicitly designed to avoid this gaming:

> FID only measures the "delay" in event processing. It does not measure the event processing time itself nor the time it takes the browser to update the UI after running event handlers. While this time does affect the user experience, including it as part of FID would incentivize developers to respond to events asynchronously—which would improve the metric but likely make the experience worse.

> - https://web.dev/fid/

I wonder why they decided to reconsider this trade-off when designing INP.


But what you're describing as "gaming" is precisely the behavior this is supposed to incentivize.

Of course you should be setting a visual "in progress" state before you send out a request. And yes that's supposed to be instantaneous, not measured in "X or Y seconds". That's the entire point, to acknowledge that the user did something so they know they clicked in the right place, that another app hadn't stolen keyboard focus, etc.


You will be pleased to know that isn't actually the case; it's instead a single metric from a larger suite known collectively as Core Web Vitals, which is what is actually used: https://web.dev/vitals/


You’d need some pretty inefficient code for there to be a delay between the user clicking a button and even starting a request…

But even in that case, instant feedback is probably better for the user. It lets them know the website isn’t broken and they don’t need to click again, and it also makes the experience feel snappier.


Honestly, displaying a loader as soon as I click add to cart would be an improvement on many sites. I'd welcome it.

A site that genuinely responds quickly is best. But for a slow site, I'd always prefer one that at least gives me instant feedback that I clicked something over one that doesn't.


> when the next frame is rendered post interaction, which should easily be under 200ms unless

Have you used doordash.com? I don't know how they do it, but they manage to exceed 200ms on every single click, easily. And they're not alone.


I'm lacking lots of context obviously, but: what good is a sophisticated metric when the pages they index are mostly blogspam, SO clones, etc.? I'm not interested in the "most responsive" SO clone. Seems out of touch with what Google search is struggling with these days.


I don’t know what made you think this was somehow the only factor in their ranking algorithm or even a particularly heavily weighted one.


> I don’t know what made you think this was somehow the only factor in their ranking algorithm or even a particularly heavily weighted one

I don't think I implied this at all actually.


The real metric: INP with ad blocking enabled.

Example: NYTimes.com on Mobile Safari with AdGuard. 18 seconds.

Google is being really disingenuous with its so-called metrics. A stroke of the pen could make INP 200ms across the top 500 sites.


> Example: NYTimes.com on Mobile Safari with AdGuard. 18 seconds.

Dear lord, I can't imagine that's the fault of NYTimes. Something is off with your setup.

NYTimes.com is super quick and responsive on my devices.


What "setup"? The parent said Mobile Safari. The setup comes from the factory as a given.


Maybe AdGuard? Maybe they have 2G internet?

Something isn't right, and I have a hard time believing it's NYTimes given my experiences with their website.


Don’t even get me started on 2G internet (or even flaky wifi networks) and JS-heavy pages.


The NY times website on mobile Safari with AdGuard feels perfectly normal on my iPhone 13.

Do you observe the same behaviour in private mode? Something is going wrong on your device.


That's all Google cares about: how to invade our privacy and force us to see more ads.


Google is an ad company. Why would they make metrics that penalize ads?



NYTimes feels like an SSG when browsing (Chrome) from Europe after the initial payload, but that's as an unauthenticated user with uBlock. The sad part is that I can't read most of the articles due to the paywall.


At first glance, 200ms INP is a pretty high latency for a "good" rating. As a comparison, I believe 200ms is an average https roundtrip. I'd expect most interactions to be much lower than that.


I guess it depends on how your interactions are implemented. If it’s an SPA then 200ms is absurdly slow. But if it’s a more traditional form submit or something, then it would take a lot longer for your next set of pixels to come through.


Based on the fact that they're hooking into events like onclick, I'd say that they are not looking at traditional form submits, because then the metric would just effectively be First Contentful Paint or something. My interpretation is that they are indeed looking at first paint after an event handler has been fired on the same page.


This is correct (source: working in web perf for 5 years).

INP is the time between you click/press a key/etc and the moment the next paint happens. It’s only measured for on-page interactions, not for navigations.

It’s basically like http://danluu.com/input-lag/ but as a web metric.
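To make the selection concrete, here's a rough sketch of how the reported value is picked, per my reading of the web.dev docs (the real sampling lives inside Chrome; `approximateINP` is just an illustration): collect every interaction latency, skip one of the worst per 50 interactions as an outlier, and report the next-longest.

```javascript
// Approximate INP selection over a page's interaction latencies (ms):
// for pages with few interactions the worst one wins; for busy pages,
// one high outlier per 50 interactions is ignored first.
function approximateINP(latencies) {
  if (latencies.length === 0) return undefined;
  const sorted = [...latencies].sort((a, b) => b - a); // longest first
  const outliersToSkip = Math.floor(latencies.length / 50);
  return sorted[Math.min(outliersToSkip, sorted.length - 1)];
}
```

So a single slow handler on an otherwise idle page is enough to set your INP, which is why a stray analytics script shows up so prominently.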


Thanks for confirming, yeah that makes sense. Side note, that input-lag thing is a very cool resource.


That was my first impression too but then I thought about what it’s actually measuring: page responsiveness, not animation jank.

I’m not going to expect a 16ms response or anything for every animation but much slower & you see jank.

For page interactivity though? 0.2s is pretty damn fast. Human response time is 0.15-0.25s.

So it’s pretty reasonable.


0.2s is slow for a redraw. Just because it might take you 200ms to click after something happens doesn't mean you can't see/notice when things take that long.


I’m not saying it’s fast. I’m saying that based on the goal of what it is measuring (user input responsiveness) it’s fast enough. For the purposes of the metric anyhow.

Plus, speaking from far too long of a career dealing with user testing, respond too fast and users think you didn’t actually do anything.

So you’re kind of boned either way. This is just measuring programmatic delay.


I've generally had no gripes about this or web vitals in general except for one thing: group population[0]. It's unfair to create a blast radius on a small or medium-sized business's website simply because enough data doesn't exist to determine the true extent of the user experience impact.

The most recent example I've observed this on was a website with a heavy interactive location finder experience that lived on a single page. Fine, penalize that page. There's a chance users won't initially navigate there anyway. However, because a (very minimal, practically irrelevant amount of) similar content on the rest of the page was present on 18 other pages, the impact was huge.

The reality of the web today makes this pretty dire in my mind. Many businesses choose to run websites that are generally fast, but they have to engage with third-party services because they don't have the means to build their own map, event scheduler, or form experience. The punishment doesn't fit the crime.

[0]: https://www.searchenginejournal.com/grouped-core-web-vitals-...



INP feels like a pretty problematic way to compare sites, because INP is going to be way lower on a site that doesn't do client-side rendering, even though client-side rendering makes interaction with a site faster!


> client-side rendering makes interaction with a site faster!

I am going to have to disagree. Final HTML from the server is just that: it's final. The client displays it and it's done. No XHR, no web sockets, no JS eval. It's done. You can immediately use the webpage and the webserver doesn't care who you are anymore. With an SPA, this is the best case. Maybe you even start with SSR from the server and try to incrementally move from there. Regardless, the added complexity of SSR->SPA and other various hybrid schemes can quickly eat into your technical bullshit budget, and before you know it that ancient forms app feels like lightning compared to the mess you proposed.

Reaching for SPA because you think this will make the site "faster" is pretty hilarious to me. I've never once seen a non-trivial (i.e. requires server-side state throughout) SPA that felt better to me than SSR.


> I've never once seen a non-trivial (i.e. requires server-side state throughout) SPA that felt better to me than SSR.

What about gmail? That has all the state server side. How impressive would it be if all rendering was done server side?


People don’t care whether your site is server or client rendered… they care about fast interactions


I completely disagree. Client side has the potential to be very fast, even faster. However, most people are more interested in writing a complex, Turing-complete type system under their client than making fast, easy-to-use applications.


Absolutely love how one company dictates how you should build your websites. Love it!


You can build your website any way you like.


Has anyone been able to demonstrate to their satisfaction that improving Web Vitals scores actually improves their search engine placement? We send web vitals field data to our own analytics servers to track P75, but Google changes its algorithm so much we can't quite prove that our various LCP/CLS/FID/INP changes are actually making any difference.
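For anyone doing the same, the P75 part of the pipeline is simple enough once the field samples are on your server; a minimal sketch using the nearest-rank method (the hard part, actually collecting the beacons, is omitted):

```javascript
// Nearest-rank 75th percentile over field samples (ms): the value
// that 75% of recorded page views are at or below, which is how
// Core Web Vitals numbers are typically aggregated.
function p75(samples) {
  if (samples.length === 0) return undefined;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length); // nearest-rank method
  return sorted[rank - 1];
}
```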


Meanwhile "Engineering Leader" at Chrome argues that 2.4s to First Contentful Paint is fast: https://twitter.com/addyosmani/status/1678117107597471745?s=...

Google's one (of many) heads has no idea what another (of many) heads says or does.


Isn’t that tweet talking about 2.4s for Largest Contentful Paint? It mentions 0.9 for FCP being fast, which I agree is pretty reasonable.


Funnily enough I clearly remember typing Largest Contentful Paint. But it turns out I typed First :)

My point still stands.

"The First Contentful Paint (FCP) metric measures the time from when the page starts loading to when any part of the page's content is rendered on the screen." [1]

Unless your server is overwhelmed and can't send back data fast enough, there's literally no way to call "1 second before anything is rendered on screen" fast.

In the context of the tweet this is even more egregious. They were talking about Reddit's yet another redesign, and how it was fast. Reddit is a website that displays text and images. Their server responds in 200 milliseconds max. And yet, they were talking about how spending 0.9 seconds to display some info (menu on the left?), and 2.4 seconds to display actual content, is fast.

And that comes from "engineering leader at Chrome". We are at a point in time where people literally don't understand what fast is.

[1] https://web.dev/fcp/


He’s referring to the P75 from Chrome’s field data. Now, Reddit definitely could do more here and get that LCP at the same time as the FCP (eliminate load and render delay). But a big purpose for these metrics is to make the web more accessible/usable, and the reality is most of the world doesn’t have iPhones or fast networks[1].

[1] https://infrequently.org/2022/12/performance-baseline-2023/


Indeed, most of the world doesn't have iPhone.

And yet these metrics literally optimize for Reddit's 109 Javascript files to render a page of text with images in 2.4 seconds. And they call it fast.

However you slice, dice, or interpret these metrics, none of these shenanigans make it fast.

What these metrics and "P75" show is that the modern web sucks for everyone, and they have to pretend that 2.4s to render a page is fast because everyone else is even slower.

Or, to put it in context: Reddit's servers serve that page in under 200 milliseconds. There's literally no justification in the world that makes it OK to say "2.4 seconds to render that on screen is fast". None.


Yep, and this is also referring to the P75 LCP. Notice how he's referencing the field data from PageSpeed Insights, which is reporting Core Web Vitals. Anytime these metrics are discussed it's always the 75th percentile.


Whether 2.5s to LCP is fast or not depends on what your normal day to day experience is


The incentives to measure this and to interpret measurements are completely perverse.

Of course, when everything takes 8 seconds to load, 2.4 seconds is "fast" :)


To me it sounds like this will help the pattern of showing a skeleton screen when loading data. https://www.smashingmagazine.com/2020/04/skeleton-screens-re...


So... Chrome is going to officially spy on their users and report that data to Google?


Just like it's already been doing for decades


Will Firefox support it?


Starts strong:

> Chrome usage data shows that 90% of a user's time on a page is spent after it loads

Clearly impressive, breakthrough, research going on at Google.


Or to put it another way, people waste 10% of their time on the web waiting for slow bloated garbage to load (or more when you take into account pages that do more loading when you touch them afterwards)


This just shows that they don't even understand what they are measuring.

With their engineering leader [1] arguing that 2.4s to display text and images is fast, no wonder they present "people still spend time on websites after they have spent an eternity loading" as a surprising find.

[1] https://twitter.com/addyosmani/status/1678117107597471745?s=...


Less obvious than you think. How long do you spend on the HN front page? When you open, say, Gitlab, how often do you stay there and not immediately click to something else?


If it takes 300ms to load and takes me a few seconds to find a link, I’ve spent 90% of my time on the page after it loads.


> How long do you spend on the HN front page?

...more time than it takes to load. much, much more time.


It's much less than I expected.


Truly epic comment.



