Ask HN: How do you measure performance of your client-side application?
94 points by mgolawski on Oct 18, 2017 | 33 comments
Hey. I want to ask you guys: what is your solution for gathering info about reflows/repainting, layout thrashing, and overall performance on the client side of an application? Do you use any tool to gather and store analysis data, classical Chrome dev tools investigation, or maybe some metrics implementation inside your code?




Lighthouse results seem random; I got three widely different results for the same website. I would expect the results to vary slightly (taking into account the time it takes to receive the requests, server load, congestion and whatnot), but I also have a feeling that the time to first paint and time to load are impacted by the current load on the machine running the Chrome instance that runs the Lighthouse tests, or something.

Am I alone in observing this?
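
For now, one way to sanity-check the variance is to script a handful of runs and compare medians instead of trusting any single run. A rough sketch, assuming the lighthouse and chrome-launcher npm packages (result field names vary between Lighthouse versions):

  const lighthouse = require('lighthouse');
  const chromeLauncher = require('chrome-launcher');

  async function medianFirstPaint(url, runs = 5) {
    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    const samples = [];
    for (let i = 0; i < runs; i++) {
      const result = await lighthouse(url, { port: chrome.port });
      // Older releases expose this as audits[...].rawValue instead.
      samples.push(result.lhr.audits['first-meaningful-paint'].numericValue);
    }
    await chrome.kill();
    samples.sort((a, b) => a - b);
    return samples[Math.floor(samples.length / 2)]; // median
  }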


I'd love to investigate what's going on with your results. Could you save the results from a few runs and share them in a github issue?

> I also have a feeling that the time to first paint and time to load are impacted by current load on the machine

This is certainly true of any performance test. But it shouldn't be a significant problem unless you're multitasking during recording.


I'll see what I can do about reproducing the issue.


Is Puppeteer similar to the Selenium framework that is used for automation? Am I missing something here?


I watch over people's shoulders as they use the application on various devices and make notes when they get frustrated.


How does watching people give you any actionable information "about reflows/repainting, layout thrashing and overall performance on the client side?"


Uhhh... Don't worry about it if the users are happy?

I mean, unless you're allocating 1GB RAM for a text editor, I say don't worry about it if the users aren't noticing it. It's super easy to nitpick your own work to the point of wrecking what you've done right.

EDIT: Oh, in Chrome.

Well, then it depends on what you built and how many libraries you decided to use.


My team's approach has been to automate real devices and report on any regressions in terms of JavaScript evaluation, layout, paint, garbage collection, etc. Shameless plug: the team is growing and we're hiring an experienced front-end dev. https://nordstrom.taleo.net/careersection/2m/jobdetail.ftl?j...
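
For a taste of the kind of numbers involved, here's a minimal sketch with the puppeteer npm package (not our actual pipeline, and note Puppeteer drives desktop Chrome rather than real devices):

  const puppeteer = require('puppeteer');

  (async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('https://example.com', { waitUntil: 'networkidle0' });
    const m = await page.metrics(); // wraps Chrome's Performance domain
    console.log({
      scriptMs: m.ScriptDuration * 1000,      // JavaScript evaluation
      layoutMs: m.LayoutDuration * 1000,      // layout
      styleMs: m.RecalcStyleDuration * 1000,  // style recalculation
      heapMB: m.JSHeapUsedSize / 1024 / 1024, // rough GC-pressure proxy
    });
    await browser.close();
  })();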


Oh no, Taleo, the corporate ATS black hole where résumés go to die. I'd rather not apply to companies that use Taleo because of various past bad experiences, the first being that the résumé and all the details I type into Taleo go into HR oblivion. The problem of job applicant tracking & recruitment is not yet solved.


I've recently been through a number of these damn things and have found SmartRecruiters[0] to be pretty solid. Clean UI and easy application flow. The companies using it have been good about actually responding to applications as well, which is always nice.

[0] https://www.smartrecruiters.com/


How are you measuring those metrics on real devices? Do you use the Chrome tracing timeline? I wrote a tool called http://github.com/axemclion/browser-perf that does something similar, so I was curious!


Do you take remote-only applicants for that position?


Curious, is this the same team that does your Analytics / AB testing?


I open Chrome's task manager, which shows total memory usage & CPU usage for each tab in a sortable table. Also, in dev tools, I activate the FPS meter overlay. Then I do things in the app to stress it, which will depend on your app. If it maintains 60fps through all use cases, you don't really need to go any further. I also use the app for an extended period of time (or programmatically simulate that) and ensure memory doesn't creep up continuously, and that CPU is at 0% or close to it when idle.
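
If you want something scriptable on top of eyeballing the FPS meter, a quick console-paste sketch like this flags any frame that blows the ~16.7ms budget (the extra 5ms is arbitrary slack for timer jitter):

  let last = performance.now();
  requestAnimationFrame(function tick(now) {
    if (now - last > 1000 / 60 + 5) {
      console.warn('long frame: ' + (now - last).toFixed(1) + 'ms');
    }
    last = now;
    requestAnimationFrame(tick);
  });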

If it drops below 60fps in a way that's an issue, open the performance/timeline tab in Chrome dev tools & record while repeating that action a few times. In this tab you can also throttle the CPU, or you could just do a bunch of CPU-bound stuff on your main thread every 30ms to simulate a slower CPU.
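
A crude version of that trick, with arbitrary numbers: burn ~10ms of main-thread time every 30ms and watch what janks first.

  const burn = setInterval(() => {
    const end = performance.now() + 10;
    while (performance.now() < end) { /* spin */ }
  }, 30);
  // clearInterval(burn) to stop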

From there you'll know what the issue is: n+1 calls into a jQuery plugin, memory leaks, garbage collection; it even shows paint issues pretty clearly.

From there, you can dig deeper, but it depends on your app & the frameworks used. For example, if you're using React, you might go into React dev tools or Redux dev tools. For AngularJS apps you could use Batarang to see what watch queries you have, etc.

For paint issues, use the checkbox in dev tools to highlight paints; you can easily see at a glance if the whole UI is being repainted. If so, it means your virtual DOM library is detecting changes when it should not. Or maybe you have a legacy jQuery app that just builds up a huge string and does div.innerHTML = string, which should be rewritten to use a virtual DOM library.

For reflows, I also use the performance tab; they show up as "recalculate style" or something like that. Usually it means you're calling $(div).width() or something similar inside a loop, and you can just cache the value to fix it. Again, it depends on your app. If you have drag & drop widgets & you have 9000 widgets on a page, you're going to have some jank if you're binding jQuery draggable 9000x. You can use optimizations like not binding until the user mouses over each widget. To get rid of the jank, one strategy is to make the jank happen in smaller pieces over time instead of on page load or component mounting.
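
To make the caching fix concrete, a sketch with made-up selectors:

  // Bad: reading layout inside a loop that also writes forces a
  // reflow on every iteration.
  $('.widget').each(function () {
    $(this).css('width', $('#container').width() / 3);
  });

  // Better: do the read once, then only write inside the loop.
  const third = $('#container').width() / 3;
  $('.widget').each(function () {
    $(this).css('width', third);
  });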


Chrome dev tools investigation is decent, and it even lets you throttle network bandwidth to see the effect of slow connections. It doesn't let you throttle the CPU, though, so an important part of identifying performance problems is to try your app out on a cheap or old phone.


CPU throttling is now available: https://plus.google.com/+AddyOsmani/posts/NRsAqshb17n

It's in the controls near the top of the Performance tab


Oh that's funny. I just saw it in "Highlights from the Chrome 61 update" and came here to correct myself.


It doesn't let you throttle WebSockets either.


I've been using Typometer (1), which measures the time from input to output. I think it's important to measure from outside the program. Ideally you'd have a high-speed camera setup that takes 1000 images per second pointed at the monitor, and you'd test from several devices, old ones in particular.

1) https://pavelfatin.com/typometer/
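
If a camera rig is overkill, a rough in-page approximation is to time from the input event to the next frame callback. It stops short of the pixels actually reaching the monitor, which is exactly why the external measurement matters, but it's a cheap first signal (assumes event.timeStamp shares the performance.now() clock, as in recent Chrome):

  document.addEventListener('keydown', (e) => {
    requestAnimationFrame(() => {
      // Fires just before the first paint after the key was handled.
      console.log('input -> frame: ' +
        (performance.now() - e.timeStamp).toFixed(1) + 'ms');
    });
  });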


MachMetrics - it's basically a private WebPageTest instance on better hardware, which runs your tests on a recurring schedule, with nice graphs to show trends. https://www.machmetrics.com


I wonder how much business value (versus technical value) there is in those metrics.


Quite a lot, actually.

I went to the NY Velocity conference back in 2014, and these are from my notes. Unfortunately, I don't have any sources to link back to, except I believe this was the session where my notes came from: https://conferences.oreilly.com/velocity/velocityny2014/publ...

Also an obligatory warning that correlation is not always causation.

- Walmart.com: for every one-second improvement in page load time, they experienced a ~2% conversion increase.

- Shopzilla: sped up pages from 6s to 1.2s, increased page views by 25%, increased revenue by 12%.

- Yahoo: increased traffic by ~9% for every 400ms of improvement.

- Mozilla: made pages 2.2s faster and resulted in 60m more downloads.

- On average ~57% of users will abandon a site that hasn't loaded in ~3 seconds (again, I don't know the source).

- When a page takes ~8 seconds to load, conversion rates start to drop by 40-60% (and again, I don't know the source).

- In a 2012 EEG study (sorry, I don't know which one), they throttled a desktop connection from 5Mbps to 2Mbps and found that about half the participants had difficulty concentrating and finishing the task asked of them.

- From 2013 to 2014, the median page size grew ~67% in just a year. About 50-60% of that size is from images (again, I don't know the source).


I get what you are saying. My doubt is that most businesses are not Walmart (etc.).

What matters at the scale of a large retailer (so large as to dwarf Amazon) is not what matters at most places. It's not just that Walmart is dealing with fungible goods; it is also that:

+ Walmart's online store is dealing with legacy pre-internet database architectures on the back end.

+ Walmart's online store is competing with Walmart's brick-and-mortar stores when it comes to architecting IT infrastructure and allocating resources, and the brick-and-mortar stores eclipse Walmart's e-tailing.

Because I think Yahoo is pretty good, due to being consistently profitable, I will forgo the softball snark of "Why would a tech company want to be like Yahoo?" But there is a real question of whether the best practices of the companies you list are appropriate for a business in a narrow market.

Going further, if it is an area of concern, choose technologies and page architectures that don't run the risk. Angular and React were developed to meet the needs of companies at the scale of Google and Facebook, not a shop whose collective knowledge does not include experts on page repaint metrics.


Quite a bit. The infographic here is interesting:

https://searchenginewatch.com/2016/11/16/how-speed-affects-y...

Additionally, the amount of 'jank' in your user interactions can have an effect on conversion and engagement. Etsy has released data on this in the past.


Well, for initial time to interactive there’s a definite correlation with conversion rates in ecommerce. I would assume (not confirmed) that the correlation continues through the purchase process as performance goes down. Investing in performance delivers very clear business value (of course there’ll be a point of diminishing returns).


Luckily, https://wpostats.com is there to answer this question.


At the scale of the sites listed, it probably matters. But those sites have the traffic volume to run A/B tests at statistically relevant scales and have probably run A/B tests against page content to have already picked up lower hanging fruit.

Or to put it another way, the BBC website is much further along the "first make it correct then make it fast" curve than most websites.


Ya, I get what you're saying. A buddy of mine made this barebones calculator that lets you see the kind of monetary impact any site could expect from working on performance: https://wwwillchen.github.io/web-perf-budget/

It's still a prototype and missing a lot of explanation, but the numbers behind it are solid.
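
The back-of-the-envelope math behind that kind of calculator is roughly this (placeholder numbers, not what the tool actually uses):

  function revenueImpact(monthlyVisits, conversionRate, avgOrderValue,
                         secondsSaved, upliftPerSecond) {
    const extraConversions =
      monthlyVisits * conversionRate * upliftPerSecond * secondsSaved;
    return extraConversions * avgOrderValue;
  }

  // 1M visits, 2% conversion, $50 order value, 1s faster, ~2%/s uplift
  // (the Walmart-ish figure upthread) => ~$20,000/month
  revenueImpact(1e6, 0.02, 50, 1, 0.02);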


I'm not sure that there are generic numbers. Some things I will wait in line for; other things, like gasoline at the filling station, I won't, because I can get it somewhere else. The same is true with ecommerce: it depends on what I am buying and where else I can get it.


We run an internal private instance of WebPageTest


Headless Chrome opens the window to visual debugging. See

https://screenster.io/what-is-end-to-end-testing-and-is-ther...


With a stopwatch and notepad



