Website load and speed analyzer (varvy.com)
226 points by warrenm on Aug 9, 2017 | 68 comments




On that note, GTMetrix just did a great write-up comparing a few of these tools: https://gtmetrix.com/blog/the-difference-between-gtmetrix-pa...


thanks for that list.


Getting rid of image metadata is _not_ optimization, that is simple information loss. I hate it when speed tests tell me that progressive JPEGs are not optimized because they have EXIF; EXIF is part of the image.


What use is an image's metadata when rendering it on a webpage?


It's not uncommon to download images from a website in general. More specifically, I know my mother - an avid photographer - uses a plugin to inspect metadata to find out how and where a photo was taken.


That is not the typical use case. As web engineers, one of our goals is to make websites faster. Speed is a very important part of the UX.

Also, unless you have a fast website, people will drop off your site and Google won't rank you. There's no point in having great images with good EXIF data if people never see the image.

For photography websites it does make sense to keep it, but in general there's no point wasting those extra bytes, which add up to GBs in wasted bandwidth.


I disagree that as web engineers, our goal is to make websites faster. Our goal is to make a website that suits the customer's needs.

Sometimes, that can even mean making a website slower. I've had that request before because end users were surprised that the site loaded so fast. We made stuff slow on purpose so that the user thinks we are crunching data hardcore and giving their request some real thought before presenting the results.

Sure, if the customer asks for ultimate speed and explicitly does not care about image metadata availability, then removing that metadata is fine. But otherwise, removing those few bytes probably won't change how fast your website loads by a significant amount. You're better off working on caching, reducing TTFB, automatically adapting image size to screen size, spriting icons with HTTP/1, server push with HTTP/2, gzipping, reducing the size of JavaScript/CSS and working on perceived speed. This will probably be orders of magnitude more useful to users than removing image metadata. And it won't remove that useful information from images.
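
(To illustrate the "adapting image size to screen size" point, here is a minimal sketch using srcset/sizes; the file names and breakpoints are made up.)

    <img src="photo-800.jpg"
         srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
         sizes="(max-width: 600px) 100vw, 50vw"
         alt="Product photo">

The browser picks the smallest candidate that covers the layout width, so a small screen never downloads the 1600px file.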


Wow! I primarily work on consumer-facing web products where speed does play a big part, but glad to know where you are coming from. I have not had such a request or perspective. :) Thanks for sharing your experience.

Agreed that it's not the first thing you should focus on. But saying you should not do it is what I was protesting.

When you run your site through speed analysis tools like the site posted or WebPageTest (my favourite), you get a set of tasks ordered by priority. Removing EXIF data is generally ranked low, and I am not advocating against that.


I've had such requests too. We had a system that returned query results in a few ms, and people rated it as untrustworthy, not valuable, etc etc.

We added a fancy "computing... thinking... validating..." interstitial, and suddenly everyone loved it.

It was literally just an animated gif with a meta refresh redirect.


>I disagree that as web engineers, our goal is to make websites faster. Our goal is to make a website that suits the customer's needs.

The overwhelming majority of the time, to "make a website that suits the customer's needs" == make it faster


Recall that EXIF data is textual, and therefore no larger than kilobytes in contrast to the megabytes that JPEGs otherwise take up. While this may eventually add up to gigabytes of bandwidth, it will only be after downloading terabytes of JPEGs.

One can imagine a variety of small things that are only useful to a minority of visitors and go unnoticed by the rest. "alt" attributes on images are one example: they are often used by the blind, but mostly go unused by everyone else. Do you think provisions for the blind should be removed because they waste (tiny amounts of) data?


When I spoke about bandwidth it was about the server. I should have made it more clear. Those costs add up over months.

The alt tags are VERY important for SEO purposes.


The same applies whether it’s the server or user bandwidth, the volume used by EXIF data is going to be tiny compared to the relatively incompressible image data you’re shifting.


Removing EXIF data won't make your website faster lol. You should find other ways to achieve your goals ;)


Even when your webpage has over 100s of images?

Which is the case for listing pages of e-commerce sites, video sites, social networks etc.

As a frontend engineer, I look for every way to achieve my goals, as every ms matters :).


>Even when your webpage has over 100s of images?

On a page with 100s of images removing the EXIF data from all the images would very likely have less of an impact than removing 1 image. Optimising is good, but you need to concentrate on optimising the right things. Start with the biggest impact and work downwards; chances are you'll never get to the point where stripping EXIF data makes a measurable difference.

Besides, if you're worried about every ms then you wouldn't implement a website that loads hundreds of images on to a single page in the first place.


100s of images are 99% lazy-loaded, so not a concern. Get rid of the JS bloat instead, which doesn't affect the download speed at all, yet cripples the receiving box with the client-side rendering.
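
(For what it's worth, browsers have since added a native hint for this; a minimal sketch, support permitting, with a made-up file name.)

    <img src="thumbnail-001.jpg" loading="lazy" alt="Listing thumbnail">

Images below the viewport are only fetched as the user scrolls near them, no JS required.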


That is actually very uncommon for the typical user.

Outside of photographers, who is going to do this? What percent of the web are photographers (real photographers... not "here's what I ate for dinner" photographers)?

Even if you somehow got 1% of the total web population, wouldn't that still fall into uncommon?

Compound this with the fact that the vast majority of web images are not works of art that photographers will try to dissect to see how the image was shot, and I would bet "uncommon" quickly becomes "once in a blue moon".

If you run a photography site then obviously the metadata is important to you, and you would consciously keep it.


Then you link to the full image from an optimized version - it's not rocket science... and it makes everybody's experience better.


The same as when it's on your desktop: being able to look into the image for additional detail, such as where it was taken, camera specs, etc.


Embedding a copyright notice is one. I used to work in journalism; it's a real issue.


Seems to be wrong about caching.

Tells me:

    Browser caching not enabled for all resources.
And then gives an example of a file that is delivered with this cache header:

    Cache-Control:max-age=3600
Also, it seems to put DNS resolution time into the "Server response time". So if you get "Slow server response time", that might be something happening between this service, its DNS provider and your nameserver. Then when you try it again, suddenly it's "Fast server response time" because DNS caching took place.


That Cache-Control header specifies one hour. My resources specify a month or two, and the tool doesn't complain about any of them.


An hour to cache? That's nothing.

Days, weeks, or months are more appropriate 99% of the time, for 99% of content.
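
(For static assets with versioned or content-hashed file names, a long-lived header like the sketch below is common; "immutable" is an optional extra whose browser support varies.)

    Cache-Control: public, max-age=31536000, immutable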


IMO, inlining styles is rarely worth it. Just include a single CSS file in the head and call it a day. There's value in simplicity.

If I'm given the choice, I usually avoid web fonts. Most platforms' built-in fonts are pretty good and fast. It's fine for a website to look different on various browsers and operating systems.


When your CSS file gets large (which is the case on most sites today thanks to frameworks like Bootstrap), your page will wait for the CSS file to download and be processed before it can paint. Inlining some of the CSS, i.e. the critical-path CSS, can give you lightning-fast first renders.
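
(A minimal sketch of the idea, with made-up file names: only the above-the-fold rules go inline, and the full stylesheet still loads and caches as usual.)

    <head>
      <style>
        /* critical, above-the-fold rules inlined into the HTML */
        body { margin: 0; font-family: sans-serif; }
        .site-header { height: 60px; }
      </style>
      <!-- full stylesheet, cached for subsequent pages -->
      <link rel="stylesheet" href="/css/main.css">
    </head>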


Also, don't forget to strip your unused classes, which, I'm embarrassed to admit, I did not start doing until recently.


Then you need bandwidth to download the CSS that is embedded in the page.

Whereas if you cache the linked CSS file, you only take the hit once; as you navigate around the site, it'll be loaded from the cache each time.


Also, inlining CSS prevents it from being cached, so the first page load may be faster but subsequent ones will suffer if the amount of inlined code is fairly large.


Yup. I work with a very large site and our entire CSS is about 25kb after gzip.

It’s a pointless micro-optimization.


The Google Chrome Developers/Google Developers YouTube channels have great content on frontend optimisations by Paul Irish and others. They teach you what to measure and how. You might want to check them out if you are interested in frontend engineering.

When your site has a large audience, every millisecond counts, especially when you don't have someone bankrolling your traffic. Speed plays an important part in both SEO and UX.


I’m quite familiar with Paul’s work. He’s not really saying anything new in those talks either. It’s good technical info but again - a pointless micro-optimization in most cases.

There are so many other, bigger, better, wins to focus on first.

Considering anything Google does as useful for the average developer - even at a reasonably large company - is like thinking pizza delivery guys should emulate UPS: it’s a misunderstanding of context.


25kb of CSS after compression?

That's a damn lot of CSS


That’s all possible CSS across the site. The average is much lower; our global CSS is 3kb, for example.


Critical-path CSS is not fairly large; it's fairly small in most cases.


When you say "critical path" you just mean above-the-fold stuff?



The tool should also check whether a page uses HTTP/2, because with HTTP/2 inlining does not give any benefits.


Not true. Even in h2, the browser will first parse the HTML to discover the `<link rel=stylesheet>` and it'll then go issue that request. In h2 it'll reuse the TCP connection (which is a win), but you've still got a round trip before you start receiving the stylesheet response body.

FWIW here's a pretty good analysis that shows even with h2, bundling and inlining still has benefits: https://sgom.es/posts/2017-06-30-ecmascript-module-loading-c...
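
(If you'd rather not inline, one partial mitigation is having the server announce the stylesheet before the HTML is parsed, e.g. with a preload Link response header; a rough sketch with a made-up path. Some HTTP/2 servers also use this header to trigger server push.)

    Link: </css/main.css>; rel=preload; as=style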


It incorrectly reports that browser caching is "Not enabled for third party resources" on my site: http://blog.zorinaq.com

It gives absolutely zero details about why it thinks it's not enabled, and does not say which are the affected "third party resources"... As a matter of fact I only have 1 third party resource (https://fonts.googleapis.com/) and it properly sets Expires and Cache-Control...


I've used http://yellowlab.tools religiously in the past, glad to see more solutions coming about.

Although it is frightening that we need these in the first place, to be honest.


Varvy is not new. It used to be feedthebot and has been operating as it is now, without any updates, for at least 2 years. Maybe longer.


Ah gotcha, thanks for letting me know.


How's that? Frightening I mean. Why do we need profilers in the first place? Git gud scrubs?


We're not profiling ns/ms differences caused by the weird, idiosyncratic ways an Intel CPU decides to cache certain instructions; these are tools that literally spit out "stop using so many large assets so you can cut the load time to under 10 seconds" and "learn how to minify every type of file".

I frequently come across websites with several thousand lines of extraneous CSS/JS code lying around because they've imported multiple libraries (WordPress templates are usually a surefire way to see this). Websites use multi-MB images all over the place.

It's fucking insane. The web leaves everything to be desired as a development target, yet we keep trying to build a palace on a foundation of feces simply because the lipstick is so alluring.

Just as Eternal September ruined discussion on the web, basing the internet on the most poorly designed stack we could come up with has greatly hampered the speed at which we're able to advance humanity using it.

The same thing already happened with operating system development, where we used to have many differing options (with new ones coming out all the time), which for the most part sort of all worked together, but each had a different take on certain technical decisions, from which we then gleaned valuable information. Plan9 was the last well-known experimental OS to be released which actually made waves in operating system development (OpenBSD deserves a lot of credit here as well, as they're constantly thinking outside the box, but mainly in a security-related fashion). RedoxOS is the next OS project I'm peering into from a distance to learn from their decisions.

But I digress, that's why it's scary. The fact that everyone is wasting their lives trying to dress up a pig who only cares about frolicking around in the mud.

Unfortunately, that won't change because popularity dictates attention, and working on little experimental things that could truly revolutionize the world is far too risky for the vast majority to undertake.

Large companies are the only ones able to toy around with experimentation full-time, but the problem you come across is that they're usually going to make whatever they figured out proprietary, so they can make some money off it.

Never going to start a reply on a mobile device ever again, I hope my rambling was coherent enough =b


Not even close to coherent.


Others seem to disagree, thankfully.


YellowLab complains about the number of webfonts in my CSS file, which is just stupid - fonts aren’t loaded if they’re not used, so why are they a problem? Because of the three lines of CSS needed to include them?


I mean, if it's there and not being used, yes, that's a problem.

If we were using a better platform, dead code elimination and tree shaking would have handled those types of issues for us before clients see it.

Like, fuck me, we solved this problem what, 50 years ago? Yet we still haven't solved it? WHY IS THIS STILL AN ISSUE! </rant>

Anyways, it's a problem, yes. A few programming languages wouldn't even allow you to compile equivalent code.


What? No, it’s definitely not a problem. If I include all the different weights of a font family in my CSS, but on some pages don’t use the bold weight, for example, how is that a problem? The browser only downloads the fonts that are actually used for exactly this reason.


What I mean is that the CSS should be minified to remove _any_ extraneous rules, and that this should be done automatically, just as it's done with most compiled binaries.

If you're using those font styles later on, but still using the same stylesheet, that's obviously alright; the performance-testing tool would just need to spider through all the pages to verify where each rule is actually used.


Ok - it works and I get a nice report.

Does this offer anything more than WebPageTest.org, Google Pagespeed, or auditing in Chrome?


Would be good to be able to kickstart the process via a query string, e.g. varvy.com?q=https://my.url


I hate how all of these tools complain about Google Fonts. This looks like Google's PageSpeed tool with a nicer UI.


I think all the tools are trying to tell you something. Fonts are in fact the largest dependency on many websites.


I specify only the characters I use in custom fonts. I budget no more than 30kb for fonts when making a website.

These tools complain that Google Fonts' CSS is only cached for 2 hours, which is something none of us can control. Even Google's own PageSpeed tool complains about this. I get perfect scores in everything but Google Fonts, so it sticks out like a sore thumb.
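
(On the character-subsetting point, a minimal sketch using a self-hosted, pre-subset file and unicode-range; the font name and path are made up.)

    @font-face {
      font-family: "SiteFont";
      src: url("/fonts/sitefont-latin-subset.woff2") format("woff2");
      /* the browser only downloads this file if the page actually uses characters in this range */
      unicode-range: U+0000-00FF;
    }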


Can't you serve them from your own server?


Google does optimisations based on the user agent. Ilya Grigorik does not recommend hosting the fonts on your own server.


Hey, a UI point - you might want to set a consistent height for those grid elements in the list -- it would do a lot to make the elements more consistently laid out.

It looks like the individual tiles vary by about 2-4 lines for the below-title description text, so I fiddled around for a bit, and it looks like this is just about right:

  .card {
      height: 25em;
  }
Of course, if you're going to set the height of the card you should probably go in and set the heights (in em or percentage) of the child elements, etc.

(just noticed the results page cards are a lot bigger, so probably scope that rule down to only the cards on the landing page)


I noticed the site is reporting JS and CSS sizes AFTER gzip, not before. That seems like the right call. Browsers will download the compressed version, so it's the compressed version that determines the page speed.


What are you supposed to do about image alt text when the image is purely decorative, like a border?

This complains about alt="".


Some people would say to use a div and the background-image CSS property. Not sure if that's the recommended solution, though.
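
(A rough sketch of that approach, with made-up names: the decorative image moves out of the markup entirely.)

    <div class="decorative-border"></div>
    <style>
      .decorative-border {
        height: 8px;
        background-image: url("/img/border.png");
        background-repeat: repeat-x;
      }
    </style>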


Well - I would try to avoid the image altogether and use border style properties instead. That way I would avoid a request entirely.

But maybe someone with real experience (not some hobbyist when it comes to front end work) should chime in.


Per https://www.w3.org/wiki/HTML/Elements/img

> A purely decorative image that doesn't add any information: In general, if an image is decorative but isn't especially page-specific, for example an image that forms part of a site-wide design scheme, the image should be specified in the site's CSS, not in the markup of the document. However, a decorative image that isn't discussed by the surrounding text but still has some relevance can be included in a page using the img element. Such images are decorative, but still form part of the content. In these cases, the alt attribute must be present but its value must be the empty string.

I have always understood that (and earlier spec incarnations) to mean that <img alt=''> is correct for these cases.
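
(So for a decorative image that stays in the markup, something like the sketch below is what the spec calls for, even if the tool flags the empty alt; role="presentation" is an optional extra hint and the file name is made up.)

    <img src="/img/divider.png" alt="" role="presentation">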


Many of these recommendations are at odds with HTTP/2 recommendations and practices.


WebPageTest offers more actionable insights. Chrome Developer Tools and Google PageSpeed offer almost the same features.


http://www.confessionsoftheprofessions.com

It probably blocked you but it's been over 5 minutes... https://www.screencast.com/t/CFkwelblQ



