I hate to be negative, but what really is the point of this? That a simple webpage without any content can be fast? Of course it can.
Is it desirable to inline your CSS, "like a boss?" Maybe if you have one single web page. What if you have dynamic content and your users intend to browse more than one page? With externalized CSS, that is all cached.
Same with images. If I'm building a web application, I certainly do not want inlined images. I want those on a CDN, cached, and I want the page to load before the images.
Not only is this not particularly useful advice, it's bad advice.
This guy's advice is exactly what Google advises you to do, and exactly what Google does.
You say this website is only fast because it's "without any content". If there's no content then tell me how it communicated its point so clearly. If it's inherently fast then tell me why the same thing posted to Medium is so slow.
A hallmark of a great solution is that people who see it decide the problem must not have been very hard.
Google doesn't do this on their more full-featured apps like Gmail or Google Docs. Docs ships a bunch of files and stylesheets.
Search works well because they have relatively few features to support by default (for things like the calculator I bet they ship that in with the response).
HN is neither of the two and IMO represents where people spend most of their time.
I think Google's CSS embedding is terrible advice for the meaningful web, but logical advice for adwords landing pages or sites with content so bad or sparse you won't be navigating them.
HN has a purposely minimalist stylesheet / layout, with NO icons (except the one Y on the upper left corner), NO images or other media, NO fanciful animations.
Not all websites can do without all of that - imagine a photography site, or an e-commerce site, etc. without pictures?
I agree though that this is great advice for landing pages; load times are probably among the reasons for most bounces.
Yes, and HN is in violation of Google's performance guidelines for putting those sensible rules in a sensible place.
> Not all websites can do without all of that - imagine a photography site, or an e-commerce site, etc. without pictures?
How does HN's CSS enforce a ban on img elements in pages pointing to a photo's canonical location? Or preclude putting your standard frames into it?
Imagine a photography site where no photo link is shared across any pages (but 90-page base64-encoded URLs are repeated randomly), or an e-commerce site where a product is shown in a strange new light at every step of the checkout process using a mishmash of entirely different CSS. Google's advice is approving the most idiotic behavior on sites that are barely keeping their heads above water in terms of technical understanding, letting them hold onto strange ideas because they are "fast."
Please don't consider this an attack, but I tend to believe that Google is not in possession of the absolute truth. They even contradict themselves fairly often.
There is no silver bullet solution to all problems when building webpages, and if you think this guy's advice is catch-all for everyone and that "shaming" people who don't follow those rules is good, then you're a bad developer.
Someone else said it elsewhere in this thread, but different contexts need different solutions based on determined use cases and needs.
There is no silver bullet, no technology that is good everywhere, but there are things that are bad everywhere, like unnecessary reliance on JavaScript, code and resource bloat and so on.
Being diplomatic about it and not assuming everyone who deployed a Joomla installation with a ton of bad plugins because it HAS TO WORK NOW is an idiot is OK. But wrapping bad engineering practices in "different contexts need different solutions based on determined use cases and needs" is a very different story.
Obviously, different contexts and use cases need different solutions. That's no excuse to pick the bad ones, nor to pretend they're not really, really bad.
It really depends on your website's usage. I maintain a free database of chemical properties. Usually people land there from Google and look at a single page[0] before leaving.
To make this single-page experience as good as possible, I pack the CSS directly into the HTML. At first I also packed in the image of the molecule, but that was not good for SEO, because quite a few of my visitors arrive from image searches for drawings of molecules. If Google were smart enough to index the inlined images, I would switch back to using them.
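In case it helps anyone, the inlining itself is only a few lines; here's a minimal sketch in Python (the file names and the placeholder are made up for illustration, not what my build actually looks like):

    import base64
    from pathlib import Path

    # Read the image and encode it as a base64 data URI.
    png_bytes = Path("molecule.png").read_bytes()
    data_uri = "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")

    # Drop the data URI where an <img src> would normally point at an external
    # file (the "{{MOLECULE_IMG}}" placeholder is hypothetical).
    html = Path("page.template.html").read_text()
    html = html.replace("{{MOLECULE_IMG}}", data_uri)
    Path("page.html").write_text(html)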
All these optimisation techniques are only good in the right context. That is, you need to pay attention to the assumptions behind these rules.
I have come across this guy's site before and he does offer some useful tips. Nothing you won't find elsewhere, but presented in a clear and concise manner that almost anyone can understand.
There are lots of people self-managing small sites that don't have a clue about any of this stuff - it's a decent resource for such people - nothing more.
We know that if speed is your goal there are tradeoffs you will have to deal with, such as inlined images and CSS that are harder to maintain. This is a well-known architectural tradeoff (along with usability vs. security). If you are willing to pay for your load speed in the form of increased page complexity and reduced scalability, this is a legitimate option.
I suppose if we are in control of the backend database that our content is delivered from, we can rebuild a page with inlined images and CSS whenever the content that drives the page changes, so it is possible to build quick pages.
The main driving point is probably the fact that there is sometimes no need to build pages that rely on all manner of JavaScript etc.
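Something along these lines would do as a build step, as a rough sketch (the file names and the barebones template are mine, not anything the article prescribes):

    from pathlib import Path

    # Regenerate the page whenever the content changes: inline the stylesheet
    # so the finished page is a single self-contained file.
    content = Path("content.html").read_text()
    css = Path("style.css").read_text()

    page = (
        "<!doctype html><html><head><meta charset='utf-8'>"
        f"<style>{css}</style></head>"
        f"<body>{content}</body></html>"
    )
    Path("public").mkdir(exist_ok=True)
    Path("public/index.html").write_text(page)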
After some benchmarking, we did something of this sort on a media site. A middle path works best for us. I have yet to see any other media site do this (yes, we are content-heavy, but we also load fast).
Admittedly, the only thing slowing down the page load is the ads, but we don't bombard those either.
Link: http://www.sportskeeda.com/?ref=san
It is relevant for resource limited IoT devices. It can be a struggle to get embedded web sites running fast using modern frameworks. Throw in TLS with large keys and you enter a whole new world of slow where it pays to shave off every byte possible.
The guy says he's not an idiot, then brags about spending $30 per month on a VPS (idiot) for a single-page static HTML website with all inlined code (idiot+1).
It's not being negative to point out the glaring flaws in a person's statements. My assumption is the entire thing is an advertisement for that hosting service.
Also, whether the VPS has an SSD or not is totally irrelevant— if you really were serving a single page it would be cached in the memory of your webserver.
(Or better yet, serve the thing off S3 and let Amazon be your CDN.)
I'm not saying a VPS is not appropriate for a static HTML web page, but there are perfectly capable VPSes available for $3.50 to $5.
I'm not in agreement with many of the commenters regarding CDNs. I don't believe in a free lunch. Free software is one thing, but CDNs require infrastructure, which incur costs. Somewhere, the people offering those services expect to make those costs back. You'll either pay for it directly, or you'll pay for leeching off someone else's bill in karma. For a very tiny, low-traffic, low-bandwidth website, I think not using a CDN is perfectly reasonable.
Just because Godaddy sucks doesn't mean that all shared hosting sucks. NearlyFreeSpeech.net is fine for most people, or Amazon S3 if you're into AWS / webgui stuff.
If you're actually using a decent amount of bandwidth (ie: image hosting), something like Hawkhost.com would be good. Just gotta stay away from EIG (https://en.wikipedia.org/wiki/Endurance_International_Group), which is a conglomerate that's buying up all the shared hosts and making them crappy.
Bam. Now you have economies of scale AND far cheaper hosting than any VPS.
Keeping in mind that you can get a VPS for as low as $3.50 per month, I don't see that there is ever a reason to use shared hosting, and there are several reasons not to; performance is only one of them.
If your website is that unimportant, then you could probably get by with any free hosting service where your page would be yourpage.serviceprovider.tld
> the guy says he's not an idiot then brags about spending $30 per month on a VPS (idiot),
You can always go with cheap, fully virtualized GNU/Linux server, or you can go with a virtual, true UNIX server running at the speed of bare metal[1].
Your choice, but quality, correctness of operation, data integrity and performance still cost something. If you don't care about any of those things, fork out $5 for the alternative and call it a day.
Not disrespecting your opinion here, but $5 Digital Ocean droplets[1] have been working quite well for me, as well as for nearly two dozen of my clients, over the last several years (taking into account all four of the important parameters you've specified: quality, correctness of operation, data integrity and performance).
My (limited) experience with Vultr[2] has also been fairly satisfactory.
As far as I am aware, Digital Ocean is running on Linux, and because Linux requires turning off memory overcommit and integrating ZFS on Linux, at a minimum, there is no correctness of operation. As there is no fault management architecture (like fmadm), and support for debugging the kernel and binaries is incomplete (no mdb, no kernel debug mode, no kdb, incomplete DWARF support), there really can be no assertion about correctness of operation.
Correctness of operation does not only refer to end-to-end data integrity, but also to adequate capability to diagnose and inspect system state, in addition to being able to deliver correct output in face of severe software or hardware failures. Linux is out as far as all of those.
In other words, if you want JustWorks(SM), rock solid substrate for (web)applications, anybody not running on FreeBSD, OpenBSD, or some illumos derivative like SmartOS is out, at least for me. Perhaps your deployment is different, but I don't want to have to wake up in the middle of the night. I want for the OS to continue working correctly even if hardware underneath is busted, so I can deal with it on my own schedule, and not that of managers'.
As a reality check, we're talking simple (and even not-so-simple) web site/app hosting options here (and not some NASA/space/military/healthcare grade requirements).
From my perspective (as a freelance web tech/dev professional who routinely manages close to two dozen hosting accounts for clients), what you're saying above comes very very close to driving a nail with a sledgehammer.
That would only hold true if your or my time were worthless, or had a very low valuation.
Hosting clientele is notoriously high maintenance; the more technology ignorant, the higher the maintenance in terms of support, and the more fallout one has to deal with when there is downtime.
My time is expensive. My free time is exorbitantly expensive. Therefore, when I pick a solution and decide to deploy on it, it has to be as much "fire-and-forget" as is possible. Picking a bulletproof substrate to offer my services on also increases the time available to provide higher quality service to my clients: since my time dealing with basic infrastructure is reduced as much as possible, I have more of it to spend on providing better service and adding value, thereby increasing the client retention rate. Because of the economy involved in this, and especially considering how razor thin hosting margins are, I feel that the nail with a sledgehammer metaphor is inapplicable to this scenario.
Edit: that's obviously a non-issue in this particular case since everything is static. But as a best practice this needs to be considered and inline CSS doesn't make you a boss.
Yes, of course that's correct. And I wouldn't have felt that pointing out that it can be is necessary if he hadn't written that he is inlining it "like a boss". That makes it sound like it's some awesome best practices we all should be adhering to.
So if I'm reading that page correctly, the basic gist of the claim is:
If we only do a half-assed job of sanitising user input by attempting to blacklist whatever JavaScript we can think of, we'll still be open to XSS attacks from people smarter than us who put CSS into our user-supplied data; so the answer is to prohibit inline CSS, not to properly sanitise user-supplied data.
I think there are better pieces of security advice around than that...
I still don't understand what it accomplishes. Why does inline matter?
The vulnerability means they can inject arbitrary markup including <link> or <script> that load offsite sources.
You can use CSP to whitelist allowed offsite domains. But if you're not careful, "you never know" and "you might as well" are more likely to waste your time chasing low value things.
For instance, inline CSS is valuable as an intermittent developer convenience, and disabling it takes that away while protecting you from an unlikely event.
Also, you generally should be escaping-by-default and not sanitizing. A templating system should escape by default and make it obvious when you opt out.
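For what escaping-by-default looks like in practice, here's a minimal standard-library sketch (real template engines like Jinja2 autoescape for you; the function and field names are just illustrative):

    import html

    def render_comment(author: str, body: str) -> str:
        # Escape every value on the way into the markup; opting out should be
        # a deliberate, visible act, not the default.
        return (
            "<div class='comment'>"
            f"<b>{html.escape(author)}</b>: {html.escape(body)}"
            "</div>"
        )

    # '<script>' and friends arrive as inert text, not markup.
    print(render_comment("mallory", "<script>alert(1)</script>"))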
> I still don't understand what it accomplishes. Why does inline matter?
In the worst-case scenario, where the server doesn't have all the blocks of the file cached in random access memory, the server can still fetch the single, inlined file from local storage with fewer I/O operations, far faster than one's web browser can fetch multiple files over the network. This means that latency is lowered, and thus delivery is faster.
Just to point out, there's no particular reason to host a page like this on a VPS at all. You could just throw it on S3. Even better, you could put it behind a CDN like Cloudfront and the total cost would be a dollar or two a month, not $25+ and it would be significantly faster.
> You could just throw it on S3. Even better, you could put it behind a CDN like Cloudfront and the total cost would be a dollar or two a month, not $25+ and it would be significantly faster.
I apologize for quibbling (really, I do! but I'm an infrastructure guy! This is my bag!). Yes, host it on S3, but ALWAYS put a CDN in front of S3 with long cache times (even just Cloudfront works). S3 can sporadically take hundreds of milliseconds to complete a request, and you know, AWS bandwidth is expensive (and CDN invalidation is damn near free). And you can use your own SSL cert at the CDN usually instead of relying on AWS' "s3.amazonaws.com" SSL cert (although you will still rely on that S3 SSL cert for CDN->S3 Origin connections; C'est la vie).
EDIT: It also appears Cloudfront supports HTTP/2 as of today. Hurray!
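For the record, here's roughly what the S3 side of "long cache times" can look like at upload time with boto3; the bucket name is made up, and CloudFront is assumed to already be configured with the bucket as its origin:

    import boto3

    s3 = boto3.client("s3")

    # Upload the page with a long max-age so CloudFront (and browsers) can
    # cache it at the edge instead of hitting S3 on every request.
    s3.put_object(
        Bucket="my-static-site-bucket",          # hypothetical bucket name
        Key="index.html",
        Body=open("index.html", "rb").read(),
        ContentType="text/html; charset=utf-8",
        CacheControl="public, max-age=86400",
    )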
Don't you have to provide a CC to sign up for even the free tier? (That's how it was when I was trying it out a couple of years back.) It was really cute too, Google would send me a $0.00 invoice each month.
IIRC, the CDN isn't charged separately, and the object store is available (within a certain quota) within the GAE free tier, so, within certain limits, yes, "for free".
No, thanks for correcting me. The names are similar enough and the purpose is too, so it's easy to confuse. I mean, if you hadn't commented, I would have wondered why it got downvoted.
How long S3 takes to fulfil a request does not affect bandwidth.
I personally went the other way. I still use CloudFront as a CDN but made it cache items for short periods of time. Invalidation was too much of a hassle, and it took too long. Admittedly, I should use hashes or something of the sort to keep my items versioned, but laziness always gets in the way.
> How long S3 takes to fulfil a request does not affect bandwidth.
Correct. Did I insinuate that? I apologize if I did. They are two distinct issues, both of which a CDN prevents.
1. S3 outbound bandwidth is expensive. Use it only as an object store of last resort. Your CDN bandwidth is orders of magnitude cheaper (don't believe me, go compare the pricing).
2. S3 response times can vary wildly at times. Use a CDN to avoid this.
And of course feel free to use a cache key instead of invalidating via an API if ~15 minutes is too long to wait for fresh content to appear at the edges.
PS Don't apologize for laziness. When directed appropriately, it's a most productive force.
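The cache-key trick is a few lines of scripting, something like this sketch (the paths are placeholders); the hash goes into the file name, so a changed file gets a new URL and the stale cached copy simply stops being referenced:

    import hashlib
    from pathlib import Path

    def fingerprint(path: str) -> str:
        """Copy the asset to a content-hashed name, e.g. style.css -> style.3f2a9c1b.css."""
        src = Path(path)
        digest = hashlib.sha256(src.read_bytes()).hexdigest()[:8]
        dst = src.with_name(f"{src.stem}.{digest}{src.suffix}")
        dst.write_bytes(src.read_bytes())
        return dst.name

    # Reference the returned name in your HTML; no CDN invalidation needed.
    print(fingerprint("style.css"))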
My mistake! I assumed you meant the time it takes to fulfil a request and the bandwidth cost were related because they were mentioned in the same sentence.
Yeah, I noticed that was weird too. The creator talks a lot about HTML optimizations, but one of the most widely used methods of increasing page speed is global CDN distribution.
The SSD is really meaningless in this context. The website is so small that it will be loaded almost 100% from the filesystem cache, as long as the server has more than 512 MB of RAM...
If I wanted my website to load incredibly fast, I would absolutely not put it on an obscure VPS. Not that there's anything inherently wrong with it, but it's generally not going to make your site faster.
Meaningless-weaningless, but that site loaded instantly on my iPhone 4 (without the S) and did not lag, unlike all the others (except HN, of course). No CDN can hide modern JS freezotrons.
Regardless of where on this globe you put your VPS, someone will be accessing it at 1000 ms latency. It doesn't make sense to optimize the browser page load speed to 10 ms, and forget that it takes 600 ms to fetch the data from Asia.
That's because all those other sites are poorly built. It's not because the article's site is a brilliant example of "doing it right".
Putting bare text on the web is always going to be fast. So what. If he presented a real full-featured website with the bells and whistles that people expect today, and made it operate that fast, he'd have something to show. Instead he presents polished garbage.
Wait - what precisely do users demand from your website today? Usually I'm happy to find a website which loads quickly, is clean, and steers me in the direction of whatever I'm trying to find, personally.
My expertise is not marketing so I don't feel I could adequately answer that question but there are plenty of focus group studies which show what sort of UX works best. It's a safe bet that most of the big corporations who are already focus-grouping everything they publish, such as Disney for example, are also using focus groups to design their websites.
They're using split testing, conversion rate optimisation, and bizdev to design their sites. When something appears on a corporate website, it's there to benefit someone in the corporation, not the users (although it might benefit them as a side effect).
His site is at least a full-featured article (you can load, scroll and read it, yay). Most sites I open are article sites, and they are rarely full-featured articles, because the load and scroll features aren't easily accessible.
I guess everyone's needs are different, but for me, hardly anyone reads anything I write. If I have a sudden surge in interest in something I wrote, last thing I want is to cut off access to it. Would rather keep paying the infinitesimal amount per page view to keep people reading it.
You can set up jobs that fire when CloudWatch alarms trigger, noting that your bill is going up or that your hit rate for certain objects is going way up. I think there's a way to set up billing such that you can't exceed a certain amount in a month, but it's a weird situation to try to hard-stop charges without deleting everything in your account.
The clever internet marketer who set up the page missed a trick!
Instead of the affiliate link to some host no one has heard of, he should have affiliate-linked to AWS (if possible) and a CDN. Then he could have added that as a strong feature that helps make the page so fast.
Has anyone dealt with a DDoS attack on a static hosting (S3 + CloudFront) setup?
I sometimes fear that if something like this happens, the bandwidth bill will be too much to handle for small personal projects. It's also a pain that AWS doesn't allow one to set hard limits on cloud spending. Yes, they let you set up some billing alarms, but no hard limits; there's no guarantee that, no matter what, the month's hosting bill will not exceed $10 for this project.
For small personal projects a tiny VPS seems to be safer from this angle. At max a DDoS will cripple the VPS but the hosting bill will stay the same.
If you have been through this, did you get any discounts from AWS for the resources used during the DDoS attack, or did you have to pay the full amount?
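Not an answer to the DDoS question, but for the alarm half of it, here's a sketch with boto3; billing metrics only live in us-east-1, and the SNS topic ARN below is a placeholder:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Alarm when the month-to-date estimated bill crosses $10.
    # This only notifies; AWS has no built-in hard spending cap.
    cloudwatch.put_metric_alarm(
        AlarmName="monthly-bill-over-10-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,                # 6 hours; billing metrics update slowly
        EvaluationPeriods=1,
        Threshold=10.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder ARN
    )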
> "I am not on a shared host, I am hosted on a VPS"
Hate to break it to you, but your virtual private server (VPS) is likely sharing a bare-metal server with other VPS. ;-)
Also, you can look into content delivery networks (aka CDNs), which will most likely deliver this page to clients faster than your VPS, especially when you consider that your VPS is in Dallas and CDNs have nodes located around the world.
I think he is contrasting VPS service with shared hosting services like Go Daddy or one of the many cPanel providers. Unless a VPS is using OpenVZ, the box isn't overprovisioned. A cPanel host is usually very overprovisioned; 500 customers per server was not uncommon 10 years ago.
I think the pedantry is unnecessary here. "Shared hosting" colloquially refers to multiple websites sharing a single web server, database, and PHP process. Everything is set up for you by the provider, you simply supply the files. What "shared hosting" does NOT usually refer to are containers, virtual machines, bare-metal, IaaS deployment environments, or anything like that.
Hosting on a single VPS is never gonna be very fast globally, no matter what you pay your host. In fact, our free plan on Netlify would make this a whole lot faster...
It's still pretty fast all over the world, because that total time is all you need. For most sites, those three seconds are just the start, followed by several more seconds of downloading CSS, JavaScript, images, analytics, widgets, and whatnot.
It is. Linode's Atlanta data center has been getting DDoS'd on and off since Sunday. This site isn't hosted on Linode, but could there perhaps be congestion in Atlanta from that attack causing general slowness?
OP has certainly nailed Hacker News psychology. My old coworker called the technique "inferiority porn." Titles like "the secretly terrible developer" or the closing statement of this particular article: "Go away from me, I am too far beyond your ability to comprehend."
As many people have pointed out there are faster methods of static hosting through a CDN, and many of the techniques of this site are inapplicable for larger sites. But A+ on the marketing.
IMHO there is mainly one way to get attention - provoking a (strong and instant) emotion in the user. You can give good emotions or bad emotions.
Personally, I think creating a good emotion takes much more effort than creating a bad one. The website/product can say how great it is, but it will not 'click' as instantly as someone telling me I am a dumb baby and I suck [0][1], or that I am not superior, just a mere mortal baboon [2], which sends most people into an instant rage and flame wars in whatever comment section, because there "is no such thing as bad PR".
The most popular writers/bloggers in my country have created these arrogant dipshit characters (I tend to believe that they are "normal" people, but they clearly know what sells) who always say that they are richer, smarter and better than you. They create stories about a "cheap restaurant breakfast for 60€" and so on, though the most interesting thing is that people buy their shit and then rage on whatever websites about how the writer dared to call them a dumbass homeless bum.
I probably sound like I'm tooting my own horn but it definitely felt really contrived to me. I upvoted it because of the comments containing better tips or caveats provided for the good ones.
I'd say that the general idea of watching out for external and/or bloated resources is absolutely applicable for larger sites. Media sites are particularly egregious: not only does the js take the lion's share of what's transferred, rendering of the content I'm interested in typically blocks until everything is downloaded and processed.
I have been making my personal pages fast this way since the last century. Probably a huge number of people did the same. It's pretty obvious.
When you need fancy graphics (a static photo album), things become less easy: you e.g. may want to preload prev / next images in your album to make navigation feel fast.
Things become really tricky when you want interactivity, and in many cases users just expect interactivity from a certain page. But client-side JS is a whole other kettle of fish.
Things become ugly when you want to extract some money from the page's popularity. You need to add trackers for statistics, ad networks' code to display the ads, and complicate the layout to make room for the ads, placing them somehow unobtrusively but prominently. This is going to be slow at worst, resource-hungry at best.
(Corollary from the above: subscription is more battery-friendly than an ad-infested freebie.)
I have two copies of a book. One is a hard cover, one is a paper back. The paper back is a few centimetres wider than the hard cover, and it makes that copy annoying to read.
There's a certain length that a line can be without becoming confusing or annoying to read. The reader mode in most browsers understands this, but for some weird reason reader mode isn't available for http://motherfuckingwebsite.com/, at least in Firefox.
Took me almost 30 seconds to load, maybe because the server is being hammered by HN traffic right now? Also like others here were saying, using a CDN would definitely help with the initial latency.
I think this is the ironic lesson: for many sites, optimizing for consistent performance (i.e. CDN, geographic caching) is a more important objective than prematurely optimizing for a subset of users.
Example:
Business A - average render time 0.3s, but under load 5-10s
Business B - average render time 0.8s, but under load 1-2s.
Subjectively, a ~10s response time is the point at which I would close the tab and look for another business if I were trying to shop online, do anything involving a credit card, etc.
Is this image inlining thing something new? Am I reading it correctly that the images are encoded in base64 and delivered as html? Surely this is a bad idea... no?
No, it's been around since forever. Just not used terribly often.
> Am I reading it correctly that the images are encoded in base64 and delivered as html? Surely this is a bad idea... no?
It depends. Making a new request to fetch the image always has overhead. Whether that overhead is bigger or smaller than the overhead of base64-encoding the image depends on the following (see the sketch after this list for a quick way to measure the first two):
• file size (naturally)
• file compressibility: The difference isn't as pronounced after gzipping everything, especially if the source data is somewhat compressible
• protocol: http2 allows a correctly configured server to push attached data with the original request, so no second request is needed. Even without server push, http2's multiplexing will reduce the overhead drastically compared to plain HTTP1.1 or the worst case, HTTPS1.1 to a different domain. The latter requires a full TLS handshake, and that's what, >30kb data exchanged if you have more than one CA certificate in the chain? That's a lot of image data.
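If you want actual numbers for your own files rather than rules of thumb, a quick check of the first two factors (the file name is a placeholder):

    import base64
    import gzip
    from pathlib import Path

    raw = Path("logo.png").read_bytes()          # any image you're considering inlining
    b64 = base64.b64encode(raw)

    print("raw bytes:       ", len(raw))
    print("base64 bytes:    ", len(b64))                 # ~4/3 of raw
    print("raw, gzipped:    ", len(gzip.compress(raw)))  # PNGs barely shrink further
    print("base64, gzipped: ", len(gzip.compress(b64)))  # gzip claws back part of the overhead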
You forgot the most important factor: Whether you're reusing that image on a different page. Embedding images in the HTML is basically saving an HTTP request at the expense of not being able to cache the image separately from the HTML.
This seems like it should work, but have you ever tried it? Or, can you point me to some results of a test to show that it indeed caches the image embedded in the CSS?
The problem is that now it's going to be sent with every request. So it'll make the first page faster for the initial request, but slower in the long run.
Depends. If you consider that in each request a big chunk of time is spent on opening the connection, and that you can only start opening the connection to download the linked picture after you have received the response from the first hit, then maybe it's not a bad idea. It's one round trip's worth of time that you shave off the total loading time.
However, if the image is very large, it will make the initial request large as well. I would only use this for images that are small and above the fold.
The biggest problem with inlining images (IMO bigger than the base64 size increase) is that when you change something (like a word in the text of the page), you force your users into a full reload of the page, images included. Most of these performance tips assume things won't change.
I think it depends on how much of an image we're talking about.
If it's small, the overhead from base64'ing it (if the page is gzipped) is lower than the overhead of opening a new HTTP connection just to retrieve that one image.
There is an increase in size to base64 encode, in addition to what I assume is the lost capability to cache images, as well as load them intelligently.
Other reasons to embed images using base64 are to have pages work standalone, to reduce complexity (no need to keep pages and associated resources in sync) and increase locality (things are defined where they're used).
These probably aren't particularly important for most sites, but it's something I do on my personal site ( chriswarbo.net ) since I care more about ease of maintenance than load times.
Base64 encoding induces a 4/3 (roughly 1.333x) size increase of the byte stream, so it's likely not worth it if the site is served over HTTP/2. To get exact figures, one would of course have to calculate the size of the additional TCP packet(s) and HTTP headers.
I think it depends on the image size and use case. For small images where the round trip time of an extra request would make a bigger impact than the file size, inlining them might make sense. Especially on mobile where latency tends to be higher.
If you're done in one HTTP round trip, your HPACK state has to push brand-new headers, you benefit nothing from server-side push (if it's even enabled), there's nothing to pipeline, so you don't benefit from multiplexing, and head-of-line blocking is a non-issue.
I want to benchmark this, because intuitively I disagree.
The HPACK spec is a pretty easy read [1]. There is a static, hardcoded table that contains most of the HTTP header names, and even some common predefined KV pairs. You save some bytes on the wire if your header's name or value is one of these entries; the header name will essentially always be in the static table.
But for names and values that aren't in the static table, you have to put them into the dynamic table and encode them using either the integer packing or the huffman code. The client has to decompress these, of course.
On future requests, you have some leftover state in your dynamic table, so future 'duplicate' headers are packed and take up very little space. But for the first (ever) HTTP request-response pair, you have to transmit ALL the headers in "full". So the true benefits of the dynamic table don't kick in.
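You can watch that happen with the third-party hpack package (assumed installed via pip install hpack); the header values below are made up:

    from hpack import Encoder  # third-party: pip install hpack

    headers = [
        (":method", "GET"),
        (":path", "/pagespeed/wicked-fast.html"),
        (":authority", "varvy.com"),
        ("user-agent", "Mozilla/5.0 (X11; Linux x86_64) ..."),
        ("cookie", "session=abc123; theme=dark"),          # made-up values
    ]

    encoder = Encoder()
    first = encoder.encode(headers)    # names/values enter the dynamic table here
    second = encoder.encode(headers)   # mostly indexed references into that table

    print("first request: ", len(first), "bytes")
    print("second request:", len(second), "bytes")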
Cool... Unfortunately in practice it's easy to find a list of best practices, much harder to implement in a scalable and durable manner on any project of sufficient size, especially if working with a legacy codebase.
> "My images are inlined into the HTML using the base64 image tool, so there is no need for the browser to go looking for some image linked to as an external file."
This does not work in most cases when you use big images.
From StackOverflow answer [1]: "It's only useful for very tiny images. Base64 encoded files are larger than the original. The advantage lies in not having to open another connection and make a HTTP request to the server for the image. This benefit is lost very quickly so there's only an advantage for large numbers of very tiny individual images. "
If you're using Photoshop, you can create a PSD that sources other PSDs and, if I remember right, create an action that generates the exported image, so you could automate things quite a bit, if not entirely.
Furthermore the image used is one that compresses uncommonly well with PNG (small palette, large chunks of solid color). I think the vast majority of 350x400 images would be at least 10x larger unless they're deliberately composed in a similar style or are JPEGs with the quality turned way down.
I tried to create an SVG version to see how an SVGZ would compare, but evidently I'm too crap at Inkscape and kept screwing it up.
A VPS is shared hosting to me, it's just an instance on a shared system. Shared hosting used to mean a folder on a shared web server but I consider sharing resources in a hypervisor equally shared. ;)
If they truly wanted speed through control of resources they would have used bare metal.
But yeah, the website is easy to optimize when it's simple, the hard part, often outside of your control, is DNS and actual connection handling. Many have already mentioned CDN so there's that.
But you also don't know what kind of firewalls are being used, or switches, or whatever else may impact your site. Why not just do what others have suggested and put it all in the cloud so that Amazon can worry about balancing your load.
I don't really think the PageSpeed score accurately reflects page loading speed (maybe initial page loading speed). It seems not to care about lazy-loaded resources, as one of my JS-heavy webapps (around 200KB) actually scores higher than this one: https://developers.google.com/speed/pagespeed/insights/?url=.... Funnily enough, the screenshot on the test only shows the loading spinner.
Ok, I'll bite, as this is near and dear to my heart. Instead of showing me a fast webpage with minimal content, tell me how to make my tons of CSS and JS load fast! That's a real problem. I deliver web apps, and interactivity is a must.
IMO, the real problem with the web is the horrendous design choices and delivery of very popular news and daily reading sites (ahem cnn) where subsequent loads of ads and videos start shifting the page up and down even when you have started reading something. Let's address that problem first!
For speed optimization it's really important to always fine-tune for your particular use case and apply some common sense. For instance, inlining everything as suggested here is faster only if you expect visitors to open just that one page and bounce away, so browser caching is not helpful. Consequently, it's a very good tip for e.g. landing pages, but it makes no sense at all to serve pages that way to your logged-in users.
HTTP/2 with server push will eliminate the inlining hacks, and automatically compress content.
But the other points remain: No Javascript is still the fastest Javascript framework, and while you can do lots of crazy hacks with CSS, maybe you shouldn't.
I think it feels fast because it loads at once, but I'm actually not getting very impressive results programmatically if you measure how long the entire TCP transaction takes (which is what I consider page loading):
# Both DNS records are cached before request
>>> print requests.get('https://varvy.com/pagespeed/wicked-fast.html').elapsed.microseconds
226515
>>> print requests.get('http://www.google.com').elapsed.microseconds
92027
Even google.com (92 ms) is about 250% faster than OP (226 ms) to establish connection, read all of the data, and close.
Okay, okay, it "matters". But it's nothing compared with the 3s to load all the JS and CSS and the subsequent sluggishness as 20 analytics scripts are loaded and processed.
Honestly? I am surprised to see this page voted so highly on the front page. If you really wanted a fast "static" page, you would put it on a CDN. All you wanted to do was put a marketing link in your last paragraph.
Your page can be very fast, use minimal resources and be hosted in a good place, but you always have to watch out for proximity to the user, time to first byte and DNS resolution time. Perceived speed is highly affected by those.
It took 2 seconds to load the page on a fresh ec2 box:
You can do much better!
What about html-muncher for CSS class minification?
Those PNGs are not fully optimized, and an SVG would probably be even smaller; even if it isn't in the case of the orange one, it could have been compressed much better.
Making use of data: urls might look good on first visit but honestly with HTTP/2 just push in the resources and externalize them.
Because seriously, caching for only 300 seconds? How about offline support anyway? It's 2016.
Furthermore where's my beloved Brotli support?
By the way, what about WebP support? OK, TBH, if the PNG were properly optimized, WebP would actually not beat its file size, but hey: "It isn't."
So even though it's only this tiny static page there's still so much wrong with it. Please improve!
By the way, what about QUIC?
Easy to make a website fast when it has nothing on it. In the real world a site isn't this light. It has images, analytics scripts, stylesheets, fonts, JavaScript (jQuery at the least). Using a combination of a CDN and realistic caching, I can make a fast website as well.
Many real world sites have strategic/marketing partners who ask to add analytics scripts so that they can capture metrics for their partnerships with you/your real world site... And if your site has been optimized, but their servers (which serve up these 3rd-party/analytics scripts) aren't as optimized, guess where the slowdown comes from? And senior leaders don't always force these partners to conform to internal performance standards. So, yes, unfortunately real world sites DO have - or at least are forced to have - stuff like that.
Simple text, a few links to 'tips', a little bit of base64 images, without any deeper knowledge. For example, there was a website which showed the impact of base64 images just a few weeks ago (if I remember correctly).
This is odd. Clearly anyone can make a lightning-fast page by making a single page, since then you can have the CSS inlined instead of needing to link to stylesheets shared across multiple pages, and of course not having JavaScript makes it faster, but that's a requirement for most typical sites these days; and loading images that way is nice for hackers, but not for real people using CMSes built for common people and clients. Also, paying $25-35 for hosting is not very bright when you can get a $5 Digital Ocean SSD server, not shared, that would load this particular page just as fast if not faster.
His affiliate link for VPS service has its cheapest option priced at $25 a month. You can get a nice little VPS for static hosting on SSD from digital ocean for $5 a month. $6 a month with backup.
Time4VPS offers you 2 cores (compared to one on DO), 80 GB SSD (compared to 20 on DO), 2 TB bandwidth (compared to one on DO), and 2 GB of RAM (compared to 512 MB on DO) for 3 euros (3.36 dollars). 1 additional euro for daily and weekly backups.
Started renting one just two days ago, so I can't really guarantee that it's reliable, but it was recommended to me by a friend who's been renting one for over 100 days now without any downtime.
What arrogance, the page is done with me? I'm not done with the page yet.
I can get the same page much faster by putting the PNG in an inline SVG, stripping the source of unnecessary whitespace and returns, and serving Brotli (or SDCH-compressed pages) to Firefox, Chrome and Opera dynamically... or even just doing the decompression inline with JavaScript. Might save another 20%: https://github.com/cscott/compressjs
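If you want to check the Brotli claim against a given page, here's a quick comparison using zlib from the standard library and the third-party brotli bindings (assumed installed via pip install brotli; the file name is a placeholder):

    import zlib

    import brotli  # third-party: pip install brotli

    html = open("index.html", "rb").read()   # whatever page you want to test

    gz = zlib.compress(html, 9)
    br = brotli.compress(html)               # bindings default to the highest quality, 11

    print("plain:  ", len(html))
    print("gzip -9:", len(gz))
    print("brotli: ", len(br))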
I can see in the source code that you're expressing all dimensions in terms of ems and %s. A technology such as Bootstrap will always be the way to go; however, could you tell us a little bit more about how you did this? How did you ensure that it looks good not only on your screen but on any screen?
I know people are saying it has some errors on certain mobile devices, but that's still a pretty good job of manipulating CSS properties.
Bootstrap was just an example. It could be Bootstrap, Materialize, W3.CSS, etc.
The point is that it's much more convenient to reuse code from a framework, because it's better to sacrifice file size for fast iteration and functionality.
There is always a tipping point. We tend to use Bootstrap a lot at work for the reasons you mention, but you can pretty quickly get to the point where your CSS is complex enough that you would have been better off doing it from scratch. All frameworks are like that -- you trade off initial convenience for design constraints.
When I'm doing my own projects I always write my CSS by hand because it ends up less complex in the end. I don't need to see pretty things up front like my corporate customers do.
The whole hosting issue seems to open a can of worms, at least if this comment stream is any indication. I think it probably would have been better if they had said something more along the lines of, 'Choose (and likely expect to pay for) some sort of superior hosting solution which will prioritize allocating resources to your site(s).'
The general point could be made without leaving so much room for everyone to argue over specifics.
For instance, delivering one giant JS/CSS file is now bad because it is harder to cache; since HTTP/2 removes the overhead of multiple requests, there is no downside to many files.
Around the same for me, running fibre in New Zealand. Long delay before content even began loading - as mentioned in other comments, would likely have been a non-issue if a decent CDN was used.
The best "Shift+Reload" refresh I've managed to get out of this page from where I'm sitting, in Firefox 48.0.x, according to its Network Console, is around 360 ms. It doesn't beat this HN discussion page by a whole lot, and this has actual content, which is dynamic.
If I'm doing a single page application, surely I'll have infrastructure in place already to compile, minify and do whatever I need to. So I could just serve the monolithic page and be done with it. Much like desktop applications used to do.
I've always wanted to play with putting /var/www on a ramdisk for PHP/HTML stuff. It would be much faster to load since it's all just text at the end of the day, and it would completely cut out the bottleneck of the SSD/HDD.
If you own the hardware I imagine much of your PHP/html stuff will be served from the file system cache much of the time so you probably wouldn't see much benefit...
I don't think it would do much there either: If it fits in your left-over RAM, then it's probably in the disk cache. If it doesn't, then you can't create a RAM disk large enough.
It might help with latency for the long tail of data that isn't used very often and thus may be replaced in the cache by other data, but on the other hand the OS probably had a reason to replace it, and forcing it to stay in RAM might slow everything down.
You'll likely have better luck configuring php-fpm and the OPcache properly, as they are going to be doing basically what you're proposing anyway, and for the OPcache it's even nicer as it avoids reparsing the code.
I'm curious - would this page see any speed improvement with HTTP2? I ask because the new protocol seems optimized for the exact opposite of this - many asynchronous fetches.
Was thinking exactly this: keep it loaded in memory for the duration of the server's lifetime. I'm not too familiar with HTTP/2, but could you cache the compressed packet and reuse it, with minor modifications to the headers when needed, to speed up the communication?
Preformatted payload can be a big win for page speed, especially if your payload cannot vary based on request headers, or has only a few variants.
A special case of preformatted response used to be baked into microsoft IIS. If you connected to an address that could only redirect to another address, IIS wouldn't even wait for the request, it would just send the 302 response and hang up. This, it turns out, was not really compatible with Mozilla at the time, and may have violated some RFCs, but I kind of liked it as a hack.
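A toy sketch of the "preformatted payload" idea using only the standard library: compress the page once at startup, keep the bytes in memory, and hand every client the same pre-gzipped response (index.html is a placeholder):

    import gzip
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Compress once at startup; every response reuses these bytes.
    with open("index.html", "rb") as f:
        BODY_GZ = gzip.compress(f.read(), compresslevel=9)

    class PrecompressedHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Real code should check Accept-Encoding and keep a plain copy too.
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Encoding", "gzip")
            self.send_header("Content-Length", str(len(BODY_GZ)))
            self.end_headers()
            self.wfile.write(BODY_GZ)

    if __name__ == "__main__":
        HTTPServer(("", 8000), PrecompressedHandler).serve_forever()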
That one file, but presumably it's loading assets that would be the same total size if they were split. Loading 100k from one source is faster than the same aggregate size from multiple connections.
Except if you only need the first 50k to render the page, and can wait for 50k of Javascript to come later, your page is going to display a lot faster. Standard technique.
> Loading 100k from one source is faster than the same aggregate size from multiple connections.
Usually doing things in parallel is faster than doing them serially. That's why HTTP/2 loads slower than HTTP/1 - you are sucking everything through a single TCP pipe, even though it is multiplexed within.