Netlify is one of my new favorite apps. It does so many simple things really well, and the roadmap of features is pretty dope. Best static-site toolchain service I've seen.
Big fan of Netlify and use it for several sites. They have had some issues with DDoS attacks lately (and I've seen outages), but the blips seem to be under control.
The other one to consider is Google's Firebase Hosting. Really excellent performance.
1000x this. Netlify is one of those tools that I really have trouble remembering life without, especially for static sites. The first time I used it, I distinctly remember saying, “oh wow,” aloud. And I still find myself saying that the more I get to know their offerings.
AWS and the like make a killing from the basic fact that most web devs don't know anything about the real costs of bandwidth (a blind spot, if you will).
HE.net is currently advertising 10-gigabit transit for $1300/mo. Obviously there are various scaling issues associated with this, but the basic premise still holds: AWS and other cloud services need major pressure to push their bandwidth prices down.
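To put rough numbers on that gap (everything here is an assumption: a fully saturated port, and a ballpark $0.085/GB low-tier AWS egress price, so treat it as back-of-envelope only):

```javascript
// Back-of-envelope comparison; prices are assumptions, not quotes.
const gbps = 10;                                  // HE.net port size
const secondsPerMonth = 30 * 24 * 3600;           // 2,592,000 s
const gbPerMonth = (gbps / 8) * secondsPerMonth;  // GB moved if saturated
const awsEgressPerGb = 0.085;                     // assumed low-tier $/GB
console.log(gbPerMonth, Math.round(gbPerMonth * awsEgressPerGb));
// ~3.24M GB, i.e. roughly $275k/mo at per-GB pricing vs the $1300 flat port
```

Real utilization is of course nowhere near saturation, and AWS tiers the price down at volume, but even with generous discounts the same fully used pipe priced per-GB comes out orders of magnitude above the flat-rate port.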
This is less about the cost of bandwidth and more about the cost of having edge POPs around the world.
With CloudFront (or any other edge-cache system), you get TCP termination closer to end users, so the site loads faster, particularly if you have users around the world. Dropbox case study: https://blogs.dropbox.com/tech/2017/06/evolution-of-dropboxs...
Edge-cache bandwidth costs quite a bit more than the not-quite-bottom-of-the-barrel IP transit that HE.net is selling.
I have set up some sites the same way. The only ugly part for me was that I wanted to use the features of the S3 web-hosting interface but restrict all traffic to come through CloudFront. This blog post (not mine) describes the problem and the approach: https://abridge2devnull.com/posts/2018/01/restricting-access...
There's actually a way to do this with an OAI (origin access identity); you just have to configure the HTML5 routing stuff using CloudFront distribution rules instead of configuring the S3 bucket as a static website. As a nice side effect, you can also enforce HTTPS all the way through the S3 <> CloudFront <> world chain, which isn't possible when the S3 bucket is configured for static-site hosting, since the S3 website endpoint only speaks HTTP.
I have about 75% of a blog post written about how to do this; it's not terribly complicated. This comment just gave me the motivation to finish it. I'll post it on HN when it's ready.
Yeah, I tried that (or something similar) first. IIRC the S3 website setup was handling the index-document and error-document stuff, but fetching through the S3 API via CloudFront wasn't, so that's how I ended up where I ended up.
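For anyone hitting the same wall: the usual pattern nowadays is a small viewer-request function on the CloudFront side that does the index-document rewriting itself, so you can keep the bucket private behind the OAI. A sketch of that pattern (not necessarily the parent's exact setup):

```javascript
// Viewer-request CloudFront Function: rewrite directory-style URIs to
// index.html, since the S3 REST endpoint has no index-document support.
function handler(event) {
    var request = event.request;
    var uri = request.uri;
    if (uri.endsWith('/')) {
        // "/blog/" -> "/blog/index.html"
        request.uri += 'index.html';
    } else if (!uri.includes('.')) {
        // "/about" (no file extension) -> "/about/index.html"
        request.uri += '/index.html';
    }
    return request;
}
```

The error-document half can be covered by the distribution's custom error responses, e.g. mapping the 403 that S3 returns for missing keys to a /404.html page.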
Just wondering, what's the motivation to restrict access? Is it to keep costs down in case someone decides to run up your bill with a flurry of requests?
Just so the CloudFront logs and monitoring are complete. It won't affect the hosting cost as far as I know, I just use billing alerts on the overall account to keep track of that.
Right now, I am hosting my personal site and blog on S3 + Cloudflare for 6 cents / month. I use Middleman, because I feel that it offers more flexibility than Hugo.
Cost comparisons aside, the massive speed improvement makes me think he didn't have caching configured correctly with Cloudflare in front of his PHP backend (thus defeating the entire point of the CDN aspect).
I'm guessing a couple of Cache-Control headers would have provided similar latency improvements.
I'm fairly sure that I had some speed optimizations set within my Nginx server block. Unfortunately, I don't have the Nginx config file to hand anymore as I've (somewhat stupidly) deleted the snapshots without taking a backup.
I'm no Cloudflare or CloudFront expert, but these should be apples-to-apples comparisons. Seeing that huge disparity in latency makes me think it's most likely a configuration issue, as every comparison I've seen claims they perform roughly the same: https://blog.latency.at/2017-09-06-cdn-comparison/
Specifically, though, we're talking about how long Cloudflare caches your content on their proxy/edge servers. Browser caching is irrelevant to this discussion, as speed tests of this nature should always be performed on a clean request.
I can't claim I am either, unfortunately. I required assistance ensuring the certs were in place and my configuration was correct when doing the migration.
And just to clarify, based on some of the articles I've read, I actually think CloudFlare may be a better choice of CDN, this post just highlights one way of achieving a global-scale website and doing it in such a way that it's resilient and cheap as chips!
There is a substantial difference in SSL handshake speed and site performance between a single central server and multi-edge storage sitting behind Cloudflare: the closer your edges are to Cloudflare's edges when the handshake happens, the faster things go. The handshake delay is particularly bad internationally with Heroku running under CF, since you can't predict which Heroku IP you are fetching from; you can pin to one point, but then latencies are still long at faraway locations.
That setting is what Cloudflare passes on down to your visitors, not how long it caches itself.
I don't know Cloudflare's internals personally, but assuming they follow the spec, you need to set "Cache-Control: max-age=300" or "Cache-Control: s-maxage=300" to tell Cloudflare to cache a given response for 5 minutes (s-maxage applies only to shared caches like CDNs and overrides max-age for them).
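To make the distinction concrete, here's what that looks like at the origin, sketched as an nginx snippet (an assumed config, since nginx came up elsewhere in the thread; the values are illustrative):

```nginx
# Shared caches (Cloudflare, CloudFront, ...) may keep responses for
# 10 minutes; browsers revalidate after 60 seconds. Per RFC 7234,
# s-maxage overrides max-age for shared caches only.
location / {
    add_header Cache-Control "public, max-age=60, s-maxage=600";
}
```

With only max-age set, the CDN and the browser get the same lifetime; splitting them like this lets you keep browser caches short while still offloading most traffic to the edge.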
Nowadays, if you really don't need a server, don't spin one up. In this case, the author does not need a server.
If you think you might need to do some backend processing, see if you can use 'serverless' functions and offload the admin work to someone else.
If that doesn't suffice, maybe something like App Engine.
Still not enough? OK, so maybe an AWS Batch job (or equivalent). Does it have to run all the time? Then maybe a container managed by ECS or similar.
None of this works and you need an actual VM? Fine, spin one up. But now you have to care for it and keep it monitored, patched, and secured.
I was previously using Cloudflare in conjunction with a Linode server. The disadvantage of this, however, is that you need to ensure that the one server never goes down; otherwise anyone hitting a cold cache would see the site as down.
Also, why do you need a server at all? If you are doing nothing but serving static files, why incur the overhead of maintaining Let's Encrypt certificates and writing Nginx config files? AWS provides a simple interface in which you can view and manage your static files without having to SSH into a server.
On the point of price, I'm predicting my total cost of hosting a site serving roughly 40-45k users per month will be around $7/month, all while minimizing the administration and monitoring time a more traditional setup requires.
Hope this clarifies things; I'd be keen to hear any counter-thoughts!
We use Azure, and we have multiple sites getting that visitor count (or more, for PPC campaigns) for way less than $7/mo using Cloudflare/Azure Edge/Web Apps. This might not work for a single site, but with multiple sites on one service plan we can handle way more visits per site (especially if they were static) for $2-$3/site/mo. We could auto-scale if needed; that would raise the number, but it would not exceed yours. Unless I'm missing something, I don't see how $7 isn't rather expensive for that visitor count.
Maintaining a server is not complicated nowadays with Ansible. I have a few dozen; some of them (like the Linode) haven't needed any admin for years, and the others run Debian testing with Ansible to deploy matching configurations and keep up with updates.
S3 is going to be more reliable than anything you can reasonably be expected to administer yourself, and it's cheaper as well. It's also supported by all CI tools, so you can just script the deployment based on git commits.
My Linode hasn't been updated for years and still runs Nginx just fine. Port knocking means little risk of intrusion, and SSH is on a non-standard port too.
My deployment is shell scripts launched by systemd timers.
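For reference, that kind of pipeline fits in two small units; the paths, names, and schedule below are made up for illustration:

```ini
# /etc/systemd/system/deploy-site.service  (hypothetical)
[Unit]
Description=Rebuild and deploy the static site

[Service]
Type=oneshot
# deploy-site.sh would run the site generator and sync the output
ExecStart=/usr/local/bin/deploy-site.sh

# /etc/systemd/system/deploy-site.timer  (hypothetical)
[Unit]
Description=Run the site deploy hourly

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

Enabled with `systemctl enable --now deploy-site.timer`; `Persistent=true` makes systemd run a missed deploy on the next boot.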
Instead of downvoting, can someone please explain why this stable setup is not up to whatever standards?
Most likely your "hasn't been updated for years". Given all the security vulnerabilities that have been disclosed, including things like Heartbleed, I hope you meant your "setup hasn't been changed in years".
* Injecting a cryptocurrency-mining JavaScript into your page during the transmission of your static HTML page to your clients, without you or your users knowing it [1]
* Injecting explicit, illegal image material that would immediately get your clients in legal trouble for possession of such material, without you knowing
* Injecting a JS snippet that, instead of your site's contents, shows a fake antivirus page telling clients that your site is malicious, that a threat was eliminated by Fake Antivirus 10.0, and that they should immediately call Microsoft Support (phone number in Bangladesh) for further "assistance". There they're told they need a full cleaning of their hard drive for only $99 and are asked for their CC number
The point is: if you don't have end-to-end encryption, you can never be sure what your users see. They might see your site - or some slightly modified version of it, with a login box phishing passwords from your users and abusing their trust in your brand.
A MITM has nothing to do with someone gaining access to your server. It's someone gaining access to the infrastructure in between - a vulnerable public wifi, for instance.
You have a single point of failure and a Linux box (and CMS?) that needs to be constantly patched - how is that in any way comparable to the simplicity and scalability of dropping files onto an S3 bucket?