Building a static serverless website using S3 and CloudFront (sanderknape.com)
257 points by SanderKnape on Feb 17, 2020 | 273 comments



I don't work for them or anything, but I've honestly found Netlify to be the absolute easiest solution for static site hosting. And it's free! There are some paid features, but the free tier is all you need. You can use SSL, accept form inputs through request capturing, automate deployments with GitHub/GitLab hooks, and have it build sites for most popular static site generators (Jekyll, Hugo, etc.). An absolute breeze to use. Beats any hacky AWS solution hands down, imo.


I recommend using AWS Amplify if you need to stay within AWS.

The full product can be compared to Google Firebase but the Amplify Console specifically offers features similar to Netlify on top of standard AWS services (S3/CF/Codebuild).

I find it a much better experience than manually setting up S3/CF websites because of the out-of-the-box features that simply wouldn't happen otherwise for a static site like:

- instant cache invalidation

- branch deployments (with password protection & rollbacks)

- deployment process only deploys modified files

- simple custom headers

- simple redirects (redirecting individual assets in Cloudfront is not easy)


Agreed, Netlify is fantastic.

Another thing I really like is GitLab + Neocities. I wouldn't really recommend it for corporate sites, but I would for personal sites because of the community/purpose. I use GitLab CI to build my site, rclone to copy it to Neocities, and that's it.

Very simple, no-hassle combination with loads of bandwidth (I think my paid account has 3 TB/mo).
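
For what it's worth, a minimal sketch of that kind of CI job (assuming a Hugo build into ./public and an rclone remote named "neocities" that has already been configured, e.g. via `rclone config`; the names are placeholders):

    # build the static site, then push the output to the configured remote
    hugo --minify
    rclone copy ./public neocities: --progress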


I enjoy it so much (since 2017) that I'm going to JAMstack London!


I love Netlify as well; it's dead simple and works out of the box. Occasionally I do run into issues where builds get 'stuck' and never get processed, so I have to manually cancel the build and re-run it.

I've also had Russian users tell me that Netlify is sometimes blocked in Russia and they can't access the sites, but it may be sporadic.


From your link:

> No, netlify.com is probably not blocked in Russia. Yet.

> Details:

> URL http://netlify.com

> Domain netlify.com

> IP 134.209.226.211

Is it intermittent?

Also, any idea why they would be blocked, or is that part of the problem?


This is what I see

https://isitblockedinrussia.com/?host=https%3A%2F%2Fwww.netl...

> Yes! It appears that https://www.netlify.com/ is currently blocked in Russia.

> Details:

> URL: https://www.netlify.com/

> Domain: www.netlify.com

> IP: 167.99.137.12

> Decision 27-31-2018/Ид2971-18 made on 2018-04-16 by Генпрокуратура.

> This block affects IP 167.99.0.0/16.

My understanding is that the blocks are accidental side effects from the Great Telegram Blockage of 2018 [0] but I'm not entirely sure.

[0] https://en.wikipedia.org/wiki/Blocking_Telegram_in_Russia


Actually came here to ask the advantage of something like this over Netlify. I gave up midway through the article because I couldn't figure out the point.


Maybe an overly simple and dumb question, but I have to ask: what's the big difference between hosting static sites on services like these and just going with a regular web host like Bluehost and slapping in something like a WordPress template? This is what a lot of blogs, landing pages and even fairly static small business or organization sites seem to do without problems, and it's all pretty user-friendly for the non-developer crowd (me included).


People have been coming up with solutions to the 'make a basic business website' problem since the 90s, when Microsoft FrontPage, Macromedia Dreamweaver and NetObjects Fusion fought it out, and computer magazines would mail out their entire website on the CD on the magazine's cover, for readers without web access.

At this point, there are a great many ways to skin a cat.

Assuming you've got $5 a month or so, it mostly comes down to personal preferences about workflow, vendor independence, security, reputation, and who you already have an account with.


I asked below but got no answer, so hopefully I'm not being a bother by asking again here: any solid guide or resource you could recommend on doing just this, i.e. building a static site with these services while not using WordPress and typical hosting services like Bluehost?


Because your static .html and .css files won't get immediately hacked like an out-of-date turdpress site will.

There are many wordpress-like tools that spit out static files to upload to your hosting provider https://www.staticgen.com/


Static sites are faster, more secure, and cheaper to host than WordPress. That comes at the expense of user-friendliness.


Thanks for the clarification, all of you. "Turdpress" gave me a chuckle. That said, honestly would like to know so I can explore this further, are there any resources or guides you could recommend that lay out how to build a secure and straightforward static site in this way? For someone who isn't a developer by profession.


Agree, Netlify is awesome.

We recently moved our static sites from S3+CloudFront to Netlify.

The S3+CF combination is clever, and very, very cheap. But it gets messy when you want to add things like custom headers and lots of redirect rules.


Netlify is indeed very easy to use, but (at least here in Europe), I find their performance disappointing. Sites that shouldn't need more than a hundred milliseconds to load take a couple of seconds. Of course I can't really complain about a free product, but if it weren't free, I wouldn't be using it still.


I never noticed, but now that you mention it, there's like a 3s delay when loading my page, and then the page loads in 300ms... Strange. I wonder whether a paid version would improve the performance?


Stick Cloudflare in front of it.


Is using Netlify easier than using GitHub pages?


Netlify does more than GitHub Pages: it also builds, so with GitHub Pages you also need to use GitHub Actions. Netlify also has some other services they provide.


I've never had to use GitHub Actions to deploy to GitHub Pages (Pages existed long before Actions did). You only need to do something like that if you want them to build for you; with static sites I prefer to build myself (Hugo makes that fast).


Agree, Netlify is simple and easy to set up even for beginners. I've been hosting my domain there for the past year, integrated with GitHub. Haven't had any issues so far.


This is hilarious. All this complicated and unneeded stuff for a static website. It completely misses not just a point but multiple points.

Maybe you want a static site so that it'll 'live' forever and not be affected by future changes in software stacks. You definitely don't get that by doing this.

Maybe you want a static site so that it's simple to set up. It's not this.

Maybe you want a static site for security, all the complexity and accounts here make it less secure than a random site running random php.

This is just buzzword bingo for someone's resume.


Hmm, you conveniently omitted any comments about HA, latency or scalability... Pushing a bundle of files to a CDN is undeniably a superior way to host a static site/SPA in many ways.

Maybe the article overcomplicates but with services like Netlify, Amplify or Firebase Hosting it's stupid easy and cheap, and there's no vendor lock-in.


No need to be cynical. Content can be both helpful information and buzzword bingo.


>Maybe you want a static site for security, all the complexity and accounts here make it less secure than a random site running random php.

You seem to have no idea what you're talking about. Are you claiming that a static site with Cloudfront and S3 is less secure than an arbitrary PHP website?

There are clearly tradeoffs. Running your own server on your own hardware in a colo or at home is the best option if you need it to run for 30+ years and don't want to worry about "software stacks". Good job, now you have to worry about hardware stacks and backups.

This solution has an instant, global CDN that mitigates low level DOS and provides better response time across the planet. But that doesn't seem important to you.


What changing software stacks? The S3 and CloudFront configuration options used here haven’t changed for years. I can’t think of an AWS deprecation that has ever happened that would be on the level of breaking this very common configuration.

There’s only one account here (an AWS one), and since the content is public anyway there also isn’t much in the way of configuration subtleties.

If you have a way to compromise a static S3/CloudFront site more easily than you would a Wordpress instance that hasn’t been updated in 6 months, I’d love to read that blog post.

Buzzword bingo has unfortunately become a buzzword itself it seems.


Yeah, because no one can DDOS your CDN or hosting box /s


Host a static website the HN way:

1. Buy the cheapest, most unsustainable IPv4 VPS deal out there.

2. Make an A record and a www record and point the domain to your newfangled server.

3. Configure the server:

    apt-get install nginx goaccess
    cd website
    cp -r * /var/www/html

Yearly maintenance required:

    apt-get update && apt-get upgrade
    reboot

View traffic stats:

    goaccess -f /var/log/nginx/access.log


It's amazing how truly unreliable ultra-cheap VPS providers can be. You're lucky if you even get an upfront notice before they decommission hardware you are relying on, or the entire company just disappears overnight.

Usually you get what you pay for.


I've used the same OpenVZ instance with my ultra-cheap VPS provider for as long as Google's "Cloud" platform has effectively existed. It's been extremely solid with almost instant support over IRC. I've paid $5/mo for this for the last decade.

Sounds like you just made some bad choices. Big companies are only marginally less likely to disappear services than small companies are to disappear.


Is this company named RN?

You've described my experience exactly.


Google Cloud gives you 1 free f1-micro compute instance. Coupled with a CDN like Cloudflare or Netlify, it should be beefy enough for a static site.


At that point, why even keep the compute instance? You can get them to host your static content entirely.


Or setup a Google cloud domain named bucket and put Cloudflare in front.


Digital Ocean is $5 per month. You could probably run hundreds of static sites off a single instance depending on traffic, and these days many websites are really just business cards.


It costs like $2 for the lowest tier VPS through OVH, and they aren't going to disappear overnight.


Not a problem. VPS's are a commodity, you just switch to another supplier and repeat the steps. If it's too much trouble, make a bash script that runs the setup. If you can spare a few extra dollars per month, sign up to additional providers for redundancy.


I used to do this 15 years ago, but it's 2020... rock solid hosting for small projects has been free for a long time and takes 0 maintenance time. Plus making it to the top of HN won't kill your site.

A true hacker way in 2020 would be using IPFS or Dat... =)


The last time I tried to maintain a personal VPS I was using it for a Jenkins build server so users could obtain the latest artifacts of some open source programs I work on. That burned me after it got exploited using a Jenkins RCE and a bitcoin miner was installed on the VPS.

Also, the cheapest VPS you can find likely won't ever achieve the level of scale your static website in S3 could achieve. In the rare event you get a lot of web traffic, you're only hosed if AWS is hosed.


You gotta receive a ton of traffic for Nginx to stop serving static files. I'm pretty sure most of the static websites today could survive on a lite Nginx vps with minimal tuning.

Also, I'd rather have my server go down than receive a larger bill from AWS, but that all depends on your use case obviously.


Yeah this is probably true, I've never actually had a nginx instance be overwhelmed myself. I just don't know how much I trust VPS providers that aren't charging a premium $5+/month to deliver quality reliable performance.

The best part of S3 + Route53 is that your costs are basically constant: $0.50 a month for the hosted zone, and then you pay pennies for GBs of data transfer. In theory your bill could balloon if you had a hefty static website, or some big files left public in your buckets that someone constantly downloaded.


S3 isn’t actually that great at serving a ton of traffic without a CDN in front of it


I've been renting a $3/mo ARM C1 server from Scaleway for 5 years now (a physical microserver) and doing exactly this. It handles being on HN frontpage without a hiccup. I'm very happy.


Serious question: how much more maintenance is required? Could I get away with unattended-upgrades and nginx+wsgi+PostgreSQL?

I ask because actual servers seem like dark magic to me so I want to try to build a product with them, but I can't find anywhere if it's possible to run a reasonably secure server without years of studying.


If you're serving static content, installing Apache, nginx, or any other web server will do just fine. Make sure to set the document root to a directory you're fine being public.

If you're running something dynamic like WordPress, stay extremely on top of patches, unfortunately, and be super cautious about what plugins you use. (This is one of the better reasons to use a static website.)

If you want to run a Postgres for your dynamic website, configure it to listen only to localhost or only via UNIX sockets.

Make sure you keep your software up-to-date. unattended-upgrades is a great idea for OS-provided software.

Be careful about where you get software from. More than just "get it from somewhere trustworthy," the big concern here is to get it from someone who is applying software updates. For most OS-ish things, you want to get them from your distro; try to avoid downloading e.g. PHP from some random website, because you won't get automatic updates. For a few things - especially things like WordPress - I wouldn't trust the distro to keep up, largely because the common practice is to release security fixes by releasing new versions, and distros are going to want to backport the fixes, which is slower and not always guaranteed to work.

As another commenter mentioned, turn off remote password logins and set up SSH keys. (Most VPS providers will have some form of console / emergency access if you lose access to your SSH keys.)


I run my sites, all static, on a VPS, but I do the authoring in a single multi-site WordPress install and use the 'Simply Static' plugin to publish the result. The benefits are pretty awesome:

heaps of templates (because I'm often lazy), a one-stop shop for patches, locked-down plugins (child sites can't install plugins, only enable/disable), and only one place to look for problems (& you can lock the WordPress site to a single IP if you always want to use it from a single place).

FWIW, I never grokked AWS, and its ping times in my country are about half as good as local providers' (15-30ms for local vs 50-100ms for AWS local). Speed matters.

Also, my use case is to 'fall over' (meaning: fail/stop working/be unresponsive) wrt DDoS, whereas I know many here are 'must not fail' (with varying levels of acceptability). So I write concise, low-bandwidth websites that appear instantly (to my local market users).


Thank you for the advice. I've tried out passwordless login and found it more convenient, so that's not a problem. I'd want to be deploying a Python app I wrote myself, and some static files.


It’s not that bad. A day or two initially, then an hour or two every six months, depending how much work you put into automating it. It’s definitely a good way to learn.

Write everything down! Every command you type. You don't want to come back in six months' time and have to relearn what you did the first time.

If you’re feeling ambitious you can script almost the entire deployment from provisioning a machine through to rsyncing the content. It’s pretty fun to run a bash script or two and see an entire server pop up.
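
A hedged sketch of the content-sync half (provisioning is provider-specific; the host and paths here are placeholders):

    #!/usr/bin/env bash
    set -euo pipefail
    hugo --minify                                                    # or whatever builds ./public
    rsync -avz --delete public/ deploy@example.com:/var/www/html/   # push the built site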


As a former sysadmin, this is still a lot of pain in the ass. One Terraform file that keeps my S3 + CloudFront sites configured, run once a month to ensure LetsEncrypt certs are rolled, and done.

Have maintained enough servers for a lifetime, I’d rather be coding!


Thanks for the advice! I was stuck thinking I'd have to learn something like Ansible to automate deployments; bash scripts are a great idea.

I have Linux on my laptop and I've been trying to document what I configure with heavily commented bash code, but I've run into issues with editing config files. I frequently want to say something like "set this variable to this value", but sed feels too fragile and easy to mess up silently, replacing the entire file quietly breaks future compatibility if other entries in the config get changed in an update, and appending to the file so the last item overrides feels hacky and doesn't always work.

How does everyone else do that?


Managed cloud products seem like dark magic to me. A VPS or EC2 VM is just like the computer I'm using right now. There's no magic. If something goes wrong, I can fix it as if it were on my local machine since it's often literally the same kernel version, same architecture, same shared libraries, same software from the same package manager. Performance tests on the local machine very closely predicts that on the server. On a serverless cloud product, to fix something deep, the tools at my disposal are a maze of buttons on a web GUI or CLI that sends the same opaque API calls the web console does.


Do not fear running your own server. There is no such thing as perfect security, and neither is the cloud inherently secure. Many of the infamous data leaks you've heard about in recent years occurred on cloud-hosted systems. Ultimately, if security is a concern, you need someone who understands security, regardless of where it's hosted.


Yes, you can absolutely do that, but shouting that message from the hilltops isn't a good business model in an industry with ADHD.


What do you mean? I asked because I constantly hear that running my own server is better and cheaper on HN, and also that running a server is really hard if you didn't grow up memorizing binders of man pages.


I felt the same way last year, before I had ever deployed a server publicly. It's really not that bad for small things. I run Nginx and some docker containers and proxy to those docker containers for certain subdomains. Now that I know how to do it, I moved from AWS to DO and the new setup probably took 20-30 minutes to get everything set up, including Let's Encrypt.


You can change the SSH port and use an SSH key instead of a password. Don't worry about a firewall or fail2ban. That's about all. Also, run everything as root.

Repeat the above steps once the VPS provider goes out of business (as someone else also pointed out).


> you can change the ssh port and use a ssh key instead of a password.

I'd advise against changing the ssh port - I don't think the (small) inconvenience is worth the (tiny) benefit to obscurity.

I would always recommend turning off password authentication for ssh, though.

(along with disabling direct root login via ssh, but root-with-key-only is now the default - and if you already enforce key based login, it's a bit hard to come up with a real-world scenario where requiring su/sudo is much help for such a simple setup).

I would probably amend your list to include unattended-upgrades (regular, automated security-related updates - but I guess that's starting to be standard, now?).
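
A minimal sketch of those two suggestions on a recent Debian/Ubuntu box (whose stock sshd_config includes /etc/ssh/sshd_config.d/*.conf, so a drop-in file avoids editing the shipped config):

    # disable password logins via a drop-in file, validate, then reload sshd
    printf 'PasswordAuthentication no\nPermitRootLogin prohibit-password\n' | \
      sudo tee /etc/ssh/sshd_config.d/90-hardening.conf
    sudo sshd -t && sudo systemctl reload ssh
    # regular, automated security updates
    sudo apt-get install unattended-upgrades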

You will probably need an ssl cert, possibly from let's-encrypt.

At that point, with only sshd and nginx listening to the network - avenues of compromise would be kernel exploit (rare), sshd exploit (rare) or nginx exploit (rare) - compromise via apt or let's-encrypt (should also be unlikely).

Now, if the site is dynamic, there's likely to be a few bugs in the application, and some kind of compromise seems more likely.


Anecdotally, changing the ssh port on a very low-budget VPS is worth the effort because the CPU time eaten by responding to the ssh bots can be noticeable.


This has been my experience as well. I remember having a VPS with digital ocean a long time ago and it was getting hammered badly with bots. Changed the ports, made pubkey authentication only and installed fail2ban for future pesky bots did the trick for me.

To be honest, I don't think the people controlling those bots want to deal with those of us who make it harder for them to gain access. Instead, why not happily hammer away at everyone else's port 22 with the bare minimum configuration? Those who enhance the security were never the targeted audience to begin with.


> Those who enhance the security were never the targeted audience to begin with.

This is pretty insightful. Statistically, attackers are probably mostly looking for badly configured machines which are easy to exploit rather than hardened systems that take a long time to penetrate.

State actors and obsessed attackers are different, of course. But statistically even taking care of using the simplest precautions keeps one out of the reach of the broad majority of such attacks.


I'm more familiar with AWS. There I just firewall SSH to just my IP (with a script to change it for the laptop case, or use mosh), and thus spend no CPU time responding to ssh bots.
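
(A rough sketch of that kind of helper, assuming an EC2 security group dedicated to SSH; the group ID is a placeholder, and a fuller version would first revoke the previously allowed /32:)

    SG_ID="sg-0123456789abcdef0"
    MY_IP="$(curl -s https://checkip.amazonaws.com)/32"
    aws ec2 authorize-security-group-ingress \
      --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$MY_IP"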

Do VPS providers offer some sort of similar firewall service outside your instance?


I don't think low budget VPS providers typically allow this. That said, fail2ban works OK, as does manual iptables (now nftables) - unfortunately /etc/hosts.allow is deprecated[1].

If you don't know that you'll be able to arrive from an IP or subnet - another option would be port knocking. (eg: knockd). Although, I'd try to avoid adding more code and logic to the mix - that goes for both fail2ban and knockd.

[1] ed: Note, the rationale for this is sound: the firewall (pf or nftables) is very good at filtering on IP - so better avoid introducing another layer of software that does the same thing.


You can't create/edit firewall rules via APIs with some VPS providers?


By "low budget" i read"cheaper than Digital Ocean". I'm not sure how many of them let you specify firewall rules outside of/"in front of" your vm.


You still get hit by bots, at least some of them. If you are really concerned you want to use port knocking.


That's not something I had considered - I suppose the handshake does take up some cpu.


I'm inexperienced, but relatively confident that if I use an off-the-shelf login module to protect everything but the login page, the handful (literally) of users with credentials are internal to the organization and trusted with the underlying data anyway, and the data itself is essentially worthless to outsiders, then I'm pretty safe.

My thinking is that even if, for example, I fail to sanitize inputs to the database or inputs displayed to other users, that won't lead to an exploit absent a bug in the off-the-shelf login module or someone attacking their colleagues (in which case there are other, weaker links).

The organization I'm building this for has other moderately sensitive systems on an internal network, but the server I'll be managing will be on the public internet. The site I'm building will export CSV files to be opened with Excel, so I suppose if the site were compromised it could be used to get an exploit onto a computer in the network. Still, I presume that if they're facing that kind of attack they'll have plenty of other weak links, like documents spearphished to people, and I'm pretty sure the sensitive systems are on a separate internal network.

Is my confidence crazy?


I don't think you're crazy.

But I also think that I would trust eg apache/nginx basic auth, more than login/session handling at the application level (php/ruby/... with users in a db).

Assume at least one user has a dictionary password, and suddenly you'll want to enforce 2fa via otp or similar - for peace of mind.

As a general rule, I tend to assume a targeted attack will succeed (no reason to make that too easy, though) - what I aim to avoid are the bots.

They'll likely be brute forcing passwords, blindly trying sql injection - along with a few off the shelf exploits for various popular applications (eg: php forum software).


> I'd advise against changing the ssh port

Hard disagree, even if for as simple a reason as port 22 getting inundated by drive-by hacking attempts, making the log files virtually useless.

Running telnet with no password on a non-standard port would be security through obscurity, but having ssh on a non-default port isn't trying to achieve security through obscurity.


Ideally you should block all SSH traffic except from a VPN (not a public one obviously). In that case it wouldn't matter whether you changed the port.


There are literally dozens of sites that offer free static site hosting, from GitHub Pages to Firebase Hosting. What advantage do you get from using a VM?


A machine I can actually use, portability. I have enough servers, adding nginx and certbot to one isn't hard. Adding instances and load balancing isn't either should it be warranted. The "serverless" approach is the new one and thus the one that should seek justification.


> I have enough servers, adding nginx and certbot to one isn't hard.

I think this is key. If you have resources that are already serving things, the marginal cost of serving something else is low.

For a typical person not running servers for personal services, the upfront cost does not seem justified when it's so easy (and cheap) to set up and use the alternatives.


If you’re hosting static content, are you really worried about the “lock in” boogeyman?


Server-side stats instead of Google Analytics is a big one, imo.


You can also just get a micro instance from the major cloud providers. Just as cheap, and more reliable and usable with all the cloud tooling.


But then you have to manage the instance. Once you set everything up with S3, your deployment is literally:

   aws s3 cp $your-local-directory s3://your-bucket --recursive


That's out of context. I was responding to OP getting a cheap VPS provider. If you want a cheap server then it's better to just get a tiny instance from a major cloud instead. Most also have free tiers.


The top of his post was

“Host a static website the HN way”

It would be overkill for a static website.


Then reply to their post instead?


> Buy the cheapest most unsustainable VPS ipv4 deal out there.

[...]

> Yearly maintenance required:

Well, presumably, find the new cheapest, most unsustainable VPS ipv4 deal, the previous one not having been sustained.


Replace nginx with Caddy for zero config HTTPS.


I would wait for Caddy 2 at this point, for I found 1.0 doesn't work very well when your site gets complicated - many directives conflict with each other. Fortunately it seems that most of these are solved in 2.0.


Don't wait, start using it now while we're in beta so we can fix problems before they affect everyone!


And even bring your config with you: https://github.com/caddyserver/nginx-adapter


For extra work, add a Let's Encrypt client, configure nginx to do SSL, and set up automatic renewal.
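
A minimal sketch with certbot, assuming a Debian/Ubuntu box where nginx already serves the domain (package names vary by distro, and the domain is a placeholder):

    sudo apt-get install certbot python3-certbot-nginx      # Let's Encrypt client + nginx plugin
    sudo certbot --nginx -d example.com -d www.example.com  # obtain the cert and update the nginx config
    sudo certbot renew --dry-run                            # check renewal; a timer/cron job does the real runs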


And watch it go down from the load when it gets linked to on HN.


Static site? You can serve an insane amount of static content with a cheap VPS box. Are people now only using so-called serverless tech and have forgotten how it works underneath all the buzzwords? A $5 DigitalOcean box can serve double the HN traffic and more.


The described CloudFront configuration would be less than $5/month. How? As you point out, a cheap VPS box can serve insane amounts of traffic, so AWS doesn't even need hardware equivalent to that box to host your content.


> Host a static website the HN way

Gatekeeping alert!

Fortunately, there is more than one way to do things. I've been on HN for a while now and I don't do any of this. I was doing it when I wanted to learn, but now I need to get a project launched without any burden, so I deploy it to ZEIT Now, which is not the same as the OP's setup, but you get the idea.


You realize he was being sarcastic right?


Nope. It's really hard to distinguish these days!


How is that "without any burden"? That's paying somebody else to take that burden for you, buying into their non-standard tooling, and hoping that they outlast whatever project you are hosting.


You pay for convenience. Someone else takes care of the server. I pay them $5 per month for the site/app, I pay $15 for the database somewhere else, and I charge $150+/hour while I work on something else instead of dealing with the server. That's without any burden.


I would like to just mention that GitHub does static site hosting for free. I have used it for a few years and never had a problem. Static, free, probably not going out of business in the next 5+ years, and the domain is my only cost.


True, but they reserve the right to prohibit sites that use up a non-trivial amount of bandwidth, or that have commercial purposes, or various other reasons.

It's a great service, but I wouldn't count on it as your primary hosting.


You don't get TLS certs for both apex and www subdomain from GitHub though.

https://github.com/isaacs/github/issues/1675



And they can probably handle your traffic, too!


Are there still free domain names (even exotic ones)?


I found s3 + cloudflare to be a better combo. Cloudflare offers free ssl certs and has overall been a great experience. I also use AWS SES for my domain mail. It gets delivered to S3, then a local python script grabs it and dumps it in a mailbox file for dovecot to serve via imap. I pay $0.05/month for my hosting of my site and email.

https://markw.dev


Or GitHub Pages + CloudFlare.

And as another comment mentions, GitHub Pages now offers HTTPS certificates [1] for custom domains, so GitHub Pages alone is sufficient for most static websites.

[1]: https://github.blog/2018-05-01-github-pages-custom-domains-h...


I started there but didn’t like having my drafts and unfinished ideas visible in the repo. I now maintain my site in a private repo and publish to S3. Not sure if that’s possible now in Github or not.


That’s a fair point. GitHub Pages from private repos are enabled only if you have a Pro or Team subscription. [1]

[1]: https://github.com/pricing#feature-comparison


Actually I'm not sure if it makes sense. If you had gh pages from a private repo, they would be public (in the sense that they are published on the open web) - so that won't solve the OP's problem.

IMO the simplest solution for the OP is to have a private repository where he does any draft work, and then pushes the master branch (or whatever) to the public repo in order to "publish".

This option is available with or without a paid account, and I don't see any significantly better option available to paid accounts. The only thing you could do there is publish your pages from a private repo, and maintain your drafts in branches in the same repo. That is virtually identical to the original suggestion except that "master branch in public repo" is replaced by "master branch in private (same) repo" which probably makes little practical difference in the workflow.


Well, the html files of your website are of course public information once you publish them, and you can copy the html files to a public repo as part of the build process. When I tried Hugo a couple years ago, I had it set up to do just that (output to a different repo). There was no difference on my end. No matter how you build your site, you can always add the copying at the end of your Makefile or whatever method you use.
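
If it helps, that copy step can be as small as this (a sketch assuming Hugo and a second checkout of the public repo next to the private one; the paths are placeholders):

    hugo --destination ../my-site-public   # build straight into the public repo's working copy
    cd ../my-site-public
    git add -A && git commit -m "Publish" && git push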


You could always keep the main repo private, and just push the published versions to the public repo.


You can have a private repo hosting GitHub Pages; I'm using it for my personal website.


Are you paying for it? It sounds like a great solution if it's available on the free tier.


GitHub provided private repos to everyone after Microsoft bought them. And I don't know why it'd require the repo to be public to do the pages...


Pages from a private repo is a paid feature.


private repo + netlify when ready then


AWS also offers free SSL certs via the Amazon Certificate Manager. The certs only work with CloudFront, Load balancers and API Gateway though.

One advantage is that they are auto renewing and you don’t have to manage them.


The free SSL certificates via ACM are only available in the US East (Northern Virginia) Region. That has tripped me up before.


That's not quite right. You must provision your CloudFront certificate in us-east-1 because that's where CloudFront is. You can provision ACM certs in any region.
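
For example (a hedged sketch; the domain is a placeholder):

    # certificates used by CloudFront must live in us-east-1
    aws acm request-certificate --region us-east-1 \
      --domain-name example.com \
      --subject-alternative-names www.example.com \
      --validation-method DNS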


I have ACM certs issued and working in ap-northeast-1 and us-west-2, attached to an ALB.


That is the case for CloudFront certificates globally. For load balancers or other uses, you should just provision your certs in your own region.


AWS ses has convinced me Amazon and Google have too much power. 1% "spam" = frozen account

Could be bounces, but it's mostly users who clearly signed up for my email list, with zero deception, marking it as spam.

It's actually cut my content production down because I'm afraid to email. From 50 articles per year to 12 to 4 to 2.


Knowing AWS SES rules and seeing the lack of context in your post, I'm going to guess that either you're embellishing the story, or your content quality is low enough that users see it as spam. 50 a year is about one a week; I'd need to be super motivated to stay subscribed to a once-a-week email from a single content provider.

Bounces also have their own system and should be handled by your email system as well.

Bounces also have their own system and should be handled by your email system as well.


Not embellishing.

And even at 4 a year I have this issue.

If 70 of the 7000 emails I send bounce or get marked as Spam, I get frozen until I ask for forgiveness.


I find your email setup intriguing. Any chance you could share those scripts in a GitHub repo or gist?


I plan to write a post about it. I’ll put it up here when I’m done.


I set this up a couple of years ago via this Lambda function: https://github.com/arithmetric/aws-lambda-ses-forwarder

It may look outdated, but it still functions well for me


The big difference is my setup doesn’t require another email service. Just 10 lines in a config to spin up an imap server. I was trying avoid the big mail providers when I set this up.


How is this so complicated? I shudder to think of the amount of developer hours wasted by the weirdness and complexity of AWS. Really wish they would prioritize usability and developer experience.


AWS's business isn't making simple things like this easy for solo practitioners who are never going to spend any money.

Their business is making big complicated things possible for companies that are going to spend a lot of money and don't care about a small amount of incidental complexity.


> Their business is making big complicated things possible for companies that are going to spend a lot of money and don't care about a small amount of incidental complexity.

While imposing technical limitations to extract more revenue along the way.


Could you share some examples?


Probably because it is built on tools that are made to scale arbitrarily and solve a large number of very general needs. This need is simple; the tool is not built for simple needs, though it can meet them.

I dislike the UX of AWS but the complexity here isn't strange to me. I bothered to set up the first bit of this (didn't need sub-dir index files so didn't realize it was a problem) since I wanted a simple storage, simple deploy and good resilience.

My site has been hit by HN a few times now and it hasn't been an issue. Fairly set and forget. But the start is annoying.


I have my personal website deployed to S3 and my DNS in Route53 with a Travis CI commit hook that will upload my files and update the permissions automatically when I push changes to Github.

Only costs ~$27/year to have a domain ($15) and a static website deployed to S3/R53 ($1/month), which I've found to be fairly reasonable for the level of reliability I can expect.

I admit it is complicated up front, but once it is set up and updates are automated, it is really nice not to need to worry about self-hosting, hardware, VPSs, etc. just for a simple website.


> push changes to Github

Or create an organisation on Github and publish this website as a Page. It is simple and it is free.


Certainly not a bad idea, but in my case I care to own the domain because I also use it when naming Java packages for the projects I write. Since I'm already paying for the domain, I figure I might as well use it for my actual website too. I've found S3 and Route53 to be the most cost effective for my particular use case.


I personally haven’t used it, but I think you can use your own domain with GitHub pages fairly easily.

https://help.github.com/en/github/working-with-github-pages/...

Edit: I see others mentioned this too, but hopefully the link adds some added benefit!


Github Pages supports domains


It is easy to connect your own domain to a Github Page ;-)


Github Pages support custom domains for free.


Can someone please provide practical advice on moving from a site hosted on Wordpress using all the plugins to a static hosted site? It seems unbelievably complicated.

My non-technical employees and partners like Wordpress. We like the plugins that make web development easy and not arcane.

But we don't like the fact that every page requires basically a database read. Yes, we use WP-SuperCache, and it's ok.

But why oh why is it so hard to do the editing on Wordpress and the publishing on a Static site? Anyone with some real guidance here?

We want the people who come to the site to get the static experience while the people who edit the site to get the Wordpress experience.


1 - Use a CDN and cache your html on the edge. 5 mins of work for infinite read scaling and no other changes.

2 - You can run a process to crawl your WP site and output all the pages as static files, then upload those for hosting. WP2Static handles that for you: https://wordpress.org/plugins/static-html-output-plugin/

3 - Use Wordpress as a "headless" CMS for editing and storing content, then use one of the modern static site frameworks to get content over JSON/HTTP from Wordpress while building your site.

I'd go for option 1 and call it done.


> My non-technical employees and partners like Wordpress. We like the plugins that make web development easy and not arcane.

Use what works for your teams. Apparently, that's Wordpress and that's okay.

> We want the people who come to the site to get the static experience while the people who edit the site to get the Wordpress experience.

Stick a caching proxy or CDN in front of it, not just a Wordpress cache plugin. Something like CloudFlare would help you out.


Static site generation is just a trendy reinvention of caching. Put a CDN in front of your site, or run one of the many wordpress cache plugins [1] and point your httpd at the cache directory instead of PHP.

[1] https://wordpress.org/plugins/static-html-output-plugin/


> Can someone please provide practical advice on moving from a site hosted on Wordpress using all the plugins to a static hosted site? It seems unbelievably complicated.

Oh I'm sure it is. Static website is just that--static. You don't have a database, and that is a great thing, but that also means you can't store stuff like users, comments, articles to be published in the future, etc. out of the box.

You can do all of that of course, but it's going to be difficult, and you're gonna plug non-static parts to your static website (such as plugging the comments part). All in all, you can't expect to have all the advantages of static websites as well as Wordpress.

I personally simply took my content from Wordpress and built a new website with gatsbyjs. It's fast, clean, and entirely in React + Typescript. It has no dynamic parts, though. If you're a business and your Wordpress site is production critical, who knows, you might not want to migrate. You might also want to explore alternative solutions, like split some parts of your Wordpress blob into several static websites, and keep the main WP site with less content that's easier to manage... There are a number of possibilities.


You can use wget or curl to make a static copy of your Wordpress site and publish that. There are some small caveats (all files need to be reachable via links or you will have to manually tell wget where to find them) but it's a great solution if you want the convenience of Wordpress and the security of a static website.

I set up such a system for my old research group during my PhD (as they were not allowed to use Wordpress directly) and they are still using it 8 years later: http://iramis.cea.fr/spec/Pres/Quantro/static/index.html

Back then I wrote a simple Wordpress plugin that would create an empty file when a button was pressed in the UI, and I had a cron job that was executed once per minute, checked if the file existed and if so deleted it and copied the new version of the dynamic website to the static server.
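
For reference, the mirroring itself can be a single wget invocation (a sketch; the private URL is a placeholder):

    # crawl the private WordPress instance into ./static and publish that directory
    wget --mirror --convert-links --adjust-extension --page-requisites --no-parent \
         --directory-prefix=./static https://wp-private.example.com/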


In a previous life I worked on a site that had a private Wordpress instance that was scraped and cloned to a more or less static site.

You can either do this explicitly using a small scraper tool, or implicitly using aggressive reverse-proxy caching rules.


Sounds pretty intensive. Every page would need constant scraping as links update and sidebars change, wouldn't they?


I think you might be overthinking it. Just drop cache after publishing.

Maybe you’re doing something more complicated that isn’t amenable to pseudo-static?


Just put cloudflare or a similar service in front of it and tada, your host handles only a few requests per minute.


Is there any quick guide or plugin for front-ending Wordpress with Cloudflare? Sorry if this is a LMGTFY or RTFM but there's legit lots of conflicting advice.


You should set this up from within cloudflare. What they do is essentially download your website from a private url, e.g. blog.mydomain.com and publish it under your public domain, e.g. myblog.com.

In theory, wordpress doesn’t even know it’s happening. Make sure to disable any caching plugins as well, you won’t need them anymore.


Hardypress.com is an easy option.


I’ve had to do this multiple times over the past two years. It’s always a shock to me how hard it is.

The Lambda@Edge functions especially. They have 4 phases they can be injected into: viewer-request, origin-request, origin-response, and viewer-response. If you put your code in the wrong place you get hard to debug issues. Additionally you cannot “annotate” a request, so a decision at say “viewer-request” cannot pass information to later stages.

Also, deployments take 15 minutes at least which just further frustrates the debugging process.


Every AWS service is shockingly crusty. Even after spending years on AWS calibrating my expectations downwards, it regularly finds ways to surprise me.

I know, I know. For developers it beats obtaining permission every time you want to spend $5/mo on some plumbing and for managers it beats getting fired because you chose Google cloud and they canceled it the next year. Still... ugh.


This was actually one of my most delightful experiences with Azure Functions and Microsoft's API Management solution. The overhead doesn't make it the right fit for every serverless architecture, but it did make orchestrating functions in this way a lot easier.


Thanks for this comment! I know what the next side tech I'll play around with will be :)


I totally agree. CloudFront is maddeningly anemic, with hacks required to do the most basic things. Nginx/Varnish configuration is bad but CloudFront is worse. I wonder how much better competitors like Cloudflare and Fastly are.


Have you tried Cloudflare Workers and Workers Sites? I believe they fix both issues.


Since we are all sharing how to deploy static sites, here's my approach using CloudFormation to build a static site with S3, CloudFront, Certificate Manager and Route53.

https://gitlab.com/verbose-equals-true/django-postgres-vue-g...

and here's how I deploy that site with GitLab CI.

https://gitlab.com/verbose-equals-true/django-postgres-vue-g...

Also, nobody mentioned GitLab pages which offers some pretty nice static site solutions as well.


Nice solution! If you're interested in a code based solution, this is one space where the CDK really shines. OAI, CloudFront Website Distributions, and Route53 AAAA records to the distribution are all turnkey constructs. Deployment is just executing 'cdk deploy'.

If your CFN ever gives you headaches, maybe take a look.


Thanks, I'm interested in learning a code-based solution. I like the ideas behind CDK, Pulumi and Terraform, but I feel like they all have trade-offs. CFN isn't the best, but I don't look forward to having to re-implement my entire tech stack to achieve the same result compared with what I currently have


Just use Netlify, it's free, you can hook it up to a GitHub repository and they'll handle your Let's Encrypt certs for you.


Just use Github pages, supports TLS, CNAME and is free.


Netlify does DNS and can serve from private repos for free. Not sure if GitHub changed their policy, but last time I looked, if you wanted to serve from a private repo, you needed to have a Pro plan.


That's not true (at least not anymore). Now that Github offers free unlimited private repos, there's no difference between public/private ones.


No, it is still true. From https://help.github.com/en/github/working-with-github-pages

> GitHub Pages is available in public repositories with GitHub Free, and in public and private repositories with GitHub Pro, GitHub Team, GitHub Enterprise Cloud, and GitHub Enterprise Server. For more information, see "GitHub's products."


Ah, I guess it's because I'm on Github Student.


Cool, that's good to know.


For anyone who is hosting .html files and doesn't want to have to put index files inside directories, the "trick" with s3 is to rename the file to the name without the html extension, and change the Content Type to text/html.
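
By hand, that looks roughly like this (bucket and page names are placeholders):

    # upload about.html as the extensionless key "about", forcing the MIME type
    # that S3 would otherwise guess (badly) from the missing extension
    aws s3 cp about.html s3://your-bucket/about --content-type text/html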

Earlier today I modified the popular jakejarvis s3 sync github action to allow for this during my CI/CD process.

https://github.com/davidweatherall/s3-sync-action


Thanks for sharing that. Nice repo! Can you explain why I wouldn't want an index file in a directory? I assume you mean index.html?


I do mean index.html! Personally I prefer the url structure of example.com/about as opposed to example.com/about/ - especially when adding on anchors or parameters (example.com/about?param as opposed to /about/?param).

Relative pathing can be useful depending on file structure.

It avoids issues with directory / page naming conflicts. E.g. my images are stored at /assets/img/image.png - but if I wanted to create a page at example.com/assets this would require me to then place an index.html inside my assets directory - which doesn't seem logical.

Overall, mostly personal preference of me disliking the index.html inside directory method - It just doesn't seem like it's the "right" way to access a page. If I have a example.com/about page - I expect them to be hitting an about.html file.


Vendor lock-in is real - why not just generate an index.html file? Then you don't need to hack anything and your files are portable. index.html will work anywhere.


I feel dumb after reading this. So easy. Thank you.


I’ve been using S3 static hosting and CloudFront for all of my static sites recently and it’s fantastic (and nearly free). This is not a hacky solution by any means, S3 buckets support static file hosting out of the box and CloudFront sits in front caching your content across the CDN. Throw in AWS Certificate Manager and you get TLS solved too. Low traffic sites cost literally nothing. S3 runs $0.004 per 10k requests and Cloudfront comes in at $0.001 per 10K.


I don't really understand the author's claims here.

        const streamsBucket = new s3.Bucket(this, 'StreamsBucket', { websiteIndexDocument: 'index.html', removalPolicy: RemovalPolicy.DESTROY })
        /* Create a cloudfront endpoint to make it publicly accessible */
        const streamsDistributionOai = new cloudfront.OriginAccessIdentity(this, 'StreamsBucketOAI', {});
        const streamsDistribution = new cloudfront.CloudFrontWebDistribution(this, 'StreamsDistribution', {
            originConfigs: [
                {
                    s3OriginSource: {
                        s3BucketSource: streamsBucket,
                        originAccessIdentity: streamsDistributionOai,
                    },
                    behaviors: [ {isDefaultBehavior: true} ],
                }
            ]
        });
Will create a bucket that serves index.html from a non-website bucket, in three CDK constructs. If you need to populate the bucket, add another line that creates a BucketDeployment.

All files are secured by the OAI, and the bucket is not directly accessible over the public internet.


This is a ridiculously complicated tutorial.

Sure, S3 works if you want to deploy it as safely as possible, are willing to pay for it, and do this on a recurring basis, but otherwise it's too big of a fuss for a personal blog. I've used GitHub Pages for over 4 years now with no issues, with my own domain and SSL.

The simplest is surge.sh, period. If you don't need a custom domain or a web platform and just want to deploy from your CLI, surge.sh is the solution. It's good when you want to show off a static website to someone outside your server. Even switching to a premium plan is done from the CLI.

Other solutions are: Netlify, ZEIT Now, Aerobatic or Render.

I frequently use this list from this GatsbyJS wiki to check for static website hosts, I recommend it: https://www.gatsbyjs.org/docs/deploying-and-hosting/


The complicated stuff seems predicated on this:

“However, the S3 website endpoint is publicly available. Anyone who knows this endpoint can therefore also request your content while bypassing CloudFront. If both URLs are crawled by Google, you risk getting a penalty for duplicate content.”

I’m curious if anyone has experienced these issues in practice.


You can specify the canonical url in the HTML. Then it's not an issue.

https://support.google.com/webmasters/answer/139066?hl=en


I run the setup of S3 + Cloudfront and never ran into their index problem because I'm not trying for URLs without document file extensions.

I assume that's the reason for really wanting index redirects in directories: pretty URLs. I didn't feel the need. There are other valid reasons too, I'm sure. But my needs diverged from the article even earlier than the SEO penalties.


The pretty URLs thing is confusing me now too. AWS lets you configure the buckets to do the pretty URLs.

https://docs.aws.amazon.com/AmazonS3/latest/dev/HowDoIWebsit... https://docs.aws.amazon.com/AmazonS3/latest/dev/IndexDocumen...

I put up a Jekyll site on s3/cloudfront recently and it seems to work well.

Anyway, I’m still wondering about the consequences of the naked s3 endpoint being available.


I know this is not a popular opinion here, but here goes:

I use a digitalocean instance where I install nginx. Then I copy the static files through sftp or scp. For ssl, I install letsencrypt.


"I saw a guy building a website today.

No React.

No Vue.

No Ember.

He just sat there.

Writing HTML.

Like a Psychopath."

(credit to https://twitter.com/kiddbubu/status/1187120868259979269)


I've been doing the same, however I just SSH into it, pull my git repo and run a command to generate all static files.


I have a similar setup! I use Ansible to deploy though


What benefits does this have in comparison to much easier solutions like Netlify, GitHub Pages, etc.?


What's wrong with old school apache server running on a $5/month DigitalOcean droplet?


I've been running my personal blog and many other micro-sites on a $5 DigitalOcean droplet for years. It's seriously the best bang for the buck.


What I’ve gleaned from this thread is that in 2020 there are a thousand easy ways to host a static website. Just take your pick and go with it.


And for every way, there will be 10 people who will call you names for doing it this way, and that you should really do it another way. Just do whatever makes sense for you.


I see you are using Hugo 0.62.1

https://sanderknape.com

FYI starting with 0.63.0 the speed has about doubled:

https://github.com/gohugoio/hugo/releases/tag/v0.63.0


Is the speed it's taking to building the static site usually a problem? I don't recall ever waiting for it to be done. Maybe my site is too minimal to see a difference (https://annoying.technology).


I have compile times of >30 seconds with Jekyll blog [1,2]. This is an issue. It adds a lot of friction to the edit/preview loop.

BTW: I just moved it to Netlify since reading this thread. Took me ~5 minutes.

[1] https://github.com/HeinrichHartmann/HeinrichHartmann.github....

[2] https://www.heinrichhartmann.com/


FWIW I'm getting a "certificate does not match *.netlify.com" SSL error on your site.


No, Circle goes from a clean state to npm install, SCSS compile, dragging content from Contentful, and then running the Hugo build and uploading to S3 in under 30 seconds for our site. Most of the time is npm faffing about.


It's much simpler to use GitHub Pages + Cloudfront (optional).


Ya, I'm trying to understand what the downside to github pages is compared to this strategy. Can anybody elaborate on what I might be missing?


I'd rather be a paying customer and know I'm not going to lose my setup for some arbitrary reason.


That's absolutely a valid reason. All my static sites are not generating income and don't need support. I've also got my own backups, so if GH decides I shouldn't be there I'm not losing much. If a site is important to your business then support is certainly more critical.

Thanks for the very valid counterpoint!


> The redirect logic provided by the S3 website endpoint can be moved to a Lambda@Edge function.

The post here is using Lambda@Edge to handle pointing example.com/about/ to example.com/about/index.html

However there's a lot more you can do / may want to do with Lambda@Edge if you're hosting a Cloudfront+S3 website.

I just wrote up how I set up a system to make managing server-side redirects (e.g. 301s from /old-page to /new-page) easier via Lambda@Edge.

https://news.ycombinator.com/item?id=22351484

https://engineering.close.com/posts/redirects-using-cloudfro...


I did some redirects without Lambdas last year using S3 routing rules, which kind of worked. As I was figuring this and several other things out, I also wrote up some details on it.

https://dev.to/jillesvangurp/using-cloudfront-s3-and-route-5...

IMHO, Amazon should be ashamed of themselves for making it so unbelievably convoluted and hard to host a simple website on cloudfront. It's the one thing almost every website out there has to solve and they made it super painful to get this right.


Did the same for my personal site. Really happy with the result: https://ethanaa.com/blog/conversion-to-static-site-with-vuep...

I also wrote follow up posts on adding document search: https://ethanaa.com/blog/document-search-with-algolia/

And continuous deployment: https://ethanaa.com/blog/continuous-deployment-with-circleci...


You should use Amplify, it solves all those problems.


I use AWS Amplify, it makes all of these headaches go away and it's ridiculously simple to configure.

1. Add your domain in AWS Route 53

2. Tell it which git repo & branch you want to use (including private github ones)

3. It detects the repo contains a hugo site (works for other ssg) and generates a build script

So now every time I commit to my selected branch, Lambda is notified; it fires up a VM to generate the HTML, moves it into S3, and takes care of CloudFront, SSL & so on.


I did something very similar recently with my website. With that said, I would not do it again. For something seemingly so simple, the infrastructure work was a pain.

Also, while I’m at it, static websites are, in my opinion, a little “gimmicky” since it’s impossible to determine the user’s browser size at build time. This means that you have to make a choice to build for either mobile or desktop first. Then, if the user is using the other device, they’ll encounter the FOUC issue (flash of unstyled content).

For me, that’s big enough reason to not use static websites in the future.


What? Why aren’t you using media queries?


I am using media queries—while they work, they’ll flash the “unadapted” version depending on what component is rendered first. The alternative is to add some kind of loading aspect to each component but I think it’s not very scalable


they’ll flash the “unadapted” version depending on what component is rendered first

A normal CSS file is render-blocking, so the DOM content won't be displayed until the browser has downloaded it, parsed it, and knows what to display. If you move your main layout styles out of the component (presumably this is using styled-components or similar?) and into a CSS file that's defined as a <link> in the header of the page, then that problem will go away (or a <style> tag in the header if you're worried about the additional fetch). This is how browsers have been designed to work.


Use CSS to style the same site for different device sizes


Now you just need to configure a Lambda to take care of cache invalidation on CloudFront, based on s3:PutObject events. CloudFront also supports headers like "cache-control: public, max-age=3600, s-maxage=86400", so you can have quite aggressive caching on CloudFront where you can just invalidate it on demand anyway.

Personally I just listen for any event and invalidate all objects, since that's way cheaper than issuing an invalidation request for each object, but performance-wise that might not be the best option.
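
A minimal sketch of such an "invalidate everything" Lambda, assuming boto3 and a placeholder distribution id:

    # Hedged sketch of an "invalidate everything" Lambda, triggered by
    # s3:PutObject notifications. DISTRIBUTION_ID is a placeholder.
    import time
    import boto3

    DISTRIBUTION_ID = "E123EXAMPLE"  # hypothetical CloudFront distribution id
    cloudfront = boto3.client("cloudfront")

    def handler(event, context):
        # One wildcard path counts as a single invalidation, which is why this
        # is cheaper than invalidating each changed object individually.
        cloudfront.create_invalidation(
            DistributionId=DISTRIBUTION_ID,
            InvalidationBatch={
                "Paths": {"Quantity": 1, "Items": ["/*"]},
                "CallerReference": str(time.time()),
            },
        )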


Here's my terraform-ization of an S3 + Cloudflare setup for SSGs. I use Zola but that's incidental.

https://gitlab.com/bitemyapp/statically-yours

There's a weird bug with the `tf init` but it can be worked around by running `tf init`, deleting the providers in `.terraform`, then running again. Hashicorp has an open bug for the problem I think.

I could probably clean it up more but this has been okay for a couple sites I run.


What are good ways of managing your content with a static website?

For example, say you want a single index.html file, with each article contained in a separate content file - "blogpost01.html", "blogpost02.html", etc. - that you somehow include in the <body></body> of your index file?

Is there any way to do that with a static site, or something similar where your content is separated out into individual files?


Use a static site generator. The parts you touch for authoring posts will end up "included" during the build step, producing files that you don't touch which each redundantly contain the common markup. Or with an SPA generator like Gatsby, there wouldn't even be any redundancy (but client side JS is required).
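
To make the build-time "include" idea concrete, here's a toy sketch (hypothetical file names and placeholder marker, not how any particular generator works internally):

    # Toy sketch of build-time includes: a template with a placeholder plus
    # per-post content files, stitched together once at build time. Real
    # generators (Hugo, Jekyll, ...) do the same with far richer templating.
    from pathlib import Path

    template = Path("index.template.html").read_text()   # contains a {{posts}} marker
    posts = sorted(Path(".").glob("blogpost*.html"), reverse=True)
    body = "\n".join(p.read_text() for p in posts)

    Path("index.html").write_text(template.replace("{{posts}}", body))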


One example of the above is PyKwiki: https://github.com/nullism/pykwiki


One thing that these guides on using S3 never mention is to make sure to set alerts if the cost of the bucket grows too fast, i.e. your site gets popular.


We've also been using S3 + Cloudfront for our static website, it's super cheap and has been low touch.

We've run into two things though:

- We have very little control over TLS versions and cipher settings.

- We have to use a CNAME, so we can't point a bare domain at it, which also means we can't add our site to the HSTS preload list (I think there is now a way to purchase a dedicated IP; if anyone knows, please let me know).

Overall totally worth it though.


On your point about using CNAMEs - if you have the domain set up with Route 53, you can create an A/AAAA alias record on the domain apex pointing to CloudFront.
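
A rough sketch of that alias record with boto3, using placeholder zone and distribution values (an AAAA record would look the same with only the Type changed); Z2FDTNDATAQYW2 is CloudFront's fixed alias hosted zone id:

    # Hedged sketch: apex A alias record pointing to a CloudFront distribution.
    # Hosted zone id and distribution domain are placeholders.
    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone for example.com
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "example.com.",
                        "Type": "A",
                        "AliasTarget": {
                            "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront alias zone
                            "DNSName": "d111111abcdef8.cloudfront.net.",  # placeholder
                            "EvaluateTargetHealth": False,
                        },
                    },
                }
            ]
        },
    )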


I struggled with setting up static website hosting on S3, CloudFront, Route53 and ACM, so I created https://github.com/tobilg/serverless-aws-static-websites to automate this via the Serverless framework


Are there any guidelines for controlling cost when running a static website on S3+CloudFront?

Say my regular bill for this setup was $5 a month. Can I at least get a notification if I have a lot of visits and the bill goes over a threshold, say $50?

Also, is there any protection against things like DDoS that could quickly push the bill into the $100s and beyond?


AWS (and every other cloud provider I've used) has a pretty elaborate cost alerting system.


Alerts do nothing to stop a huge bill.


You can use a Lambda function that is triggered by a CloudWatch alarm when your bill approaches an arbitrary limit.

What you do in response to such an event is entirely up to you.
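
As a sketch, the alarm side might look like this with boto3, assuming billing alerts are enabled (the billing metric only lives in us-east-1) and using a placeholder SNS topic that could in turn trigger such a Lambda:

    # Hedged sketch: a CloudWatch billing alarm at $50 with an SNS action.
    # Topic ARN and account id are placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_alarm(
        AlarmName="static-site-bill-over-50-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=6 * 60 * 60,          # the billing metric updates a few times a day
        EvaluationPeriods=1,
        Threshold=50.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder
    )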


Why is this "serverless"? It's just rented server capacity on a shared host, right?


Serverless is an operational construct that refers to how much management of the underlying servers you have to do.


Yes. “Serverless” is an overly strong term that means you don’t even have a dedicated portion of a server.


For static websites, nothing beats the CLI experience of surge.sh: one (simple) command and it's deployed. It's so smooth that I forget it's there.

The way the API has been designed to make usage natural and guessable has inspired me a lot when building other CLI projects.


I prefer Google Cloud Run; the site need not be 100% static, though it should be stateless.


Deploying static sites is the original and best candidate for engineers to overengineer.


Does anyone know how I could measure website performance around the world, averaged across multiple runs, both for a web page and for downloading a large file (100 MB)? I want to verify that the CDN is indeed faster than my current hosting solution.


Why isn't this the third option: set the S3 bucket to private and give CloudFront access to it? I've been using that for a while and it seems to be working.
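
For anyone curious, that setup usually means creating a CloudFront origin access identity (OAI) and granting it read access on the private bucket; a hedged sketch of the bucket-policy half, with placeholder names:

    # Hedged sketch: bucket policy allowing only a CloudFront origin access
    # identity to read a private bucket. Bucket name and OAI id are placeholders.
    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLEOAI"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::example.com-site/*",
            }
        ],
    }

    boto3.client("s3").put_bucket_policy(
        Bucket="example.com-site",  # hypothetical private bucket
        Policy=json.dumps(policy),
    )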


This was literally what I was doing today; if I'd just checked Hacker News it would have saved me 3 hours of faffing with Lambda.


Static sites are overrated for most cases. Often you still need a backend for editing by non-technical folks, or just for easier handling of media, etc., so you're now running a server-side instance of some blog or CMS while adding a whole new detached frontend layer.

With things like Cloud Run that can run a container on-demand for every request, it's easier to just stick with regular Wordpress or Ghost server-side stacks.


> Cloud Run that can run a container on-demand for every request

People will bounce. It will take far too long to load. The point of a static site on a CDN is speed. If your goal is to get outside visitors, then you definitely do not want to do this.

I can understand Cloud Run for demoing small apps though.


Cloud Run just pauses the CPU, the container is still fully loaded and resumes very quickly. You can also cache the pages with a CDN.


> Static sites are overrated for most cases. Often you still need a backend for editing with non-technical folks or just easier handling of media, etc so you're now running a server-side instance of some blog or CMS while adding a whole new detached frontend layer.

Not for my blog.


most sites.

If you're editing Markdown files with only text and running a simple conversion to HTML then it'll work fine, but Ghost/WordPress in a container would work just as well.


Still can't do the same without paying hourly for a load balancer on Google Cloud. C'mon Google!


Actually, you should use Netlify or Firebase Hosting. Both have a CDN behind them.

Frameworks like Next.js (static site export) and Gatsby have a lot of tooling support for Firebase Hosting and Netlify.


Use Google Cloud Run, though I love Netlify.


Amplify really eases the pain with this.


It's amusing to me how programmers look back with disdain at web development in the 90s and how "you just uploaded files to an FTP server", and now we've circled back to the same thing, just with a whole lot more plumbing in the middle.


For me, Netlify is the right mix (I'm not affiliated with them). It has the 90s ease of use with modern tools and performance out of the box. It's simple to upload files or sync a repo, and it's free. I'm not sure why anyone would host a static website on the AWS stack.


> I'm not sure why anyone would host a static website on the AWS stack.

Well, I do that, so I guess I'll chime in.

First and foremost is the idea of not tying your site's build process and hosting to a company that could go under. Yes, Amazon could, but if I had to bet, Netlify will either cease to exist, be bought out, or have its free offering discontinued/changed long before that.

Second is that sometimes your site's build process is more complicated than what Netlify provides. For example, my site is a static Hugo site, which Netlify supports, but there's one crucial step in the build process where I turn my "resume.html" page into "resume.pdf" -- I have to run a Docker image that starts headless Chrome in order to render the PDF properly. As far as I can tell, that can't be rigged up in Netlify.

Finally, I was able to learn a lot about AWS by hosting my site this way -- setting up a Lambda to listen for GitHub commits, using the AWS API to launch an EC2 instance to build my site, how to configure S3 and CloudFront to serve up content properly, etc. And at the end of the day I get a site that I have complete control over and is approximately free to run.


> First and foremost is the idea of not tying your site's build process and hosting to a company that could go under.

Sounds like you're imposing a complex build process on yourself while getting locked into a web of AWS features to be honest.

Could you not skip generating the PDF resume since you've already got the HTML version to keep life simple?

If you stick to running a basic NPM script to build your site there's little lock-in with Netlify. You could also build your site on a CI platform then push the generated pages + assets to Netlify after.


> Sounds like you're imposing a complex build process on yourself while getting locked into a web of AWS features to be honest.

I should elaborate a little. My whole site is contained within a Docker container that has NPM, Yarn, headless-chrome, Hugo, etc. I can generate everything with a single command. The AWS bits are just there for hosting and responding to a GitHub commit. So the build process is portable-ish, but from what I saw Netlify doesn’t just let you do headless-chrome on a free plan[1].

> Could you not skip generating the PDF resume

Not really, the point is to automate everything away behind a single Git push.

1: https://community.netlify.com/t/using-zipped-binaries-how-to...


Why do you need a PDF resume that's built automatically though? Does it need to be in PDF? Does it change that often? Aren't resumes something you customise per job application?


For the first point, I'd say it's best to save the setup time by using a service like Netlify, and keep the DIY route for if/when they shut down.

For the PDF, there's probably some service that could be used to wire that up quickly. But I agree that can be the cut-off point where it might be simpler for some to use Docker.


Who is downvoting this? It seems to be a polite answer to the question asked.


I didn't vote on the comment but it came across to me as emblematic of what I feel is "wrong" with technology trends over the past decade. Where instead of solving business problems it's mostly building tools on top of tools or reinventing the wheel in the latest language/ecosystem. Programmers rarely produce anything of value anymore, but instead parade around their mastery of an increasingly complex toolchain that delivers the same end result.


You could use something like LaTeX to create your PDFs on Netlify during the build process. e.g. https://github.com/frangio/netlify-latex


It’s a page with some pretty fancy styling, a lot of it shared with the regular pages of my website. So it would be a ton of effort to port it to LaTeX.


You can install and run just about any binary: LaTeX being an example of a binary that generates PDFs. Just curious: what do you use to generate your PDFs?


The site is generated by Hugo, and the resume is one page on the site with some custom styling bits. Then a Docker container running headless Chrome plus some JS bits is used to export it to PDF.
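
A simplified sketch of that HTML-to-PDF step (not their exact setup; the binary name, port and page URL are placeholders, and the "JS bits" they mention are omitted):

    # Hedged sketch: shell out to headless Chrome/Chromium to print a built
    # page to PDF. Binary name varies by install; URL assumes a local Hugo server.
    import subprocess

    subprocess.run(
        [
            "google-chrome",
            "--headless",
            "--disable-gpu",
            "--no-sandbox",            # often needed inside Docker
            "--print-to-pdf=resume.pdf",
            "http://localhost:1313/resume.html",  # placeholder page URL
        ],
        check=True,
    )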


Thanks. I’ll give that a try as I’m still not happy with my PDF generation from headless CMS.


I’ll give you my answer: I optimized for the lowest cost for a solution I could forget about. And not some free option like GitHub Pages that might go away next year. True lowest-cost hosting for a static website. AWS was the cheapest for me.


Well... technically Netlify is AWS behind the scenes, isn't it?


Does it matter? To people at our distance from it the process is a magical one where a git repo becomes a website. What technology stack they use, on which services, etc, is an abstraction that isn't particularly relevant to us.


Until it breaks. Then you are either completely stuck, or trying to untangle the thick, multilayer abstraction hosting your "static" website.


If it broke it would take me all of minutes to have it hosted elsewhere. There is essentially zero lock-in with the service, even if you're using some of more complicated build steps.


Yup. Netlify have nailed it.


"growth hacking"


It's sad how Netlify feels the need to shill their service when others are launching their product - like you have done with this account.


How do you know?


I don’t work for or have any relationship with Netlify. I do think their service is wonderful.


To the point that your handle on HN is your proclaimed love for the company? That's weird for an account less than 24 hour old.


It’s not weird. It’s just a username.


And there seems to be a rash of "whoa I stumbled on my old site's url at {90's hosting provider} and it is still up and running after all these years" comments lately. (And loads instantaneously with no UI framework).


Tech suffers significantly from memory loss.

To be fair, when you dive deep into complexity, it's easy to get lost in the weeds. The trick is to remember what you're doing and why you're doing it so you don't lose sight of goals while chopping through the complexity jungle.


That is comical, but doing what is described in this post in the 90s would have required load-balanced tower PCs strategically located all around the world, with another few computers acting as load balancers, all maintaining copies of the website and cached server responses, plus an expensive SSL certificate.

We get all of that for free now, or for 50 cents a month, just for static websites.

There is almost no comparison, except for acknowledging that the plumbing sucks, all of which you can make better if you want.


> programmers look back with disdain at web development in the 90s and how "you just uploaded files to a FTP server"

Since when? I thought most people liked the simplicity and low barrier to entry…


It's amusing how quickly people forget how bad FTP was -- complex connection setup made it difficult to proxy, lack of encryption made it easy to sniff, managing FTP accounts was really hard without sophisticated auditing and management tools.

SSH/SFTP helped a lot on the security front, but even for small companies, it's way easier to use cloud buckets than FTP.


Yes but this time instead of FTP...it's a git remote! Progress!


But it's AWS-branded plumbing now!


it's in the cloud!


and buckets of cash for amazon...


Great article!



