Usually it comes down to:

- easy client login for template tweaks, uploads, and redirects

- forms

- extremely minor server-side functionality thing

WordPress is perfect for this.




“Extremely minor server-side functionality thing” is where SSGs fail miserably. All of a sudden you’re using SaaS forms or whatever else, or you’re self-hosting some other tremendously inferior CMS, and your margins go out the window.

A WordPress killer that accomplishes the things you mentioned would interest me. Statamic looks interesting in this context, but it wasn’t super well formed four years ago when I dug deep into this ecosystem.


20 years ago, CGI covered this use case pretty well. I wonder if anyone has tried to make a modern equivalent.

Maybe something along the lines of a Cloudflare Worker (but using the open source stack), or possibly something minimal and flexible based on WASM or JS that could be invoked from different servers, could work.


CGI / PHP is really not a bad way to work in 2024, even though I was traumatized by PHP in my early days. I’ve only experimented with it, though; I think it would be hard to maintain at scale, and I’ve heard there are not-insignificant security concerns. It’s a lot easier to hire somebody to maintain a WordPress install than it is to mess with Apache or PHP.

When I figured out that WordPress is literally executing every piece of PHP every time the site loads, it was kind of a “woah” moment for me.

Interesting idea with the Cloudflare thing


A problem with WordPress, and with most CGI setups, is that there is no privilege separation between the script and anything else on the site. It would be nice to let individual pieces of server-side script be deployed such that they can only access their own resources.

I don’t think Cloudflare workers, as deployed by Cloudflare, really tick that box either. Some of the university “scripts” systems, with CGI backed by AFS, came kind of close.


> I don’t think Cloudflare workers, as deployed by Cloudflare, really tick that box either.

They mostly do. You can map different Workers to different paths in your site. A Worker can only access the resources it is explicitly bound to. E.g. if you create a KV namespace for storage, a worker can only access that namespace if you configure it with a "binding" (a capability in an environment variable) pointing at the KV namespace. Workers on your account without the binding cannot access the KV namespace at all. Some more on the philosophy in this blog post:

https://blog.cloudflare.com/workers-environment-live-object-...
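To make that concrete, the wiring looks roughly like this (all names here are made up for illustration):

    // wrangler.toml (hypothetical):
    //   [[kv_namespaces]]
    //   binding = "SURVEYS"
    //   id = "<namespace-id>"

    interface Env {
      SURVEYS: KVNamespace; // the capability; no binding, no access
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        // Only a Worker configured with the SURVEYS binding can
        // read or write this namespace.
        const body = await env.SURVEYS.get("results");
        return new Response(body ?? "no results yet");
      },
    };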

There are a couple of caveats that exist for legacy reasons, but that I'd like to fix, eventually:

* The HTTP cache is zone-scoped. Two Workers running on the same zone (domain name) can poison each other’s cache via the Cache API. TBH I want to rip out the whole Cache API and replace it with something entirely different; it is a bit of a mess (partly the spec’s fault, partly our implementation’s fault).

* Origin servers are also zone-scoped. All workers running on a zone are able to send requests directly to the zone's origin server (without going back through Cloudflare's security checks). We're working on introducing an "origin binding" instead, and creating a compat flag that forces `fetch()` to always go back to the "front door" even when fetching from the same zone.

Note that if you want to safely run code from third parties that could be outright malicious, you can use Workers for Platforms:

https://developers.cloudflare.com/cloudflare-for-platforms/w...

(I'm the tech lead of Cloudflare Workers.)

(EDIT: lol wrote this without reading your username. Hi, Andy!)


The worker binding system seems pretty great. I'm thinking more about the configuration / deployment mechanism.

In the old days, if I wanted to deploy a little script (on scripts.myuniversity.edu, for example), I would stick the file in an appropriate location (~username/cgi-bin, for example), and the scripts would appear (be routed, in modern parlance, but the route was entirely pre-determined) at a given URL, and they could access a certain set of paths (actually, anything that was configured appropriately via the AFS permission system). Notably, no interaction was needed between me and the actual administrator of scripts.myuniversity.edu, nor could my script do anything outside of what AFS let it do (and whatever the almost-certainly-leaky sandbox it ran in allowed by accident).

But Cloudflare has a fancy web UI [0], and it is 100% unclear that there's even a place in the UI (or the command-line API) where something like "the user survey team gets to install workers that are accessible at www.site.com/surveys, and those workers may be bound to resources that are set up by the same team" would fit. And reading the "role" docs:

https://developers.cloudflare.com/fundamentals/setup/manage-...

does not inspire confidence that it's even possible to pull this off right now.

This kind of thing is a hard problem to solve. A textual config language like the worker binding system (as I understand it) or, say, the Tailscale ACL system is nice in that a single person can see it, version it, change it, search-and-replace it, ask an LLM about it, etc. But it starts to get gnarly when the goal is to delegate partial authority in a clean way. Not that monstrosities like IAM or whatever Google calls their system are much better in that regard. [1]

[0] Which I utterly and completely despise, but that's another story. Cloudflare, Apple, and Microsoft should all share some drinks and tell stories of how their nonsense control panels evolved over time and never quite got fixed. At least MS has somewhat of an excuse in that their control panels are really quite old compared to the others.

[1] In the specific case of Google, which I have recently used and disliked, it's Really Really Fun to try to grant a fine-grained permission to, say, a service account. As far as I can tell, the docs for the command line are awful, and the UI kind-of-sort-of works but involves a step where you have to create a role and then wait, and wait, and wait, and wait, and maybe the UI actually notices that the role exists at some point. Thanks, Google. This is, of course, a nonstarter if one is delegating the ability to do something useful like create two resources and link them to each other without being able to see other resources.

(Hi Kenton!)


So, two possibilities:

1. If you have a relatively small number of users whom you want to permit to deploy stuff on parts of a Cloudflare account, you may need to wait for finer-grained RBAC controls to be fleshed out more. It's being worked on. I really hope it doesn't end up as hopelessly confusing as it is on every other cloud provider.

2. If you have a HUGE number of users who should be able to deploy stuff (like, all the students at a university), you probably want to build something on Workers for Platforms. You can offer your own completely separate UI/API for deploying things such that your users never have to know Cloudflare is involved (other than that their code is written in the style of a Cloudflare Worker).
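For case 2, the dispatch side can be a pretty small Worker. A rough sketch (the binding name and the per-user naming scheme are hypothetical):

    // Dispatch Worker: routes each request to an untrusted user Worker
    // in a Workers for Platforms dispatch namespace bound as STUDENTS.
    interface Env {
      STUDENTS: DispatchNamespace;
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        // "/alice/survey" -> user Worker named "alice"
        const name = new URL(request.url).pathname.split("/")[1];
        try {
          return await env.STUDENTS.get(name).fetch(request);
        } catch {
          return new Response("no such app", { status: 404 });
        }
      },
    };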


Workers for Platforms looks pretty neat, and I hadn’t seen it before. I don’t think it’s targeted at the low-effort, CGI-like, little-bit-of-script-on-an-otherwise-mostly-static-site market, though. But maybe someone could build that on top of it?

Heck, one could probably even build middleware to deploy regular workers for this type of use, where the owner of the worker has no Cloudflare credentials at all and only interacts with the middleware. (Other than the origin and cache API issues.)


Right, that's exactly the idea. You could build your own CGI-like hosting platform using WfP to run untrusted JavaScript.

To be clear, the two caveats don't apply to WfP. The Cache API is disabled there. The origin thing can be solved by installing an "outbound worker", which intercepts all outbound requests from the untrusted workers and so can block unwanted requests to origin.
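Roughly, an outbound worker is just another fetch handler sitting between the user Workers and the network. A sketch (the origin hostname is hypothetical):

    // Outbound Worker: every fetch() made by an untrusted user Worker
    // is routed through this handler first.
    export default {
      async fetch(request: Request): Promise<Response> {
        const url = new URL(request.url);
        if (url.hostname === "origin.internal.example.com") {
          // Don't let user code bypass the front door to reach origin.
          return new Response("origin access denied", { status: 403 });
        }
        return fetch(request); // everything else passes through
      },
    };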


I agree re: the simplicity of the old way of doing things. There's another benefit most cgi-bin setups had: no build step and no exotic runtime requirements.

E.g., you'd drop some HTML into your public_html folder and an executable into the cgi-bin dir. I would performance-engineer some scripts into C++ binaries and just check out the source & run make to produce binaries in place. This approach made it easy to use local dev tooling to test/debug stuff instantly via old-school Emacs TRAMP/sshfs.
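The whole CGI contract was tiny: read the environment, write headers, a blank line, and a body to stdout. A sketch of the same idea under a modern runtime like Deno (the file name and flags are just illustrative):

    #!/usr/bin/env -S deno run --allow-env
    // hypothetical cgi-bin/hello.ts
    const query = Deno.env.get("QUERY_STRING") ?? "";
    console.log("Content-Type: text/plain\n"); // extra \n ends the headers
    console.log(`hello, your query string was: ${query}`);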

There is a system that replicates the simplicity of what we lost (while letting you use fancy modern JS frameworks): https://www.smallweb.run/. It also offers a path to Cloudflare-like edge computing migration without any code change, via Deno Deploy. With smallweb, one drops a bunch of files into one's own dir (e.g., you could give a dir to each student), which results in https://<dir>.domain.name running stuff on demand in that dir. No build step, no exotic runtime to transpile into, full ability to use local dev tooling to test/debug stuff instantly. It's still early days for smallweb, but it's specifically designed with the philosophy of "edit some files and stuff runs... while remaining compatible with the standard Deno way of doing things".
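If I have the convention right, a whole smallweb app can be a single file exporting a fetch handler (the dir name is hypothetical):

    // ~/smallweb/hello/main.ts — served on demand at https://hello.domain.name
    export default {
      fetch(req: Request): Response {
        return new Response(`hello from ${new URL(req.url).hostname}\n`);
      },
    };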

I love the concept of Cloudflare Workers [1], their fancy state management, bindings, etc., and the fact that they took inspiration from cgi-bin. However, the fact remains that it's an exotic system with weird restrictions (hello, restructuring your apps around 500 MB chunk limits, a la https://github.com/cloudflare/serverless-registry). This limitation can make it difficult to work with libraries that aren't tested against the Cloudflare runtime. 90% of the code I write would run better on Cloudflare than on Deno (due to the awesome cold startups), but dealing with these restrictions is too much engineering overhead upfront.

In contrast, with Deno/smallweb, I just drop some files into a directory and don't need to bother with package.json, lockfiles, etc., but I can gradually opt into those and then gradually switch to a CI/CD mode of operation. You can't expect a student new to web development to deal with the exoticness of Cloudflare's solution from day 0.

[1] Kenton, it's a fantastic design; I sang its praises in https://taras.glek.net/posts/cloudflare-pages-kind-of-amazin.... But after trying equivalents in the Deno ecosystem, like val.town and smallweb, I would love for it to be less exotic (I know you guys have more Node compat work happening).


> is where SSGs fail miserably

It's trivial to bundle and deploy some server-side code with a static site on Vercel, Cloudflare, Firebase, or Netlify.

Usually you simply create a /functions directory with your JS code, and that's it.
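On Cloudflare Pages, for instance, a form handler living next to your static files can be as small as this (the path and field names are hypothetical):

    // functions/api/form.ts — file-based routing exposes this at /api/form
    export async function onRequestPost(context: { request: Request }): Promise<Response> {
      const data = await context.request.formData();
      // ...store or forward the submission (elided)...
      return new Response(`thanks, ${data.get("name") ?? "anonymous"}!`, { status: 201 });
    }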



