I think options 1 and 2 are exactly right. Depending on your system, you may not even have choice 1... it's actually quite hard to control a system so well that you can confidently say it will never exhaust the available resources. The JVM, for example, can only fail (with an `OutOfMemoryError`) once its heap usage exceeds `-Xmx`; it can't prevent that from happening in the first place. If a few JVMs (or Python, Ruby, or any other processes) were taking up all of your memory and the system started swapping like crazy, you could easily have the system crash anyway, since different processes don't "throttle" or coordinate with each other.
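To be fair, a JVM process can at least watch its *own* headroom via the standard `Runtime` API — though, as above, that says nothing about the neighbours on the box. A minimal sketch (the 85% threshold is an arbitrary number of my own, not a recommendation):

```java
// Sketch: a JVM can observe its own heap headroom, but only its own.
// Other processes on the machine remain invisible to it.
public class HeapHeadroom {
    // Fraction of the max heap (roughly, -Xmx) currently in use.
    static double usedFraction() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory(); // bytes in use
        return (double) used / rt.maxMemory();
    }

    public static void main(String[] args) {
        double f = usedFraction();
        System.out.printf("heap used: %.1f%% of max%n", f * 100);
        if (f > 0.85) {
            // A server could start shedding load here -- but it still
            // cannot see or throttle any other process's memory use.
            System.out.println("approaching heap limit");
        }
    }
}
```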
I'm really not sold on this attitude. Ordinary exception handling may be a lot of effort, but if you don't do it, that means your work is sloppy. Same here.
More generally, a program shouldn't ever outright explode, regardless of input. That's not quite the same thing as exit-with-error, but I suppose the distinction is ultimately fuzzy.
> The JVM, for example, can only fail once its heap usage exceeds `-Xmx`; it can't prevent that from happening in the first place
I don't see that the particulars of JVM memory management have any bearing here. A web server should be capable of detecting when it has reached capacity and of rejecting additional requests as appropriate. There's no reason this couldn't be done in Java. From a quick search, this is indeed how modern Java servers behave.
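For concreteness, capacity-based rejection can be as simple as a bounded permit pool. This is only a sketch in plain Java — the concurrency limit and the mapping of rejection to an HTTP 503 are my own illustration, not any particular server's API:

```java
import java.util.concurrent.Semaphore;

// Sketch of load shedding: a fixed pool of permits, one per in-flight
// request. When no permit is free, the request is rejected immediately
// instead of queueing (a real server would answer with a 503).
public class LoadShedder {
    private final Semaphore permits;

    LoadShedder(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Returns false (reject) when at capacity; never blocks.
    boolean tryHandle(Runnable request) {
        if (!permits.tryAcquire()) {
            return false; // over capacity -> 503 Service Unavailable
        }
        try {
            request.run();
            return true;
        } finally {
            permits.release();
        }
    }
}
```

With a limit of 1, a request arriving while another is being handled is turned away rather than piling up — which is exactly the "detect capacity and reject" behaviour described above.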
If there are many communicating processes, I can imagine that could complicate things greatly. That's a downside of using many communicating processes.
Don't assume it's an easy thing to avoid.