Yes, but have you been part of the development & testing effort for mission-critical software (e.g. a Class I or II medical device)? It's not true in all cases, but for the most part the level of QA that goes into such devices before release is significantly higher than that of your average product. This is why regulation is required.
GCE downtime just means people lose money, it's not life-or-death. Skimping on QA in order to reduce costs and get to market faster is a perfectly reasonable decision when the consequences are so mundane.
I understand what you mean, but that generalises too much about what people use GCE, public clouds, and self-hosted servers for, especially going forward. It is not all convenience applications, game backends, etc. What people use AWS/GCE for these days is hugely varied; even the public sector uses the AWS Gov region, for example. The consequences of downtime aren't just lost money; they can be life-and-death, and some applications need solid QA even if they're hosted in a public cloud.
It may (emphasise 'may') be medical data shared via GCE/AWS that gets delayed just before a surgery (ok, edge case), or a bug fix to a critical GPS model that happens to be used by an ambulance, or even by a taxi carrying a pregnant woman about to go into labour. Or a simple general medical self-diagnosis information site that by chance could have saved someone in that time slot. Or any other random non-medical usage involving a server and data of some kind that happens to be in GCE.
Yes, critical real-time systems are often on-premise or in self-hosted data centres, but more and more are not, especially systems viewed as non-critical that are in some cases indirectly critical.
You make a good point, but in the end the responsibility is on the life-critical application (e.g. medical software, a device, a self-driving car) to ensure that it has been properly QA'd and that all of its dependencies (including any cloud services or frameworks it is built upon) meet its safety requirements. The event of an app server or cloud service experiencing downtime would very much have to be planned for as part of a risk management exercise. Ignoring that possibility would be negligent.
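To illustrate the "plan for downtime" point: one common mitigation is retrying the primary service with backoff and failing over to a standby. This is a toy sketch, not any particular cloud SDK; every function name here is hypothetical, and the "endpoints" are just callables standing in for real services:

```python
import time

def call_with_failover(primary, secondary, retries=2, backoff=0.1):
    """Try the primary endpoint; after repeated failures, fail over.

    `primary` and `secondary` are callables standing in for service
    endpoints (hypothetical names, not a real API).
    """
    for attempt in range(retries):
        try:
            return primary()
        except ConnectionError:
            # Exponential backoff before retrying the primary.
            time.sleep(backoff * (2 ** attempt))
    # Primary is treated as down: fall back to the standby endpoint.
    return secondary()

# Simulated outage: the primary always fails, the standby succeeds.
def flaky_primary():
    raise ConnectionError("primary cloud region unreachable")

def standby_secondary():
    return "served from standby region"

print(call_with_failover(flaky_primary, standby_secondary))
```

A real risk-management exercise would go further (health checks, data replication, degraded-mode behaviour), but even this much means an outage like GCE's becomes a handled event rather than a failure mode nobody considered.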