Just spraying dust with water will not remove it. Detergent helps, but most of the cleaning effect comes from mechanical agitation, e.g. wiping the glass.
The problem here begins even before the mathematical issue: websites that make their living from listing bookings have an incentive to offer a way to delete reviews that are not in line with what the owner wants to see.
You were commonly given a network uplink and a list of public IP addresses you were to set up on your box or boxes. An IPMI/BMC was not a given on a server, so if you broke it, you needed remote hands and probably remote brains too.
Virtualisation was in its early days, and most services were co-hosted directly on the server.
Software-defined networking and Open vSwitch were also not a thing back then. There were switches with support for VLANs, and you might have had a private network linking the frontend and backend boxes together.
Servers today can be configured remotely. They have their own management interfaces, so you can access the console and install an OS remotely. The network switches can be reconfigured on the fly, making the network topology reconfigurable online. Even storage can be mapped via SAN. The only hands-on issue left is hardware malfunction.
If I were to compare it with today, it was like having a wardrobe full of Raspberry Pis on a dumb switch, plugging in cables whenever changes were needed.
I only got €100,000 bound to one year, then a 20% discount on spend in the following year.
(I say "only" because that would certainly be a sweeter pill; €100,000 in "free" credits is enough to get you hooked, because you can really feel the free-ness in the moment.)
Synology does not use vanilla btrfs; they use a modified btrfs that runs on top of an mdraid mirror, which somehow communicates with the btrfs layer to supposedly fix errors when they occur. It's not clear how far behind upstream that fork is.
It's not legally required in terms of the law, but it is legally required in the sense that the legal department will complain if the banner is not there. Checklists and all that. ;)
Hand count in Slovenia: the first count is done at the local polling location, with a recount done by a central election commission and certification of the result. It works.
edit: First results are in within a couple of hours, depending on how many things are on the ballot.
Your first paragraph implies 8-bit bytes are a coincidence, which is not true. It was a design decision.
A byte in a computer is the smallest addressable memory location, and at that time this location contained a character. The way characters are encoded is called a code. Early computers used 5 bits, which was not enough for letters plus numerals; 6 bits was not enough to encode numerals plus lower- and upper-case letters, which eventually led to ASCII.
ASCII was also designed(!) to make some operations simple, e.g. turning text to upper or lower case only means setting or clearing one bit, as long as the code point is in a given range. This made some text operations much simpler and more performant, which is why pretty much everybody adopted ASCII.
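A minimal sketch of that single-bit trick, in Python for readability (assuming the standard ASCII layout where 'A' is 0x41 and 'a' is 0x61, differing only in bit 0x20):

    def to_upper(c):
        # Clear bit 0x20 only when the code point is in the lower-case range.
        return chr(ord(c) & ~0x20) if 'a' <= c <= 'z' else c

    def to_lower(c):
        # Set bit 0x20 only when the code point is in the upper-case range.
        return chr(ord(c) | 0x20) if 'A' <= c <= 'Z' else c

    print(to_upper('q'), to_lower('Q'))  # Q q

On the original hardware this was a single mask instruction rather than a function call, which is exactly where the performance win came from.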
Doing 7-bit ASCII operations with 6-bit bytes is almost impossible, and doing them with 18-bit words is wasteful.
When IBM was deciding on byte size, a number of other options were considered, but the most advantageous was the 8-bit byte. Note that even the 8-bit byte over-provisioned space for the character code, as ASCII was 7-bit. The extra bit offered quite some room for extra characters, which gave rise to the various extended character encodings. This isn't something I would expect a person living in the USA to know about, but users of other languages used the upper 128 code points for local, language-specific characters.
When going with the 8-bit byte, they also made bytes individually addressable, so a 32-bit integer is actually four 8-bit bytes. The 8-bit byte also allowed packing two BCD digits into one byte, and you could get them back out with a relatively simple operation.
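As a sketch (assuming the usual packed-BCD layout with one decimal digit per 4-bit nibble), the pack and unpack operations are just a shift and a mask:

    def pack_bcd(tens, ones):
        # Two decimal digits, one per nibble: 4 and 2 -> 0x42.
        return (tens << 4) | ones

    def unpack_bcd(b):
        # A shift and a mask recover the digits.
        return (b >> 4) & 0x0F, b & 0x0F

    print(hex(pack_bcd(4, 2)))   # 0x42
    print(unpack_bcd(0x42))      # (4, 2)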
Even though 8 bits was more than needed, it was deemed cost-effective and "reasonably economical of storage space". And being a power of two allowed addressing an individual bit in a cost-effective way, if a programmer needed to do so.
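To illustrate why the power of two matters (a sketch, not any particular machine's instruction set): locating a single bit in byte-addressed memory needs only a shift and a mask, no division:

    def get_bit(data: bytes, bit_index: int) -> int:
        byte_index = bit_index >> 3     # divide by 8 via a shift
        bit_offset = bit_index & 0x07   # remainder via a mask
        return (data[byte_index] >> bit_offset) & 1

    print(get_bit(b'\x01\x80', 15))  # 1, the highest bit of the second byte

With a 9- or 12-bit byte the same lookup would need an actual division, or more instructions.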
I think your post discounts and underestimates the performance gains and cost optimisations the 8-bit byte gave us at the time it mattered most, when computing power was scarce, and the fact that 8-bit bytes were simply "good enough": we wouldn't have gotten anything usable out of 9-, 10-, 12-, 14- or 16-bit bytes.
On the other hand, you overestimate the gains by pointing at imaginary problems, such as IPv4, which didn't even exist in the 1960s (yes, we ran out of public address space quite some time ago; no, it's not really a problem, since even on pure IPv6 one has NAT64), or negative Unix time: how on Earth did you get the idea that someone would use negative Unix timestamps to represent historic dates, when most of the time we can't even be sure what year it was?
I think the scariest thing would be having an odd number of bits in a byte; there would be a lot more people raging against the machine if the byte were 9 bits.
Given how many early computers had 18-bit or 36-bit words, the possibility of 9-bit bytes doesn’t seem as unrealistic as you suggest. I don’t see 8-bit bytes as being so inevitable.
This is nice. I once did something similar for my SO to chart artworks on a timeline by geographic area. Time was on the x-axis, different geographic areas were separate timelines stacked vertically, and different art periods were colour-coded.
The hardest thing was mapping approximate date descriptions (think "1st half of 17th century") into years that could be drawn on the chart.
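Something along these lines (a hypothetical sketch, not the actual code; the helper name and the supported phrasing are made up for illustration):

    import re

    def period_to_years(desc: str):
        # Map e.g. '1st half of 17th century' to a (start, end) year range.
        m = re.match(r"(1st|2nd) half of (\d+)(?:st|nd|rd|th) century", desc)
        if not m:
            raise ValueError(f"unrecognised period: {desc!r}")
        century_start = (int(m.group(2)) - 1) * 100
        if m.group(1) == "1st":
            return century_start, century_start + 50
        return century_start + 50, century_start + 100

    print(period_to_years("1st half of 17th century"))  # (1600, 1650)

Real descriptions presumably come in many more phrasings than this, which is what makes the mapping hard.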