
Quote from FAA: "Our preliminary work has traced the outage to a damaged database file." UK news source The Independent is reporting that Nav Canada's NOTAM system also suffered an issue.[1] Speculation: corrupting input, either international or North American? e.g. UTF-8, SQL escape, CSV quoting.

edit: Better reporting of Canada's issue from Canada's CBC (and frankly, better reporting about the US, too). [2] "In Canada, pilots were still able to read NOTAMs, but there was an outage that meant new notices couldn't be entered into the system, NAV Canada said on social media." "NAV Canada said it did not believe the outage was related to the one in the U.S., but it said it was investigating."

[1] https://www.independent.co.uk/news/world/americas/canada-fli... HN thread https://news.ycombinator.com/item?id=34347520

[2] https://www.cbc.ca/news/us-air-travel-chaos-notam-outage-1.6...




> "Our preliminary work has traced the outage to a damaged database file."

> Speculation: corrupting input, either international or North American? e.g. UTF-8, SQL escape, CSV quoting.

I read it as filesystem corruption from a bad disk, coupled with redundancy that doesn't actually work.


What kind of database are they using, I wonder, to end up with such a spectacular failure?


Why would this be an indictment of any specific database technology? If your disk fails and corrupts the filesystem, you're toast, regardless of what database you are using.


The technology to detect and recover from disk failures does exist. RAID and ZFS, for example.

I would not expect a disk failure to replicate to the backup.


Having worked with critical infrastructure that lacks true in-depth oversight, it wouldn't surprise me if DR plans were never executed or exercised in a meaningful manner.


This is quite common.

Comprehensive DR testing is really difficult. Many orgs settle for “on paper,” or “in theory” substitutions for real testing.

If they do it right, no problem.

Doing it right, though … there’s the rub …


Yep, and if you ship WAL transaction logs to standby databases/replicas, corrupt blocks or lost writes in the primary database won't be propagated to the standbys (unlike with OS filesystem or storage-level replication).

Edit: Should add "won't be silently propagated"
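
To make the "silently" part concrete, here's a toy sketch in Python — not any real engine's WAL format, just the idea: each shipped record carries its own checksum, so the standby can refuse a damaged record instead of replaying it into its own data files, whereas block- or filesystem-level replication would copy the damage verbatim.

    import struct
    import zlib

    def make_wal_record(payload: bytes) -> bytes:
        # Record = payload length + CRC32 of payload + payload.
        return struct.pack(">II", len(payload), zlib.crc32(payload)) + payload

    def apply_on_standby(record: bytes) -> bytes:
        length, crc = struct.unpack(">II", record[:8])
        payload = record[8:8 + length]
        if zlib.crc32(payload) != crc:
            # Damage is detected and refused, not silently replayed
            # into the standby's own data files.
            raise ValueError("corrupt WAL record, refusing to apply")
        return payload

    rec = make_wal_record(b"INSERT INTO notams VALUES (...)")
    apply_on_standby(rec)                            # applies cleanly
    apply_on_standby(rec[:10] + b"\x00" + rec[11:])  # raises ValueError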


Neither checks the checksum on every read as that would be performance-prohibitive. So "bad data on drive -> db does something with corrupted data and saves corrupted transformation back to disk" is very much possible, just extremely unlikely.

But they said nothing about it being a bad drive, just a corrupted data file, which very well might be a software bug or operator error.


This is wrong, both ZFS and btrfs verify the checksum on every read.

It's not typically a performance concern because computing checksums is fast on modern hardware. Besides, historically IO was much slower than CPU.
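
A rough sketch of that read path (Python, with sha256 standing in for the checksum; ZFS actually defaults to fletcher4 for data blocks). Verifying a 128 KiB block takes on the order of tens to hundreds of microseconds of CPU, which is cheap next to the I/O it sits behind:

    import hashlib
    import os
    import time

    BLOCK = os.urandom(128 * 1024)           # one 128 KiB record
    stored = hashlib.sha256(BLOCK).digest()   # checksum kept in metadata

    def read_block(data: bytes, checksum: bytes) -> bytes:
        # Verify on every read, the way checksumming filesystems do;
        # on a mismatch they would fetch a good copy from a mirror/parity.
        if hashlib.sha256(data).digest() != checksum:
            raise IOError("checksum mismatch: corrupt block")
        return data

    t0 = time.perf_counter()
    for _ in range(1000):
        read_block(BLOCK, stored)
    print(f"1000 verified reads in {time.perf_counter() - t0:.3f}s")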


> Neither checks the checksum on every read as that would be performance-prohibitive.

It is expensive. It might be prohibitive in a very competitive environment. This is hardly the case here. Safety first!


RAID does not really protect you from bit rot that tends to happen from time to time. ZFS might because it checksums the blocks. But if the corruption happens in memory and then it is transferred to disk and replicated, then from a disk perspective the data was valid.


> If your disk fails and corrupts the filesystem, you're toast, regardless of what database you are using.

There are databases that maintain redundant copies and can tolerate disk / replica failure. e.g. Cassandra.


Journal databases are specifically designed to avoid catastrophic corruption in the event of disk failure. The corrupt pages should be detected and reported, and the database will function fine without them.


If you mean journaling file systems, no. They prevent data corruption in the case of system crash or power outage.

That's different from filesystems that do checksumming (zfs, btrfs). Those can detect corruption.

In any case, if you use a database it handles these things by itself (see ACID). However, I don't believe they can necessarily detect disk corruption in all cases (the way checksumming file systems can).


We had Oracle corrupt itself due to a software bug. It similarly went undetected for some time and thus ended up in the backups.


Well, for example, MySQL/MariaDB using utf8 tables will instantly go down if someone inserts a single multibyte emoji character, and the only way out is to recreate all tables as utf8mb4 and reimport all data.


Surely nobody would use that format and allow a commit message including emojis to cause an effective DoS for a large SonarQube project.


It doesn't block inserts with invalid data? I thought that was the whole point of telling the database what types you're using


MySQL historically isn't very good about blocking bad data. Sometimes it would silently truncate strings to fit the column type, for example. It's getting better as time goes on, though.


It does, and the poster above is incompetent.


I have had customer production sites go down due to this issue when emojis first arrived. It was a common issue in 2015. I would hope it is fixed by now!


Having dealt with utf8mb4 data being inserted into the utf8mb3 columns many many times in the past, I've never had a table "instantly go down". You either get silent truncation or a refusal to insert the data.


Well, your applications haven’t used a serialized or JSON column. That’s how you go from truncation to downtime.

That said, I do remember this being an issue even with plain text.


I need more info about this.


In MySQL, the `utf8` character set has historically been an alias for `utf8mb3`. The alias is deprecated as of 8.0 and will eventually be switched to mean `utf8mb4` instead. The `utf8mb3` charset means the data is UTF-8 encoded, but only supports up to 3 bytes per character, instead of the full 4 bytes needed for characters outside the Basic Multilingual Plane, such as emoji.

https://en.wikipedia.org/wiki/UTF-8#MySQL_utf8mb3
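
The byte counts are the whole story; a quick Python check:

    # Anything in the Basic Multilingual Plane fits in 3 UTF-8 bytes;
    # emoji live in the supplementary planes and need 4.
    for ch in ["A", "é", "✈", "😀"]:
        print(ch, len(ch.encode("utf-8")), "bytes")
    # A 1 bytes / é 2 bytes / ✈ 3 bytes / 😀 4 bytes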


Imagine you have one node running as a replica of another, and it is the one that takes the backups. Now suppose it backs up the corrupted data at some point, and that backup happens to overwrite their cold backup. They could have used any number of database technologies and still had this failure; it's really about their methodology for taking backups. They should have many points in time to choose from when rebuilding their database. They should be testing their databases before backing them up blindly.


> They should be testing their databases before backing them up blindly.

Oh you mean they should be testing/validating the generated backup db file before replicating it to long-term archive ...
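
A minimal sketch of that idea, using Python and SQLite purely for illustration (nobody outside the FAA knows what the real NOTAM database is): have the engine verify the freshly produced backup before it is allowed anywhere near the archive.

    import shutil
    import sqlite3

    def archive_backup(backup_path: str, archive_path: str) -> None:
        # Open the freshly generated backup and ask the engine itself to
        # verify it before it goes anywhere near the long-term archive.
        conn = sqlite3.connect(backup_path)
        try:
            status = conn.execute("PRAGMA integrity_check").fetchone()[0]
            if status != "ok":
                raise RuntimeError(f"backup failed integrity check: {status}")
            # A couple of sanity queries against known tables wouldn't hurt either.
        finally:
            conn.close()
        shutil.copy2(backup_path, archive_path)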


Way back when use cases were a thing, I used to chide people for saying that Backup was a use case.

No, Restore is a use case.

(Replace "use case" with "requirement" or "user story"...)


A corollary to this would be: “Backups are worthless. Restores are priceless.”


semantics but yes


Maybe Bobby Tables is getting into the NOTAM system now

https://xkcd.com/327/
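
For completeness, the standard fix, sketched with Python's sqlite3 and a made-up notams table: placeholders keep hostile input as data rather than letting it be parsed as SQL.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE notams (id INTEGER PRIMARY KEY, body TEXT)")

    hostile = "Robert'); DROP TABLE notams;--"

    # The placeholder keeps the input as data; it is never parsed as SQL.
    conn.execute("INSERT INTO notams (body) VALUES (?)", (hostile,))
    print(conn.execute("SELECT body FROM notams").fetchone()[0])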


It could also be corruption caused by log-based database replication


Someone put an emoji in a NOTAM


Something like ... "CAVOK :)"


The CBC is great. I watch the news show every night even as an American.


To paraphrase Sarah Palin: Canada can see the USA from their front window. (A great deal of time is spent looking through that window)


Funny! As a Canadian, I am riveted by CNN/MSNBC/PBS for news on American political developments.


This could be due to some third-party service caching the NOTAMs. Even in the US, ForeFlight had all NOTAMs available but just couldn't fetch new ones.


I have no idea how often NOTAMs typically get updated, but would that explain why flights were able to operate for a few hours before a ground stop was ultimately called?


Unicode. Am I right?


> traced the outage to a damaged database file

Lil' Bobby Tables strikes again!


My first thought as well.



