Interesting find! I wonder what a good safeguard against this would be. Backing up your data offers some protection, but a file could silently become corrupted in the backup too.
Hm. You could make two backups, checksum each file, and save the checksums. Then you could regularly compare file contents against the initial checksums; on a mismatch, restore the file from the other backup.
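A minimal sketch of that scheme in Python (the manifest format and the two-directory layout are my own assumptions, not any standard tool):

```python
import hashlib
import shutil
from pathlib import Path

def sha256sum(path):
    """Compute the SHA-256 checksum of a file, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_and_repair(primary, secondary, manifest):
    """Compare each file in `primary` against its recorded checksum;
    on mismatch, restore that file from `secondary`.
    `manifest` maps relative paths to the checksums taken at backup time.
    Returns the list of files that were repaired."""
    repaired = []
    for rel, expected in manifest.items():
        target = Path(primary) / rel
        if sha256sum(target) != expected:
            shutil.copy2(Path(secondary) / rel, target)
            repaired.append(rel)
    return repaired
```

You'd build the manifest once when the backups are made, and run the verify pass on a schedule. Note this only helps if the second copy hasn't rotted in the same file, which is why checksumming both backups is the safer variant.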
Git-annex is nice for this; its fsck command checks each file against a stored checksum and can automatically request a new copy from another node if the check fails.
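Roughly, the workflow looks like this (a sketch assuming a repo that already has a git-annex remote configured; check the git-annex docs for your setup):

```shell
# inside an existing git-annex repository
git annex fsck        # verify annexed files against their recorded checksums;
                      # content that fails verification is set aside as bad
git annex get .       # re-fetch any missing/bad content from another remote
```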
This is why ECC is important. Many people pooh-pooh the idea that it's needed, but without it you've left a single vital part of the data path unprotected: checksums computed from bad RAM just faithfully record the corruption. RAM and disks are cheap; losing your data is not. The risk simply isn't worth saving literally a few dollars.