This was not meant to look smart. I am not trying to compete or position myself; I am speaking freely here, so please do not read this with a rat-race-like competitive mindset - that is misunderstanding me. I was simply interested in the question of whether nobody else sees the SPOF (single point of failure).
Of course I am willing to help - the problem is described clearly in the author's first point: they are generating one "projectfile". Whatever this looks like in detail, it is a reduction of many to one. The distribution of 1500 git repos with thousands of files ends up relying on one single file. There is no technical need for that; in fact it throws away the power of distributed repos by reducing everything to reliance on the presence and integrity of one single text file.
The author describes this file being corrupted and thereby triggering a process that killed repos more or less at random - the incident is a good bad example of what can happen when you adopt the antipattern of making one out of many.
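To make that concrete, here is a minimal sketch (Python, with made-up paths and names - not the author's actual code) of what such a manifest-driven cleanup typically looks like, and why a truncated manifest turns directly into deletions:

    import shutil
    from pathlib import Path

    REPO_ROOT = Path("/srv/git")          # assumed location of the repos
    MANIFEST = Path("/srv/projectfile")   # the single generated list - the SPOF

    def cleanup_orphans():
        # If MANIFEST is truncated or half-written, `wanted` silently shrinks ...
        wanted = {line.strip() for line in MANIFEST.read_text().splitlines() if line.strip()}
        for repo in REPO_ROOT.iterdir():
            if repo.is_dir() and repo.name not in wanted:
                # ... and perfectly healthy repos are treated as orphans and deleted.
                shutil.rmtree(repo)

The deletion logic itself is not the bug; the bug is that its only source of truth is one generated file.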
When building redundant systems you always try to achieve the opposite - make many out of one, to eliminate the SPOF. You cannot scale this infinitely, because in the end we are all living on just one planet.
However, unnecessarily making one out of many is the worst thing you could do when building a backup or code distribution system. This antipattern still exists in many places and should be eliminated.
This is not about filesystem corruption etc. - the root cause of the destruction was one single project file. Do not do this. It is not critical for a backup system if it takes a long time to scan the filesystem for existing folders over and over again. A backup system is not a web app, where making one out of many (a.k.a. caching, in this case) might be a good idea; a backup system does not need this reduction.
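For comparison, a rough sketch of the "many stays many" approach, again with assumed paths: just rediscover the repos from the filesystem on every run. It is slower, but there is no single file whose corruption can mislead the job.

    from pathlib import Path

    REPO_ROOT = Path("/srv/git")  # assumed location, same as above

    def discover_repos():
        # Treat a directory as a repo if it looks like one on disk
        # (working copy with a .git entry, or a bare repo with a HEAD file).
        return [p for p in REPO_ROOT.iterdir()
                if p.is_dir() and ((p / ".git").exists() or (p / "HEAD").exists())]

    # Backup / distribution steps iterate over discover_repos() directly;
    # nothing is ever deleted based on a list that can be shorter than reality.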