Until someone turned on full-request-body logging on the load balancer, proxy, or some other middlebox you didn't even know existed that was the TLS termination point in production.
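To make the failure mode concrete, here's a hypothetical sketch (plain Python/WSGI, nothing to do with Facebook's actual stack; all names are mine): a "debug" middleware at the TLS termination point logs raw request bodies, and because everything past TLS termination is plaintext, login credentials land in the log verbatim.

```python
import io
import logging

log = logging.getLogger("proxy.debug")

class BodyLoggingMiddleware:
    """Wraps a WSGI app and logs every request body it passes through."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        body = environ["wsgi.input"].read()
        # The mistake: the full decrypted form body goes straight to the log.
        log.warning("request body: %s", body.decode("utf-8", "replace"))
        environ["wsgi.input"] = io.BytesIO(body)  # replay the body for the app
        return self.app(environ, start_response)

def login_app(environ, start_response):
    # Stand-in for the real login handler, which hashes the password properly.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

# Simulate one login POST and capture what the middleware logged.
records = []
capture = logging.Handler()
capture.emit = lambda record: records.append(record.getMessage())
log.addHandler(capture)

app = BodyLoggingMiddleware(login_app)
environ = {"wsgi.input": io.BytesIO(b"user=alice&password=hunter2")}
result = app(environ, lambda status, headers: None)
# records[0] now contains the plaintext password, even though the
# application itself never logged or stored it.
```

The application code is blameless here; the leak lives entirely in infrastructure config, which is exactly why it's hard to catch from the developer's seat.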
So their incompetence lies in lacking configuration management in production, as opposed to a lack of testing? Why are they manually configuring production servers anyway, at that scale?
They are managing a lot of things. As a company they have full responsibility to be aware of all of them, but I'd guess that at the scale of Facebook's ops, regular developers never interact with the team managing their attempt at high availability by scattering servers across different hosts, data centers, legal jurisdictions, etc. In fact, I'd imagine that at an op the size of Facebook that sort of problem has a dedicated department.
That said, someone should have been watching for this stuff and failed to do so (or that role failed to exist), so I'm not excusing them, but this is not a trivial thing to protect against.
Facebook hires some of the top software developers and engineers on the planet. If not leaking plaintext passwords is too high an expectation for them, then nothing that isn't public knowledge should ever be put into any computer system. As a profession we should demand that our peers do better than this.
We're all people of varying skills; I would never assume an innate high bar for any activity a human does. Only by requiring that the bar be maintained at a given level, and by regularly checking and enforcing that requirement, can we be reasonably sure it is. And this isn't just `echo $password`: the way these passwords got into the log file is (from what I've been able to discern) pretty obscure and roundabout. Facebook is absolutely responsible and needs to be held to account, but the mistake is understandable.