If you're a small business - I get it. It's best practice to explicitly whitelist whatever information you're logging, but on a smaller team a hard-to-diagnose issue might lead people to "log everything so we can sort it out later". Facebook is Facebook: whether this decision was the product of the corporation as a whole, a small dev team, or a highly paid consultant/third party, Facebook is a big enough company that they don't have the "I didn't realize..." excuse anymore.
I've worked for multiple Fortune 25 companies, and that excuse does not fly. Not in banking or healthcare, where breaches of privacy/confidentiality are actually illegal, rather than merely distasteful. Small teams and careless devs doing that sort of bad logging will be caught and corrected by strict security oversight.
This is the sort of thing that leads the HN crowd to sneer at the old, slow ways of the enterprise world.
The plaintext passwords weren't a breach or a leak. If you punish Facebook for disclosing it, they will be less inclined to share such information in the future.
If we can't trust a company to audit itself fairly (and in general we can't), then it should bear the cost of an external audit of its processes. Many "boring" industries have exactly these sorts of requirements.
There might not have been a large-scale public dump of the passwords, but if 20,000[1] employees had access to logs containing plaintext passwords, no guarantee could ever convince me that zero of them read or used customer passwords for personal purposes.
What's to stop a malicious ex-lover from grabbing a FB password and reading that person's private messages? If FB didn't even know there were passwords in plaintext, they very likely weren't auditing log access as much as was needed.
Malicious insiders don't need your password to access your data. If you are concerned about malicious insiders then plaintext password logs don't matter.
I'm more concerned with plaintext passwords because most people use the same passwords over and over, so if you can break one system, you can break others.
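This is exactly the risk salted hashing is meant to contain: even if a log or database leaks, the reusable password itself never appears anywhere. A minimal sketch using the stdlib `hashlib.scrypt` (the cost parameters here are illustrative, not a tuned recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted scrypt digest; store (salt, digest), never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    return salt, digest

def verify_password(password, salt, digest):
    """Constant-time comparison against the stored digest."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)
```

With only salt and digest on disk, a leaked log line can't be replayed against the victim's accounts on other sites.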
With FB's dataset and employee count, it would be insane not to be concerned with malicious insiders. Even if you completely discount morality and honesty where Facebook is concerned (and I do), there's substantial liability risk, PR risk, regulatory risk and commercial paranoia - the last of which, I suspect, was their biggest concern, at least until recently.
There is no oversight in a DevOps world. New systems come up in protoduction, celebrations are had, and the data protection officers and security compliance officers remain in blissful ignorance.
Hrm, so you checked the Apache access logs, or maybe an error log. What about the system logs? Does the login request spin up a bash script and pass the password to it?
I don't think it's trivial to guarantee non-existence.
> I don't think it's trivial to guarantee non-existence.
I disagree in this case. Log messages don't spontaneously appear in arbitrary places. If the developers understand what their software is doing and how their systems are configured then they should know where to check for the logging messages.
Until someone turned on logging of the full request body on the load balancer/proxy, or on some middlebox you didn't even know existed that was the TLS termination point in production.
So their incompetence lies in lacking configuration management in production as opposed to a lack of testing? Why are they manually configuring production servers anyway at that scale?
They are managing a lot of things. As a company they have full responsibility to be aware of all of them, but I'd guess that at the scale of Facebook's ops, regular developers never interact with the team managing their attempt at high availability by scattering servers across different hosts, data centers, legal jurisdictions, etc. In fact, I'd imagine that at an op the size of Facebook, that sort of problem has a dedicated department.
That said, someone should have been watching for this stuff and failed to do so (or the role should have existed and didn't), so I'm not excusing them - but this is not a trivial thing to protect against.
Facebook hires some of the top software developers and engineers on the planet. If not leaking plaintext passwords is too high an expectation for them, then nothing that isn't already public knowledge should ever be put into any computer system. As a profession we should demand that our peers do better than this.
We're all people of varying skills; I would never assume an innately high bar for any activity a human does. Only by requiring that the bar be maintained at a given level, and regularly checking and enforcing that requirement, can we be reasonably sure it is. And this isn't just `echo $password` - from what I've been able to discern, the way these passwords got into the log files is pretty obscure and roundabout. Facebook is absolutely responsible and needs to be held to account, but the mistake is understandable.
You have no problem with the create_user function? Obviously it's pseudocode. My point is that there are a finite number of log locations, and checking them for an instance of a known string isn't difficult.
There are a finite number of log locations now. How do you intend to operate an integration test across the entirety of Facebook's stack to detect new log locations?
Bureaucracy like this is what kills teams and products. There's no guarantee it works, and every change or commit should already have privacy and data protection in mind anyway. You also don't know what you don't know: you could, for all your knowledge, believe nothing is being logged when in fact a subsystem outside your scope is doing the logging.
You assert that it's trivial, yet you're adding more layers to protect against exactly that sort of thing happening. The naivety of assuming all problems are trivial is what gets people and companies into trouble in the first place.