I see this is based on a Spiegel article. Can someone link to the full ruling? One thing I am curious about is how the number was derived.
Regardless, even if you disagree with the scope of the law (which I do, though not the intent, of course), it is a very welcome sign to see some actual enforcement happening. An under-enforced or subjectively enforced law of this size is much worse than a reasonably enforced one.
Here's the Data Protection Officer's press release (in German): https://www.baden-wuerttemberg.datenschutz.de/lfdi-baden-wue... - the relevant paragraph is towards the end. The tl;dr is that the fine is rather low because they were very cooperative and quick to follow suggestions for improvement, and have additional improvements planned. Also, they likely couldn't afford more, and the goal of GDPR fines is not only to be effective and a deterrent, but also proportionate. The DPO says that including the cost of IT security measures taken and planned, the total expense for Knuddels is a six-digit figure.
I think the cost of complying with the law is just the cost of conducting a business lawfully. So those parts should not be spoken of as if they were part of the damages.
Well obviously they should have improved their security practices years ago. Nonetheless, they incurred the costs now, as a result of self-reporting a data breach (which is mandatory). That's the number that's relevant as a deterrent for others, so I think it's fine to report it this way.
This requirement is technically fulfilled by encrypting transmissions with TLS and storage with disk encryption like LUKS or Veracrypt. It does not really say anything about password hashing.
Note that the actual cost to Knuddels is much higher, because you also have to include the cost of implementing proper security measures. The Data Protection Officer's statement (https://www.baden-wuerttemberg.datenschutz.de/lfdi-baden-wue..., in German) states that the total cost to Knuddels is a six figure sum.
For a larger company, it should be considerably higher.
Also, the ruling mentioned a reduced fine for cooperation and quick remediation.
This probably wouldn't play out so well with a bloated structure and process, as you mentioned.
I've looked at many of those Tumblr posts; most of them show that the website sends you a welcome email with your password in plain text, which is bad practice, but doesn't prove that the password is stored in plain text in the database.
No. I've implemented precisely that and it doesn't prove what you think.
What you do is have one single function create the user, pick a random password, set it in the database (which in my case uses a perfectly sensible hash) and send the user email. The cleartext password in the mail comes from the function's local string variable, not from the database.
Whether doing this is a good idea is another question. IMO it usually isn't. But this kind of mail does not prove cleartext access.
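For illustration, here's a minimal Python sketch of that pattern (the function names, the db handle, and the bcrypt/smtplib usage are my assumptions, not the actual code being described):

    import secrets
    import smtplib
    from email.message import EmailMessage

    import bcrypt  # any sensible password hash works here


    def create_user(db, email_address):
        # The random initial password only ever lives in this local variable.
        password = secrets.token_urlsafe(12)

        # Only the hash goes into the database.
        pw_hash = bcrypt.hashpw(password.encode(), bcrypt.gensalt())
        db.execute("INSERT INTO users (email, pw_hash) VALUES (?, ?)",
                   (email_address, pw_hash))

        # The cleartext in the welcome mail comes from the local variable above,
        # not from anything stored in the database.
        msg = EmailMessage()
        msg["To"] = email_address
        msg["Subject"] = "Welcome"
        msg.set_content(f"Your initial password is: {password}")
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

So receiving such a mail only proves the password existed in memory at signup, which is unavoidable anyway.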
How often do you purge your mail server's logs (or if you use a mail API, how often do they purge theirs)? If the answer is "never, I didn't think of that", then all your users' initial passwords are sitting there for the taking.
Of course, you may have a system that forces a password reset on login. That won't help the users who have never logged in. Those accounts are freely available to a hacker.
Plaintext passwords anywhere are a really bad idea.
You make it sound as if most mail servers log message contents. None of the servers I've used in the past 25 years did that (sendmail 5 on ultrix, later smail 3, then zmailer, then postfix).
The recipients' servers store the message with the password, of course, but they also store the other messages the same user has received from the same server, which in my case contain the same information as what could be accessed with the password. So the password offers very little additional value to an attacker, compared to just reading the mail.
What if the email with the plain-text password is sent after a user presses the "I forgot my password" button? So far that's the only type of email in which I've encountered the password being sent in plain text.
It's a matter of storing it in plaintext or not, which any sane developer knows not to do. The codebase will always have access to your plaintext password at one point or another, whether it's at signup before it's hashed and stored, or at login before the hashes are compared.
If someone has access to your codebase you've got bigger problems than plaintext passwords anyway.
The codebase will always have access to your plaintext password at one point or another.
Not necessarily. The simple solution is client-side hashing.
You could combine that with challenge-response to only reveal the password hash to the server once.
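A rough sketch of the client-side half, assuming a client that can run a KDF (the salt derivation and iteration count here are purely illustrative; real protocols like SCRAM also do the challenge-response part properly):

    import hashlib

    def client_hash(password: str, username: str) -> str:
        # Illustrative per-user salt; a real scheme would fetch a random,
        # per-user salt from the server instead (as SCRAM does).
        salt = hashlib.sha256(("example-site:" + username).encode()).digest()
        # The slow KDF runs on the client, so the server only ever sees this value.
        dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return dk.hex()

The server then treats that derived value as the "password" and hashes it again before storing it, so neither the plaintext nor a directly usable credential sits in the database.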
>> If someone has access to your codebase you've got bigger problems than plaintext passwords anyway.
You're joking, right? The context of the discussion is when your database has already leaked. Then chances are your e-mail database has leaked, too. You may have leaked code, too. That doesn't necessarily mean someone can execute arbitrary code on your server, though.
Presuming you mean access to change the codebase, this would also be access to add a key logger. If someone has write access to your production codebase, you've lost all data.
Well, you can work your way up with ever increasing levels of safety by adding first MD5, then moving to SHA-1, then adding a salt, and eventually something sensible like bcrypt. That's four more press releases right there :-)
The passwords used to log in were actually hashed. But they stored another copy in plaintext on purpose, to censor the user's password if they wrote it into chat...
"Passwords were hashed as a hash in 2016, but the unchanged version of the passwords has been retained, so users can not filter their own password via our platform via a filter"
It just has to run once, at filter creation time. Disallow creation of the filter if your password is in it.
Note that the reason quoted from Knuddels is different from the reason others are giving: "so that users could not send their own password via our platform"
This sounds dumb. So if my password was "something" (yes, it's a terrible password), it would just keep censoring that word every time I write that into the chat?
You would use a unique hash key per user's password. Each word only has to be hashed with that key once, and ideally you'd cache the hashes and prehash known words.
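A hypothetical sketch of that idea (a fast keyed hash like HMAC-SHA256 is cheap enough to run per word, though note it is itself offline-crackable if both the key and the digest leak, which is the tradeoff raised elsewhere in this thread):

    import hashlib
    import hmac
    from functools import lru_cache

    # Per-user filter key, generated once at signup and stored with the account.
    def make_filter_entry(user_key: bytes, password: str) -> str:
        return hmac.new(user_key, password.encode(), hashlib.sha256).hexdigest()

    @lru_cache(maxsize=100_000)
    def word_digest(user_key: bytes, word: str) -> str:
        return hmac.new(user_key, word.encode(), hashlib.sha256).hexdigest()

    def censor(message: str, user_key: bytes, filter_entry: str) -> str:
        return " ".join("****" if word_digest(user_key, w) == filter_entry else w
                        for w in message.split())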
In my experience, this is mostly what the GDPR is. There is no excuse for storing plaintext passwords in 2014+ and 20k is a fair fine for a mid-size company.
€20k doesn't seem much to me. Cheaper than taking on a security consultant.
Not that you need a security consultant to know passwords shouldn't be stored (at all, never mind in plaintext).
If they're doing that then they're likely being sloppy elsewhere, and by only paying €20k across the last n years they might have saved a €million.
If your company is in the same boat, it's probably not worth bothering to get any security issues addressed. Why address security? Just pay the much smaller fine if you ever get caught ...
I couldn't find Knuddels annual profit but they appear to have a dozen staff, which suggests to me the fine is too small.
Well, the fine is only 20k€ because they were very cooperative, quick to fix the worst issues, and promised to continue improving their security further. According to the Data Protection Officer's statement their total expenses were in the six figures. They also explicitly state that the fine wasn't higher as not to place a disproportionate burden on the company's finances, which probably means that they wouldn't have been able to afford a significantly higher fine. Contrary to the fear-mongering on this site, the purpose of GDPR isn't to fine companies out of existence.
> If you could skip your tax bill for a few years, but get a much smaller fine if you cooperated when caught then you'd be silly to actually pay.
If you do the work ahead of time, you pay the cost of doing the work. If you wait for the fine, you pay the cost of doing the work plus the fine. It doesn't take a lot of fine to make doing the work to begin with worth it -- basically just accounting for chance of getting away with it and time value of money, which goes down as the government gets better at catching more people quicker, as should be their primary goal for something like this.
Maybe, but there's much more work that needs doing to secure PII than just not having plaintext passwords. So they can seemingly avoid doing all that work too, and avoid maintaining those systems (with their staffing costs). And you get a leg-up over the competition, who can't otherwise use the cash they put into security.
That means those with poor security regimes may "win" because the costs of poor PII hygiene are externalised.
It would certainly be nice to imagine all the 2 million UK corporations are addressing PII security rather than hiding and hoping not to get a fine ...
> Maybe, but there's much more work that needs doing to secure PII than just not having plaintext passwords. So, they can seemingly avoid doing all that work too, and maintaining those systems (with staffing costs).
If the regulators catch someone breaking a rule like this, the consequence should obviously involve an audit that looks for other violations and requires them to fix those too.
But even if it didn't, your conclusion wouldn't follow, because they would still have no incentive to fix the other problems unless they expected to get caught for not fixing them. But if they did expect to get caught then the numerous predicted small fines would be a sufficient deterrent.
> It would certainly be nice to imagine all the 2 million UK corporations are addressing PII security rather than hiding and hoping not to get a fine
It's the hiding and hoping not to get a fine that's the reason large fines don't work. Higher penalties can't deter someone who doesn't expect to be caught.
What works is smaller penalties with vigorous enforcement.
Parking fines aren't designed for deterrence, they're designed for revenue generation. If parking is €2 and the fine is €1 on top of the parking cost if you get caught (plus €5 worth of inconvenience doing fine paperwork), and there is a 90% chance of getting caught, nobody parks illegally -- and therefore there is no fine revenue.
But if you make it a $200 fine with a one in a thousand chance of getting caught, then it's profitable, because then many people rationally take the risk and become a source of citation revenue. But the violation rate is higher, so if that was your goal, it fails -- unless you're still doing vigorous enforcement, in which case high fines are once again unnecessary.
If bad security is a consciously chosen company strategy, then sure, the fine is too small.
But most places don't do dumb stuff like this because they've smartly chosen to be dumb. It's just thoughtlessness, just focusing on the wrong things. And one of those wrong things is "saving" money by being too cheap.
If a cheapskate client asked me to store passwords in plaintext on the theory they could save a few days of dev work, I'd love to be able to say, "Sorry, that's such a bad idea it's illegal. Fines start at €20k and go up." Their cheapness meter would swing into the red and they'd leave me alone.
If the company had actually chosen to be broadly negligent, it's clear the regulator could have imposed a much bigger fine, so I think your case is covered too.
The fine is small because they fully complied with all inquiries and took proper steps to inform users and improve security. In other words, they did what the actual goal is. Making money is not a goal of GDPR; ensuring data safety is.
Even the threat of death penalty doesn't stop crimes.
True. If there is no punishment and a threat is toothless, nobody acts on it (that's why the big GDPR outcry also came only this year, after the two-year introductory phase).
However, if fines are too high, what happens is that companies try everything to hide the fault and lie to avoid them. Here a company complied with all requests and improved its security (which according to the data protection agency led to six-digit costs, though how you measure that is an open question - see other comments), and therefore got a low punishment.
The punishment also has another effect: it makes it clear that fines are being collected. If it were higher, Knuddels would go to court and we'd only have an example case in two or more years.
The goal is to improve data safety. That goal was achieved.
There is a significant difference between deterring personal crimes (e.g. robbery at gunpoint, carjacking, murder) and deterring 'economic crime'.
Some examples of economic crime would be: not implementing security, tax fraud, overweight freight trucks; speeding to make a delivery on time (whilst on the clock).
The first kind of crime is generally committed by people who 'know they are wrong, but feel like they don't have other options' or people who 'know they are wrong, but don't give a f*ck about that'. Especially the first type of person won't respond to more punishment.
The second kind of crime is much more calculated. Here, the response to harsher punishment would be a lot better.
This is generally why the fines on overweight trucks are so high. It is actually required in order to make the calculation come out against driving overweight.
I used the introductory example in response to the call for "extremely punitive" fines. Extreme fines lead to companies trying to be clever about hiding their faults; moderate fines more likely lead to companies coming forward by themselves.
On the comparison with overweight trucks: I doubt anybody builds an insecure system to gain an economic benefit; not using state-of-the-art technology is a mistake/stupidity/carelessness/.... Nobody ever said "look how much money we saved by skipping bcrypt's CPU time!"
"Das Unternehmen hatte sich am 08. September 2018 mit einer Datenpannenmeldung an den LfDI gewandt [...] Gegenüber dem LfDI legte das Unternehmen in vorbildlicher Weise sowohl Datenverarbeitungs- und Unternehmensstrukturen als auch eigene Versäumnisse offen."
("The company contacted the data protection agency on September 8th 2018[...] In exemplary manner they gave access to company and data management processes and highlighted their on omissions")
https://www.baden-wuerttemberg.datenschutz.de/wp-content/upl...
> There is a significant difference between deterring personal crimes (e.g. robbery at gunpoint, carjacking, murder) and deterring 'economic crime'.
The relationship is the opposite of the one you're describing.
The problem with personal crimes is that everyone has a different utility function. If you could steal a million dollars at risk of a month in jail, many people would take the risk. Fewer at six months in jail. Fewer still at a year. Fewer still at five years. So you need as high a penalty as you can get without violating proportionality (or reaching the point of diminishing returns, once nearly everyone who can be is deterred).
There are some people who aren't even deterred by the death penalty, e.g. because they'd rather have the money needed to save their kid's life even if it costs them their own, but that's pretty rare. Most of the people who aren't deterred by even large penalties are simply the people who don't expect to be caught, or don't realize they were violating the law to begin with.
By contrast, for economic crimes, nearly everyone's utility function is the same. If you can save $2000 by taking a 50% risk of having to pay the $2000 anyway plus a $1500 fine, it's profitable. If you can save $2000 by taking a 50% risk of paying the $2000 anyway plus a $2500 fine, it isn't. A $3000 fine provides no additional deterrence, and anything more is just a money grab. Even the $1500 fine may be higher than strictly necessary, because getting fined at all results in a PR hit that independently provides a non-zero deterrence value.
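Spelling out that arithmetic with the same figures (purely illustrative):

    # Expected cost of skipping $2000 of work, with a 50% chance of being caught.
    savings = 2000
    p_caught = 0.5

    for fine in (1500, 2500, 3000):
        # If caught, you end up doing the work anyway, plus paying the fine.
        expected_cost = p_caught * (savings + fine)
        print(fine, expected_cost,
              "profitable" if expected_cost < savings else "not profitable")

    # 1500 -> 1750.0 (profitable), 2500 -> 2250.0 (not), 3000 -> 2500.0 (adds nothing new)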
That isn't to say that a small fine won't leave a large number of violators, but they're no longer the people overtly doing the calculations. They're the people who assign negligible probability to getting caught, or who don't even realize they're violating the law. No amount of higher penalties will deter them, the only thing that works against that group is vigorous enforcement -- which works fine (better even) with modest penalties, because all you really need to do to get those groups into compliance is to tap them on the shoulder and explain how they're not.
It really isn't. The cost of a $2000 fine is huge for a poor person; for a richer person, while significant, it is not debilitating. If you've got a 10% chance of getting caught, a rich person can afford it; getting caught really doesn't hurt that much.
That's why progressive justice systems use means tested fines for things like speeding.
I'm going to guess you're relatively wealthy, your analysis seems entirely wrong to me.
What you get with small fines is that people will pay even if they didn't deserve the fine, because of the cost in time and effort to challenge it.
If a company can save €100k for multiple years, with the only downside being that if they're the 1 in 10,000 that are caught they'll face a €20k fine, the financial analysis - morals aside - says don't bother fixing anything, unless the €20k would sink you.
> The cost of a $2000 for a poor person is huge, the cost for a richer person - whilst significant - is not debilitating.
Which is irrelevant for economic issues because both values are in the same units. An hour may be worth more than $500 for a rich person and not a poor person, but $3500 is more than $2000 for everybody.
> That's why progressive justice systems use means tested fines for things like speeding.
Then the super rich will hire a chauffeur to do their speeding for them, or fly in a helicopter, so all you're doing is creating a differential between the low and middle income people. But then either the fine is oppressively high for middle income people or is an inadequate deterrent for lower income people.
Dollars do have a declining marginal utility as you get more of them, but the relationship isn't linear in income. Someone who makes $60,000 may have effectively the same disposable income as someone who makes $30,000 (i.e. both near zero), because the first person has higher costs (housing/transportation/other cost of living) needed to live where the higher-paying job exists. You also end up penalizing the person who has "double the income" because they have three kids to support and have to work two jobs. Means-tested fines are a populist farce.
> What you get with small fines is people will pay, even if they didn't deserve the fine, because of the cost of time/effort to challenge it.
This is not a deterrence issue, and can be solved by reimbursing the person for the entire true cost of the resources and time taken to successfully challenge a false claim against them.
> If a company can save €100k for multiple years, the only downside being that if they're the 1:10000 that are caught they'll have a €20k fine, the financial analysis - morals aside - says don't pay, unless the €20k would sink you.
If the €20k would sink you then surely the €100k/year would, so the amount of the fine in that case is irrelevant. The real problem in your scenario is the 1:10000 chance of getting caught. If you could clear €100k/year for ten years with a 1:10000 chance of getting caught, the fine would have to be ~€10B, which would obviously annihilate any entity for which €100k/year was a meaningful amount of money to be worth skimping to begin with. Which means that no amount exists that could act as an adequate deterrent for a small organization and that probability of getting caught. Any amount over their total enterprise value couldn't actually be paid and therefore doesn't act as a deterrent.
What you need is to improve the chances that they'll be caught. In which case you don't need such a large fine.
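For concreteness, the break-even fine implied by those numbers (illustrative arithmetic only):

    # To deter, p_caught * fine has to exceed the expected gain from non-compliance.
    gain_per_year = 100_000
    years = 10
    p_caught = 1 / 10_000

    required_fine = gain_per_year * years / p_caught
    print(f"{required_fine:,.0f}")  # 10,000,000,000 -- the ~10B figure above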
> I couldn't find Knuddels annual profit but they appear to have a dozen staff, which suggests to me the fine is too small.
On the other hand they seem to have at least as many open positions. This is either a sign of strong growth or a sign of inability to offer competitive pay. For a struggling pre-Facebook social web relic, it's easy to guess which one it is.
Seems like they got punished for informing the government about the hack, which obviously gives the next company that gets hacked a reason to try to hide it.
They’d have to report it themselves. This is a worse value proposition than regular blackmail, where you take an existing violation, which the target already knows is illegal and has already shown willingness to conceal from authorities (or is not illegal at all, but e.g. just embarrassing), and threaten leaking it. In the proposed scheme, the mere reception of the threat itself creates a new situation for the receiver which must be reported. Paying off the blackmailer puts you in a worse spot than before. Not so with “ordinary”, Hollywood movie blackmail.
Nitpick: I don't think that they would have to report themselves if what the attacker claims isn't actually true. They could still fail horribly at verifying the claim (and subsequently pay the appropriate "tax" for that failure)
If the blackmailer can figure it out, others can. That takes away the leverage.
Best strategy: ignore the blackmailer, improve the security of your system, and take privacy seriously. (Doing such a cleanup also helps to reduce fines, which further reduces the blackmailer's leverage.)
They were doing this so they could filter out the passwords from chats (i.e. to make it so users can't give out their passwords to other users). Not saying this justifies it, but it's interesting.
It's possible to do that without storing the passwords in plain text, though! Run each word of the chat through the same hash+salt mechanism and compare it to what you have stored.
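Roughly what that naive version looks like, assuming the stored hash is bcrypt (which is exactly the part the reply below points out gets expensive at chat volume):

    import bcrypt  # assumed; whatever the site already uses for logins

    def message_contains_password(message: str, stored_hash: bytes) -> bool:
        # One full bcrypt check per word: fine for logins, painful per chat message.
        return any(bcrypt.checkpw(word.encode(), stored_hash)
                   for word in message.split())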
Assuming they're using a suitable hashing algorithm for passwords (ie, Argon2, bcrypt, scrypt, PBKDF2), this approach would be prohibitively expensive, especially for a chat platform, with presumably lots of messages.
Also, you probably can't just try hashing each word, since there could be whitespace and punctuation in the password text, so I think you'd have to hash all possible substrings of each message to be able to reliably catch passwords.
Obviously, though, they shouldn't have been storing them in plaintext.
Store the length L of the password, its salted hash H, and the XOR of all its bytes, X.
For every message typed, compute a running XOR over each sequence of L bytes (2 XORs per character, so as good as free). Whenever it equals X (about once every 64 letters or so, because typical text doesn't use all bits of each byte equally), compute the salted hash of the last L characters and compare it with H.
Unicode and Unicode normalization will complicate that, but I think it should be fast enough for a chat.
You probably can also improve on that factor 32 by storing multiple XOR-like (but slightly more computationally expensive) hashes and computing multiple running totals.
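A minimal sketch of that running-XOR prefilter, byte-oriented and ignoring the Unicode caveats mentioned above (the bcrypt usage and the names are my assumptions):

    import bcrypt  # assumed; only invoked on the rare XOR matches

    def message_contains_password(message: str, pw_len: int,
                                  pw_xor: int, salted_hash: bytes) -> bool:
        data = message.encode("utf-8")
        running = 0
        for i, b in enumerate(data):
            running ^= b                     # byte enters the window
            if i >= pw_len:
                running ^= data[i - pw_len]  # byte leaves the window
            if i >= pw_len - 1 and running == pw_xor:
                candidate = data[i - pw_len + 1:i + 1]
                # Only now pay for the expensive salted hash comparison.
                if bcrypt.checkpw(candidate, salted_hash):
                    return True
        return False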
Given that this is to protect users from falling for scammers who claim they need their password to help them, you may be able to run it on the user’s machine.
I fear, however, that a scammer will just ask them to type their password with a space inserted, spell it in the NATO spelling alphabet, or whatever. If you fall for a scammer, that won’t stop you from giving them your password.
I did think of similar approaches, but anything I could think of that helps you to quickly determine if a given string contains the password also helps an attacker if the passwords and salts are compromised.
In the suggested case, storing the length of the password alone massively reduces the search space, and storing the XOR (of the plaintext with the hash, I think you're suggesting?) negates the value of using a hashing algorithm suitable for passwords, since the point is that checking if a password matches a hash is an expensive operation.
But what if people have multi-word passwords? At that point the solutions become so over-engineered (either use some n-gram-like setup to detect passwords being posted, or save a hash for each separate word of the user's password, which also decreases security since you then know the user has a multi-word password) that you might as well drop the feature.
That significantly reduces entropy in the password.
Also, the premise is faulty, because as soon as users figure out they can't type their password in the chat, they'll just describe it in words or split it into two pieces etc.
Give them a big scary message "Never give out your password to strangers" when the censoring happens, because it's highly likely somebody is pretending to be an admin asking for a password in that situation.
I thought the same, although such a filter would be intended to help out unknowing users who might give their password to a stranger. People who use passphrases may know enough to not do that in the first place.
It could however in general have a problematic side-effect if the password is a common word that could be guessed from surrounding context when censored that way. Something I'd find a lot more likely here than passwords with spaces.
What if this is part of someone's policy, with the knowledge of users of course? For example an app for the technically illiterate or for small children?