Knuddels: Chat platform must pay after hacker attack fine (tellerreport.com)
95 points by MSeven on Nov 22, 2018 | 124 comments



I see this is based on a Spiegel article. Can someone link to the full ruling? One thing I am curious about is how the number was derived.

Regardless, even if you disagree with the scope of the law (which I do, though of course not with the intent), it is a very welcome sign to see some actual enforcement happening. An under- or subjectively enforced law of this scope is much worse than a reasonably enforced one.


Here's the Data Protection Officer's press release (in German): https://www.baden-wuerttemberg.datenschutz.de/lfdi-baden-wue... - the relevant paragraph is towards the end. The tl;dr is that the fine is rather low because they were very cooperative and quick to follow suggestions for improvement, and have additional improvements planned. Also, they likely couldn't afford more, and the goal of GDPR fines is not only to be effective and a deterrent, but also proportionate. The DPO says that including the cost of IT security measures taken and planned, the total expense for Knuddels is a six-digit figure.


I think the cost of complying with the law is just the cost of conducting business lawfully. So those parts should not be spoken of as if they were part of the damages.


Well obviously they should have improved their security practices years ago. Nonetheless, they incurred the costs now, as a result of self-reporting a data breach (which is mandatory). That's the number that's relevant as a deterrent for others, so I think it's fine to report it this way.


Full list of 5000+ websites that store their passwords in plain text: https://github.com/plaintextoffenders/plaintextoffenders/blo...


That list is very out of date. One of my clients appears on there and when we took over in 2012 we encrypted all their user credentials.


Their FAQ [1, 2] suggests that using an encrypted password still warrants an entry.

[1] http://plaintextoffenders.com/faq/devs

[2] http://plaintextoffenders.com/faq/non-devs


Encrypting passwords isn't really much better, though, is it? It's still reversible as there has to be a key somewhere.


Well you took over and did a terrible job.

Hash passwords, not encrypt.

Encryption is reversible; a hash isn't.
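To make the distinction concrete, here's a minimal Python sketch of one-way password storage using the standard library's PBKDF2 (the iteration count and salt size are illustrative choices, not a recommendation):

    import hashlib, hmac, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # Store only a random salt and a slow one-way hash, never the password.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        # Re-derive the hash from the login attempt; there is no way backwards.
        attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(attempt, digest)

With encryption, anyone holding the key can recover every password; with a hash like this, even the operator can only check guesses.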



IME banks often have poor security. And why not? They managed to rebrand robbery as identity fraud.


It's just infuriating, because credit card companies are the ones behind, for example, PCI, which has guidance like:

"8.4 Render all passwords unreadable during transmission and storage on all system components using strong cryptography"


This requirement is technically fulfilled by encrypting transmissions with TLS and storage with disk encryption like LUKS or Veracrypt. It does not really say anything about password hashing.


The screenshot shows that the plaintext password was sent over SMTP. So it isn't meeting that bar either.


What makes you think it's SMTP and not SMTPS?


Because you can't force the endpoints of your customers to all support that.


Ironic how the linked website itself is a "plain text offender": it doesn't use HTTPS, so its pages are sent over plain text.


Someone should send a friendly email to each of those offenders, linking the ruling.

It's also fair to say that the next few years will be a busy time for the government agencies tasked with GDPR enforcement.

(Assuming they do it properly, which falls within the responsibility of the relevant country)


They should, though assuming a bloated org structure and process, fixing it now is probably more expensive than the €20000 fine.


Note that the actual cost to Knuddels is much higher, because you also have to include the cost of implementing proper security measures. The Data Protection Officer's statement (https://www.baden-wuerttemberg.datenschutz.de/lfdi-baden-wue..., in German) states that the total cost to Knuddels is a six figure sum.


For a larger company, should be considerably higher.

Also, the ruling mentioned a reduced fine for cooperation and quick remediation. This probably wouldn't play out so well with a bloated structure and process, as you mentioned.


But the fine would have been more if they had refused to fix it. So the calculus isn’t straightforward.


I've looked at many of those Tumblr posts; most of them show that the website sends you a welcome email with your password in plain text, which is bad practice, but doesn't prove that the password is stored in plain text in the database.


The discover card email reads "Here's your _existing_ password".


But it proves that access to the website codebase would grant you access to those passwords.


No. I've implemented precisely that and it doesn't prove what you think.

What you do is have one single function create the user, pick a random password, set it in the database (which in my case uses a perfectly sensible hash) and send the user email. The cleartext password in the mail comes from the function's local string variable, not from the database.

Whether doing this is a good idea is another question. IMO it usually isn't. But this kind of mail does not prove cleartext access.
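For illustration, a sketch of that flow (the db/mailer helpers and names are hypothetical, and hash_password stands in for a salted one-way hash like the one sketched earlier in the thread):

    import secrets

    def create_user(db, mailer, email: str):
        # The cleartext lives only in this local variable.
        password = secrets.token_urlsafe(12)
        salt, digest = hash_password(password)  # hypothetical salted-hash helper
        db.insert_user(email=email, salt=salt, password_hash=digest)
        mailer.send(to=email, subject="Welcome",
                    body=f"Your initial password is: {password}")
        # `password` goes out of scope here; we never persisted the cleartext,
        # though it does now sit in the recipient's mailbox.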


How often do you purge your mailserver's logs (or if you use a mail API, how often do they purge their logs)? If it's "Never, I didn't think of that", then all your users' initial passwords are sitting there for the taking.

Of course, you may have a system that forces a password reset on login. That won't help the users who have never logged in. Those accounts are freely available to a hacker.

Plaintext passwords anywhere are a really bad idea.


You make it sound as if most mail servers log message contents. None of the servers I've used in the past 25 years did that (sendmail 5 on ultrix, later smail 3, then zmailer, then postfix).

The recipients' servers store the message with the password, of course, but they also store the other messages the same user has received from the same server, which in my case contain the same information as what could be accessed with the password. So the password offers very little additional value to an attacker, compared to just reading the mail.


What if the email with the plain text password is sent after a user presses the "I forgot my password" button? So far that is the only kind of email where I have encountered the password sent in plain text.


>> No. I've implemented precisely that and it doesn't prove what you think.

Well, the way I read the parent is that you can request a previously set password to be sent to your e-mail.


GP said … "sends you a welcome email with your password in plain text" … which seems clear enough.


Access to the codebase gives you access to the login form, and with it access to all data.


I did not imply you can change the codebase, by the way.


It's a matter of storing it in plaintext or not, which any sane developer knows not to. The codebase will always have access to your plaintext password at one point or another, whether it's on signup before they hash and store it, or when you login before comparing hashes.

If someone has access to your codebase you've got bigger problems than plaintext passwords anyway.


>> The codebase will always have access to your plaintext password at one point or another.

Not necessarily. The simple solution is client-side hashing. You could combine that with challenge-response to only reveal the password hash to the server once.
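A toy sketch of the idea, not a vetted protocol (real systems use something like SCRAM): the client derives a hash locally, reveals it to the server once at registration, and afterwards only proves knowledge of it via an HMAC over a fresh server nonce:

    import hashlib, hmac, os

    def client_hash(password: str, salt: bytes) -> bytes:
        # Derived on the client; the raw password never leaves the machine.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

    # Registration: the client sends client_hash(...) once; the server stores it.

    def server_challenge() -> bytes:
        return os.urandom(16)  # fresh nonce per login attempt

    def client_response(derived: bytes, nonce: bytes) -> bytes:
        return hmac.new(derived, nonce, hashlib.sha256).digest()

    def server_verify(stored: bytes, nonce: bytes, response: bytes) -> bool:
        expected = hmac.new(stored, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

The trade-off: the stored client hash is now password-equivalent for this service (whoever steals it can answer challenges), which is part of why real protocols are more involved.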


The client-side code that does the hashing is part of the codebase.


>> If someone has access to your codebase you've got bigger problems than plaintext passwords anyway.

You're joking, right? The context of the discussion is when your database has already leaked. Then chances are your e-mail database has leaked too. You may have leaked code, too. It doesn't necessarily mean someone can execute arbitrary code on your server, though, yet.


Presuming you mean access to change the codebase: that would also be access to add a key logger. If someone has write access to your production codebase, you've lost all data.


"Knuddels is safer than ever."

Corporate speak is just so funny. The bar for "safer than ever" is pretty low when your dev team hasn't heard of password hashing.


Well, you can work your way up with ever increasing levels of safety by adding first MD5, then moving to SHA-1, then adding a salt, and eventually something sensible like bcrypt. That's four more press releases right there :-)


And then you can claim "industry-leading encryption" when moving to argon2.


Is argon2 NIST approved? Can't say "military grade encryption" otherwise. /s


The passwords used to log in were actually hashed. But they stored another copy in plaintext on purpose, to censor the user's password if they wrote it into chat...


So they tried to improve security, but instead they weakened it.


They might also have prevented a lot of users from being hacked. We can't know for sure.


"Passwords were hashed as a hash in 2016, but the unchanged version of the passwords has been retained, so users can not filter their own password via our platform via a filter"

https://www.archynety.com/tech/why-knuddels-saved-his-passwo...

Which sounds odd. You could just hash/compare filter words.

I'm guessing similar issues too, like "no salt" or "same salt for all passwords".


If you do password hashing properly, using a key derivation function, you shouldn't be able to do that filtering efficiently at all.


It just has to run once, at filter creation time. Disallow creation of the filter if your password is in it.

Note the quoted reason from Knuddels is different from what others are saying the reason is: "so users can not filter their own password via our platform via a filter"

Edit: Apparently, the posted articles on this are misquoting things. Here's the original company response: https://forum.knuddels.de/ubbthreads.php?ubb=showflat&Number...

It does appear they were screening all chat text for the user's password after all.


What does filtering your own password mean on that platform?


Found the quote from a company rep on their chat forum:

https://forum.knuddels.de/ubbthreads.php?ubb=showflat&Number...


That sounds like a bad excuse...


This sounds dumb. So if my password was "something" (yes, it's a terrible password), it would just keep censoring that word every time I write that into the chat?


Sounds smart. Not only do you prevent users from giving out their passwords, you force them to choose better passwords.


Why didn't they just hash all the words they type, if you must (maybe with a Bloom filter)?


But if you use different hash keys, then you would have to hash each word multiple times.


You would use a unique hash key for a user's password. You'll only have to hash each word with that unique hash key once. And ideally cache the hashing and prehash known words.


You would have to hash every possible substring in a sentence.


Also with a quite expensive hashing algorithm since ideally the password wouldn't be just stored as unsalted MD5.


Yeah, can't see substring matching working.


So, there is no way at all to stop that from happening.


This is actually a cool idea - paying idiot tax


In my experience, this is mostly what the GDPR is. There is no excuse for storing plaintext passwords in 2014+ and 20k is a fair fine for a mid-size company.


€20k doesn't seem much to me. Cheaper than taking on a security consultant.

Not that you need a security consultant to know passwords shouldn't be stored (at all, nevermind plaintext).

If they're doing that then they're likely being sloppy elsewhere, and by only paying €20k across the last n years they might have saved a €million.

If your company is in the same boat, it's probably worth not bothering to get any security issues addressed. Why address security? Just pay the much smaller fine if you ever get caught ...

I couldn't find Knuddels annual profit but they appear to have a dozen staff, which suggests to me the fine is too small.


Well, the fine is only 20k€ because they were very cooperative, quick to fix the worst issues, and promised to continue improving their security further. According to the Data Protection Officer's statement their total expenses were in the six figures. They also explicitly state that the fine wasn't higher as not to place a disproportionate burden on the company's finances, which probably means that they wouldn't have been able to afford a significantly higher fine. Contrary to the fear-mongering on this site, the purpose of GDPR isn't to fine companies out of existence.


Yes, I saw that elsewhere when I'd finished writing.

More cooperative still would be doing the changes before you're caught.

If you could skip your tax bill for a few years, but get a much smaller fine if you cooperated when caught then you'd be silly to actually pay.

In short, in terms of pour encourager les autres this fails badly IMO.


> If you could skip your tax bill for a few years, but get a much smaller fine if you cooperated when caught then you'd be silly to actually pay.

If you do the work ahead of time, you pay the cost of doing the work. If you wait for the fine, you pay the cost of doing the work plus the fine. It doesn't take a lot of fine to make doing the work to begin with worth it -- basically just accounting for chance of getting away with it and time value of money, which goes down as the government gets better at catching more people quicker, as should be their primary goal for something like this.


Maybe, but there's much more work that needs doing to secure PII than just not having plaintext passwords. So, they can seemingly avoid doing all that work too, and maintaining those systems (with staffing costs). And you get a leg-up over the competition who can't use the cash that they put in to security.

That means those with poor security regimes may "win" because the costs of poor PII hygiene are externalised.

It would certainly be nice to imagine all the 2 million UK corporations are addressing PII security rather than hiding and hoping not to get a fine ...


> Maybe, but there's much more work that needs doing to secure PII than just not having plaintext passwords. So, they can seemingly avoid doing all that work too, and maintaining those systems (with staffing costs).

If the regulators catch someone breaking a rule like this, the consequence should obviously involve an audit that looks for other violations and requires them to fix those too.

But even if it didn't, your conclusion wouldn't follow, because they would still have no incentive to fix the other problems unless they expected to get caught for not fixing them. But if they did expect to get caught then the numerous predicted small fines would be a sufficient deterrent.

> It would certainly be nice to imagine all the 2 million UK corporations are addressing PII security rather than hiding and hoping not to get a fine

It's the hiding and hoping not to get a fine that's the reason large fines don't work. Higher penalties can't deter someone who doesn't expect to be caught.

What works is smaller penalties with vigorous enforcement.


Thanks for expounding your position; I still disagree, however.

The analysis is similar to a parking fine, if the fine is €1 but parking is €2 per hour then people will chance it.

If the fine is having your car towed and €200 then people will be damned sure not to go even a minute over their paid time.


Parking fines aren't designed for deterrence, they're designed for revenue generation. If parking is €2 and the fine is €1 on top of the parking cost if you get caught (plus €5 worth of inconvenience doing fine paperwork), and there is a 90% chance of getting caught, nobody parks illegally -- and therefore there is no fine revenue.

But if you make it a €200 fine with a one-in-a-thousand chance of getting caught, then it's profitable, because many people rationally take the risk and become a source of citation revenue. But the violation rate is higher, so if deterrence was your goal, it fails -- unless you're still doing vigorous enforcement, in which case high fines are once again unnecessary.


If bad security is a consciously chosen company strategy, then sure, the fine is too small.

But most places don't do dumb stuff like this because they've smartly chosen to be dumb. It's just thoughtlessness, just focusing on the wrong things. And one of those wrong things is "saving" money by being too cheap.

If a cheapskate client asked me to store passwords in plaintext on the theory they could save a few days of dev work, I'd love to be able to say, "Sorry, that's such a bad idea it's illegal. Fines start at €20k and go up." Their cheapness meter would swing into the red and they'd leave me alone.

If the company had actually chosen to be broadly negligent, it's clear the regulator could have imposed a much bigger fine, so I think your case is covered too.


The fine is small because they fully complied with all inquiries and took proper steps to inform users and improve security. That is, they did what the actual goal is. Making money is not a goal of GDPR; ensuring data safety is.


Fines need to be extremely punitive to make the risk-reward analysis favour fixing security _before_ the company gets caught.


Even the threat of the death penalty doesn't stop crimes.

True, if there is no punishment and a threat is toothless, nobody acts on it (that's why the big GDPR outcry also came only this year, after the two-year introductory phase).

However, if fines are too high, what happens is that companies try everything to hide the fault and lie to avoid the fines. Here a company complied with everything and improved its security (which, according to the data protection agency, led to six-digit costs, though how you measure this is a fair question; see other comments) and therefore got a low punishment.

The punishment also has another effect: it makes clear that fines are being collected. If it were higher, Knuddels would go to court and we'd have an example case only in two or more years.

The goal is to improve data safety. That goal was achieved.


There is a significant difference between deterring personal crimes (e.g. robbery at gunpoint, carjacking, murder) and deterring 'economic crime'.

Some examples of economic crime would be: not implementing security, tax fraud, overweight freight trucks; speeding to make a delivery on time (whilst on the clock).

The first kind of crime is generally committed by people who 'know they are wrong, but feel like they don't have other options' or people who 'know they are wrong, but don't give a f*ck about that'. Especially the first type of person won't respond to more punishment.

The second kind of crime is much more calculated. Here, the response to harsher punishment would be a lot better. This is generally why the fines on overweight trucks are so high. It is actually required in order to make the calculation unacceptable for driving overweight.


I used the introductory example in response to the request for "extremely punitive" fines. Extreme fines lead to companies trying to be clever in hiding their faults. Moderate fines more likely lead to companies coming forward by themselves.

On the comparison with overweight trucks: I doubt anybody builds an insecure system to gain an economic benefit; not using state-of-the-art technology is a mistake/stupidity/carelessness/.... "See how much money we earned from saving the CPU time of bcrypt!", said nobody.


>Moderate fines more likely lead to companies coming forward by themselves.

I can't recall ever hearing of a company coming forward to declare they broke the law and so should pay a fine.

Could anyone give us a couple of high profile examples?

Do you have any support for your assertion that punitive fines don't stimulate regulatory compliance but small fines do?


Yes. The case we are talking about.

"Das Unternehmen hatte sich am 08. September 2018 mit einer Datenpannenmeldung an den LfDI gewandt [...] Gegenüber dem LfDI legte das Unternehmen in vorbildlicher Weise sowohl Datenverarbeitungs- und Unternehmensstrukturen als auch eigene Versäumnisse offen." ("The company contacted the data protection agency on September 8th 2018[...] In exemplary manner they gave access to company and data management processes and highlighted their on omissions") https://www.baden-wuerttemberg.datenschutz.de/wp-content/upl...


> There is a significant difference between deterring personal crimes (e.g. robbery at gunpoint, carjacking, murder) and deterring 'economic crime'.

The relationship is the opposite of the one you're describing.

The problem with personal crimes is that everyone has a different utility function. If you could steal a million dollars at risk of a month in jail, many people would take the risk. Fewer at six months in jail. Fewer still at a year. Fewer still at five years. So you need as high a penalty as you can get without violating proportionality (or reaching the point of diminishing returns, once nearly everyone who can be is deterred).

There are some people who aren't even deterred by the death penalty, e.g. because they'd rather have the money needed to save their kid's life even if it costs them their own, but that's pretty rare. Most of the people who aren't deterred by even large penalties are simply the people who don't expect to be caught, or don't realize they were violating the law to begin with.

By contrast, for economic crimes, nearly everyone's utility function is the same. If you can save $2000 by taking a 50% risk of having to pay the $2000 anyway plus a $1500 fine, it's profitable. If you can save $2000 by taking a 50% risk of paying the $2000 anyway plus a $2500 fine, it isn't. A $3000 fine provides no additional deterrence, and anything more is just a money grab. Even the $1500 fine may be higher than strictly necessary, because getting fined at all results in a PR hit that independently provides a non-zero deterrence value.
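Spelled out with the numbers above, the expected cost of skipping the work is:

    savings = 2000   # cost avoided by not doing the work up front
    p = 0.5          # chance of getting caught

    for fine in (1500, 2500):
        expected = p * (savings + fine)  # pay for the work anyway, plus the fine
        print(fine, expected, "profitable" if expected < savings else "not profitable")
        # 1500 -> 1750.0 < 2000: profitable
        # 2500 -> 2250.0 > 2000: not profitable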

That isn't to say that a small fine won't leave a large number of violators, but they're no longer the people overtly doing the calculations. They're the people who assign negligible probability to getting caught, or who don't even realize they're violating the law. No amount of higher penalties will deter them, the only thing that works against that group is vigorous enforcement -- which works fine (better even) with modest penalties, because all you really need to do to get those groups into compliance is to tap them on the shoulder and explain how they're not.


>everyone's utility function is the same //

It really isn't. The cost of a $2000 fine for a poor person is huge; the cost for a richer person, whilst significant, is not debilitating. If you've a 10% chance of getting caught, then a rich person can afford it; getting caught really doesn't hurt so much.

That's why progressive justice systems use means tested fines for things like speeding.

I'm going to guess you're relatively wealthy; your analysis seems entirely wrong to me.

What you get with small fines is people will pay, even if they didn't deserve the fine, because of the cost of time/effort to challenge it.

If a company can save €100k for multiple years, the only downside being that if they're the 1:10000 that are caught they'll have a €20k fine, the financial analysis - morals aside - says don't pay, unless the €20k would sink you.


> The cost of a $2000 for a poor person is huge, the cost for a richer person - whilst significant - is not debilitating.

Which is irrelevant for economic issues because both values are in the same units. An hour may be worth more than $500 for a rich person and not a poor person, but $3500 is more than $2000 for everybody.

> That's why progressive justice systems use means tested fines for things like speeding.

Then the super rich will hire a chauffeur to do their speeding for them, or fly in a helicopter, so all you're doing is creating a differential between the low and middle income people. But then either the fine is oppressively high for middle income people or is an inadequate deterrent for lower income people.

Dollars do have declining marginal utility as you get more of them, but the relationship isn't linear. Someone who makes $60,000 may have effectively the same disposable income as someone who makes $30,000 (i.e. both near zero), because the first person has higher costs (housing/transportation/other cost of living) needed to live in the area where the higher-paying job exists. You also end up penalizing the person who has "double the income" because they have three kids to support and have to work two jobs. Means-tested fines are a populist farce.

> What you get with small fines is people will pay, even if they didn't deserve the fine, because of the cost of time/effort to challenge it.

This is not a deterrence issue, and can be solved by returning to the person the true entire cost of the resources and time taken to successfully challenge a false claim against them.

> If a company can save €100k for multiple years, the only downside being that if they're the 1:10000 that are caught they'll have a €20k fine, the financial analysis - morals aside - says don't pay, unless the €20k would sink you.

If the €20k would sink you then surely the €100k/year would, so the amount of the fine in that case is irrelevant. The real problem in your scenario is the 1:10000 chance of getting caught. If you could clear €100k/year for ten years with a 1:10000 chance of getting caught, the fine would have to be ~€10B, which would obviously annihilate any entity for which €100k/year was a meaningful amount of money to be worth skimping to begin with. Which means that no amount exists that could act as an adequate deterrent for a small organization and that probability of getting caught. Any amount over their total enterprise value couldn't actually be paid and therefore doesn't act as a deterrent.

What you need is to improve the chances that they'll be caught. In which case you don't need such a large fine.


> I couldn't find Knuddels annual profit but they appear to have a dozen staff, which suggests to me the fine is too small.

On the other hand they seem to have at least as many open positions. This is either a sign of strong growth or a sign of inability to offer competitive pay. For a struggling pre-Facebook social web relic, it's easy to guess which one it is.


This is what GDPR should be about.


Maybe https://www.plus.net/ should be next. Reported a few times without any result.


Report for what? Storing plain text passwords?


I reported to them that it's not a good idea.


Seems like they got punished for informing the government about the hack, which obviously gives the next company that gets hacked a reason to try to hide it.


The fine was set low for cooperating with the DPO. If you hide it and someone leaks it then the agency could ask for much more.


I read the headline as saying that Knuddels was a platform for chatting about GDPR, which made it all seem very ironic.

edit: title has been changed, nevermind


Funny to speak about security and a GDPR fine for a website that doesn't have HTTPS enabled...


I don't see much reason why their website should have HTTPS, though.

There are no input fields, no requests sent with personal information at all, etc.

Everything that's questionable on their site already comes over HTTPS, like Facebook content etc.


Attack, succeed and blackmail could become a business. "If you don't pay me X we'll report you under GDPR and you'll have to pay much more."


"If you don’t pay me X we’ll report you under criminal law and you’ll have to pay much more."

"If you don’t pay me X we’ll report you under environmental protection law and you’ll have to pay much more."

"If you don’t pay me X we’ll report you under labour regulations law and you’ll have to pay much more."

How would GDPR be special?


> How would GDPR be special?

The proportion of businesses who are unintentionally violating it is unusually high.


None of these other things are designed to protect against misuse of personal data specifically?


The extremely large fines.


They paid a €20,000 fine. How is that extremely large?


I wasn't talking about this specific case. The regulation allows larger fines than that, and some people fear that. I myself disagree with them, btw.


They’d have to report it themselves. This is a worse value proposition than regular blackmail, where you take an existing violation, which the target already knows is illegal and has already shown willingness to conceal from authorities (or is not illegal at all, but e.g. just embarrassing), and threaten leaking it. In the proposed scheme, the mere reception of the threat itself creates a new situation for the receiver which must be reported. Paying off the blackmailer puts you in a worse spot than before. Not so with “ordinary”, Hollywood movie blackmail.


Nitpick: I don't think that they would have to report themselves if what the attacker claims isn't actually true. They could still fail horribly at verifying the claim (and subsequently pay the appropriate "tax" for that failure)


If the blackmailer can figure it out, others can. That takes away the leverage.

Best strategy: ignore the blackmailer, improve the security of your system, and take privacy seriously. (Doing such a cleanup also helps to reduce fines, which in turn reduces the blackmailer's leverage.)


And if that's ever discovered, you've just added a hefty multiplier to your fine. Risky.


Laughably little, though (€20k).


According to the link: https://www.baden-wuerttemberg.datenschutz.de/lfdi-baden-wue...

They were doing this so they could filter out the passwords from chats (i.e. to make it so users can't give out their passwords to other users). Not saying this justifies it, but it's interesting.


It's possible to do that without storing the passwords in plain text, though! Run each word of the chat through the same hash+salt mechanism and compare to what you have stored.
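A minimal sketch of that, assuming the same per-user salt and KDF as the login path (the names and parameters are made up; as the replies below note, a real password KDF makes this expensive per message):

    import hashlib, hmac

    def censor_password(message: str, salt: bytes, pw_digest: bytes) -> str:
        # Replace any whitespace-delimited word whose salted hash matches
        # the user's stored password digest.
        def matches(word: str) -> bool:
            attempt = hashlib.pbkdf2_hmac("sha256", word.encode(), salt, 600_000)
            return hmac.compare_digest(attempt, pw_digest)
        return " ".join("***" if matches(w) else w for w in message.split())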


Assuming they're using a suitable hashing algorithm for passwords (ie, Argon2, bcrypt, scrypt, PBKDF2), this approach would be prohibitively expensive, especially for a chat platform, with presumably lots of messages.

Also, you probably can't just try hashing each word, since there could be whitespace and punctuation in the password text, so I think you'd have to hash all possible substrings of each message to be able to reliably catch passwords.

Obviously, though, they shouldn't have been storing them in plaintext.


Store the length L of the password, its salted hash H, and its bytes, XOR-ed, X.

For every message typed, compute a running XOR of each sequence of L bytes (two XORs per character, so as good as free). Whenever it equals X (roughly once every 64 letters, because typical text doesn't use all bits of each byte equally), compute the salted hash of the last L characters and compare with H.

Unicode and Unicode normalization will complicate that, but I think it should be fast enough for a chat.

You probably can also improve on that factor 32 by storing multiple XOR-like (but slightly more computationally expensive) hashes and computing multiple running totals.

Given that this is to protect users from falling for scammers who claim they need their password to help them, you may be able to run it on the user’s machine.

I fear, however, that a scammer will just ask them to type their password with a space inserted, spell it in the NATO spelling alphabet, or whatever. If you fall for a scammer, that won’t stop you from giving them your password.
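A rough sketch of the sliding-window prefilter described above (Unicode normalization omitted; the record layout and KDF parameters are illustrative, and note that storing the length and byte-XOR leaks some information about the password, as the reply below points out):

    import hashlib, hmac, os

    def password_record(password: str) -> dict:
        pw = password.encode()
        salt = os.urandom(16)
        x = 0
        for b in pw:
            x ^= b
        return {"length": len(pw), "salt": salt, "xor": x,
                "hash": hashlib.pbkdf2_hmac("sha256", pw, salt, 600_000)}

    def contains_password(message: str, rec: dict) -> bool:
        # Cheap running XOR over a window of rec["length"] bytes; only pay
        # for the salted hash when the XOR matches.
        data = message.encode()
        L = rec["length"]
        if len(data) < L:
            return False
        running = 0
        for b in data[:L]:
            running ^= b
        for i in range(len(data) - L + 1):
            if running == rec["xor"]:
                attempt = hashlib.pbkdf2_hmac("sha256", data[i:i + L],
                                              rec["salt"], 600_000)
                if hmac.compare_digest(attempt, rec["hash"]):
                    return True
            if i + L < len(data):
                running ^= data[i] ^ data[i + L]  # slide: drop left, add right
        return False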


I did think of similar approaches, but anything I could think of that helps you to quickly determine if a given string contains the password also helps an attacker if the passwords and salts are compromised.

In the suggested case, storing the length of the password alone massively reduces the search space, and storing the XOR (of the plaintext with the hash, I think you're suggesting?) negates the value of using a hashing algorithm suitable for passwords, since the point is that checking if a password matches a hash is an expensive operation.


But what if people have multi-word passwords? At that point the solutions become so over-engineered (either use some n-gram-like setup to detect passwords being posted, or save a hash for each separate word of the user's password, which also decreases security since then you know the user has a multi-word password) that you might as well drop the feature.


Just forbid spaces?


That significantly reduces entropy in the password.

Also, the premise is faulty, because as soon as users figure out they can't type their password in the chat, they'll just describe it in words or split it into two pieces etc.


Give them a big scary message "Never give out your password to strangers" when the censoring happens, because it's highly likely somebody is pretending to be an admin asking for a password in that situation.


If they allow whitespace in passwords I could imagine complexity issues though.


I thought the same, although such a filter would be intended to help out unknowing users who might give their password to a stranger. People who use passphrases may know enough to not do that in the first place.

It could however in general have a problematic side-effect if the password is a common word that could be guessed from surrounding context when censored that way. Something I'd find a lot more likely here than passwords with spaces.


What if this is part of someone's policy, with the knowledge of users of course? For example an app for the technically illiterate or for small children?


This can be done without storing passwords in plaintext


hunter2


doesn't look like anything to me.


Who gets the money, after those who administer it take their cut?


The state, Baden-Württemberg.


Isn't GDPR supposed to be enforced after a 2-year grace period?


That 2-year grace period ended in May 2018.


That grace period started in May 2016.


It was, and the grace period ended in May of this year.



