Hacking Grindr Accounts with Copy and Paste (troyhunt.com)
458 points by snowwolf on Oct 2, 2020 | 188 comments



> we believe we addressed the issue before it was exploited by any malicious parties

I wonder how they are sure of this.

In their logs, there would be no difference between a legitimate password reset and a malicious one, given that even a legitimate flow would result in an initial request from some IP address, then when the user receives the email with the reset link they will most likely click on that from the same computer, thus the same IP address showing up on the logs. In case of a malicious attempt the same pattern would be seen - there is no way for them to know whether the user obtained the reset token from the e-mail (as they should) or directly from the password reset endpoint itself.


$50 says they're not. This is something every organization has to say for PR reasons, but saying "we believe" is very fishy wording. It could well be this bug has been around for months before it was discovered, and used by many black/grey-hat hackers.


Governments. It was likely used by governments.

Bi men who live straight lives with a wife and family are ridiculously common. The ability to blackmail those people is extremely valuable to certain state organizations.


I've never been gay or bi, and I've never used Grindr, but I have held government security clearances for almost 40 years. It's a lot different today than it was back then. Early on, I knew several people who had "experimented" in college, and they were denied clearances. (Actually the government never officially denied them because that would require an explanation of the criteria used for the denial. Instead, it was perpetually "pending".) Anyway, the basis for their denial was the potential for blackmail, which is a serious national security threat. Sometime within the past 25-30 years, they seem to have revised their policies so gay/bi people can get cleared, as long as they are open about it.


Foreign governments blackmailing US government staff is only one side of the equation. There is, of course, the whole "but what would the wife/churchfellows think?" issue, still, but there are still many countries that do not take the same enlightened view that the US does.

It's entirely possible that even the ability to verify that a particular email address has a Grindr account may be enough to threaten a person with imprisonment or death in several countries I can think of off the top of my head.


Verifying that an email address has an account is unfortunately unavoidable: all you need to do is attempt to register with that email.


Fortunately it's easily avoidable: You defer checking address status until you send the mail out.

So when an email address is entered to create an account, you always respond with "pending email verification". Then you send an email saying "Someone is registering an account with us using this address." If the account already exists, the email continues with "lol it already exists. If this was you, you can click to reset your password". If there is no account under that address, you instead send the "please click to verify" mail. At no point does this process expose the status of the address.
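The flow above can be sketched like this (a minimal, hypothetical example; `handle_signup`, `registered_emails`, and the in-memory outbox are stand-ins for real account storage and email delivery):

```python
# Sketch of an enumeration-safe sign-up flow (hypothetical helper names).
# The HTTP response is identical whether or not the address is registered;
# the branch happens only when composing the outgoing email.

def handle_signup(email, registered_emails, outbox):
    """Always return the same response; branch only inside email delivery."""
    if email in registered_emails:
        body = ("Someone tried to register with this address, but an "
                "account already exists. If this was you, click here to "
                "reset your password.")
    else:
        body = "Please click here to verify your new account."
    outbox.append((email, body))                     # queue the email
    return {"status": "pending email verification"}  # identical either way


registered = {"alice@example.com"}
outbox = []
r1 = handle_signup("alice@example.com", registered, outbox)  # exists
r2 = handle_signup("bob@example.com", registered, outbox)    # doesn't
assert r1 == r2  # an attacker probing the endpoint learns nothing
```

The key design choice is that the check against `registered_emails` only influences content that travels over the email channel, never the HTTP response.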


We could do even better by having the web UI say nothing about that at all, and only telling someone in the email we send that they already have an account.


So were they denying clearances to openly gay people back in the old days? Because there wouldn’t be potential for blackmail in that case. Sorry I can’t tell what you’re saying changed about the policies.


Thirty years ago George Bush Senior was president, Reagan had recently left office, and the United States government was handling HIV poorly, and intentionally so.

Sodomy laws were still on the books - can you keep a clearance while openly breaking the law every time you and your partner sleep together?


I don't know, that's why I asked since the original commenter seems to know, but his original comment doesn't really answer it as it basically says "they used to deny clearances to secretly gay people, and today they grant them to openly gay people".


> gay/bi people can get cleared, as long as they are open about it

That's what I have been told is the current policy in France. As long as you cannot be blackmailed with it, it is good.


I have Swedish security clearance, and I was never even asked during the interview, but I must assume they know, since even a cursory glance at my social media would reveal I'm gay.


It’s not really being bi that’s the problem here so much as cheating on a wife secretly


It's only cheating if you're breaking a rule to not sleep with other people. Swingers aren't cheating (assuming they're respecting any agreed upon boundary) when they're having an orgy for example.

A bi man with a wife and a family could very well be having sexual encounters with men once in a while with the agreement of his wife (who could well have her own stuff going on too) and it could be a very healthy thing in what is a fulfilling relationship for both parties.


You might be right morally, but in a lot of countries it is still the case that 'not having sex with other people' is a compulsory clause of the marriage contract. So legally it's a little bit murkier.

People have been advocating for marriage contracts that are a bit like the CC license, where you can pick the parts that you agree on.


Marriage is on its way out in many places. In my region of Canada (Quebec), less than 50% of couples with kids are married. That number keeps going down every year.

(There are some automatic protections in the event of a separation.)


Are bi guys that much more likely to do this than straight ones? I could believe 2x, but it doesn't seem like an order of magnitude kind of difference


It's more about them keeping it secret; secrets are some of the most valuable currency.


But then it has nothing to do with being bi... unless they're secretly bi, in which case it has nothing to do with cheating...


Factually speaking yes but practically speaking gay/straight people tend to be doubly touchy about their partner "playing for the other team" so to speak. Which... I mean, cheating is cheating is cheating lol


Doesn't everyone cheat on their wife secretly? Otherwise it's not even cheating.


A wife might be accepting of a husband's bisexual adventures and not consider it cheating. What about her parents, his parents, children, family, coworkers, congregation, community, etc.?

I wasn't speaking directly to that, more that secrets have value. Some people would reveal their secrets rather than be controlled with them; others will define the value of their secrets by what they're willing to do to maintain them.


It's easier for a straight person to pass off a straight dating/hookup site vs. a "straight" married person on a gay/bi site. The straight person can just say they used the site before they were married, etc. whereas there's no such excuse for the gay site.


It's not about cheating. It's about people other than your spouse learning you have sex with men.

This remains something quite a lot of people want to remain secret.


I wonder why you are getting downvoted so much. I don't know how common bi men are (probably more common than bi erasure makes us believe), but it feels like at least a portion of them are not ready to come out as bi.


> I don't know how common bi-men are (probably more than bi erasure makes us believe)

They're not that common. The literature shows a bimodal straight/gay distribution of homosexual tendency in men and a more Gaussian distribution in women.


The "bi erasure" phenomenon is that if a man is bi, many will believe he's just gay but in denial. Because he may prefer to be seen as straight than gay, he may just let everyone think he's straight. This would reduce how many men appear to be bi in surveys.


Alternatively: I'm gay, and even though I grew up in an area of the US that wasn't that homophobic, I still hated myself for being gay enough that if I'd had even a slight attraction to women, I would have just killed off the side of me attracted to men. If you're bi and grow up in a world where everything tells you that being gay is wrong, then that part of you that is attracted to guys will probably never get explored.


It also may be because of what I observe as 'masculine thinking' vs. 'feminine thinking'. It appears to be drawn biologically from the genetic male sex and 'male brain' vs. genetic female sex and 'female brain'. Natural variations exist in all things, but if you look at what's most common, with biological sex differences (which feed cultural 'stereotypes'), they still are what they are.

See Robert Greene's book "The Laws of Human Nature" (2018) for his take on the following; it's very insightful. (The chapter entitled "The Law of Gender Rigidity".)

'Masculine thinking' prefers to categorise and bifurcate. (Dualism.) I'm a guy and I consider myself having a 'super' 'male' brain in some aspects even more than the average. I find it very hard to multi-task, and I easily hyperfocus.

Male thinking solves problems by breaking things down and focusing on one part of the picture at a time. It's about specialisation.

Female thinking treats things more as a whole, with everything connected. It solves problems by looking at the whole picture at once. It's about multi-tasking.

I now see that 'male thinking' (as opposed to males), such as 'specialisation', dominates modern capitalism and public policy, often to its detriment. Most females at top levels in business at this time in our culture would appear to have this success mostly because they are 'atypically' strong in 'masculine thinking'.

Personally, I think such leadership needs more female thinking. I'm slowly trying to understand it more, as my own starting point. Modern diversity policies that change what's on the outside (how many penises are around the table) don't actually solve the real problems. We're still picking what's taking place on the inside. We need diversity of things far deeper - to embrace and celebrate true 'feminine thinking' - not just what's on the surface.

So anyway, this could help explain why female sexuality is seen to be more 'fluid', on average, than among males. It's not so much how people actually are, as how people see themselves, because of how they tend to be wired. And this is not even factoring in that there is greater cultural stigma around a male being bi vs. a female being bi.

I am a male, and 100% bi.


The reason is that bi men round off to gay whereas bi women round off to straight. Bi women are even sought after by straight men, so even women who aren't bi will sometimes claim to be; there is really no equivalent for bi men. This will give the kind of distribution you are talking about, especially when the research is done by surveys, as it often is.


Literature?

Seriously?

Surely the way someone identifies publicly doesn't necessarily match reality.


I'm not sure why being on Grindr is being seen here as a blackmail risk.

If you are visible on Grindr, you have made a kind of public testimony.

So, you have reduced your blackmail risk - since you're already at least somewhat prepared for this information to be public.


A lot of the information a user who owns an account can see is not visible to the public. I don't see why being on Grindr in 2020 would be a blackmail risk IN COUNTRIES WHERE BEING GAY IS LEGAL.


OK, but in that case you're at (potentially much) greater risk than blackmail.

Why is blackmail the risk that people on this page are going to, then, is my question. If you're willing to make a public statement then blackmail is lower risk, not higher risk.


I'm not sure, sorry. I'm not honestly sure why there's so much discussion about security clearances.

If someone gets your grindr account they can get your (a) address, (b) phone number, and (c) private photos, none of which would otherwise be public. So let's just get that out there. A reasonable user would not expect that any of those three things are readily available to a stranger that they've never talked to who merely has their email address. Especially if they turn off distances specifically to avoid triangulation attacks on their location.

Those pieces of information can be used together to harass, commit violence, or threaten to leak your photos or personal (albeit, sure, not "private") conversations. Those are the obvious ones, but other information in there can also be hidden from public view, like HIV status.

With this bug a malicious person could knowingly target you based on email address instead of finding you and even putting the work into catfishing you into sending them embarrassing photos.

I'd say that's a higher blackmail risk, though mostly because of the hack. Catfishing already existed, but this makes it invisible and massive in scale, while associating it with a public ID like your email address.


I was disturbed by that statement as well. It's pure PR spin based on turning a blind eye.

They could detect mass malicious activity if a single IP was resetting thousands of accounts. But I'm skeptical they even checked based on the horrible initial flaw and specious response.


Saying they are working on the disclosure system is good, especially because it seems unprompted.


It’s possible that the emailed link contains extra query params which are logged. Checking for the existence of these query params in requests would enable them to verify that reset requests to date were clicked from email rather than using this method.


Also, the referrer header may be different too? Although it's likely nobody thought to log it.


I would expect a "Referer" value to be empty in both cases:

  - directly navigating to a URL after doing a copy-paste
  - opening a link from an email


Not saying they did, but couldn't you make an estimate by looking at frequency of resets by single accounts? If someone took over an active account presumably that person would reset the password to get back in (and have a weird email). ASSUMING Grindr logs the person out of the app when the password is reset.

You might also have a few emails from users...
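A rough sketch of that estimate, under the assumption that reset events are logged with account, IP, and timestamp (all names and data here are hypothetical):

```python
# Flag accounts whose password was reset from more than one IP address
# within a short window, which could indicate takeover followed by the
# real owner resetting it back.

from collections import defaultdict
from datetime import datetime, timedelta

resets = [  # (account, ip, timestamp) - assumed log format
    ("a@x.com", "1.2.3.4", datetime(2020, 9, 20, 10, 0)),
    ("a@x.com", "5.6.7.8", datetime(2020, 9, 20, 11, 30)),  # new IP soon after
    ("b@x.com", "9.9.9.9", datetime(2020, 9, 21, 9, 0)),
]

window = timedelta(hours=24)
by_account = defaultdict(list)
for account, ip, ts in resets:
    by_account[account].append((ts, ip))

flagged = [
    acct for acct, events in by_account.items()
    if len({ip for _, ip in events}) > 1                         # >1 distinct IP
    and max(ts for ts, _ in events) - min(ts for ts, _ in events) < window
]
print(flagged)  # -> ['a@x.com']
```

As the replies below note, this kind of heuristic mostly catches bulk abuse; a single targeted reset looks identical to a legitimate one.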


This would detect a large-scale attack, but wouldn't detect small-scale, targeted attacks as they would just get lost in the noise of legitimate password resets.

Furthermore, for dormant accounts (where the user is no longer using the app - potentially because they are now in a relationship) the user will not notice anything either, and the notification email is likely to get lost in the endless newsletter spam the non-technical majority has in their inbox.


I think this is a good point. I'll admit that I'm naive about web and security (not my area). Are multiple password resets within a small time frame common? I would not expect this to be common, but user behavior has often defied my expectation. If it is uncommon I think you could create a correlation and get an estimate, if it is common then I completely agree that it would be lost in the noise.

And yeah, I agree that this type of analysis wouldn't help with dormant accounts, and it also requires them to log the user out on their phone (otherwise why issue another reset?). But both of these could be captured. This is probably way too much analysis for such an attack and over-engineering the issue, but hey, that's what we all do, right? haha


> Are multiple password resets common within a small timeframe?

Yes. When you reset your password once, the probability is high that you'll reset it many more times. Often it's because you don't remember the new one, and it isn't yet fixated in your memory. Or because you've changed devices, but your computer kept your old password, so you reset it too, and back and forth on each device until you have time to bring the two devices together and type in the same password. Basically, password resets happen rarely, except when they happen, they happen in a salvo.

It's the exact opposite of when London hired statisticians during the Blitz because they were surprised that German bombs kept falling on specific buildings, and wondered why the Germans would target those, only to discover that randomness means bombs will randomly fall in clusters for no reason at all.


We see this with password email reset requests that have deliverability delays.

If the user does not receive the reset email within a few seconds, they submit more reset requests.

Also, there have been several Windows updates that cleared the saved password login info in the browser, each prompting a flurry of reset requests.


This makes a lot of sense about why my priors were wrong. Thanks! (always gotta check your priors)


Increased volume of password resets would indeed suggest an attack, though it can also be explained by benign reasons (redesign of the app, marketing campaign prompting previous users to log back in, news exposure, the pandemic increasing loneliness and making more people use dating apps, etc).

However the biggest risk here is that small, targeted attacks distributed over time (where a single attacker only targets a handful of accounts) wouldn't stand out in the overall statistics.

In the case of this incident, small-scale attacks (where a single person targets a single account of someone they don't like) are actually more likely. That's why their saying they do not believe this was exploited, while being completely unable to detect such attacks, is so misleading and lures people into a false sense of security.


I've at times done a string of password resets when unusually designed sign up pages cause a password not to be captured by my password manager.

This seems to happen most when it's a multi page setup process. I often use a plain text scratchpad document to prevent the loss of data but sometimes circumstances happen.

I'm using LastPass for what it's worth. If anyone has better experiences with competitive products I'd be happy to hear about it.


Since when do beliefs require evidence?

They don’t mention any, so this is the most positive sounding but still truthful position they can take.

Best I can think of is geolocating IPs of the reset requests and then seeing if the real owner (near original location) does a second reset later to take the account back, but that’s not convincing especially if you know where the account you’re targeting lives and went through a VPN in the same city to match.


It's still pretty misleading.

They are supposed to be the experts (in the eyes of non-technical people) and if you don't have the skills to understand how the attack works it's reasonable (or at least used to be reasonable) to consider that the risk is minimal if "experts" do not believe it's bad.

This response lures their users into a false sense of security.


> This response lures their users into a false sense of security.

That's the entire point of their response though. If all you ever had to do was tell people the truth, PR wouldn't be a thing.


Logging IP addresses and having some AI/SIEM compare IPs of regular/past use to IPs on or after a password reset can give 'some' level of comfort. E.g. if someone has extensive use from a NY-USA IP address and the reset requests came from a Paris-FR IP address, then 99% of the time it is an attack and you block it or send out an email/SMS (just in case); the other 1% of the time that person's company guest WiFi surfaces in another country (e.g. a mega-big insurance company in London has corporate internet exiting in Chicago and its guest network exiting in London).

In any case, it is better/safer to cause some slight inconvenience to prevent data leak.
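A minimal sketch of that comparison, with the GeoIP lookup stubbed out as a plain dictionary (a real system would use an actual GeoIP database and softer scoring, not a hard yes/no):

```python
# Flag a password reset whose source country never appears in the
# account's past activity. GEO is a stand-in for a real GeoIP lookup.

GEO = {"203.0.113.5": "US", "198.51.100.7": "FR"}  # stub lookup table

def reset_looks_anomalous(usual_countries, reset_ip, geo=GEO):
    """True if the reset came from a country the account has never used."""
    return geo.get(reset_ip) not in usual_countries

history = {"US"}                                       # normally active from NY
print(reset_looks_anomalous(history, "198.51.100.7"))  # -> True  (Paris IP)
print(reset_looks_anomalous(history, "203.0.113.5"))   # -> False
```

In practice you would treat a hit as a signal to step up verification (email/SMS challenge) rather than as proof of attack, for exactly the guest-WiFi reason given above.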


Well, email tracking isn't perfect, but it can help a lot. In a legitimate flow you'd see an email open event, an email click event, then a successful reset, in that order. An illegitimate one might have no email open or click before the reset, or clicks from multiple places, or something like that. That could narrow down the list significantly.

Of course, not all email clients allow these events to be sent.


They didn't say they are sure of it. They said they believe it :)


If your company is being actively targeted by nation states (and rest assured, Grindr is), you should have a serious security team where this sort of stuff shouldn't have seen the light of day.

I'm not exaggerating when I say this bug may have gotten people locked up, or been the lever for corporate/government espionage.


Hacking Grindr sounds like a lot of work. Why wouldn't gru@kremvax.ru just sign up, post a photo of her son and his classmates twerking at the pilot academy, enable tourist mode, and take a virtual trip to Los Alamos?


I don't mean to downplay the issue, but why would LGBT-hostile nation-states target Grindr's infrastructure when it's much easier to detect users at the network level based on TLS SNI (since encrypted SNI is still not a thing thanks to corporate influence)?


Because at the application level, you get so much more data. Who they're talking to, who they're meeting. HIV status, photos, etc.


Grindr doesn't require a real email address...


I'm sure a government could detect that a citizen visited grindr.com, but it'd be harder to guarantee that they actually had intent to commit "crimes" without access to unencrypted internal messages.

I'm also concerned about an antagonistic nation state that gets the personal emails of top officials at the Department of Defense, goes through a targeted list in an attempt to find out who's a member, and, if a match is found, engages in a blackmail scheme for secret information.


> I'm sure a government could detect that a citizen visited grindr.com, but it'd be harder to guarantee that they actually had intent to commit "crimes" without access to unencrypted internal messages.

Why would that hypothetical government care about that? Just lock everyone up!


You're assuming the attackers are that sophisticated. With an attack this simple, it could be exploited by a group of thugs at a local police station (with maybe a "computer savvy friend") logging into local accounts to see if they can find anyone they recognize.


Proof-wise, there is a difference.

Blackmail-wise, there is an even bigger difference: "I know you use Grindr" vs "this is your last conversation on Grindr". These have very different credibility and impact when leaked.


Honestly, I wouldn't be surprised if this was an intentional back door (...) that Grindr was required to create and let foreign authorities know about in exchange for being allowed to market the app in their country.


Assuming that's true, why would they publicly expose the back door as an anonymous API endpoint that's used in a standard flow within the product? Incompetence seems much more likely.

I'm not even sure that would constitute a "back door" - it's more of an "additional front door with no lock whatsoever".


"Never attribute to malice that which is adequately explained by stupidity."


Account takeover is a really shitty backdoor...


Considering Egypt is using apps like this to persecute LGBT people, this is absolutely horrifying.

I'm so glad I've gone social media free, all the big players in this space have shown repeatedly they don't care about the safety of their users. Grindr was already caught sharing HIV status information with 3rd parties. Eventually these horrible companies will be regulated, but tons of people are going to be harmed before that happens.


Well, it's not like this vulnerability would stop Egyptian theocrats from figuring out who uses the app. All it takes is creating an account and arranging dozens (or hundreds) of dates.


Back when I reported a Grindr security flaw (2016), I couldn't find them on any of the bounty sites, security@grindr.com bounced, and support failed to route it correctly.

Reaching out to their CTO, who I found on LinkedIn, and firstname.lastname@grindr.com got a reply in 8 minutes.

Sad to see they still haven't upped their security game.


That’s appalling

Bug bounties are all well and good, but a basic pen test would have picked this up. They aren't that expensive, and for a business trading in data that can get you killed in some parts of the world, they should be mandatory.


It's not a bug. It is either a backdoor placed there from the design/implementation or super lazy programming. I don't want to think it's done on purpose (Hanlon's razor).


A full account takeover is a really shitty backdoor. Just make a separate "test" endpoint that's exactly the same as the main API but requires no authentication so anyone can read anything. Perfectly deniable as just a bug and entirely undetectable from a target's POV.


If that's an intentional backdoor it's a very weird backdoor. Wouldn't you at least obfuscate things a little bit? Simply mixing up the characters in that string in some pre-planned order would be enough.


While I doubt it's an intentional backdoor, I wouldn't assume that backdoors would be obfuscated. You can't deny knowledge of an obfuscated backdoor, while an obvious one could plausibly be a simple mistake.


If you stick with that logic, you’ll think every mistake is a backdoor !


If it were a malicious backdoor, it wouldn't have been "hidden" in the response to the _actual_ password reset request form.


My understanding -

Grindr doesn't store your chat history, so logging in on a new device won't show your old chat messages. Phew.

Grindr is particularly bad at security. It was fairly easy to triangulate users' locations until fairly recently, and some users were being harassed; Grindr ignored their reports for a long time.

It was also fairly easy to use fake locations until fairly recently which was also causing problems for non-users.

Grindr regularly shuts down accounts with no process. It's very easy to lose your contacts.

Grindr lies about what information they retain. They claim to hold very little information, which they provide when your account is shut down. However they must retain a list of your blocks and favorites in order to function. They lie and say they don't retain this info.

Because of the nature of their service, they should be on top of all this stuff, but they are really bad at it.


They lie and say they don't* retain this info.


You can edit your own recent post.


This issue is incredibly strange and severe.

One thing I did notice, though: the timestamps on the Twitter DMs, which were used as evidence to assert that they're unresponsive in DMs, cover a time period of 90 minutes. The language the Twitter client is set to is also not English (maybe French? The original discovery was made by someone who lives in France. I don't know), which introduces the possibility that it wasn't even daytime in the US when those were sent.

I'm all for publicly announcing these things (in a responsible way) and forcing a quicker response from the company, and its also likely that Troy tried to reach out on his own, but I just think that screenshot is a bad example of a company not responding to DMs. If it had been 48 hours to a week, then I'd be in the concerned camp.


The Twitter DMs are timestamped on September 23. Troy sent his public tweet on October 1.


There is additional information in the post indicating that there was no adequate response to the original report even after 5 days:

The person who forwarded this vulnerability ... provided full details ... on September 24. ... after 5 days of waiting and not receiving a response, contacted me. He also shared a screenshot of his attempt to reach Grindr via Twitter DM


> This issue is incredibly strange and severe.

Severe, yes. It doesn't seem that strange.

You could have a flow like:

Reset page receives email address and passes it to some backend functionality. The backend checks whether the email address corresponds to an account on the site. It does, so the backend generates a reset token and emails it to the address on the affected account.

All of that is supposed to happen. What's also happening is that the reset token is being returned to the reset page, where the person requesting the reset can see it. This is very bad, but it seems likely to have come from some sort of automatic connect-your-frontend-pages-to-backend-services framework solution.
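A hypothetical reconstruction of that failure mode: the client-facing handler returns the backend's full payload, token included, when it should reduce it to a bare status (function names are illustrative, not Grindr's actual code):

```python
# The backend legitimately needs the token internally; the bug is proxying
# its entire result to the browser instead of returning a bare status.

import secrets

def backend_request_reset(email):
    """Internal service: generates the token and queues the reset email."""
    token = secrets.token_urlsafe(32)
    # send_email(email, token)  # the token should ONLY travel this path
    return {"email": email, "token": token}

def buggy_handler(email):
    return backend_request_reset(email)    # leaks the token to the caller

def fixed_handler(email):
    backend_request_reset(email)
    return {"status": "reset email sent"}  # reveal nothing else

assert "token" in buggy_handler("a@x.com")       # attacker sees the secret
assert "token" not in fixed_handler("a@x.com")
```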


Wow, password reset tokens returned directly in-browser; that's hard to believe. I wonder how long this had been going on?


This is frighteningly standard across most companies with no serious planning phase for new features, and no code review process. Fact is, some developer was told to create a REST API for password resets, and to return the secret token so that the (internal to the company!) client can send the email containing that token. This developer did their job correctly.

At some point, a different developer was told to consume this endpoint, send the related email, and tell the end user (browser client) that the email was sent. This second developer is not part of the "senior services team" who designed the above API, which is perfectly valid. Instead, this is a junior developer taking on their first task at the company. "Take this password reset API endpoint, and integrate it". In addition to queuing the password reset email with the token embedded within it, they also accidentally proxy the password reset service's payload to the browser. No intermediate or senior developer reviewed this new employee's PR; if they did bother to look at it, they only checked for coding standard violations (e.g. indentation), without taking the effort to understand the logic of the code.

This is actually extremely common, unfortunately. The server-side layer that directly interacts with clients (i.e. browsers) is generally delegated to the most junior developers, because it's menial and uninteresting work to connect the backend services to the browser. The current senior developers spent years working on that kind of garbage already, and they'd rather work on the "more interesting/advanced" backend work. Thus, the junior developers whose skills aren't yet honed are stuck, typically unsupervised, working on the front-facing components.

Also, this routinely happens at companies which rush every feature out the door with modern "agile" practices. The sprint is almost over! Quick, deliver all features by tomorrow to keep up our velocity and avoid a sprint review with negative feedback! Just merge it and push to prod without QA on a Friday at 4pm!

If only the above was a comedy routine, rather than what it truly is: the genuine reality at a large number of companies.


I'd expect even a junior to at a minimum test and view the response payload, see the token and think "bad idea".


Agreed, this is far too basic for the "oh yeah, a junior developer might not have noticed it" excuse.

Hacker News seems to assume juniors are useless, from the comments I've seen to date - but they should be able to _think_ and solve problems, even if they're less experienced at interacting with stakeholders, designing system architecture etc.


Mental thought process of programmer:

Sooo... what's the one thing we need this token to be. Secret.

OK, let's just return it to the one person in the whole world we don't want to have it.

Mmmm is it lunchtime...?


More like this:

Issue #2141 - implement password reset: After answering secret question user should see password reset link.

Issue #2534 - send email with password reset link.

Issue #2743 - remove password reset link from web page

Issue #3892 - replace secret question with email address input


I wonder how many bootcamps that promise to make you a "fullstack developer" in X weeks even cover the basics of security.


Mine didn't in any great detail.


Someone designed and implemented it. It would be interesting to know the rationale and their train of thought leading to that.


If I had to guess, the developer used that to debug the reset token and let QA check whether the flow worked; then it was forgotten and slipped past code review, either because the team just left an LGTM without actually looking at the code, or because there were too many changes in the PR.


This sounds extremely likely, because it sounds like something I would have done.


I mean, I don't think it's that hard to surmise how something like this could have happened. Yes, the bug is egregiously bad, but I don't think it's likely the developer purposely designed it to work like that. Some simple possibilities: (a) perhaps the page was originally intended only to be accessible from a user hitting it via a private link sent to their email address (i.e. how normal password resets work), or (b) the API in use was designed to only be accessed server-side, but it was inadvertently proxied through to a client-side call.

Again, yes, the bug is very bad. Software is complex, and humans are humans, and it's not difficult to imagine how these bugs occur.


I don't think the developer designed it to work that way either, but something like this only happens when the person creating it, or the people touching it after either don't know how important this interaction flow is or don't take it seriously enough.

Whatever API this is using, there's zero reason to show any information about the request other than "we didn't die, so it should succeed". Beyond even showing it, there's zero reason for a password reset API to respond to the request with the secret at all. If it needs to return anything identifying about the request, it should be some identifier that is NOT the secret, which can be used to pull up general info about the request later if needed (time generated, whether it was used, is it expired, etc). Extra points if the credentials the request page uses for any back-end API can't even fetch the secret key for a request at all.

A sane API makes it very hard for something like this to happen. Often that takes an inversion of thinking: instead of making an API as useful as possible and returning as much data as efficiently as possible, you make it as secure as possible, which means returning as little data as possible to satisfy the specific needs of the use case, with different locked-down credentials for specific use cases.
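A minimal sketch of what that inversion could look like (all names and storage are hypothetical, just to illustrate the idea): the endpoint returns an opaque request id that can be used to look up metadata later, while the secret token only ever goes to the mailer.

```python
import hashlib
import secrets

# Hypothetical in-memory stores standing in for a real database.
RESET_REQUESTS = {}   # request_id -> metadata (no secret stored here)
RESET_TOKENS = {}     # sha256(token) -> email

def send_reset_email(email: str, token: str) -> None:
    # Stub: a real implementation would email a link containing the token.
    pass

def request_password_reset(email: str) -> dict:
    """Create a reset request. The secret token is handed only to the
    mailer; the API response carries a non-secret request id."""
    token = secrets.token_urlsafe(32)        # the secret, delivered by email only
    request_id = secrets.token_urlsafe(16)   # opaque identifier, safe to return

    # Store only a hash of the token, so even a DB leak doesn't expose it.
    RESET_TOKENS[hashlib.sha256(token.encode()).hexdigest()] = email
    RESET_REQUESTS[request_id] = {"email": email, "used": False}

    send_reset_email(email, token)           # out-of-band delivery

    # The client can use the id to pull up request status, never the token.
    return {"request_id": request_id, "status": "pending"}
```

With this shape, even a careless front-end that dumps the whole response to the browser can only leak the harmless request id.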


That’s why doing pen tests on the regular is necessary. We shouldn’t rely on humans getting it right every time, or strangers on the internet reporting it.


From experience I've noticed that a lot of developers don't look at the big picture and don't have a full understanding of how the system works, what the rationale is behind how a feature achieves its objective, and how it might be abused by a malicious user. The #1 thing that I think about when I'm looking at some code or feature (and recommend others do the same) is how malformed or intentionally malicious input would break it, but it seems like their developers clearly didn't do so.

This is also compounded by the drive to artificially complicate software stacks (microservices, etc) and "silo" developers into their own little bubble where they only work on a small aspect of the system and never have a need (nor the mental capacity - due to intentionally complicated stacks with dozens of microservices in various languages) to look at the big picture.


"It's friday and I want to go home" :)

But seriously, I'm sure it was made by people that just didn't stop to think about security and "it works, so we're done here" Then, as a business you're not going to try and fix it if the software already works. That would be pure cost.


It makes automated testing of pw reset flow easier. Otherwise you'd need some out of band method to get the token.


You should be able to get the token from your database, unless you're doing black box testing, which I am not a fan of for reasons such as this.


I'm not justifying it.

Just saying that this might have been one reason for such data to be returned.

You may not have access to the database if you're doing frontend testing using a headless browser.
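For the non-headless case, a minimal sketch of the grandparent's point: the test reaches into the test database for the token instead of requiring the API to leak it back in the response (schema and helpers hypothetical).

```python
import secrets
import sqlite3

def issue_reset(db, email):
    """Stand-in for the app under test: issuing a reset stores the token
    in the database and (in real life) emails it. It is NOT returned."""
    token = secrets.token_urlsafe(32)
    db.execute("INSERT INTO resets (email, token) VALUES (?, ?)", (email, token))

def complete_reset(db, email, token):
    row = db.execute("SELECT 1 FROM resets WHERE email = ? AND token = ?",
                     (email, token)).fetchone()
    return row is not None

# The test pulls the token out of the test DB, out of band.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE resets (email TEXT, token TEXT)")
issue_reset(db, "user@example.com")
(token,) = db.execute("SELECT token FROM resets WHERE email = ?",
                      ("user@example.com",)).fetchone()
assert complete_reset(db, "user@example.com", token)
```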


> Hey, do you have a Grindr account?

> Lol

I can understand this is most probably a private lol from a surprised friend. But how about we at least stop making these "are you gay? Lol!" exchanges a public moment worth screenshotting?

An Ashley Madison data leak is a national embarrassment, whereas a Grindr one is a "national security threat" [1]. Being on AM is just a vaudevillian indiscretion; being on Grindr is a "bro, lol" that feeds hate and wrecks lives.

[1] https://www.theverge.com/interface/2019/3/28/18285274/grindr...


In some places, authorities are using Grindr and other apps to target and arrest people. [0] That's worse than what the phrase 'wrecks lives' connotes.

[0] https://www.independent.co.uk/news/world/middle-east/egypt-l...


The photo used for the account is also kind of mildly offensive in the same vein. (To non-gay readers: this is not actually how gay men pose for their grindr pics.)

Update: I decided to make the same point in the comments on his post and he responded in about the douchiest low-key homophobic way imaginable: http://disq.us/p/2c9pnno Tech has a long way to go on homophobia :(


I am gay and don't find it offensive or homophobic* at all, just a bit cringey. To non-gay readers: yes, actually, some gays do pose with grimaces.

* I'm even inclined to say you are kind of abusing these Big Words, but I don't care to start an argument.


I did say "mildly" offensive. There's lots of screenshots of the Grindr profile grid posted online. I'd be surprised if you could find one including a profile photo that looks much like this one.

But really the point isn't that no gay man has ever posed for a photo like this. (I'm sure some straight men have too.) The point is that the photo is used to make a joke about how ridiculous Scott would look if he was gay. He couldn't just have a normal dating profile photo.


> The point is that the photo is used to make a joke about how ridiculous Scott would look if he was gay.

You seem to have decided that Troy is homophobic and therefore this picture is mocking gay people, and not just mocking Scott's attempts at being sexy.

> He couldn't just have a normal dating profile photo.

They both delight in getting unflattering pictures of each other into every security presentation they can. I think you're reading into this too much.

What would be the point of changing the photo if it didn't annoy Scott when he saw it?


There is clearly no technical reason for Scott's involvement in the first place, as Troy could just have created an account himself using any photo whatsoever. The whole post is structured as a joke about the idea of Scott being gay and having a Grindr account.

Do I think Troy is a raging homophobe? No. But his initial response to my comment was to claim that he chose the photo because it looked like a typical Grindr profile photo (!), rather than to just admit that it was a joke. That doesn't increase my confidence that the photo was chosen with good intentions. You apparently think that there is another explanation for the choice of photo relating to some inside baseball between Troy and Scott (which I frankly don't care about, not knowing either of these guys). If that is the real reason then Troy could just have said so himself.

In general, making in-jokes in public blog posts that anyone can read doesn't always go over so well. I'm not saying "Troy is a class A homophobe and must be deleted from the internet". I'm just registering that I'm one gay dude who is bored of this kind of gay joke.


> There is clearly no technical reason for Scott's involvement in the first place, as Troy could just have created an account himself using any photo whatsoever.

"I was able to log into my own account after resetting the password" isn't a particularly good attention grabber

> The whole post is structured as a joke about the idea of Scott being gay and having a Grindr account.

Scott is married, I'm pretty sure the joke is about him being on a dating app in the first place.

> [...]

I guess at this point, Troy assumes he's famous enough that everyone who sees his blog will know about him and Scott as well as all the shenanigans they get up to with each other.

Tech and infosec definitely have a lot of problems with inclusivity towards anyone that's not a straight white male, but I really don't think that this blog post was written from the perspective of "ha ha ha ha imagine being gay".

Still, he usually takes criticism, let's see if he does a followup.


>"I was able to log into my own account after resetting the password" isn't a particularly good attention grabber

You’d still be resetting the password in a way that doesn’t require access to the associated email account but only knowledge of the email address. It doesn’t make any difference whether you created the account yourself or got a friend to do it for you. You're still showing that anyone who knows my email address can access my Grindr account, which surely ought to grab people's attention.

From my point of view, it's a precursor to a sensible discussion about this that we acknowledge that Troy had Scott make the profile purely to add a little humor to the post. There’s of course nothing wrong with adding humor to a blog post. However, subtly mocking people for being gay is a pretty ingrained part of our culture, and it’s easy for straight people to thoughtlessly slip into doing it.

I know that you have given an alternative explanation of the intention behind the photo, but unfortunately it's not the explanation that Troy himself gave in the comments.


That is plainly untrue; some gays do use ridiculous facial expressions.


And I think you actually meant "not professional" instead of "homophobic".


So do some straight men. That's not the point. (I think you responded before my edit - sorry.)

If the pose is a common one, it should be easy to find a screenshot. I couldn't, personally.


I feel the same way about this issue (the chat and the photo) and the response by Troy just feels... abysmal.

He starts out by saying:

"The photo, however, is the one most consistent with others I saw on Grindr during this exercise." and then continues to agree with the following statement: "... you didn't choose this pose by looking at typical Grindr profile photos."


Isn't Grindr a hookup app? It's not like all gay people use it. It'd be like me asking a straight married friend if they used Tinder. Would "lol" be offensive in that context?


I’m not an engineer, but I can say that for a very long time Grindr felt like it was basic, poorly built, and generally unreliable. A couple of years ago it felt like there was a serious wave of investment in the app - the UI got better, it stopped dropping messages and having random outages - but clearly the DNA of the company hasn’t really changed.


OK, I know it’s easy to say “well of course it’s not safe, don’t send nudes and don’t go on sketchy hookups”. But, to paraphrase Drag Race: men are rotted gila monsters. (I’m a gay male, I can say that. Also I speak from experience. I've seen things you people wouldn't believe.)

So, as a thought exercise, how do you make an app like this more secure? Harm reduction is the name of the game. What are the best practices for this? Is it 2FA? Is it encryption keys linked to one device? Is it copying principles from Signal? Is it just having competent developers?


Uh... One part of it is not returning password reset tokens in the browser. If you know remotely anything about web security this is the most glaring security flaw you could ever encounter.

Other steps are nice to think about, but ensuring basic security measures would preempt 99% of data breaches and "hacks".


The most appalling part is that this was a dedicated endpoint, named "password-reset". This wasn't some negligent leak, some misconfigured logger. It was done this way on purpose. Somebody thought this was a good idea. And nobody else saw it and thought to question it! It reveals gross institutional incompetence that probably should have been filtered out at the hiring stage.


Could you explain why that's bad for someone who knows nothing about security?

Where should the password reset token be?


The token should only be accessible to the user requesting the password reset, meaning that it would be sent via email (this is the standard password reset flow).

The flaw here is that anyone, even if they did not control the email of the user, could reset the password, because the reset token was returned in the browser, where anyone could see it. Essentially, just by knowing someone's email (not having control over it), you could reset their password.
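To make that concrete, here's a minimal sketch of the broken shape (all names hypothetical, this is not Grindr's actual code): the endpoint mints the token and then hands it straight back in the response body, so the email step no longer matters.

```python
import secrets

ACCOUNTS = {"victim@example.com": {"password": "hunter2"}}
PENDING = {}   # token -> email

def flawed_password_reset(email):
    """Stand-in for the flawed endpoint: it generates the reset token
    AND returns it in the response, where the requester can read it."""
    token = secrets.token_urlsafe(16)
    PENDING[token] = email
    # ...an email with the token is also sent, but it no longer matters...
    return {"resetToken": token}          # <-- the bug

def set_new_password(token, new_password):
    email = PENDING.pop(token)
    ACCOUNTS[email]["password"] = new_password

# The attacker needs nothing but the victim's email ADDRESS,
# not access to the victim's inbox:
resp = flawed_password_reset("victim@example.com")
set_new_password(resp["resetToken"], "attacker-chosen")
```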


You did a really great job breaking that down! Thank you!


It should’ve been sent via email to the registered email address. That lets the account owner reject it (I didn’t request a password reset!) or use it.


In an email sent to the address linked to the account.


Yeah, in this particular case, they were just glaringly stupid.

Just gaming out ideas in my head. I have friends from rather more repressive countries, namely China, where being gay is still a grey area in terms of legality and acceptance, and I’m just thinking of better ways to structure a system.


> rather more repressive countries, namely China, where being gay is still a grey area in terms of legality and acceptance

...what?


Unfortunately, a large part of it can only really be solved by making the world and the laws of the world safer for the LGBTQ community.

I do what I can in terms of promoting certain ideas, creating informational resources, etc. But I am just one person and yadda.

As long as it is okay to jail or kill people merely for being gay, no amount of security will ever really make it safe.

In the meantime, perhaps someone should blog about "dating security best practices for the LGBTQ crowd" or something like that. (It won't be me. I'm just tossing the idea out there.)


For the communication part, E2E encryption is the obvious choice and the Signal Protocol is a great candidate. They could either implement it themselves or make a deal with Open Whisper Systems to dual-license it (not 100% sure, but I think that's what WhatsApp did).

The problem here is the profiles, which can't be E2E encrypted because the server need to run matching algorithms on them. This is where hiring competent developers comes in, along with semi-regular security audits.

Regarding this issue specifically: as far as I'm concerned, a password reset endpoint should return absolutely no information, which should be enforced by an integration test. And I don't only mean the HTTP body here - even the return time of the request (check db, send email if user exists) could be a user enumeration exploit, which for a gay dating app already sounds like a big problem. Throw the email into a queue and return immediately. Have a background worker deal with it asynchronously. Add a random sleep() if you can afford it. if resp.code == 200: "If the address was correct, you will receive a reset link"

In many parts of the world, you could be risking people's lives by having a side-channel user enumeration bug, let alone this level of stupidity. But I doubt your average overworked "full-stack" JS dev would even think about this, and the incentive structure simply isn't there for a for-profit company to hire people who would.
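A minimal sketch of the enumeration-safe shape described above (names hypothetical): the same status code and body come back whether or not the address exists, and the email work is queued for a background worker rather than done inline.

```python
import queue
import secrets

USERS = {"known@example.com"}
EMAIL_QUEUE = queue.Queue()   # drained by a background worker, not the request path

def request_reset(email: str) -> tuple[int, str]:
    """Enumeration-safe reset endpoint: identical response whether or
    not the address exists; no email is sent inline."""
    if email in USERS:
        EMAIL_QUEUE.put((email, secrets.token_urlsafe(32)))
    # Same response either way. (A real implementation would also pad
    # the response time, e.g. the random sleep mentioned above, since
    # the queue put itself isn't perfectly constant-time.)
    return 200, "If the address was correct, you will receive a reset link"

# Known and unknown addresses are indistinguishable from the response:
assert request_reset("known@example.com") == request_reset("nobody@example.com")
```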


[flagged]


Instead of trying to make this about yourself, maybe take the context and infer it as bringing some humor into their comment on a pretty serious topic


I have very little trust in the capabilities and interest of the Grindr team to do anything but making money with overpriced subscriptions. It's riddled with bugs, years old, yet they keep adding new, unnecessary features like video chat to justify their insanely priced "unlimited" subscription.

This year there have been a few months where your own profile data would not load, making you think you'd lost your profile data and having to create it all again. Yet all you needed to do was to restart the app ~10 times to get it to load.

Sometimes messages just... get lost in the ether.

The "online now" notification is flaky.

Grindr Online (web browser) is a whole new mess. I haven't used it in a long while, but in the first months it felt as "professional" as an intern's side project. Also you need to keep Grindr open on the phone while using it, kind of defeating the purpose.

The setting to use the metric system still resets to imperial regularly.

The app is full of fakers, yet they still have no identity validation feature.


I guess the good news is that it requires knowledge of the user's email address to execute. You can't just run it on random people (emails aren't disclosed) and even if you know someone on the app in real life, chances are good that they use a personal address that you won't have.

Still a pretty bad vulnerability and pretty awful that grindr was ignoring it.


Imagine someone running their contact list through this. You could find everyone you know on Grindr right away, and snoop on their conversations and read their personal info...

Not only that, but emails are very easy to find these days with tools like apollo.io.


Good point; even just being able to use it as a tool to play "gay or not" has some pretty awful implications for people who aren't openly gay.


Yes, but even if the referenced security risk is patched, you would still be able to find out if someone has an account or not, since a password reset page will tell you whether it has successfully sent an email to an account.


A good password reset page would not disclose such a fact (it would return a successful response with a message "if this email exists, we'll email you" regardless of whether it actually exists). However, attempting to create an account would disclose that fact by rejecting an account creation attempt with an existing email, unless they use emails purely as communication channels and accounts are uniquely identified by username/account number instead.


>however attempting to create an account would disclose that fact by rejecting an account creation attempt with an existing email, unless they use emails purely as communication channels

They can tell the user to await an e-mail from them with the confirmation link. Then if the e-mail address is already in use, send an e-mail saying, "somebody, probably you, tried to register as <new-username> on <site> but we have you down as <old-username> already". Otherwise, send a normal confirmation link.
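A minimal sketch of that idea (names hypothetical): the page always responds identically, and the distinction between "taken" and "available" lives only inside the email that the address owner receives.

```python
USERS = {"taken@example.com": "old-username"}
OUTBOX = []   # stand-in for an actual mailer

def send_email(to, body):
    OUTBOX.append((to, body))

def register(email, new_username):
    """Enumeration-safe signup: the HTTP response is the same either
    way; only the email content differs."""
    if email in USERS:
        body = (f"Somebody, probably you, tried to register as {new_username}, "
                f"but we have you down as {USERS[email]} already.")
    else:
        body = f"Welcome {new_username}! Click this link to confirm: <link>"
        # The account is only created once the confirmation link is clicked.
    send_email(email, body)
    return "Check your inbox for a confirmation email."

# Existing and fresh addresses get the exact same response:
assert register("taken@example.com", "newname") == register("fresh@example.com", "newname")
```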


This is a very good idea I haven't thought about, thanks!


Grindr saves on storage costs as much as they can. Messages are sent from the backend to a device only once. You can not read old conversations. You can not even see who this account has been talking to. They are only kept on the server until someone logs into the app again.

This also makes it very easy to lose conversations/content.


Social engineering trick: "Hey can you take our picture, my phone is dead, so can you just email it to me?"


It would be very easy to target a large group of individuals at a given organization.


> even if you know someone on the app, chances are good that they use a personal address that you won't have

I doubt that; I bet most users use whatever Gmail/etc personal address they use for other non-work accounts.


Extremely anecdotally: it’s [person_name]@gmail.com

I know of very few friends who go through the process of creating a burner email account to sign up for Grindr. Now, maybe that’s different in other countries, but at least in the States, I would bet good money you can guess their Gmail address.


In the case of Gmail accounts you could simply append "+grindr" (or any other tag) to the local part of the email address to get something (relatively) unguessable.


You could do that, but most users don't.


A startup I worked for had this exact same security issue. I brought it up to the tech lead/CEO but they were in denial about it. Handrolled password reset by dummies basically


Why people are still hand rolling common stuff like this is baffling to me. I'm treading on offensive waters here, but I'd guess this is from a nodejs backend, for some reason it seems to be more common to hand roll stuff like this in node than pretty much any other web language/framework I've worked with.


> Why people are still hand rolling common stuff like this is baffling to me

Don't most systems hand roll their own password reset? Using any backend tech, I mean. This isn't crypto, where hand rolling your own solution is almost always a mistake.


I have handrolled a password reset in Node, but I didn't give the key back to the client. In this case it was actually the Spring framework.


Couldn't you just demonstrate the exploit by resetting any password? (by a willing participant, so as not to be considered as doing something illegal). I wonder how your tech lead could deny that.


"Eh that required too much work, no one will try that in real life"

"Oh you were smart enough to open the dev tools and see that, that won't happen irl"

"oh users don't have important enough info stored on this account so it won't hurt to have someone access it" (<- literally a reasoning used by a site I used in defense of poor security. "the attacker only gets access to your last name and the last 4 digits of your credit card, that's not bad enough to need more security")

Don't put it past an incompetent/lazy/underfunded tech lead to dismiss even a one-click account takeover script.


I've been on Grindr for years, and I know first hand that their support is as bad as it can get. _Seriously_. I know because they also have a big automated ban problem. In trying to fight their bot issue, they've started auto-banning accounts that trigger their filter in some way or another. I've been banned 4 times without cause. Each time you have to contact support, who seemingly are either unable or not allowed to answer with anything other than canned responses. Three times the support person realized the ban was erroneous and lifted it without further ado. One time the person affirmed it, all while refusing to break from the canned responses or provide any justification. Most frustrating experience I've ever had with a digital service.


As far as I can tell, Grindr has had crappy security and a willfully negligent response to security concerns for its entire existence. Don't forget the real-time location tracking of Grindr users. Don't use Grindr.


I'm not a user of Grindr, but isn't the real time location tracking a feature that users actually like?


Yes. "Who else in my structural engineering tute is gay?" was one of the killer apps for GPS on mobile phones.


The real-time location tracking is very central to the app. It’s a grid of people organized by distance.


It’s how you try to find out if the hot guy across the bar might be up for it.


This will continue to happen as long as companies aren't given any reason to care. The incentives simply don't work out, and I highly doubt the market will ever change that at this point.


It would be nice if this level of negligence and incompetence was somehow punished so that it stopped happening so often


I've fixed this exact vulnerability (sans QR code) for a client of mine in the last 2 years. I place the cause for these kinds of issues on the split between "frontend" and "backend" developers, with many frontend developers coming out of code camps able to build client-side rendered single page applications and being very proficient in JavaScript but not having experience with aspects of security-related software design. Back in the olden days, coming through learning PHP which was all server-side, you got a lot more exposure to that. Less so with these React-heavy code camps.


Any recommended resources to improve on this specific gap? Ie backend security for frontend devs


That 'bug' is so stupid and elementary that I'm disinclined to think it's a bug. If they had any security people, it'd never have existed. So ... they just don't give a shit. Surprise?


"Security people" spend most of their time dealing with dubious compliance requirements that rarely improve security (in most cases they annoy users and force them into even less secure workarounds), rather than on actual security work like reviewing code to catch things like this and implementing policies to make sure unreviewed code doesn't make it to production.


And the standards for getting into security vary, a lot. I've worked with extremely knowledgeable security researchers, and people who were promoted from helpdesk (typically in areas like compliance), with very little knowledge outside of some certificates. With the latter I often had to explain pretty basic stuff, like how digital signatures work and why the client needs to know the public key.


In some cases, mandatory compliance measures even worsen security with rules such as requiring some kinds of characters in a password for instance.


I’ve worked on these types of features and this is egregiously bad. Where I work, we won’t even tell you the full email address that we’re sending the password reset to.


All you need to do is buy a seat on an RTB exchange and you can already collect pretty much all the information you need without having to hack anything.

Our digital infrastructure is ludicrously insecure and open to abuse. The stable door is wide open, and has been for over a decade.


I'm not even sure I'd call this a security flaw or bug... It seems like the design was wrong or it just wasn't done right for some reason. A post-mortem on how this ended up in production would be interesting.


Getting in touch with "the right people" seems to be hard at a lot of companies for rare issues like this.

Imagine another rare issue - say I want to speak to the board of directors to give them a buyout offer... Would I manage that through in-app chat?


Why was it necessary for Troy to create an account on behalf of someone else?


He didn't; his friend created the account. It's useful to have someone else, on a different device and IP block, create the account to confirm that it's possible to take over an account that you have no previous connection to.


If the target has halfway competent security response, you just say "Look at this obvious bug in your design" and they fix it. The first part of Troy's post makes it clear that Grindr did not have halfway competent security response.

When you're dealing with a target that doesn't have halfway competent security response, the only option is to have an unequivocal demo that there's a hole, which means you need to break into somebody else's account.

Anything else they'll most likely gaslight you and their users. "No, there was no hole, Troy just accessed his own account, nothing to see, fake news".


Didn't ytcracker work for Grindr?

It's a hard thing to Google, but I follow him on Twitter and I thought that was the case. If so, this is a hilarious event for some other rapper to dunk on.


One reason why generating random email address for each registered account is a good practice if you care about security, and can sometimes save you.


This is why I love Sign in with Apple. Even though developers don’t like it, it’s good for users’ privacy and security.


In theory, yes. But since it has had significant security flaws of its own, since it's so difficult to log in without Apple devices, and since you can't trust Apple to be neutral (they recently deleted all Sign-In-With-Apple accounts from Epic, even though that has very little to do with their dispute and will hurt customers more than Epic), I'd rather reuse my email a couple of times than sell even more of my soul to this volatile company.


They didn't actually remove Epic's access. Epic just claimed that they would with no corroboration. https://www.imore.com/apple-reverses-course-will-still-allow...


fuck "responsible disclosure"

the outcome of this runaround was that grindr stated they will create a bug bounty program

proving once again that a market-based bug bounty program has better-aligned incentives and achieves the same result: vulnerabilities that should have been fixed to begin with actually get fixed.


Must be some framework that has this behaviour as default. Else it would be really really bad.


What would be the use case for a framework that returns the password reset token to a random user requesting a password reset of another account? The token must only be available to the account owner.

A framework like this should not be used.


This seems to be fairly deliberate; the QR code probably gives you a clue. They needed to generate a QR code so the user could just scan it and reset their password.


A lot of these "bugs" are just backdoors that countries might force them to include. When caught they call it a bug.


Shouldn't this incur a massive fine under the GDPR? Isn't this gross negligence in data protection?


GDPR enforcement against non-EU companies has not been tested yet, but this might be a juicy test case if there are any lawyers out there.


Grindr was owned by a Chinese video game company from 2016 to 2020. Under pressure from the US government, it was sold to a Southern California company.


Troy is obviously a good guy but I think he may be stepping into murky waters here with the switch from logging pwns to actively investigating.

Think he'll do a legit job either way but it seems like a gamble to me. Investigative stuff is well...more murky



