Uber paid 20-year-old Florida man to keep data breach secret (reuters.com)
336 points by _vvdf on Dec 7, 2017 | 95 comments



Pentester here.

If a pentester downloaded more data than necessary to prove a bug exists, they would be fired.

I don't know whether this person downloaded 57M user records. Maybe it was a 500kb zip file, which would be totally reasonable to grab in a pentest. And then once you realize what it is, you run `srm` on it to ensure it's wiped, then immediately call the client so they can deploy an emergency hotfix and perform forensic analysis to see if the data had already been nabbed.

We know he contacted Uber. We don't know whether he said "Give me $100k and I'll delete the data" or "Give me $100k and I'll keep my mouth shut" or if Uber offered him the $100k or any details at all. In this scenario, it's better to assume the best until proven otherwise. And by "proof" I mean "emails showing what was said," not a second-hand news report that attempts to spin it into an easily clickable form.

But I suppose it could have gone the other way too, and maybe he did. The point is, it's totally reasonable for Uber to raise the price until he was willing to keep quiet about it. If I ran into this bug in the wild, I would be ethically obliged to report it to you, dear readers, after a reasonable time period. But I suppose certain ethics would take a back seat to $100k in my pocket, and I'm not ashamed of that.

But it depends on the data. If we're talking SSNs, that could really screw up people's lives, so I don't think I'd be able to be bought. CC numbers I'd probably overlook, since you can't really mess with someone's life by stealing their CC number. They just get refunded. (edit: On the other hand, businesses eat the cost of fraud, so this would probably need to be reported.)

The point is, it's a big complex topic and there are a bunch of things we don't know. But above all, if you are ever holding data hostage and demanding money to destroy it, you're not a pentester, you're a chump.

edit2: it occurs to me that maybe the 20yo wanted to hold the data in order to prove to the world that the breach really did happen, i.e. his intent was altruistic. I could picture myself doing something misguided like that back when I was 20. But the trick is to keep only a few records at most, and redact everything sensitive. Then tell the truth. The company can't lie and say it didn't happen, since they don't know whether you can prove that it did. And no one is at risk because the data is gone.


"if you are ever holding data hostage and demanding money to destroy it, you're not a pentester, you're a chump."

If you're not getting paid up front, regardless of the outcome, you're not a 'pentester' in the first place - you're just doing free work for a company that will then (in some cases) not hesitate to sue you just for telling them they have a problem. Fuck that. You're taking some sort of moral high ground here, which is to be expected, as you're in the industry, or as we say in Dutch - "whose bread one eats, whose word one speaks". But just because a bunch of people with security services to sell say it's so, doesn't make it true.

"Responsible disclosure" is bullshit. These "bug bounty" programs are just a cover up - a way to retroactively say "but we had procedures!". The real chumps are the people spending days or weeks looking for issues and then being send off with a $250 Starbucks coupon and a pat on the head. The days of 'playing nice' or 'doing the 'ethical' rollseyes thing' are over. Today's internet is an all-out, free for all warzone (security-wise). "Responsible disclosure" is a PR scapegoat, a smoke curtain devised by companies unwilling to spend what it takes to make our eye-wateringly bad state of infrastructure seem... if not good, then at least less crap.

I don't have a horse in this race; I stopped caring about infosec 15+ years ago when the full contact spirit of the scene began to fade. I just assume that anything with a keyboard I type on is compromised and adapt my behavior to match. But it does still make me angry that so many people bought into this whole spiel of blaming whoever finds the issue, instead of holding those that caused it in the first place responsible. It's morally equivalent to the GFC bailouts, except that there's not even a 'too big to fail' argument to be made.


There are a lot of people who do this type of work for status rather than money. The bug bounty programs are a way for otherwise would-be hackers to gain recognition, and a small chunk of change for stuff they would be doing anyway, which would likely net them less money and more legal trouble.


I agree with you; however, when you get paid is irrelevant.

It's just a matter of having a clear objective and guidelines scoped out in a contract.


Yes, I meant 'agree upfront that you'll get paid at all', not so much that the invoice needs to be paid before you start the work - that was unclear phrasing on my part.


> CC numbers I'd probably overlook, since you can't really mess with someone's life by stealing their CC number. They just get refunded

This doesn't happen in my parts of the world (Eastern European country) where most of the people still use debit-cards. Card cloning (https://en.wikipedia.org/wiki/Credit_card_fraud#Skimming) is a frequent crime around here, and it absolutely and negatively affects those who end up on the wrong side of it all (i.e. the card-holders). A former friend of mine had her money stolen from her bank account when someone had cloned her card, and the procedure for getting the money back was to go to the police, file a police report, and then go to the bank and complain about it with said police report in hand, hoping that she'd receive a refund in the next 6 months.


> This doesn't happen in my parts of the world (Eastern European country) where most of the people still use debit-cards.

In the Netherlands banks are still responsible, no matter whether it's a debit or credit card. It's pretty common for the bank to have refunded someone before the customer notices anything.

Skimming only worked really easily with the swipe system; the chip reduced the amount of fraud considerably. Plus they only allow the card to be used in Europe by default (you can easily change this setting at any bank).

People now moved to social engineering (pretending to be the bank and telling people to transfer money).


And that’s why there used to be a card system in Europe that didn’t rely on numbers, but used good cryptography.

Nowadays EC is mostly dead, though, except for Germany.


>Nowadays EC is mostly dead, though, except for Germany.

It's dead in Germany as well.

For quite a long time the banks rebranded their scheme as "electronic cash" and reused the logo. But a few years back they changed their brand to "girocard" [2] and MasterCard bought the famous "ec" trademark [1].

I especially like the consorsfinanz card, which is just a regular Debit MasterCard.

https://www.mastercard.de/content/dam/mccom/de-de/privatkund...

[1]: https://www.mastercard.de/de-de/privatkunden/produkte-featur...

[2]: https://die-dk.de/zahlungsverkehr/zulassungsverfahren/electr... (in German)


> It's dead in Germany as well.

A 2017 study found that while over 80% of Germans have an EC card, less than 6% have a regular debit or credit card. As a result, the entire payment structure is still aimed at EC.

Mostly because only 2-3 banks offer free CCs or Debit cards, and the others have 30-90€ fees per year.

And now that the EU forced CCs (which previously had between 3 and 7% fees (!)) to the same fees as EC (between 0.125% and 0.25%), even the existing free CC offerings are moving towards high fees.


The card you call "EC card" (which is still called that in colloquial usage) is called "Girocard" officially and by the poster you replied to – you are not disagreeing except on terminology.

It should be noted that what you call "EC card" is a Germany-only thing from its inception. There was another, totally different scheme under the name "Eurocheque" used internationally in Europe but that is completely dead for a few decades. It never relied on cryptography but rather holograms. The logo was then used for the "Electronic Cash" system which never was supported outside Germany.

(I would also dispute the 6% number. Yes, the scheme of the German banks is really popular, but 6% is too low and doesn't match the numbers I know. Which study would that be?)


> I would also dispute the 6% number. Yes, the scheme of the German banks is really popular, but 6% is too low and doesn't match the numbers I know. Which study would that be?

I don’t have it on hand, but it was a study comparing the costs of cash, and prevalence of different payment methods, from Steinbeis I think, for either the Bundesdruckerei, or another federal agency.

> but 6% is too low and doesn't match the numbers I know

Remember that until 2015, REWE was the only retailer in most of DE to even accept CCs, and even today, most CCs still have fees above 40€/year.

A card that no retailer accepts, which is useless in German online stores, and which costs you money, is only interesting to those who travel internationally.


I don't know of such a 2017 study. The closest is the Bundesbank 2014 one https://www.bundesbank.de/Redaktion/DE/Downloads/Veroeffentl... which says 97% own a Girocard and 34% a credit card. 21% of the people that own a credit card responded that they got it for free with their bank account. (The second volume of that project that was published this year doesn't contain any new numbers as far as I remember – they are currently conducting new surveys.)

There is also the 2013 Steinbeis Cost of Cash study: http://www.steinbeis-research.de/images/pdf-documents/CFP_Co... It cites EHI numbers, about 30% with a credit card.


EMV still uses crypto, as does contactless EMV.

It's pretty good and has more or less killed cloning.


I doubt the man resorted to extortion. $100k seems pretty reasonable for the severity of this issue, and given Uber's budget it could've been a reasonable initial offer.


There's clearly a spectrum between "reward" and "extortion", but... come on. The point of a bug bounty is to make it worthwhile for smart hackers to report their findings to the company in an ethical way, which broadly means that it should be comparable to the kind of money they can make for their labor on the open market.

A hundred grand for a few weeks or a month of work is way, way above that level. This is a jackpot, not a reward. Whether Uber threw out the number or the guy demanded it probably won't ever be known, but I know where I'd put my bet.


If you consider the value of such a hack on the open market, $100k makes more sense. The bug bounty program is supposed to make it more lucrative to be a white hat than to be a black hat, so $100k for something this severe is in the realm of possibility. Whether it took the hacker hours or months isn't really a factor... it's just about the value of the hack.

For comparison, Apple's bug bounty program says they'll pay $100k for a hack to extract data from the secure enclave.

(Bug bounty programs are often probably underpaying compared to what something would be worth on the black market, but clean money is worth lots more than dirty money with a prison risk)


I’m sorry, but all of this comment is incorrect.

1. Vulnerabilities in the individual websites of specific companies have typically little to no salable value on a “open market”, black or otherwise.

2. People who manage bug bounty programs know this, and those programs are not designed to compete with a shadowy underworld. They’re just an incentive for reporting security vulnerabilities.

3. Apple’s security vulnerabilities actually do compete with a market for the sale of exploits, but this is because vulnerabilities in iOS or macOS represent vulnerabilities in deployed operating systems and software for which they are not the sole arbiter of an update decision.

> Bug bounty programs are often probably underpaying compared to what something would be worth on the black market, but clean money is worth lots more than dirty money with a prison risk

I say the following as someone who has: 1) managed a bug bounty internally as a security engineer, 2) managed bug bounties as a consultant for various tech companies of various sizes, 3) reported security vulnerabilities in bounty programs for companies you’ve heard of, 4) spoken professionally with engineers at tiny, small, medium and large companies running programs and 5) sold vulnerabilities for various reasons:

Bug bounty programs are emphatically not underpaying relative to a black market. Black market exchanges exist for vulnerabilities which impact operating systems, widely used open source software and languages. A key component of the value of a vulnerability is its half-life - that is to say, how long it can be expected to be useful. A vulnerability in Ubuntu has a half-life of years, perhaps decades. A vulnerability in Uber’s web applications has a half-life of one week. In 15 years, you will reliably find servers on the internet chugging along with a horribly misconfigured, vulnerable version of Windows or Debian and an open service written in Python 2.7. In contrast, Uber’s web applications will scarcely look the same in 15 years, and the company can deploy a hotfix to the entire landscape of the vulnerability (their centralized servers) in 24 hours.

Can you concoct a scenario in which a hypothetical saboteur manages to weaponize and capitalize on an exploit in Facebook Ads Manager, or some random Uber server with sensitive data, within a week? Sure, but it’s contrived. The risk/reward ratio just isn’t really there.

I’ve continually crusaded against what you’re claiming on HN for literally years now. It’s simply not true. I don’t mean to be harsh on you in particular, but the confident repetition of incorrect claims becomes frustrating.


> 1. Vulnerabilities in the individual websites of specific companies have typically little to no salable value on a “open market”, black or otherwise.

This may apply to regular random companies, but does it really apply to very known, rich brands like Uber?

> Can you concoct a scenario in which a hypothetical saboteur manages to weaponize and capitalize on an exploit in (...) some random Uber server with sensitive data, within a week? Sure, but it’s contrived. The risk/reward ratio just isn’t really there.

This is Uber we're talking about. It's not exactly a universally loved company. I can easily imagine someone interested in profiting off the extra bad press and attention caused by a data breach.


>This may apply to regular random companies, but does it really apply to very known, rich brands like Uber?

I would pay 10BTC for a recent copy of Uber user db containing at least the emails, names and phone numbers of all users.


> Can you concoct a scenario in which a hypothetical saboteur manages to weaponize and capitalize on an exploit in Facebook Ads Manager, or some random Uber server with sensitive data, within a week? Sure, but it’s contrived. The risk/reward ratio just isn’t really there.

Sure, kill the credit cards, who gives a fuck.

Knowing where Uber users live, what time they're typically home on a Friday or Saturday night, and whether or not they're throwing up with a chance of not remembering anything (that I stole) would be fantastic. Oh, I could also sell this to anyone doing ANY datamining to easily enrich their data set.

This is 10 seconds worth of thought, do you really think the Uber data set has so little value?


> This is 10 seconds worth of thought, do you really think the Uber data set has so little value?

No. I’m saying that a vulnerability in Uber’s software has very little value.

More precisely, I’ve sold data (and analysis thereof) to the financial sector. I’ve even sold unique data on Uber and UberEats specifically (not gained through a security vulnerability). Data and vulnerabilities are distinct products with separate buyers. Companies interested in data like this are mostly interested in it being sourced, at worst, through scraping or mining. They’re usually skittish about outright vulnerabilities, and have a sense of how likely it is data was obtained in a legally defensible manner.

On the other hand, buyers of vulnerabilities are mostly not using them for interesting dataset acquisition. They weaponize the vulnerabilities themselves instead of buying any single output from a vulnerability, and they mostly use them for developing botnets or constructing online “holes” for identity and credit card harvesting on an ongoing basis.

The point of purchasing vulnerabilities is gaining a privileged position for ongoing compromise that replenishes for a reasonably long time. No one is saying these vulnerabilities are bad; I’m specifically telling you the vulnerabilities are not generally salable, because the parties interested in them have little to no overlap with the parties interested in data. Furthermore, those two markets have separate intentions, processes and risk/reward ratios.

A dataset and a vulnerability that can lead to a dataset are simply not comparable. I believe someone would probably be willing to purchase this particular data, but I do not believe you could weaponize this data on an open market with any regularity, and bug bounty programs would not take this into account when calibrating their payouts. Finding an organization willing to buy a legally sourced, unique dataset is comparatively easy. So is finding an organization willing to buy a vulnerability that can be weaponized towards a significant number of servers on the internet. But finding an organization willing to buy a vulnerability just for its data value, or an organization willing to use illegally sourced data, is hard. Not impossible, but rarer than either of the other two examples. There is not a regular market for it.


I agree with you that bug bounty programs for small companies/products aren't competing with the black market, but big ones like Apple, Uber, and Facebook certainly are. I was really only referring to these large folks. I agree that an XSS bug on AcmeBizSoftCo doesn't have much of a black market.

To take your FB Ad Manager example... just recently there was a bug that allowed people to start campaigns for free (by somehow charging the bill to unrelated accounts). A bug like that has a small half-life, but if it lets someone use up a million dollars in targeted ads over one weekend for free, I would think you could still get quite a bit for it on a black market.


User personal data, account information and relationships, financial data, and the PR impact of having these hacks made public in the wrong light during an ugly news cycle carry value far beyond the half-life of the vulnerability itself.


I would quibble with this, because in general things like share price have little impact from security breaches unless it’s something like medical R&D.

But instead of taking that argument I’m going to say this: I have been heavily involved in bug bounty programs during my career thus far and I’ve sold data to hedge funds for companies like Uber (not using security vulnerabilities, however). In fact, I have specifically used unique, originally sourced and curated UberEats data to forecast GrubHub’s revenue and market share over time. I’ve even managed bug bounties for several companies and participated in them. I’ve had people threaten me with taking a vulnerability public in the hopes of receiving a payout, etc, etc.

I’m harping on all of this because vulnerabilities and data are very different products, and have very different markets. A vulnerability is salable to a black market under specific conditions; the people interested in acquiring unique data are, to a first approximation, none of the people interested in buying a vulnerability. One of these markets is interested in illegally generated, extremely high profit on an ongoing basis which they can mostly control end to end. The other market is very risk-averse, and is interested in profitable analysis of the data.

You could certainly find some counter-party willing to buy this kind of data eventually, but it would not be a traditional blackhat organization, and it would not be a routine regularity as it is with vulnerability exchanges. Further, bug bounty programs still would not calibrate their prices based on this, because it would be rarer than vulnerability sales.


You're quite misinformed on the value of such a vuln on the black market. Access to companies is indeed something that blackhats monetize. It's different from selling 0day in some software, but there is a market for it.


This is not access to anything. This is a static dataset.


Or you can go into the business of running exploits as a service à la Cellebrite.


If we were talking about any other company I'd probably agree with you, but I definitely think it would be Uber's style to offer 100k as hush money without being asked for hush money.

"Just throw money at it" seems to be a more integral strategy of their corporate playbook than most companies.


> The point of a bug bounty is to make it worthwhile for smart hackers to report their findings to the company in an ethical way, which broadly means that it should be comparable to the kind of money they can make for their labor on the open market.

To a first approximation, no bug bounty for web application or mobile application software competes with any black market. In fact there is no black market for those vulnerabilities. These vulnerabilities exist in a centralized system, and therefore have virtually no half-life, which means little to no value. They’re not salable.

Much to the chagrin of people who speculate one way or another about bug bounty payouts on message boards like this one, bug bounty programs do not calibrate their program payouts to compete with a real or perceived black market.


I'm in agreement that the person here couldn't readily turn around and sell the exploit, but what about the data?


At that point you’re comparing apples to oranges. There is a white hat market for company vulnerabilities, but not a black hat market for them. Whereas for stolen data, there is a black hat market, but not a white hat market.

The reason for this is obvious: bug reports have value to a company, but dumps of their own internal data do not. The only buyers of vulnerabilities on a white hat market are the companies who are victims of the vulnerabilities. The bug reports are valuable to those companies because it allows them to patch previously unknown vulnerabilities. But data breaches are not valuable to them in the same way. Why would they pay you for their own data that they already have? It’s not analogous to buying vulnerabilities, because the company gains nothing new.

Therefore the argument that the stolen data would be worth more on the black market than the white market is a moot point, because there is no white market for stolen data. If you’ve stolen data, you’ve overstepped the bounds of any reasonable bug bounty program, at a greatly increased risk of prosecution. Your options are to 1) stop and do nothing, 2) sell the data on the black market, 3) attempt to responsibly disclose the breach, knowing you are in a vulnerable legal position having downloaded the data, or 4) extort the company.

(Notice that “sell the data on a white hat market” is not an option.)

What we do not know in this case is if the hacker chose #3 or #4. It seems like he used social engineering to get the GitHub credentials, which would normally fall out of bounds for bug bounty programs (never mind the data breach itself). That seems to support the speculative conclusion that the hacker went into this with malicious intent. So does the fact that he resorted to hackerone seemingly post-facto, as the article mentions, so Uber could “verify his identity.” But perhaps he was just naive. We don’t know.

I second what others have said. The fact that this is Uber makes me inclined to believe they initiated the offer of $100k.


It's more than that even. As a consumer, you should have a right to know when data you entrusted to someone has been compromised. Uber needed to report this even assuming this was a "white hat" hacker.


Indeed. And that is likely one of many reasons why bug bounty programs typically forbid downloading any more data than necessary to prove a bug.


Comparing a bug bounty to a jackpot seems reasonable. Researchers are paid when they find something. They might work for many hours and find nothing. So when they do find a bug, the payout has to be worthwhile.


You may be right, but you can't just compare to the market rate of a month's labor. You have to price in the rather high probability that no serious bug would be found and the risk-adjusted rate of return.


Uber was created to bet on a billion-dollar jackpot, so I think they should respect people who are betting on smaller jackpots. I don't understand why the little guy is supposed to ask only for his labor rate while the company is shooting for the really big money.


I hesitate to defend Uber, but I think you're mixing the post-hoc reward of $100,000, with the propter hoc expected value of the guy's work.

Such reward schemes are set up as a sort of competition, or bet. You invest time not knowing if you will find anything worthy of a reward. If you expect to have a 10% chance of finding a vulnerability, the reward needs to be 10x the value of the work for it to be a worthwhile use of your time.
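To make that break-even arithmetic concrete, here's a minimal sketch in Python; the hourly rate, hours, and hit rate are made-up numbers for illustration, not anything specific to Uber or the industry:

    # Break-even bounty for spec work: the reward has to cover the expected
    # number of failed attempts per success. All numbers are illustrative.
    hourly_rate = 150      # what the researcher could bill elsewhere, $/hr
    hours_spent = 160      # roughly a month of full-time digging
    p_find = 0.10          # assumed chance an engagement yields a payable bug

    cost_of_one_attempt = hourly_rate * hours_spent      # $24,000
    break_even_reward = cost_of_one_attempt / p_find     # $240,000

    print(f"cost per attempt:  ${cost_of_one_attempt:,.0f}")
    print(f"break-even bounty: ${break_even_reward:,.0f}")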


A few weeks' work with no guarantee of success; they could have worked for a year and made $0, so the good times have to earn you enough to get through the bad.


There are companies that charge $50-100K for a vulnerability test, and you get a 20- to 100-page report with one guy onsite for 5 to 8 days. So no, a month of that work is definitely worth more than $100K; even an infosec analyst can get that as a month's pay.


The appropriate calculation is the amount of time it took to find the issue, divided by the probability of finding the issue.


$1k/hr consulting fee for 100 hours. That seems normal.


"If any pentester would download more data than necessary to prove a bug exists, they would be fired."

I disagree; sticking to that would even be foolish in a full-scope pentest. It depends on what the data is. I would download (and have downloaded) any data of operational value using a bug that was found. If I can dump the entire company's employee password hashes, I'm going to. And after the engagement, a company-wide email is going out to roll those credentials, which, whether I personally viewed them or not, are now considered compromised. There are some exceptions if there is reason to have high confidence that no one else ever exploited the bug, but that's taking a big chance.

Data of no operational value would be difficult to explain, and customer data or credentials would be off limits beyond proving the bug. Even though it's likely legally fine in most cases (note: a work-for-hire contract or an internal pen test team have very different rules of engagement than bug bounty spec work), it would be highly unprofessional. You're going to be putting the customer in a very awkward position at best. But if you're hired to perform services on behalf of the company, breach disclosure laws don't apply any more than if you are hired as a contractor to migrate their database from one system to another.


You're quite right, and I winced at my imprecision. Thanks for calling that out. I clarified here: https://news.ycombinator.com/item?id=15872824


It's also possible that he discovered it, reported it, got paid the bounty, and all after someone else discovered it, didn't report it, and got paid elsewhere.

(note: clear conjecture, I only read half of the article before becoming more interested in the HN discussion)


57M user records can't possibly be a 500kb zip file. Even if you could decompress a single byte to a whole user record (i.e. there could be at most 256 different user records possible), the compressed size would still be 57 MB, but because you can't, it would be at least half a gigabyte compressed. It's not something you can download as a side effect of exploring; you'd get it only intentionally, or if you're fetching a whole database dump or disk/VM image.
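For what it's worth, the back-of-the-envelope version of that bound, assuming (purely for illustration) something like 100 raw bytes per record and a 3-5x compression ratio on that kind of text:

    # Rough size bounds for 57M user records. Per-record sizes and the
    # compression ratios are assumptions; only the record count is given.
    records = 57_000_000

    # Absolute floor: even at one byte per record the file is 57 MB.
    print(f"1 byte/record floor: {records / 1e6:.0f} MB")

    # More realistic: name + email + phone is on the order of 100 raw bytes,
    # and general-purpose compressors manage maybe 3-5x on that kind of text.
    raw_bytes = records * 100
    for ratio in (3, 5):
        print(f"~100 bytes/record at {ratio}x compression: "
              f"{raw_bytes / ratio / 1e9:.1f} GB")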


I was assuming middle-out compression.

It's routine during a netpen to download very large unlabeled files, especially VM images. You don't know whether they're a security threat until you check, and your goal is to escalate as much as possible. Even if it's named backup-20141120, you don't know what it's a backup of, or whether it's encrypted. You need to at least start downloading it to check the file headers.
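Checking what an unlabeled file is only needs its first few bytes, so you can triage before the download even finishes. A minimal sketch (the magic-byte table is a small illustrative subset, and the file name is the hypothetical one from above):

    # Guess a file's type from its leading "magic" bytes so a huge unlabeled
    # download can be triaged early.
    MAGIC = {
        b"PK\x03\x04": "zip archive",
        b"\x1f\x8b": "gzip stream",
        b"KDMV": "VMware sparse disk image (vmdk)",
        b"QFI\xfb": "qcow/qcow2 disk image",
        b"SQLite format 3\x00": "SQLite database",
    }

    def sniff(path: str) -> str:
        with open(path, "rb") as f:
            head = f.read(32)  # only the first handful of bytes matter
        for magic, kind in MAGIC.items():
            if head.startswith(magic):
                return kind
        return "unknown (possibly encrypted, or plain text)"

    print(sniff("backup-20141120"))  # hypothetical file name from above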

A good pentester will try to suck down as many different things as possible and sort ruthlessly for anything that could be useful: keys, passwords, notes, bash histories, logs, everything. But that's why we do so on an encrypted, isolated drive. The data never leaves the partition, and it's deleted at the conclusion of the test.

People working on HackerOne don't have that kind of discipline, so it's important to err on the side of caution. But it's totally valid to grab a VM snapshot and look through it to check how to pivot elsewhere.

But the moment you realize it's super sensitive data, you want to wipe it and contact them immediately. (Perhaps after checking whether there's anything you can use to escalate privileges.) If it's named "ssns.txt", you probably still want to download it just to check it really is SSNs before you go running to them about their exposed text file.

The point is, it sounds very dramatic to say "Hacker downloaded 57M user records in 1GB of data", but sometimes you don't know they're user records until you look, and sometimes you can't look until it's fully downloaded. And your goal is to safely simulate what a real hacker would do. That's the point of a pentest, and why ethics and trust are so important.

I've snagged several VM image files during pentests at various companies. I don't remember whether any of them turned out to be useful, but I do remember poking through them in vmware fusion to see what the devs had littered around.

Now: I was under strict NDA. That isn't true for HackerOne finders. Every company has different rules. Some tell you up front not to do this, e.g. https://hackerone.com/deptofdefense ("You do not exfiltrate any data under any circumstances.") https://hackerone.com/square ("Never attempt to view, modify, or damage data belonging to others.")

Crucially, Uber does not: https://hackerone.com/uber

Searching for "data" shows that everything is in scope. So it's really tricky to say there was malicious intent here.

edit: Usually it's the other way around, though: You find an exploit that gives you a little drip of data, so you know that you could technically enumerate the entire dataset if you wanted to. Obviously, don't do that, because you already know from the first drip whether there will be anything useful if you keep going.


It's not routine to escalate. There was an article here not long ago about a guy who found an RCE on Facebook, escalated to everything he could, and kept a copy of all the data; that didn't go well.


It's extremely important to understand the terms of each bug bounty program. FB prohibits escalation (https://www.facebook.com/whitehat/):

You do not exploit a security issue you discover for any reason. (This includes demonstrating additional risk, such as attempted compromise of sensitive company data or probing for additional issues.)


The issue, though, is that this person isn't a proper pentester, and that these things obviously go underreported when someone extorts you. Sure, the person is the chump, but we don't know how much data has been exfiltrated if they go the extortion route.


Uber paid this Florida man $100K as a bug bounty, and the secrecy was just part of the deal. My understanding is that bug bounties usually come with a reasonable disclosure process, but in this case Uber did not want that because of the severity of the issue, which in my opinion is wrong because of the potential impact of the bug. In any case, I wouldn't be surprised if similar cases happened at other big companies like Google and Facebook.

Edit: According to the disclosure process on https://www.hackerone.com/disclosure-guidelines, there's nothing about disclosures lasting this long.


> Uber did not want this because of the severity of the issue

And they were currently negotiating with the FTC over a different prior undisclosed data breach.


People find game-over vulnerabilities and report them to bug bounties all the time. To a first approximation, 100% of serverside RCE vulnerabilities reported through HackerOne create comparable conditions. But the reporters don't have their machines forensically imaged or violate breach disclosure laws when they report on H1.

This doesn't add up.


So, knowing it was HackerOne, was this as nefarious as the news is making it out to be? It sounds like he found private keys in the GitHub repos as was mentioned; that doesn’t necessarily mean that he downloaded data or even blackmailed Uber. Uber is still in the wrong for keeping a potential breach secret, but I’m beginning to have my doubts here.


As someone involved in bug bounties, this was completely different from a traditional bug bounty reward. Uber disguised this payment as a "bug bounty" to hide what it actually was -- a ransom payment to get the hacker to destroy the data. If this were truly under the realm of bug bounties, the hacker would have violated Uber's policies for exploiting the flaw and extracting information (the reward also exceeded Uber's top payout tenfold).

In short, the issue people are taking with Uber is that they tried to pass off a security breach as having taken proactive measures, while this was a case of ransom.


But if he found them, then someone else could have _already_ found them and downloaded the data.

The problem isn't that Uber paid him for finding the vulnerability, but that Uber kept it secret for so long.


That doesn't make sense. Serverside RCEs get disclosed all the time. If you take the median large SFBA tech company and stipulate serverside RCE, you're almost 100% of the time going to be an hour or two away from a breach-disclosure-law event. But that never happens. What was different in this case?


Uber also conducted a forensic analysis of the hacker’s machine to make sure the data had been purged, the sources said.

It's a good thing no hacker would ever think to make a backup copy of the file on a USB stick or upload it to some cloud provider.


Looking for evidence of data exfiltration is common procedure in any forensics review.


No doubt, but after the fact it's very hard to detect any evidence especially if the hacker was purposely trying to cover his tracks. Maybe they can see that a USB drive was plugged in, but they won't know what may have been copied to that drive or to a network drive.


You'd be surprised at what can occasionally be found.

I think that I might be able to cover my tracks, but I'm definitely not sufficiently certain to stake my freedom on it. There's always a chance that I'd make some mistake and they'd happen to be more thorough than I am, and the same applies to everyone (including the authors of APTs employed by the major intelligence services around the world). A 90% chance of getting some extra money on top of what he got isn't worth a 10% chance of criminal prosecution. Knowing that the machine is going to be analyzed by someone with a lot of resources is a sufficient deterrent IMHO.


Yea, if you are worried about this particular threat, you plug the hard drive into a device in read only mode, copy the data off, and put it back in your machine using a Linux live CD.

That said, Windows does keep a whole lot of information on activities in the registry and filesystem.


The Florida hacker paid a second person for services that involved accessing GitHub, ... to obtain credentials for access to Uber data stored elsewhere, one of the sources said.

In what world is the FL man (and his 2nd person) not a felon?


A well known, heavily trafficked site was put under onerous FTC sanction and had to agree to prepare monthly reports about how they were keeping users' data safe for the next ten years.

Perhaps Uber will face the same penalty.


What site was that? Surely if they were FTC-sanctioned, it's public information, right?



this seemed like a bug bounty from the beginning, and the media was disingenuous to spin it as blackmail.

if there was no evidence that any data was actually compromised, I'm not sure I see a reason why they would need to disclose this to the public.


> Uber received an email last year from an anonymous person demanding money in exchange for user data ...

Doesn't sound like a typical bug bounty to me.


That sounds more like you’ve never been on the receiving end of a bug bounty program :)


A data breach isn’t a bug bounty.


It doesn't matter if this was a bug bounty or not. It doesn't matter whether blackmail occurred.

The difficulty for Uber is that the existence of this bug was kept secret from the public, whose information may have been stolen. Nobody knows that this bug was not exploited by other parties.


> the personal data of 57 million passengers and 600,000 drivers were stolen in a breach that occurred in October 2016, and that it paid the hacker $100,000 to destroy the information


Not news

Long time security researcher here... most smart companies do not bribe, they simply hire, post-exploit. An employee or even a consultant under NDA can’t disclose very easily. In fact many Fortune 500s will seek out up-and-coming analysts for some fluff project with little other reason than to get that NDA.


So, they are willing to pay $100k but normally their max payout for the severest bug is $15k?


Top end bugs typically get paid far above the 'max payout' for more typical bugs, and that is not unique to Uber. Maybe not 6x higher, but 'max payout' is a soft ceiling.


I bet Uber was hacked long before 2016, as $1k was stolen from my Uber account in May 2014. It was supposedly for a ride in London while I was in DC. When this happened I searched Twitter and saw ten to 20 people a day complaining of the same thing.

Uber's PR response at the time was that it was the user's fault for not choosing a complicated password, vs. owning up that there was a problem and/or being concerned for their customers. What a great company!


You can go onto underground markets and forums today and buy hundreds to thousands of Uber accounts. They're obtained via endpoint malware and shared passwords.

Buying and selling Netflix, porn, and Uber accounts in this way is very common and popular.

If your account was affected, there's a very good chance it was via this method rather than a broader Uber breach.


Oh Uber was hacked in 2014 per this article https://www.google.com/amp/s/www.cnbc.com/amp/2017/11/21/ube...

Scroll to the middle of the article.


Did you have a secure, complex password on your account that wasn't shared with any other accounts?

How was the money stolen? Is there some way to transfer money or vouchers from Uber to someone else?


A driver rides with another phone and the hacked account, racks up an expensive ride, and gets paid for it. Not sure how easy that'd be to do without getting caught though.


In the disclosure it says that the attack included names, email addresses and phone numbers. It did not contain any passwords or social security numbers, so your passwords must have been compromised in some other way if your Uber account was compromised.


Whose disclosures? Uber's ... lmao

Can we trust anything they say? I just know my account was hacked in 2014... money stolen from me, and then seeing tons of others suffering the same fate. All the while the company that let it happen laughed at its users... blaming them.


Uber employee said "Uber hack - What a fucked up way to handle such a problem."

https://us.teamblind.com/article/uber-hack-OYUM6OPh


Finally, Florida Man gets a break! Normally he has such a hard life:

https://www.reddit.com/r/FloridaMan/


And that’s why you shouldn’t check in passwords or tokens with Git, kids.
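One cheap guardrail, sketched below as a pre-commit hook in Python. The patterns are illustrative, not a complete secret scanner; in practice you'd reach for a dedicated tool like git-secrets:

    #!/usr/bin/env python3
    # Minimal sketch of a .git/hooks/pre-commit hook that refuses commits
    # whose staged changes look like they contain credentials.
    import re
    import subprocess
    import sys

    SUSPICIOUS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
        re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
        re.compile(r"(?i)(password|secret|token)\s*[:=]\s*['\"][^'\"]{8,}"),
    ]

    # Look only at what is staged for this commit.
    staged = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

    hits = [line for line in staged.splitlines()
            if line.startswith("+") and any(p.search(line) for p in SUSPICIOUS)]

    if hits:
        print("Refusing to commit; these staged lines look like secrets:")
        for h in hits:
            print("  " + h)
        sys.exit(1)  # non-zero exit aborts the commit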


Isn't this like a bug bounty? I know Uber is not a shining example of ethical practices, but could this be a case of a genuine bug bounty?


Lol sounds like a settlement but without the lawyers.


http://reddit.com/r/floridaman

So he's doing corporate espionage stuff now?


#deleteUber


I see Florida Man has made the news again


The most important principle on HN, though, is to make thoughtful comments. Thoughtful in both senses: civil and substantial.

The test for substance is a lot like it is for links. Does your comment teach us anything? There are two ways to do that: by pointing out some consideration that hadn't previously been mentioned, and by giving more information about the topic, perhaps from personal experience. Whereas comments like "LOL!" or worse still, "That's retarded!" teach us nothing.


Eh. It's better to just let stuff like this slide. I get the impulse too, but it was a mistake for me to act on it. HN doesn't need a hall monitor.

Plus it makes boring reading.


This comment is at least a bit more than a LOL, since it's a reference to an internet meme. Also, why are you on a (presumably) throwaway? I mean, given that you're aware of that about commenting on HN, wouldn't you have a longer-standing account? ;)


Perpetuation of a trope communicates group inclusion as domain-specific language. And it's unusual for the actor in this trope to personally benefit.


Always getting up to trouble, that rascal.



