I would rather see more transparency once you are a reporter than shinier leaderboards. It is extremely frustrating to spend a week reverse engineering a vulnerability in an opaque cloud service only to be told it was a known issue (but private), won’t be fixed (but is within 48 hours), and that you don’t qualify for any compensation. I would like to see private issues shared with reporters when they are independently discovered. I would like to see status updates from developers. I would like to see some kind of shared compensation system that acknowledges it can take more than one person to investigate a problem before it is fixable and that even time spent replicating a vulnerability has value.
*Not a Google employee but have worked for a bug bounty*
I agree that everything you've stated would be desirable, and if there were a strong culture and policy of supporting bounty programs from the CEO on down, this could potentially be achievable. However:
dupes - On the bounty side dupes are extremely common and buddies telling buddies about their finds is going to drive fraud up quite a bit. In my triage work I saw very clear attempts at this regularly.
wontfix - This one is largely due to the fact that many bug bounties don't have authority over, or even a shared reporting structure with, the product teams. There's probably room for a consolation prize as long as the bug is in scope, but that's about it. The fact that the bug goes away later could be a fix or could just be part of a new release. This should be extremely rare, though, and is worth following up with the program (again, as long as it's in scope).
sharing issues - This is going to struggle mightily with legal without good contracts and NDAs for each researcher.
status updates - Agree it's frustrating, but this is challenged by the product team/bug bounty alignment noted above. Most bounty programs don't get info from devs either, and if product teams don't listen about fixing bugs, the likelihood that they are going to regularly report on fixes is almost nil.
shared comp - Unless I'm missing your point you can self-organize outside of the bounty program (and many do) for this.
Sounds like the conclusion is that bounty programs need to work more closely with the product teams if they want to be more effective.
Phrased differently, internal organizational challenges should never be a valid reason why a bug is disqualified. It’s completely irrelevant from an outsider’s perspective.
I don't know if 'extremely valid' is good English but it's how I would characterize your assessment. I totally agree.
The challenge, however, is constructing a sustainable model to incentivize product teams to reciprocate this closer working relationship. I would say all but the smallest of companies running bug bounties also have an internal security function that is already doing reporting on vulnerabilities, time to fix, etc. So whatever internal 'reputation' there might be across product teams is well established (and in my experience the culture around bug fixing across product orgs is consistent for both internally discovered and externally reported bugs).
Another thing that happens is that there is typically a backlog of bug fixes, so if a researcher reports a new bug that's Medium priority, it's not going to get prioritized against a backlog of Criticals or Highs. Most of the lack of feedback from devs is simply that there's been no action yet. Once the bug is front and center, the fixes are extremely simple, done within a few days, and rolled out.
What I understood from your original comment was that sometimes things get qualified as “wontfix” because of internal struggles. I understand that these struggles exist, as with any large org with conflicting priorities, especially when the issues are not 0day severity.
I think, however, from an outsider’s perspective, acknowledging it’s a valid bug, but also saying “this will not get fixed any time soon” is infinitely better than just marking it as wontfix.
As such, I’d argue that the minimum effort the product teams would have to put into the bug bounty program is properly qualifying the submissions. If that, for some reason, is too much to ask, I submit it’s a higher-level leadership problem: they want to have a bug bounty program without the necessary resources to qualify the bugs.
Your points are totally valid and I agree with them in principle, I just see them as existing on a spectrum.
One pretty interesting example is GitLab's bounty program, because you can see both sides, the reported issues and how they are tracked on the engineering side:
There's very good organizational alignment here from what I can see, and overall the program seems to be quite adaptive and well supported (e.g. the comment about eliminating the 50% bounty here https://hackerone.com/reports/1154542#activity-11394543, or 'unduping' a report here https://hackerone.com/reports/402658). It's about as best-case as you can get for a bug bounty: a relatively small, single-product company that is exposed to the sunshine effect of a public issue tracker.
That said, if you peruse every one of those issues you're going to find instances where they drop the ball, take too long, make mistakes, etc. etc.
So I would just say that if you ask anyone who has run a bug bounty, they will likely say they were surprised by how difficult it was. You'll also generally find that they are advocates for the researcher community and want to run the best program they can. Everything else is largely a function of the dark art of business prioritization, and the quality of the experience will be a function of how well they and their team are able to navigate it.
That sounds reasonable, and I’m more than willing to believe that. In the end, reality is more messy than a bunch of well-intended processes, and you’ll have a whole bunch of bugs in limbo, which can keep whole teams occupied without making any real progress towards the end goal that should be achieved.
Thanks a lot for your insights, they were very valuable.
> I don't know if 'extremely valid' is good English but it's how I would characterize your assessment. I totally agree.
Completely off-topic comment follows, but I find linguistics interesting, am a native (Br) English speaker, and think it's worthwhile to reassure someone their foreign language skills are fine.
Yes, it's good. (It's extremely valid :wink:). It has a slightly comical flair to it - not sarcastic, just certainly not formal. You could push it further to 'I extremely agree', which is more debatable but as long as they realised you were 'being ironic' nobody would say 'hey that's extremely invalid English' or anything.
Only one thing wrong: characterise (oxendict be damned!) and incentivise are spelt (not spelled) with an 's'. :British-trollface:
Well, 'valid' is comparable (more valid, less valid, etc.). I don't think it can be said that it's incorrect grammar.
It's just an unusual pairing, and 'extremely' is so much of a 'stronger' word that it's got that humorous edge.
Not in formal writing, sure, but it wouldn't make me think the speaker's struggling with English, if anything the opposite - a good command of it and able to twist it in fun ways.
>dupes - On the bounty side dupes are extremely common and buddies telling buddies about their finds is going to drive fraud up quite a bit. In my triage work I saw very clear attempts at this regularly.
Wouldn't looking at the logs of when the reports came in pretty clearly show who made the first report? How does this get confused into being an issue?
Yes. There was never a real problem figuring out who reported it first.
Duplicate reports are likely the most frustrating thing most security researchers will encounter. They put a ton of work into finding the bug, developing a proof of concept, and writing up a detailed report, with the hope and expectation of being rewarded for their effort. So when the triage team comes back and says it's a duplicate and there will be no award, it's incredibly maddening. If someone needs to vent in my direction because of that, I can totally understand. The *problem* was trying to remain diplomatic with people who would sit there and repeatedly claim they were actually first, or that it wasn't a dupe, or that all of this was far too sophisticated for me to understand. Then, finding no sympathy from me, they'd go to Twitter to wail and moan and bash the program with impunity because they know the org won't respond in kind.
Maybe they should be allowed to. If someone is seriously being that petulant about something, the org could post dates of correspondence, and even quote the petulant temper tantrum once it escalates beyond civility. Once the username gets out there, other bounty programs could just put a blanket ignore on it and drive the petulant person into obscurity. But of course this is the here and now, and nobody actually believes facts anymore.
I worked with some true geniuses back then, and the idea of watching them systematically dismantle idiots in a public forum would give me chills. Alas, it wasn't to be. :)
It's really clear to me why people want more transparency on this stuff. I'd want it too if I was submitting to bounties.
But the transparency you're asking for is difficult to actually provide. Meanwhile, for a vendor at Google's scale, there is essentially zero upside to screwing over bounty hunters. At any realistic valuation for a vulnerability, these are rounding error sums to the business. In fact, the exact opposite incentive exists: these bounty programs are deemed to be performing well when they pay out more money, not less. The people managing these bounties aren't paying with their own money. They'd rather make you happy and encourage you to submit more stuff.
The two phenomena you describe here are real and common. But it's simply the case that vendors are generally working through backlogs of issues, triaged by severity. If you're told your finding is a dupe of a private issue, it is overwhelmingly likely that it is. If you pay for every independent discovery of an issue, people game that; worse than the deadweight loss of the bogus bounties, you set up crazy incentives for your dev team to fix marginal issues because they're being gamed, rather than triaging according to real severity.
And, bounty hunters do turn up real bugs that aren't real security issues, but are still bugs. If those bugs are easy to fix, they're going to get fixed! You have the same weird gaming and precedent issues if you start paying out non-exploitable bugfix findings; you encourage people to find and report non-exploitable bugs, and you screw up the team incentives on what to fix.
I don't expect people to like any of this logic, because it boils down to "you should just trust the Google VRP people". But: you should. This isn't worth the cortisol. If it's driving you up a wall, maybe don't participate? There are other ways to market security bug finding skills. :)
If you've never worked triage on a bounty before, my guess is that you can't really imagine how terrible the median interaction is. Maybe the next evolution of these programs will be long-term contract relationships with trusted, successful vuln hunters that address some of these concerns by separating out the people who are good at this stuff from the median submitter (the median submitter invests 2 weeks trying to litigate whether copying a cookie out of the Chrome inspector and pasting it into curl constitutes an account takeover vulnerability).
@tptacek -- you're so awesome. I keep meaning to reply on some of these security threads but then I see you've made the relevant points of sanity in a well reasoned manner.
For what it's worth, when I was setting up the culture and values of Google's first bug bounty programs, I hammered "be magnanimous" into the reward committees. i.e. look for reasons to reward more, not less. Find the value in the information provided, even if the person is being a jerk. etc. I don't think this culture has changed. There are teams of people rooting for incoming reports to succeed, and they get excitement and joy from issuing large bounties (because this means Google security is getting stronger).
They are gig workers. Security gets outsourced to these workers where normally the company would need to pay a salary.
The refusal to pay while fixing the bug in 48 hours shows a power imbalance. I fear this is where we are headed in the future: as AI pumps out most code, all of the remaining work comes in the form of contests where thousands spend 80 hours a week and the company (in the best case) awards one person a cash prize small enough not to hit minimum wage.
Google does not use bug bounties instead of paying people salaries to do software security. Google pays more people to do software security than does any company on the planet. They do bug bounties because you get different bugs from them than you would from any employee. If bounty hunters are benefits-eligible employees, then there are virtually no services a company can buy from anybody that don't qualify similarly; read IRS P15-A; there are basically no IRS classification tests that bug bounties hit.
Here's the "20 factor test" (condensed from Oregon's condensed version) the IRS uses for employee classification:
* Does the company direct where, when, and how the work is done? (You can hunt for bugs from any country in the world, whenever you want, using whichever tools you like.)
* Does the company require company-provided training to provide the service? (The first contact Google has from a bounty hunter can be the bug they submit, and Google will pay).
* Is the service directly integrated into the business the company provides? (No: Google doesn't provide bug bounty services to other companies, and is not in fact a bug bounty with a small search engine attached.)
* Does the company insist on a particular named person to do the work? (No; in fact, until they're actually paid, many bounty hunters are pseudonymous.)
* Does the company control who the provider hires to assist in the work? (Nope, you can hire your own team of people to assist with your bountying, and Google does not care.)
* Does the company maintain a continuous relationship with the service provider? (Nope, you can submit a bug in June and then come back next April with another one, or come back never).
* Does the company determine the schedule on which the work is done, or is the schedule flexible? (Google does not even slightly care what schedule you find bugs on, or even the timeline you use to report bugs on once you find them.)
* Does the company demand full-time hours? (Google doesn't set any hours for bounty hunters whatsoever.)
* Does the company demand that the service be provided on-site? (Bounty hunters work exclusively from their own facilities.)
* Does the company demand that the work be performed in a specific order or sequence? (No, you can look for any combination of bugs in any set of services using whichever set of tests you want to use, be they fuzzing or mitmproxying or scanning, in any order you like.)
* Does the company require regular periodic written or oral reports? (Indeed they do not.)
* Does the company pay hourly, weekly, or monthly, rather than by the project? (Bounties are paid by the bug.)
* Does the company defray expenses for the work? (Nope.)
* Does the company provide the tools and materials needed for the work? (Nope, bounty hunters BYO laptops and Burp Suites.)
* Does the company provide facilities, office space, etc for the work? (Google doesn't provide even so much as a Slack account for bug hunters.)
* Are the earnings from the work predetermined, or can the workers realize greater or lesser profits from the amount of work they put in? (Regardless of the hours committed to bug-hunting, bug hunters earn revenue based on the value of the assets they generate; if you come up with a technique to find dozens of XSS vulnerabilities in a couple hours, you'll make a zillion dollars per hour.)
* Is the relationship exclusive? (Of course not, bounty hunters work with multiple companies as a rule.)
* Can the worker make similar services available to the public? (Not only can they, but independent consulting is the norm for this kind of work.)
* (This is 2 bullets, condensed) Can either party terminate the relationship independently? (Yep.)
Bug bounty participants are not, as a general rule, classifiable as employees. People make noise about it because there's drama about Uber --- where the company exerts substantial control over how employees do the work, when and where they do it, where they check in with their cars, &c, and do so while providing the core service the company offers. Someone on a message board is going to try to suggest that Google is in fact a bug bounty program with a small search engine attached to it, but I mean, come on.
It’s not a question of if a report really is a dupe. This is going to happen constantly, proportionally to participation. The assertion is that secondary reports can have measurable value. Automatically paying dupes is untenable, but it seems like triage should handle dupes containing new information differently. If a vulnerability is sitting as a low priority and a dupe of the underlying vulnerability contains new information about the severity that gets it prioritized, it must have value.
I'm not saying you shouldn't feel that way. I'm saying Google has no incentive to actually screw you over; that they have in fact the exact opposite incentive.
I understand your point, but saying that Google has no incentive isn't accurate. There's always an incentive not to pay for bug reports, simply because it results in a short-term gain. Whether it's a good decision in the long run is another question.
It's more helpful to analyze the situation from a game-theoretic standpoint. Bug bounty programs are a typical example of a cooperative game. If someone reports a security issue, they start the game by cooperating. Google can then choose to cooperate by paying a bounty or they can defect by refusing a reward. Like in the classical prisoner's dilemma, both parties can typically maximize their rewards by choosing to cooperate.
My case is a bit special. While I don't know the exact reasons why Google chose not to reward me, it's possible that they thought I would continue to cooperate even after they defected. But like most rational players, I chose to stop cooperating, which ultimately leads to the worst possible outcome.
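To make the framing above concrete, here is a minimal sketch with invented payoff numbers (the dollar figures, and the assumption that a researcher stops reporting after one defection, are purely illustrative and not taken from any real program). It just shows that skipping one bounty can cost the vendor every report that would have followed.

```python
# Minimal illustration of the bounty "game" with invented numbers.
# BOUNTY and FIX_VALUE are assumptions for illustration only.

BOUNTY = 10_000      # what the vendor pays when it cooperates
FIX_VALUE = 50_000   # assumed value to the vendor of getting the bug reported and fixed

def one_round(vendor_pays: bool) -> tuple[int, int]:
    """Return (researcher payoff, vendor payoff) for a single reported bug."""
    if vendor_pays:
        return BOUNTY, FIX_VALUE - BOUNTY   # both cooperate
    return 0, FIX_VALUE                     # vendor defects and keeps the bounty

def repeated(vendor_pays: bool, rounds: int = 10) -> tuple[int, int]:
    """Assume the researcher stops reporting after the first defection."""
    researcher_total = vendor_total = 0
    for _ in range(rounds):
        r, v = one_round(vendor_pays)
        researcher_total += r
        vendor_total += v
        if not vendor_pays:
            break   # no more reports, no more fixes
    return researcher_total, vendor_total

print(repeated(vendor_pays=True))   # (100000, 400000)
print(repeated(vendor_pays=False))  # (0, 50000): a one-round saving, then nothing
```

Under these made-up numbers the vendor comes out far ahead by paying, which is the point being made: the short-term saving from defecting is dwarfed by the reports that never arrive afterwards.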
Being the one who decides to pay or not pay bounties in our bug bounty program: trust me when I say that the internal discussion, fact finding, classification, quality control, release planning, and the rest exceeds your bounty by a factor of 10.
Same goes for the dialogues with unhappy hunters who want 'proof' for the argument that a bug or vulnerability is not there.
There is literally no financial incentive for me at all to not reward, au contraire actually.
Aside from the monetary gain of simply not paying. You keep posting this but it's simply not true; there is always a gain in not paying debts. Money is money, the making of which is Google's cute business. Chill on the Kool-Aid consumption.
I've never taken part. I have seen so many people find a bug and not get paid for a variety of reasons, and even the ones who do get paid get paid so little compared to the value it gives the company. I'm sick of the everyday tech person getting taken advantage of and used as a cheap workforce. These bug bounties should pay people for the time they invest.
> only to be told it was a known issue (but private), won’t be fixed (but is within 48 hours), and that you don’t qualify for any compensation
This kind of issue is rampant and opaque among bug bounty programs.
IMO if a company says a bug is a wontfix, that's an immediate moral justification for public disclosure.
If they say it's a known issue, they don't give a timeline of when it will be fixed, and it's still not fixed in a week, that's also moral justification for public disclosure. If multiple people have reported the same vulnerability, there is a high likelihood that the bug is known by even more people. Bugs that are not hastily addressed are wasting participants in your bug bounty's time.
Companies with bug bounty programs need to treat security researchers with more respect, and when they don't the moral imperative shifts towards public disclosure so that others are warned that vulnerabilities exist and are not being addressed. Too many companies set up a bug bounty program as a box checking exercise and then have a lackadaisical attitude about addressing reports.
I found a pretty obvious XSS on Tesla's website. Submitted through bugcrowd and got no information besides "marked as duplicate". Publicly disclosed, bugcrowd temporarily suspended me for disclosing, but it was fixed within a week. Nothing lights a fire under people's asses like airing their dirty laundry. If they had told me "we are working on a fix and expect it to be live in 3 weeks" I would have respected that and held off on disclosure.
I specified "moral" because most bug bounty programs' terms have a blanket prohibition against public disclosure (at least until the vulnerability is resolved), and in some cases public disclosure could be legally ambiguous as well (because the CFAA is so vague and broad).
About duplicates: Google has this thing called grants https://bughunters.google.com/about/rules/5479188746993664 that pays people for doing security research, even if they don't find any bugs. We agree that doing security research is valuable even if no bugs are fixed.
About having access to private bugs: we don't want to share vulnerabilities with others without the researcher's permission, but the original researcher can make bugs public on the new website. You can see some of them here https://bughunters.google.com/report/reports
> I would like to see private issues shared with reporters when they are independently discovered.
Other people have covered most of the rest of your comment, but there are things to say about this one specifically. The reporting platforms support it. It's a very frequent request from researchers who file duplicate reports. It's rare for a company to do this, because (1) it doesn't do anything to help with the issue being reported; (2) it doesn't change how the report will be handled; and (3) researchers hate it. They don't hate it when they file a duplicate report and want to see what scooped them. But they hate it when their report gets shared with someone who filed a duplicate finding.
Sharing duplicate reports causes a lot more fights than it solves. And it tends to piss off the higher-productivity researchers in an attempt to soothe lower-productivity ones, who generally aren't soothed anyway.
The ticket I saw the most outrage on over this issue was one that I duped to another ticket with a higher number. (HackerOne assigns ticket numbers serially. Fun!) How could I justify calling this report a duplicate of one that, as anyone could see, was filed later?
Well, the second report began
> Hi, I reported this through email and was told that in order to claim a reward I should open a ticket in the HackerOne program...
(This situation is odd enough, and easy enough to explain without sharing private information, that I could explain it to the guy on the lower ticket number without causing problems or sharing the other ticket directly; that's a big no-no. I present it here as an example illustrating that, even if you think you have ironclad evidence that the program is out to screw you over, it probably isn't.)
I definitely saw companies doing things that were unfair considering the program as a whole. That generally happened as part of an effort to preserve a relationship with a researcher who frequently filed useful reports. I never saw a ticket duped inappropriately. If you want my advice on how to get the most money out of your bug reports, it's this:
- Don't antagonize the team handling you.
- As much as you can get away with, demonstrate what you can do with the issue you report. The program will say "we investigate the impact of every issue and pay out according to the highest-severity potential impact." They are sincere. But they don't have the motivation you do to find the highest-severity potential impact. Any impact you actually demonstrate will automatically be considered, because nobody had to notice it was possible.
- Sometimes an issue might ambiguously fall into a low-paying category, or maybe a high-paying category. Try to characterize it as belonging to the high-paying category.
- Decide whether you're a "deep" guy who finds complex issues on a handful of platforms or a "shallow" guy who finds the same set of related issues everywhere he can with minimal effort. Both approaches work. If you're a "shallow" guy, make sure that when you report an issue, it's really there, and don't argue too much over what you think it should be worth. If you're a "deep" guy, you have more scope to develop a personal relationship with the team who handles you.
- Watch for existing programs to add new platforms. This often happens when a company with an existing program makes a new acquisition. The new platform probably has a lot of low-hanging fruit.
- Clean up after yourself. The most outrage I've ever seen on the company side was someone who demonstrated full-on RCE on a company server. He installed a webshell. And his webshell was still up when he filed his report. Don't be that guy. You can be disqualified from a bug that would have paid tens of thousands of dollars.
Couldn't companies just publish a list of hashes of existing but not yet public disclosures? Seems as if that could solve the issue of doubt wrt timing.
How would that work? I publish some 256-bit values. You file a report. I mark you duplicate and tell you a random 256-bit value. What did you learn?
The number of reports that eventually become fully public is nearly zero. Most don't become public at all, but of those that do, a lot of content is generally redacted.
The hash input could just be a generic description of the bug, minus any sensitive info, plus some salt.
All report hashes would have to go public as soon as the report is accepted. The hash input could go public once the bug goes public, so the duplicate reporters can then finally see proof that the bug had already been reported.
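For what it's worth, the commit/reveal idea being proposed here can be sketched in a few lines. This assumes the company is willing to publish a salted SHA-256 digest when a report is accepted and to reveal the salt and description once the bug is public (the description text and function names are made up for illustration):

```python
import hashlib
import os

def commit(description: str) -> tuple[bytes, str]:
    """At acceptance time: publish the digest, keep (salt, description) private."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + description.encode("utf-8")).hexdigest()
    return salt, digest

def verify(salt: bytes, description: str, published_digest: str) -> bool:
    """After disclosure: anyone can check the revealed input against the old digest."""
    return hashlib.sha256(salt + description.encode("utf-8")).hexdigest() == published_digest

# Company side, when the original report is accepted:
salt, digest = commit("Stored XSS via unescaped display name on the profile page, accepted 2021-07-01")
print("published at acceptance:", digest)

# Later, once the bug is public, the company reveals (salt, description)
# and anyone who filed a duplicate can check the timing claim:
print(verify(salt, "Stored XSS via unescaped display name on the profile page, accepted 2021-07-01", digest))  # True
```

As the replies point out, the scheme only proves anything if the reveal actually happens and the committed description is too specific to have been written in advance; a program that never discloses, or commits only to vague text, gains nothing from it.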
In what cases would companies be unable to publish generic descriptions after the bug is public? I'm not in the industry so I have no idea about this.
I can prepare generic descriptions of bugs well in advance, regardless of whether those bugs are known to me or not. That scheme lets me mark everything duplicate if I feel like it.
It comes down to how much companies are willing to reveal after the fact then. If companies aren't prepared to reveal enough unpredictable detail involved in an exploit after the exploit had been fixed, that's another issue. I think companies like Google would be ok with it though.
You're proposing that they do a lot of work for no benefit. What would they get out of it?
You're also still failing to account for the fact that reports rarely become public. I can refer you, again, to a random number just as easily as I can refer you to the calculated hash of my bespoke summary of an issue that was reported eight years ago.
As stated previously, it wouldn't work unless the company was prepared to disclose all reports at two stages: 1) hash of unpredictable description of the issue when the issue is reported, 2) hash input when the issue becomes public.
There would be no benefit except for a tiny amount of goodwill, so it's almost certainly not worth it. This is simply a method to address the duplicates issue. Nothing else.
So much this. It’s insanely annoying when they say something isn’t a risk and then they fix it shortly after. Google should prioritize improving this over new webpages.
yeah, these are all features of working on the development team of a project. involving every developer who wants to work on a bounty at that level would be an insane amount of management overhead.
It makes sense if you're vetting people beforehand to make sure that giving them this level of access and communication is worth the effort, but that's what a job interview is.
Not work on the bug, just be kept in the loop like the initial reporter. Putting in the same effort as the first reporter should earn you the same trust that is afforded to the first reporter.
If you're not going to win a prize, why would you need to be kept in the loop about an internal vulnerability that is being worked on?
As for getting a prize, it goes back to the dupe issue which is the source of a lot of abuse. There's no way to prove you also worked on it or if you just got the info from your friend and want to double your winnings.
Not sure why you're downvoted, but the $3M/year total rewards payoff is likely smaller than the corporate administrative and developer time (for review) costs. I.e. if this was a charity it would pay out less than 50 cents on the dollar.
I downvoted because "cybersec researchers" do not in fact routinely make 7 figures. For strong pentester types reporting the typical (real) vulnerability the VRP handles, the median is probably in the low 6's.
It's a combination of the lower value of the median bug bounty submission (we hear about the high-ticket vulnerabilities, but most of them are pretty low-test) and the fact that huge numbers of bounty participants are abroad. I know there are people who claim to make high-6's and even low-7's from bounties, but they're very rare. I think most people who participate in bounties would be best off financially by using them to build a portfolio they can exploit to pivot into consulting or full-time work of some other sort.
If only Google could run their other businesses as frictionlessly as this...
Found an interesting bug? Let's have a 1-1 chat over lunch, drinks are on us BTW.
Your developer account was banned by some bot gone wild? Sorry, no human interaction is allowed in my department. Maybe if you have a famous buddy who can pester some hotshots on Twitter...
It's tough; I wish I had a good solution to the latter problem. Half the trouble is the majority of what Google bans is semi-automated bot farms; the "minimal human contact" policies are to avoid making the company vulnerable to social engineering.
(That vulnerability goes deep. People would look up the phone numbers of Google offices and call with stories about kidnapped family members and the need to get into a Gmail account to find the ransom note.)
If you knew which unique person was associated with each account and if you knew which unique person you were talking to on the phone, it would be possible to verify both that you are talking to the person you should be and that you aren't talking to Notorious Social Engineer #2344 trying to do something shady.
But instead you're stuck trying to figure out if "Saal Weachter" on the phone is the proper owner of the 'saalweachter' account, or even if it is someone named "Saal Weachter" on the phone at all.
I didn't see anything regarding higher rewards. Have they increased them at all?
$29m for 10 years of bug fixes seems like a steal for a multi-billion-dollar company. Especially if some of those bugs that have been reported and fixed are potentially lethal for the company.
Google's and Apple's bounties weren't high enough to get anyone in the dozens of shady governments with access to NSO Group's services to risk adding burner phones and then analyzing these attacks.
I think any alternative explanation to too cheap is even less savory.
People will sell bugs on the grey market no matter what Google pays, because not everybody can do business with Google.
A reminder that grey market exploit purchases are tranched; the figures you hear for them are payout caps, not lump sums. If your bug is burned before all the tranches pay out, you're SOL.
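A rough illustration of what tranching does to the realized payout, with invented numbers (the cap, tranche count, and burn timing are assumptions, not actual grey-market terms):

```python
# Invented example: a $2,000,000 "payout cap" paid in 8 equal tranches,
# each contingent on the exploit still working at that checkpoint.
CAP = 2_000_000
TRANCHES = 8

def realized_payout(tranches_paid_before_burn: int) -> int:
    """The seller keeps only the tranches that paid out before the bug was burned."""
    return (CAP // TRANCHES) * min(tranches_paid_before_burn, TRANCHES)

print(realized_payout(8))  # 2000000: the headline figure
print(realized_payout(3))  # 750000: bug burned early, most of the cap never paid
```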
NSO Group has been entrusting its vulnerabilities to people who would happily embezzle for a sufficient amount of money. If the bounty is ~2x these people's typical price in many of the countries that are NSO Group clients, then it is remarkable these bugs aren't being burned as a form of embezzlement. Your description of tranched payment risk only makes that more remarkable.
NSO's clients have effectively unlimited budgets. The bang for the buck on exploits is probably pretty shocking compared to alternative intelligence collection methods; sending actual people out to do stuff is incredibly expensive. When you raise the price of exploits --- which you should do for other reasons! --- you don't necessarily harm NSO. Since they effectively take a cut of exploit valuation, you may even help them.
As with any asset that's hard to lock down, if the scrap value is high enough relative to salary of employees then they can't estimate how many phones can be exploited before the next "steal stuff from work" event.
As such the NSO Group would end up limiting its clients to fit its pipeline and have trouble buying exploits since most other market participants have a single workforce with a drastically lower rate of loss.
I think that would devolve into countries needing to pay liability pricing per attack on possible honeypot phones, etc., and more countries being cut off like Morocco, so no more using it like an unlimited plan. Sure, these countries have unlimited budgets relative to their own GDPs, but when they can't find stability acting as a group they are all back to bidding for unique vulnerabilities, and there probably aren't 212 great ones on every platform at all times.
NSO's clients are all organizations with effectively unlimited budgets. That's the premise. Even the shadier companies in NSO's space sell principally to state actors. It's unlikely that Google can drive the price of an exploit past the level that any country can pay for. These are petty cash figures.
NSO builds implant technology, so they add some value of their own, but NSO is essentially a middleman in this market. Driving up the prices of the underlying asset, when buyers aren't price sensitive, helps the middleman.
Okay fair, it's unreasonable to ask Google (who have a limited budget) to 'compete' with the grey market.
But isn't then the conclusion to introduce bug bounties with unlimited budgets, e.g. sponsored through the European Union?
It would be interesting to see how high the prices really go - at some point NSO clients might see more efficiency in going for more traditional military means.
There must be something I am missing, because I don't understand why most bug bounty programs pay so little.
If I ran Google's program, I would immediately 10x all payments, unironically. Yes, that means paying 1 million bucks for something you previously paid 100k for. Drop in the bucket. You also get a ton more eyeballs on you, letting you patch everything ASAP.
But they don't do this, and I don't know why. Security through obscurity? I suppose that works if you are myspace.com in 2021; nobody likely gives a shit to try and hack it. But at the end of the day this is still Google, so that really doesn't apply.
The downside of not paying handsomely is that people realize they can make more money selling to third-party vendors (which some do), and then every once in a while you get a bad PR story showing that your stuff was hacked and exploited for months or years, which potentially knocks a few points off your stock price.
Money is really the end-all be-all. If you pay more than third-party vendors, I can see almost no reason people would sell to them. At that point, your only adversaries are government employees of nation states and the staff of companies dedicated to finding vulnerabilities.
Bug bounty prizes are set to encourage a certain quantity of bugs to be reported.
If you offer 10x as much, your triage channels will get overwhelmed and you'll have to deal with a bunch of hostile researchers and development teams who hate your guts because you just blocked their next 2 sprints.
If a bug bounty program is effective, then the payouts should trend up slowly over time as your security program becomes more efficient and produces more secure code.
It's important to remember that the purpose of bug bounty programs is not to reduce the number of bugs in the code base; it is a validation measure to check whether your controls are effective or if additional controls need to be added elsewhere.
As a customer, I'd be okay with dev teams being blocked for their next 2 sprints if it meant security I can trust.
Google Docs, Search, and Mail do little in 2021 that I need that they didn't do in 2016. There's a lot more churn than bona fide improvement. Most tech just doesn't change that much. Heck, I'd take an online version of WordPerfect 7 from 1996 if it was trustworthy. That's a quarter-century. There's nothing Google Docs does, aside from collaboration, that I need that WP7 didn't do.
On the other hand, I strongly distrust Google to maintain my data securely. As far as I can tell, aside from backwards compatibility/legacy reasons, the major reasons people use Office 365, for better or worse, are issues like compliance and security.
Security bugs ought to be sold to Google, found, and fixed. They shouldn't be sold to a ransomware gang or a government.
> As a customer, I'd be okay with dev teams being blocked for their next 2 sprints if it meant security I can trust.
As an enterprise customer of said products, you'd probably switch to the nearest competitor who offers more features and better UX as soon as the "secure" option would lag behind. That's what numbers show.
> On the other hand, I strongly distrust Google to maintain my data securely. As far as I can tell, aside from backwards compatibility/legacy reasons, the major reason people use Office 365, for better or worse, are issues like compliance and security.
Out of interest, why do you distrust Google to maintain your data securely? Having done no actual research, my impression is that Google has a pretty good record when it comes to security (though obviously not perfect).
* Android loses security updates after a short amount of time, with no notification to the user. Lots of people run insecure devices and have been susceptible to ransomware attacks.
* Chromebooks expire likewise. It does better on notifying users, but many Chromebook users can't afford to upgrade. Google has planned obsolescence to increase sales, but in a particularly security-unfriendly way.
* Google has a long history of withholding security features based on tiered enterprise pricing, especially with regards to Google Workspace / Google Docs. I understand tiered pricing, but having users intentionally be unable to trace back attacks is bad for the internet at large. I know cases where bad actors weren't traced down due to Google charging for basic security features.
... and so on.
I could step through minor issues, and I could give large numbers of them, but that'd be a blog post. That sort of general apathy for user security is omnipresent in Google's culture. Google has an excellent track record in its own corporate security, and is paranoid about IP and internal data. That doesn't translate to my IP and data.
A lot of this comes from looking at customers as statistics. My value to Google is my eyeballs. If my computer is compromised and I switch vendors, Google's cost is one user's worth of ad revenue, which is a manageable risk. Google doesn't at all care about the security of its customers. Unfortunately, that attitude carries over to the B2B space, not to mention increasing risks to normal Google users.
Third party vendors don't buy vulnerabilities on Google's infrastructure and web services. Third parties like Zerodium are interested in 0days on Android, iOS, Windows, Chrome...
You could try to sell it to criminal organizations or monetize the vulnerability yourself, but it doesn't make any sense to be in that situation if you are making six figures as a bug bounty hunter, even if you didn't have any ethical qualms regarding such acts.
This is very true. It's been a reality for a long time that the most successful (measured in $x rewarded) bug hunters sometimes have hundreds or even thousands of bugs submitted per year.
This way, they can capitalise on the fact that smaller security issues are much easier to find, especially if the bug hunter has expertise in the underlying framework.