‘Google’ Hackers Had Ability to Alter Source Code (wired.com)
35 points by phsr on March 4, 2010 | 26 comments



“Additionally, due to the open nature of most SCM systems today, much of the source code it is built to protect can be copied and managed on the endpoint developer system,” the paper states. “It is quite common to have developers copy source code files to their local systems, edit them locally, and then check them back into the source code tree. . . . As a result, attackers often don’t even need to target and hack the backend SCM systems; they can simply target the individual developer systems to harvest large amounts of source code rather quickly.”

---

Forgive me, but isn't this the function of an SCM? What other models are there?


Locking down individual developers to only the source they absolutely need.

In that model, a single compromised workstation would only expose a few modules to malicious changes and only a few projects to code theft.

In the more-common wide-open implementation, a single compromised workstation exposes the entirety of code for every project to theft and malicious changes.
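A minimal sketch of the locked-down model, in Python with hypothetical module paths (a real deployment would enforce this server-side, e.g. in the SCM's protections configuration):

    # Each developer may only sync the module prefixes granted to them.
    ACL = {
        "alice": ["//depot/search/indexer/"],
        "bob":   ["//depot/mail/frontend/"],
    }

    def can_read(user: str, path: str) -> bool:
        return any(path.startswith(prefix) for prefix in ACL.get(user, []))

    assert can_read("alice", "//depot/search/indexer/query.cc")
    assert not can_read("alice", "//depot/mail/frontend/login.cc")

With rules like these, compromising alice's workstation exposes only the indexer module, not the whole tree.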

When you're talking about an under-the-radar hack that took place over months, hackers intentionally slipping vulnerabilities into the code is a very real risk.


And you can surely do this with most SCMs on the market, including Perforce. Now all the hacker needs to do is compromise the build server...


There is often a divide between developer needs and security.

A desktop developer wants their machine running as fast as possible so they can compile and test quickly. This may tempt them to turn off virus scanning.

A challenge in security is making it work without being too onerous. If security measures become a hassle for users, they'll find a way around them.


"I sorta find it amusing that McAfee released a PDF of the white paper, considering that Abode’s PDF Reader is also a popular attack vector. It’s like railing again IE6 being insecure, but using it to post the message that IE6 is insecure."

-From the comments in the original article.

I found the above interesting because it suggests to me one of the fundamental principles of security: we must necessarily try to improve our security from inside systems which are already insecure.

We could say that no one should ever use an IE with a zero-day vulnerability. And that no one should use PDF because it can be an attack vector. Or view JPEGs because there have been embedded-executable-code vulnerabilities. Or run executable code because sometimes it is malicious.

Security is always a matter of trade-offs. One can never build a house which cannot be broken into, but one can build a house that is not worth breaking into.

Sounds like there were a number of vulnerabilities here, and improving default settings is one of the best solutions. But it's clearly not a justification for abandoning SCM, or PDF. Perhaps for old IE.


Why are we taking McAfee's word on what happened again? They never claim to have directly investigated Google, and from what I can tell Google isn't their client. They are purely extrapolating based on other companies they work with.

This just sounds like they are trying to scare everyone about what happened to Google so that they can sell them McAfee Security, which as we all know will fix all your problems.


Yeah, the article clearly says that McAfee "provided information to Google", which sure doesn't sound like any direct involvement in their investigation. In addition, from a quick scan of their linked PDF, I don't think McAfee mentioned Google at all - which makes it look more like the Wired writer just plastered "Google" all over the article to make it a bigger headline.

I'd blame Wired here more than McAfee.


The article is pretty ridiculous. Why would a company like Google run their Perforce servers under Windows?


Duh... the headline should replace "hackers" with "crackers".

At first sight I really thought it was about regular Google employees who had developed something new during their blessed self-time.


The headline is using a definition of "hacker" that has been common for 30 years now. This kind of linguistic purity argument is silly, and ultimately futile. Languages change. We need to learn to speak the language as it exists, not what we think it "should" be.

At least sites like this one preserve the original meaning of the term. That's the best we can hope for.


Finally, someone that agrees with me on linguistic purity!

Personally, I think that purity is for monks and hockey players! I use the word "hacker" to refer to Giant Amazonian Parrots, and the word "cracker" to refer to small, circular, non-magnetic yet still metallic hardware.

Now, excuse me while I mambo dogface down to the banana patch.


If I had a local working copy of a git (or any of the dvcs's) repository and a hacker attempted to taint the "master" copy, would that show up as a diff between my copy and the master?

With SVN, a hacker could inject changes into the repo on the server, which would show up as an incoming change to my local repository.

In both cases, we could detect an intrusion if someone was vigilant enough to take a look at all incoming/outgoing changes and diffs -- or am I missing something?
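Roughly, yes. A minimal sketch of inspecting incoming changes before accepting them, in Python (assuming a local clone whose upstream branch is origin/master):

    import subprocess

    def incoming_changes(repo="."):
        # Update remote-tracking refs without touching the working tree.
        subprocess.run(["git", "fetch", "origin"], cwd=repo, check=True)
        # List commits present upstream that we don't have locally yet.
        out = subprocess.run(
            ["git", "log", "--oneline", "HEAD..origin/master"],
            cwd=repo, check=True, capture_output=True, text=True,
        )
        return out.stdout.splitlines()

    for line in incoming_changes():
        print(line)

Any change injected server-side has to show up in that list, or as a rewritten ref, before it reaches your working copy.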


Well, if there are enough people working on a repository, you might be inundated with changes every time you update. You'd never suspect anything unless you knew which files specific users should be editing, and checking that for each patch would probably be prohibitively time consuming.

But unless Perforce allows you to delete history, it shouldn't be a big deal to fix. The hard part is finding it. I wonder if there are automated systems that can check the patches users commit and look for "outliers" in content or style?
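A toy version of that outlier idea, in Python: flag any commit that touches files its author has never edited before. (A sketch only - a real system would need far better features than file paths.)

    import subprocess
    from collections import defaultdict

    def files_per_author(repo="."):
        # Map each author to the set of files they have historically touched.
        out = subprocess.run(
            ["git", "log", "--name-only", "--format=AUTHOR:%ae"],
            cwd=repo, check=True, capture_output=True, text=True,
        ).stdout
        history, author = defaultdict(set), None
        for line in out.splitlines():
            if line.startswith("AUTHOR:"):
                author = line[len("AUTHOR:"):]
            elif line.strip():
                history[author].add(line.strip())
        return history

    def flag_outliers(history, author, touched_files):
        # Files in this patch that the author has never edited before.
        return [f for f in touched_files if f not in history.get(author, set())]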


So I guess Google's style of code-reviewing every commit should keep them safe from this...


Good point; this is something I wanted to see discussed, but the paper didn't touch on it at all. It just focused on the tools and their vulnerabilities and didn't dip into the process surrounding them.
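For what it's worth, a crude way to enforce "every commit was reviewed" at the SCM boundary is a server-side hook. A toy git pre-receive sketch in Python (trailers are trivially forgeable, so real enforcement lives in a dedicated review tool - this just illustrates the process hook):

    #!/usr/bin/env python3
    import subprocess
    import sys

    ZERO = "0" * 40  # git's null sha1, used for ref creation/deletion

    for line in sys.stdin:  # each line: "<old-sha> <new-sha> <refname>"
        old, new, ref = line.split()
        if new == ZERO:
            continue  # ref deletion: nothing to check
        rev_range = new if old == ZERO else f"{old}..{new}"
        shas = subprocess.run(
            ["git", "rev-list", rev_range],
            check=True, capture_output=True, text=True,
        ).stdout.split()
        for sha in shas:
            body = subprocess.run(
                ["git", "log", "-1", "--format=%B", sha],
                check=True, capture_output=True, text=True,
            ).stdout
            if "Reviewed-by:" not in body:
                sys.exit(f"rejected: commit {sha} has no Reviewed-by: trailer")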


With svn, a clever attacker can make changes to past commits in the repository, so you will not see new commits to review and your own copy of the code will not contain the changes - but when a fresh copy is checked out by, e.g., a build server for deployment, it will get the modified code.

Git mostly prevents that; while SHA-1 can be attacked, a new, visible commit containing the SHA-1 preimage would have to be made.
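The tamper evidence comes from content addressing. A quick illustration in Python of how git names a blob (the same scheme chains up through trees and commits, so silently editing history changes every descendant ID):

    import hashlib

    def git_blob_sha1(content: bytes) -> str:
        # Git hashes "blob <size>\0<content>"; trees hash blob IDs, and
        # commits hash tree and parent IDs, so edits propagate upward.
        header = f"blob {len(content)}\0".encode()
        return hashlib.sha1(header + content).hexdigest()

    # Matches `git hash-object --stdin` for the same bytes:
    print(git_blob_sha1(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a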


“It is quite common to have developers copy source code files to their local systems, edit them locally, and then check them back into the source code tree. . . . "

Well, doh. How would it work otherwise?


At the time of the initial report, I suspected that a reason Google seemed so angry was that the hackers had tried to inject new vulnerabilities into Google software that would then damage third-party Google users (either via browser compromises or via downloads of Google-distributed client software like Desktop/Toolbar/Chrome/etc.).

Stealing 'secret algorithms' is in one category of transgression. It's still somewhat flattering to the target. It's driven by curiosity, not always a bad thing. To the extent such theft-of-secrets enables competitors to increase market share by improving their own operations, it hurts the target, yes, but might still be net-beneficial to society in a dynamic long-term analysis. It's theft, but not really violence. A victim will adopt countermeasures, and might expect compensation/damages, but may not be moved to retaliate in-kind. A rational weighing of costs and benefits tends to dominate the choice of responses.

On the other hand, to try to corrupt a company's offerings to damage their customers -- and possibly destroy the company's reputation as a trusted source of downloadable software -- moves into another, more serious category. It's malicious and destructive. The victim may feel existentially threatened, and feel obligated to retaliate using any means available. A rational weighing of options may not matter; there is an urge to punish, even incurring large costs in the process.

Google's initial response made me think they viewed the China breaches in the latter category, despite the limited details.


Nice counter-example for anyone claiming expensive enterprise products (Perforce) are more secure than widely used open source products (git/subversion).


Sort of.... almost all of the security vulnerabilities listed in the article are addressed in the documentation: http://kb.perforce.com/article/1173/basics-of-perforce-secur.... Also, running stuff as System on Windows just for the hell of it is incredibly stupid.

Misconfigured software and shitty system administration != vulnerable product. Perforce may have security vulnerabilities, but the article lists none of them. It's just McAfee selling snake oil. At best Perforce simply has a dumb default install. Tune in next week when they claim leaving the root password blank is a security vulnerability in OpenSSH.


Not really. Read the article - the compromise was entirely down to user stupidity - clicking on phishing links and the use of an insecure, outdated web browser.

Perforce can be secured, but the administrators clearly chose not to.

The article is a joke as well. It cites the fact that all of the source code is on a developer's machine - how else are they meant to develop, code and test? It then also cites that the files are stored in plaintext - source code can't be stored one-way encrypted, can it? (And there's little point in two-way encryption if the hacker has access to the machine.)

So in summary, it was not Perforce. It was, as always, human error.


You're right that Perforce has little to do with the actual exploit, but "human error" and "user stupidity" don't explain all of it.

First, if you're in a corporate environment, your default browser is often chosen for you, even as a developer. Furthermore, you may not even have sufficient permissions on a Windows box to change it.

Second, this is not an end-user problem. People are trained to click links. We click lots of them. If you put them in emails, someone is likely to click them. Perhaps they click during a moment of distraction while in a teleconference, working from home with a screaming two-year-old in the background. It happens. As a general rule I am very wary of links placed in emails. But when I get work email from coworkers, especially something work-related, on my work email system, I often just click the link, because the odds of it being phishing are incredibly low. I'm certain these weren't exactly "Get VIAG'A cheap" emails.

The problem is that nobody is perfect. This does not make them stupid, it makes them human. If your security model relies on everyone being 100% vigilant 100% of the time, it fails. Somehow, systems need to evolve to add some more layers of security to backstop all us unreliable humans.


"the compromise was entirely down to user stupidity - clicking on phishing links and the use of an insecure, outdated web browser"

It was a spear-phishing attack (great name) - you get an e-mail from someone you know suggesting you check out a link. The browser security flaw was a 0-day, so even if you were fully patched you would still be affected. Sure, using IE isn't brilliant, but there are certainly 0-day holes in other browsers. I don't think it's reasonable to blame this attack on user stupidity.


Still, it's not clear how the SCM vendor is supposed to mitigate this. If an attacker can inject code onto a machine inside your enterprise (and they obviously can), then you're pretty much screwed whatever you're running, because someone has to be running something that can affect production systems, be they developer, administrator, etc.


It's high time for a more pervasive use of e-mail encryption and signatures.
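To make that concrete: a signature lets the recipient check that a message really came from the claimed sender and wasn't altered. A minimal sketch of the primitive in Python with the cryptography package (real email signing uses S/MIME or PGP on top of keys distributed out of band):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    key = Ed25519PrivateKey.generate()
    message = b"From: a coworker\nSubject: check out this link\n..."
    signature = key.sign(message)

    try:
        key.public_key().verify(signature, message)         # genuine mail: passes
        key.public_key().verify(signature, message + b"x")  # tampered mail: raises
    except InvalidSignature:
        print("signature check failed: do not trust this message")

A spear-phishing mail forged "from someone you know" would fail exactly this check.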


A bit late on this comment, but to reply to you all, I'm not saying Perforce is insecure, just that it's not more secure. And yes, stupidity/ignorance/laziness are the root causes of most hacks.



