I'd like to hear how this was discovered and whose name ended up on the git blame line next to that password.
I once did a malicious source code injection as part of a network security exercise at university. That was before git or other sane source control; I basically inserted a semi-obfuscated piece of code into the source repository, which gave our team an advantage in the game (the game was to crack and find a weakness in a protocol, but the whole machine was a target/battlefield). The clever part was first rooting the server via an suid vulnerability.
I won the contest, but while doing it I thought, yeah, this shit will never work in the real world. And then this story made me remember that. That's pretty crazy.
"I'd like to hear how this was discovered and who's name ended up on git blame line next to that password."
Yes. We all want to know a lot more about exactly how that got into the source code. It may take forensics and detective work to find out, but there's probably enough log info to figure it out. Pressure needs to be kept on Juniper until they disclose this. They can't be allowed to get away with claiming "it just happened somehow."
Doing that would be implicitly admitting they'd made America less safe, which might lead to them having less power.
If I were the NSA I'd blame those sneaky commies, because that might lead to me getting more power. I'd say "this Chinese attack wouldn't have gone undetected for so long if the NSA had access to Juniper's source code repos, and monitored all their employees, to check for sleeper agents".
They have a lot to lose if they keep quiet -- customers, credibility, future business, etc.
What are the powers of an NSL? Can one be given to any entity/person/company for any reason at all, and can it also gag them from disclosing that they got the NSL? What are the penalties for a company breaking the NSL silence? Jailing the CEO? The NSA sending one would also reveal quite a lot and jeopardise their future operations; it seems they'd want to have plausible deniability of some sort.
Considering the FBI is investigating, either one hand doesn't know what the other is doing, or (at least as likely) other parts of the U.S. government (and other governments) have been depending on some versions of the affected hardware for some time.
Don't assume that the FBI and NSA get along. There is constant infighting wherever intel/LE agencies overlap. The FBI is a couple steps down the pyramid from the NSA, and so would want to score a few points.
These are also vast organizations. Even if the NSA was behind this, there is a reasonable chance the official hierarchy doesn't know whether or not they were involved. This exploit could even be the result of a mistake, code never meant to be deployed.
There are inherent levels of deniability in the operations of such a code-word-clearance-heavy organisation. It's entirely possible that >75% have no idea of the full capabilities, and that less than 5% know the impact any individual program has on the overall capability of the organisation.
I think new NSLs have been limited to 3 years now, but I doubt Juniper will ever say it was the NSA, or even that the backdoor that others used was because of an NSA algorithm (which would make both them and the NSA look bad). So in the end it will be more about Juniper's reluctance to say than it being gagged.
Companies that sell a lot to the US government have a tendency to make more "compromises" to protect the government, too, or to do what the government says. See BlackBerry and its newfound (or maybe older than we think) "anti-privacy"/pro-lawful-intercept stance.
If you think about it a bit longer you'll realize it cannot be a single name. Yes, someone wrote this code. But then someone else reviewed it, and then a third person signed off on shipping the end result. It takes (or really, it should take) more than one person at a company like Juniper to get away with a trick like this, unless their processes are completely broken; let's for the moment assume that they are not, otherwise all bets are off. Juniper is going to be scratched from a lot of POs because of this story. If it turns out they don't have a process in place to protect against stuff like this, and it does not involve two or more people operating in concert to circumvent that process, then they're pretty much dead.
If I was doing an attack like this, I'd wait for an opportunity to slip it into a huge, mundane change.
For example, changing the repository from Mercurial to Git. Splitting or combining two repositories. Moving lots of files between directories. Running an autoformatter over the entire codebase. Something like that.
With every code review tool I've seen, there's /some/ way you can get it to highlight thousands of trivial changes. And I don't know about you, but I can't promise I'd spot 1 evil change among 1000 trivial ones.
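To make it concrete, here's a hypothetical sketch (the names are invented, not from any real codebase) of the kind of one-token flip that hides comfortably in a commit that also re-indents a few hundred files:

    /* Hypothetical example; all names are made up. */
    int  is_authenticated(const char *user);
    int  account_locked(const char *user);
    void grant_session(const char *user);

    /* Before the "pure reformatting" commit: */
    void login(const char *user) {
        if (is_authenticated(user) && !account_locked(user))
            grant_session(user);
    }

    /* After: hundreds of surrounding lines get re-indented and re-wrapped,
     * and buried among them is this one-token change (&& became ||), which
     * now hands a session to anyone naming an unlocked account, with no
     * authentication at all: */
    void login_after_reformat(const char *user) {
        if (is_authenticated(user)
                || !account_locked(user))
            grant_session(user);
    }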
Yes, that would be the way to do it. And that is exactly why a change like that is just as dangerous as any other.
I did quite a bit of code review during a contract earlier this year. One day one of the programmers shows up with a 100K line changeset, a reformatting and clean-up that he'd embarked on of his own accord. Imagine his surprise when I point blank refused the commit. He was extremely upset (understandably, it was a lot of work and well intended) but at the back of my mind were two things: one, we were already under pressure to get things to work and this did not add anything functional. Two, if it did add anything functional it would be an error or a bug, and there was no way I was going to catch it manually reviewing all those changes (across hundreds of files). So the safe thing to do was to simply not accept the change unless I could free up developer time to break this massive commit into smaller ones that could be reviewed and accepted one by one over a longer period of time. And that was a luxury we could not afford.
So too bad, one very inflexible reviewer and one very pissed off programmer. 1000 trivial changes in a single commit isn't going to fly (at least, not with me), unless I review each and every one of those with just as much attention as I would give a much smaller commit. It's that sort of attention to process details that probably makes me 'less than easy' (to put it mildly) to work with, but I really feel that if a customer trusts me, I'm going to have to earn that trust, and rubberstamping a change set because it is too large to review is definitely a breach of it.
That's a great way to do it: fix-as-you-go. The same goes for adding unit tests where those are missing (in bulk, rather than as a set going with a recently touched piece of code), fixing up documentation issues and other global changes such as naming conventions and so on.
Hitting all of the code base in one go with a thing like that is asking for trouble.
Downvoters are cordially invited to state why they disagree with this.
I work on a team maintaining a large code base. We follow a similar rule concerning unit tests. Our testing over 5 years has slowly grown from 0% to give us ~75% coverage, by using the methodology you describe.
I believe that's the wrong way to do it. You can only prettify/reformat if you aren't making changes to the file. That way you review only the prettify changes and not the functional changes.
Agreed, refactoring and functional changes should certainly be in separate commits, and ideally in separate pull requests. jacquesm can still have her/his rule to only change code involved in the course of functional changes; it's just that the refactoring and functional-change PRs will come in close succession and the refactoring PR exists only because the functional-change PR does also.
I agree with you, beautifying and reformatting code can mess up the diff, and all of a sudden someone ends up with their name pointing to code they didn't write. Beautifying and reformatting should only be done on their own.
I'll offer another opinion on this - I am a frontend developer, so it's very common for me to have to deal with files that are written in PHP, HTML, and JavaScript, sometimes with CSS embedded right in them.
Roughly speaking, PHP and JavaScript have a similar syntax and can be formatted/styled in a similar way, but HTML is a totally different beast with a very different structure, as is CSS, which is a glorified list of rules.
I don't refactor code for the sake of it, however, in an effort to read related files I often clean things up that I must read in order to edit another section. I don't discard my cleanups because I'm leaving it better than when I found it (and cleaning up a messy repository is tough, but a little in each commit can help).
So I agree that you should style where you're working - but there are valid reasons to refactor or restyle code in other places too. Here's an example of a file I've been updating lately:
There are exceptions to every rule, and yours is a valid exception to the rule that, in general, one should not commit non-functional change-sets. If something is particularly ugly then you could clean it up, but that change should not be taken for granted not to have any adverse effects; there is really no such thing as an 'innocent' change. I've seen whitespace changes break code far too frequently not to be suspicious of any change at all.
You could use a tool to compare ASTs for equality; at the least you could use it to highlight the real changes as opposed to whitespace cleanup, which would make this job much easier. Or perhaps compare binaries directly if you set up deterministic compilation.
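A full AST comparison needs a real parser, but even a crude approximation along these lines (my own toy sketch; it's naive about comments and escape sequences) will tell a whitespace-only cleanup apart from a commit that changed something else too:

    /* Toy whitespace-insensitive comparison: strip whitespace outside of
     * string/char literals and compare what remains. Not an AST diff, but
     * it flags "cleanup" commits that touched more than formatting. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <ctype.h>

    #define MAXBUF (1 << 20)

    static size_t squash(const char *path, char *buf, size_t cap) {
        FILE *fp = fopen(path, "rb");
        if (!fp) { perror(path); exit(2); }
        size_t n = 0;
        int c, quote = 0;                /* nonzero while inside a literal */
        while ((c = fgetc(fp)) != EOF && n < cap) {
            if (quote) {
                buf[n++] = (char)c;
                if (c == quote) quote = 0;
            } else if (c == '"' || c == '\'') {
                quote = c;
                buf[n++] = (char)c;
            } else if (!isspace(c)) {
                buf[n++] = (char)c;
            }
        }
        fclose(fp);
        return n;
    }

    int main(int argc, char **argv) {
        if (argc != 3) { fprintf(stderr, "usage: %s old.c new.c\n", argv[0]); return 2; }
        static char a[MAXBUF], b[MAXBUF];
        size_t na = squash(argv[1], a, sizeof a);
        size_t nb = squash(argv[2], b, sizeof b);
        if (na == nb && memcmp(a, b, na) == 0) {
            puts("whitespace-only change");
            return 0;
        }
        puts("not just whitespace -- review it properly");
        return 1;
    }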
Most repository tools allow you to ignore whitespace changes if you want to.
For instance, GitHub allows you to append ?w=1 to a diff URL to see the differences without the whitespace changes (that's like git diff -w).
That's the correct choice, although it would have been nicer to review it (if possible).
That said, huge commits can be ok if they can (a) be reviewed very quickly, (b) be entirely generated by a script (review the script, and ensure that running it generates the proposed change), or (c) are small under e.g. git diff -w.
Incidentally, removing $SubversionThingy$ from the header of each file in a moderately-sized repository is enough to break the FishEye code reviewing tool...
Every place I've ever worked would require the developer to notify or ask permission from the project lead before making a change of that scope, if for nothing else than the time commitment.
That's how it should have gone. But in the person's defense, the codebase really was a mess and a cleanup would have really helped us get better insight into what the heck was going on; it really was done with the best of intentions. But code review tends to be a bottleneck anyway, and there is no such thing as an 'innocent change' in my book, especially not in code-bases that have too little automated test coverage and zero documentation.
You'd want to go one better -- slip it in when your co-worker was about to make a change, by updating the yet-to-be-committed code in their local working directory. They'd have a low chance of noticing it, but the blame would lie with them.
There will certainly be multiple people to blame for this, at least because nobody who should have been looking noticed. But it could also have been a single rogue/disgruntled employee: about to be laid off perhaps, or playing the black market selling 0-day exploits. Even better, they managed to slip the name of the co-worker they hate the most into the committer field.
I'll take that over a national intelligence operation: infiltrating, gaining trust, paying off multiple people (and thus risking revealing their hand and operations). I can see a single person who found out what the going price for 0-day exploits is, and who was perhaps unfairly treated, about to be laid off, or about to leave the country. This was their way of getting an extra bonus on the way out the door.
You'd be surprised. While the -official- code will typically always get QA'd and reviewed, there are plenty of cases where custom code for a specific use case is done by one engineer and bypasses the proper channel because of deadlines.
I don't think that's necessarily true. I remember Linus saying that for the Linux kernel they used a web of trust. At some point you have to trust some people; you can't watch every update. And he said something along the lines of "if you're not doing cryptography/security like this, you're doing it wrong".
> At some point you have to trust some people; you can't watch every update.
Yes, so Linus doesn't. But his web of trust is hierarchical. In case of some malicious code making its way into the kernel, there will be a chain of trusted people that can be made accountable for it.
If you're suggesting the reviewer wrote the patch you've eliminated the developer but you still have someone higher up in the chain that needs to be totally incompetent. Even so, that's still two people, not one.
Linus' web-of-trust is more like a pyramid of trust, he has a couple of lieutenants that he trusts but I highly doubt they in turn will blindly sign off on a commit on something as critical as this.
Git does not give strong guarantees for authorship unless you are signing every single commit with GPG (which has only been available since around 1.7 or 1.8).
I assure you it will, and that's been known since Myers' landmark paper on subversion a long time ago. Interestingly, I predicted your move in an earlier post (last para):
Even university students come up with this approach. Status quo seems hopeless against nation states unless companies learn from the past and start doing high assurance at least where it counts. ;)
Yeah I remember it was an interesting thought process. I put so much energy into rooting the system, and after a short celebration when I got a root shell, it was a "well now what?" moment.
We initially debated intercepting the traffic to the game server and retransmitting it (a man in the middle). Then I wanted to run a hidden process (with some system-looking name) that would have injected Java bytecode into the running server to give our team an advantage, but that proved too difficult, so the idea to just change the source in the repo that the TA kept on the same machine was kind of a boring and unexciting conclusion. It didn't seem very clever at all and felt like giving up.
Interesting case study. My background is high-assurance security. Most attackers there go for subversion because it's powerful. You're the only one I've seen in a while do it because it was easier. It often is, if the software source or config is available in a centralized place.
Btw, here's Myers' 1980 paper that fleshed out the threat and gave the field no excuse not to have seen much of this coming:
Isn't the name on the relevant "git blame" (or equivalent) line irrelevant unless the team signs commits with GPG (or equivalent)? It's trivial to forge the author.
git blame was figurative, I don't even know if they use git, I basically meant the person who is responsible for injecting that line (if they ever find out). And as you said, yes, the first thing someone would do who would insert that is of course put someone else's name there.
ScreenOS seems to predate git (the NetScreen line is an acquisition made in 2004, according to Wikipedia).
I'm not sure the older VCSs were as tamper-proof as modern ones, so the line might as well have been part of the original commit which added password authentication, a decade before the hack.
> The argument to the strcmp call is <<< %s(un='%s') = %u, which is the backdoor password, and was presumably chosen so that it would be mistaken for one of the many other debug format strings in the code.
That's very clever from the attacker's point of view; extra kudos to hdmoore for finding it!
Huh, is that show coming back around? I kinda liked how they would spin current tech events, and the idea of an AI that spread itself over power boxes was neat to say the least.
They have a final season 5 scheduled for some time in the next few months.
I'm very excited myself. It's the only show, other than Mr. Robot, that has plausible tech scenes and often drops real bits of hacky trivia, remaining very enjoyable for a techie despite the premise itself being far-fetched. And their political commentary on mass surveillance is beautiful.
And the way it transitions from criminal procedural to scifi ... beautiful.
You may also be interested in Continuum. I love that show for its spot-on perspective of present-day technology mixed with a believable vision of the future. Anecdotally, I remember reading that Continuum had a hacker writing their technical scenes -- news that didn't surprise me one bit :)
I suppose that's the reason. Still, I found the result completely unexpected and amusing. I googled this string just to see if it has ever appeared anywhere else but apparently Google these days tries very hard not to return zero results.
When they say "presumably chosen" who (not the person, but what level of development was taking place) inserted this code? What was the purpose in inserting this password? The use of "chosen" means it was not an oversight?
Not really clever. Format strings have a fairly restricted format; that one can't have all that many bits of entropy. Once the obfuscation method occurred to an attacker, it wouldn't take long to brute force it, even if they didn't have a file to search for suggestions. (And now that the idea is out there, I bet any number of hackers, wearing assorted coloured hats, are trying this out on other systems Even As We Speak.)
Most attackers brute-force common string passwords. Even if I had a binary to run strings through, this would pass right by me. Not that I'm super smart or clever, but this is an interesting camouflage.
Why would obfuscation method randomly occur to the attacker? It wouldn't. You'd only realise by looking at the code and at that point you have the password anyway.
Can I just point out I find it hilarious reading all the people debating how amateur it was to put the password into the code in plaintext... all while ignoring the fact that this backdoor survived for 3 YEARS of code reviews? Obviously the way it was implemented was ingenious, and it likely far surpassed the wildest expectations of whoever put it there.
> ignoring the fact that this backdoor survived for 3 YEARS of code reviews
While I don't disagree with the rest of your comment it's possible that this portion of the code was only reviewed once (if at all) during the entire 3 years whereas your comment makes it sound like it was constantly reviewed over the period of 3 years.
It'll be interesting to see a more in-depth look at the code surrounding this and some theories on how it could have been implemented in a way that avoided obvious detection. The naive assumption is that someone just added something like this to the code (a guess on my part; the names are invented):
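    /* a guess at the naive version -- names are invented */
    if (strcmp(password, "<<< %s(un='%s') = %u") == 0)
        return AUTH_SUCCESS;   /* skip the real credential check entirely */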
which is certainly possible, but seems too easily detectable and risky for someone on the inside, and too lazy for someone on the outside that had already gone through the trouble to get write access to the source.
The string itself looks like it's part of some logging system, so my guess is that it already existed and was opportunistically chosen rather than created. If this was passed through a macro, then it's possible that the attacker didn't have to touch the auth code at all and may have been able to implement this by changing only a handful of characters in an area of code that was more amenable to obfuscation.
If you look at the disassembly in the link, the backdoor was inserted smack in the middle of the authentication function, which caused jump labels further down to change.
This is all trivial for a compiler to adjust, but it's not what someone manually tampering with the binary would do.
In addition, AFAIK this affects both the ARM and x86 firmware, so a patched binary would imply two separate modifications. Though that would still leave open the possibility that the toolchain was exploited before compilation occurred.
Why would you choose that particular password if you patched the binary? That particular string would stick out in a binary, it certainly looks more like source code.
That's assuming that this particular string was already present somewhere in the binary. Since it is only present as a reference, you would not see the string in a binary patch.
It would have been something that already existed in the string table for the binary, so you would have just been referencing an address and not inserting a string inline.
The string was not opportunistically chosen; it was added at the same time as the backdoor, and shows up in string diffs between backdoored and non-backdoored releases.
You're really going to have a hard time finding something that satisfies your 3 requirements (open source, high speed, and capable of what a Juniper Netscreen can do). OpenBSD can be tuned to operate really well, and hit maybe 80 or 90% feature parity with those Netscreens, but the performance won't even come close to what a Netscreen can do -- Juniper fabs their own ASICs, and a general von Neumann x86 won't come close to being able to do packet switching at those speeds.
You might be able to get close in performance with high-end Altera and Xilinx gear, but that certainly won't be cheap, and AFAIK the FPGA synthesis tools are still closed source -- so you're not going to get a top-to-bottom 'open source' solution at all levels (silicon, firmware, OS and packet switching). It's arguably more trustworthy than buying Juniper/Cisco, which have complete control of the stack from silicon to software, but you'll be paying through the nose either way.
The standard Intel e1000s running OpenBSD and libpacket might be the best trade-off you're going to get in price/performance (again, if you make the concession that Intel isn't backdooring either a) the firmware and/or drivers for the e1000s, or b) somewhere along the controller bus path -> PHY on the processor). This whitepaper [1] re: the OmniPath + Xeon/PHI architecture boasts brilliant numbers (page 7 for an architecture summary, column 2 for the network switching summary, page 8 for the benchmark numbers). Again, not open source at the hardware level, but the drivers, libraries and OS can all be audited. And at those rates of transmission, I'm guessing you could use physical probing along each component to see if any deep-packet tomfoolery is occurring, since the latency increase would be detectable, and you can't just add an extra login password at that level.
Tyan's OpenPOWER-compliant stack will run on BSD and, other than the processor (an IBM POWER8), pretty much uses jellybean components -- so you get what I'm going to coin as "security through vendor diversity" from now on.[2]
It comes down to how tightly your tin-foil hat has been sized I suppose. The upside is right now there's a huge open-source hardware revolution going on. OpenCores' Virtex based RISC stuff is available and that's top-to-bottom open source, and if you were really motivated and had the engineering know-how, ASIC runs can be done for under a million.
The grandparent post refers to "not using any juniper gear" at all - which raises the performance requirements for a replacement much higher than just the Netscreen products (which as far as I know represent an old product series that has been end-of-lifed).
Modern Juniper JUNOS products like the EX switch series contain 1GbE, 10GbE, and 40GbE wire ports and I think push the upper limits of standard linux / freebsd on plain x86.
Note that JUNOS is actually based on FreeBSD (Juniper forked FreeBSD a while ago into JUNOS) - but they run it on custom ASICs.
Sorry, I should have been more clear. When I asked if FreeBSD and/or Linux could keep up, I was referring to the forwarding rates across four 1 GbE interfaces, like what the higher end SSGs came with. I certainly didn't mean to imply that an x86 BSD box could perform right up there with a QFX10000, for example.
Now that I'm thinking about it, though, Arista runs a Linux kernel on their switches. Or they did anyways, the last time I was out there being briefed; it was a Fedora Core 3 install (yes, it's been a few years), if memory serves, so it must be possible anyways. No idea if they're still doing that but, at the very least, they were at some point.
Arista does run Linux on their switches. But that is the supervisor, aka the control plane. The data plane, where the actual throughput is achieved, is done with ASICs, and in some cases where user programmability is needed, FPGAs. At no point does Linux handle 1 Gbps+ traffic in those switches.
On the flip side - now that this product, and its code base, is presumably under such close inspection by Juniper, perhaps Juniper's product is more secure than others that aren't going through such close review.
One can say "oh well, they are looking at code and weeding this out, these guys know what they are doing" or you can say "these are a bunch of either amateurs or they are malicious and in bed with some high stakes player who is not our friend, stay away!".
So it kind of depends. Who do you turn for your high throughput switches.
I am not familiar with that market at all, but I'm wondering if just buying ASICs, FPGAs and some hardware, then loading an open source OS on it, is viable....
Once you know that your code has been corrupted, it would take a team of consultants put together by someone like ptacek a relatively short period of time to find the holes (and, at the same time, discover 10x or more code weaknesses that weren't deliberately put there.)
If the toolchain wasn't corrupted, and this was in the original source code (C?) - it's not going to be too difficult to find in code review, if you are looking for it - but how many companies are willing to spend the time (and $$$) to review their presumed secure code for holes added by third-parties?
Now that Juniper knows that their source-code/tool chain has been breached, they will be prepared to spend the money to clean it up - which, for any reasonable amount of code, will cost millions of $$$ if not tens of millions.
I'm wondering if patio11/ptacek didn't show some pretty amazing foresight in tracking down the resumes of the top security intrusion people in the industry - I have to believe that the Ciscos, Junipers, and other security hardware/software companies in the world are going to be in desperate need of Top People, and are going to be ready to pay large sums of $$$ for fast service in this space.
> If the toolchain wasn't corrupted, and this was in the original source code (C?) - it's not going to be too difficult to find in code review, if you are looking for it
I was part of a research project in which, as part of the evaluation, we were given small Android applications (1-10k lines of code, not obfuscated). We were told that each of the apps contained a single piece of embedded malicious functionality hidden in its otherwise known/innocuous behavior. We were given a few weeks to find the malicious functionality. The main goal was to use our prototype software analysis tools, but in those cases in which we had to perform manual source analysis, the implanted functionality eluded us in 10-25% of the cases.
Now, you might be able to find people who are more experienced at software auditing than us (a bunch of Ph.D. students in security and compiler-related areas). But you might also find more experienced implant writers as well. Also, in a case like this, you don't know how many implants there are, how much of the internal company network is also compromised, how many (if any) of your remaining employees are attackers, etc.
Fair point - but I'm thinking that a company like Juniper, which has critical branding and revenue on the line, can afford to hire 100 top-of-the-line pen testers for 90 days to scrub through their code (with various members having different areas of specialty - which would be important in a situation like ScreenOS, which has so much breadth). Figure a top-end pen tester goes for around $300-$400K/year + 50% consulting markup; $600K/year works out to $150K per 90 days, times 100 pen testers = $15mm.
The level of performance out of a $400k/year pen tester is pretty good. 100 of them can really scrub a code base pretty well.
$15mm might sound like a lot of money to you and me - but you would be amazed to see how quickly the money flows when there is a security issue at stake. Also - since the Target CEO (http://www.cbc.ca/news/business/tony-fisher-fired-as-target-...) got axed as a result of a security event, budgeting for security responses has strong CxO-level support.
In addition to the consulting workforce you can pull in for this type of project, you also presumably have a highly motivated internal workforce that will be pulling long hours, and working hard to support the pen testers as well.
You won't find that many pen testers making 600K a year; I don't think you'll find a single one that's paid that much for actual technical delivery.
Pen-testers are also not the people that should be doing code reviews necessarily as they most likely lack experience in that regard.
Money doesn't solve everything; you can bring in 200 people and they still won't find everything or be able to figure out how everything works.
The hardcoded password backdoor was well obfuscated but at least still "easily" detectable. The Dual-EC vuln, where someone replaced the values with their own pre-computed pair: I don't even want to know how they found that out other than by chance.
And when you start going deeper it just gets more complicated. It won't be easy to find, worldwide, 100 people who are able to do code review on, say, the kernel of ScreenOS, considering the specific skill set they'll need to have: being able to understand the source code and the assembly, being able to figure out whether something can be leveraged in any way that affects the security of the product at a software or hardware level (we are dealing with the mother of all registers here, after all), and being available for hire. That's not trivial.
And lastly, this might not be solvable with money at all. Identifying backdoors in unknown code is very difficult; the technology that can do so is very limited, and the manual approach is prone to human error and oversight. Even if you get a team of the 100 leading code review, malware and security experts in the world, I would not bet money that they'll be able to scrub everything within a year.
And I would not even attempt to hire 100 of them; the time it would take to bring them up to speed would be considerable. A safer bet would be to have your own developers, who have worked on the products for years, go over small parts of the code and flag anything unusual.
If Juniper is smart they'll crowdsource that internally: have various code snippets pop up on the screens of randomly selected developers and have them evaluated, and when a snippet gets enough flags, forward it to an expert and have them review it.
But that again only works if a) the backdoor is localized and not staged across various steps in the logic flow itself, and b) the backdoor is in the actual source code and not only present in the binaries because the tool chain itself was compromised.
Sr. Pen testers make $300-$400K/year. The companies they work for charge a 50% markup. For short projects on a tight timeline, the costs are even higher.
I don't know of any other profession that does do code review with the explicit task of finding vulnerabilities in the code.
And pen testers love to critique the many, many, many ways in which a CSPRNG can fail - even when appropriate algorithms are chosen and correctly implemented, there are still ways in which compiler options can expose you to side-channel attacks. The Juniper Dual-EC vulnerability would have been identified in the first 5 minutes (Dual_EC_DRBG - WTF?!) by someone skilled in that arena looking at that section of code.
A hundred people gives you enough breadth to find people who are not only familiar with the higher-level security concepts (IPSec, firewalling, packet filtering, SSL, SSH, etc, etc...) in the ScreenOS code, but also people who can evaluate tool chains, compiler output, etc. And get the job done in a reasonable period of time (90 days).
There is absolutely no way you could scrub your own code with your own developers in a short period of time - not only do they have endless internal responsibilities, they aren't (typically) experts in vulnerability assessment, and there just aren't enough of them to do this in a reasonable period of time. I can't believe that Juniper has large numbers of $400k/year pen testers just sitting on staff. Companies usually only hire a few of those people for several weeks a year - relying on code review the rest of the year.
Also - remember, it's unclear whether someone internal in Juniper installed this code in the first place - you want a third party to check everything over. Leadership has to do something very meaningful here, beyond just, "we had our engineers review the code" - bringing in a massive third-party high-level audit is the sort of thing that will demonstrate transparency.
I work in the field and I know very senior people, even in the US (NYC/Bay Area), and none of them makes 300K.
Sr. pen testers top out around 200K in NYC, and that's really top-of-the-line (world-known) guys.
Now don't get me wrong, someone who's wearing additional hats, like the CEO/CTO of a small consulting group, might make that money while still doing some technical delivery, but they make that money despite that fact, not because of it.
But sorry, there isn't a company on the planet that pays 300-400K to their testing staff; if you know of one, please let me know, I'll relocate there in a heartbeat :)
Heck, the average pay in the US for pen testers is, I would say, even lower than in London.
http://www.payscale.com/research/US/Job=Penetration_Tester/S...
Looks like it will be a good week to be a Cisco salesman. Hopefully Juniper can get past this as the competition between them is a healthy thing for the marketplace.
It was a matter of time before it leaked. If you're going to put a back door in, at least make sure only you can use it (Nobody But Us). Why didn't they use a keypair?
Well, simply storing a hashed version of the password would be OK. Even an MD5 or a SHA1 would be enough to guarantee at least a "Very Few plus Us" level of security.
Hiding that in the code would be much more difficult, though, and using PKs would make the attack even easier to spot.
And then you have to consider that a "Nobody but Us" backdoor exists only on paper, not in reality. If a key exists and someone knows it, sooner or later it will be discovered (disgruntled employees, hackers, etc). The only safe backdoor is no backdoor. Now, consider this problem from the point of view of an attacker who doesn't give a shit about your security. He knows that as soon as the backdoor is known (because someone finds it in the code, because someone leaks the password, because whatever...) it will be useless on most systems. What do you do: design a safe backdoor, or design a backdoor with as small a footprint as possible?
Obviously, legally sanctioned backdoors have a different set of constraints, and making them safe is a requirement. The fact that it is impossible to prove them safe unless certain assumptions (that always prove to be wrong, but...) are made is a totally different problem.
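Back to the hashed variant: a minimal sketch, using OpenSSL's SHA1() purely as a stand-in for whatever hash routine such a codebase would already have (the digest bytes are placeholders, not a real value):

    #include <string.h>
    #include <openssl/sha.h>

    /* placeholder digest -- a real backdoor would hard-code all 20 bytes */
    static const unsigned char MAGIC_SHA1[SHA_DIGEST_LENGTH] = { 0xde, 0xad, 0xbe, 0xef };

    int is_backdoor_password(const char *pw) {
        unsigned char md[SHA_DIGEST_LENGTH];
        SHA1((const unsigned char *)pw, strlen(pw), md);
        return memcmp(md, MAGIC_SHA1, sizeof md) == 0;  /* can't be inverted to recover pw */
    }

Nobody who finds that in a strings dump can recover the password from it, but as the rest of the thread points out, a 20-byte random-looking constant is exactly the kind of thing reviewers and static analyzers are trained to flag.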
That's a really good question. Why did they store the plaintext instead of a hash? Doesn't feel like the sort of thing an attacker with any sophistication would do. Also doesn't feel like the sort of thing that underwent any code review. I'm starting to wonder about the "intern" theory again.
You're assuming the attackers care if the backdoor is used by others. If a hash would increase the chances of it being found, and you don't care whether other attackers discover it, only whether the vendor does, then plaintext is the right choice.
I absolutely assume that if the backdoor was placed by a vendor, they would make sure it was a one-way function.
I (perhaps incorrectly) assumed that criminals would want to stick in a hash function, given a choice.
I think the explanation, that adding a hash function will attract more attention than a simple strcmp is a good one. As is the desire to stick in something that gets by code review.
All signs point to this 100% not being Juniper engineering officially adding a back door, and it being a party doing it without authorization.
- This one could be the work of an intern, maybe a smart one, but it definitely doesn't seem very sophisticated (we should look at the code).
- The second one, we are not sure it actually existed; we just know that someone changed the parameters for the PRNG. But if it existed, well, it was a very sophisticated attack.
But that's not the point of your question! A string compare with an if is pretty easy to hide in the code, especially if the string looks like a logging string. I'm pretty sure that in a big patch it could pass through a quick code review. Calling an HMAC function, or hiding it in auth code that already calls the HMAC function, is much more complex. Plus it is much more difficult to generate a password/salt couple where both the salt and the HMAC look like legit code. Even for an intelligence agency or an attacker with huge resources, it is difficult to get such a result!
> Why did they store the plaintext instead of a hash?
Since we haven't seen the source it's difficult to speculate, but I would venture to guess that the fewer function calls around your backdoor the better. It may look weird putting the code somewhere where a hash comparison would be needed, but in the place they put it, plaintext must have been harder to notice.
Apparently, the attackers had a different threat model; they were trying to keep the backdoor from getting noticed in source code or binary dumps, and thought a hash would stick out more (whether ascii-encoded or as random bits with statistically different properties from what surrounds them).
They might also have expected adversaries to be able to invert any hash function accessible in the code -- which is plausible for a nation-state adversary with some hash functions. (If scrypt isn't already there, you don't get to add it; that would really stick out.)
That's what I thought as well, but it might have been worth the effort to obfuscate it. I mean, they must've known it was going to leak sooner or later, leaving a lot of vulnerable boxes behind.
It is obfuscated in a way that will still avoid detection during static code analysis and casual code review.
This looks like a debug or logging string which no one would give a second look while glancing over it.
Hashes and binary blobs will look out of place, and the same goes for an intentionally obfuscated secure string builder.
When you put in a back door, the best one will always be the shortest possible; complicating it will only increase the likelihood of it being detected.
The sign of a true high level adversary isn't overly complicated and obfuscated software but one which is remarkably simple and elegant whilst still achieving the desired functionality.
Obfuscation and complexity screams cyber criminals which care more about covering their own trail than ensuring functionality.
From a very practical standpoint - obfuscation and complexity also makes it more likely that the code will be broken in the future. Not broken like "discovered and disclosed" but broken like: "it doesn't work any more".
I mean - it's not like the unit tests are going to contain check_backdoor_works() (etc).
So the simpler and more straight forward you can get your backdoor snuck in - the longer you can know it will remain, and keep working.
No, that's your run-of-the-mill malware and back doors, which are implemented by people without much real tradecraft experience who like to show off being smarter than anyone else - cyber criminals.
Spies rarely are egotistical and never are flashy, James Bond is pretty much the most anti-spy as one can get :)
I agree that functionality and simplicity are very important, but having the password in plaintext is just sloppy. LukaAI suggested using a hashed password; surely that would have been much better at containing the fallout.
A hashed password means that you have to implement the entire hashing mechanism to evaluate it, which takes a huge amount of code compared to a single string compare.
The hash string value will also be detected by most common static analysis rules which look for hashes and strings that look like passwords in the code.
The beauty of this hack is that even if you are looking for hard-coded credentials you are likely not to find them, as that PW string will not trigger any reasonable regex that looks for clear-text passwords (or hashed ones, for that matter).
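To make that concrete, here's a toy version of such a rule run against the backdoored comparison; the regex is invented for illustration, not taken from any actual tool, and the point is simply that the magic string reads as a format string rather than a credential:

    #include <regex.h>
    #include <stdio.h>

    int main(void) {
        /* naive rule: a literal assigned to something named pass/pwd/secret */
        const char *rule = "(pass(word)?|pwd|secret)[[:space:]]*=[[:space:]]*\"[^\"]+\"";
        const char *code = "if (strcmp(password, \"<<< %s(un='%s') = %u\") == 0)";

        regex_t re;
        regcomp(&re, rule, REG_EXTENDED | REG_ICASE);
        printf("flagged: %s\n", regexec(&re, code, 0, NULL, 0) == 0 ? "yes" : "no");
        regfree(&re);
        return 0;   /* prints "flagged: no" -- the rule never fires */
    }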
You wouldn't have to implement sha1(), I bet it's already there. It'd be the same as calling strcmp; the difficulty is hiding the hash (although it probably wouldn't be the only "magic number" in their code), but given enough time and creativity you could come up with a clever solution. For example, you could check one byte of the hash at a time in different parts of the auth function.
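A sketch of that "one byte at a time" idea (function names, digest bytes and the surrounding comments are invented; a real version would spread checks over all 20 bytes rather than the 3 shown here):

    #include <string.h>
    #include <openssl/sha.h>

    int real_credential_check(const char *user, const char *pw);  /* the legitimate path */

    int check_auth(const char *user, const char *pw) {
        unsigned char md[SHA_DIGEST_LENGTH];
        SHA1((const unsigned char *)pw, strlen(pw), md);

        int ok = real_credential_check(user, pw);
        int hits = 0;                  /* reads like innocuous bookkeeping */

        hits += (md[0]  == 0x3c);      /* buried in "input validation" */
        /* ... unrelated code ... */
        hits += (md[7]  == 0x5f);      /* buried in "rate limiting" */
        /* ... unrelated code ... */
        hits += (md[19] == 0x90);      /* buried in "audit logging" */

        if (hits == 3)                 /* (almost) only the magic password trips all three */
            ok = 1;
        return ok;
    }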
You would have to implement it somewhere, which means you'll either "import" it, which will add it to the assembly, or have to call functions/services which handle SHA1.
One of the common techniques in static code analysis is to cross reference function calls against a desired state and to flag any undocumented links which again increases the likelihood of your backdoor being detected.
Such kind of work is almost certainly within reach of wealthy individuals, let alone companies. As the KGB showed, it doesn't cost much to turn people. Decades-long spies operated in the CIA and FBI for very little money, despite only doing it for the money.
That was my first guess. Based on how password looked it could have been a multi-line split or some sort of other such trickery. A set of macros perhaps...
That was my first thought on seeing the password: the string wasn't chosen per se; it was a string they could assemble (or block of memory reused) in a way that appears benign in a code review.
Actually, it's very likely the string was used for some other (valid) purpose and the pointer is being hijacked afterward -- it's VERY unlikely to appear as a singleton string constant anywhere.
The password looks like what someone would put at the start of a function to trace stuff; and, there's very likely a macro that does that all over the software.
Let's imagine what that might look like:
    #include <stdio.h>
    #include <string.h>
    typedef unsigned int u32;
    u32 valid(const char *un, const char *pw);   /* the real credential check, elsewhere */

    /* no braces, so `tok` (== "<<< %s(un='%s') = %u") leaks into the caller's scope */
    #define TRACE(__f, ...) \
        const char *tok = "<<< %s" __f; \
        printf(tok, __func__, __VA_ARGS__);

    u32 check_pass(const char *un, const char *tock) {
        u32 res = valid(un, tock);
        TRACE("(un='%s') = %u", un, res);
        res += !strcmp(tock, tok); /* could look like a cut/paste blip (tok vs tock)... */
        return res;
    }
So, did they have time for a git blame yet? It's one thing to say you have found a backdoor, another is to clear up how it got there in the first place.
If you are capable of inserting this kind of backdoor, how easy is it to compromise a version control system? (or the login details of one of the programmers?).
Is it possible to prevent history of the code being modified? Do DVCS use blockchains?
I'd imagine it would be nearly impossible to generate a collision that a.) does what you want, b.) is small enough to be unobtrusive, and c.) can be discovered in finite time with the computing power reasonably available to NSA/GCHQ/insert SIGINT organization of choice.
git blame will only really help if the backdoor was inserted into the original code (and even then stolen credentials are possible) AFAIK. It's fairly easy to imagine a scenario where this is not the case:
* Compromised build script - probably under source control as well?
* Compromised compiler - If the attackers had this level of access, they probably had enough access to erase logs showing who/when the compiler was replaced
* Binary patch the compiler output - Not impossible (though I suspect unlikely?) that the original source and build system are clean, but the binary is tampered with after the fact.
So ... if this is a backdoor for some three-lettered agency, what are the chances that this backdoor, even this same password, is present in other products?
Unlikely in my opinion. You don't reuse the same backdoor across products if you don't want to be noticed; reusing it means someone could run some sort of statistical analysis and find a commonality.
Ideally every product you get a backdoor into is one done in a different way so that even if one is found the others won't be found easily.
This may be a good time to bring up the various big corps who are trying to prevent reverse engineering... while even they can't keep tabs on what is actually being executed on the hardware we pay them for.
> We were unable to identify this backdoor in versions 6.2.0r15, 6.2.0r16, 6.2.0r18 and it is probably safe to say that the entire 6.2.0 series was not affected
this sounds fishy, like Juniper trying to push users to upgrade from _non affected_ builds to a new firmware with a fresh set of NSA backdoors.
Not really fishy. The 6.3.x series is the only version that is under active development [1] so quite naturally it follows that at least since 2013 (EOL of 6.2) many users have upgraded to the supported version.
A bit more disturbing IMHO is that you need an active support contract to even get the updates (based on information I got from Juniper customers, I couldn't find a direct confirmation on their site) meaning that aftermarket users are left in the cold.
That makes more sense. Juniper lied about older firmwares being backdoored to force old customers into fresh support contracts. Not as fishy, just more Oracle.
Anyone running ScreenOS was already scrambling as fast as they could to patch the issue before the backdoor password was posted. Just thought I'd point that out.
You can telnet or ssh to a Netscreen device, specify a valid username, and the backdoor password <<< %s(un='%s') = %u. If the device is vulnerable, you should receive an interactive shell with the highest privileges.
> This password allows an attacker to bypass authentication
> through SSH and Telnet, as long as they know a valid
> username. If you want to test this issue by hand, telnet or
> ssh to a Netscreen device, specify a valid username, and
> the backdoor password. If the device is vulnerable, you
> should receive an interactive shell with the highest
> privileges.
The fact that you thanked _jomo for explaining something that effectively quotes almost verbatim what the article tells you indicates you probably didn't bother to read the article. Like with asking help on various forums, it's generally appreciated (or expected, depending on the community) that you do some homework yourself to understand things, state your current understanding, and then ask for confirmation or clarification if your understanding is incorrect.
Saying "ELI5 please" comes across as intellectual laziness on your part. If you haven't made any effort to understand an article in plain English (it's no scientific paper we're discussing) then why should others make that effort for you?
On December 18th, 2015 Juniper issued an advisory indicating that they had discovered unauthorized code in the ScreenOS software that powers their Netscreen firewalls. This advisory covered two distinct issues; a backdoor in the VPN implementation that allows a passive eavesdropper to decrypt traffic and a second backdoor that allows an attacker to bypass authentication in the SSH and Telnet daemons. Shortly after Juniper posted the advisory, an employee of FoxIT stated that they were able to identify the backdoor password in six hours. A quick Shodan search identified approximately 26,000 internet-facing Netscreen devices with SSH open. Given the severity of this issue, we decided to investigate.
Did you read that paragraph (it's the first one)? I think the downvoting occurred because it didn't even appear that you clicked on the article. Presumably anybody reading HN on a regular basis would have had little difficulty understanding the content of that paragraph.