SSH Backdoor found in Fortinet firewalls (seclists.org)
366 points by CariadKeigher on Jan 12, 2016 | 121 comments



It's a custom SSH authentication method invoked with a special username, "Fortimanager_Access". The protocol is a weak "challenge/response" using a hash of the challenge concatenated with a string (used in multiple firmware versions and not at all unique to the device).


So they have an account that has one password that can bypass this firewall on any device?

Does this sound familiar?


Juniper back door #1?


I was thinking NSA, actually.


This is why http://Pfsense.com should get even more coverage on here than it does. Chris and his team do an incredible job of creating secure open firewall software.


pfSense might be fine for home or small business use, but I wouldn't recommend it for datacenter or medium/large enterprise use.

It is primarily GUI-driven, which won't cut it for automation, version control, or some auditing.

AFAIK it is still based on m0n0wall. The init scripts are written in PHP. There is (or was when I used it years ago) one big PHP script with a bunch of includes. If any part of the PHP script fails the rest may fail.

This led to a bunch of problems with packages. For example I installed the RADIUS package through the web UI, decided I didn't want it, removed the package through the web UI. Removing the RADIUS package removed something the large PHP init script was trying to include, which led to the script exiting immediately with an error.

The PHP script was also responsible for loading the firewall rules, but it exited before loading them, which resulted in a booted internet gateway with _no firewall_ (allow any).

This aspect of pfSense may have changed, so I'm not trying to hate on it, just trying to point out that it may have shortcomings for some use cases.

I've managed straight pf on OpenBSD for gateways/firewalls. pf is very nice to work with, much nicer than iptables/tc for internet gateways. But that also won't scale to large enterprises or datacenters. The amount of data is too much for OpenBSD and generic hardware, at least as far as I've seen. At one employer the largest Palo Alto Networks firewalls couldn't handle our office network traffic; they had to be replaced by firewalls from another vendor.

So it really isn't fair to compare FortiGate firewalls, some models of which can do 300+ Gbps, with 40/100 GbE ports, to a little box running pfSense.


> So it really isn't fair to compare FortiGate firewalls, some models of which can do 300+ Gbps, with 40/100 GbE ports

You forgot: and an ssh backdoor


> I wouldn't recommend it for datacenter or medium/large enterprise use.

Fortunately (for us), the large consulting firms disagree with your assessment.


I'll certainly agree that they do a pretty good job of bundling it all up and slapping a web interface on top of it... but the operating system and firewall (pf) and 95% of the software that comes with it? That's an incredible job done (mostly) by many other people.


While what you say is in some sense true, we also make fairly large contributions back to FreeBSD.

Case-in-point: AES-GCM, including AES-NI support, and support for same for IPsec.

There are other, more minor contributions, such as the new 'tryforward' code (replaces what was 'fast-forward', but doesn't break IPsec). r290383

Or r290028, where we eliminated the performance impact of IPsec (which is now on by default in -CURRENT).

I could go on to detail around 30 recent changes to FreeBSD, but I think the point is made.

In any case, it's a bit more than "bundling it all up and slapping a web interface on top of it", as you assert, but you're not the only person who thinks this way.

Your point that we leverage others' work is correct.


Kudos for the AES-GCM and AES-NI work, and for your contributions back to FreeBSD.

Best wishes for your 2.3 release and the new Bootstrap-based webGUI!


Thank you!


thank you guys for your work, our pfSense has been a bastion for 10 years


you are welcome. thanks for your support.


Also OpenWRT: https://openwrt.org/


not as long as everything runs privileged as the superuser. running all services as root by default is bad juju.


That's certainly true of servers, but it's way less important for the limited set of services a router typically provides. Aside from the services that really need root because they're changing system settings, you've pretty much got just dnsmasq and hostapd out of the box. Isolating them is nice if you've got the storage space for that extra complexity, but it's not like OpenWRT is a massive attack surface with gaping holes.
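
For what it's worth, when a service doesn't strictly need root, the usual mitigation is the classic bind-then-drop pattern. A minimal Python sketch of the idea, assuming a Unix host with a "nobody" account (real daemons like dnsmasq and hostapd do the equivalent in C):

    import os
    import pwd
    import socket

    # Minimal bind-then-drop sketch: bind the privileged port as root,
    # then drop to an unprivileged user before serving.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 53))        # binding port 53 requires root

    unpriv = pwd.getpwnam("nobody")   # assumed unprivileged account
    os.setgroups([])                  # drop supplementary groups first
    os.setgid(unpriv.pw_gid)
    os.setuid(unpriv.pw_uid)          # no way back to root after this

    # ... handle requests as the unprivileged user ...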


And what guarantees do we have that when they (or any number of other OSS vendors) include OpenSSH in their product, it isn't some booby-trapped variant? Oh wait, it's open source, so we can go and find out? Anyone?



Why? pfsense code quality isn't very good either, and if that's not where it defeats fortinet... then where?


The BSD tcp/ip stack is well respected. Maybe the most respected?

What is your definition of "quality code"? (Without going in to a huge rant.)


I was talking about code specific to the pfsense project, their PHP code looks like it was written in the 90s and has several greppable bugs.


a) I hope you're looking at 2.3. I agree that the PHP in previous versions was abhorrent. Going forward, the plan is to eliminate the PHP.

b) if you're serious about greppable bugs, please open bug reports (redmine.pfsense.org) or, failing that, email me with a description. (jim-at-pfsense-dot-org)


Their software is good, but their hardware? Awful. PFsense doesn't know how to do reliable hardware at all.


I'm surprised at this - I run four pfSense boxen; my two homebrew ones on repurposed old machines have both died on me at least once, and the two running on purchased pfSense hardware are still up & running. Anecdote != data and all that, but I'd buy their hardware again if I didn't have a large stack of machines I can reuse for "free." (ahem. free does not count my time or that of my students, but sometimes it's fun to admin things a little.)


I own the SG-4860 and it's been solid for about eight months. It's the first hardware I've purchased from them and I have no regrets.

https://www.pfsense.org/products/product-family.html


Do you have any experience with the lower end SG-2220? I'm looking at replacing the Aspire Revo AR1600 I've hacked into a firewall at my aunt's shop running Sophos UTM with something that doesn't require a USB NIC, it looks like a pretty good value at the price.


Thanks. It's what I run at home.


Do you have specific points here?


Check out https://opnsense.org/ It's a more modern fork of PFSense with a better community behind it.


Are you a sockpuppet? Routers are not a toy to be dicked around with by amateurs and called "modern". Opnsense is a bad fork which focuses on dubious UI tweaks at the expense of testing and validation. As far as I can tell its raison d'être is marketing and spite.

http://m0n0.ch/wall/list/showmsg.php?id=376/07

https://www.google.com/search?q=pfsense+vs+opnsense


> Are you a sockpuppet?

Accusations of astroturfing, sockpuppetry, and shillage are not allowed in HN arguments unless you have evidence. Someone disagreeing with your view doesn't count as evidence. So please don't do this.

If you want to understand our thinking on this, I've posted about it many times, e.g. https://news.ycombinator.com/item?id=9277068 and the links back from there.


Thanks for the robo-call quality advice.



Yes, they do an awesome job. Check reddit.com/r/pfsense or check forum.pfsense.org or check their contributions to FreeBSD (both $$$ and code). Don't just link some random stupid thread from trolls.


Tedious drama.


agreed


This is why high-assurance security products were/are required to have:

1. Clear description of every feature and requirement in system.

2. Mathematical spec of those where English ambiguity could affect results.

3. High level design with components that map 1-to-1 to those.

4. Low-level, simple, modular code mapping to that.

5. Source-to-object code verification or ability to generate from source on-site.

What people in faux security mocked as mere "paperwork" or "red tape" were actually prerequisites for defeating subversion by mentally understanding a system from requirements all the way to code. A problem like this would've been impossible in such a system because it would be beyond obvious and probably unjustifiable with requirements tracing.

Every story like this further validates the methods that consistently produced systems without all the security problems plaguing modern security products. The situation isn't inevitable or even necessary: it's merely an inversion of the scientific method, where security companies and professionals consistently refuse to use what's proven to help and reuse tactics proven to fail. It's gotta stop.

That it won't is why I favor liability legislation tied to a reasonable baseline of practices. We can use an inexpensive subset of what worked in highly assured systems. 80/20 rule. The baseline would look more like Secure64 or the HYDRA firewall than shit like Fortinet and Juniper. Hackers would have to work for exploits. I know I'm dreaming, though, as DOD and NSA just dropped the mandate to EAL1 w/ 90-day review for some stuff. (Rolls eyes).


Your assurances are good but the software simply _must_ be open source with a reproducible build.

We're kidding ourselves to put our faith into these closed-source products and that's only just now becoming clear. Open source. It's the only thing that will work for us long term.


Did you read my comment at all before replying? One step is the source, and one step implies a reproducible build or, alternately, specifies a stronger requirement (source-to-object verification).

And no, OSS with reproducible builds is nowhere near enough for software to be trustworthy. It's why even Orange Book had more than one sentence in its feature and assurance activities recommendations.

Let's put it to the test, though. If I'm right, the majority of OSS software will be, like proprietary software, full of easily-prevented holes and undocumented/barely-clear functionality, and will be difficult even to build. Whereas high assurance systems would've had the opposite attributes while faring well during professional pentesting w/ source.

One of us was right for a decade straight. Maybe it's because the principles and practices I promoted... work? Evidence in the field is on my side. Neither OSS nor good builds are enough.


There's more than process going on here; there's also a policy axis in this graph.

Closed source has no obligation to reveal vulnerabilities, fix anything, or even work with customers who report vulnerabilities. ORCL will sue you [1] if you learn too much about what you bought. It's often more in their interest to fix something after public discovery for PR reasons but leave it on the todo list otherwise.

So yes, of course closed and open source have holes. The question is, will they be found, announced, and addressed or will they lay secret for years behind a legal wall?

1. http://arstechnica.com/information-technology/2015/08/oracle...


That's a good point. I should factor that into the next version of my essay. It will be a natural advantage for OSS. On the proprietary end, perhaps the contract will give disclosure of problems plus immunity from suits to the reviewer. There are still problems that can happen in that model, but it negates most of what you said.

Btw, about obligation, my essay assumes the company is trying to differentiate by taking initiative and having their product reviewed. Companies that don't shouldn't be trusted at all. End of story.


In that case, of course you get into mitigation options.

Depending on your contract, proprietary vendors offer few choices about getting a vulnerability patched, if ever. If you're in the riffraff section (i.e., most router owners only have a few), you might wait a very long time. Here's one [1] from Netgear that languished for months. And what about the Juniper backdoor: won't fix?

With open source, you can take the code to whomever you wish, fix it in house, offer a bounty, etc etc. There are plenty of houses that give away code and sell support. If GPL'ed, this model also accelerates fixes because everyone gets immediate benefit of everyone else's fixes.

1. http://betanews.com/2015/10/10/hackers-exploit-serious-unpat...


Hmm. I'm probably going to have to rewrite or supplement the essay to account for these other dimensions. So, what do you think of its primary intent: to show that belief in security or in the absence of subversion comes down to trusting a reviewer and the methods put in, rather than source access? In general, rather than for, say, a project whose source you personally would review in full for every kind of security issue, etc.


You're not wrong that a correct process dramatically limits classes of issues (I've worked in a very high ceremony requirements-traceability shop). But it still requires the customer to have faith in source code they can not themselves see.

As a customer: you can claim what you like about your process and your glory, but until I can actually verify the code as part of the deliverable, it's just faith on my part. I'd rather use an open source firewall than a closed source one, regardless of the claims made by the proprietary company. Again, this is about faith. I'd like to avoid having it when it comes to security.


"But it still requires the customer to have faith in source code they can not themselves see." (pnathan)

"4. Low-level, simple, modular code mapping to that.

5. Source-to-object code verification or ability to generate from source on-site." (me)

Seriously, did someone hack my comment where it doesn't show that on everyone else's end or did they hack my system where 4 and 5 are only visible to me? Shit! Here I was using OSS, reviewed, well-maintained software specifically to reduce the odds of that. I'm blaming Arclisp: must have called a C function or something.

"You're not wrong that a correct process dramatically limits classes of issues (I've worked in a very high ceremony requirements-tracability shop)."

Well, there we go. At least you saw that and have experienced that assurance activities can increase assurance. Now we're getting somewhere.

"Again, this is about faith. I'd like to avoid having it when it comes to security."

You're probably going to have it anyway unless you specifically verified the software, libraries, compiler, linker, build system, and all while producing it from a compiler you wrote from scratch. Nonetheless, open-source can increase trust but I say closed can be more trustworthy. Not is or even on average but can be.

Here's my essay that claims and supports that the real factors that are important are the review, the trustworthiness of the reviewers, and verification you're using what they reviewed. I'd like your thoughts on it as I see where you're coming from and like the faith angle. Faith in the integrity of the process and reviewers are the two things I identified as core to security assurance. So, I broke it down to give us a start on that.

https://www.schneier.com/blog/archives/2014/05/friday_squid_...

Note: I have stuff for other aspects like compilers, dev process, HW, etc. I'm just holding off to focus on the source aspect here.


I don't think most understand what (4) and (5) are, which is why you're seeing the responses you're seeing. Not being in that field, I had to re-read it a couple of times to understand what they meant.

I think OpenSSL's past disproves many of the pro-OSS claims.

As with most things, a blended approach is probably best. Defense in depth, layers of security, crunchy on the outside, still tough on the inside. If you put all your eggs in one basket, you're gonna have a bad time, unless it was a very expensive, well-engineered basket.

And even then you might still have a bad time.


That could be the problem. If it was, I take back what I said in the comments to those people. Will have to make it more clear next time. No. 4 was source code that maps directly to specs, high-level design, requirements, whatever. Modular code that clearly belongs. No. 5 means either generating the system onsite from source code or an audit trail that goes from source statements to object/assembler code so you can see they match.

"I think OpenSSL's past disproves many of the pro-OSS claims."

100% agreement.

"Defense in depth, layers of security, crunchy on the outside, still tough on the inside. If you put all your eggs in one basket, you're gonna have a bad time, unless it was a very expensive, well-engineered basket.

And even then you might still have a bad time."

Decent points. Other engineers and I went back and forth on discussions involving the latter point due to all the factors involved. A high assurance design usually worked pretty well. Yet it might not, so Clive Robinson's and my consensus was combining triple modular redundancy w/ voters and diverse implementation concepts. So, three different implementations of concepts that shouldn't share flaws, with at least one high assurance (preferably three). The voting logic is simple enough that it can nearly be perfected. Distributed voters exist, though.
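
A toy sketch of the majority-voter half of that scheme, assuming three diverse implementations of the same function with hashable outputs (names here are illustrative):

    from collections import Counter

    # Toy majority voter for triple modular redundancy: feed it the
    # outputs of three diverse implementations of the same function.
    def vote(results):
        winner, count = Counter(results).most_common(1)[0]
        if count < 2:
            raise RuntimeError("no majority: implementations disagree")
        return winner

    # e.g. vote([impl_a(x), impl_b(x), impl_c(x)])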

Shit gets worse when you think of the subversion potential at EDA, mask-making, fab, and packaging companies. You have to come up with a way to trust (or not) one entity. For HW, esp low hundreds of MHz, the diverse redundancy with voters should do the trick. Until the adversary breaks them all or the voter. Lol...


Not really. I mean, open source code is definitely preferable in addition to any other security guarantees, for multiple reasons. But an open source spec of a closed source implementation, together with a machine checkable proof (that works on the binary itself), would be equally if not more valuable as assurance that the system performs as intended.


I agree it's preferable to have open source code. Many more possibilities open up that way.


> "...security problems plaguing modern security products."

Not sure what your definition of modern is, but this is a pretty old school "support interface".


INFOSEC started with Ware Report defining issues and predicting requirements. Lots of CompSci and experimentation by military followed. MULTICS pentest and other landmark works happened. Most key activities that boost security were identified then included into standards like Orange Book. A niche market and CompSci field flourished studying and building systems that used every technique known to improve security with great results (on security anyway). U.S. government canceled mandate on high assurance to switch to COTS for features & cost. Private market did same for similar reasons. High assurance market imploded with research going to a trickle.

PC and Internet eras were really getting going around that same time. Languages and approaches that introduce vulnerabilities one way or another became huge. INFOSEC research and products shifted toward all kinds of tactics for hardening, analyzing, monitoring, and recovering such inherently insecure stuff. Revisionism kicked in where people forgot the old wisdom, plus how and why they got there in the first place, and started to reinvent it slowly. The products, both regular and security, had tons of vulnerabilities old methods and tools prevented. I call this the Modern Era of INFOSEC. It's still running strong.

Good news is the Old Guard published tons of papers and experience reports telling us what to do, what not to do, and so on. There's a steady stream of CompSci people and some in industry building on that. Keeps advancing. Even mainstream IT and INFOSEC adopted some of the strategies. Rust, "side channels" analysis, unikernels, trusted boot... all these are modern variations (sometimes improvements) on what was done in 70's-early 90's. So, it's not dead but it's mostly ignored and barely moving.

That's what I'm thinking when I see another modern firewall or whatever with less security than the guards from the 80's that predated them. You'd think they'd have learned something by now past just the features. The assurance activities were there for a reason. Guards, if you were wondering... https://en.wikipedia.org/wiki/Guard_%28information_security%...

Good essay on security assurance from engineering rather than subjective point of view that development often takes:

http://web.cecs.pdx.edu/~hook/cs491sp08/AssuranceSp08.ppt


> 2. Mathematical spec of those where English ambiguity could effect results.

What does a mathematical specification of "secure" look like?


Something like this:

    We develop a tool to verify Linux netfilter/iptables firewalls 
    rulesets. Then, we verify the verification tool itself.

    Warning: involves math!

    This talk is also an introduction to interactive theorem 
    proving and programming in Isabelle/HOL. We strongly suggest 
    that audience members have some familiarity with functional 
    programming. A strong mathematical background is NOT required.


    TL;DR: Math is cool again, we now have the tools for "executable
    math". Also: iptables!

https://media.ccc.de/v/32c3-7195-verified_firewall_ruleset_v...


does it? so you can say "yes, we do not allow these ports" or whatever... does that mean it is "secure"? this doesn't check iptables, only the rules that you write using iptables (as far as I can tell)

the point I am getting at is that in a firewall product, what you are checking is something like "only those users listed from this source may login" which is "easy" but other more complicated things like "this channel must be encrypted". what does "encrypted" mean? Is it just a word that you use in your specification language? if so, does it mean what you think it means? etc etc etc...

formally proving stuff about the behavior of a system, or a distributed system, is hard, but formally proving stuff about security, especially as it relates to information flows, side channels, etc is very hard...


""yes, we do not allow these ports" or whatever... does that mean it is "secure"?"

No, it means you don't allow those ports. That's all it means. Saying precisely what you're doing or what attributes you're aiming for is what the formal work is all about. Whether your security policy is enough and whether your design embodies it are different things altogether.

"what does "encrypted" mean? Is it just a word that you use in your specification language? if so, does it mean what you think it means? etc etc etc..."

That's actually one of the easiest things you could ever check in formal systems. It's helpful to think of it like programming. Actually, done side-by-side. You implement a formal spec for a requirement and/or high-level function that takes plaintext as input and outputs ciphertext. This typically uses Red/Black separation model. You also produce and vet an implementation for the encryption module. Now, how to know if something was encrypted in the system before going on the wire? Wait for it... wait for it... answer: it went through the encryption module first while it was initialized and in a state saying it's encrypting. Just like that. Labels and checking at interfaces were used for identifying what went with what and making policy enforcement easier.

Note: Gutmann has a security kernel in cryptlib that does this at the interface level for every function in the system. In CompSci literature, it's called Assured Pipelines if you want to look it up. It's easier to support now with design-by-contract and advanced type systems. Past systems were kludgy when they happened at the OS level, except with capability systems.
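
A toy Python sketch of the assured-pipeline idea (not cryptlib's actual design, and the "cipher" is a placeholder): the type labels make it impossible to hand plaintext to the wire interface without going through the encryptor while it's in an encrypting state.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RedData:            # plaintext: never accepted by the wire side
        payload: bytes

    @dataclass(frozen=True)
    class BlackData:          # ciphertext: the only label send() accepts
        payload: bytes

    class Encryptor:
        def __init__(self, key: bytes):
            self._key = key
            self._encrypting = True   # stand-in for "initialized, encrypting"

        def encrypt(self, red: RedData) -> BlackData:
            assert self._encrypting, "module not in encrypting state"
            # Placeholder XOR "cipher"; a real system uses a vetted module.
            ct = bytes(b ^ self._key[i % len(self._key)]
                       for i, b in enumerate(red.payload))
            return BlackData(ct)

    def send(data: BlackData) -> None:
        # Interface check: only BlackData crosses to the wire, so anything
        # sent must have passed through Encryptor.encrypt first.
        if not isinstance(data, BlackData):
            raise TypeError("only encrypted (black) data may reach the wire")
        # ... put data.payload on the wire ...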

Anyway, such components and functions are composed with the interactions and compositions analyzed. Each component and composition usually has a small number of execution traces it can perform so one can brute force the analysis if necessary. Finite state machines, both success and fail states, were common in high assurance development because they can be analyzed in full in all sorts of ways.

Note: It was Dijkstra that invented the method for this in the THE multiprogramming system. I think PSOS did it for security first. VAX Security Kernel Retrospective has nice sections where they show layering and assurance activities plus effect they had on analysis and defect hunting. Google any of those to understand the method better.


> You implement a formal spec for a requirement and/or high-level function that takes plaintext as input and outputs ciphertext.

this misses the relationship between the size of the plaintext and the ciphertext...

edit: part of the problem is tooling, to deal with stuff like this you need dependent types and those do exist, but not in a way that "the programmer on the street" can use...


It doesn't and you don't. Size should be specified in both the spec (data properties) and the pre-conditions of the function implementing it.
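
Something like this, as a hedged sketch (the block size and IV layout are assumptions for illustration, not any particular spec):

    import os

    BLOCK = 16  # hypothetical cipher block / IV size

    def encrypt(plaintext: bytes) -> bytes:
        # Pre-condition from the spec's data properties:
        assert len(plaintext) % BLOCK == 0, "pre: input padded to block size"
        iv = os.urandom(BLOCK)
        ciphertext = iv + bytes(plaintext)  # stand-in for a real cipher call
        # Post-condition tying ciphertext size to plaintext size:
        assert len(ciphertext) == len(plaintext) + BLOCK, "post: IV prepended"
        return ciphertext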



One of the best in the field. :) Although I think they just have dev assurance. Not total high assurance. An early one with total was GEMSOS by Schell, one of INFOSEC's founders:

http://www.iwia.org/2005/Schell2005.PDF

Smartcard OS by Karger, another founder of INFOSEC:

https://www.acsac.org/2012/workshops/law/pdf/Lessons.pdf

A comparable one for dev assurance, but maybe easier to emulate, was CompCert. Testing it against many other compilers validated that formal verification gave the best reliability.

http://gallium.inria.fr/~xleroy/talks/compcert-lctes08.pdf

Ironically, Microsoft competes neck-and-neck with seL4 in terms of verification with their excellent work on VerveOS:

https://people.csail.mit.edu/jeanyang/papers/pldi117-yang.pd...


that's a proof that the system corresponds to a specification though


That's right: a formal specification, a formal security model, and a proof that the spec implements it. An implementation formally proven to implement that spec will then possess the security property unless done in by stuff not covered by that model.

Which is where EAL6/7's other assurance activities come in.


I'm guessing he's referring to something like TLA+, which is a "mathematical" language for formal specification/verification of software systems.


Yes, so what does "secure" look like in TLA+...


Since you asked about TLA+ specifically...

http://cacm.acm.org/magazines/2015/4/184701-how-amazon-web-s...

Amazon notes many kinds of problems that lead to reliability and security failures that TLA+ helped knock out. Their engineers are sold on it now and have no intention of dropping it. Certain sentences in that article are nearly identical to those I read in ancient papers using ancient methods. The benefits of precise, easy-to-analyze specifications are apparently timeless.

Here's it done in Z via Altran's Correct by Construction methodology:

http://www.adacore.com/uploads/downloads/Tokeneer_Report.pdf

They apply about every lesson learned in high assurance in their work. Defect rate, from this demo to old Mondex CA, is usually around 0.04 per 1,000 lines of code. That's better than the Linux kernel.

Rockwell-Collins formalized a separation architecture, HW, microcode, etc then integrated it into a CPU:

http://www.ccs.neu.edu/home/pete/acl206/papers/hardin.pdf

NICTA, who did seL4 verification, use tools to model stuff in the language that causes security errors then use provers to verify they're used correctly. Example tool:

https://ssrg.nicta.com.au/projects/TS/autocorres/

Lots of groups using lots of different tools with great results. The difficulty and impact on time-to-market varies. The use of compositional, functions or state-machine models with subsets of safe languages, design-by-contract (or just interface checks), static analysis, design/code review, testing, and compilation with conservative optimization seem to be the winning combo. There's free tools for all of that. It takes about the same time to code as usual while preventing lots of debugging during testing and integration.


See relevant thread in r/netsec: https://www.reddit.com/r/netsec/comments/40lotk/ssh_backdoor...

> It leaves no traces in any logs (wtf?). It keeps working even if you disable "FMG-Access". It won't let you define an admin user with the same name to mitigate it, so make sure that SSH access on your devices is at least restricted to trusted hosts!


The interesting thing from that thread is that it appears it was patched years ago. Then again, maybe they only changed the "password" in the newer versions.


Open hardware and open source. It's our only path. In my opinion the best way for this to happen is to make it part of the government procurement process, that will inject cash into the ecosystem.

I really believe this has already begun with the FANG[0] tech giants with Open Hardware initiatives. At some point you can begin pooling your resources to create safe, secure, and fast platforms that everyone can use.

[0] facebook, amazon, netflix, google


This probably isn't as bad as Juniper's, because you don't generally get external SSH access to a Fortinet box.


It's still good to hear that these backdoors are being discovered (and hopefully expunged).


I think the main problem is the company's attitude towards the security of their products. What else is in there?

Credibility is in a way binary - you either have it or don't.


I fail to see how Fortinet had a bad attitude towards this issue. It was found, and fixed, 18 months ago before any of this information was released publicly.


The existence of the issue in the first place.


You don't get external SSH access to Juniper FWs either; it's either from the local LAN or, more commonly, through a separate management network/VLAN.


they're equal on that part... but the SSH backdoor was only one part of that announcement. The second part was a backdoor in the VPN (via Dual_EC_DRBG), which permits a passive eavesdropper to decrypt the connection: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-7756

https://www.schneier.com/blog/archives/2015/12/back_door_in_...


More specifically, the backdoor had been backdoored, as someone replaced the Juniper constants (which itself replaced the NIST constants) in dual_ec with their own, which allowed passive traffic snooping. The backdoor to the backdoor was backdoored.


I know, but the Dual_EC one is, well, kinda meh. Most likely only a single agent can exploit it, it still requires quite a bit of computing power to decrypt even with selected values, and you need to be able to capture VPN traffic.

The SSH backdoor means that any idiot with basic computer operation skills can log onto a firewall and start playing around with whatever they want.


There are a lot of Fortinet boxes directly connected to the Internet:

https://www.shodan.io/search?query=product%3Afortinet

Of course not all of the results also expose SSH but some do.


Maybe there's another backdoor that hasn't been discovered yet, something like port-knocking.


I'm pretty sure some (of these https://www.shodan.io/search?query=fortigate) do, especially considering SSH is usually trusted for its security. But I might be wrong.


External SSH access to Juniper firewalls isn't normal either.


Another one bites the dust. I'm ready for more though, because it is vindicating my position on FOSS. While FOSS isn't a panacea, at least you can read the code!


Like heartbleed, gotofail, or the debian ssl entropy bug? :)


Not at all. Those were bugs. This is/was a feature.

For this, there'd have to be a specific function in some Fortinet products for handling the special challenge/response backdoor.

A magic string like `"FGTAbc11*xy+Qqz27"` in firewall source code is going to jump out at you. Unlike an extra goto...


How do you know those were bugs and not features? ;)


You can. Do you? When's the last time you read the IPTables code? OpenSSH? The console login prompt? The Grub2 bootloader login prompt? NSSwitch? Krb5?

Do you mean "hopefully someone else will"? Because that's what I mean.


Nice, who's next?

Maybe it is time we build open hardware and software for important things. Can't trust anyone.

Doing audits of open hardware and software is a whole other problem, however.


It's mostly software backdoors we've been seeing lately, no?

(It could of course be that nobody's finding the h/w ones...)


P(A|B) =/= P(B|A)


I almost had a reallllllly.. bad day. Thankfully, it is only version 4.x up to 5.0.7.


Me too, 5.2 phew


official statement from Fortinet http://ftnt.net/1TTc1Bz


Can someone provide some context? A python script alone is kind of hard to decipher.


Just reading the code, it looks like it connects to an SSH session with the name Fortimanager_Access and does some special handling of the password request (see custom_handler) which grants access. Once that's complete, it turns control over to you to type and run commands on the firewall.

I don't quite understand the special handling. Looks like it takes a byte from the server's output, hashes a special string containing that byte, and passes that back to the server. This is the backdoor.

Edit: Maybe that "special handling" is just standard protocol and it's just sending a plaintext password. I dunno.


It uses a kind of challenge/response, where the device provides a salt that is used with a hardcoded password for that account. It seems to look like 'AK1' + base64(salt|SHA1(salt|password|Fortinet magic)) according to http://fossies.org/linux/john/src/FGT_fmt_plug.c, a cracker for Fortigate passwords.
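
In other words, something along these lines: a hedged sketch based on that description, where the exact byte layout and how the magic combines with the hardcoded password are assumptions.

    import base64
    import hashlib

    # Sketch of the response format described above:
    #   'AK1' + base64(salt | SHA1(salt | secret))
    # SECRET stands in for the hardcoded password plus the Fortinet magic.
    SECRET = b"FGTAbc11*xy+Qqz27"

    def forge_response(salt: bytes) -> bytes:
        digest = hashlib.sha1(salt + SECRET).digest()
        return b"AK1" + base64.b64encode(salt + digest)

    # The result would be returned as the answer to the keyboard-interactive
    # prompt for the "Fortimanager_Access" user.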


There is a hard coded SSH password, 'FGTAbc11*xy+Qqz27', in Fortigate devices < v5.0.7. You can use this script to access any device before that version.

EDIT: 5.0.7, not 5.0.2


It's a bit more complex than that; it uses a challenge/response system, and the "FGTA..." string is used as a "key".


Correct, you can't use the password alone, but the challenge/response method used is readily available online, and easy to implement.


Apparently, some people who make firewalls believe in security by obscurity. (They could at least have used an RSA key to verify. Though that would still have been bad.)


I've worked at a large telco testing CPE devices (routers and whatnot) and it was commonplace to find backdoors like this. The devices were made by a third party vendor and most of them had hardcoded passwords and hidden debug features.


Hugged to death. Archive link:

https://archive.is/WU8l3


These backdoors in the news lately - Juniper and now Fortigate - are scary, but thinking back on 10 years in IT I've never operated in a network where SSH on network equipment was accessible to anyone without intranet access through either physical location or VPN.

On top of that I am now in an organization where we're starting to implement security levels on networks, anything above level 0 requires 2FA to access and you can never connect a lower level to a higher level. So best practices are a good thing.


"I've never operated in a network where SSH on network equipment was accessible to anyone without intranet access through either physical location or VPN."

Doesn't help. The attacker just has to get user-level access on some machine on the intranet or in the data center, which can be obtained via other attacks. Then they can attack other machines via the local network to escalate.


Yes, but best practices still apply; for example, client networks in the office do not by default have access to network equipment.

This is where VPN services like Junos (ironically Juniper) work well because they give you 2FA and group based access. So if you're not in the networking admins group then you have no reason to have SSH access to the networking equipment.


The weakest application in your server farm can provide a way into the local network. One of the reasons Amazon uses their own software-defined network switches is so they can limit internal connectivity within their "cloud" to prevent such attack escalation.


What are you using for 2FA?


Ironically enough Juniper Junos. :)


that's a shame but we're used to it these days I guess.

if you want to do a backdoor you probably should do it better, something like port knocking to start with at least.


> if you want to do a backdoor you probably should do it better, something like port knocking to start with at least.

Come to think of it, backdoors are fundamentally "security by obscurity". Or insecurity through obscurity, depending on your POV.


> Come to think of it, backdoors are fundamentally "security by obscurity". Or insecurity through obscurity, depending on your POV.

This one is. But they aren't always.

For example, if a manufacturer put in a support/recovery backdoor, documented it, and utilised a secret that only the end user and manufacturer should know (e.g. something on the physical label), then that would be a backdoor while not relying on any obscurity for its security (or no more than a password does).

The biggest difference between a "good" backdoor and a "bad" one is if it is documented. If the manufacturer is too scared to document it then it likely sucks and they know it.
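
For illustration, a documented scheme of that sort could be as simple as an HMAC challenge/response over the per-device label secret (all names here are hypothetical, not any vendor's actual protocol):

    import hashlib
    import hmac
    import os

    # Hypothetical documented recovery channel: the secret is per-device
    # (e.g. printed on the physical label), so knowing the scheme exists
    # gives an attacker nothing without that secret.
    def make_challenge() -> bytes:
        return os.urandom(16)

    def respond(label_secret: bytes, challenge: bytes) -> bytes:
        return hmac.new(label_secret, challenge, hashlib.sha256).digest()

    def verify(label_secret: bytes, challenge: bytes, response: bytes) -> bool:
        return hmac.compare_digest(respond(label_secret, challenge), response)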


Knowing the Juniper Dual_EC_DRBG constants were fiddled with doesn't help you actually snoop, since working backwards to the generating constant is computationally unfeasible.

Similarly hardcoding someone's SSH public key isn't going to help anyone else gain access just by knowing it's there, is it?


Also if you look at the complexity of the code, you'll know something like port-knocking would have had no effect (probably just an extra line in the custom handler).

People that discover these types of exploits eat machine code for breakfast.


This script worked for me once I enabled SSH on the lan interface on my FortiGate 100D running 5.0. But the only command that seemed to do anything is "exit". Everything else gave an "Unknown Action 0".


But think of the upside - so many terrorists were probably caught because this code existed. We must fight to ensure all firewalls have back doors or face a true terrorism threat.


Maybe backdoor was a bad way to describe it? Maybe it's used as a customer-initiated support channel for when the customer wants the vendor to access the device.


Support channel has to be the best euphemism for a backdoor I've yet heard.

USG probably would have had better luck if they pitched backdoors as a consumer protection measure and had a law mandating that:

1. Software companies must always have a remote and practical method to correct dangerous flaws in the software they issue.

2. To protect consumers' valuable records, all device manufacturers must back up all data on any device they produce. Such backups should ensure that the data is always accessible by the user even if the user were to lose their password or keys.

It would be a security disaster, and it already is.


Please don't give away ideas. You'd be surprised how well these things would resonate with laypeople.


That may be, but it's still a backdoor by definition. There are better ways to allow a vendor to access a device, and a hardcoded password that nobody else knows about is not exactly a "front door".


Fortinet support just uses TeamViewer or Webex when they need access to a device, or you create an account for them.

Source: Fortinet admin


Yes - if it isn't obvious (which it should be to anyone here), this was probably used as a beta method for a control/communication channel protocol, probably inter-device only (not meant for a user or support). Built by a programmer who was told to 'get the communication working for this new feature we want to implement'. So he hacked open a hole with a very large cudgel where he should have used a scalpel.


Do you know what FortiManager is? It has nothing to do with support.



