Given that the advisory advises you to update rather than stop using them, hopefully that year and a half was spent actually implementing some sort of real encryption. And hopefully the reporters were paid for "discussing technical details", like how to actually use an encryption library.
This "XOR encryption" is vulnerable to an eavesdropper with almost no technical capability.
Even just HTTPS with no checks whatsoever (think 1990s Perl scripts, or Python Requests with all the checking explicitly switched off) is protected against passive eavesdroppers; defeating it requires an active attacker (or a large quantum computer), which is far more sophistication than this trivial nonsense demands.
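For reference, a minimal sketch of what "all the checking explicitly switched off" looks like with Python Requests (the URL is a placeholder): certificate validation is gone, so an active man-in-the-middle can impersonate the server, but the traffic is still TLS-encrypted against a purely passive listener.

    # TLS with certificate verification disabled (don't do this in production).
    # The connection is still encrypted on the wire; what's lost is any
    # assurance about who you're actually talking to.
    import requests
    import urllib3

    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
    resp = requests.get("https://device.example.internal/status", verify=False)
    print(resp.status_code)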
This is the problem with 'responsible disclosure.' Companies will just drag their feet indefinitely, like an undergrad who stalls and manages to talk the instructor into assigning them an incomplete instead of just failing them outright. Just flunk them and don't reward that behavior: full disclosure.
I've thought for a long time that this is a broader problem with the whole firewalls-and-NAT network security paradigm. It leads to a "soft underbelly problem" and a complacency that leads to insecure internal systems.
From what I've seen: God help most organizations if someone gets to the internal LAN!
The rule should be: If your thing can't run securely on the unfiltered Internet, it's broken. Any firewalling and such should be a defense-in-depth afterthought.
Of course this is kind of perfect world thinking. The reality is that quite a lot of software is either broken or insecure by design (e.g. databases that assume a secure LAN and don't even authenticate nodes) and so in practice we're quite far from this being practical for all but the most carefully managed deployments. Developers are also notoriously lazy about security while developing. I'm as guilty of this as the next dev sometimes.
I think the latter part of this is too pessimistic - network segmentation works as long as it's fine grained. Put that kind of apps on their own isolated vlan/vpc/equivalent and put an authenticating ssl proxy or something as the gateway.
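For the "authenticating ssl proxy" part, one minimal sketch of the idea using Python's standard ssl module (the file names are hypothetical): the gateway presents its own certificate and refuses any client that can't present one signed by the internal CA.

    # Sketch of a mutually-authenticated TLS gateway context (hypothetical file names).
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="gateway.crt", keyfile="gateway.key")  # the proxy's own identity
    ctx.load_verify_locations(cafile="internal-clients-ca.pem")         # CA that signs allowed clients
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject any client without a valid certificate
    # Wrap the listening socket with ctx.wrap_socket(..., server_side=True) and
    # forward authenticated connections to the isolated VLAN/VPC behind it.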
That was also very bad, but this is even worse, and much more obviously so.
The common thread is that both are from the same company (NASDAQ: FTNT), with almost $2B in revenue, which officially claims to provide "top-rated network and content security, as well as secure access products that share intelligence and work together to form a cooperative fabric".
Yes. If you can build a cryptographically-secure random number generator, which can output a long stream of random bytes from a seed, congrats, you just invented a stream cipher. Just XOR your plaintext and random bytes, and you are good to go.
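For illustration, a minimal sketch of that construction in Python (the keystream expansion here is ad hoc, written only to show the shape of the idea; a real product should use a vetted cipher from an established library rather than anything home-grown like this):

    # Sketch of a stream cipher: expand a seed into a keystream, XOR it with the plaintext.
    import hashlib

    def keystream(seed: bytes, length: int) -> bytes:
        """Expand a seed into `length` pseudorandom bytes by counter-mode hashing."""
        out = bytearray()
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:length])

    def xor_stream(data: bytes, seed: bytes) -> bytes:
        """Encrypt or decrypt by XORing with the keystream (same call both ways)."""
        ks = keystream(seed, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

    ciphertext = xor_stream(b"attack at dawn", b"fresh per-message seed")
    assert xor_stream(ciphertext, b"fresh per-message seed") == b"attack at dawn"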
XOR encryption is unbreakable with a one-time pad that matches the message length (that's essentially the construction AES-CTR uses, with a pseudorandom keystream standing in for the pad). It's one of the most secure cryptographic primitives we have.
The problem is deciding to use home-grown encryption.
I suspect it was more that they never intended to use "true encryption" but just wanted to lightly obfuscate the data. Using strong crypto in a commercial product, especially one intended for international distribution, still brings up a bunch of legal issues that they may have been trying to sidestep:
Having worked before (fortunately not for long) in an environment where trying to argue that "using the OS's crypto library is easy" would've been shot down with a strict "NO!" from management, I can definitely see how this situation occurred. That was a long time ago, but this may have been code left over from that era, and I wouldn't be surprised if many more examples of such code, along with people who are still in that mindset, exist today.
I've seen similar things in all manner of other software (mostly DRM-related, so I won't say more...) --- it's not a hard wall, but a shield from casual observers and "keeping the honest people honest" type of idea.
> Using strong crypto in a commercial product, especially one intended for international distribution, still brings up a bunch of legal issues that they may have been trying to sidestep:
But apparently they sell VPN software. That's their product. The obscured but not encrypted output is the information from inside the VPN, and more.
They couldn't have avoided dealing with the legal issues anyway when selling a VPN.
They completely deserve a "doghouse" tag on Schneier's blog:
"A decade ago, the Doghouse was a regular feature in both my email newsletter Crypto-Gram and my blog. In it, I would call out particularly egregious -- and amusing -- examples of cryptographic "snake oil."
I dropped it both because it stopped being fun and because almost everyone converged on standard cryptographic libraries, which meant standard non-snake-oil cryptography. But every so often, a new company comes along that is so ridiculous, so nonsensical, so bizarre, that there is nothing to do but call it out."
FortiGuard qualifies, IMHO: a big seller of VPN software to companies ($2B revenue!) that is deliberately faking encryption.
It would be funny if Fortinet, specifically, were to claim that their lack of encryption is to avoid violating export restrictions since according to their wikipedia page [0] they've been seen selling their products to repressive third world dictators [1][2] (Myanmar).
How does this not result in FortiNet customers suing, to say nothing of shareholders and the FTC?
To intentionally design an insecure obfuscation for customer confidential traffic, and then keep it in place for a year and a half after someone discovers and reports it, is obviously negligent and causes real damage to the companies who bought this defective product.
They use the "-k" curl flag throughout their code (disabling ALL certificate validation), which I assume is to make initial configuration easier. Rather than fix this going forward, they created a workaround document which all new and existing customers need to follow to secure their setup.
"Responsible disclosure" is an Orwellian term coined to coerce researchers into aligning their actions with the incentives of vendors. The better term is "coordinated disclosure", and coordination sometimes does make sense, and other times does not.
Bug bounties are the real chilling force here, not the terminology used to describe a disclosure protocol. From what I recall before then there was a full spectrum of behavior, some would let the vendor steer the disclosure, some would set a timeline and stick to it, and some would hang the scalp as soon as they could.
I don't know that I personally ever witnessed any major fallout from the release of a zero day. Seemed like more of a speculative risk than anything. Plenty of shaming to go around though.
"Responsible disclosure" is an opinionated term, and there's nothing wrong with people using it if they think that what they're doing is actually responsible. In fact, one advantage of using it is that it should, in theory, make one thing about whether the coordination you're doing is responsible or not.
I was at first inclined to think that a year and a half was way too long to actually be considered "responsible". But if the outcome of that process was that FortiNet now has actual encryption in place (as one would hope since the advisory recommends people update their software, instead of recommending switching to a competing product), then I think there's an argument to be made that the effort put in will probably have been beneficial on the whole.
The term was literally coined by a group of vendors, and uses the word "responsible" so as to launder their commercial preferences into a universal norm. It worked so seamlessly that people casually use the brand without even thinking about where it came from or what it implies; it's like the word "kleenex" but with malign moral freight. It's the purest example of Orwellianism I can think of in our field; a direct weaponization of language itself.
On the other hand it's 5 in the morning and I've been awake for 4 minutes so it's possible that when I'm fully conscious I'll feel differently.
> It's the purest example of Orwellianism I can think of in our field; a direct weaponization of language itself.
It doesn't seem to me any different than "pro-life" or "pro-choice"; or "Catholic" (which means "universal") or "Orthodox" (meaning "right-believing"), or "progressive".
I know people who won't use "Catholic", but instead will say "Roman" or "Papist" (since that organization is objectively based in Rome, and does follow the Pope); and people who won't use "pro-life", but always say "anti-abortion" or "anti-choice", or who won't use "pro-choice" but say "pro-abortion" instead. I understand where they're coming from and respect their preferences. I myself tend to put "progressive" in quotes, because I don't consider all causes "progressives" pursue to be making progress.
But such people normally don't go around injecting themselves into other conversations and complaining about the terms people do use. Rather, they simply engage in the conversation and use their preferred term.
I use the word "responsible disclosure" because I think we should be responsible about both how we disclose things and how we keep things secret. Both disclosing immediately, and sitting on an issue for years, are irresponsible in my view. Ideally issues would be reported, fixed, pre-disclosed, and disclosed within a few weeks. Having things fixed in that time frame is not always possible, particularly when you're dealing with a company as utterly clueless as the one described here. Given that, whether it's more responsible to zero-day everyone or to sit on it for a year isn't very obvious, and reasonable people can come to different conclusions.
It turns out I still feel the same way as I did when I woke up at 5AM. I mean, I went back to sleep. But I've been up for a little while now, and I'm pretty sure I'm right, and "responsible disclosure" is way more Orwellian than "pro-life"; there's at least a colorable argument that "pro-life" is descriptive of a policy position, whereas there's nothing "responsible" at all about independent researchers disclosing information to the public only on a vendor's schedule and terms.
> there's nothing "responsible" at all about independent researchers disclosing information to the public only on a vendor's schedule and terms.
Right, so the reason we disagree is that we have different understandings of what "responsible disclosure" means. From Wikipedia:
> In computer security or elsewhere, responsible disclosure is a vulnerability disclosure model in which a vulnerability or an issue is disclosed only after a period of time that allows for the vulnerability or issue to be patched or mended. This period distinguishes the model from full disclosure. [1]
And full disclosure:
> Full disclosure is the practice of publishing analysis of software vulnerabilities as early as possible, making the data accessible to everyone without restriction.
Nowhere does it say that this is done "on a vendor's schedule and terms". On the contrary, unless there is some other agreement in place, the discoverer can publish any time they like; so in reality, control of the schedule ultimately rests with the discoverer.
This is recognized in the XenProject's Security Response Process [2]:
> When a discoverer reports a problem to us and requests longer delays than we would consider ideal, we will honour such a request if reasonable. If a discoverer wants an accelerated disclosure compared to what we would prefer, we naturally do not have the power to insist that a discoverer waits for us to be ready and will honour the date specified by the discoverer.
Google Project Zero typically report a vulnerability and say, "We're telling everyone about this in 90 days, hope you're ready."
That is my expectation of "Responsible disclosure". If SEC Consult waited 18 months, it's either because 1) they had a contract with Fortinet of some sort, or 2) they were convinced that waiting 18 months would cause less harm to people than zero-daying everyone.
As I said at the top of the thread: it often makes sense to coordinate with vendors, and natural preferences often do align. But it sometimes doesn't, and, at times, it even makes sense to disclose without any coordination. Un-coordinated disclosure isn't intrinsically irresponsible, and so the term "responsible disclosure" is misleading. This isn't just my idea, and the term itself has become disfavored among vulnerability researchers.
> and so the term "responsible disclosure" is misleading.
I think this is a key point. As I said, I think "responsible" implies being responsible on both sides: neither simply publishing without sufficient time for a vendor to make a fix, nor waiting indefinitely while the vendor waffles around or tries to pretend the vulnerability doesn't exist. "Responsible disclosure" focuses (or ought to focus) on minimizing harm to users, whereas "coordinated disclosure" focuses on cooperating with the vendor. As such, "Responsible disclosure" in fact carries within itself the threat of going public if the vendor is dragging their feet, where "coordinated disclosure" doesn't.
I continue to think that we should use the term "responsible disclosure", and insist that it mean actually behaving responsibly to users.
I understand what you're trying to say. You're saying that conceptually there is such a thing as being "responsible" or "irresponsible" about disclosure, and I think that's true!
The problem is that (charitably) the term of art (or uncharitably, brand) "responsible disclosure" is attached to some very specific norms, including "not releasing vulnerabilities without a patch" and "giving vendors a commercially reasonable amount of time to create a patch" and "working closely with vendors to coordinate that time window" and "redacting or carefully reducing POC code", which are not themselves universal or even generally "responsible".
They're commercially responsible, to be sure! But it should not be an obligation of unpaid third party researchers to expend any effort whatsoever to be responsive to a vendor's commercial concerns. It's a nice thing to do, and some people are just preternaturally nice to vendors, and that's usually fine, but there's nothing deontologically "responsible" about that.
> You're saying that conceptually there is such a thing as being "responsible" or "irresponsible" about disclosure, and I think that's true!
I'm saying more than that.
We both seem to agree that there is an ongoing war about disclosure; and that large vendors (through a mix of good, neutral, and bad intentions) are warring to make disclosure more convenient and less painful for themselves, to the detriment of their users (and ultimately themselves as well); and that the use of words is one arena in which that warfare exhibits itself.
But we've come to opposite conclusions about the best way to fight the war in this specific arena.
You've observed that companies are trying to define "responsible" to mean "commercially responsible". But rather than recognizing this attempt at redefinition as an attack, and insisting on using the word "responsible" to actually mean responsible towards users, you seem to think that the use of the word itself is an attack; and want to instead try to insist on using a different term, "coordinated disclosure".
I think that's a bad strategy. You're advocating that we surrender the word "responsible" entirely to large vendors. Large vendors are not going to stop using the word "responsible"; if right-minded security researchers simply abandon the word, then the broader public are going to be entirely at the mercy of vendors to decide what's "responsible". Furthermore, as I've argued, using "coordinated" shifts all focus to the vendor, removing any focus from the user at all.
In the war over disclosure, your strategy seems to me to hand a massive win to big vendors.
I think a much better strategy is to counter-attack. The word "responsible" is too valuable a term to just give up. We must continue to insist that "responsible" means "responsible to users"; and we must continue to insist that there are times when pressuring and even embarrassing large companies is the most responsible thing to do.
There's nothing I can say to this that I haven't already said. "responsible disclosure" is a term of art. It means something you don't mean. You can redefine it for yourself, but people reading you will take its actual meaning, not yours.
> You should dig in to the history of “responsible disclosure”.
That doesn't give a history of responsible disclosure, but it does correspond with my understanding of the situation.
He mentions "RFPolicy" invented by security researcher Rain Forest Puppy [1]. This policy includes the following stipulation right at the top:
> You basically have 5 days (read below for the definitions and semantics of what is considered a 'day') to return contact to the individual, and must keep in contact with them at least every 5 days. Failure to do so will discourage them from working with you and encourage them to publicly disclose the security problem.
This is completely the opposite of "the vendor is entirely in control of the process". A few paragraphs after this reference, the author of your article says:
> This entire charade [a push by Microsoft about disclosure] is nothing more than an elaborate PR scam. The five security companies that are involved (@Stake, BindView, ISS, Foundstone, Guardent), were they not following these general rules along the lines of responsible disclosure?
This paragraph implies that the author of the article identifies RFPolicy -- a policy created by security researchers themselves -- with "responsible disclosure", and is blaming Microsoft for trying to hijack the term.
But instead of insisting that "responsible disclosure" means something like RFPolicy, you're insisting that "responsible disclosure" actually means something like what Microsoft wants it to mean. Rather than fighting to maintain control of a term that security researchers invented, you're advocating surrendering the term to Microsoft and other organizations like them.
There is a lot of history here that you may not be aware of. Security researchers didn’t invent it, that’s kind of the point. Microsoft did, to further their own interests, in a way that was very adversarial to security researchers, and one of their vendors at the time (@Stake) tried to formalize it in an RFC that went nowhere. And the primary author of that RFC, who was an @stake employee (Wysopal), has also publicly disavowed it.
This isn’t surrendering the term, not only because Microsoft largely invented it, but even they disavowed it 10 years ago. Not even Microsoft wants that term anymore!
You cannot “insist” what it means. You’re trying to redefine it from its original framework to fit your personal definition of “responsible”, which nobody agrees on, and which is one of the many reasons that it’s a dumb term.
It’s like saying you believe in buying cars that are responsible colors. If you want to communicate efficiently and without being inflammatory and presumptuous in a community where reasonable minds have long disagreed, you should just say “red”. Everyone knows what that means.
Google Project Zero do not use the term “responsible disclosure” to describe this. They quite openly reject the term and do not follow “responsible disclosure”, which would prohibit them from publishing exploit details even after the fix is out, and would prevent them from disclosing that an issue even exists before a fix is available. The term was specifically a social-engineering attempt to brand exactly what GPZ is doing as irresponsible (even though GPZ didn’t exist at the time).
Much like a "due process" standard so-called "responsible disclosure" is only anything like responsible if you have set out how you'll be responsible in advance and then stuck to those rules. If reality argues in favour of changing your methods so as to be "more responsible" whether that means disclosing sooner, or later, to different parties or in a different way, that should be a _separate_ process which changes future policy.
Google's Project Zero is an example - regardless of whether they're finding holes in Android or iOS, it's the same policies. Google's Android team needs an extra week? Same result as when Apple's iOS team asks for one.
If you don't have the policy set down in advance, you're vulnerable to manipulation which will invariably be a bad idea as well as leaving you looking _less_ responsible than you intended.
The outcome is that bad guys had _over a year_ extra to read everything FortiGuard products were doing without customers having any awareness this was happening.
Acting responsibly as a researcher who has discovered a vulnerability requires delicately balancing a question of two greater evils. Will more people get hurt overall if we announce the vulnerability sooner, or will more people get hurt overall if we wait until the vendor is ready?
In most cases, working with the vendor to allow a patch and a warning to be released is the path of least harm. But sometimes the right decision is to announce a vulnerability before the vendor has issued a patch.
Like the potential fallout of known broken "encryption" in a security vendors products being hidden from their customers for 18 months?
The ethics of publicly disclosing way quicker than that, despite what the vendor wants to label "responsible disclosure", seems pretty straightforward to me...
I hope that 18 months of conference calls was extremely lucrative for the researcher here, because I'd feel like a jerk sitting on that one for a year and a half while the vendor was no doubt selling more and more of their broken and insecure crap to unsuspecting customers...
And this right here is why “responsible disclosure” is a dumb term that people need to stop using.
Look how much time is wasted arguing over the highly subjective definition of “responsible” that breaks out. Communicating these issues would be far more effective if we used objective language.
That was the point when Scott Culp coined that awful term in the first place. People are still taking the bait.
Tangential, but are Fortinet products annoying as well as dangerous? What should I tell a boss who is considering their devices in front of their ISP line?
1) Fortinet tries to explain weird SSH 'backdoor' discovered in firewalls - https://www.theregister.co.uk/2016/01/12/fortinet_bakdoor/
2) Fortinet Finds More SSH Backdoors - https://www.bankinfosecurity.com/fortinet-finds-more-ssh-backdoors-a-8826
3) Fortinet backdoored fortios or hackers did for monitoring since last 5 years - https://www.securitynewspaper.com/2019/08/29/fortinet-backdoored-fortios-or-hackers-did-for-monitoring-since-last-5-years/
We run one of their firewalls and it's pretty good, but between the security issues and the licensing overhaul between the 6.0 and 6.2 releases, I'm considering ditching it when we upgrade our network in the new year.
We're having absolute misery with their support at present. Panorama's not been logging data for 45 days now. They've just completely missed the last two teleconfs.
This is the company that shuffled a hard-coded password around instead of fixing it. The developers got around the strings AP_image | grep "hardcoded password" "check" the managers were doing on the AP code by just moving the password into the controller code instead.
They still have a hard-coded root password on their WLC controllers.
I hate Fortinet stuff. My number one reason is the binary-blob VPN client for Linux, which you can't even download without an account (so it's not in repos; have fun deploying it to many machines). Also the annoying but more benign fact that everything has "Forti" in front of it.
Translation: half a year to communicate:
- you send it unencrypted
- yes we do.