Cisco Adaptive Security Appliance SNMP Remote Code Execution Vulnerability (cisco.com)
134 points by runesoerensen on Aug 17, 2016 | hide | past | favorite | 127 comments



This is the vulnerability exploited by EXTRABACON: https://xorcatt.wordpress.com/2016/08/16/equationgroup-tool-...


So it seems the dump contains at least one legit 0-day, and it's been in use for 3 years.


Which does at least HINT that it might be what it claims to be. That's a pretty impressive 0-day which they just gave away as a freebie, who knows what they didn't give away.

I will say we'll never get real confirmation if this was actually stolen from the NSA, but if the other bundle contains a bunch of nice original vulnerabilities people will presume it was.


The Washington Post got former NSA TAO employees to go on record (anonymously) confirming that the leaked toolkit comes from the NSA:

https://www.washingtonpost.com/world/national-security/power...


Good. Given that these tools can no longer be considered available only to the NSA, they might start working with vendors to close this particular set of holes.


I wonder how this leak affects their "vulnerabilities equities process".

The publicly available data would suggest that thus-far NSA-hoarded vulnerabilities are definitively known to actors who appear willing to act against US interests.

Vendor disclosure means those vulnerabilities can be patched and US interests can cease being vulnerable, but could also confirm NSA awareness of vulnerabilities - which could in turn cause attribution concerns for past or present operations the NSA is undertaking or has undertaken using these vulnerabilities (in addition to providing additional credibility to the leaker).

What a tangled web.


I've worked with the US govt (selling to it) and can tell by browsing those files that there is a high chance they came from a three-letter US govt agency, just from the stuff they reference, the packages and tools they use, and the language and phraseology in the comments (excluding bundled software like requests and scapy, of course). After many years you start to get a feel for stuff like that.


Yes, I think so, too.


Makes you wonder if they could have made more money by pretending to find them and reporting them to the respective bug bounty programs.


Bug bounties almost never pay market value for exploits. Only reason to participate in them is charity.


And legality. I'm not sure why people seem to entirely discount that portion. There's more reward by selling on the black market, but there's also more risk associated with that.


Yeah. Homeowners don't pay market value for me not robbing them, either. After all, think how much that jewellery is worth. And the damage from stolen ID cards and passports.

A laptop alone could get me $250, but no one wants to give me even $10 for telling them their door is unlocked.


Most people only care about tangibles. When I politely advised people about security holes, I was told "we don't need people like you" or they just called the police. I understand.


They discount it because it's not true. Nothing illegal about looking for vulnerabilities in products and being compensated for your findings. It's only illegal to attack someone else's deployment.


What's illegal about selling them? Is there an anti-security-consulting-market legislation?

In general, what are some of the risks involved? (I am just not very familiar and wondering in general.) Is it a tax issue, i.e. the chance the IRS could come after you for undeclared income?


Depending on jurisdiction and the particulars of the sale and who you sold it to, I think it's possible you could be charged as an accomplice if the exploit is used in a crime. For example, if you had any reason to believe the individual or organisation you sold it to might use it illegally, and someone singles you out after they do use it illegally, I don't think it would be hard for a prosecutor to make a case. I also don't think under those particular circumstances that's necessarily a bad thing. IANAL though.


Nothing, there are businesses doing it in the US paying taxes on their income.


> Only reason to participate in them is charity.

Maybe believing that it's good when fewer vulnerabilities exist and when attackers are less able to exploit things? Does that count as charity?


...noun: the voluntary giving of help to those in need.


Getting a CVE on your resume isn't bad either.


> and it's been in use for 3 years.

At least 3 years.


This is why "responsible disclosure" is a joke. The flaws put in by these companies are not responsible. (Sometimes people make mistakes, but we're at the point of carelessness).


That may feel good to say, but as someone whose job it was to find these kinds of bugs in software from companies ranging from tiny startups to financial exchanges to major tech vendors, this is a kind of carelessness shared by virtually everyone shipping any kind of software anywhere.

That said, the term "responsible disclosure" is Orwellian, and you should very much avoid using it.


How is "responsible disclosure" Orwellian?


It's coercive. It redefines language to make any handling of vulnerabilities not condoned by the vendors who shipped those vulnerabilities "irresponsible", despite the fact that third parties who discover vulnerabilities have no formal duty to cooperate with those vendors whatsoever.

The better term is "coordinated disclosure". But uncoordinated disclosure is not intrinsically irresponsible. For instance: if you know there's an exploit in the wild for something, perhaps go ahead and tweet the vulnerability without notice!


Do you think there's a moral imperative for researchers to responsibly disclose discovered vulnerabilities?

I see it as a kind of Hippocratic Oath in the field.


No.


Maybe I don't understand you. Are you suggesting that, if you find a vulnerability in a piece of software, you aren't ethically obligated to confidentially disclose the vulnerability to the maintainer so it can be patched before the vulnerability becomes publicly known? If so, why? What is a person who found a vulnerability ethically obligated to do?


No, of course you aren't. Why would you be?


... because if you don't and someone malicious also discovers this vulnerability they can use it to do bad things? If I can get a vulnerability patched before it can be exploited, I can potentially prevent a hacker from stealing people's identity, credit card numbers, private data, etc. To have that opportunity and not act seems irresponsible.

I must be misunderstanding. Would you mind expanding on this more?


You are not misunderstanding. I do not in the general case have a duty to correct other people's mistakes. The people deploying broken software have a duty to do whatever they can not to allow its flaws to compromise their users and customers. Merely learning something new about the software they use does not transfer that obligation onto me.

I would personally in almost every case report vulnerabilities I discovered. But not in every case (for instance: I refused to report the last CryptoCat flaw I discovered, though I did publicly and repeatedly warn that I'd found something grave). More importantly: my own inclination to report doesn't bind on every other vulnerability researcher.


Well, I'm glad you do report the vulnerabilities you find. Maybe it's my own naive, optimistic worldview, but I profoundly disagree with your stance that a researcher is not obligated to report. I think it is a matter of public safety. If you found out a particular restaurant was selling food with dangerously high levels of lead, aren't you obligated to tell someone, anyone for the public good? If you don't, you aren't as culpable as the restaurant serving this food, but that's still a lot of damage you could have prevented at no real cost to yourself.

I understand morality is subjective, but that's my 2 cents on the matter.

EDIT: about the vulnerabilities you didn't disclose, I really can't understand why not. Why not just send an email to the maintainer: "hey, when I do X I cause a buffer overflow"? You don't even have to help them fix it. You probably won't answer this, but can you tell me why you wouldn't disclose a vulnerability?


I do not report all the vulnerabilities I find, as I just said.

I confess to being a bit mystified as to how work I do on my own time, uncompensated by anyone else, which work does not create new vulnerabilities but instead merely informs me as to their existence, somehow creates an obligation for me to act on behalf of the vendors who managed to create those vulnerabilities in the first place.

Perhaps you have not had the pleasure of trying to report a vulnerability, losing several hours just trying to find the correct place to send the vulnerability, being completely unable to find a channel with which to send the vulnerability without putting the plaintext for it on the Internet in email or some dopey web form, only to get a response from first-line tech support asking for a license or serial number so they can provide customer support.

Clearly, you have not had the experience of being threatened with lawsuits for reporting vulnerabilities --- not in software running on someone else's servers (which, absent a bug bounty, you do not in the US have a legal right to test) but on software you download and run and test on your own machine. I have had that experience.

No. Finding vulnerabilities does not obligate someone to report them. I can understand why you wish it did. But it does not.


I see your point about it being overly difficult to report vulnerabilities; especially the legal threats, that seriously sucks. I guess I believe you have an obligation to make some effort to disclose, but if a project is just irresponsible and won't fix their shit, or will try to sue you, it's out of your hands.


Somehow my doing work on my own time creates an obligation for me to do more work on behalf of others.

Can't I just flip this around on you and say you have an ethical obligation to spend some of your time looking for vulnerabilities? If you started looking, you'd find some. Why do you get to free-ride on my work by refusing to scrutinize the stuff you run?


> Somehow my doing work on my own time creates an obligation for me to do more work on behalf of others.

To some small extent, yes, though how much work is up for debate. The maintainer's email and PGP public key are right there on the website? Yeah, I think you're obligated. No email you can find, no way to contact them, or they're just outright hostile? No, I don't think you should have to deal with that.

But I feel like you agree with that, though maybe not in those exact words. After all, you've had to jump through all kinds of hoops to disclose vulnerabilities, been threatened with lawsuits for doing the right thing, and yet you still practice responsible disclosure in almost every case in spite of the burden of effort and potential risk. Aren't you doing it because you think disclosure is the right thing to do? That's all I mean by obligation.

EDIT: sorry, not "responsible disclosure," "cooperative disclosure" or whatever term you want to use for disclosing the vulnerability to the maintainer.


I think it is a matter of degree. Here - not sure how this is handled in other countries - it is a crime if you come across an accident and do not attempt to help. And to me this is obviously not only the right thing to do because it is required by law but because there is a moral obligation to do so.

Nobody has to enter a burning car and risk his life but at least you have to call the emergency service or do whatever you can reasonably do to help. And it really doesn't matter whether you are doing your work delivering packages, whether the accident was the fault of the driver because he was driving intoxicated, if somebody else could also help or whatnot.

Discovering a vulnerability is of course different in most respects - the danger is less imminent, the vendor may have a larger responsibility, and so on. But the basic structure is the same: more or less by accident you end up in a situation where there is a danger and you are in a position to help make the outcome better.

So I think one cannot simply dismiss that there might be a moral obligation to disclose a vulnerability to the vendor based just on the structure of the situation; one has to either argue that there is also no moral obligation in the accident scenario, or argue that the details are sufficiently different that a different action - or no action in this specific case - is the morally correct, or at least a morally acceptable, action.


Accidents and vulnerabilities are not directly comparable, so a position on vuln disclosure does not necessarily imply a particular position on accident assistance.

I would feel a moral obligation to help mitigate concrete physical harm to victims of an accident. I feel no such obligation to protect against hypothetical threats to computer systems.

Chances are, you recognize similar distinctions; for instance, I doubt you feel obligated to intervene in accidents that pose only minor personal property risks.


That is also my point of view, severity and other factors matter. But that also seems to imply the same thing for vulnerabilities - discovering a remote code execution vulnerability in Windows might warrant a different action than a hidden master password in an obscure forum software no one really used in a decade. The danger is still more abstract but it can still cause real harm to real people.

IF there is a vulnerability, it might already be in use by hackers. People need to know about it immediately so they can defend themselves (by closing a port, or switching to a different server, or something). Companies need to be encouraged to find and fix this kind of thing without waiting for someone to embarrass them by finding it first.


I object strongly to your claim that I practice "responsible disclosure", for the reasons stated earlier in the thread.


There is no such thing as responsible disclosure. The concept is nonsensical. Also, you're overestimating the consequences of a single bug. The boring reality is that bugs rarely matter.


When you say obligation, do you actually mean that? An obligation is enforced by some sort of penalty, either legal (ultimately a threat of violence) or social (public shaming). There is no incentive for meeting an obligation outside of avoiding punishment, so why would individuals and private enterprises do any infosec work?


You assume that your own research machine can't be compromised, and that the communication channels of the organization at fault can't be either.

So, it won't be fixed.

Hopefully only one or two people know about the same flaw you found...

Oh, but you would know ahead of time if concrete physical harm could possibly come to the victim of an accident?

Well good for you! You should probably be in charge of defending all infosec research, since apparently you can't be hacked.


[flagged]


Then you misunderstood your own logical conclusion...

You said (and I quote):

  Can't I just flip this around on you and say
  you have an ethical obligation to spend some
  of your time looking for vulnerabilities?
No. No, you can't. Unless you could convince me that my Dwarf Fortress skills have a similar magnitude of real-world effect as the vulnerabilities I've discovered on my own and decided to pocket for one reason or another.


By your logic, I am better off not doing vulnerability research in my spare time --- as is virtually everybody else. How is that a good outcome?

This is a fascinating exchange. Now I wonder how much of the general population, or even the tech-but-not-security population thinks like this.


To your second question: because some projects are fundamentally irresponsible, and providing vulnerability reports to them means making an engineering contribution, which decreases the likelihood that the project will fail.


The meaning of the words "responsible" and "irresponsible" extends beyond "formal duty".


I'm sure that's true, but that's not responsive to my argument.


I obviously thought so otherwise I wouldn't have said it.


The only responsive argument I can come up with based on your original comment depends on you not knowing what the term "responsible disclosure" means, and instead trying to back out its meaning from the individual words "responsible" and "disclosure". But that's not what the term means.

A good shorthand definition for "responsible disclosure" is "report to the vendor, and only to the vendor, and disclose to nobody else until the vendor chooses to release a patch, and even then not until a window of time chosen by the vendor elapses."

Maybe you thought I was saying "the only way to disclose responsibly is to honor a formal duty to the vendors of insecure software". No, that was not my argument. If you thought it was, well, that's a pretty great demonstration of how the term is Orwellian, isn't it?

Or I could be missing part of your argument (it was quite terse, after all). Maybe you could fill in some details.


this is a kind of carelessness shared by virtually everyone shipping any kind of software anywhere.

I don't feel wrong saying that all of those are irresponsible. There are some people who write good code, who at least make an effort to avoid vulnerabilities, and those are the responsible ones.


If you find one of them in the wild, take a picture, so we can have some evidence they exist.


They exist all over the place. OpenBSD, DJB, Knuth, at companies I've worked for, you'll find people who care, and code responsibly. The rest of you need to get your act together.


Someone mentioned selling vulnerabilities on the black market as a better alternative than doing these "responsible disclosure" and bug bounties. What's your take on that? Is it a better route to take?


For the most part I think selling vulnerabilities on an actual "black market" is intrinsically unethical, and makes you a party to the bad things people who buy exploits on an actual black market do with them.

Thankfully, the black market doesn't want 99.99999% of the vulnerabilities people find.

I have friends who have sold vulnerabilities to people other than vendors. I do not think they're unethical people, and I don't know enough about those transactions to really judge them. So, it really depends, I guess. But if it were me, I'd be very careful.


It's dangerous, and might be illegal, so be careful if you decide to do that.


Yep here's Cisco's statement on this: http://blogs.cisco.com/security/shadow-brokers


...secure asymmetrical (public-key) cryptography...

Hmmmm.


Is there some foreign government or organization that buys large numbers of ASAs, enables SNMP on them, exposes SNMP to the Internet on them, and uses predictable SNMP community strings? (For people w/o net ops experience: the SNMP "community" is your shared SNMP password, and in competent networks will be approximately as unguessable as a login password).


Every single SNMP device I've ever encountered was reachable via the private interface from any other private NIC and used "community" as the shared secret.

Would it be very hard to sniff traffic and brute force the secret otherwise?


I'm pretty sure that if I have learned one thing, it's that the answer to any question along the lines of "are people really that stupid" is an emphatic yes, no matter the bar.

In any case, this is after all just the free stuff. If you believe that what they are offering is the real deal, of course, and with this release I'd put the probability at >0.


I scanned. The answer seems to be no, nobody is doing that.


So then this vulnerability is pretty much only useful (1) for persisting onto networks you've already compromised (2) and only in cases where you can apply consultative effort to discover the SNMP community string?

Or maybe there are lots of overseas networks where they enable SNMP and leave the community string "public"?

(Also: how batshit crazy is it that the ASA will let you use "public" as your community string, let alone default to it?)


> So then this vulnerability is pretty much only useful (1) for persisting onto networks you've already compromised

No! For example, at one place where I was employed, the switches had different VLANs: one for the private internal network, one with (direct) external internet access, one for VoIP telephones, one for printers, one for servers, and one for BYOD external consultants. Basically, compartmentalization: everything was firewalled, and every cross-VLAN access had to be separately allowed.

So this exploit (or, for that matter, any switch/router exploit) can be used not just for persisting, but for escalating privileges. Assume you have hacked a fax printer via the telephone line (hey, given that, I'm tempted to actually grab a modem and do some fuzzing with my fax printer...); you can then use its network connection to punch holes in the firewall and spread.


Yeah I guess, but I think that scenario is way less common than you think it is, only because almost nobody reliably segments networks. Once you're internal, you've usually got everything within a few hops.

Whereas persisting onto an ASA sounds like an actually widely useful capability! The ASAs don't get reimaged during incident response.


> Yeah I guess, but I think that scenario is way less common than you think it is, only because almost nobody reliably segments networks. Once you're internal, you've usually got everything within a few hops.

Indeed, yes, but entities large enough to afford dedicated teams to run hundreds of pieces of Cisco gear with proper segmentation etc. usually also tend to be those of most interest to any espionage outfit.

Last I heard, ex-employer switched from huge VLAN switches to dedicated, unconnected switches for each network part after Snowden. Given the leak here, I'd say their fear wasn't totally unjustified.


It seems like a massive oversight to implement segmentation like that and yet still allow the fax printer SNMP and telnet/SSH access to the firewall.

In environments I've seen, the network management network is the thing most likely to be isolated first.


You can learn the community string by monitoring traffic. The community string is included in each and every SNMPv2 PDU. SNMPv2 performs no handshake, so is vulnerable to trivial spoofing. Enterprises and ISPs reusing community strings on every device and never rotating them is not unheard of.
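
A minimal sketch of how little protects that secret: below is a hand-rolled SNMPv2c GetRequest built with nothing but short-form BER encoding (sufficient for packets this small; real stacks handle long-form lengths too). The community string travels as a plain OCTET STRING in every message, so anyone who can observe the datagram can read it.

```python
# Hand-rolled SNMPv2c GetRequest, showing the community string rides in
# cleartext in every PDU. Short-form BER lengths only (fine for tiny PDUs).

def ber(tag, payload):
    assert len(payload) < 128          # short-form length is enough here
    return bytes([tag, len(payload)]) + payload

def ber_int(n):
    return ber(0x02, bytes([n]))       # tiny non-negative integers only

def ber_oid(dotted):
    parts = [int(p) for p in dotted.split(".")]
    body = bytes([parts[0] * 40 + parts[1]])   # first two arcs packed
    for p in parts[2:]:
        chunk = [p & 0x7F]                     # base-128, high bit set
        while p > 0x7F:                        # on all but the last byte
            p >>= 7
            chunk.insert(0, (p & 0x7F) | 0x80)
        body += bytes(chunk)
    return ber(0x06, body)

def snmp_get(community, oid, request_id=1):
    varbind = ber(0x30, ber_oid(oid) + ber(0x05, b""))   # value = NULL
    pdu = ber(0xA0,                    # GetRequest-PDU
              ber_int(request_id) + ber_int(0) + ber_int(0) +
              ber(0x30, varbind))
    return ber(0x30,                   # SNMP message SEQUENCE
               ber_int(1) +            # version: 1 = SNMPv2c
               ber(0x04, community.encode()) +  # community: plaintext!
               pdu)

packet = snmp_get("public", "1.3.6.1.2.1.1.1.0")   # sysDescr.0
assert b"public" in packet             # the "password", right there
```

Sending it is just `socket.sendto(packet, (host, 161))` over UDP, which is also why the source address is trivially spoofable.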


Yes, obviously you can sniff community strings, but that only helps if you're speaking SNMP over the Internet.

Again the case I'm making is that this particular bug is really only useful for persisting onto networks you've already compromised.


I would not be surprised if there are companies & organizations out there using SNMP monitoring tools to monitor cloud hosted systems in the same on-prem instance they're monitoring their on-prem systems from.

I'm thinking specifically of my old company, which used Nagios to monitor a few hundred VMs on AWS in addition to the several thousand servers & all the networking gear running locally.


That's what puts the P in APT, after all.


Further supporting the hypothesis that these are implants (meant to persist access gained through other vectors), not external or pivoting exploits, is the fact that the other firewall exploits from the batch (for Fortinet firewalls, for instance) target web management interfaces that also aren't exposed on external interfaces.


the lede is @ the bottom of the announcement:

> Exploitation and Public Announcements

> On August 15, 2016, Cisco was alerted to information posted online by the Shadow Brokers group, which claimed to possess disclosures from the Equation Group. The posted materials included exploits for firewall products from multiple vendors. The Cisco products mentioned were the Cisco PIX and Cisco ASA firewalls.

> Source

> The exploit of this vulnerability was publicly disclosed by the alleged Shadow Brokers group.


You would have to have the ASA configured to accept SNMP packets from the IP you're sending them from (or maybe spoof the source address, if you knew it, since it would be a UDP packet), and you would also have to know the SNMP community string.

Chances are if you had all of this info you could cause all sorts of damage even without the vulnerability.


I'm not sure I follow: what can you do with the bits of information you named when this vulnerability is not present?

In any case, the scenario for this exploit is that you have access (possibly only restricted, no superuser) to an internet-facing machine and are looking to expand your reach into the internal network. That's why they are keen to exploit these Cisco boxes, they are a stepping stone to the wider network that might be otherwise firewalled off and a pretty permanent one at that.


> what can you do with the bits of information you named when this vulnerability is not present?

If you have SNMP write access you can effectively control the ASA. You could for example get the ASA to fetch a new configuration from your own TFTP server.

For example: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-...

Hence why SNMP is always protected by an ACL. If you have SNMP exposed then you already have big problems.


Not really. As the preceding comment points out: it's pretty unusual for an ASA to speak SNMP to the Internet. Not having SNMP publicly exposed is CISSP-level (read: elementary, and idiosyncratically specific) best practice.

Rather, my (uninformed) guess is that this implant is exclusively used to persist onto networks that have been compromised through some other vector. It's not a pivot bug.


If SNMP weren't publicly exposed (along with default community strings), we wouldn't ever see DDoS making use of SNMP amplification attacks.

As the senior network engineer at an ISP, I probably see this more than a lot of others here but recent history shows us that SNMP being publicly exposed is rather common.


On an ASA?

Obviously, as I mentioned downthread, even something as simple as a Shodan query can show you lots of public SNMP servers. But how many of them are firewalls?


Actually this is kind of a sad situation; I have seen some SNMP publicly exposed, and a lot exposed internally even when it's not actually being used, and with the default community.

It's one of those issues where a CISSP will evaluate the Impact × Likelihood metric and schedule a fix for 'next quarter'.

Similar to when the various big padding oracle web attacks came out; you'd have been in a much better position had you fixed the default error pages, but that's not a high-risk enough issue to prioritize a fix.


I'm not sure what I think about the first two parts of your comment, but I'm a nerd so I can't let the last sentence go: what do padding oracle vulnerabilities have to do with error pages?


My understanding was that at least one of the attacks from a while back only had an oracle because of differing error descriptions in the responses, whereas had custom error pages been defined for all errors, there would have been no oracle to use.

I could very easily be wrong though.


Nope. It's any behavior change in response to bad ciphertexts, which changes are easily inferred. For instance: Lucky 13 is a padding oracle that uses only very fine-grained timing information to discern errors.
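
Since the mechanics keep coming up: they are easy to demonstrate end to end. The sketch below uses a deliberately fake "block cipher" (byte-wise modular addition, chosen only so the example is self-contained and runnable); the attack loop itself is the standard CBC padding-oracle procedure and works identically against a real cipher, because it consumes nothing but the one-bit valid/invalid answer.

```python
import os

BLOCK = 8
KEY = os.urandom(BLOCK)

def _enc_block(b):   # toy invertible "cipher": byte-wise modular add
    return bytes((x + k) % 256 for x, k in zip(b, KEY))

def _dec_block(b):
    return bytes((x - k) % 256 for x, k in zip(b, KEY))

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext, iv):
    pad = BLOCK - len(plaintext) % BLOCK
    plaintext += bytes([pad]) * pad            # PKCS#7 padding
    out, prev = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        prev = _enc_block(_xor(plaintext[i:i + BLOCK], prev))
        out += prev
    return out

def padding_oracle(iv, ciphertext):
    """All the attacker learns: is the padding valid after decryption?"""
    prev, plain = iv, b""
    for i in range(0, len(ciphertext), BLOCK):
        blk = ciphertext[i:i + BLOCK]
        plain += _xor(_dec_block(blk), prev)
        prev = blk
    pad = plain[-1]
    return 1 <= pad <= BLOCK and plain.endswith(bytes([pad]) * pad)

def attack_block(prev_block, target_block):
    """Recover one plaintext block using only the oracle's yes/no."""
    intermediate = bytearray(BLOCK)
    forged = bytearray(BLOCK)
    for pad in range(1, BLOCK + 1):
        pos = BLOCK - pad
        for j in range(pos + 1, BLOCK):        # force known tail to `pad`
            forged[j] = intermediate[j] ^ pad
        for guess in range(256):
            forged[pos] = guess
            if padding_oracle(bytes(forged), target_block):
                if pad == 1:                   # rule out e.g. \x02\x02
                    forged[pos - 1] ^= 1
                    ok = padding_oracle(bytes(forged), target_block)
                    forged[pos - 1] ^= 1
                    if not ok:
                        continue
                intermediate[pos] = guess ^ pad
                break
    return _xor(bytes(intermediate), prev_block)

iv = os.urandom(BLOCK)
ct = cbc_encrypt(b"attack at dawn!", iv)       # 15 bytes -> 2 blocks
recovered = attack_block(ct[:BLOCK], ct[BLOCK:])
assert recovered == b"t dawn!\x01"             # last block + its padding
```

Lucky 13 is the same attack with the oracle replaced by a timing side channel, which is why "any behavior change" is the right way to think about it.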


That's what I was trying to say: you have access to a machine like a webserver whose internet-facing services you could exploit, and now you want to go deeper into the network.

Or do they usually only speak SNMP on a management port?


No, I'm saying you can't do that with this bug; that's what I mean by "not a pivot bug". It's a way of persisting inside access you got some other way. Because, again, people don't listen to SNMP on the outside interface.


And that is what the parent is saying. The "other way" is through the web server that is exposed to the internet and can reach the internal ASA.


Reaching "the internal ASA" isn't giving you access the webserver didn't already have, hence, persisting, not pivoting.


Just a quick thought... what about all the monitoring software that relies on SNMP?


Not only is it all internal, but in modern networks SNMP is usually run on specific dedicated backchannel networks, precisely because anyone who has done network security since 1994 knows that SNMP is terribly insecure.

It may be a little less rigorous because ASAs are often prem boxes in enterprise environments, not like tier 1 backbone components. But it might be a little more rigorous because ASAs are firewalls.


I'd say that not only is it all NOT internal, it is often not run on "specific dedicated backchannel networks". Ask anyone who was a victim of a DDoS that made use of SNMP amplification.

I would agree that it should be internal and should be run on internal-only interfaces/networks but the reality is that that very often isn't the case.

The average ASA is better off than most other devices simply because one must explicitly configure and enable SNMP on it. Too many other devices ship with it enabled, accessible from 0/0, with the default community strings set to "public" and "private". I believe the last abuse@ e-mail I received notifying me of a customer with a device exactly like that was on Saturday.


Just a quick google for: Remote grafana SNMP, Remote LibreNMS SNMP, Remote Observium SNMP, etc. leads to all kinds of good stuff.


I mean, you can literally just ask a site like Shodan to give you a list of publicly available SNMP interfaces. Do you see a lot of what look like ASAs on that list?


it's not that common to monitor your network infra from the internet, is it? surely you're piping SNMP over local interfaces?


The NSA supposedly listens to our traffic, so learning the community string and the management station's IP address is straightforward.


They capture at the fiber going out or in (or undersea); if you follow the other commenters here, SNMP traffic would never go over those links.


I mean never say never. You definitely CAN tell an ASA to allow SNMP on an external interface. But it's probably not where you end up by default.


That's true if you're speaking SNMP over the Internet. But how many ASAs actually do that?


ASAs are often used at the perimeter of small satellite networks using a local ISP's internet access, and then connecting back to HQ with IPSEC tunnels. I would guess that it is not uncommon, though bad practice, to centrally monitor SNMP on the external interface instead of over the IPSEC tunnel (which can be a little tricky to do).


Yeah, it's true I guess, and if you're using a random community string and this is NSA, I think we can all safely assume NSA knows every community string spoken anywhere on the public Internet.


I can count over a dozen easily, off the top of my head, without even looking into our customer database. I'm certain I'm not alone.


I wonder if Cisco has any legal basis for suing the NSA for developing/allowing the leak of this software?


I like the idea in principle. But the answer is no. In general you can't sue the US govt unless ... wait for it... US govt lets you sue it. Like say it does for some form of tort.


Am I missing something? This is like saying "remote code execution possible if the attacker knows your ssh password"


One case that immediately springs to mind is a network monitoring system exposed to the Internet in order to provide a live "status" page, either for the staff, users, or the public. That system would have SNMP access to the ASA firewalls (which are otherwise normally well-protected).

Gaining access to that public host would then grant you RCE to the protected internal firewall.

If the public host were hosted, say, at AWS/DO/etc., and used a VPN for access to those internal network devices, you've just gained access to the internal network itself.

(Note also that a) SNMP uses UDP and b) community strings, v1 and v2c at least, are plain-text. SNMPv3 has a bit more protection but it's not as widely used.)


So what you're saying is it would be trivial for, say, a state actor to tap into the fiber optic cables and snoop for these unencrypted community strings going over UDP? And then to use them to gain access to internal networks?

Well fuck, NSA was inside pretty much any corporate network they wanted then.


Pretty much. See previous disclosures discussing how the NSA captures, for example, configuration files for Cisco devices passing over the Internet in e-mail, TFTP, etc. If you've sent a community string over the Internet, it's quite possible that they have it too.


Edit: I see what you're saying now: if the NMS is in the cloud for some reason, the attacker could get the community string, since it's sent in clear text. Really glad one of the first things I did when I got my new job was to get us on SNMPv3.

Wouldn't the firewall be just as exposed to the internet? After all, that's what its job is: to protect the internal network from the external. In your scenario the attacker would first have to hack the NMS to get the community string for the firewall running SNMP.


Someone correct me if I'm wrong, but isn't the SNMP community string sent in plain text? If that's the case (and it certainly used to be the case), and someone compromised the LAN they could theoretically sniff the community string and use this exploit to own the appliance.

SNMPv3 supports different, more secure authentication methods, but a lot of organisations continue to use SNMPv2 because it is a simpler, more streamlined setup.

Plus the appliances support different user levels, so you'd still potentially be able to use this to escalate access.
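
To make the sniffing point concrete, here is a minimal illustrative parser (hypothetical helper name, short-form BER lengths only, which is what these small PDUs use) that pulls the community string straight out of a raw SNMPv1/v2c datagram. The hex is an example GetRequest for sysDescr.0:

```python
def community_from_snmp(packet):
    """Extract the community string from a raw SNMPv1/v2c datagram.
    Assumes short-form BER lengths, as these small PDUs use."""
    assert packet[0] == 0x30            # outer SEQUENCE
    i = 2                               # skip tag + 1-byte length
    assert packet[i] == 0x02            # version INTEGER
    i += 2 + packet[i + 1]
    assert packet[i] == 0x04            # community OCTET STRING
    length = packet[i + 1]
    return packet[i + 2:i + 2 + length].decode()

# An SNMPv2c GetRequest for sysDescr.0, as it appears on the wire:
raw = bytes.fromhex(
    "3026020101040670"   # SEQUENCE, version = 1 (v2c), community...
    "75626c6963a01902"   # ..."public" in the clear, then GetRequest-PDU
    "0101020100020100"   # request-id, error-status, error-index
    "3010300e06082b06"   # varbind list, varbind, OID...
    "0102010101000500"   # ...1.3.6.1.2.1.1.1.0, value = NULL
)
assert community_from_snmp(raw) == "public"
```

Point whatever capture mechanism you have at UDP/161 and this is all the "cryptanalysis" required.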


SNMP should not allow remote code execution under any circumstances.


That's how attack surfaces work. Yes, it shouldn't allow it. But SNMP involves that pesky ASN.1 encoding, often implemented by cheap contractors or by people who don't care much about security. So it allows for various buffer overflows and timing attacks; it is basically a large hatch you can open to start jiggling all the gears in there and see what gives.
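
A toy illustration of that hatch (hypothetical helper names; in a C decoder the same logic error becomes an out-of-bounds read or write, whereas Python merely truncates the slice):

```python
def naive_tlv(buf, i=0):
    """Trusts the attacker-supplied length field: the classic bug."""
    tag, length = buf[i], buf[i + 1]
    # In C this would read `length` bytes past the end of the packet;
    # Python just silently returns a short slice.
    return tag, buf[i + 2:i + 2 + length]

def strict_tlv(buf, i=0):
    """Checks the declared length against what actually arrived."""
    tag, length = buf[i], buf[i + 1]
    if i + 2 + length > len(buf):
        raise ValueError("declared length exceeds packet size")
    return tag, buf[i + 2:i + 2 + length]

# An OCTET STRING claiming 64 bytes of payload but carrying only 4:
evil = bytes([0x04, 0x40]) + b"oops"
print(naive_tlv(evil))     # happily accepted
# strict_tlv(evil)         # raises ValueError
```

Every one of the dozens of length fields in a single SNMP message is a chance to get that check wrong.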


Yeah, I've been scratching my head over this as well.


Cisco is maintaining way too much proprietary code while laying off people. Wow, that's a recipe for success...


The bit-rot struggle is real. XD


The link ( https://tools.cisco.com/security/center/content/CiscoSecurit...) hits as 404 right now...

HTTP Status 404 - /security/center/content/CiscoSecurityAdvisory/cisco-sa-20160817-asa-snmp

type Status report

message /security/center/content/CiscoSecurityAdvisory/cisco-sa-20160817-asa-snmp

description The requested resource is not available.

Apache Tomcat/7.0.54


Cisco is running an old version of Apache Tomcat and the links on their security pages are dead? Shocker!

They had so many good things to say about their security-mindedness in this post; they even explained Python's pexpect, and I can't imagine any parallel universe where that somehow connects to the security of Cisco devices.


Everybody's fired.


I see, so this is why they laid off so many employees today.... :P


Yup, I mean since when doesn't correlation mean causation!


Netcraft would be able to confirm if only their Cisco gear wasn't pwned.



