Engineer admits he wiped 456 Cisco WebEx VMs from AWS after leaving (theregister.com)
123 points by swatkat on Aug 29, 2020 | 63 comments



I left one job to switch to another. A year later, my new job started using a cloud-based task management app. When I went to sign in, my 1Password auto-filled the credentials I'd used for the same app at my previous job, and there I was looking at all of my old employer's current projects and other confidential info. I called my old boss (who I got along with just fine), told him what happened, and asked him to please cut off my access immediately.

When you leave a job, it's in your own best interest to make sure that all of your access is removed. It's a lot harder for them to blame unexpected happenings on you if you can't even log into the thing. (Not that this happened here. I just wanted to point out a gotcha you might not have thought about.)

If you find out that they missed something, report it to them immediately and keep that paper trail demonstrating your good intentions toward them. Then hound them about it until they get around to fixing the situation. And for the love of God, don't ever, EVER log in "just to look around". Absolutely no good can come of that.


How would you know that they didn't disable your accounts without trying to log in?


You come up with a plausible story, possibly involving an automated password manager, and post it in a public forum where you mention that you have documented all the events and your honest behavior in the matter.


Heh, I'm not that clever.


Exactly the sort of thing a clever person would say...


You ask them if it was done with a paper trail. Don't try to log in to test your credentials out of good faith.


I think they meant ‘don’t log in, discover you have access, then browse and take no further action’. You can try to log in, you just need to inform someone immediately if you have access when you shouldn’t. That’s my take anyway.


That's what to do when it happens accidentally. It would also make sense to do so immediately after you left. They already made sure that you left behind all relevant keys, documents, and your work computer. It would be natural and in both parties' interest to ensure that all other credentials have been revoked as well in time.


Successfully trying to log in can already be a crime depending on your jurisdiction in my opinion.


You're right - I guess in that case it would be wise to ask your previous employer to provide proof that you're no longer able to access anything, and no longer liable. Many wouldn't offer that, but I'm not sure how else you could navigate that.


I think the common sense approach would dictate that a person's motive would need to have been shown to be nefarious for criminal proceedings to have any chance. I.e., intent. Without intent, I struggle to see any court move on this, but I'm no lawyer - just an engineer!


This article leaves more questions than it answers. Room-elephant number one: access being available after an employee has left is bad. That access remaining five months later is beyond the pale, unless the real story is that the employee created a backdoor. Barring a backdoor, there are further serious questions about the employee retaining this access, presumably without any employer-provided and controlled hardware (e.g. laptop, yubikey, or what-have-you).

Room-elephant number two: motive. The reported facts naively summarize as "oops, ex-employee blew up some stuff in prod, caused problems". <meme>But whyyyyy??</meme> There's no indication of specifics, and seeming denials of some obvious guesses: attempts at hacking (e.g. data exfiltration for profit, which are denied), ransomware, revenge, or anything else that would explain this behavior.

Further confounding everything is the bit where the new employer's response to these revelations is apparently "shrug".


I worked in consulting up until Covid. When I got laid off, my employer locked me out of every corporate system within 15 minutes. But every client who gave me VPN, AWS or other credentials didn't get notified.


Interesting angle. I wonder if the perp was employed by Cisco directly or was a contractor and Cisco wasn't informed when he changed employers.


To me it reads as if he was fast and loose with something and didn't really care whether it affected other systems, but didn't intentionally seek to damage them. That sounds like a hard situation to end up in, but there's so little real info it's hard to tell.

Was it a script on a personal machine that was connecting to an old account he didn't think would work? They say "deployed code", and that can be frighteningly easy to do in a cloud-centric workflow (and if it's old code, who knows what would happen).

Something like that would also explain his current employer's reticence to fire him. A mistake where you run something you don't imagine will even work, much less cause major problems, and it then does so because your prior employer forgot to remove credentials, is something that might be looked on with a bit more understanding (and a lot of schadenfreude about the other company's lax controls causing them major problems).


I've had to juggle personal and professional AWS accounts for a while; I could see someone being confused about which account they were on and accidentally wiping out some stuff. Who knows though.


I too am confused about motive.

Timing aside, I myself would have to have Malicious Hate in my heart, or some ethical/moral equivalent in my brain, to do active big-cost "fire in the hole" damage to a former employer.


Regarding 1, Cisco will definitely have some explaining to do to their customers and industry compliance bodies, but legally they are in the clear. The precedent has been set time and again that knowingly accessing a system that you know you shouldn't is enough to be considered a criminal act, regardless of how (in)secure it was.


> Cisco will definitely have some explaining to do to their customers and industry compliance bodies, but legally they are in the clear.

Violating numerous compliance regulations by leaving the accounts of a terminated employee active for months doesn’t put Cisco “legally in the clear.” Depending on the regulator they could be in for a good sized fine.


Biggest question for me would be why the employee still had access so long after terminating employment with Cisco.

A common piece of auditor evidence across many compliance frameworks is whether employees have access proportionate to their role (which is naturally highly subjective), but also proving that access is revoked when employees leave the company. This seems like an outright failure on Cisco’s part.

Hopefully they’ve learned from this and put effort into enhancing their identity governance situation.


I knew someone who worked at WebEx in their FedRAMP environment performing vulnerability remediation, and most of the work was outsourced to China because the founders are from China. They posted screenshots of a conversation with their teammate saying that, because of the convoluted process to install packages, they had concerns that they were installing some sort of backdoor. They reported this and had investigators constantly checking out their LinkedIn profile for months.


About 5 years ago I had [contractor] job offers from Cisco and Google. I came away with a negative impression of Cisco; I still don't see that changing any time soon.

I say this without rosy glasses about Google.


I can't find any place explicitly saying this was done maliciously. A theory: this could also be a really bad accident where (for example) he works without a company-provided computer, didn't clean out his old AWS profiles from his previous company, and ended up deleting resources from the wrong account.


Considering this, it also seems negligent not to provide computers to your employees. Employees are then free from worrying about leftovers of the employer's secret sauce and credentials on their personal machines, and employers eliminate a big source of trouble. It would pay off very quickly if even one big outage like this gets prevented.


While yes, this is a problem for several reasons (they should've taken care to clean any company IP off the laptop).

But the biggest question, as others have already said, is: why weren't his credentials revoked after he left?

I can understand this at smaller companies, but Cisco has no excuse - they have enough people around that there are surely multiple people whose job it is to ensure that credentials are tied to a person, and that after a person leaves they're all revoked on anything approaching a production/customer-facing environment.


Especially 5 months after being fired. Steam should have had time to cool off.


I don't think he did this on purpose. It's probably the classic "ran with the wrong credentials" story of one really unlucky, negligent fellow.


That seems like a lot of resources to delete... It's "possible" but I'm not sure I find it plausible. Is there some magic terraform or CloudFormation that just destroys everything? And a Cisco engineer with devops-like experience wouldn't at least sniff around before running it? Doesn't seem realistic.

Cisco should wear this too, though; this is shockingly negligent. The only reason I can think of suggests a lot more problems and likely noncompliance with regulations and standards I'm sure they claim to comply with.


I was at a large company some time ago that had a script to delete any instances not tagged with a cost centre (from the testing account - they had planned to eventually run it in production but that was still a ways off).

One day an AWS API had an outage causing it to return an empty array for the tags list; the script deleted over 500 in-use instances.
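The sort of sanity check that would have caught that, as a rough sketch (boto3; the "CostCentre" tag name and the bail-out rule are made up for illustration, not their actual script):

    import boto3

    ec2 = boto3.client("ec2")

    def untagged_instances():
        """Return running instance IDs missing a CostCentre tag, plus the fleet size."""
        victims, total = [], 0
        paginator = ec2.get_paginator("describe_instances")
        for page in paginator.paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        ):
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    total += 1
                    tags = {t["Key"] for t in inst.get("Tags", [])}
                    if "CostCentre" not in tags:
                        victims.append(inst["InstanceId"])
        return victims, total

    victims, total = untagged_instances()

    # If the API claims *every* instance is untagged, the tag data is far more
    # likely to be wrong than the fleet. Refuse to terminate anything.
    if total and len(victims) == total:
        raise SystemExit("All %d instances look untagged; refusing to delete." % total)

    if victims:
        ec2.terminate_instances(InstanceIds=victims)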


I built an automated VPS system that would do things like create new VPSes, install various pieces of software, delete test VPSes, etc.

I kept asking the sysadmins to create a limited-access account for my testing so that it flat-out couldn't delete existing customers' VMs. I would walk into the office of the lead for that team once every few months and make the request again.

Until one day a bug in the VPS automation accidentally deleted a customer VPS, thinking it was a failed deploy. Then they finally got around to giving me a limited account for dev/testing work.

It's scary how often this stuff falls through the cracks, even when employees KNOW it can happen.


Tip: Set the termination protection flag on important EC2 instances to prevent accidental deletion.
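For example, a minimal sketch with boto3 (the instance ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Turn on termination protection; TerminateInstances calls will now fail
    # until the flag is explicitly cleared again.
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",
        DisableApiTermination={"Value": True},
    )

Note that this only blocks API-initiated terminations; it doesn't stop an Auto Scaling group from terminating an instance during scale-in.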


Classic. In the professional leagues, whenever you do something like this it's properly vetted, and even then you want to bake a velocity check into it (i.e. don't erase more than 5 instances at a time). I can't believe that if you're expecting to see 3 instances go away but the tool says it's going to do 500, you're going to say: sure thing boss, go ahead.
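A minimal sketch of that kind of velocity check (boto3; the cap of 5 is just the arbitrary example from above):

    import boto3

    MAX_TERMINATIONS = 5  # arbitrary cap for illustration

    def terminate_with_velocity_check(instance_ids):
        # Refuse to act when the blast radius is bigger than expected.
        if len(instance_ids) > MAX_TERMINATIONS:
            raise RuntimeError(
                "Asked to terminate %d instances but the cap is %d; "
                "refusing without explicit human sign-off."
                % (len(instance_ids), MAX_TERMINATIONS)
            )
        boto3.client("ec2").terminate_instances(InstanceIds=instance_ids)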


He "deployed code" that caused the deletions. I can definitely come up with many scenarios where something automatically removes VMs.

Edit: And https://news.ycombinator.com/item?id=24320495 provides a terraform command that is claimed to be potentially very destructive.


Yes, there are tools out there that can wipe away everything: https://github.com/rebuy-de/aws-nuke

My money is on this guy just being negligent: he did not remove his old credentials and used whatever tooling he was using before without double-checking. It's extremely plausible he had this set up for a test account and ran it with the wrong credentials.

On Cisco's side this is bad all around. Revoke the credentials. Audit the credentials periodically. Don't allow direct production access even for engineers who work there unless it's a live production issue or a deployment that's going on (and even then you give them access to the tooling that does the deployment, not to run whatever they want with *:* permissions).
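As a sketch of what "audit the credentials periodically" could look like in practice (boto3; the 90-day cutoff is an arbitrary assumption, and the actual deactivation is left commented out):

    import boto3
    from datetime import datetime, timedelta, timezone

    iam = boto3.client("iam")
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)

    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            for key in keys:
                last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
                # Keys that were never used fall back to their creation date.
                used = last["AccessKeyLastUsed"].get("LastUsedDate", key["CreateDate"])
                if key["Status"] == "Active" and used < cutoff:
                    print("Stale active key:", user["UserName"], key["AccessKeyId"])
                    # iam.update_access_key(UserName=user["UserName"],
                    #                       AccessKeyId=key["AccessKeyId"],
                    #                       Status="Inactive")

Catching the keys of people who have actually left still needs a feed from HR/offboarding; this only flags credentials that look abandoned.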


A reused CloudFormation template accidentally run on the wrong account could do an awful lot if it's set to tear down existing infra first so the env is clean.


Sure. Essentially "for X in list-stacks; delete-stack $X". In this case it's only VMs (instances, I guess?), so it could be any cleanup script like "terminate if not tagged", "terminate anything running over N days", or even just "delete-asg" on a group that happened to have 456 instances in it.


> Is there some magic terraform or CloudFormation that just destroys everything?

Terraform has a nasty habit of destroying what you have and then building what you want from scratch, instead of changing what you have into what you want. It needs to be used very, very carefully.


This paragraph is baffling:

>According to a court document, Ramesh is in the US on an H-1B visa and has a green card application pending. "Although he and his employer recognize that his guilty plea in this case may have immigration consequences, up to and including deportation, his employer … is willing to work with him regarding the possibility of his remaining in the country and continuing to work for the company," the document [PDF] says.

Why would you re-hire someone who quit and wiped your servers?


It doesn’t sound like this is what happened though.

> During his unauthorized access, Ramesh admitted that he deployed a code from his Google Cloud Project account that resulted in the deletion of 456 virtual machines for Cisco’s WebEx Teams application

It sounds like this may have been more accidental than malicious.


Pure speculation, but I wonder if he had GCP service account credentials sitting around on his laptop which applied terraform to the wrong project. terraform apply -auto-approve can wipe out a lot of infrastructure in a few seconds.


Most crimes require mens rea for conviction. That is, the prosecution has to prove beyond a reasonable doubt that you intended to do it. If it's really the case that he accidentally ran terraform with the wrong project/credentials, I doubt he'd accept the plea agreement.


HN has previously discussed the egregious power imbalance at play in plea agreements.


That would be amazing. And I could believe it.

Not properly setting up and configuring auth could result in long-lived auth tokens sitting around unnoticed.


Just a few lines above what you wrote:

> Sudhish Kasaba Ramesh, who worked at Cisco from July 2016 to April 2018, admitted in a plea agreement with prosecutors that he had deliberately connected to Cisco's AWS-hosted systems without authorization in September 2018

How does that sound accidental to you?


It's a plea agreement. It's quite common in the US to admit to crimes you didn't commit in order to secure a plea bargain.


Could just be very risk-averse plea-bargaining on Ramesh's part.


Yep. It may just be a fact that he ran terraform and it said to delete all VM instances, only he forgot he had his old credentials in the environment variables, and who would have ever expected them to work.

That said, the lack of details on this does leave a lot to be imagined - it's just as easy to read this as revenge, and that large companies don't bother to publicly shame people most of the time.


I believe his current employer (Stitch Fix) is willing to keep him employed, not his ex-employer (Cisco).


I can’t recall the deal with h1bs, but green card applications require that you haven’t committed any acts of moral turpitude, which would easily include this.

All work visas have a no-criminal-charges rule, so if this is a criminal case, I believe being found guilty puts him in the area of instant visa revocation.


Criminal vs. civil: afaik the former is gov. vs. ____. Moral turpitude: he deployed a project to GCP and it deleted VMs—not my idea of a moral failure. Generally I'd not fire people for honest mistakes, no matter the cost. You have to pay for it anyway, and now the person has hopefully learned and probably feels honored to work hard to be better.


It’s not a matter of firing - it’s the us gov terminating the visa, which honestly in this case seems reasonable


His current employer is actually Stitch Fix (not Cisco).


Because his skills are valued higher than his cost.


Well he’s good at saving money on VM’s


"Responsible for capex savings of over $1.4MM by implementing cloud computing cost savings tasks that reduced cloud computing resource usage by 20%"


one weird trick to save millions on your AWS bill.

cloud providers hate him


This feels like maybe he at some point discovered that his AWS access to the Cisco account was intact, got curious about it, and maybe even did some harmless things. Then, while playing around with GCP, he managed to run something that deletes stuff (Terraform maybe), but the credentials used were those of the Cisco AWS account, which wasn't what he intended - clearly just deleting stuff is not the smart thing to do when it will be recorded against his AWS credentials.

I think he is pleading guilty to the unauthorized access, which was intentional - but not to the deletion, which was unintended.


With that kind of sloppy security in place, Cisco is going to be easily ransomwared.


They already are. According to someone I know, coworkers at WebEx warned them that much of the work was outsourced to China and that they had concerns backdoors were being installed.


Ah yes, Cisco, well known for their good security practices. Such as this galaxy-brain manoeuvre: https://twitter.com/RedTeamPT/status/1110843396657238016


Phhhh, 6 months... I'd still have access to my first company's Google local directory console almost 14 years after I left (despite numerous messages on my part asking them to remove my access), if Google hadn't removed the listing altogether.


This is a really scary situation for an employee: maybe this case was a mistake or maybe it was bad intentions, but imagine you leave a company and your access should have been closed but actually wasn't. Your account, still active without your knowledge, might get hacked... and you could be held responsible?


It also means there was no orchestration for the VMs. The system should have recreated them one by one via health-check-triggered restore actions.
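For illustration, the kind of self-healing setup being described, sketched with boto3 (the group name, launch template, sizes, and subnets are all placeholders, not anything from the article):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # An Auto Scaling group replaces instances that fail health checks, so a
    # mass termination would be followed by automatic re-creation.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="webex-teams-workers",   # hypothetical name
        LaunchTemplate={"LaunchTemplateName": "worker-template", "Version": "$Latest"},
        MinSize=456,
        MaxSize=456,
        DesiredCapacity=456,
        HealthCheckType="EC2",          # replace instances failing EC2 status checks
        HealthCheckGracePeriod=300,
        VPCZoneIdentifier="subnet-aaaaaaaa,subnet-bbbbbbbb",  # placeholder subnets
    )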



