The Triton malware is murderous and spreading (technologyreview.com)
116 points by jchrisa on March 12, 2019 | 61 comments



From https://www.fireeye.com/blog/threat-research/2017/12/attacke...

>The targeting of critical infrastructure to disrupt, degrade, or destroy systems is consistent with numerous attack and reconnaissance activities carried out globally by Russian, Iranian, North Korean, U.S., and Israeli nation state actors. Intrusions of this nature do not necessarily indicate an immediate intent to disrupt targeted systems, and may be preparation for a contingency.

This reeks of Stuxnet 2.0


" We have your critical infrastructure by the balls, you have ours. We would both be MAD to trigger anything"

Is that the planned situation here?


That was my thought exactly.


At some point, we're going to start seeing an internet connection not just in terms of the benefits, but in terms of the liabilities too. I really would have thought that by 2019 we'd be there with industrial control systems, but apparently not.

One wonders if the governments of the world wouldn't be well advised to go ahead and hack a couple of bits of their own critical infrastructure a couple of times and horribly break it, just to make the point, before a bad actor hacks all the infrastructure. That visibly has huge costs, but it's not clear that the hidden costs of just blithely letting people keep hooking up critical stuff to the Internet aren't orders of magnitude higher.

And by no means could such a result be called a "Black Swan", because that it's going to occur is perfectly predictable. It's only a question of when.


This reminds me of all the printers and other peripherals I see that are connected directly to the internet; even the folks who maintain them sometimes aren't aware that these things are a potential [I daresay likely] attack surface.


I'd have thought we have abundant evidence already that bad consequences will have little to no impact on behavior as long as the responsibility can be dumped onto someone else.


An air gap won't even save you. See also: Stuxnet.


It decreases the attack surface and delays the attacker getting feedback from the system. Attacks take longer and are more likely to be uncovered.


Are attacks on air gapped systems more likely to be uncovered though? Having it isolated makes auditing/alerting harder too, and could instill an overconfidence in the security of said system.


On the other hand, it's easier to spot irregularities; the auditing is simpler because there isn't any variance.


Agreed! And there are many other methods to get around air gaps, too.


[flagged]


Stop posting this garbage


Redundant Safety PLCs run the same program in parallel in lockstep, and if they get different results this triggers an error. I think Triconex in particular requires 2 out of 3 controllers to agree.

It is odd that the attacker tried to modify the program in the PLC configured this way. They should have known it would cause a noticeable disturbance.
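For anyone unfamiliar with that voting scheme, here's a rough sketch of the 2-out-of-3 idea (purely illustrative; the channel names and tolerance are made up, this is not Triconex's actual implementation):

    def vote_2oo3(a, b, c, tolerance=0.0):
        """Return a value only if at least two of the three channels agree."""
        for x, y in ((a, b), (a, c), (b, c)):
            if abs(x - y) <= tolerance:
                return (x + y) / 2          # two channels agree; use their average
        raise RuntimeError("no two controllers agree -- trip to safe state")

The point being that a modified program on one controller should immediately disagree with its peers, which is why tampering with a 2oo3 setup is so noisy.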

The Schneider Quantum PLC literally runs a Pentium 166 or 200, and there is a steady string of firmware and operating system (VxWorks) updates. We had one from 2006 that would simply stop communicating if it was plugged into a Cisco switch from 2016.

A zero day in VxWorks, which is the operating system for a large swath of controllers, would be pretty bad.


Agreed, but it wouldn't even necessarily take a VxWorks zero day. Having worked in the industry as a security professional, I can tell you you'd be amazed at how many embedded devices ship with VxWorks debug ports open and listening, with default (or easy) passwords. The debugger is basically a C interpreter, so an attacker has totally pwned the box if they know what they are doing and get to that port.

Misconfiguration by developer teams happened all the time. Usually somebody just forgot to build with debug disabled, but sometimes we'd see an engineer leave it on intentionally to make debugging in the field easier for him :facepalm:


I don't want to call anything without dedicated gates and hard traces a 'PLC'.

You have a computer there; something that isn't continuously integrating results via hardware but is rather emulating that in software.

The future probably has more of those systems than what I think of when I hear PLC, and maybe that industry has loosened the terms since I was in college, but it's important to call tools what they are so that their shortcomings are obvious.


As you know, PLC is just a programmable logic controller. All computers are, by definition, that. Perhaps you're referring to the 'gated logic' programs that exist? To me they're just like the old punch cards, but with the purpose of controlling a contactor -- they're very simple systems indeed.

However, it's also not hard to incorporate hardware watchdogs into industrial systems that check for properly running software (& vice versa). I've done them.

It might be time (if it doesn't already exist) for industrial networks to upgrade to newer security practices though... (e.g. code signing, encrypted networks, changes to fs, ...)
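To make the watchdog point concrete, here's a minimal sketch of the usual pattern (the device path, interval, and health checks are placeholders; real industrial watchdogs are often dedicated chips rather than the Linux-style watchdog device shown here):

    import time

    WATCHDOG_DEV = "/dev/watchdog"   # placeholder: Linux-style watchdog device

    def healthy():
        # placeholder checks: sensor values sane, control loop met its deadline, etc.
        return True

    # Pet the watchdog only while the software checks out; if we stop writing,
    # the hardware times out and forces the controller into a safe reset.
    with open(WATCHDOG_DEV, "wb", buffering=0) as wd:
        while True:
            if healthy():
                wd.write(b"\0")
            time.sleep(1)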


Upgrading control systems in a plant is both costly and risky. You incur significant downtime to replace expensive equipment that has a really long lifetime. And then shaking down the new systems is going to cause additional costs and likely also a temporary decrease in the quality of the plant's output. This is why the industry is basically stuck in a state where the most widespread standards originated in ancient, sometimes even analog, times and have had tons of extensions tacked on in ways that preserved compatibility. So all of these devices operate on pure trust. Bad input is attributed to failures rather than to deliberate malicious actors inside the system. Nothing is authenticated. None of the field bus systems I am familiar with could be upgraded to incorporate that kind of distrust between components.


I haven't worked with it yet, but I believe Schneider's M580 PLC even supports authentication!

It is crazy that a Quantum or M340 PLC on an Ethernet network basically has unauthenticated DMA. Any device on the network can read or write to any addressed memory using the dead-simple Modbus protocol, and there is some more complicated protocol for reading and writing unaddressed memory.

I don't think Allen-Bradley is any better, as I don't recall ever having to specify any credentials or any other means of restricting which clients could connect and write to the PLC.
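To illustrate just how little stands in the way, here is roughly what a Modbus/TCP "read holding registers" request looks like on the wire: a handful of packed bytes, no credentials anywhere (host, unit ID, and addresses below are of course hypothetical):

    import socket
    import struct

    def read_holding_registers(host, unit_id, start_addr, count, port=502):
        # PDU: function 0x03 (read holding registers), start address, register count
        pdu = struct.pack(">BHH", 0x03, start_addr, count)
        # MBAP header: transaction id, protocol id (0), length of what follows, unit id
        mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
        with socket.create_connection((host, port), timeout=3) as s:
            s.sendall(mbap + pdu)
            resp = s.recv(260)
        byte_count = resp[8]   # response: 7-byte MBAP, function code, byte count, data
        return struct.unpack(">" + "H" * (byte_count // 2), resp[9:9 + byte_count])

    # e.g. read_holding_registers("192.0.2.10", 1, 0, 10) -- anything on the network can do this

Writing is no harder: function 0x06 takes an address and a value in the same unauthenticated frame.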


All de facto Modbus implementations that I am familiar with use virtualized register banks that map to higher-level parameter accesses, including input validation. So the shenanigans you should be able to do with them are somewhat limited. But there is no authentication at all. This was designed at a time when the notion of a bad actor messing with a control system had not even been invented yet.


Sounds like the definition has changed since you were in college:

'A programmable logic controller (PLC) or programmable controller is an industrial digital computer' [0]

PLCs that are computers with digital processors have been around since about 1984. It is exceedingly rare for any digital equipment more than 30 years old to still be in service. The equipment may still be working fine, but nobody is around who knows how it works, how to program it, or what to do if it stops working. A real con for digital and a pro for mechanical systems!

Of course all of the analog systems that preceded the digital ones have been ripped out too. Nobody could be bothered to learn how they worked, parts became scarce, and digital is so much cheaper and faster to develop and own!

[0]: https://en.wikipedia.org/wiki/Programmable_logic_controller


Maybe I'm misremembering 'PLD', where the D was for 'device', but a single letter of differentiation, and one that is pronounced similarly, is asking for this kind of error in communication/memory.


Speaking as a former embedded engineer now in the infosec space, I think a zero day in VxWorks would be trivial to find. It was used because it is easily hackable (originally in the fun sense, now in the bad sense).


https://twitter.com/SarahTaber_bww/status/110525655715412787...

>I don't [think] I've quite articulated why I'm so critical of the tech industry. Tech isn't just software anymore. They're coming for ag, food, & manufacturing- & they're bringing a negligent attitudes towards risk & safety that they learned in the cushy world of apps.

And this malware is affecting industries with a strong incentive for safety; think about what that might imply about every other industry.


> I don't [think] I've quite articulated why I'm so critical of the tech industry. Tech isn't just software anymore.

She is picking the wrong target. Tech would do security if directed to do security. Business management doesn't care about security.

Until someone has to pay big money or do jail time for lack of security, this will not change.


Hierarchical management structures are part of the problem. Managers generally treat technicians as flunkies and don't value their opinions, and most technicians aren't willing to get yelled at or fired, so corporate systems select for the lowest common denominator instead of the highest common factor. It's not going to change under our existing system because capitalists don't give a shit about consumers or employees, and 99% of managers just want to get into the winners' circle.


Repeat after me: safety is not security. Ponder this for a while. Safety is about giving guarantees that the right response happens at the right time to prevent disaster. Security is making sure that whoever you are communicating with is who they claim to be and not an imposter.

Heavy industry cares a lot about safety. An exploding refinery is not in any company's best interest. But the systems they rely on have absolutely no notion of security. They have no way of distinguishing between true and spoofed input values fed into a control system.


By those definitions, a refinery whose control systems ensure it can never explode, but allow anyone with internet access to override anything, is insecure but perfectly safe. Yet the result is explosions and dead people. That doesn't make sense.


No, it does make perfect sense in a way. Safety in these terms means that things are extremely unlikely to go wrong by themselves. Interventions are simply not a part of that model.

There is some, but not a lot of, awareness in the industry that the implicit trust between components in these systems is insecure and ultimately jeopardizes their safety. It is too easy to spoof process values that are sane but do not reflect the true state of the system. The systems fundamentally lack any mutual authentication between their components.

Process measurement devices typically even offer a simulation mode in their firmware to do just that. At worst, all you have to do is to crack a simple password to gain access to this feature remotely.


Really good rant there


@SarahTaber_bww is a really good twitter follow for people who want to learn interesting bits about agriculture & tech.


> [The malware contained] an IP address that had been used to launch operations linked to the malware.

> That address was registered to the Central Scientific Research Institute of Chemistry and Mechanics in Moscow, a government-owned organization with divisions that focus on critical infrastructure and industrial safety.

Ironic; it sounds like they also have the job of subverting critical infrastructure and industrial safety.


I suppose the best way to improve critical infrastructure is to create an emergency that makes improvement a priority lol


Russian strategic doctrine dictates that you obtain control of the battlespace before the shooting breaks out.


This is definitely not the first time malicious software was implanted in industrial control safety systems. Here is an example from the Cold War (it caused the largest non-nuclear man-made explosion in history):

https://www.zdnet.com/article/us-software-blew-up-russian-ga...

The actual sabotage involved adding an integer overflow to valve control software, and making sure it took months to hit (so testing would miss it).
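The details of that incident are disputed, but the bug class is easy to illustrate: a narrow counter that silently wraps long after any acceptance test has finished. The numbers and valve logic below are purely hypothetical, not the actual pipeline code:

    # A 16-bit counter bumped once per minute wraps at 65,536 -- roughly 45 days in.
    def on_minute_tick(counter):
        return (counter + 1) & 0xFFFF

    def valve_setpoint(counter):
        # Any logic keyed off the raw counter behaves fine for weeks of testing,
        # then jumps when the counter wraps back to zero.
        return min(100, counter // 655)   # percent open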


I think people need to keep in mind that "disconnect it from the Internet, it shouldn't have been on the Internet" doesn't fix this. If the injection works from USB devices, then the typical field engineer is not going to scrub their USB drive before downloading the field upgrade. Almost everything worldwide now uses USB as a field-upgrade path. Maybe as a cost-cutting and simplification measure this was OK, but the risk side? Way, way above the benefit (in my opinion).

What mitigates this (if anything does) is signed code on media you have to work harder to program. Rather than a USB device, this should be some form of media which doesn't present as a bootable device to a BIOS/UEFI. The field unit should have signature checks over images based on PKI. This is what a lot of things do, but somehow it seems not the ones which matter here?

Field upgrade by Kermit or XMODEM/YMODEM would be better than this, in that narrow regard. The risk of an unexpected packet hitting the code path is lower if the code upgrade is reading a byte stream for a hash/signature check, compared to mounting a USB device, loading drivers, enabling HID mode...
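For the signature-check part, a sketch of what "signature checks over images based on PKI" might look like, using the Python 'cryptography' package and a detached RSA signature; treat the key format, padding choice, and file layout as assumptions rather than what any particular vendor actually ships:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def image_is_trusted(image_path, sig_path, pubkey_path):
        """Accept a firmware image only if its detached signature verifies."""
        with open(pubkey_path, "rb") as f:
            pub = serialization.load_pem_public_key(f.read())
        with open(image_path, "rb") as f:
            image = f.read()
        with open(sig_path, "rb") as f:
            sig = f.read()
        try:
            pub.verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False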

I deliberately avoided working in engineering contexts where the risk was above my comfort factor. It ruled out industrial process control, health, civil engineering and a host of fascinating fields, but I was just too worried about the liability side and my own competency to work in these areas.

I did not foresee (inter)net technology becoming so critical it exposed all of these risks, in my core competency. I still feel inadequate to these risks, 37 years later.


Industrial operations are going to have to start giving a damn. A lot of them just don't right now. Most of the ones I've been in are an amalgamation of devices and software spanning the last thirty years. The number of XP boxes still controlling vital systems while being connected to the internet is insane.


> "...likely through a hole in a poorly configured digital > firewall that was supposed to stop unauthorized access. .."

'Every' penetration tester I talk to says that this is what they find all the time: actual 'reality' within networks does not align with assumed network policies or topology.

But I don't talk to that many. Is this really the case? We put great care into network architectures and policies that define network segmentation, isolation, and other strategies to harden and protect the network. But those policies are not implemented properly, or over time their technical enforcement isn't guaranteed?


Yeah, I don't think I've ever seen an 'airgapped' network that was actually airgapped.

About half the time no discernible effort was ever put into airgapping and it was only ever a paper goal. Most of the rest of the time it started out configured reasonably but either drifted as business needs changed ("Chloe, get me a port!" is a pretty common joke) or someone just didn't realize it was special and configured it badly.

The rest of the time you just stack up edge cases: bad management credentials, forgotten management interfaces, canned router or switch exploits, broken q-in-q implementations, etc. The list is endless. And all that's without getting someone authorized to carry you onto the airgapped network, which happens facepalmingly often.


Yeah, it's pretty common. The network admins don't necessarily have security training, so they might not understand the reasoning behind the recommendations from the security team. Ideally they'll work together, but most of the time our recommendations get ignored or implemented incorrectly. For example, after one test I performed on a Fortune 500 company, we recommended that they have separate VLANs for different parts of the company, because employee workstations could connect to management interfaces on network hardware. The networking people just created separate subnets and called it good, even though every subnet could still talk to every other subnet.
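A quick way to see the difference is to test whether a workstation can still open connections into the management plane. A rough sketch (the subnet, ports, and timeout are made-up values), run from an employee VLAN:

    import socket

    MGMT_SUBNET = "10.20.0."        # hypothetical management subnet
    MGMT_PORTS = (22, 443)          # SSH / HTTPS management interfaces

    def reachable(host, port, timeout=1.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    leaks = [(MGMT_SUBNET + str(i), p)
             for i in range(1, 255) for p in MGMT_PORTS
             if reachable(MGMT_SUBNET + str(i), p)]
    print("management interfaces reachable from this workstation:", leaks)

If segmentation were actually enforced, that list would be empty no matter how the subnets are drawn.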


Instead of having Internet connectivity 24/7 for IoT devices or critical infrastructure, why not have a small window for things like updates and so on, but be physically air-gapped the rest of the time? The window doesn't have to be at the exact same time either: if you need 20 minutes to download and apply updates once a week, then you can start that 20-minute interval at any time on whichever day. The air-gapping could also be done using analog means or another network that isn't connected to the Internet.

The best solution would be to be air-gapped 24/7, but in cases where that is not possible, there are other viable and more secure approaches than being online 24/7.
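A crude version of that update window is easy to script: bring an interface up, run the update job, and tear the interface down again. The interface name and update command below are placeholders, and a real deployment would want a hardware switch rather than purely software control:

    import subprocess

    IFACE = "eth1"                                   # placeholder: the only path to the outside
    UPDATE_CMD = ["/usr/local/bin/fetch-updates"]    # placeholder update job
    WINDOW_SECONDS = 20 * 60

    subprocess.run(["ip", "link", "set", IFACE, "up"], check=True)
    try:
        subprocess.run(UPDATE_CMD, timeout=WINDOW_SECONDS, check=True)
    finally:
        # Whatever happens, close the window afterwards.
        subprocess.run(["ip", "link", "set", IFACE, "down"], check=True)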



What causes some of the finest hackers to come from Russia? Is it attributable to their education system? Comparatively, I don't see as many hackers coming from other countries. I don't mean it in a bad way. Just curious.


Excellent mathematics in secondary schooling and lots of folks with degrees but not high paying jobs is my guess.


I know that this is tinfoil-hat territory, but isn't the problem behind the recent Boeing 737 MAX crashes related to software issues? It would be scary. I'm referring to this article: https://www.businessinsider.com/boeing-737-max-receive-updat...


You're right to be scared of course - a software bug in a safety-critical system is a bug in a safety-critical system, no matter if it happens in a plane or in some piece of critical infrastructure. And yes, they have now patched this flaw, but most likely it's still a pile of old FORTRAN code or something like that, which they're now stuck dealing with in some way. The notion that these sorts of systems are somehow less prone to having serious bugs in them is being revealed as a dangerous delusion.


No software is perfect, but the aerospace industry has standards for software validation/verification that mean airplane software is in general less bug-ridden than comparably complex low-level software written for less risk-averse industries. Hard to say whether that applies to any particular chemical or energy plant as well.


I don't agree with this comment, nor do I find this speculation particularly interesting, but it's polite and on topic, and shouldn't be downvoted.


Yes, but there's been no suggestion of malware introduced into the 737 MAX control software, as far as I've heard.


Mobile platform apps run in considerably stricter environments than do desktop apps.

I'm wondering why MS has not come out with a similar kind of Windows, wherein every app is effectively sandboxed.


> Mobile platform apps run in considerably stricter environments than do desktop apps.

Mobile phones (Android) are full of malware; most only get patched for a couple of years at best, and those patches are often late and infrequent. Stricter != safer.

> I'm wondering why MS has not come out with a similar kind of Windows, wherein every app is effectively sandboxed.

That would be Windows 10 with its included app store, but it looks like its limitations led to failure.


Strictness definitely means safer. Android apps written in Java cannot wipe the device or run amok and do whatever they want. The opportunity for malware is considerably reduced if the API attack surface is limited.

Security measures on Win10 are superficial in the sense that any app compiled to the platform can essentially do whatever it wants.

It would, I think, take something quite a bit more fundamental than the app store or signatures -- effectively a totally new OS architecture.


More than 17 years ago, Chairman Bill sent out his infamous memo on "Trustworthy Computing".

https://en.wikipedia.org/wiki/Trustworthy_computing#Microsof...

https://www.wired.com/2002/01/bill-gates-trustworthy-computi...

Now, 17 years later, you're wondering if Microsoft shouldn't do better? Exactly how much time are you willing to give those clowns to clean up their act?


> In attacking the plant, the hackers crossed a terrifying Rubicon. This was the first time the cybersecurity world had seen code deliberately designed to put lives at risk.

This is no regular malware. This is war.


I don't know enough about the production of nuclear centrifuges to say for sure, but it seems probable that the damage intentionally inflicted by Stuxnet[1] may very well have put some lives at risk. Triton looks more like another step down this path than like a watershed moment.

Mike Hayden, 2012 [2]:

> We have entered into a new phase of conflict in which we use a cyberweapon to create physical destruction, and in this case, physical destruction in someone else's critical infrastructure.

Still, an alarming development.

[1] https://en.wikipedia.org/wiki/Stuxnet

[2] https://www.cbsnews.com/news/stuxnet-computer-worm-opens-new...


Don't forget about Crimean power outages before the Russian annexation. And the cyber breadcrumbs we've found in our dams and power stations. Oh, and I hear Venezuela is coming out of a 4-day blackout right as we're ramping up all our aid/regime-change talk...

About a year ago there was a major transformer that blew in downtown SF, knocking most of the FiDi offline. I didn't think much of it -- stuff breaks, and this was pre-PG&E fiasco -- but then I heard the same thing happened in NYC and another major city (maybe Seattle?) that same morning. Things break regularly, and there's always some 3-city combined probability function, but it still made me glance over my metaphorical shoulder.

Ideally we'll still be too scared to use nukes in the next hot war. Everything else that makes modern life bearable is fair game, though.

[edit]: LA was the third city, and all failures were traced to physical faults/aging infrastructure: https://www.snopes.com/fact-check/power-outages-la-sf-nyc Like I said, there always is a combined probability function, but point is we're gonna be doing a lot more glancing over our metaphorical shoulders the more we see stories like this.


If I wanted to blow a transformer over the internet, I'd pick one that would be regarded as aging, too.


Sure, you could make the argument that Iran and {some list of countries, including the US} are in a low-level state of war. The world isn't shedding much of a tear over Iran's nuclear program.

Targeting civilian infrastructure? With potential mass civilian casualties? That's not in the same league.


Stuxnet only damaged centrifuges; anybody that was hurt would have been hurt because of a freak accident. Triton's whole purpose is to kill people.


Just like the purpose of all American and Russian nukes.


This is a kind of alarmist paragraph though - the malware probably isn't intended to primarily or specifically kill or harm people.

If it's from a state it's probably intended to shut down industrial processing. That might require doing something that causes harm, but that's not certain and it's probably not the goal.

Other malware like Stux', and all the various intrusions into power infrastructure etc. that are always being talked about in the media, all share that same purpose - shutting down the logistical or productive capacity of a country, either in a very specific area like Israel v Iran re Nuclear or in a wider sense, like turning off the electricity.


It will be quite interesting to see what the cause of Venezuela's ongoing power blackout is. On the one hand the government of the country is deeply incompetent, but on the other shutting down power is a normal precursor to kinetic conflict and the US has been making a lot of war talk about Venezuela and has a track record of doing that exact thing in other places.



