Vulnerable industrial controls directly connected to Internet (arstechnica.com)
145 points by wglb on Jan 29, 2018 | 99 comments



PLC manufacturers, and the controls community in general, are lagging twenty years behind technologically. It is very frustrating for a security-minded, computer-literate controls engineer to work with decade-old hardware and software.

In a company I worked at, there were PLCs controlling sensitive processes, including heaters for welding plastic, connected to the company network via Modbus-over-TCP. A visitor with guest access, nmap, and pymodbus could read AND SET the temperatures of any of the devices. A black hat could easily have started fires or leaked acid onto our production floor.
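
For a sense of how little that takes, here is a minimal sketch in C++ with raw POSIX sockets (not the actual pymodbus script; the PLC address, register, and value are hypothetical) of an unauthenticated Modbus/TCP "write single register" request:

    // Hypothetical illustration: rewriting a setpoint over plain Modbus/TCP.
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in plc{};
        plc.sin_family = AF_INET;
        plc.sin_port = htons(502);                          // standard Modbus/TCP port
        inet_pton(AF_INET, "192.168.1.10", &plc.sin_addr);  // hypothetical PLC address
        if (connect(fd, reinterpret_cast<sockaddr*>(&plc), sizeof plc) != 0)
            return 1;
        // MBAP header + PDU for function 0x06 (Write Single Register).
        // Note: no credential appears anywhere in this frame.
        const uint8_t frame[12] = {
            0x00, 0x01,   // transaction id
            0x00, 0x00,   // protocol id (always 0)
            0x00, 0x06,   // length of what follows
            0x01,         // unit id
            0x06,         // function code: write single register
            0x00, 0x00,   // register address (hypothetical setpoint register)
            0x01, 0xF4    // value 500, e.g. tenths of a degree
        };
        write(fd, frame, sizeof frame);
        close(fd);
    }

The protocol has no authentication field at all; anyone who can reach port 502 can issue this write.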


I mean, you're not wrong. PLCs are a digital technology evolved from the electromechanical computers of the '50s. PLCs are not just twenty years behind; they're really 60+ years of legacy.


It's true that PLCs use ladder logic, which is a representation of the relay logic of 60 years ago. There's nothing wrong with ladder logic. It's easy for electricians to understand and change. More importantly, though, it's easy to see the logic flow that makes that conveyor belt that could rip off an arm, or that 1000 HP fan, or that critical pump, or that critical valve, function. It's important that the safety interlocks, switches, timers, and sensor values that control that piece of equipment all be grouped together visually, to help avoid the frequent surprises (bugs) so common with text-based programming languages.

If you want to beef up the network security to the PLC, be my guest, but don't disparage ladder logic. It's the right tool for what it's used for.


True to a point.

The difficulty arises when trying to implement something in ladder that is too complex for simple Boolean operations, and would probably be much clearer if written in a few dozen lines of Structured Text (assuming an IEC 61131-3 PLC). Unfortunately, company policies (especially in North America) mean that engineers are often forced to use ladder, resulting in a ladder logic program which is inscrutable for both the electrician and the engineer.

What would be sensible is using the right tool for the job: ladder for simple Boolean operations, and ST for more complicated stuff. Tech schools should also teach electricians and technologists the basics of how to read structured text - they do this in Europe, but I don't think it's common in North America yet.


I use ladder only on Allen-Bradley PLCs, and then only for primarily Boolean logic. The AB ladder editor is much better than their function block editor.

Generally I prefer function block, especially in Schneider's Unity, since I find it easy to watch a process operate thanks to the way function block diagrams are animated. Function block also promotes code reuse by letting you create your own blocks.

I use structured text only when the problem can't be well represented in function block, or rather, when I haven't figured out how to represent it well in function block yet. It is much harder to observe the operation of ST, since the tag values aren't animated. And of course you can't use breakpoints on a running process without stopping it, which you usually don't want to do.


The company I work for is at the inscrutable-ladder-logic point. If our PLC engineer dropped dead, nobody would be able to pick up and go. It would take at least a month or two of software archeology for someone to come in and take over.


I will be the anti-ladder-logic person here. It is a good language for simple systems, but it breaks down as the system gets more complex. It's really only good as a glue layer between modules.

Also, the inherently binary and visual nature of ladder (and HMIs) makes version control impossible, or at least expensive. Many PLC workflows I have seen are basically just a directory of files that you hope someone has been keeping up to date. Maybe there are previous versions, maybe not. You will also have a hard time getting a code change history. Orchestration and configuration are also pretty primitive.

Mostly I have dealt with Rockwell, maybe Siemens is different.


I agree. Fortunately this tends to lead to PLCs driving relatively small systems. Like a small number of stations on a small rotating assembly line. That's probably a plus, actually.


Yes, PLCs do work well, especially since one typically just arranges to need relatively small PLC programs. But it is a lot like working with VHDL or assembly, and the only way in which you get to organize/abstract things is by organizing the actual hardware controlled by the PLCs into manageable units (e.g., a station on an assembly line, a small number of stations on a rotating assembly line, ...).


Agreed. Long live ladder logic!


It seems like there's an opportunity here: a little box with a Raspberry Pi or similar device and two CAT5 connectors, to be placed inline between the PLC and the network, as well as between the network and all other network-based displays, interfaces, and sensors. This would allow non-hardened equipment on a non-hardened network to be easily retrofitted with encrypted connections.

Edit: These already exist. [1] ;-( Oh well. Facilities just need to start using them throughout the site.

[1] https://www.lifewire.com/best-vpn-enabling-devices-4140254


Devices exist to filter out Modbus data that doesn't belong [0], but I have yet to see one in the wild. Whenever I have offered plant owners additional security, such as a VPN endpoint at the satellite ISP's base station, they have opted against it due to cost.

[0] https://www.tofinosecurity.com/products/Tofino-Modbus-TCP-En...


I've done this too. It's neat to watch a 'read-only' register on a PLC change because of a packet you sent it, or to overflow a buffer into an adjacent buffer remotely. It's an art that should have died long ago. PLCs need to be upgraded from C to C++, and devs should use standard C++ containers and remove all the fixed-size C arrays.


> PLCs need to be upgraded from C to C++

You are implying that C++ code is generally safer than C code, even in the hands of developers who introduce buffer overruns in the first place.

This is so wrong I don't even know where to start. In C++, I doubt they would ever get it out the door. Which would solve the security problem, incidentally.


Modern C++ should be safer. If you keep dynamic memory allocation to a minimum, use the STL heavily, and are OK with lots of copying, then it's harder to screw up, because there's not much manual freeing of stuff and buffer sizes are systematically checked.

Of course, if you're going to leave C, why not use real-time Java or some other safer language? C++ doesn't seem like a very ambitious sort of upgrade.


Software for setting parameters on hardware systems might not require complex logic. A language with formal verification would be a good idea in that case.


std::vector<whatever> in C++ is safe and is nothing at all like a fixed-size array of whatever in C. People who don't understand this are ignorant of the facts. They are totally different languages. C++ is much safer than C. Go ahead and try to overflow std::vector<whatever>. You can't. It's not possible.

Also, it is relatively straightforward to compile C code with a C++ compiler and adopt C++ constructs while throwing out all the old insecure C constructs. Doing this would vastly improve security, and it is achievable in a short period of time with little or no performance loss (it will probably even run faster).

Yet all I read about on web forums such as HN are unattainable suggestions, such as rewriting X in the newest non-standard, corporate-backed/controlled language that isn't nearly as tested nor as performant as ISO Standard C++. Maybe it's just me, but I find these suggestions to be totally out of touch with reality and really uninformed about the vast differences between C and C++.


> std::vector<whatever> in C++ is safe

I may be remembering this incorrectly, but vec.at(x) is safe (i.e. does bounds checking), but vec[x] is not safe.

I'm going to try some test code here and check, but I'm pretty sure vec[x] will happily overflow.

Edit: https://gist.github.com/tonyarkles/e702e4d5b530a9b7cd3fc9837...

Definitely overflows and segfaults while unwinding the stack (presumably while deallocating the stack-allocated vector)
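
The linked gist isn't reproduced here, but a minimal test along those lines (my own sketch, not the gist's code) behaves the same way:

    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v(4, 0);    // four elements, valid indices 0..3
        // v.at(100) would throw std::out_of_range: bounds are checked.
        // v[100] performs no check at all; the writes below are undefined
        // behavior and in practice scribble past the heap allocation.
        for (int i = 0; i < 100000; ++i)
            v[i] = 42;               // walks far off the end; typically segfaults
        std::printf("%d\n", v[0]);
    }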


You can't overflow a vector. It's not a fixed size array. It grows automatically as needed.

What your example shows is an attempt to access a vector element that does not exist (std::out_of_range). And, that is undefined behavior and is documented.

Also, that code is 95% C (not C++). Even the includes and prints are C. Pure, idiomatic C++ would use iostream rather than stdio.h, and a vector iterator to access the elements rather than looping over the vector (in C-like fashion) using an integer (which is unrelated to the vector) to access elements by index.
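
For reference, a small sketch of the idiomatic style the parent describes, which avoids raw indexing entirely:

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        for (int x : v)              // range-for: no index to get wrong
            std::cout << x << '\n';
    }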


> You can't overflow a vector. It's not a fixed size array. It grows automatically as needed.

His example shows the precise opposite: the vector didn't grow automatically. Instead, memory outside the vector bounds was accessed, i.e., the vector overflowed.

Yes, idiomatic modern C++ makes that less likely by reducing the use of explicit indexes. Even then, you can still overflow a vector with innocent-looking code.
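
One example of the kind of innocent-looking code that can do this (my guess at the sort of thing the parent means) is mutating a vector while iterating over it:

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        for (int x : v)              // the range-for caches begin()/end() once
            if (x == 1)
                v.push_back(4);      // may reallocate: the cached iterators now
                                     // dangle, and the loop reads freed memory
    }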


> that is undefined behavior and is documented

In C, buffer overruns are undefined behavior and are documented. Back to square 1.


But can the compiler warn about such undefined behavior? Because if it can't, it's not much better than C.


I just tried:

    g++ -Wall -pedantic vec.cpp
And got no errors or warnings.
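
For what it's worth, static warnings won't catch this, but runtime instrumentation will; AddressSanitizer, for example, aborts with a heap-buffer-overflow report:

    g++ -Wall -pedantic -fsanitize=address vec.cpp && ./a.out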


<Rant> Worse still, they have trouble even adopting object orientation. The whole area is stagnant in a Win95-programming kind of way. It's truly horrifying.

Worse still, even the people producing the newer tools that allow for OO do not really test those tools, so you develop in an environment where you are basically a beta tester on a budget (TC3, looking at you).

Nobody dares to move, so as not to break precious backwards compatibility and devalue the experience of "experts" who for several generations simply fell into every trap that state-chart-based design can provide. That means every project runs into complexity walls at the end; there are no automatic provers to check the state charts, and there are no unit tests (if I hear the poor excuse "but there is no hardware to test against" one more time...).

In my opinion, this industry is as ripe for disruption as it gets. If you could generate and update code from a high-level object and process description, prove reliability and speed, and allow for user code, the industry would throw money at you.

Half of the projects in this area are not on budget and not on time. There is actually not even planning for that; there's just a fudge factor, which is usually bigger than 2.0.

I'm so sick of the excuses. "You can't compare conventional coding and PLC programming." Yeah, you can't: one moved forward and the other got stuck at the lowest common denominator, thanks to lots of non-software guys trying to get a foot in. All these FB Lego players, with no clue what they do...

</Rant>


Working in embedded systems, also with a control background and software affinity, I find we have the same mindset problems in non-manufacturing contexts as well (any kind of industry that needs embedded systems).

>In my opinion, this industry is as ripe for disruption as it gets. If you could generate and update code from a high-level object and process description, prove reliability and speed, and allow for user code, the industry would throw money at you.

They won't, because they don't see the value in doing it right. An industry managed by mechanical engineers is not going to see how they're spending 100x as much as they need to for a worse product, because they don't see the value in software in the first place. For them it's just this nasty thing you need to throw a lot of money at until everything works. The mindset would need to shift to software being the main space in which problems are solved, because in 2018 it is. I guess this also turned into a </rant>


> They won't, because they don't see the value in doing it right. An industry managed by mechanical engineers is not going to see how they're spending 100x as much as they need to for a worse product, because they don't see the value in software in the first place.

This has largely changed in the automotive industry over the last few years. We still aren't quite there on getting all managers to spend on security preemptively, but at least the big honchos have now handed down the gospel that software is important.


I hear ya. The issue, though, is that the end product has to be at least readable, if not maintainable, by any half competent electrician, so the lowest common denominator (i.e., usually ladder logic) is as abstract as you can go. Also, most industrial control systems are "wide" rather than "deep", in the sense that almost everything is pretty trivial ('read inputs X and Y, set output Z if X and not Y') but there are thousands of X and Y. And finally, X etc. are tied intimately to the hardware, which by the time the site is actually built and all the vendor packages integrated is only loosely related to the original design. So while I absolutely agree that it could be improved on significantly, the end result will still look (on the surface) quite like what we have now.


> the end product has to be at least readable, if not maintainable, by any half competent electrician

You can get that by using Python or similar languages; you can get that by programming in something like a Haskell DSL and hiding the underlying libraries; you can get that by creating an actual DSL and interpreting it on the fly.

But C or C++ state machines manually encoded into structured code are one of the things that won't get you there.


Absolutely not. Most industrial automation engineers, let alone electricians, are unable to handle Structured Text (the standard IEC text-based programming language for PLCs), let alone Python. Haskell is not even on the same planet.

And a DSL is the exact opposite of what we're trying to accomplish: An electrician with no prior knowledge of the system needs to be able to read the code and understand which switches and sensors need to be turned on in order for that motor over there to start. Having to learn an entire new custom language at 3AM when you're losing $700,000 an hour in lost production is not an option.

(Edit: I should clarify that I'm talking about mining here. High end factory automation engineers may very well be more familiar with 'traditional' languages.)


The people I've worked with who are operating power plants can't read code. Perhaps they can read a schematic and could understand ladder logic. They are usually quite skilled mechanically though.

Even the automation professionals I work with whose jobs are to write and maintain PLC programs look at the 'structured text' PLC logic I've written (a constraint solving algorithm) and can't be bothered trying to understand it.

Controlling a physical system is different from programming computers. The simpler the logic is to read and understand the better. Direct and verbose is better than layers of indirection and abstractions.


The systems were wide, not deep. But by now they want Industry 4.0, meaning they want intelligently behaving machines: material flow that reroutes on failure, processes that control and adjust themselves, systems that check themselves for upcoming flaws. All parts are tracked over the network, and all guides to the machine have to be electronic.

And they want this with the conditions you described.


Is this in factory automation? I work in mining and everything is still very simple there, nothing more complex than a PID controller or speed ramps on some variable speed drives. I wish we could do some more advanced tuning, but the key criteria for anything we do is reliability, robustness (even in the face of sensor failures, everything is monitored and has manual overrides) and simplicity.


OOP isn't always needed. Most PLCs run very simple state machines where there really is no actual use for inheritance, or even encapsulation.

It is true, though, that the mindset is decades behind the web-oriented world.


Then again, the web-oriented world all too often comes across as architecture astronauts...


It's simple at the start. And sorry, but for the stuff I work on, that's usually not enough.

OO would allow for easier code reuse by composing objects out of pre-existing objects, something that is usually done by copy-paste today. OO would enforce encapsulation and prevent global periphery-fondling by several free-running state charts, where it's really tough to find out who flipped the switch, in what order, and caused a crash in 1 of 1000 runs of the machinery.

The list goes on... Sorry, but for my use cases the complexity is usually not completely avoidable.
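
As a toy sketch of the composition and encapsulation point, in C++ rather than any vendor's IEC dialect (all names hypothetical):

    // A reusable, encapsulated "function block": no free-running chart
    // can flip its output behind its back.
    class Motor {
        bool running_ = false;
    public:
        void start() { if (interlock_ok()) running_ = true; }
        void stop()  { running_ = false; }
        bool running() const { return running_; }
    private:
        bool interlock_ok() const { return true; }  // stub for real checks
    };

    // Composed from pre-existing objects instead of copy-paste.
    class Conveyor {
        Motor drive_;
        Motor brake_release_;
    public:
        void run()  { brake_release_.start(); drive_.start(); }
        void halt() { drive_.stop(); brake_release_.stop(); }
    };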


Did you ever have the pleasure of working with Siemens' SCL? I found it very frustrating to use, but overall a better experience than the Lego-block programming you usually see. Combined with your data blocks, you could at least get some structure into your programs.


I did, I did, and I used the TwinCAT (TC) equivalent.

The problem is, as with all state charts, that it quickly grows in complexity to a level where nobody has an overview. Are we the first industry to encounter this? No: chip designers have this all the time, the game industry has this all the time, every fucking industry using state charts has this, all the time. So how come we are the only industry taking a lot of overtime all the time?

<cicadas chirping, as they have for decades>

I have people whose VMs literally collapse under page-long state charts. So how about breaking them up: just have small state charts in FBs, and small state charts in FBs marshalling them?

Not happening. And they use assembler, not for a final tweak, but as the base language.

And usually, when the whole mess is collapsing in on itself like a black hole of bad design, the project manager calls in some external consultants, who should be happy to work on a project that is "that far along, it's almost done".

A castle made from dinosaur bollocks.


I think the big problem behind us being that slow is that you can't properly test changes. The initial programs get written under immense time pressure to get them into production, and what happens after that? Do you want to touch your production system and refactor it, and risk causing hundreds of thousands in damages from production loss? You pretty much need to nail your initial design, and you can't afford the usual "we will take care of our technical debt later".

I hear you saying, "but you have debuggers/simulators!" Anyone who has actually worked in the field will know they are pretty much useless for big changes on machines that talk to hundreds of other systems, sensors, motors, etc. At least that's what my experience was, and nobody has a backup factory to test changes on. In the tight windows we had during production stops (Sunday nights, mainly), we were busy enough maintaining everything else without having fun on a PLC.

I completely agree that this is all a big mess and that people don't tend to write maintainable programs; they just want their machines running. This is where everyone needs to be trained and improve.


Manual simulators don't give much benefit; manual testing in simulation is only marginally better than manual testing on hardware.

You need automated tests in simulation, preferably based on data collected from real systems. Ideally also hardware-in-the-loop testing.
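
A minimal sketch of what that could look like, assuming the control logic can be compiled and exercised off-hardware; the P controller, plant model, and tolerances here are all invented for illustration:

    #include <cassert>
    #include <cmath>

    // Stand-in for the real PLC logic under test: a trivial P controller.
    double controller(double setpoint, double pv) {
        return pv + 2.0 * (setpoint - pv);
    }

    int main() {
        double pv = 20.0;                       // simulated process value
        const double setpoint = 80.0;
        for (int step = 0; step < 10000; ++step) {
            double out = controller(setpoint, pv);
            pv += 0.01 * (out - pv);            // crude first-order plant model
            assert(pv < setpoint * 1.05);       // regression check: no big overshoot
        }
        assert(std::fabs(pv - setpoint) < 1.0); // settled near the setpoint
    }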


> So how come we are the only industry taking a lot of overtime all the time?

Have you never met a game developer?


Don't know why the downvotes. I was once asked to work on the original Gears of War. After talking to the developers at Microsoft Game Studios, I noped on out of there. Most game devs I've met have horror stories of death marches followed by massive layoffs. It's... not a great sector of the tech industry if you value work-life balance.


I would hope that the cycle of crunch -> ship -> layoffs is in the past now. From memory it used to be that way because the studio needed to crunch to get the product out the door to sell. Then once that happened they had no need for all of the people since there wasn't another product to work on.

These days there are a lot of studios working on so called perpetual games (MMORPG, etc) which have constant maintenance and improvements so they don't need to let go of people. Also the other AAA game companies are bigger and can shift people between projects as needed.


My graduate project at university was about air-gap malware, soft TEMPEST, and hacking industrial PLCs (S7-300, Stuxnet and all that :P).

What terrified me most is not only the state of ICS nowadays (it's pretty much fucked), but also the fact that cyber-security vendors try to profit from it by shoving their products down the throats of scared managers.

And what @donquchotte says is pretty much true: at the facility where I did my practical work, all PLCs were connected to the company network via TeamViewer, and passwords were written on stickers slapped on monitors. They used Modbus over TCP, with plain-text packets easily viewable in Wireshark.

On the first day, I accidentally broke my S7 with a wrongly uploaded configuration; it took them 3 days to find the special Siemens laptop needed to connect to this PLC and reset it. I also tried to play around with Metasploit and follow the work of Beresford [1,2], but my most trivial attack was simply flooding the PLC's IP with random junk: it stopped responding to any commands.

[1]: https://www.youtube.com/watch?v=33kouEKm0zo

[2]: https://github.com/moki-ics/s7-metasploit-modules


The first time I tried to automate the S7-1200 web UI, it crashed hard when I sent a POST request with an empty body.

These things do not belong anywhere near the internet, patched or not.


Pretty much built for someone who knows exactly what they are doing. Like someone who has read the manuals cover to cover and taken an exam.


An industrial control system that crashes when it receives an unexpected network request isn't built for someone that knows exactly what they are doing. It was built by someone who wanted to expend the minimum effort possible and make contingency-handling someone else's problem.

This isn't just about hacking. A device that crashes hard when it receives a POST with an empty body may do the same thing if there's a physical glitch with its connection, or if a cosmic ray flips a bit somewhere. Devices that control industrial machinery should be more reliable than that.

If this were a hobbyist project, or a toy, that would be one thing, but these are specialized devices that cost a lot of money. They're fragile liabilities that don't belong anywhere near a shared network.


> An industrial control system that crashes when it receives an unexpected network request isn't built for someone that knows exactly what they are doing.

Yeah - I can see even the simplest of network vulnerability scans wreaking havoc on a device like this.


It's enough to just plug a cable into two switch ports (a switching loop).


Yeah, that's the reasoning someone needs to use to justify why something like this exists. It only works at all if you get everything exactly correct.


I am not convinced an air gap is even a good idea for PLC networks. More and more, these systems are developed by consultants and OEMs, who then need to service and troubleshoot them. I work for an OEM with this problem, and it plagues me all the time: "Sorry, I cannot fix your issue without access, and I am not flying someone down for this."

This business reality is going to get worse as time goes on. Companies would be receptive to change if people at the executive level were educated first, the carrot being more cost-effective system development.


I would argue that air gaps are critical in ICS networks and PLCs, because it is an extreme safety risk to push a potentially bad update from offsite without clearing it with the operator. The major issue will be pushing back against cost pressure from management.

This is already turning into a major problem with IoT devices from supposedly reputable companies, e.g. home thermostats that let pipes burst because of a bad or incorrectly pushed update. Now consider if that bad update could instead kill someone.


The operator should never be in harm's way, period. If they are working on equipment, it should be physically locked out; say, a valve would have a 6" diameter pin through the mechanism and a padlock on it so the pin can't be removed except by the operator, and electrical equipment is powered off with the breaker locked in the off position. If they put themselves in harm's way, they will eventually be fired for unsafe work.

Bad updates are bad updates; they are incompetence on the part of the programmer. But the worst they should be able to cause is minor equipment malfunction; anything more means the physical system has been poorly designed.


I think you misunderstand. When maintenance is being done on the machine itself, yes, you are going to do a lock-out/tag-out routine. On the flip side, if you remote-push an update that contains, for instance, a register read error in the rate controller that shifts the bits by 1, the operator's first notice is going to be when he loads it up and the ensuing chatter violently throws the workpiece.

Or something dumber: maybe they pushed a temp variable to the wrong type of memory (say, flash EEPROM) that updates every cycle of the control loop, and then did a push to an entire line of manufacturing equipment. Congratulations, all of those are going to brick in an hour or so when the memory hits its maximum write-cycle lifetime, and you'll have to get new boards shipped in while your entire line is down.

Remote pushing needs to be handled very carefully when controlling real world equipment.


My experience is limited to controlling turbines, generators, pumps, valves, hydraulics, and dams using Schneider, Allen-Bradley, and Unitronics PLCs. No motion controllers, no conveyor belts, no factories, no robots, no VFDs.

I have made a total of at least 7,500 remote updates across maybe 25 different control systems. In fact, we basically get things working well enough that the operators can handle the day-to-day stuff, and then go remote after that, because who wants to stay away from home at some dirty industrial site eating crappy food? The exception to that is backup power for hospitals: no remote access there, and it is a simple enough system that we can test all the different scenarios and then walk away.

Generally, the last thing I put before an output in a PLC is some rate limiting. If it is a discrete output, it won't be allowed to operate more than once every 5 s, for example. If it is an analog output, it is limited to achieve the maximum desired actuator velocity or acceleration. This is a good catch-all to avoid damaging equipment. I watched somebody learn this the hard way as a DC motor starter exploded when it was told to start and stop the motor 10 times a second.
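
As a sketch in C++ rather than ladder or ST, the discrete-output version of that catch-all might look like this (the 5-second holdoff is the example figure from above):

    #include <chrono>

    // A discrete output that refuses to change state more than once
    // every 5 seconds, no matter what the logic upstream asks for.
    class RateLimitedOutput {
        using clock = std::chrono::steady_clock;
        bool state_ = false;
        clock::time_point last_change_ = clock::now() - std::chrono::seconds(5);
    public:
        void request(bool desired) {
            if (desired == state_) return;
            if (clock::now() - last_change_ < std::chrono::seconds(5))
                return;                          // too soon: ignore the command
            state_ = desired;
            last_change_ = clock::now();
            // drive the physical output from state_ here
        }
        bool state() const { return state_; }
    };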

Certainly I have made errors. A bad one was forgetting to limit a position so that it could not be less than 0. The position was subtracted from some other number, and subtracting a negative number is adding! That was a nasty positive feedback loop that resulted in fast oscillations in the position of a 1 m diameter pressure-reducing valve. However, that was during on-site commissioning, not remote work.

Certainly for major changes I will require a shutdown and coordinate with the operators, but it is a judgement call on my part as to what I can program and test at the office and unleash on remote equipment versus what needs to be tested on the actual equipment. Most testing on the equipment is required to determine the equipment's characteristics.


A large amount of this sounds like plain awareness, authorization, and authentication issues with updates. The plant should be aware of, and approve, the changes being made, as well as be able to roll them back.


I work in the industry, and I can definitely say that these things aren't built to be internet-facing; nor should they be.


It honestly doesn't surprise me that so many PLCs are exposed directly and are not up to date. Having worked in maintenance at a large factory where we directly worked with Siemens PLCs, nobody is really prepared to make their PLC secure. Most people handling and programming those PLCs do not have a deep understanding of "Internet of things" (in Germany, the term "Industrie 4.0" is used a lot) but management is happy to jump on that topic.

You usually don't worry about anything other than the task your PLC is supposed to perform (we're in a factory, after all, and need to make money, so better do it quickly). I don't think I can remember ever updating the firmware of any of our PLCs; most of them even rot in a storage room for a couple of years before being put into use.

Everything is connected to the local network for monitoring purposes and if it weren't for another department strictly managing our network I reckon there would be a lot more of them accessible directly from the internet.


It's hard to make money out of attacking stuff like this because you can't normally keep the system broken in a way that is easily fixable when the money shows up. Once you contact your victim they just fix whatever is broken and then ignore you. Industrial processes are unreliable enough that one extra outage might not make that much difference.

It's kind of like making programs to run on Windows 95. You don't have to worry that much about making your program reliable because no one will notice a few extra crashes.


Making money is not the only possible motivation for exploiting vulnerable industrial systems, though of course as the other reply says, there are ways to make money indirectly anyways.

First off, the damage to third parties might be extremely severe, especially in a targeted attack. Think of long-lasting power outages due to damaged equipment. Or think of environmental damage (e.g., suppose you got a nuclear power plant to vent radioactive gases, or worse, to have a meltdown).

Second, the damage to the owners of the industrial facility might be severe even if not to third parties. Here the attacker might benefit if, e.g., they are a competitor or have shorted the victim's stock.

The first case is of extreme interest to the public, don't you think?

Let's not dismiss concerns about industrial security.


Of course damage is bad but I was specifically addressing the apparent lack of interest from the criminal set.

For your first case, you failed to provide a possible motivation. I will assume "terrorism", as that seems to be brought up as a motivation for most everything these days. There is more than one problem with this.

First, it is pretty much impossible to prove that an attack over the internet was actually carried out by any particular entity, or even any entity at all; things fail on their own. Second, it is usually easy to entirely prevent a repeat attack, so there is no ongoing sense of vulnerability.

For your second example: it has always been fairly easy to damage industrial infrastructure, yet there have been almost no attacks of this kind. Stock market movements are not that severe even in the case of a serious catastrophe, much less for the sort of trivial problems you could cause by attacking industrial controllers on the net.


You could make money indirectly by doing things like betting on certain markets or companies to fail.


"The market can stay irrational longer than you can stay solvent" is the traditional response to that.


Put this crap all on a segregated network. It's how we did it 20 years ago, are people still making the same mistakes? Were firewalls not common even 30 years ago?


I'm an engineer for an industrial integrator. No matter how often we tell our customers to keep the industrial control networks separated, they'll still find ways to lazily screw their networks up.

A couple of years ago, at a Fortune 100 company, I dealt with a WinCE panel that began randomly spewing millions of broadcast UPnP packets - enough to make Wireshark lag badly. It took us about fifteen minutes to find and unplug the offending item, and the recommended mitigation from Rockwell Automation was to just disable UPnP. No patch was available for a problem that has been known since about 2006 or so, on an HMI product that is still being sold today.

Due to some terrible switch configurations on the part of the customer, the traffic made it out to the campus network and DoS'd most of their intranet servers. About 3,000 employees were sent home early for the day because none of them could read their email.


How do the computers running the HMIs get security patches, then? Only the most critical equipment would have a budget large enough to pay someone to put Windows or Linux updates on a USB drive and get all of the computers updated.

I don't allow the HMI computers to access the internet, but then they remain unpatched. As an OEM I'm out of the picture once the plant is working.


Segregated networks are hard, and usually end up punctured very quickly when someone wants remote access to the segregated system. Or builds a bridge using a system connected to both networks.


> A 2013 attempt (attributed to Iranians) to "hack" a dam in upstate New York failed, largely because the dam's flood control systems had been broken for some time.

This seems more like a satire than real world.


That reminds me of VNC roulette. (http://5.230.225.107/)

However, it appears to be offline now, though it isn't hard to create your own.


I just read up on this, it's absolutely terrifying.

So far the news stories seem to have shown unsecured X-ray machines, farm equipment, CCTV, maybe oil wells, bank computers, cash-flow software, waste treatment plants, parking lot systems, petrol station storage tanks [1], and a Swiss water reservoir [1].

It shows everything wrong with the lack of incentives for security!

[1]: https://securityledger.com/2016/03/vnc-roulette-feasts-on-in...


I would like to see some authentication and encryption in communications protocols like Modbus and EtherNet/IP. It could be as simple as entering the same passphrase on the client and the server: if you don't have the passphrase, your requests are dropped, and you can't decrypt any packets you might get your hands on.

I can't see why that wouldn't be achievable. Yes it would take some extra processing power on the PLCs and would generate a bit more heat.

It would have to be implemented by Schneider and Rockwell, who have been pushing all responsibility for security to the network level.

At least the latest Schneider PLC, the M580, allows requiring a password to connect with the programming software [0], and has some ability to restrict which client IPs can connect [1]. The M580 was released in the last two years, so everything prior to 2016 doesn't have those features and comes set up to allow anonymous upload of new firmware over TFTP.

[0]: https://www.schneider-electric.ca/en/faqs/FA238215/

[1]: https://www.schneider-electric.com/en/faqs/FA282558/


The problem with requiring a plaintext password is that it may slow down attacks when a device is simply connected to the open internet, but it is trivial to defeat in all other cases. Modbus RTU is broadcast, so you would just listen for the password. Modbus TCP can be defeated with ARP spoofing and a man-in-the-middle attack (I've seen this automated for taking over SCADA systems).

A better solution would be a shared symmetric key that is manually programmed into all devices. However, this complicates installation and maintenance. It's a hard sell when your controls are installed by contractors paid by the hour. Pretty soon everyone is setting their 128-bit key to 0x12345678 so nobody forgets it during emergency maintenance.
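
As a sketch of what the shared-key idea could look like at the frame level, using OpenSSL's HMAC (an illustration of the concept, not any existing Modbus extension):

    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Append an HMAC-SHA256 tag, computed with the pre-shared key, to an
    // outgoing frame. A device holding the same key recomputes the tag and
    // silently drops any frame whose tag doesn't match.
    std::size_t tag_frame(const uint8_t* frame, std::size_t len,
                          const uint8_t* key, std::size_t key_len,
                          uint8_t* out /* must hold len + 32 bytes */) {
        std::memcpy(out, frame, len);
        unsigned int tag_len = 0;
        HMAC(EVP_sha256(), key, static_cast<int>(key_len),
             frame, len, out + len, &tag_len);   // writes a 32-byte tag
        return len + tag_len;
    }

A real design would also need a nonce or message counter inside the authenticated data, since a bare HMAC tag still allows captured frames to be replayed.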


Substitute passphrase in my post with shared symmetric key. It is still a string you have to enter on both ends.

If the underlying mechanisms just work in every case except for a key mismatch, and give you a good error message in that case, then requiring your contractors to enter the same string into two devices they already have to program is going to be hard for them to justify as a great additional expense.


When I was younger, I had this friend (cough, cough) who used to war-dial 0800 numbers (telephones beginning with 0800 are national toll-free numbers in Brazil) looking for modem tones.

The biggest share of toll-free numbers with modem tones were fax machines, but many were 9600-baud terminals for some PLC. I guess these were remote maintenance access points for isolated machinery. I wonder if people still do this (maybe an updated version with 3G or 4G).


As mentioned in the article, https://www.shodan.io/ is, in my book, the go-to site for this kind of thing. Is anyone aware of any similar sites?


https://censys.io/ is also very cool.


I can speak first-hand about this. You want to know what the real problem is? Cheap motherfucking executives and boards. Full stop. I was at one time a one-man miracle show for a company that had 6 branch locations and was supporting, at any given time, ~50 operational PLCs in the field, among other PLCs internally for other operations...

Now, I've learned a lot from the mistakes I've made, and I was never a perfect senior sysadmin, mostly because I wasn't playing the meeting-room presentations/reports and politics game. So I'm not trying to make myself out to be perfect in any fashion.

That said, when I requested a full-time T2 person: denied. Requested a full-time T1 person: denied. I literally had to poach a part-timer who was sweeping floors and train them myself... which did nothing to alleviate my time crunch dealing with desktop users so I could tackle the substantial infrastructure security issues, which take time and thought (did I mention it was an open office area? so good luck getting good thinking work done...). Requested a contractor structured-cabling job (just horizontal)... denied...

We sysadmins can secure this stuff, mostly through really good network policies and firewalls, etc., but if management is cutting corners left and right and would rather hire some contractor/MSP type that does 1/4 the work at 3x the cost, while refusing to support internal IT teams with the tools and funding they need to do their jobs properly, it's no fucking wonder insecure systems abound in the industrial world.

This is management's problem, and until the public or others start punishing them for not seeing IT and security as an investment rather than just a janitorial cost sink, it will continue to happen.

That's the reason behind my last burnout. Now I'm happily on break pursuing my data science degree. I've been on the contractor side too (I dropped out of college after the Marine Corps to start an IT support company that is still alive today, even after I left), so I've seen the inside of hundreds of companies, from Fortune 500 oil to 20-man law firms. I'm not just pulling this info from a couple of jobs; I saw it everywhere.

A good CIO or CTO should be able to address many of these technical-political disconnects, but they hardly exist. Go ask /r/sysadmin how many of them even get a real budget. Half the time they just have to "ask", usually a CFO or CEO since the CTO/CIO role doesn't exist, and hope they were convincing enough that management approves.

Oh, and of course they all want to immediately jump on the IoT bandwagon, but they still don't want to hire the people to get them out of their old technical debt, much less the massive new technical debt many IoT devices will bring!

Don't even get me started on the "big data" issues IoT and many PLC type devices bring.

Edit: Key to the increase in this issue is direct LTE connections on the PLCs. You used to have a satellite link to a router, and everything was internal from there; that's less true these days. Even so, you still have issues with the security of that edge.


To touch on this a bit: to be a good controls guy who does PLC programming, you need some mechanical background to understand the equipment you are controlling, you need to know enough electrical to do the wiring and parse what your sensors are telling you, and you need to be a fair shake at programming/comm protocols/logic. Oh, and most control loop implementations are actually sets of differential equations with DSP inputs, so knowledge of that stuff helps a lot when you get into the precision work.

If you go look at the current postings for these roles, you'll find that the average salary is between $15 and $25/hr, you'll be on call 24/7, and you'll have to work physically demanding jobs in relatively harsh environments. I've seen a few outfits that don't even offer to pay their employees for commuting time between sites, only for when they are actually sitting down and working on the boards.


This is every (bad) company ever. I worked for a company with $500M in revenue that made nothing but consumer electronics, and it defined its own ENGINEERS as "cost centers."

You know the only people who weren't defined as cost centers by management? Themselves.


What is the opinion on IPv6 with regard to this kind of problem? I mean, with IPv4 often behind NAT, I guess a LOT of devices that would otherwise be exposed are protected by a NAT.

But in an IPv6 world where NAT is no longer required, don't you fear that a lot of devices might get exposed?

I know it will be impossible to scan the whole address space, but I am sure other means of discovering IPs will arise.


Yeah, that's one of the known issues with IPv6. You can still use NAT, or better yet a firewall that blocks all direct traffic between the internet and internal nodes. You can also do things like assigning nodes a new IPv6 address every 24h.


NAT does not prevent inbound connections.

A stateful packet filter does.

Stateful packet filters work with both IPv4 and IPv6.


I know, but I meant how NAT with IPv4 was deployed by ISPs (at least in Europe), which is a single IP with all inbound connections blocked.

I guess it will depend on how ISPs deploy IPv6, but where I have seen it deployed, the ISP did not provide NAT by default; all allocated IPs were routable, and inbound traffic was not blocked.


> I know, but I meant how NAT with IPv4 was deployed by ISPs (at least in Europe), which is a single IP with all inbound connections blocked.

The single IP does not prevent inbound connections; the stateful packet filter does. This has nothing to do with NAT.

Also, NAT is not usually done by the ISP, but by the customer's router.


Most of the time, the customer's router is controlled by the ISP. And I don't see why you are being so pedantic about this; the point is about how NAT is generally deployed, and for consumers that is with a single IP, a stateful firewall, and UPnP for port forwarding. And this might change with IPv6.


No, that is how IP(v4) is commonly deployed: With a single IP, NAT, UPnP for port forwarding, and a stateful firewall.

Yes, that could change with IPv6. But it still doesn't have anything to do with NAT. Just because stateful firewalling happens on the same device as NAT doesn't somehow make it a property of NAT. There is also usually a web interface on the same device; with IPv6, that could change as well. Does that mean that web interfaces are a property of NAT? Or that it would be sensible to say "NAT is commonly deployed with a web interface"?


To be fair, the ISP might do carrier-grade NAT, but then there's usually no or very limited filtering...


Well, but then you arguably don't have a single IP, but rather none at all.

Also, as you usually still have only one (non-public) IP, the CPE still does NAT as well.

And in any case, NAT does not prevent inbound connections, not even CGN. CGN might prevent inbound connections from the public internet, but your ISP can still connect to your internal network if you don't have a stateful filter on your side.


You can use DNS and NTP to discover IPv6 nodes.


How does NTP fit? You mean listening for packets, or some kind of announce/resource broadcast that I'm not aware of?


I guess there have not been any large-scale or very public attacks against these computers because the people who would do this would be an enemy of every state that has manufacturing.

The disruption may be temporary, as things would recover quickly, and the perpetrators would be hunted down by law enforcement and thrown into jail for a long time.


I wonder if this was in response to the top comment on this HN post from 3 days ago https://news.ycombinator.com/item?id=16237026


Is this related to this recent finding?

https://news.ycombinator.com/item?id=16237026


How do they know they aren't honeypots?


Never attribute to malice that which is adequately explained by stupidity.


Simple answer: because they are not that clever.


Speaking from experience, Siemens is the company where they ask you "what is SQLite?" when you mention that a half-binary XML is not necessarily the best format these days.


That sounds weirdly familiar. (Hello S7 project files?)


Something related, still ACX.


I wouldn't be surprised if there are some controls operators who work from home and use VNC to access the controls!



