Creating a ZigBee Chain Reaction (iacr.org)
160 points by nanis on April 13, 2017 | hide | past | favorite | 130 comments



Security is expensive for vendors to implement, so they externalize the risks to their customers and society in general. To reverse this dynamic, "insecure by default" needs to be more expensive than "secure by default".

It seems simplest for the Govt to avoid trying to mandate detailed security standards for continuously changing tech, and rather simply make the vendors legally liable for damages and let the market evolve effective standards and practices.

However, constructing effective legislation/regulation for doing so is a non-trivial legal challenge. Simply proving when the vendor is liable vs. a user could get tricky, as could estimating damages and divvying them up when products from multiple vendors were involved. Among other things.

Anyone, especially lawyers, have insight on the best way to fix this problem?


> It seems simplest for the Govt to avoid trying to mandate detailed security standards for continuously changing tech

Governments can write a lot of broad legislation that is non-specific.

The software industry requires a legislative bitch slap like the auto industry received. These rules would wreak havoc on the industry, but if you ask me, for the better.

- Are you running unpatched software exposed to the internet for which CVE patches exist? Pay a fine every day until you patch.

- Ban IoT devices that do not have automatic signed software updates over encrypted channels (which would probably ban all current IoT devices).

- Ban all IoT devices without crypto capabilities. Must have a hardware RNG and a set of standard crypto algorithms.

- Does an IoT maker have a CVE and has not patched all their devices in X amount of time? Daily fine.

- Are you a vendor who has not patched a CVE for your software after X amount of time? Pay a fine every day until you do so.
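For illustration, a minimal sketch of the device-side check behind "signed software updates". Hedged: this uses a pre-shared HMAC key from the stdlib for brevity; a real scheme would verify an asymmetric signature (e.g. Ed25519) so that no signing secret ever ships on the device, and would fetch the image over TLS.

```python
import hashlib
import hmac

def verify_update(firmware: bytes, signature: bytes, key: bytes) -> bool:
    """Reject any update image whose MAC doesn't verify.

    Sketch only: a shared HMAC key means one compromised device leaks
    the signing key for all of them, which is exactly the "master key"
    problem; production firmware should check a public-key signature.
    """
    expected = hmac.new(key, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# Hypothetical provisioning: each device gets its own key at the factory.
device_key = b"\x01" * 32
blob = b"firmware-v2.bin contents"
good_sig = hmac.new(device_key, blob, hashlib.sha256).digest()

assert verify_update(blob, good_sig, device_key)
assert not verify_update(blob + b"tampered", good_sig, device_key)
```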


Cars kill people. At worst, current IoT lets a pirate control your light bulbs, see your heartbeat or record your sleep patterns, waste your food and listen to your music.

This won't interest anyone.

For this to start seriously motivating people, you'll need:

- even more numerous DDOS, with more expensive consequences. Companies are affected so they act.

- scandals with naked people, preferably famous ones or underage, or both. The kind of people that get the media talking.

- money stolen. A lot, to bother insurers.

- people dying. Scaring the public always works. E.g.: a fire started by a hacked IoT device.

Otherwise nobody is going to bat an eye.

We don't live in a world where most companies do the right thing because it's the right thing. Remember that tobacco companies used to run ads showing you how cool your life is with cigarettes while lobbying Congress about how harmless to health they were. Remember that people are putting their entire lives on systems with text analysis, geolocation and face recognition and don't see the big deal. Remember that the government spies on every citizen, considers it perfectly acceptable, and that the citizens allow it.

So really, the fact that your connected fridge has an open telnet port is not going to move anyone.

You need damage done to get a reaction. Not potential damage.


> Cars kill people. At worst, current IoT lets a pirate control your light bulbs, see your heartbeat or record your sleep patterns, waste your food and listen to your music.

Once your home network is compromised it's much easier to infect other computers on it; a lot of IT people don't even run firewalls on PCs anymore. We largely rely on a single line of defense for home cyber security.

A compromised device becomes a launch pad for a bunch of other attacks, like data collection and credit card theft. Imagine how many pedophile rings would love to remotely watch kids through a built-in webcam, all without the family ever knowing.

Or you could simply use them to stream illegal torrents.


I said "Cars kill people", not "cars could kill people".

You are talking about potential. People don't care about that.

Potential that I mentioned by the way, with "scandals with naked people, preferably famous ones or underage, or both."

They will care only once pedophiles have watched THEIR kids through their webcam. Repeatedly. With media coverage.

Not before.


If cars were a new technology today, two-way highways without center dividers would not exist.


> You are talking about potential. People don't care about that.

What people care about is irrelevant. What should be legislated for the better of all is another matter. Car safety laws were unpopular with a lot of regular people as well.


That's fair but it was because dead people cost money.


That was more about what lax security could do, which you were downplaying as nothing significant.


> Imagine how many pedophile rings would love to remotely watch kids using a built in webcam?

Already happening. Access to webcams is sold on dark markets.


If you know of it, then report it. Otherwise you're allowing it to exist.

If you don't know of it, then this is just a fancy way of spreading FUD about IoT.


As I recall, these were Tor onion service gateways. Victims weren't identified. And dark markets generally have an extremely laissez faire attitude toward such matters. So there's nothing to report, and nobody to report it to.

If you have IoT devices, it's prudent to periodically scan for connections to the Tor network. Tor Project provides a comprehensive list of relays. A clever adversary can use unpublished bridges, however. But you'll still see traffic to unexpected IPs.
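The relay check described here can be scripted. A minimal sketch, assuming the connection list is parsed from `ss -tn`/netstat output and the relay set is downloaded separately from the Tor Project's published lists (both inputs are stubbed with toy data below):

```python
def flag_tor_connections(active_conns, relay_ips):
    """Return remote endpoints that match a known Tor relay list.

    active_conns: iterable of (remote_ip, remote_port) tuples, e.g.
    parsed from `ss -tn` output. relay_ips: set of relay IP strings
    fetched from the Tor Project's published relay list.
    Unpublished bridges won't match, but still show as unexpected IPs.
    """
    return [(ip, port) for ip, port in active_conns if ip in relay_ips]

# Illustrative sample data, not a real consensus snapshot.
relays = {"171.25.193.9", "185.220.101.1"}
conns = [("93.184.216.34", 443), ("185.220.101.1", 9001)]
print(flag_tor_connections(conns, relays))  # → [('185.220.101.1', 9001)]
```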


> Cars kill people. At worst, current IoT lets a pirate control your light bulbs, see your heartbeat or record your sleep patterns, waste your food and listen to your music. This won't interest anyone.

The IoS (Internet of Shxt) goes way beyond light bulbs, and in fact it has been around for ages.

There are all kinds of really dangerous devices connected to the Internet: power stations, gas distribution centers, robotic stuff in factories. Each of these three can easily kill or severely injure people when hacked by someone with that intention.

Remember Stuxnet?


At worst, current IoT devices can be used to remotely murder someone. Or to indiscriminately massacre anybody using one of these devices who is unlucky enough to come within range of a rogue transmitter: https://www.theregister.co.uk/2016/10/05/animas_diabetes_pum...


>Cars kill people.

Two words: "national security"


> Two words: "national security"

The NSA is the number one hoarder of exploits, so this argument is very weak.


Someone recently mentioned an idea I really want: a use-by-style label, like "this device will receive security updates until 1/1/2020" or "this device will receive updates until 1/1/2018".

We all hear how consumers don't care, but that's largely because they aren't given enough information to care.


Yeah sure. Like people care about Facebook so much right now because of all the information they've been given about how dangerous it is?


How about we do it so that I and anyone that cares can make an informed decision? Just because not everyone reads the nutritional information on food doesn't mean it's not worth having.


What you share on Facebook is mostly and primarily your problem, and maybe a problem of those who you associate with. Insecure IoT devices are already being leveraged to perform large-scale DDoS attacks, which makes this an infrastructural issue. It affects many other people who are in no way affiliated with you. It's less like nutritional labels and more like vaccination. Solutions to such problem should not depend on your consent, informed or otherwise.


Yet look at the reaction I get for advocating informed consent. As an industry we hate collective action.


Sure, I don't mean we shouldn't do it. You can't do without education; it's a mandatory condition.

It's just not enough. People's priority right now is not to be actors in their society. They don't want to exercise the power they have, because that also means taking responsibility for it.

I have no idea what the solution is to that problem though.


One standard solution to the "tragedy of the commons" problem is to change the incentives so that the actors don't destroy the commons. Usually that requires an external authority (i.e. a government) to enforce it, though.

Additionally, this is not a new or unique problem. Similar problems exist in the areas of car safety, flight safety or product safety. The way it's solved there is by enforcing safety standards and denying market access to products or services that don't meet those standards.


This would have the wonderful effect of making the CVE yet another tool that can hold corporations hostage in exchange for money.

Yes, security failures are pretty bad, but startups not happening thanks to excessive regulation is probably worse.


What? No.

If IoT (and important software in general) doesn't get some aggressive regulation, then people will die at some point. There was already a case where the heating went out in a whole town because of some connectivity issues... I don't remember the details, but I think it was in Sweden.

Medical devices, industrial machinery, power plants and at some point, God forbid, nuclear plants.

I flatly don't care about a startup getting shut down because it had shitty security; there will be another one. People don't generally die from that. But if a person dies because of a lack of regulation, or a strip of land gets contaminated, that's it.

And instead of thinking about corporations being "extorted" for not fixing their shit, how about society having to deal with the fallout of shitty IoT security? A DDoS paradise, critical infrastructure possibly being taken over, etc.

If your business model breaks down when you are forced to do it properly, it was shit. Find a new one. Welcome to capitalism; it's nobody's job to make sure your company succeeds.


> I don't remember the details, but I think it was in Sweden.

What about the cases where increased connectivity has prevented blackouts thanks to improved response times (of which there are probably on the order of 10,000 more)?


Strong assumption. It's basically selling tiger-repelling rocks, claiming "but think of all the bad things which DIDN'T happen" without any data that at least hints at the prevention.

If you don't have connectivity, you can still prevent blackouts to the same or almost the same degree. You just need more redundancy in the system, which drives up costs.

I actually checked: it was Finland, and, simplified, it was due to a heating pump continually restarting because it thought there was an error, since its sensors couldn't communicate.

If you don't have connectivity, then that heating pump would need a human to regulate it, which leads to higher latency and less efficiency.


Your position is very extreme. Extreme enough that I don't think it's a productive use of time to debate with you.

Maybe tone it down a little, take a breath, and realize (a) that people are not in fact dying, and (b) if they do, we can deal with it then.

The problem with all of these "People might die!" arguments is that everyone who makes them is always thoroughly convinced that they are right at all costs right now so just fix this!!! that it gets so tiresome.

Neither of us has considered all of the possible effects of either action, so we should both carefully examine the consequences of our respective opinions.


You are arguing against something I did not say. I never said we need to do it "at all costs right now so just fix this!!!". I am saying that I value the risk of human lives over the risk of some startups not being created.

You didn't specify which part of my position makes me extreme, so I will just address your a) and b), and your last sentence.

b) is what I would call extreme. Yes, people die. But in the discussion of "we should protect people" vs. "we shouldn't stifle business", having "we can deal with it if people die" as one of your arguments is pretty cold (this is me seeing the implied "but we can deal with it worse if startups fail" in this context, so you might want to correct me here).

As for a): as far as I can tell, people have not died yet. But we can see scenarios in which they could. I'll give examples with real-life incidents as inspiration:

1. Hacker wants to extort municipality, shuts off heating in winter: http://metropolitan.fi/entry/ddos-attack-halts-heating-in-fi...

2. Foreign interest wants to sabotage infrastructure, civilians get caught in collateral damage (also stuxnet) https://www.bloomberg.com/news/articles/2014-12-10/mysteriou...

3. Competitor or hacker wants to extort company: https://www.wired.com/wp-content/uploads/2015/01/Lagebericht...

I would ask you: how many people have to die, or how much damage needs to be caused, before we put some basic security standards into regulation? Will one person suffice before I can make that argument?

As for this:

>Neither of us has considered all of the possible effects of either action, so we should both carefully examine the consequences of our respective opinions.

1. Yes we should, and then we will run out of lifetime because we cannot possibly think of everything. But I'll assume you didn't mean it literally.

2. Considering all relevant consequences I can think of (feel free to add any I forget):

If we create a lower bound on security, and mandate that all "consumer/industry grade" devices need to live up to that lower limit or face heavy fines, I see the following things happening:

* lots of stuff will not get created, because the margin would be gone.

* there will be a new market for security middle ware, or a strengthening of the existing one

* there would be a halt in the increase of the power of DDoS attacks

* certifying your product as consumer or industry grade would need to be an efficient process and would probably add onto the cost of developing a product

* GPL and open source software would have to be treated carefully. But if you start with IoT and don't touch "classic" software, that can be managed. The distinction between "consumer/industry grade" probably doesn't make sense for server software and would have to be shifted to the process, i.e. a company can use any software they want on their servers AS LONG AS they make use of certain key technologies and industry best practices and have a good process in place. (Yes, this means no more SaaS without a security team... or a founder willing to learn that stuff, or a new company providing that service. You could make an exception for revenue below 50k/year or something if you really want.)

Those are the rough consequences I can see from regulation. Nothing too negative, imo.

On your side, we have the status quo: dishwashers running Linux and having directory-traversal vulns https://www.theregister.co.uk/2017/03/26/miele_joins_interne... and companies ROUTINELY storing passwords, possibly without any good crypto, facing no consequences if stuff just stops working https://www.wired.com/2016/04/nests-hub-shutdown-proves-your... or if your data gets stolen because of their shitty practices.


Okay, but you are massively underestimating the cost. I feel bad responding to your comment with essentially a one-liner, so even though it's late, here you go: "certifying your product as consumer or industry grade would need to be an efficient process and would probably add onto the cost of developing a product" does not at all capture how thoroughly screwed a new startup will be if they have to devote $60k to a pentest before even getting off the ground. That would have sunk Apple, for example. I don't think most people recognize or appreciate how brittle startups are at the very beginning.

You're also assuming that there are two states: "secure" and "not secure." You're further assuming that there is a way to transition from one to the other, by "becoming secure" through some state-mandated process. But it just ain't so. No matter how much money you throw at it, you can only increase security, you cannot prevent security problems. If you've shipped code, you've probably introduced some security problems. We should still try to improve the situation, but it is nearly impossible to make software secure. And in the meantime, it's the perfect tool for competitors to stamp out competition, since only incumbents can afford to be labeled as "secure" (when they're not).

I don't think it would significantly curb the power of DDoS attacks, both because DDoS attacks are an inherent problem with the web's design and because hackers are always finding new and innovative ways to increase their DDoS power anyway. Some IoT devices aren't going to make a huge dent in their abilities in that regard.

What would help is if pentesting became a regular occurrence due to massive cost reduction. It shouldn't take $60k for a pentest, but it does. The way to achieve this is through open-market competition. Regulation will only raise the price.

Regarding your first point, I could take the easy route and say nothing. It's tragic when anyone dies. But it's also a fact of life. Many people will die to self-driving cars, yet there will be many thousands fewer deaths thanks to them. The idea that a single human life is more valuable than raising the standard of living for everybody is as strange to me as it is to you that a single life isn't the most valuable thing.

Obviously, my perspective would probably change if someone close to me died due to some fool's bad software. But in that circumstance, the justice system would be as available to me as it is to you, and it was designed for just such an occurrence. It would not offset the heartbreak I'd feel, but at least society has processes in place to do something.

> this is me seeing the implied "but we can deal with it worse if startups fail"

This seems self-evident: Many of the enjoyments we take for granted are thanks to startups. Our quality of life has dramatically improved due to the technology they help usher in. Technological progress is not nearly as inevitable as everyone would like to believe, and it's easy to forget how much better our lives are thanks to it.


I'm currently working at a startup (5 people full time) doing IoT devices. I am by no means a security expert; but as the primary software engineer (and being rather afraid of the internet), I've assumed the role of "security guy."

While I've certainly spent time on getting everything to a point that's "secure enough" to let me sleep at night, that amount of effort has in no way jeopardized our business.

And it's really simple things that cover the vast majority of problems:

- Encrypt everything.

- Use unique certificates/keys for every device, no master keys.

- Have (and use) automatic update capabilities.

- Don't use default credentials anywhere for anything.

- Disable unused protocols/close unused ports.
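The "unique keys per device, no master keys" point is cheap in code too. A sketch of factory provisioning that generates fresh secrets per unit instead of flashing identical images (the field names are made up; a real setup would also mint a per-device TLS certificate signed by the vendor CA):

```python
import secrets
import uuid

def provision_device() -> dict:
    """Generate unique credentials for one device at flash time.

    Every unit gets its own secrets, so compromising one device
    reveals nothing about any other (no shared master key).
    """
    return {
        "device_id": str(uuid.uuid4()),
        "api_key": secrets.token_hex(32),          # 256-bit unique secret
        "update_channel_key": secrets.token_bytes(32),
    }

a, b = provision_device(), provision_device()
assert a["api_key"] != b["api_key"]  # no two devices share a secret
```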

After the initial setup work, we incur a (very) minor additional cost of manufacturing to provision unique keys instead of flashing identical images. That's it.

What this regulation would really do is extinguish a lot of the bullshit cheap/knockoff products coming out of China.


It's good that you secured your IoT device. But if this regulation were in effect, you would not be declared secure until you had a pentest. That means regardless of how much effort you put in, you'd need to pay a (relatively) massive amount of money.


I guess the regulation wouldn't require a pentest in my mind, at least not for all classes of devices. Something along the lines of self-regulation/providing basic documentation that you've adhered to some set guidelines would be a good step in my mind. If you're found in violation of those guidelines though, by all means require a pentest going forward.

I honestly haven't a clue whether or not that might work in practice though.


>I feel bad responding to your comment with essentially a one-liner, so even though it's late,

ah, the strategic advantage of timezones :P

> Okay, but you are massively underestimating the cost.(... )here you go: "certifying your product as consumer or industry grade would need to be an efficient process and would probably add onto the cost of developing a product" does not at all capture how thoroughly screwed a new startup will be if they have to devote $60k to a pentest before even getting off the ground. That would have sunk Apple, for example. I don't think most people recognize or appreciate how brittle startups are at the very beginning.

1. Apple grew up in a different world.

2. For consumer wares we are talking about an inspection, not a pentest, of whether or not they conform to basic security (SSL, an update mechanism in place, salting and strongly hashing passwords). A lot of people (including me) learned this stuff from lurking on the internet. It could even be a self-audit, making you financially liable if you cannot document your security when something breaks ("we hereby certify that the product fulfills the following basic standards: strong hash and salt => using NaCl", etc.).
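The "salting and strongly hashing passwords" baseline really is cheap to meet. A minimal stdlib sketch using scrypt (the NaCl approach mentioned above would look similar with libsodium's pwhash; the cost parameters here are commonly cited interactive-login settings, an assumption to tune for your hardware):

```python
import hashlib
import hmac
import secrets

# Scrypt cost parameters: an assumption, tune for your own hardware.
SCRYPT_N, SCRYPT_R, SCRYPT_P = 2**14, 8, 1

def hash_password(password: str):
    """Return (salt, digest) from a memory-hard KDF; store both."""
    salt = secrets.token_bytes(16)  # unique random salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("wrong guess", salt, digest)
```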

3. I have been involved in enough startups now that I know how fragile they are. And bluntly put: so what. Society makes rules, businesses conform. If it is properly enforced, it just becomes another thing everybody has to do, not a competitive disadvantage (long term, even an advantage, if your economy runs on secure devices).

>You're also assuming that there are two states: "secure" and "not secure." You're further assuming that there is a way to transition from one to the other, by "becoming secure" through some state-mandated process. But it just ain't so. No matter how much money you throw at it, you can only increase security, you cannot prevent security problems. If you've shipped code, you've probably introduced some security problems. We should still try to improve the situation, but it is nearly impossible to make software secure. And in the meantime, it's the perfect tool for competitors to stamp out competition, since only incumbents can afford to be labeled as "secure" (when they're not).

No. I am assuming there is "obviously insecure", "not obviously broken" and a spectrum of "secure" which depends on your threat model. For the latter I agree with you, it cannot be defined. But you CAN define a bottom and bitch slap anyone into bankruptcy who thinks they can screw with that.

>I don't think it would significantly curb the power of DDoS attacks, both because DDoS attacks are an inherent problem with the web's design and because hackers are always finding new and innovative ways to increase their DDoS power anyway. Some IoT devices aren't going to make a huge dent in their abilities in that regard.

I disagree with that, simply because there will be LOTS more IoT devices than we have now. Bruce Schneier agrees with me: https://www.schneier.com/blog/archives/2016/10/security_econ...

> What would help is if pentesting became a regular occurrence due to massive cost reduction. It shouldn't take $60k for a pentest, but it does. The way to achieve this is through open-market competition. Regulation will only raise the price.

Moot, because I specified earlier that we are not talking about full pentesting. But even if not: first force EVERYONE to get it and drive up the prices; then people will start learning it, drive the prices down heavily, and start to automate. Look at web developers. Once prized, now a dime a dozen (compared to before). Pentesting is no different from anything else; it can be tiered, with the lower tiers more and more automated and pressed into frameworks.

> Regarding your first point, I could take the easy route and say nothing. It's tragic when anyone dies. But it's also a fact of life. Many people will die to self-driving cars, yet there will be many thousands fewer deaths thanks to them. The idea that a single human life is more valuable than raising the standard of living for everybody is as strange to me as it is to you that a single life isn't the most valuable thing.

We are not only talking about some people dying vs. a higher quality of life. We are talking about millions of people possibly being vulnerable to extortion, data theft, degradation of services through DDoS, etc., ON TOP of people dying vs. a slightly slower rollout of a higher quality of life.

> Obviously, my perspective would probably change if someone close to me died due to some fool's bad software. But in that circumstance, the justice system would be as available to me as it is to you, and it was designed for just such an occurrence. It would not offset the heartbreak I'd feel, but at least society has processes in place to do something.

Can't comment

> > this is me seeing the implied "but we can deal with it worse if startups fail"
>
> This seems self-evident: Many of the enjoyments we take for granted are thanks to startups. Our quality of life has dramatically improved due to the technology they help usher in. Technological progress is not nearly as inevitable as everyone would like to believe, and it's easy to forget how much better our lives are thanks to it.

This is one of the biggest myths... Startups are good at developing products. Established companies, universities and institutes are good at developing new technology and driving science forward.

The latter is non-obvious, fragile and in decline. The former is driven by making money, and is a lot more self-deterministic.

"From scratch" startups have given us (if we simplify a lot) PCs, Amazon 1.0, numerous SaaS products, etc., all making previously very cumbersome and clunky tech viable for the market. Which is a huge risk, tremendous work, and deserves all the money they make.

But the PC was done before at Xerox, Amazon could only start doing research after it had become the retail giant it is now, and Intel, Google and almost all other highly innovative companies had their roots in academia, being either spin-offs or PhDs applying their knowledge to business. Silicon Valley exists mainly because of the US military (not only, but especially because of DARPA), not just because of some "self-made businessmen" taking risks (they made huge contributions I don't want to disparage, but "great person" history is strong in our circles). And some of the largest contributions to our way of life had nothing to do with business at all (Tim Berners-Lee, Linus Torvalds and above all Richard Stallman don't get nearly enough recognition IMO. RMS especially created a whole new world of value creation by popularizing the idea of software freedom and making sure generations could learn freely). Startups don't contribute to this nearly as much as they like to claim.

Heck, even Peter Thiel is moaning about startups not really tackling high-tech issues any more. SpaceX and Tesla are some of the rare shining innovative lights, but a lot of the value that startups provide comes from applying existing tech to old processes (Salesforce, Netflix, Uber, Airbnb, Zenefits...). IoT will probably be in the second category, not the first.

So if they are only going to modernize existing processes with new tech, let's take the time to force them to do it right from the beginning.


> For consumer wares we are talking about an inspection, not a pentest whether or not they conform to basic security (SSL, update mechanism in place, salting and strongly hashing passwords).

This is a pentest. Whether you call it something else is irrelevant.

> Moot because I specified earlier we are not talking about full pentesting. But even if not, first force EVERYONE to get it, drive up the prices, then people will start learning it and drive down the prices heavily and start to automate. Look at web developers. Once prized, now a dime a dozen (compared to before). Pentesting is no different from anything else, it can be tiered, the lower tiers more and more automated and pressed into frameworks.

This isn't true. Pentesting is fundamentally different from programming, in ways that are subtle and non-obvious. It's not something that can be automated, for the same reason you can't determine whether an arbitrary program will halt.


I don't understand how pentesting can't be automated.

In what way is it not possible to write a script that launches a battery of tests?
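A battery of scripted checks can at least cover the "obviously insecure" tier discussed elsewhere in the thread, even if it can't replace a human pentester. A toy sketch over a hypothetical device-config dict (the keys and rules here are illustrative, not any real standard):

```python
# Hypothetical automatable hygiene checks; a real scanner would probe
# the live device, this just lints a config snapshot.
DEFAULT_CREDS = {("admin", "admin"), ("root", "root"), ("admin", "1234")}

def audit(config: dict) -> list:
    """Return a list of findings for the obviously-insecure tier."""
    findings = []
    if (config.get("user"), config.get("password")) in DEFAULT_CREDS:
        findings.append("default credentials")
    if 23 in config.get("open_ports", []):
        findings.append("telnet enabled")
    if not config.get("tls", False):
        findings.append("unencrypted transport")
    if not config.get("auto_update", False):
        findings.append("no update mechanism")
    return findings

print(audit({"user": "admin", "password": "admin",
             "open_ports": [23, 80], "tls": False}))
# → ['default credentials', 'telnet enabled', 'unencrypted transport', 'no update mechanism']
```

The counter-argument in the reply still stands for the upper tiers: deciding whether novel logic is exploitable is not mechanically decidable, but this bottom tier plainly is.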


What? Are you serious?

If some startups failing is the price we have to pay to get proper security around IoT devices then I'm all for it.

Startups aren't the be-all-and-end-all of technology.


Actually, in many ways they are. It's one reason the US is in a superior position today vs. other countries. Some technology would not happen at all without startups, such as Airbnb.

You don't need to go far to find reasons why regulation can be a terrible thing. Here's an example from five hours ago: https://news.ycombinator.com/item?id=14103008

If regulation is so bad, why are there so many people stamping out any opposition to regulation, and reacting with expressions of shock and disbelief that anyone could be against them?

Because in recent years, it's become fashionable to hate business. You see it everywhere, from /r/LateStageCapitalism (a subreddit whose explicit purpose is to hate rich people the same way some people hate minorities), to the PR outrage at Uber (who certainly deserved it, but now everyone is gleefully waiting for Uber to burn), to this creeping distaste for anyone who would seriously consider startups' interests over the interests of the individual.


Well, excuse the business hate, but people are being fucked by various "entrepreneurs" daily. From startups "socializing risk, privatizing profits", through planned obsolescence and crap products, to daily shenanigans like your corner grocery store washing its stale meat with dishwashing fluid to make it smell fresh. Many of us have seen the giving end too, having witnessed such practices at our workplaces. So I apologize if I don't trust a random business by default. I definitely distrust startups by default, because their typical business model is this: lie to people, telling them you care about your product, when in fact all you want is to make growth curves good enough for a profitable exit, and fuck users after that.

As for Uber, yeah, I'm gleefully waiting for them to burn down and die, not because it's fashionable (it wasn't when I started waiting) but because their business practices have no place in a civilized society, and the longer they're alive, the more damage they do to the public perception of the rule of law.


I sincerely hope you belong to the class of people making money off skirting regulations and immoral business practices. You know, one of the people who benefits from the US being what the US is. One of the 'rich people'.

Because if you are not, if you are actually one of the plebs, this adoration of the corporate boot and arguing against your interests as a customer is pretty sad.


I read 1984 when I was in my teens, and I remember being let down by it. It was so outlandishly comical that it was impossible to take it seriously. Then I was incredulous that more and more people were saying 1984 was starting to match reality.

Remember the "daily hate"? That's pretty much what's going on here. Not only do people thoroughly enjoy the act of hating something, but entire communities form whose explicit purpose is to do this. And of course, no one wants to admit that they're hating people, so they construct comfortable euphemisms and say they're hating systems, not people. It's a "corporate boot," not a person.

It's much easier to frame anyone who interrupts your daily hate as an outsider than to face the reality of what you're doing.


So your point is that we should not criticize despicable business practices, because those businesses are headed by people. And we should not 'hate' people because that is bad, I guess?

Definitely, next time we catch Nestle aggressively pushing their formula in Africa or private prison companies bribing judges to jail kids for minor offences I will stay my outrage. Of course, the companies are not just faceless abstracts, they are people, that absolves them of all the guilt.


FAKE NEWS! WE'VE ALWAYS BEEN AT WAR WITH EURASIA!

You misunderstand 1984 bigly. SAD.


> Because in recent years, it's become fashionable to hate business

Hmmmm, funny that, I can't imagine it has anything to do with people being fed up with being ruthlessly and endlessly fucked by businesses for so much as a minor gain on their bottom line.

Being in favour of regulation != Implying that all regulation is good and perfect. Regulation can be hard to get right, but it's there to protect the interests of the public (which should necessarily override any corporate interest).

Case in point: we now need net neutrality regulation, to ensure that we preserve (ironically) some of the very things that make the internet so great: low barrier to entry, fair and equal access etc etc.

Want to know what happens when Uber squeezes its competition out and then runs out of VC money? They'll shamelessly extort the money out of the consumer. So yeah, I'm pretty happy to watch them burn.

> creeping distaste for anyone who would seriously consider startups' interests over the interests of the individual.

But why are they so deserving? What makes a startup special that they would be preferenced above people and communities?

> /r/LateStageCapitalism (a subreddit whose explicit purpose is to hate rich people the same way some people hate minorities)

If by "hating rich people" you mean "taking issue with rampant exploitation of people and resources by corporations", then yeah, is say they have some pretty legitimate complaints.

> Some technology would not happen at all without startups, such as airbnb

Right, because only startup culture would have come up with the revolutionary idea of a bedsit. Truly an idea we've never before had in history.


What is the new innovative technology that AirBnb has delivered?


They delivered an innovative business model. Not every company is a technology company.


> Yes, security failures are pretty bad, but startups not happening thanks to excessive regulation is probably worse.

How so? I mean, I'd rather avoid either, but if I had to pick one, I don't get what's so special about "startups" that they're more important than data security.


People starting new businesses is usually a good way for a society to become wealthier and to also raise the standard of living.

In this particular case, I have no idea whether or not the safety risks/externalities are severe enough that added regulation is needed.


Please point out the graph and/or story that shows current startups raising the general standard of living for citizens in the middle/lower classes?

(No, "they can ride in a freelancer-driven taxi" does not count as raising the standard of living.)


Well, most of the things you enjoy today were at some point created by entrepreneurs starting companies; whether VC-funded startups (or whatever we called them in the past) or regular companies. There is a strong indication that the easier it is to start a profitable business, the more people will do so, create new products and services, and thus improve standards of living.

But there need to be boundaries, because not every business model is good for society. As a thought experiment for those thinking it's better to have deregulated startups in high-tech industry, imagine taking down biosafety regulations. Think of all the biotech startups blooming...


Simple solution, fix CVEs in a timely manner.


For the CVE points, I'd expect enforcement to be difficult. It might also create perverse incentives: if you've neglected to patch, it might be better never to patch in the hope no one notices, since patching late would draw attention to your not having done so promptly and result in fines.

As for the automatic updates, personally, I view the vendor as the biggest threat. I'd prefer they didn't have any access to my gear. Which is one reason, in addition to just never having seen anything useful, I don't have "smart" appliances.


> For the CVE points, I'd expect enforcement to be difficult.

Ever used Kali Linux? Have a gov org scan for vulnerabilities and fine companies. Not difficult at all.
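To make this concrete, here is a minimal stdlib-Python sketch of the kind of TCP connect scan such an agency could automate (hypothetical function names; real scanners like nmap additionally fingerprint services and cross-reference them against CVE data):

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Demo: bind a throwaway listener so the scan finds something deterministic.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0 -> the OS assigns a free port
listener.listen(1)
port = listener.getsockname()[1]
print(open_ports("127.0.0.1", [port]))  # the listener's port shows up
listener.close()
```

Sweeping public IP ranges this way is already routine; Shodan does the scanning half commercially, so the missing piece is the legal mandate, not the tooling.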


No. The government should NOT be doing any of this. Yuck


Your solid arguments have swayed me /s


>> It seems simplest for the Govt to avoid trying to mandate detailed security standards for continuously changing tech

>Govs can do a lot of broad legislation rules that is non-specific.

>The software industry requires a legislative bitch slap like the auto industry received. These rules would wreak havoc on the industry but if you ask me for the better.

>- Are you running unpatched software exposed to the internet for which CVE patches exist? Pay a fine every day until you do so.

Just to play devil's advocate, I'll try to pick the harder cases, not the easy ones.

Define unpatched.

Does this include free to use websites?

What about non-profits?

What if a CVE does not exist? (Yes, you said "for which patches exist" - why is this an exception, how well is this exception going to hold up in the legislative session, and if it's not explicitly made as a distinction, how is it going to hold up in court?)

What about software I host for free and let you download, But the hosting software itself does not have CVEs?

What about software I let you automatically download and automatically update, and is exposed to the internet from your machine, by default or by configuration, but is not installed by default? What if I'm the manufacturer of the device?

What if I'm the manufacturer of the device in the previous example, and I give access to install the software through my portal, but I don't own the software, nor have the ability (because I don't have the source) or the legal rights to patch it without the owner's consent (a la iOS)? In this case I do have the legal right to remove it; how does that interplay with this?

What if in the above scenario, the software manufacturer is outside your jurisdiction? How is that decision going to affect the software industry in your jurisdiction?

>- Ban IoT devices that do not have automatic signed software updates over encrypted channels (which would probably ban all current IoT devices).

How does that affect the right to repair if that was passed as a law?

>- Ban all IoT devices without crypto capabilities. Must have a hardware RNG and a set of standard crypto algorithms.

Who defines the standards, both for the hw RNG and the algos? The US gov't has been shown to be more or less subversive to good crypto. If you select another org, a nation state with said resources can just neuter that org by infiltration.

How are you going to verify that the standards are actually being followed? Refer to the previous question, Snowden, and Reflections on Trusting Trust - and ultimately, even if the end user had the source, how does he verify that's what is actually being executed? Most end users won't go through this effort, but if nobody can, then you're essentially not verifying anything.

How do you verify compatibility for this standard? Generally standards that succeed, do so by defining a common sense test suite. Where does that fit in?

>- Does an IoT maker have a CVE and has not patched all their devices in X amount of time? Daily fine.

Does the fine increase per day? Why or why not? If yes, how much and why that much?

How much is the fine, is it based on revenue, profit, units shipped etc?

How do you pick the amount of time? I bring this up mainly in reference to

    a) The Project Zero disclosures that seemingly didn't give Microsoft enough time, and yet if you don't set a deadline... 

    b) What about products like Android that have security patches that aren't deployed to end users due to the product org, manufacturer apathy, and carrier blockades? Is there a fine, there was a patch... Who gets the fine?

How do you levy it on firms in other jurisdictions that don't let you levy fines like China?

What if the devices are meant to be offline for a certain period of time? How much is acceptable before a device needs to call home? Specifically, what about industries where the devices will eventually sync, but may not for an indefinite period of time (military, mining, heavy construction, anything done in the middle of nowhere)?

What if not all devices are accounted for? More generally, how are you counting devices?

Referring to the previous question, how do you count devices that were discarded by the owner, but not reported to the manufacturer?

>- Are you a vendor who has not patched a CVE for your software after X amount of time? Pay a fine every day until you do so.

Define vendor. Software or Hardware? Does FOSS count?

Otherwise same as previous.

These questions are meant to get you thinking, not to be argumentative.

edit: formatting


What about the hundreds of millions of Windows computers that spread most of the viruses, malware and ransomware? Is there a fine for Microsoft's negligence? What about the millions of zombie Windows computers launching DDoS attacks daily? Is there going to be a fine for that too?


> Define unpatched.

Software that has a vulnerability, for which the vendor has made a fix available.
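A naive sketch of how that definition could be mechanized (hypothetical helper names; real CVE feeds express affected products as version ranges in vendor-specific schemes, so this is only the happy path):

```python
def parse_version(v):
    """'2.4.10' -> (2, 4, 10). Naive: ignores pre-release tags and epochs."""
    return tuple(int(part) for part in v.split("."))

def is_unpatched(installed, fixed_in):
    """True if the installed version predates the release that fixed the CVE."""
    return parse_version(installed) < parse_version(fixed_in)

print(is_unpatched("2.4.9", "2.4.10"))   # True  - fix available, not applied
print(is_unpatched("2.4.10", "2.4.10"))  # False - already patched
```

Even this toy shows where enforcement gets hairy: backported fixes (e.g. a distro patching 2.4.9 without bumping the version) would trip a pure version check.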

> Does this include free to use websites?

Obviously. A compromised server is a vector for other attacks, no matter whose it is.

> What about non-profits?

Legal status of the organization is entirely irrelevant.

> What if a CVE does not exist?

You can't legislate that all software must be free of security issues. The CVE pathway is the industry standard and the best practice we have. It's not foolproof, but it's good enough.

> (yes you said for which patches exist, why is this an exception,

Because the makers will also be liable to provide patches.

> how well is this exception going to hold up in the legislative session, and if it's not explicitly made as a distinction, how is it going to hold up in court?)

Have a public DB of CVEs, plenty of which exist. I fail to see what the problem is.

> What about software I let you automatically download and automatically update, and is exposed to the internet from your machine, by default or by configuration, but is not installed by default?

Software you put on the internet is your responsibility. The manufacturer must provide a pathway to update, you must make use of it.

> What if I'm the manufacturer of the device?

Provide patches for all CVEs that can be used on your devices. Otherwise fines.

> Who defines the standards, both for the hw RNG and the algos?

Plenty of very good crypto algos exist. Pick a group and give manufacturers the option to implement a subset.

> The US gov't has been shown to be more or less subversive to good crypto. If you select another org, a nation state with said resources can just neuter that org by infiltration.

There are branches of government. It's not one single thing.

> How do you verify compatibility for this standard? Generally standards that succeed, do so by defining a common sense test suite. Where does that fit in?

New agencies have to be made, just like there is an FDA and an agency that certifies cars so they can be used on roads.

> Does the fine increase per day? Why or why not? If yes, how much and why that much?

> How much is the fine, is it based on revenue, profit, units shipped etc?

Details that are really not an issue.

> How do you pick the amount of time? I bring this up mainly in reference to

"Not enough time" is complete bullcrap. Give them a month and the fines start. Google made a fucking disaster with Android. Anyone attempting to do anything similar today should get fined in the billion range.

> How do you levy it on firms in other jurisdictions that don't let you levy fines like China?

If they are selling products in a market, that market's government can legislate standards. If you use something for free, you are liable yourself.

> What if the devices are meant to be offline for a certain period of time? How much is acceptable before a device needs to call home? Specifically, what about industries where the devices will eventually sync, but may not for an indefinite period of time (military, mining, heavy construction, anything done in the middle of nowhere)?

Then those devices aren't really a security issue and a danger to the rest of the Internet.

> Referring to the previous question, how do you count devices that were discarded by the owner, but not reported to the manufacturer?

Whoever is connecting the device to the internet is liable. Devices must have the capability to call home (manufacturers servers) and update if necessary.


> Then those devices aren't really a security issue and a danger to the rest of the Internet.

They aren't connected regularly, but they are connected eventually. This distinction can be used to great advantage by attackers if the security side ignores it. Also, you are now forcing devices to be connected to the net eventually and call home or they brick themselves? This seems customer/user hostile to say the least.

> Whoever is connecting the device to the internet is liable. Devices must have the capability to call home (manufacturers servers) and update if necessary.

I was under the impression that the manufacturer was responsible in this scenario, not the user.

At any rate, I was more interested in trying to get you to think through some of the implications, and to see how well you had thought this out.


> The software industry requires a legislative bitch slap like the auto industry received. These rules would wreak havoc on the industry but if you ask me for the better.

When did that ever happen?


It's actually not that hard to find a framework for a solution.

If you sell anything that connects to the power grid, you need to get it UL Listed. No exceptions. You cannot sell it if it doesn't check out in the lab.

If you sell anything that emits radio frequencies either as part of its fundamental operation, or as incidental to it, you must get FCC certification.

Why isn't there an agency like this, or a subdivision of one of those existing ones, that certifies things for being able to connect to the internet? Right now there's absolutely no way for an ordinary consumer to know if that product they're buying is rock solid or absolute trash.

Enormous corporations like Samsung consistently crank out some of the most appallingly slipshod code, while relatively tiny companies like the one behind the Pebble have a considerably better track record. There are no external signs for non-technical people to work with here. They'll need to hope and pray that their product isn't a total turkey.


I could get behind an "FCC certification" style program for internet connected devices.

I don't necessarily agree with a lot of the commenters saying it must have secure crypto, or that they need to be 100% liable for all issues down the road, but at least making sure the worst offences are covered, and there is a big database of these devices will go a long way to being able to do things like recalls, and holding the manufacturers responsible if there are major problems.

Like you said, it's worked wonders with RF regulations. You don't ever see products that mess up and flood the RF spectrum with trash ruining everything else in the area, and I have a suspicion that's due to those certification processes.


It doesn't have to be a perfect process, or even one that's "feature complete" when it launches. It just needs to be something people can latch on to.

Like if a previously certified device is found to be full of holes because of a new attack vector, that vendor can be forced to address that issue or lose their certification. Right now the consumers have no recourse other than an ugly class-action lawsuit.


How many people in the world have studied crypto? Radio and EE are both areas studied seriously by an order of magnitude more people. Does crypto take a higher level of cognitive overhead to enter, or do people only think it does? Is there a crypto community structure similar to amateur radio?


It doesn't have to be crypto. Right now a basic check done by a post-grad student could probably catch 90% of the most glaring problems.

Your dishwasher has an HTTP port open? That's not good. Your webcam uses UDP hole-punching to open up a VPN to some random server in China? Yeah, we're not certifying that.

We're talking about basic port mapping and monitoring of the device in operation to make sure the network activity observed has some reasonable correlation to the actions performed on the device. Most consumers don't have the tools to do this, nor the patience to perform these tests, but these tools are cheap and easy to use by anyone with the inclination.

You can do additional fuzz testing quite easily, plus inspect the data stream for plain-text data that should be encrypted.

If as part of the certification you needed to submit a document describing what encryption methods you use and what system you have in place for dealing with compromised keys and such, great, but I don't see that as strictly necessary. Considerable improvements to the marketplace can be made with very simple tests.

This is the equivalent of plugging in an electrical device, observing it catches on fire, and giving it a failing grade. No specific knowledge of electronics is required.
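The plaintext check in particular is cheap. A rough entropy heuristic in stdlib Python (a screening test, not proof - compressed traffic also scores high, so it can miss things):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~8.0 for encrypted/random traffic, far lower for text."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

print(shannon_entropy(b"password=hunter2&user=admin"))  # low: plaintext-like
print(shannon_entropy(os.urandom(4096)))                # near 8: ciphertext-like
```

A certifier watching a device's traffic could flag any sustained low-entropy flow leaving the LAN as "probably unencrypted" and demand an explanation from the vendor.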


All the software you use likely already came with a liability waiver / disclaimer:

> IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.


And regulations can make all such waivers nonbinding in whatever cases are deemed important enough, like IoT devices for example.


The key point being

> UNLESS REQUIRED BY APPLICABLE LAW

Not to mention that liability waivers and disclaimers can never supersede the law anyway.


>Anyone, especially lawyers, have insight on best way to fix this problem?

Why lawyers? This is a technical problem. Lawyers will come up with bad legal solutions that will already be outdated and probably too onerous by the time they are finally passed. I think the fix is obvious: IoT devices should not have an open connection to the internet.

You want to control the lights in your house with your phone while you're at work? VPN to your home network and do it. Obviously people won't know how to configure a VPN, and you can't expect a Chinese IoT manufacturer not to have a web server listening on a standard port or to issue weekly security updates for the lifetime of the product, but this could be automated by smart routers like Google Home/Wifi.

Frankly, this is probably going to happen anyway. OS vendors have largely removed the OS as the most vulnerable component of a home network. All OSes are either locked down (Android/iOS) or have continuous security updates with sane out-of-the-box defaults. That leaves the router as a critical weak point. The cheapo $30 Staples router may work great for a household, but it isn't getting monthly security updates and is probably running on an unsupported kernel with a whole swath of zero-day vulnerabilities.


Have you ever had to follow coding standards that are 10 years out of date? I have and it was worse than working retail. Technology moves too fast. (And for what it's worth a VPN is a non-solution to the wrong problem in this case).

Legal is absolutely the right place for the problem. Put the liability in the right place and companies will figure out what works and what doesn't.


>Have you ever had to follow coding standards that are 10 years out of date?

I have. Our product is 10 years old. 10 years is nothing. And I'm not sure what your point is, any regulation is going to take years to develop, and years to be assimilated (if ever).

>Put the liability in the right place and companies will figure out what works and what doesn't.

Uh huh. You're proposing creating regulation to govern thousands (or hundreds of thousands) of products globally, and you think that's easier? The closest example of a regulatory solution to a real technical problem is the EU mandate of cookie notification ... which was disruptive for everyone and insanely stupid and pointless.

>(And for what it's worth a VPN is a non-solution to the wrong problem in this case).

It's a solution. The problem with IoT devices is that they are open for anyone to compromise and you cannot expect manufacturers to support them with security updates indefinitely. You put them on a private network with a controlled end-point and you're 95% there. Instead of needing people to vet every device they buy on Amazon (or creating a regulatory body to govern and control them), you just need to convince them to get a 'smart' router. You don't need every IoT manufacturer in the world to play ball. You don't even need every consumer to play ball either. Your phone will have an app that will talk to your home router/gateway and you can access your home service via a VPN you don't even know is there. Makes a ton of sense to me. Regulation would be a mess.


> Uh huh. You're proposing creating regulation to govern thousands (or hundreds of thousands) of products globally, and you think that's easier? The closest example of a regulatory solution to a real technical problem is the EU mandate of cookie notification ... which was disruptive for everyone and insanely stupid and pointless.

I'm proposing not legislating any technical matters. Just legislate where the liability lies, make it clear who has what kind of security responsibilities and what the penalties are when they fail. Businesses are good at responding to appropriate financial incentives.


How do you legislate security standards - especially across the general population of products and in a way that would be applicable to all suppliers? And how do you quantify liability due to a security vulnerability in some sub-component or somewhere in the software stack? How do you assign liability if thousands of different IoT devices are compromised and DDoSing the White House page? How do you deal with bankrupt or foreign suppliers? Or people using products that have been end-of-lifed? Or how do you keep regulation up to date with current security standards? This is an incredibly hard problem.


I don't know how (I'm not a lawyer), but it seems like our legal system already deals with the same kind of issue for other things? Suppose faulty wiring in an appliance leads to an apartment building burning down - at that point you could have the same kind of issues in terms of it being a sub-component from a foreign supplier, end-of-lifed, affecting different people and so on. But somehow we deal with it.


I disagree, the immediate problem is not developing secure code, the problem is nobody wants to pay for security, and that's because it's cheap to slap on a disclaimer.

There's no point coming up with technical solutions that companies will never spend $$$ to implement.


Disagree with what part? Like you I don't see a regulatory security mandate as a solution. Security is hard and expensive and it just isn't feasible to expect every crappy IoT toaster to go through a security review.


I disagree where you say that it is a "technical problem", because that's not the real bottleneck at this point.

You can tell because there are many security issues (e.g. plaintext passwords) where relatively easy technical solutions have existed for a very very long time... Yet they still aren't being consistently applied.

Why? Because the root-cause is a matter of incentives, rather than a matter of inventions.


> Why lawyers? This is a technical problem.

You need to push the industry into solving this technical problem, it's not happening on its own.


>it's not happening on its own.

How do you explain pretty much every standard that governs the web and computing in general?


There's nothing stopping civil suits for damages resulting from poor security now. If your product does harm, you aren't absolved of liability because the internet was involved.


There are disclaimers all over the place for exactly this situation. You'd need to outlaw these.


You can't waive your liability for negligence. A disclaimer is legal armor, but it's not impenetrable armor.


All the ones used in open source licenses? We're playing with fire here.


Making vendors liable runs into a problem when they spin up a new incorporated entity every year. The company that made it is now bust! Who is liable?


The owner. If you can't patch it, put it off the grid. If owners become liable they will care whether companies will be able to provide fixes in the future.

A somewhat radical solution: if an IoT device has vulnerabilities, you are free to brick it (= make the device non-functional, nothing else). This would put strong incentives on secure devices.


Security should be embedded in the IoT frameworks and should be easy for it to become the standard practice.

But most IoT stuff is hacked together, rarely using anything standard, and even when there is a framework involved, it seldom has security as a main feature.

Even when it does, it's still a lot of work. Take crossbar.io, which is my go-to tool to communicate within an IoT context (or anything soft real-time, really). To secure it you need to:

- set up the TLS certificate. The default communication transport is an unencrypted websocket.

- configure the provided authentication service (and write a backend for your system).

- declare several realms to isolate the clients, and configure the permissions accordingly (default permissions are YOLO, to ease the "hello world", which I understand). Make sure you don't expose important RPC to the wrong clients or allow anybody to declare callbacks.

- manually code the procedure to use their hot reload system to swap code updates. It's made for local updates, not remote ones.

- be very careful when updating your clients. Crossbar routed RPC is transparent and it's tempting to replace a call from JS to Python with a call from JS to Postgres to remove a layer of indirection. But do you make proper permission checks in your SQL? Are you sure you don't expose too much?

So basically, you can make it secure. But only if you know what you're doing and don't have a deadline tomorrow.


> Security should be embedded in the IoT frameworks and should be easy for it to become the standard practice.

To be fair, almost all of the security libraries suck. The only thing which is SMALL and solid is DJB's TweetNaCl (http://tweetnacl.cr.yp.to/)

If I'm running on a Nordic nRF51 series, for example, things like SSL/TLS are a HUGE chunk of my RAM, ROM, battery, and time budgets. This exploit is a good example. Even if you wanted to use something like a public/private key system, it's not clear that the Atmel SoC could handle it.

In addition, there are still gaps in security libraries that we need. We don't have a good PAKE (password authenticated key exchange) library, for example. HomeKit standardized on SRP with a 3072-bit key, and then discovered that it was too heavyweight and slow for devices working with a lithium coin cell battery. Even Microsoft with AllJoyn had to deprecate SRP and switch to a non-standardized elliptic curve key exchange to better match tiny hardware.

The crypto folks are falling down on the job here. These things aren't standardized, and they don't seem to have been beaten on very hard. And they certainly haven't been tested on small hardware very much.

Everybody can bitch about security, but until someone figures out the tools required for these small systems, it's going to remain the wild west.


Well you said it.

There is a reason IoT is not secured: it's hard to make thousands of connected devices secure when they have scant system resources and sit on foreign networks in heterogeneous contexts.


Then simply do not connect thousands of such devices if you can't handle it…


Yeah. And to avoid theft, simply don't acquire things that are not yours.


A better comparison would be: to avoid theft, don't build your house out of toothpicks that can't support a deadbolt door.


And yet we built houses out of such materials for thousands of years.

Security has 2 problems--technical and social.

The technical problem will eventually get solved as transistors are almost free. We are integrating hardware accelerators into almost everything since transistors are so cheap.

The social problem isn't so easy. Companies don't give a crap about security. Only when companies start losing 25% of their stock price after a breach will they care.


While reading this, I went looking for the ZLL master key. What surprises me is that it got DMCA'd everywhere, including Hacker News:

https://news.ycombinator.com/item?id=9249841

And there was really no outrage about it. Very strange.


It's amazing how 16 bytes of data can be under copyright.

Not arguing, just pointing out how there can be a DMCA request for something so small, citing copyright laws.

Reminds me of the AACS controversy, when people started printing keys on t-shirts and illegal numbers were born.


John Cage's 4'33" is silence that is under copyright, and has been the subject of legal controversy.

http://edition.cnn.com/2002/SHOWBIZ/Music/09/23/uk.silence/


Different issue.

4'33" is a recording of the audience, not silence. I studied it in a music class in school.

Cage's estate's infringement claim on "silence" was not upheld by a court.

The defendant on that suit put Cage's name on the album as a songwriter, of his own accord.


Just because someone sends a DMCA doesn't mean there is a valid copyright.


Oh yeah that was a miswording on my part. I wanted to say it can be claimed using copyright laws.


Googling for "ZLL master key" results in the key being visible in the second hit, a Reddit link. You don't even have to click the link, it's in the preview.


Ah, nice. I was actually looking for the DMCA notice. I wanted to see what they wrote to justify the takedown, and whether or not anyone counter-noticed them.


It seems to start with 9F 55


It seems to end with EE 31


I don't work in the IoT department but they use our chips and I can guarantee you that if you make security a legal requirement my company will not hire more engineers, they will hire more lawyers.


OK, but then lawyers will say to hire more engineers.


I genuinely can't tell if you're joking.

In case you're serious:

No, they will work on some legal dodge to avoid the liability. The company does not see it as a technical problem.


OK, I was joking, a little.

If the regulations are crafted properly, legal dodges won't make the nut. Firms that go that route will fail.


Companies do lots of technical work to comply with regulations. It's silly to propose that they don't.


previous discussion was here: https://news.ycombinator.com/item?id=12893793

Though the report does say it was recently updated, unsure what the diff was.


Yeah... Is there a good way to find the diff? If only research papers were published on git :P


ZigBee / Z-Wave are just getting started in their WEP everything stage. Most of the installed base is unencrypted. It will take many more years until they are at WPA2 levels of robustness.


Wow. That's awesome. The first ZigBee worm, I think.

Think of the art that's possible with this. You could create city-scale images. Maybe larger, in high-density areas.


IoT is a bad vision for the future. 20 years from now I don't want a million devices in my home running software. Either they'll all constantly be pestering me with updates that break functionality I rely on, or they'll be out of date with bugs and security holes that last forever.

My vision of a good future is one where I have exactly one smart device: a robot butler which will operate all my other devices. I don't need a smart lock if the butler unlocks the door for me. I don't need a security webcam if the butler monitors the house while I'm away. I don't need a smart thermostat if the butler sets it for me. Etc.


Meh, I'm much more interested in a future where all my devices aren't "smart" but they all include an API contract regulated by an org akin to UL or FCC backed by legislation providing legal remedies to security and usability deficiencies. It's not about _if_ legal regulation will come to software but _when_, and the further in front of it hackers are, the better the future can be.


It's too cheap to just put a CPU into things not to do it. Your USB flash drive probably has an ARM chip in it, because it was cheaper than making specialized circuits for it.


Over half the cells in your body are microbes that are not part of your human body. Some helpful, some harmful. That is our digital future.


I don't know why, but I kind of want to see a truly gargantuan IoT debacle unfold at this point.

Something beyond stupid, and completely preventable, and all the more horrendous, because at this point, it can only be funny.

I want to see something like a TV commercial accidentally trigger a home automation system, which corrupts the operation of a class of light switches, which cascades onto smart microwave ovens, which transmit kill signals to self-driving cars which synchronize with flying cars at which point they all swarm the nearest hospitals and explode, demolishing trillions of dollars of health care, and imploding society because of failed credit default swaps on all of the health care insurance (even obamacare), which then causes automated trading platforms to sell, killing off everyone's 401K's, destroying the retirement plans of all survivors, such that the living envy the dead.

Can we make that happen?

IoT is retarded.


Hyperbole, or just wanting to watch the world burn, aside, why is IoT, as an idea, retarded? It seems to me that having the underlying platform for secured communication to semi-smart technology is good. If my house could intelligently govern itself within a set of parameters I define that fit my life, I bet I could save a few bucks a month on power, waste less food, and help the environment in my own little way.

I do see the idea of IoT with no security and no long-term commitment to the products as actually, technically retarding (we'd be worse off than we are now, for the reasons you enumerated). Could you make an argument for your last statement as to why a good implementation of IoT is bad?


Because it's basically a bunch of gimmicks. That refrigerator with the camera will get me to buy stuff I didn't need because I couldn't see behind the milk. That coffee machine will make me a cup even though I'm too hung over to drink it that morning.

In short, nothing will be intelligent enough to be worth it. I don't believe this for every case, I'm just playing devil's advocate, but it does apply for most things.


> why is IoT, as an idea, retarded?

The usual complaints about IoT as an excuse for surveillance capitalism aside, the key problem with IoT in most products is the (currently obscured) costs do not outweigh the (often novelty) benefits. By benefits I mean actual, significant time or effort savings that need to outweigh the large risks inherent to anything IoT.

> underlying platform for secured communication

That illustrates a big part of the problem. There is no such thing as a "secure platform", because "Security is a process, not a product."[1]

The internet is and will always be an incredibly hostile place. If you plan on internetworking on the shared global network or anything that connects to it in any way, you need to plan on a way to maintain vigilance over the devices you created or are responsible for. This means continuous work into the future[2].

> I bet I could [...beneficial outcomes...]

You're only listing the positive side. To judge IoT properly you also need to enumerate the known problems and possible risks. A few examples of the risks that most IoT devices bring are:

* The other end of the supposed "secure communication" being compromised by governments, criminals, disgruntled workers, etc.

* Bugs (everything has bugs) allowing assholes of the "swatting" persuasion to mess with your power, food, etc. "for the LULZ".

* All that data being logged - even when stored locally - becoming the target for discovery in a trial (maybe involving you, maybe not).

* The manufacturer of your IoT device selling data to your insurance company, or your insurance company requiring that data from you directly (e.g. fitbit data for "cheaper" insurance that now has more ways to deny you coverage).

Those are just some obvious examples. The real problem is that after data is collected it tends to be permanent. Few people have thought through the big risks of plugging their devices into a hostile network. You see the potential benefits of IoT devices, but you also need to consider what some black hat (or script kiddie) will do with all of those devices - and the data they collect - in 10+ years with a clever new exploit.

[1] https://www.schneier.com/essays/archives/2000/04/the_process...

[2] It might be possible to limit this with products that have a limited lifespan and are guaranteed to leave the network.


All the things you listed are things to be planned for. None of them are extremely terrible in and of themselves with the proper vigilance. Even the data logging should be solvable with reasonable laws.

Apply the general argument to personal computers. Anyone can attack your PC. Once pwned, it can yield valuable information. Your IP could be wrongfully associated with a crime, which brings Jonny Law to your door. Given all of this, I still assume you see being connected via a PC as a good thing, since you wrote a response via a browser.

My question was essentially: why dismiss something wholesale? You raise valid things to consider, but I don't think any one of them is a death blow to IoT. They are, at least in my opinion, design considerations for products that make sense.


> proper vigilance

You seriously expect the average person to have anything close to "proper vigilance" with a collection of IoT devices?

> reasonable laws

I'd absolutely love to see strong data protection laws passed, but that isn't likely in the near-ish future. Also, laws don't protect against bugs.

> All the things you listed are things to be planned for.

The worst problems in a new, unexplored area are the unknown/unexpected ones. You believe these data risks are minor - I strongly disagree - but how can you even begin to make that kind of judgment? Data persists and CVEs accumulate with time; how can you be certain that your data (which includes access credentials, e.g. SSL keys/certs, passwords) won't be stolen off some server (or your home devices) 20 years from now?

These are huge, unknown, open-ended risks that could suddenly become a problem at any point in the future.

> personal computers

The PC isn't tied to sensors around the house, with the ability to control various important hardware. The thermostat (Nest) is an obvious example: it should be a trivial device, because simplicity is one of the better ways to guarantee reliability. Adding massive complexity and network access left a lot of people with a freezing house[1]. My PC isn't tied to important things like the thermostat, because adding risk for what is effectively a nerd toy, a social status symbol, and (allegedly) minor heating-bill savings isn't a good trade-off, and it's terrible security.

The PC is a risk, but it can also serve as a place to contain the risk of being connected to a hostile network.

> why dismiss something whole cloth

I'm not: "...the key problem with IoT in most products is the ... costs do not outweigh the ... benefits."

Internet connectivity can work if the benefits sufficiently outweigh the cost of having to actually secure the device and remain vigilant and responsive to new security issues for the lifetime of the device. This is expensive, and approximately nobody is doing that right now. I also find it hard to believe that anything remotely similar to the current IoT toys on the market can ever be profitable enough to pay for their own security. There may be exceptions, of course, but they will be expensive (in some way) and rare.

[1] https://www.nytimes.com/2016/01/14/fashion/nest-thermostat-g...


  ...products that have a limited lifespan and are 
  guaranteed to leave the network.
So, perhaps, something like, say... a four year lifespan? And maybe they "get retired" if they fail to leave the network?

Maybe we could give them names like Roy, Zhora, Leon and Pris...


That is more or less the model I intended. Specifically, I was referring to one of Dan Geer's extremely important recommendations in "Cybersecurity as Realpolitik"[1].

[1] https://www.youtube.com/watch?v=nT-TGvYOBpI ( http://geer.tinho.net/geer.blackhat.6viii14.txt )


How many personal possessions can you think of that cannot operate unattended, but should?

Given that my personal possessions are few to begin with, I have a short list to review.

I honestly can't think of a single one, save my refrigerator, which I do not want buying food for me. To be honest, I don't even like owning a refrigerator. I didn't need one in college, and still don't need one. I don't use DVR, because I don't subscribe to cable TV.

My arms aren't broken. I can get up and turn a light on. Self-driving cars are technically beyond the scope of IoT, even though the "T" in IoT is deliberately vague.

But these are the things in my life, as it is, and not how it could be. The way my life works right now, I spend (at best) 10 hours away from my home, and maybe 8 hours asleep. So possibly 6 hours to reap the benefits of more clutter, automating... whatever.

Arranging and aligning the automated systems that hypothetically maximize my free time itself consumes time to perfect. But wait! Planned obsolescence promises that once I step onto the product treadmill, it will be hard to exit, ensuring cycles of realigning and integrating new IoT systems into my own private ecosystem of personal automation!

We know this, because look at how often we discard our mobile devices, and even our laptops and desktops.

But nevermind that. Maybe I'd value my free time more, if I had more of it. Maybe if I wasn't lashed to a desk all day, working for someone else, living paycheck to paycheck, I'd have more freedom to expound upon all the nothing I can't imagine not doing at the moment, because I'm consistently busy on someone else's terms.

I don't want robots buying shit for me. I don't need robots telling the world which room I just walked into. I'm sick of going to work all day, and sitting in someone else's chair. Fix that, before wasting my paycheck on lightbulbs that change themselves, but never go out anyway, because I'm not even at home 40 or 60 hours a week, and I'm asleep in the dark for another 50 besides.


There are at least two issues you raise in this response, and neither is a direct answer to IoT being an absolutely retarded idea.

First, you say you don't own much, therefore IoT won't help you. That's fine, but it doesn't generalize. It especially doesn't generalize to non-consumer tech of which you'd have little part even if you wanted to own things.

Second, your life appears to be stuck in dire straits. I have no idea why you're stuck in the life you're in. As a result, I have no idea how automation might help or hinder you. Again, it is not a real argument against IoT.


Oh, except it is, because the only reason we constantly see all this IoT press, is because there's a PR machine pushing the idea of consumer-oriented IoT devices. More devices in more homes means more analytics inputs, which means more targeted marketing, which means more brand loyalty and lock-in for key purchases, which secures cash flow for established businesses.

We never see non-consumer IoT tech stories. It's always more bullshit, in aggregate, because the consumer market is huge. So, web-enabled security camera gadgets, refrigerators, light bulbs.

Never industrial control and automation. No SCADA. But honestly, critical systems are the things we DON'T want to see on the IoT, because that's where the IoT fuck-ups wind up causing the most pain.

We DON'T want to see hotel heating and ventilation systems reversing flow, and start sucking car exhaust from the parking garage at 3AM, when most are asleep in their rooms, because there's a default port open, because of a bug, and some asshole thought it would be cool to do that.

We DON'T want to see a dishwashing system's filter check go ignored, because the filter purchasing sub-system fails an SSL handshake, because an old CA is no longer available and a new CA is untrusted, and 200 people get sick because their plates were washed with grey water.

We DON'T want to see a sensor fail on a specific model of water pressure gauge, but, due to the nature of the failure, a recall is eluded, and aquifers are drained because of constant leakage gone unnoticed, because no one was paying attention anyway, because everything's automatic now, and there's no staff to support such a massive wave of recall repairs, because automated plumbing has produced a shortage of plumbers, and there are too few specialists to change the valves and sensors out, and the drought hits, and irrigation fails, and then crops fail, and there's no harvest and then people starve, and then children die, and then, and then, and then...

No seriously. IoT is retarded.


That last paragraph is the really important one to me, concerning automation in general, be that IoT, self-driving cars, or other kinds of connected devices that do with fewer humans the jobs previously done by more.

Case in point: Someone snips the wrong cable and a system that has replaced human operators who would route in-field nurse calls goes down. This is a small system for now with low volume, so an improvised manual process is in place in a matter of minutes. But what if this system was serving 100x more users? This process would not be scalable, and said provider would not have the infrastructure or man-power in place to handle that situation.

Similarly with the idea that in the near term self-driving cars will reduce the need for drivers, not eliminate them completely. It's like some fallacy of averages. What about that week with such shitty visibility that the sensor suites are blind? Does the world completely stop for a few days?


Forget IoT. What's stopping that insane Broadcom Wi-Fi bug from spreading between phones like a virus? There will be plenty of Android phones that are vulnerable to it for years to come.

I kind of hope someone does it so Google finally does something about the Android update situation.


A lot of people are calling for government action against IoT. Think twice, people. You are undermining your own profession (and I don't just mean IoT, I mean software engineering in general). The laws the state will come up with will not be great, they never are, and they will stifle innovation. The internet is pretty darn stable, I don't think we need good old state to tell us how to write software, we will fix our problems ourselves as we have in the past.


What do you think about IoT devices with powerful motors that are able to kill people?

Or conversely, what do you think about current regulations regarding cars and aircraft?



