Hacker News
Schneier: It's Time to Regulate IoT to Improve Cyber-Security (eweek.com)
294 points by pwg on Nov 15, 2017 | 177 comments



Even though many people scoff at the idea of government regulation, the economic incentives in IoT security are badly misaligned, and it's not really clear that the market will fix itself because so much of the damage can be externalized. Does the manufacturer of a cheap, outdated IoT device care if it's participating in some DDoS attack? Or, like Schneier said, do consumers care if they don't notice?

There seem to be some soft mechanisms that governments could explore. Maybe something like demanding opening the source code once security updates for the device stop, so consumers could help themselves. Or at least, an even more modest regulation, simply allowing all consumers to hack their own devices without fear of violating any laws.

One thing that could hurt the market is maybe making manufacturers liable for the damages caused by security holes in their devices, but regulation doesn't have to go that far to make an impact.


> One thing that could hurt the market is maybe making manufacturers liable for the damages caused by security holes in their devices

That's what insurance is for. Something like this was discussed in my torts class in law school, except that was long before IoT devices existed so it was about things like lawn mowers.

The idea is that it might make the most sense economically to make the lawn mower manufacturer liable when users cut off a finger or toe, even if it was due to consumer stupidity instead of any negligence on the part of the manufacturer, because the manufacturer is in the best position to estimate the risks and purchase insurance to cover them.

Presumably, the manufacturer will pass on the costs of those insurance premiums to the consumers. The manufacturer still has an incentive to try to build safe mowers, because if their insurance company ends up paying out a lot the premiums will go up. They can pass those higher premiums on to the consumers, of course, but that will make them more expensive than their safer competitors.

The manufacturer is in a good position to deal with insuring for these injuries because they know how many mowers they are selling. Other candidates have less useful information. For example, a consumer usually has no idea what their chances are of suffering a lawn mower accident, so they have no idea how much insurance is needed. Health insurance companies will have a good idea, in the aggregate, of how many of these accidents occur in a year, but they have no idea which of their customers use lawn mowers.

Done right, this should not hurt a market much.

I'm not sure it could work for IoT, though, because a lot of IoT devices are made by new companies that probably will not be around long. With things like lawn mowers, it could take years before you get around to cutting your hand off, and you can still reasonably expect the manufacturer to be around. Not so with a lot of IoT devices.


> That's what insurance is for

What are the outcomes for using insurance, and what are the outcomes for using regulation? Does anyone know the answers in a technical policy sense (not in a philosophical sense)? They are different tools useful for different problems.

Thinking out loud, insurance seems like a poor solution when people will suffer serious, irreparable harm. If the lawnmower severs a foot, then an insurance payout isn't really sufficient; regulations should prevent that from happening in the first place. More broadly, my point is that there are larger issues than economics.

> Presumably, the manufacturer will pass on the costs of those insurance premiums to the consumers.

In economics, that is not the case. Businesses don't price goods at 'cost-plus'; they don't look at their costs and add a profit margin. Think of the soda at the movie theater on one hand, and on the other the car being sold at a loss because the market is soft; think of software. Like all businesses, the lawnmower manufacturer is already pricing their product at the level that maximizes total sales revenue, which depends on supply and demand; raising the price will reduce revenue (probably because unit volume will decrease by more than the per-unit revenue increases). If they could raise the price and bring in more revenue, they would have done that already.

There is an issue of elasticity: If your customers' alternatives are limited - i.e., if they can't go without your product, if there are limited competitors and substitutes - then you can raise prices more easily. A Van Gogh painting is highly inelastic; lawnmowers are much less so.


On insurance vs regulation -

What isn't being mentioned here is that insurers will require insured companies to do a bunch of stuff in order to remain insured. Have some processes in place, do some things, and so on.

Just like your car insurance isn't valid if you drink and drive, your software company's insurance might not be valid if you aren't using source control and have no testing or code review process.

So in some ways, what you get out of insurance is market driven 'regulation'.

Possibly, because a rival insurance company can impose different conditions, insurers are incentivised to require only the stuff that really reduces risk, while regulators might make irrational regulations driven by moral panics in the press. Similarly, they might be quicker to support new techniques that reduce risk at lower cost, because customers will seek out an insurer that lets them use those.

Possibly, insurers are less susceptible to regulatory capture, where regulator employees have close relationships with industry heavyweights and make policy that helps their business at the expense of consumers and sector rivals. Insurers mostly want to make money.

Regulators might be better sometimes because their staff tend to be very mission driven (they want to fix the problem, not make money). They will get a kicking from the public if there is a big accident and laws and regulations did not prevent it, while an insurer is only punished if they don't satisfy current law. So a regulator might be more proactive.


Excellent points; thanks.

I'd only add that the insurer's incentive to reduce risks to their profits is not always aligned with consumers' risks. As a simple example, the insurer may require the vendor to have consumers sign away their legal rights to reduce the risk of expensive lawsuits.


> What are the outcomes for using insurance, and what are the outcomes for using regulation?

Bad question, false dichotomy: these are not apples-to-apples comparisons. How about some minimal regulations that include an insurance requirement?


Even "insurance" is a distraction; the argument is really about tort law (which raises the insurance rates, case by case). Note that ordinary risks aren't economically insurable, almost by definition, so corporations insure far less than most people would guess. A building burning down is an ordinary risk for a conglomerate, so it may not be insured: insurance is too expensive since the company can more cheaply absorb the risk itself.

So I agree: yes, the existence of tort law sometimes obviates the need for regulation, but it has hardly banished regulation, or the need for it, entirely!


> In economics, that is not the case.

Eh, more or less. Increases in the price of supplies sometimes do lead to an increase in the end price of consumer goods. I agree that this is more likely to happen for inelastic goods, though.


> I'm not sure it could work for IoT, though, because a lot of IoT devices are made by new companies that probably will not be around long. With things like lawn mowers, it could take years before you get around to cutting your hand off, and you can still reasonably expect the manufacturer to be around. Not so with a lot of IoT devices.

This is the problem.

And "solutions" like that can easily backfire, because they squeeze out of the market the medium-sized companies who might stake out a middle ground between $20,000 Cisco hardware and $150 "disposable cameras" from Fly By Night Corp by charging $200 for a product that actually guarantees patches for a specific number of years.

Because the liability risk raises the price the middle-ground company has to charge to the point that they can't compete with Fly By Night Corp who doesn't care. Which means more sales of insecure garbage products, because the customer no longer has the option to buy an affordable product that receives patches, and that customer can't afford the $20,000 industrial grade product, so more people end up with the insecure garbage.

It would be much more effective to just require manufacturers to provide security patches for e.g. 7 years. Fly By Night Corp still wouldn't do it, but then at least Samsung would.


Doesn't that mean that every single insurance payout will have to involve the courts, taking a long time and not always panning out? If I have an accident I probably need the money on the kind of time-scale that my bills are due on, not the time scale on which courts operate. Even then, that's ignoring the huge overhead introduced.


What court? What payout?

Freedom Markets™ are best served by binding arbitration.


You obviously hit a nerve with this one. I agree with you, though; binding arbitration is a major issue. For example, there was recently a thread on Dell shipping Ubuntu. The Dell image of Ubuntu has a EULA that includes, you guessed it, binding arbitration.

My comment about this got 0 attention though...


> I'm not sure it could work for IoT, though, because a lot of IoT devices are made by new companies that probably will not be around long.

If we regulate insurance so that a product needs to be backed by a 10-year liability insurance, it might have 2 good outcomes to fix this:

1) the insurance company might ask a lot of hard questions and require audits, plans, etc.

2) the insurance company might require escrow of the IoT device's update mechanism in case of bankruptcy, so they can remote-brick the devices

It's unlikely that the reality would be so rosy though - mainly the claims against the insurance would likely be too small in many cases. It might help in 10 years against the biggest ddos botnets. Not nearly as good as direct regulation.


In principle I'm pro insurance, but it often creates an entry barrier for new companies, as they don't get the same conditions from insurance companies as, e.g., Google would.


Requiring insurance is such a huge barrier to entry it would completely destroy the market. Only the incumbents will be able to play. Imagine you came up with the next greatest fizzbuzz app and wanted to publish it to the market. Immediately you need what, $10/mo, $100/mo, $1000/mo for insurance? Does an adjuster need to go through your app and determine your risk and thus rate? Does your rate go up overnight when your app lands on the front page of HN and you gain a ton of users?

It sounds great if I'm MegaCorp with an entire department for dealing with this stuff, but as an independent developer trying to start something it's a horrible idea.


Everyone in business should have some indemnity insurance anyway, it's just that software and products that contain it seem to have escaped liability claims for quality up to now.


How do you deal with insurance fraud if the manufacturer is the one paying for the insurance?


If a manufacturer engages in fraud and is caught, they would (a) be subject to criminal penalties for insurance fraud and (b) it would become very difficult or expensive for them to acquire insurance in the future. These two negative consequences serve as a strong deterrence against fraud.


What I mean is that the buyers of the products would be incentivised to damage the products (or even themselves) since the manufacturers are paying for their premiums.


The problem is that this subsidizes stupid people at the expense of not-stupid people. Injuring yourself in dumb, preventable ways should have personal repercussions.


Oftentimes the dumb, preventable way has repercussions on others. In the case of not updating these IoT devices, it could be a botnet, that really doesn’t even impact the dumb person who failed to take preventive steps. In the case of the lawnmower, expense to the healthcare system.


That's the point. You have a manufacturer that sells a product with N years of support and a customer who buys it and keeps operating it out of support for N+5 years, what is the manufacturer supposed to do about that? Support the product until the end of time? Remote brick the customer's property?


Apple (at least) has shown us for some time that the device you buy is not really 100% your property. All kinds of proscriptions are in the EULA that prevent you from doing whatever you want with a Mac, and it's far more extensive with iOS devices. So yes, the company is remote bricking their property. What the consumer paid was a one time rental payment, not a purchase.


It sounds a lot like you're proposing enshrining that sort of corporate serfdom in the law and prohibiting the situation where the customer actually owns their own property.


Nope. It was just an observation that I think most people are completely unaware of; or they accept it because the alternative of crappy phones from companies that do not care to keep them updated, or drop all kinds of trash all over them that cannot be removed, sucks so badly that a company can get away with what Apple does. You are voluntarily agreeing to that EULA. You don't in fact have to agree.


> opening the source code once security updates for the device stop, so consumers could help themselves

That might help the readers of HN, but not users in general. Most users won't bother installing security updates for their PC if it's not forced on them. Updating one's light bulbs with something off github is a non-starter.


It could help the market. It wouldn't be that hard to scan your local network and gather devices and firmware versions (if supported) or fingerprint them if all else fails, and compare against a database of known bad versions and provide weekly or monthly reports by email. I could see this being offered as a selling point of routers. AT&T and Comcast would almost definitely include support in their modem routers just for the extra protection it provides for their networks (and the extra data it gives them about every customer... I'm sure Comcast would love to know the number and types of Rokus and Fire sticks people had, if they aren't already doing this).

Edit: To complete the thought, it's not generally that hard to flash devices that support it, so a report that says "X,Y and Z have exploits, here are some options" could go a long way. Making devices support some minimum standard of local upgradability would help immeasurably.
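A minimal sketch of the report idea, in Python. Device discovery itself (ARP/mDNS scans, banner grabbing) is omitted; the input is assumed to be a list of fingerprinted (vendor, model, firmware version) tuples, and the advisory database format is entirely hypothetical:

```python
# Hypothetical advisory database: (vendor, model) -> firmware versions
# with known exploits. A real service would sync this from CVE feeds.
KNOWN_BAD = {
    ("Acme", "IPCam-200"): {"1.0.3", "1.0.4"},
    ("Acme", "SmartPlug"): {"2.1"},
}

def audit(devices):
    """Return warning strings for fingerprinted devices running bad firmware."""
    report = []
    for vendor, model, version in devices:
        if version in KNOWN_BAD.get((vendor, model), set()):
            report.append(f"{vendor} {model} firmware {version} has known exploits")
    return report

# Example inventory as a scanner might produce it.
found = [("Acme", "IPCam-200", "1.0.3"), ("Acme", "SmartPlug", "2.2")]
print("\n".join(audit(found)) or "no known-vulnerable devices")
```

The interesting part is exactly the weekly report loop: the matching logic is trivial once someone maintains the database and the fingerprinting.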


I'm sorry, I don't understand.

Who's going to update the firmware in your light bulbs using a patched open source version of the manufacturer's code?

Edit:

> it's not generally that hard to flash devices that support it, so a report that says "X,Y and Z have exploits, here are some options" could go a long way. Making devices support some minimum standard of local upgradability would help immeasurably.

You're right, it's not hard. But can you imagine regular users doing that? Anti-viruses, Microsoft, Apple, Google and all major browsers had to automate and force updates on users to keep devices secure.

I have a hard time believing that any significant portion of users will flash updated firmware onto IoT devices to fix security vulnerabilities. Heck, when was the last time you checked for firmware updates for your home router? I do this stuff for a living and I don't think I've updated my (current) router's firmware more than once in the last few years.


> You're right, it's not hard. But can you imagine regular users doing that? Anti-viruses, Microsoft, Apple, Google and all major browsers had to automate and force updates on users to keep devices secure.

If you've already got a database of router versions, it's not hard to include how to submit a new firmware version in that database, whether it be a POST URL, any params and the expected payload(s) or a TFTP upload. At that point, the same device that generated the report could give you a management page that listed some options and semi-automated the process. Want to update fridge with out of date/exploitable firmware with community fridge firmware X? Click here. Want to update with community firmware Y? Click here. Want to update to newer/latest proprietary firmware? Click here.
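To make the "click here to update" idea concrete, here is a sketch of the dispatch step, assuming a per-model database that records how firmware is submitted. The database entries and field names are invented for illustration; this only plans the update rather than performing it:

```python
# Hypothetical per-model update database: each entry records the
# submission mechanism (HTTP POST vs. TFTP) and its parameters.
UPDATE_DB = {
    "Acme IPCam-200": {"method": "http-post",
                       "url": "http://{ip}/upgrade.cgi", "field": "fwimage"},
    "Acme SmartPlug": {"method": "tftp", "port": 69},
}

def plan_update(model, ip):
    """Describe the steps needed to flash a device, without performing them."""
    entry = UPDATE_DB.get(model)
    if entry is None:
        return f"{model}: no known update mechanism"
    if entry["method"] == "http-post":
        return f"POST firmware as '{entry['field']}' to " + entry["url"].format(ip=ip)
    if entry["method"] == "tftp":
        return f"upload firmware via TFTP to {ip}:{entry['port']}"
    return f"{model}: unsupported method {entry['method']}"

print(plan_update("Acme IPCam-200", "192.168.1.23"))
```

A management page would call something like this per discovered device and render each plan as a one-click action.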

I do agree it's not a solution to the problem. But it might help. It would also allow your techie friend to run an app on their phone or laptop when they come over and let you know, or handle it for you with a minimum of fuss.

This problem won't ever get better until someone starts being held accountable for exploitable network attached devices. I think part of the reason that doesn't happen is because it's not feasible with current norms of behavior to expect producers nor consumers to do so with any level of confidence. Providing tools for this may take us one step closer.

Personally I think a blend of regulations will be required. If devices sold are required to be updatable in one of a few ways, with workable firmware open sourced if obsoleted (or clearly marked on all labeling as potentially entirely unsupported after 20XX), then it becomes feasible to require users to have some level of accountability over what they put on their local networks. I think it's the same as cars and the public road system. Cars have to adhere to certain standards to be street legal, and car makers must adhere to certain standards to sell their cars as street legal.

> Heck, when was the last time you checked for firmware updates for your home router?

Every 6-12 months, generally when I'm checking why performance is bad at the moment. But I'm not getting a report, so I only check when it's on my mind.


I don't know about AT&T and Comcast, but in general the state of router security suggests ISPs don't care very much about the "extra protection for their networks".


If it's at least theoretically possible for a knowledgeable user to fix their devices, that's a pretty big improvement over it being a totally unfixable black box. For something like a lightbulb most people wouldn't bother, but if it's something like a car, a person without the technical skills to fix it themselves can pay someone else to do it.


It's an improvement ideologically, I suppose, but not a practical reduction in economic damage potential. In a very real sense it doesn't matter if you have quit making something seriously damaging when you would need to disable millions or hundreds of millions of devices; some automatic update functionality is in itself hugely more helpful than just opening the source.

Ideally we would have this suggestion in addition to a stick to ensure people have an incentive to sell relatively secure hardware.


That right there is the problem that regulation is meant to solve. The manufacturer is the one in position to enable ease of _secure_ updates. Right now the manufacturers are not doing that.

Try walking into Home Depot and asking the salesperson how to update the firmware on the lightbulbs he is selling. You will get your sanity questioned, especially once you explain _why_ it is important to update.


That's missing the point. Third. Parties.

We have them for practically everything. End users can't figure it out, the manufacturer (say, Microsoft) doesn't give two craps, so a third party (>90% of my career) solves those problems for users.

If things were open sourced, that'd make my job insanely easier in terms of man hours invested in any project.

I've literally had to spend >100 hours diagnosing that Dynamics CRM 2013's main data import tool has a bug that they refuse to fix (ever since it launched). The import wizard refuses to actually load timestamps (as well as some other columns) even though it specifically supports setting that column. So when you do a data import, ALL of the user's data is out of order. That's completely unacceptable for any business: their e-mails and notes all have the same timestamp and no proper ordering. So "all" you have to do is first diagnose that this is the problem, then find the super obscure website where a guy made a fix, except his code is half completed and full of errors, then install the entire CRM SDK stack, learn how to write a C# plugin against the CRM SDK, write the corrected version of this guy's code, compile it, use the CRM SDK toolkit to "attach" the plugin to your CRM, and figure out (no docs) what messages/events the plugin is supposed to trigger on, with no error messages, while every test takes >5-10 minutes of user input to find out if it worked and then removing all the data again from the DB.

If that was open source, that'd be a one line of code fix and I wouldn't have to play the "prod the black box" game of trying to figure out exactly where the failure point is. Hell, just adding a debugger would be insanely powerful. And that was just "one" story I have with "one" product that I support out of dozens if not hundreds.

So that's the thing. Regardless of whether a manufacturer open sources or not, people like me still have to fix the damn problem. Failure isn't an option. The checks have to go out. The lung machine has to keep blowing. And whether or not the business makes it easy or hard, doesn't enter the equation (except in terms of cost and man-hours). Nobody cares that you want to protect your IP. They care that their damn time clock doesn't work and they've got millions of dollars of product to ship while their employees stand outside of a locked automatic door.

So the real question is simply: do you love, or hate, the people who support your software? Because we're still here, every day, slogging through insane problems by poking blindly in the dark. Closed source software breaks just as often as open source. The difference is that the essential third party (me) has to bill 10X as many hours to fix the problem. So if you're a business, it's in your best financial interest to demand open-source code whenever not prohibited (security constraints, HIPAA, whatever).

Closed-source is like buying a car and pretending it'll never break down, or that the manufacturer has the time, energy, and willpower to come down to your house and fix it. Some might go that far. The other 99% won't. And if there's a piece of software that has zero bugs, I haven't seen it in my lifetime.

So to bring this back to IoT: when a client's IP camera goes down, they don't give a crap why. They need it back up. Yesterday. And they especially don't appreciate buying expensive hardware that ends up having security flaws... and worse... flaws that are physically impossible to close. The clients aren't going to easily understand "I paid $3,000 for it, and it works fine, but I should NEVER use it because it could be hacked?" That's like buying a car, and a recall hits that says "the locks no longer work," and instead of fixing the locks, or allowing a third party to fix the locks, everyone is just supposed to stop driving their car. You'd never see that in the real world, but somehow "software is magic" so it plays by different rules.


Indeed, I have the feeling that all regulations will do is lead us down the slippery slope of regulating all software and computing devices, eventually creating the dystopia predicted by RMS in his famous story: https://www.gnu.org/philosophy/right-to-read.en.html

> Or at least, an even more modest regulation, simply allowing all consumers to hack their own devices without fear of violating any laws.

More simply, they could make all reverse-engineering legal, but then the copyright/IP lobby would strongly oppose.


> Indeed, I have the feeling that all regulations will do is lead us down the slippery slope of regulating all software and computing devices, eventually creating the dystopia predicted by RMS in his famous story: https://www.gnu.org/philosophy/right-to-read.en.html

Unless said regulation was the ability to actually own your device and install your own firmware, doubly so once it becomes unsupported?


> demanding opening the source code once security updates for the device stop

They probably don't even have the (usable) source code.


Arm recently announced an open source firmware for IoT devices. Now all it takes is for OEMs to want to replace their proprietary firmware with this (while keeping all bits open source as they extend it).

https://www.arm.com/news/2017/10/a-common-industry-framework


Maybe we should mandate that a reasonable source code is deposited somewhere?


And you'd only discover that the code doesn't even build until it's too late.


Well, they probably would if they had an incentive to do so.


Treat security issues as defects in the product and apply normal consumer protection laws.

Put pressure on ISPs to make them take responsibility for traffic originating from their networks. They have the tools to notify customers if a customer is sending suspicious traffic (and, if necessary, they can temporarily shut down the connection).


> demanding opening the source code once security updates for the device stop

Copyright law is not in agreement with this. Reasonably, a manufacturer could argue that follow-up versions of the software are still actively sold. Then one question is why the fixes wouldn't be ported back to the old release; I guess it's because differences in the code require significant effort to adapt the fix.

Regulation should seek to equalize ...

> manufacturers liable for the damages caused by security holes in their devices

This is not easy if a chain of bugs in different programs is used to create an exploit. And it would be damaging to warranty waivers, especially in open source. After all, the server side will likely run a full-fledged open source stack. This is where service providers step in. Google would probably like to do monitoring and instrumentation as a service, among others.


> One thing that could hurt the market is maybe making manufacturers liable for the damages caused by security holes in their devices, but regulation doesn't have to go that far to make an impact.

What reasonable case is there for making them not liable for the damages caused?


> What reasonable case is there for making them not liable for the damages caused?

After what period of time? Microsoft updated Windows XP after the official end-of-life of the OS for consumers with a patch for the SMB vulnerability exploited by WannaCry, but if they hadn't bothered, would they still be liable? Should they?

Are they obligated to update pirated versions of their OS?

Is the manufacturer liable if they release a software patch but the product owner doesn't apply the upgrade? What share of the liability should each party take?

Software is largely immune to liability litigation in the USA because the current legal status of it is not legally "a product". Converting it to "a product" for the purposes of liability is a major sea change for our understanding of what business models can be applied to software, licensing, ownership, etc.

Also, does the average product programmer carry some sort of programming insurance? Are we going to force every web developer and every open source programmer to carry insurance, to be licensed to program, and to live up to specific ethical standards?

I'm not saying these are undesirable changes, just that they are changes, and there are a ton of issues programmers don't foresee that should be discussed before making them.


Well, we just did a recall where the airbag was replaced in a more-than-ten-years-old car. I'm pretty sure they did that because, yes, they were still liable.


You're right that they should be liable, but pragmatically it may be too much of a risk for smaller companies to face some potentially frivolous lawsuit over millions in damages supposedly caused by a DDoS originating from some of their devices.

Surely the right idea in principle, though. I'm just not sure how realistic it is to implement in a smart manner.


That's what the insurance industry is for. Just make the companies liable, and they will seek insurance. Insurance companies will force them to take reasonable measures in order to carry a policy. Nothing new under the sun.


> because so much of the damage can be externalized somehow

It's interesting to see how the US citizen ordering the product of a back alley company in Shenzhen via Alibaba is supposed to recoup his damages. It's impossible, period.

With wired IoT devices: segment your home network, always use a trusted gateway application, and never allow your IoT devices direct WAN access. With wireless devices, all bets are off, since you don't know at all who can access them if they are in the vicinity (i.e., they can interpret WiFi frames in unassociated state directly in silicon, and you'll never know, even if your stack is completely open source).
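As a toy model of that segmentation policy, here is a sketch of the forwarding decision a trusted gateway would make: devices on a dedicated IoT subnet may only talk to the gateway, never directly to the WAN. The addresses and subnet layout are illustrative, not prescriptive:

```python
import ipaddress

# Illustrative layout: IoT devices live on their own subnet,
# and a trusted gateway application mediates all their traffic.
IOT_SUBNET = ipaddress.ip_network("192.168.50.0/24")
GATEWAY = ipaddress.ip_address("192.168.1.1")

def allowed(src, dst):
    """Return True if a packet from src to dst should be forwarded."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if src in IOT_SUBNET:
        # IoT devices may only reach the trusted gateway, never the WAN.
        return dst == GATEWAY
    return True  # non-IoT traffic is not restricted by this toy policy

print(allowed("192.168.50.7", "192.168.1.1"))  # IoT -> gateway: True
print(allowed("192.168.50.7", "8.8.8.8"))      # IoT -> WAN: False
```

In practice you would express the same rule in your router's firewall (a deny rule from the IoT VLAN to anywhere but the gateway) rather than in application code.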


> maybe making manufacturers liable for the damages caused by security holes in their devices

Then they will sell two versions of the same product: one where you voluntarily waive that liability, at the same price as today, and a "normal" version that costs orders of magnitude more.


Not that I'm a fan of government regulation for technology issues like this but the security situation is beyond a joke.

For one, it's time to hold companies (and executives!) accountable for security of the data they are charged with protecting, often without your consent (eg Equifax).

For another, there's insufficient product liability for companies being lax, even negligent, with security. Honestly I don't see an outcome like network-connected lightbulbs bringing down the Internet as particularly far-fetched.

Frankly I don't even know what the market for IoT even is. Who needs $50 light bulbs that will DDoS someone one day? Or, worse, compromise your network to an attacker.

And all for what? So you can turn the lights on after you go through multiple steps to unlock your phone?


> And all for what? So you can turn the lights on after you go through multiple steps to unlock your phone?

The fact that they're IoT devices is kind of irrelevant, as is their function. The underlying question is: to what extent should manufacturers be made responsible for the damage their products cause?

If you make an electric heater that routinely combusts during normal operation, your customers and the state have some recourse against you – not that it would happen, because there are generally standards in that industry. But if you make a device that quietly becomes part of a botnet, you really aren't going to suffer; even the reputational issues are generally minimal.


> And all for what? So you can turn the lights on after you go through multiple steps to unlock your phone?

I wanted network-connected lightbulbs so I could have them turn on at the time I needed to wake up, when that time was well before dawn.

I never installed them because I didn't know how to secure them and my schedule got more reasonable, but I think the use case is pretty compelling.


Why would light bulbs need to be connected to the Internet for the use case of being turned on at a specific time? They'd just need to be connected to a timer for this.

I mean, an Internet-connected light bulb use case would be a bulb that flashed whenever a stock you owned went down in price, which is ridiculous despite being the least ridiculous example I could think of.

IoT security cameras, and an automated kitchen you phone ahead to so it prepares your dinner when you're coming home unexpectedly, seem like the least crazy IoT devices an individual could own. Most IoT stuff seems more like what a global company would want rather than an individual.


Exactly. Nearly everyone who tries to explain why I would want Xyz device to connect to the Internet cites a use case that... doesn't require an Internet connection! Turning lights on and off at certain times, buzzing when a doorbell is pressed, thermostats and sprinklers that respond to the weather. Even security cameras don't need an Internet connection when being viewed locally, yet my Dropcam insists on sending all video out to a web service, just so that I can request it all back when viewing on my computer! Insane!


Indeed,

Combine a wifi network being the one sort of network every person is guaranteed to already be on with every company dreaming of cloud-based-app supremacy "synergizing" every useless whatsit they have access to, and the IoT nightmare is in full swing.

"What could possibly go" - wait, we've seen answers to that one already.


> Why would light bulbs need to be connected to the Internet for the use case of being turned on at a specific time? They'd just need to be connected to a timer for this.

I already have a network connected to the internet. I don't already have a dedicated light bulb timer, and producing one of those, or buying one that had somehow gotten produced, would be stupid.

I had no particular desire for the bulbs to be connected to the internet, but I did want them connected to my home network, and I did also want my home network connected to the internet.


> despite being the least ridiculous example I could think of.

A lightbulb flashes when visitors ring the doorbell, which is useful for people with hearing impairment. The doorbell has a hidden RFID reader, and certain guests have an RFID card. The bulb flashes differently for each visitor.


"A lightbulb flashes when visitors ring the doorbell, which is useful for people with hearing impairment. The doorbell has a hidden RFID reader, and certain guests have an RFID card. The bulb flashes differently for each visitor."

How is this an example of something that needs to be connected to the Internet? My point is that most examples are about local connections used to justify a jump onto a world-wide, insecure net.

I mean, all the light bulb examples are contrived because a PHONE has all the information-conveying ability of a light bulb and is designed for information-conveyance. Light bulbs are designed for the light-consumption needs of those nearby, and examples of light-bulb-control at a distance are either poor substitutes for phones or "weirdness" - haunted houses and art-happenings.


That could also be done without network connectivity, even local. A microcontroller with a relay and rfid reader would do it. A fancier version might use bluetooth to let your phone set up new cards and disable/change others - still no need to talk to anything external beyond the phone itself.

Take it one step further, and you could trigger it via an NFC read of your visitors' phones. This also would not need to connect to a network (beyond installing the app on the phones).

Actually, sounds like it might be a fun project to play with :D


That only needs a local wireless (or whatever) connection between the reader and the light. X10 et al did this over power lines over 20 years ago. Communicating over 802.11 - with or without the internet - adds complexity and exposes the device to the world.


It doesn’t matter whether you know of any use cases or not. Somebody else does. Don’t interfere with their liberties.


>Don’t interfere with their liberties.

that's well and good until they cause negative externalities


> And all for what? So you can turn the lights on after you go through multiple steps to unlock your phone?

I don't get it either. I can get the idea of switching on and off all the lighting in a home e.g. when leaving (hotel room style) but to do that you need to have a hell of a lot of gadgets, not just a couple of "smart lightbulbs".

The only reason we have smart lightbulbs for switching to begin with was because someone realized that the sockets are standardized and users can switch bulbs.

The elephant in the room is the wall switch, it's now in series with the bulb! Every time someone switches off the light using the lightswitch, the smart bulb can't be switched on again! And no - people will NOT stop switching lights manually just because they have a smart bulb!

What people really want, would be smart switches, not smart bulbs. With a smart switch, you can switch either by hand or remotely.

I just don't get why manufacturers (and customers, most of all) don't realize this. Are people happier to buy the bulb because it's a 10 second job to change, compared to the wall switch which might take an hour or, worst case, require an electrician?

I'd be thrilled to change all my switches and dimmers to smart ones, once there are gadgets that are cheap, secure, reliable etc. I'd mostly use it to have a master lightswitch somewhere near the front door.


Always love Mr. Schneier, but I think torts would be a better way of handling this.

There are several problems with regulation: a) Whack-a-mole, new ideas and business models arise faster than the speed of government. b) Regulatory capture, like what happened to our banking regulations. c) More often than not, penalties are captured by the regulator, but compensation is not made to the injured parties. d) International law / trade agreement complications. I'm sure there are many more.

If manufacturers knew they'd be on the hook for damages in a dollar-to-dollar way, they'd put more engineering into their work, they would price it accordingly, and personally I'd be fine with that.


The creation and definition of torts (or the generalization of existing torts to cover new related domains) is a mechanism of regulation.


Yes. Since the postwar period, the word "regulation" has typically entailed new laws, possibly a new regulating agency, and a mix of civil and/or criminal penalties. I think a more liberal interpretation of existing torts would be simpler, more just, and harder to game.


It also encourages the tiering of IoT such that insecure-and-cheap remains on the market and is pushed towards people who can least afford to be pwned.

Regulation is not blind, but it does raise the floor.


"least afford"??? As though these IoT devices weren't pure bored yuppie disposable income in the first place?

I would figure large service providers, and big companies with fleets of marginally protected devices, would be the ones to bear most of the damages coming from shoddy IoT devices being pwned and rolled into larger ddos/ransomware/etc attacks.


At the moment, yes, they are at the high end of the market. In three years, you can expect them to be in the process of moving downmarket. In five, they will be comfortably midmarket and still moving downward. And the cheaper the device, the more corners cut in production. The more corners cut, the higher the likelihood that at least some of them are cut on security.


We are governed by people who have trouble agreeing that 30 year olds propositioning minors is a bad thing.

I just think broadening the concept of liability in our industry to the point where some company's negligence is finally made an example of has a much better chance of improving things than a panel of regulatory agency appointees that have either already been paid as industry lobbyists, or are operating under the carrot of a future position as a highly-paid exec--just like the FCC, or Robert Rubin, or any of Trump's appointees, or so many others...


Schneier's said elsewhere that torts would be another way of handling this (I also think it's probably the better way); don't know why that isn't in this article.


On the surface, I agree with this.

In practice I expect it to result in fewer products on the market that are more expensive and no more secure as this sort of regulation will simply select for large companies who are experts at paperwork and soft bribes.

I wish I had a better idea.


> I wish I had a better idea

Something that already works are various forms of certification.

Examples are:

* "Norton protected" on websites

* Underwriters Laboratories on US products

* US DOD Trusted Computer System Evaluation Criteria for how the US military checks the security of a product

* ISO 9001 for quality management

* Oregon Tilth for certifying organic products

Some of these are more valuable than others, but are at least a separate stamp of approval.

With my magic wand I'd have a group come together to form such a certifying organisation and provide it with a marketing budget. The marketing would mirror the success of "check for a green padlock on a website to know it is secure" - look for the stamp on a product box.

If that doesn't happen and there are more serious incidents, my guess is that consumers would look to Google, Apple, Amazon etc. E.g. if a proposed device doesn't work with the Apple hub and hasn't been vetted by Apple, then they wouldn't buy it.


The problem with certifying a device (presumably as secure) is that security is a process, and a device that is secure now may not be tomorrow. For a device to remain secure requires regular updates.


ISO 9001, for example, does certify a process. I'm sure organic certification does too.


Do you think end consumers care at all about certifications?


Consumers certainly can't evaluate each product individually, and that is where certification is a simpler form of evaluation.

It also isn't necessary for every consumer to care about certification. All it takes is a minority to do so - enough to tip sales away from the competition. Certification is also easy to mention in reviews and product feature lists.


What about VW though? They got around regulation.


The expectation isn't that such regulation will prevent violations of the regulation from ever happening, but instead that there is a legal mechanism by which appropriate punishment can be administered. Laws and regulations don't in-and-of themselves prevent bad behavior from happening, they just grant entities permission to apply force as a response; the force can be jail time, fines, sanctions, etc, but it's granting an entity permission to apply force if the law or regulation isn't adhered to.

So yes, regulations are circumvented or ignored intentionally, but that doesn't mean that it's somehow a bad idea; the idea is to enforce positive behavior by attaching an undesirable response to not doing the positive behavior. It's meant to shape behavior in the long run, and it's why pulling funding and reach from regulatory agencies is usually shitty, as more than anything, it's meant to induce a scenario where one can say "look, we have these regulations, but X, Y, and Z continue to do the bad behavior! The regulation is senseless!"; in reality, X Y and Z have just run the costs and determined that it's cheaper to fight the regulation than to adhere to it.


Utopia, maybe, but marketplaces selling only the devices they consider secure could work.

Similar to Costco - sometimes they refuse to sell products that they believe don't have the quality their customers deserve.


How does regulation work in the aviation industry? The impression I have is that regulation in that space is pretty effective.

Without effective regulation I imagine you'd see aircraft falling out of the sky left and right because - and let's be honest here - safety is probably at the bottom of the priorities, both for manufacturers and airline operators.

Most people don't believe that accidents can happen to them. People rationalize the risk away, so they skimp on security to save a buck.

So, operators don't have incentive to maintain and upgrade their aircraft because that has only a negligible effect on performance, but costs a ton and the perceived risk is low. Manufacturers, likewise, have no incentive to produce safer airplanes because safety doesn't sell: range, capacity and fuel efficiency do.

Yet, air travel as never been safer. Seems to me like regulation works ok in this case.


> and let's be honest here - safety is probably at the bottom of the priorities, both for manufacturers and airline operators

The bottom, really? That's a pretty bold statement to make, and a disservice to the work those have done to ensure we have thousands of safe flights every single day. I'm certainly not arguing against airline regulation, but I don't believe that's the only thing holding them back from crashing planes on the regular.


Seems like you just decided that the airlines would "obviously" be unsafe if not for regulation, and therefore regulation is working great. I could also say that killing your clients and crew "left and right" would be terrible for business, and that therefore regulation is useless.

Neither argument seems particularly persuasive without being falsifiable.


...this sort of regulation will simply select for large companies who are experts at paperwork and soft bribes

What do you mean "this sort"? Schneier just says "we need regulation"; that's pretty much all the article says.


Any regulation tends to select for companies that can appear to comply at the lowest cost.


> this sort of regulation will simply select for large companies who are experts at paperwork and soft bribes.

Which are exactly the kind of companies who can do good engineering, so why wouldn't they do that instead?


Agree, regulations could deter entrepreneurs from trying since the barrier to entry could be high. Then again, regulations like HIPAA haven't really stopped a slew of Digital Health shops from trying, so it might not be as bad.


China already ignores safety and radio regulations. Sure, add more. They'll keep flooding Amazon and eBay with cheap crap and people will keep buying it. Good IoT will just become even more expensive.

The only way to avoid a botnet apocalypse is to secure home routers. Outbound traffic should not be an automatic right. People should have to authorise each device for each type of traffic.

I argued this long-hand when OVH was taken down by "security" cameras. Good to see we've made no progress.

https://thepcspy.com/read/when-did-we-stop-caring-network-se...


> People should have to authorise each device for each type of traffic

Nobody wants to do this. I'm a quarter way paranoid about electronic security and even I don't want to do this.


Sure. I've been through this though. See the link.

I recommend a certification programme, with manufacturers justifying the access their device needs. A little signed JSON blob of hosts and ports it plans on connecting to. The device communicates this to the router. If the signing certificate is still valid and the manufacturer trusted, the router could just allow that access, or prompt the user to just let them know that device is trying to connect. No confusing detail. And just once, at the same time you're setting up network stuff, so it's not weird or extra hassle.

It's leaps and bounds better than what we currently have. The vast majority of us have zero idea what the devices on our network are actually doing, all while we're each throwing dozens of these cheap internet-enabled things online.

For legacy devices, a more iterative approach might be needed but it can still be prompted: "Dell computer is trying to connect to clearlybaddomain-dot-com. Allow, Allow All, Deny, Quarantine". You could even layer on some "known bad" hosts or traffic patterns via centralised lists to automatically quarantine devices at the router level.

Nothing here is rocket surgery. One developer for a few months. An entity like Google could do this in an afternoon. There's just surprisingly little appetite for it.
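A minimal sketch of the signed-manifest idea above. All the names, the manifest format, and the HMAC shared secret are stand-ins of my own; a real scheme would use manufacturer public keys (e.g. Ed25519) and certificate revocation rather than a shared secret:

```python
import hmac, hashlib, json

# Placeholder key: stands in for real manufacturer credentials.
MANUFACTURER_KEY = b"demo-shared-secret"

def sign_manifest(manifest: dict, key: bytes) -> str:
    # Canonicalize the JSON blob so signing is deterministic, then MAC it.
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def allowed_flows(manifest: dict, signature: str, key: bytes):
    """Return the (host, port) pairs the router should open, or [] if the
    signature doesn't verify (i.e. the manifest was tampered with)."""
    expected = sign_manifest(manifest, key)
    if not hmac.compare_digest(expected, signature):
        return []
    return [(e["host"], e["port"]) for e in manifest["egress"]]

# The device presents this blob to the router at setup time.
manifest = {
    "device": "acme-camera-3000",
    "egress": [
        {"host": "firmware.acme.example", "port": 443},
        {"host": "stream.acme.example", "port": 443},
    ],
}
sig = sign_manifest(manifest, MANUFACTURER_KEY)
print(allowed_flows(manifest, sig, MANUFACTURER_KEY))
# [('firmware.acme.example', 443), ('stream.acme.example', 443)]
```

The point is that the router never has to trust the device itself, only the manufacturer's signature over the declared egress list; anything outside that list is denied by default.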


> People should have to authorise each device for each type of traffic.

Seems somewhat unreasonable to expect people to know what traffic a device needs, and if you ask too much, people will just default to allowing it without even thinking about it (I still occasionally see people suggesting using a DMZ to make online games work, rather than forwarding the necessary ports [and even that seems rarely required]).


See link or my other reply for implementation detail. This can be both very secure and simple.


While we all lament about incentives & regulation which are out of our control, there are still a lot of technical issues to solve, perhaps we can start there.

A few examples:

* It's currently pretty hard to add HTTPS on a router admin page.

* Browsers can't do service detection on a local network, so we have to resort to central servers to manage headless devices (or ugly, unreliable local IPs).

* Punching through firewalls / NAT is still hard, so we again resort to central servers.

It's really fucking hard to do IoT at scale, in an easy-to-use way that's secure and that respects the user's privacy. I think we can solve that.


I think a better solution is to buy non smart versions of toasters, microwaves and kettles.


We may end up in a situation where the network layer of hardware and its respective wrappers/software is monopolized by a group of compliance-savvy folks who have done the necessary audits, certifications, etc. In theory that's not necessarily bad, but regulations have a bad reputation because of how poorly these things are generally implemented.


IoT is a solution that has been looking for a problem for at least 20 years now. It's like every EE department in existence feels compelled to cram micros and ethernet into every toaster, thermostat and lightbulb.

I guess it only took this long for enough people to become insane enough to justify a market for it.

Preach on brother Bruce!


I'm surprised more people aren't willing to consider not having the IoT as the solution. It doesn't look like we'll get a decent solution to security anytime soon and the average person doesn't seem to get much out it anyway. It's just more trouble than it's worth.


Yes it is. Importing a cheap Chinese WiFi access point that has an exploitable default password should be as illegal as importing Chinese fentanyl.


Banning the possession of insecure devices would make it much harder for security researchers to find vulnerabilities in them.

As soon as they are successful, they'd be breaking the law.

Just make the owners of IoT devices liable for damage they cause.


Ah, I remember the old days of "BABT approved" modems that were substantially identical to non-approved ones other than costing twice as much.


That's ridiculous, why? I'd understand some sort of certification process and requiring uncertified products to have ample warnings, but why should it be illegal?

If I want to buy cheap hardware or software that isn't certified I should be able to.


The reason a static device connected to the Internet with terrible security should be prohibited is the same reason that devices not meeting FCC certification standards are prohibited.

Both such devices do as much if not more harm to your neighbor as to you. An electrical appliance that interferes with TV broadcasts may not bother you but it bothers your neighbor. An IoT camera that's hacked to DoS a hospital may not bother you but it bothers the patients of the hospital.

In some fictional Libertarian wonderland, these problems could be dealt with neighbors suing neighbors but in the real world we need the state to regulate products with serious cost "externalities".


Same reason it's illegal to drive a car that's not certified for roads or construct a building that doesn't meet safety standards.

You have a right to pose a danger to yourself. You don't have a right to pose a danger to others.


In my opinion you should be liable for any such danger you pose to others, with the ability to shift that liability to whoever sold you the source of that danger while assuring you it was safe.

In fact it is then inconsequential whether some device puts you in direct danger or in danger of somebody coming after you for putting them in danger.


> In my opinion you should be liable for any such danger you pose to others, with the ability to shift that liability to whoever sold you the source of that danger while assuring you it was safe.

Right. So you, the little guy, are going to shift the blame to a rich company that does this for a living. Say it's a company such as Google which makes some IoT devices. Would you be able to prove in court, and in front of their engineers and lawyers, that they've sold you an insecure device? Do you even have access to their source code?


I think that the EU's approach to this is, in its idea, the correct one: whoever imports and sells stuff (the legal term is AFAIK "introduces to common market") has to declare that it conforms to relevant safety regulations and is then held responsible should that declaration prove to be wrong. (A slight implementation problem is that the punishment incurred is usually too small; typically it boils down to a ban on selling the item involved, without any punitive fines.)


> I'd understand some sort of certification process and requiring certified products to have ample warnings but why should it be illegal?

Maybe our current situation is what it might've been like when we had devices that could generate and receive RF emissions but before there was an FCC/global regulatory distribution of spectrum.

If that analogy holds at all, perhaps that might be why it should be illegal to manufacture or import devices which don't conform.


We can think of the Internet as a public resource, like the electromagnetic spectrum.[0] The FCC regulates what can use the spectrum, requiring that devices do so safely, in a manner that will not interfere with other devices or cause harm to people or property. The same could (and I think should) be required of devices that connect to the Internet.

Frankly, I'm tired of trying to find time to find specs (and then learn how it really works) to figure that out for myself; I secretly dread receiving anything networked for the holidays. It's difficult for me; most end-users have no hope of protecting themselves - and they seem to assume that any product sold must be safe. They assume it is regulated, in effect.

[0] Arguably the Internet is physically different than spectrum. Physics 'creates' the spectrum and its physical limitations make it a public good; there are only so many frequencies, and propagation is part of the equation. The Internet is a creation of humans and in theory can be recreated or modified at will. But that theory isn't realistic: The Internet cannot be replaced or substantially modified; the public has no realistic option of using a different Internet if they don't like this one. In any practical sense, it's a public good.


This is also a business model problem. Consumer hardware companies do not have the margin to make and support software that needs to run for ten years or more.

Before the iPhone, software and hardware were often different and had different business models.


This is the big problem, and regulation can not fix it. If companies are expected to pay programmers and testers and support staff to keep their devices up-to-date, that money needs to come from somewhere, and cloud "services" that don't provide any value but exist only for MRR and lock-in are only going to result in a bigger attack surface and a bricked device when the company folds or loses interest.


Then maybe there isn't the economic case for making non-premium stuff that connects to the internet.


So ... how do you fix, or route around, the bad business model?


It's simply not going to happen as long as elected politicians and officials are mostly technically-illiterate. These are the same people seriously considering back doors to encryption in the name of security.

Give it 5-10 years when enough of them have died off; then change will happen.


No, in 5 or 10 years they will be replaced by new technically-illiterate people.


Can someone educate me on one IoT point: devices presumably send and receive traffic over a router. That router presumably has security measures such as a firewall in place to reject malicious traffic. So, assuming a competent user, shouldn't security be primarily handled at the router level rather than the IoT device level? Of course IoT devices should also be secured, but my thinking is insecurity and lack of political motivation to regulate could probably be largely mitigated this way?

That said, I've more or less completely ignored IoT so far aside from passing interest in how easy Mirai was and I've only briefly dabbled in firewall configuration, so many of my assumptions could be wrong.


"Assuming a competent user" is absolutely not what IoT is about, it shouldn't be what most of our decisions as engineers should be about. I don't want to have to be a "competent user" for my fridge, lightbulbs, sex toys - i.e. everything is potentially going IoT.

Separately, no - attacks like CSRF will quite happily be routed and compromise an incompetently designed IoT device.


I was thinking about blocking all traffic routed for the IoT device which comes from any address outside a set of explicitly trusted sources (such as the vendor's service and the user's smartphone or something). Then attacks like CSRF and default admin credentials become a moot point unless those trusted sources become compromised.


That's how CSRF works - I get you to communicate to the device from your "trusted" smartphone or other device. There is nothing you can do at the routing level to protect against it. It is entirely up to the endpoint receiving the request to have implemented proper CSRF protection against attacks.

CSRF has been around since 2001 and is in the OWASP top 10. It would be absolutely valid for regulators to require reasonable steps to be taken to prevent its abuse, along with similar attacks.
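To make the point concrete, here is a minimal sketch of the endpoint-side defence being described, in Python (function names and the response strings are mine, purely for illustration; real firmware would also want SameSite cookies and origin checks):

```python
import hmac, secrets

def new_session_token() -> str:
    # Per-session random token, issued to the device's own web UI pages.
    return secrets.token_hex(16)

def handle_post(form: dict, session_token: str) -> str:
    # A forged cross-site request can make the victim's browser POST here,
    # but the attacking page cannot read the token, so it can't supply it.
    supplied = form.get("csrf_token", "")
    if not hmac.compare_digest(supplied, session_token):
        return "403 Forbidden"   # reject state-changing request without token
    return "200 OK"              # legitimate request from the device's own UI

token = new_session_token()
print(handle_post({"csrf_token": token, "action": "reboot"}, token))  # 200 OK
print(handle_post({"action": "reboot"}, token))                       # 403 Forbidden
```

Nothing at the routing layer can substitute for this check, because the forged request genuinely originates from the trusted device on the LAN.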


Home router security is pretty much a joke as well, often they're themselves the vectors of attacks. But even if not, it's hard (I'd say impossible) for a generic firewall to distinguish most malicious traffic that exploits application-level protocol bugs. Plus many of these devices rely on Cloud services that themselves can be hacked and used to exploit the device.


I assume there's an important distinction between general purpose PC traffic and IoT traffic, which is that IoT devices should usually communicate with a small set of external entities (let's say a vendor service and a user's smartphone). So we can ignore the content of the traffic and instead consider everything malicious by default if it doesn't come from a small set of explicitly trusted addresses.

This of course doesn't do anything to secure attacks via one of those trusted addresses, but does prevent someone just happening across an open device.

There's also the problem of coping with dynamic addresses, but that can probably be handled separately.
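The default-deny idea above can be sketched in a few lines; the addresses and the "vendor cloud" / "owner's phone" labels are made up for illustration, and a real router would enforce this in its packet filter rather than in application code:

```python
import ipaddress

# Everything is considered malicious unless the remote address falls inside
# a small set of explicitly trusted networks.
TRUSTED = [
    ipaddress.ip_network("203.0.113.0/24"),   # hypothetical vendor service range
    ipaddress.ip_network("198.51.100.7/32"),  # hypothetical owner's phone
]

def permit(remote_addr: str) -> bool:
    addr = ipaddress.ip_address(remote_addr)
    return any(addr in net for net in TRUSTED)

print(permit("203.0.113.42"))  # True: within the vendor range
print(permit("192.0.2.99"))    # False: dropped by default
```

This stops random internet scanners from ever reaching the device, though as noted it does nothing against attacks that arrive via one of the trusted addresses.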


Not just dynamic addresses; mobile Internet carriers use NAT layers covering many devices, so a single address is used by many people at once: https://en.wikipedia.org/wiki/Carrier-grade_NAT

Routers do have a solution: some support setting up a VPN, to which the phone could securely authenticate. But good luck getting users to configure that.


IOT devices tend to talk to a backend which is controlled by the vendor rather than the user.

Or they can be attacked over the LAN before the router gets a look at the traffic.


I am cautiously leaning towards the perspective that Schneier is right. This problem is not going to be solved by market forces. Ordinary non-technical consumers will buy things like wifi security cameras for the absolute cheapest price at $45/unit, based on them having attractive retail packages or what appears to be a good feature set/spec/price. I have not seen any signs that people are moving away from known-insecure things in droves, because in my estimate, only 1 to 5% of users of such things actually care about the operating system/under-the-hood software configuration of their IoT devices.


Hell, even for highly technical users it's almost impossible to evaluate a lot of this stuff.

I mean, I program embedded systems for a living - and I couldn't tell you which IoT dash camera or digital camera with wifi or internet-connected car entertainment system is secure.


That's easy - presumably none. At least if you take secure as an absolute value.


I don't think that government certification is the answer here. This will turn security into a check-mark. Companies will do the bare minimum to get certified and won't invest a penny more. This would solve some of the more extreme cases we see, but I doubt it'll make a real impact.

Instead, I feel that accountability would work much better here. If you're selling an IoT device, and you haven't taken industry-standard precautions for securing it, then you're on the hook for whatever your device is used for. The same can be applied to companies storing personal information, e.g. Equifax.


I think Schneier is right. The market has utterly failed here and there is no reason to think it will start working. Class action lawsuits are very slow, and you have issues of trying to prove actual harm. To use his example, if my TiVo is part of a botnet but continues working perfectly, have I been harmed in a way that’s likely to let me sue someone?

What happens when you want to sue a company for lack of updates when the company went out of business 6mo after it was created (like Juicero)? You can’t sue them, where a law could have forced them to be secure from the start or put up a bond to support the devices for a while.

Companies will do the bare minimum to get certified? You realize that’s a massive jump compared to what happens now.


You're right, and perhaps ideally we should have a mix of both accountability and certification. I don't know who could or would sue TiVo for the attack, and I don't know how to solve the problem of out of business companies. This approach has its drawbacks.

However, give the certification process some thought too. I can see quite a few drawbacks here as well.

First, a significant advantage for established, rich companies. We'll be swamped with IoT from Apple, Google, Facebook and Amazon while small competitors have a hard time getting their products to the market.

Second, you'd need to give the regulatory body access to both your software and your hardware. And what if the device is connected to some cloud server? That body may need to look at its code too to make sure that your control server is compliant. And what about the network? The database? Where does it end? And do you need to re-certify each and every version of your server? What if you introduce a security vulnerability?

Third, certification can't be a one-time deal. That protocol your lightbulb uses to talk to the microwave oven for whatever reason? Well, someone broke that and can now make both of them divulge your dirtiest secrets. The same regulatory body would have to keep track of such vulnerabilities and force manufacturers to update their devices - and what if the manufacturer has gone bankrupt? What if he doesn't want to update these devices? Are you going to force people to throw away their light bulbs? You'll have to, otherwise you're back to square one in which all devices are compromised, only now it takes a little bit longer.

Imagine the bureaucracy all of this will require.


You’re right, it’s not easy. But even specifying hilariously trivial stuff like HTTPS, certificate pinning, no hardcoded backdoors, and per-device random initial passwords would probably be a huge boon. Simple security without even talking about the problems on the service servers.

I imagine a market would appear for some of the basic software (Linux distros, etc) to help make things easy for small companies that do want to do it all themselves.

I like the government idea because frankly I can’t think of anything else that would work (outside a ridiculously improbable change in consumer behavior).
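Of those trivial requirements, per-device random initial passwords are the cheapest to mandate. A sketch of what that means at manufacturing time (the alphabet choice and label-printing step are my own assumptions, not any real product's process):

```python
import secrets

# Alphabet omits ambiguous characters (0/O, 1/l/i) so the password can be
# printed legibly on the device's label.
ALPHABET = "abcdefghjkmnpqrstuvwxyz23456789"

def initial_password(length: int = 12) -> str:
    """Unique random factory password per unit, instead of a shared
    default like admin/admin that Mirai-style worms can simply guess."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw1, pw2 = initial_password(), initial_password()
print(len(pw1), pw1 != pw2)  # 12 True (a collision is astronomically unlikely)
```

With 31 symbols and 12 characters there are roughly 10^17 possibilities per device, which is enough to defeat the credential lists that botnets scan with.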


> I like the government idea because frankly I can’t think of anything else that would work (outside a ridiculously improbable change in consumer behavior).

I'm surprised PCI (payment card industry) security standards have not been mentioned on this thread. There's a case where non-government regulation has, by some definition, worked.

PCI isn't perfect: if you check all of the boxes, it doesn't mean you're secure, but I'd rather have the industry self-regulate than have politics come into play.


> HTTPS, certificate pinning, no hardocded backdoors, and per-device random initial passwords would probably be a huge boon

That's what I meant by check-mark security. Yes, it is better than nothing, and by all means let's do that. It's low hanging fruit, and it should be plucked. But in the end it amounts to little more than hanging an air freshener on a huge pile of garbage.

I'm just pointing out that such certification might cause executives in companies that today put more effort into securing their devices to stop putting in that effort. If it's all the same to the consumer, why spend any more than the bare minimum to get a check mark?

Today there's no clear bar, and a good engineering team will always be able to convince a responsible management that they need to put effort in security. But once that fairly low bar is set, I think that the next order from the management will be "make our devices certifiable and nothing more".


Just like car safety we have to keep raising the bar.

Checklist car safety means a pretty safe car these days, and the companies that go beyond do amazing things.


Make consumers liable. They're really the guilty (by negligence) party anyway, right? Okay, that would be a shock to the system. So grandfather in old devices and/or slowly phase it in.

That's still quite a chilling effect though. Well, maybe it should be. Now we're really careful about what we buy. But maybe it's too much. Who wants to expose themselves to a small chance of high liability? Okay, so allow insurance against said liability. Wouldn't that defeat the purpose? Well, no. Insurance companies would only insure against liability for devices they've vetted and approved.

Boom, market regulation. It works for cars (see IIHS).


I’m a software developer. I’ve worked for a network security company.

I don’t think I’m qualified to try to do that for something I might buy, let alone ‘normal’ people.

That’s one of the problems with the market approach. The information asymmetry is so big that it’s not a reasonable demand on a person buying a $15-20 lightbulb.


To be clear, the only thing they'd need to do (in the ultimate phase) is only buy approved devices. Maybe a small hassle but not all that complicated.


From my POV, all I can come up with that feels concrete is somehow trying to figure out a plan/system where we can ensure that companies that should have a certain level of security can be held liable (no question), in a way that prevents things from being handled reactively. The problem I keep running into is that each and every scenario and company this type of system will affect will have to be handled on a case-by-case basis.

I can't brainstorm many simple solutions that blanket cover many different types of businesses, markets and scenarios.


Something similar already exists, called FIPS 140, though it is hard to certify against and not a good fit for IoT.

An IoT router/firewall might be one of the solutions here, i.e. adding IoT patterns into existing routers/firewalls to protect IoT devices, in addition to your PCs and sometimes BYODs (smartphones etc).

It is very hard to make all IoT devices secure due to the limited resources they have, so the first line of protection should be on the router/firewall/gateway, I think.


OK, so how do you distinguish automatically "abuse" from "proper use" for arbitrary devices, and how would putting the code that is able to do that on a separate device be easier than compiling it into the firmware of the devices themselves?


Look, your PC and BYODs are still prone to attacks, they're much, much more powerful than those networked IoT devices, and they still need a firewall to protect them.

I of course hope all firmware will be safe, and they should be safe as much as possible, still, you need a more powerful device to safeguard them. Put another way, no matter how secure my wifi-bulb is designed, I'm not going to expose it to the internet, and I will put it behind my firewall/NAT-router.


> they still need firewall to protect

Why?

> you need a more powerful device to safeguard them

Why?

> I'm not going to expose it to the internet, and I will put it behind my firewall/NAT-router.

Why?


Is anyone working on this? Also, can anyone recommend a site that compares various router firmware?


As we move towards a more decentralized future, it's hard to see governments controlling or regulating something like IoT.

Sure, the idea of billions of devices around the world connected somehow is scary, but government regulation is not the answer. If anything, regulation needs to be decentralized. More open source, community involvement with reviews and discussions, more self regulation


One thing that should be happening is that ISPs should have to monitor traffic to look for DoS agents, bad bots and perhaps some common vulnerabilities, with the ability to throttle the pipe or shut it off if problems aren't remedied.

Regulating IoT is tougher, but is analogous to licensing the airwaves.


I would rather see independent (private) solutions before inviting a bunch of bureaucratic red tape to manufacturers.

For example, someone could create a company that issued certificates of security. Manufacturers would pay a small fee to these companies to perform security tests and give them a certificate of security. They can put that label on their products to provide confidence to consumers. Some products may warrant a much higher level of scrutiny than others so there could be different levels or different companies that offer it.

I think people will naturally choose the products that are 'certified' over the ones that aren't, and manufacturers will have to end up doing it to stay competitive.


Lightning cables that aren't MFi certified are still widely used. See their presence in gas stations and other stores nationwide. Lots of "MFi certified" cables online are likely fake. Who knows?

USB Type-C can deliver enough power to seriously damage your $1000 MacBook if the cable/adapter is designed poorly. There is a certification process, but most products on the market are still below-par. Below I will link a list. Guess what, those "bad" products are still bought en masse.

This week, I discovered that pretty much all the water filters that are popular for the type I'm looking for aren't even certified to filter out harmful materials. NSF 53 certification exists, but it looks like the market didn't do any research into it and trusted NSF 42, which was touted but is a much less strict standard, filtering out odor and taste (important in its own way). Theoretically, these filters could be passing on lead and asbestos.

Your solution _might_ work for a _part_ of the market, but it is almost guaranteed that there will still exist a significant (if not majority) part of the market, that doesn't care about certification/prefers the cheaper product.

https://docs.google.com/spreadsheets/d/1vnpEXfo2HCGADdd9G2x9...


Bullshit.

What makes 'IoT' any different from an ordinary network-connected computer? You're either saying "it's time to regulate networked computing devices" or, "I want to carve out an easygoing regulation-free niche for MY product[s] to artificially excel in."

I try not to be needlessly pessimistic, but this article has no definition of 'IoT' beyond 'networked computer with sensor', so three guesses as to which one it is.


> What makes 'IoT' any different from an ordinary network-connected computer?

Basically the same things that make, say, 'rats' different from an ordinary human being: specialization, capabilities, defenses, and deployment density.

A different response might be needed to deal with millions of rats moving into your town compared to what worked when 15,000 juggalos congregated for the ICP music festival.

They're both 'mammals', true; they both sleep and eat cheese. But they still might necessitate different strategies to manage or cope with them.

(Having said that, I am still deeply skeptical of governmental regulation as a solution. I think Schneier is right about the scope of the problem, and that it's a perfect example of the class of problem that markets can't fix. Although trying to hold device makers liable might help somewhat, I'm afraid that the problem is like global warming: in theory, there are various solutions that we might devise, but in practice human societies, even the minority of them that might have their shit sufficiently together to address the problem on their own, still aren't capable of the level of coordination required, so what we really need to do is "get ready to deal with it, because there's no way to stop it".)


I can easily update my Mac or my Windows PC. I also know that Apple and MS will be around for a while.

How do I update my lightbulb? Who will make updates for it? Maybe Phillips will for their product, but what about smaller OEMs? What if the company quickly goes out of business, like Juicero?

Depending on what you buy and where you buy it do people even know who made it? Would you even know how to check for updates (assuming they exist)?


You can, but you very well might not. And your desktop computer is a far, far more valuable target in terms of computing power and network connectivity. Should we be regulating that device as protection against your choosing not to follow best practices, or simply forgetting to?


> And your desktop computer is a far, far more valuable target in terms of computing power and network connectivity.

It’s also FAR more secure. IoT devices are often easy to hack. And while they may not have much horsepower they have a network connection. You won’t mine many Bitcoins but it doesn’t take a lot power to be part of a DDoS.

And I have one computer, one tablet, one phone. I may have 5 smart lightbulbs, a DVR, a security camera or two, an indoor/outdoor thermometer....


Also a good point, but how would you propose that we measure 'security'? Is an Android phone that hasn't received a carrier update in 8 months "secure"? How about a home server running an ancient distro which long since stopped receiving package updates?

The phone is probably a bigger concern at scale, but I have seen plenty of families with dusty "photo storage/backup" boxes that their family's resident IT person set up and networked when they were in high school.


It is a bit out of control. I've written a small Python script [1] that finds dozens of vulnerable devices within minutes just by checking random IP addresses. There shouldn't be that many poorly secured devices floating around out there. It shouldn't be that easy to find them.

[1] https://github.com/wybiral/dex
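For context on how a scanner like that might pick targets (I haven't read the linked script; this is a hypothetical sketch that only shows generating candidate addresses, and deliberately omits any actual probing):

```python
import random
import ipaddress

def random_public_ipv4():
    """Draw random 32-bit values until one is a globally routable,
    non-multicast IPv4 address (i.e. skip 10/8, 192.168/16, etc.)."""
    while True:
        addr = ipaddress.IPv4Address(random.getrandbits(32))
        if addr.is_global and not addr.is_multicast:
            return addr

print(random_public_ipv4())
```

The unsettling part is how small this is: the public IPv4 space is dense enough with insecure devices that random sampling works at all.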


> there is a difference between when a hacker crashes a computer and you lose your data and when a hacker hacks your car and then you lose your life.

Forget about hacking your car, what about the hacker that hacks a car fleet? What could a hacker do with a botnet of cars, each with cameras and maybe even face recognition. How about killing off people for the highest bidder at the push of a button.


I fully agree on the need for action. But I don't get why IoT should get a special treatment compared to outdated smartphones, PCs, Macs and even video game consoles. Any device that connects to a network is a potential target.

So what about the old iPhone 5 and MacBook Pro from 2008 we gave to our kids for music, YouTube and getting started with computers?


Last year I wrote a report on IoT for an info sec and crypto undergrad class, where I covered the usual flaws and explained why that’s a worrying standard for an industry that’s about to scale up massively.

The professor's response: "this is just like all of those internet hit pieces". Hmmmmm.


It doesn't help that most of the worst IoT devices come from China, where regulation seems to be routinely ignored. Things like UL and CE/FCC markings are usually fake.


One simple way to improve your security at home is to have a "guest" WiFi network which is separate from your real one and which all these questionable IoT devices can use.
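On consumer routers the "guest network" checkbox does this for you; on a router that exposes raw iptables, the same isolation might look roughly like the following (the interface names are assumptions about the layout, not anything standard):

```shell
# Assumed layout: trusted LAN bridge on br0, IoT/guest WiFi on wlan1,
# upstream WAN on eth0. Let IoT devices reach the internet only.
iptables -A FORWARD -i wlan1 -o eth0 -j ACCEPT          # IoT -> internet
iptables -A FORWARD -i eth0 -o wlan1 \
    -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT  # replies only
iptables -A FORWARD -i wlan1 -o br0 -j DROP             # IoT cannot reach LAN
```

Note this only protects your LAN from the IoT devices; as the replies below point out, it does nothing to stop them from attacking the wider internet.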


That doesn't necessarily prevent them from infecting each other and other people/devices on the internet, or being used in attacks.

It's sort of like living in a neighborhood and having a rock pile you enjoy the aesthetics of, but know it's prone to having rattlesnakes move in, and instead of fixing the rattlesnake problem either as it happens or at the root, just putting a wall around your property excluding the pile. Sure, you're mostly safe, but when animals/children get bit, your solution starts to look quite a bit worse.


Agreed, it's definitely not foolproof by any stretch. But it's amazing how many people enter their WiFi password into devices they really have no control over, which can then, for example, sniff your network or slowly but steadily crack your passwords; the possibilities are endless.


If that is a security problem, you probably have a much bigger problem anyway. Secure passwords cannot be cracked, the public internet is hostile anyway and you should be protecting your communication with strong cryptography. Pretty much the only sensible reason why you should protect your internal network from access (including sniffing) is because you might have IoT on it that tend to have terrible security. Putting them all in the same, but separate, network, essentially achieves nothing.
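The "secure passwords cannot be cracked" claim is really a statement about entropy; a quick back-of-the-envelope sketch (function name mine):

```python
import math

def entropy_bits(alphabet_size, length):
    """Bits of entropy in a uniformly random password: the attacker
    must search ~2^bits candidates on average."""
    return length * math.log2(alphabet_size)

# A 16-character random password over letters+digits (62 symbols)
# has ~95 bits of entropy, far beyond offline brute force.
print(round(entropy_bits(62, 16), 1))  # -> 95.3
```

The catch, of course, is that most people don't use uniformly random passwords, which is why the parent's point about sniffing isn't entirely moot.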


That's precisely what I do - all of my trusted devices are behind a second router[0], with anything untrusted (family devices, devices that only need Internet access and not e.g. access to my NAS) being effectively treated as if they weren't on my local network.

Far from bulletproof though - it limits risk, but does nothing to prevent those trusted devices from becoming infected and having essentially free rein (which, when you use Windows as your main OS, is far from a theoretical concern).

[0] Introduces a double-NAT, which isn't ideal, but hasn't caused me any issues in practice.


This is called network segmentation and it's the direction that the network security industry is moving.

It will be more complicated than just 2 networks and it should be based on behavior and trust rather than device type. Consumers don't yet have the tools to monitor/categorize devices based on behavior, but some corporations (like mine) offer platforms that automatically move risky devices to network segments to mitigate risk.


I always get nervous when governments create laws regarding technology due to the relative speeds at which governments and technology evolve (slow and fast, respectively).


Watch this go terribly wrong.

Honestly, I don't understand why consumers lack the restraint to simply not buy unfinished products, but this is where the leverage to improve IoT security has to come from. If today's IoT devices are such a liability, then prove it in court; but don't think that you can write a law that ensures security instead of mere standardization.

Having meticulously studied the introduction and effects of regulations in this vein, I highly doubt they will have the intended effect.


It can be all but impossible to do so (try finding a "dumb" major appliance these days), nor can buyers determine or distinguish what is or isn't "finished", or what will be supported in 18 months, let alone 18 years.


Require manufacturers to pay in to a bounty fund for finding exploits, and make them responsible for fixing the exploits found.


Government regulation doesn't mean that a technology will be better. IoT is a very immature industry/technology. Adding a byzantine set of obsolete compliance laws is a good way to hamper this industry.

If we are going to regulate, we need to improve laws for consumer electronics across the board, with all the big players on board and participating.


Ironic that the site with the article about improving cyber-security doesn't utilize HTTPS.


Can't legislate security into existence...at least not very safely


You seem to be suggesting a law must say "the maker of any unsafe, insecure product is subject to 50 years of jail or a $4 million fine".

More realistically, legislators should be crafting incentives. Establish liability statutes. Carve out liability exceptions for companies that can show they used industry best practices, hired engineers that are members of professional/industry organizations, pay for ethics training, establish good faith effort of security development (unit tests, integration tests, traffic encryption, encryption at rest, well designed key exchange architecture, software/firmware update architecture for at least X years after sale, etc).

Conversely, if a company does none of the above, it's easier for a consumer or an Attorney General to bring a case against the company, even if it's years too late to be useful.

The problem with litigation (as well as "free-market" solutions) is that this generally doesn't happen fast enough. The security damage of IoT will be externalized extremely quickly, and being able to identify and sue a foreign company that is far up the supply chain from the end-product is rarely practical.


[flagged]


Have you met users?


Of course Schneier would want regulation in IoT. That's literally billions of tax dollars that would go to his and other tech consulting and compliance companies. What we really should push for is open source regulation. Naturally government is always behind on cutting edge tech issues. Open source regulation would improve the efficiency of regulation while saving billions of dollars.


Could you elaborate? I'm being genuine here when I say that I have no idea what you mean by Open Source Regulation.


There are two pieces here, and Schneier (or at least this tiny summary of him) is wrong on both.

First, yes of course automobiles are regulated and should be. The fact that some things that are and should be regulated include embedded internet hosts does not mean that all devices that include embedded internet hosts should be regulated. This is basic logic.

Second, holding one set of botnet victims responsible for the harms suffered by another (overlapping) set of botnet victims is perverse. Every host should be "secure"; very few are. A secure host wouldn't be a victim of a botnet, either by donating processor cycles or receiving unwanted traffic. If lawyers really need a job security program, consider that ISPs have lots of money and they actually could reduce DOS attacks; why not hold them responsible?


> consider that ISPs have lots of money and they actually could reduce DOS attacks; why not hold them responsible?

You could hold ISPs liable, and they would block untrusted IoT garbage at the network level. Or you could hold IoT garbage producers liable, and they would make security changes and/or pay ISPs to do some firewalling. Either way, the costs and results will probably turn out about the same.

The best solution, of course, would be having fewer things uselessly connected to the internet.


Yes the best solution would be if the world were perfect.

How on earth is a civilized society going to keep "useless" devices off the internet? Who gets to decide what is useless? For example, medical devices are notoriously insecure: what politician is actually going to get behind an effort to make Grandma's life more inconvenient and also shorter just to satisfy some nerds' idea of a perfect internet?

The nerds on HN disappoint. When faced with a hard problem, instead of doing the hard work to fix it, they want to involve the lawyers...



