Your position is very extreme. Extreme enough that I don't think it's a productive use of time to debate with you.
Maybe tone it down a little, take a breath, and realize (a) that people are not in fact dying, and (b) if they do, we can deal with it then.
The problem with all of these "People might die!" arguments is that everyone who makes them is so thoroughly convinced that they are right "at all costs right now so just fix this!!!" that it gets tiresome.
Neither of us has considered all of the possible effects of either action, so we should both carefully examine the consequences of our respective opinions.
You are arguing against something I did not say. I never said we need to do it "at all costs right now so just fix this!!!". I am saying that I weigh the risk to human lives more heavily than the risk of some startups not being created.
You didn't specify which part of my position makes me extreme, so I will just address your a) and b), and your last sentence.
b) is what I would call extreme. Yes, people die. But in the discussion of "we should protect people" vs. "we shouldn't stifle business", saying "we can deal with it if people die" is pretty cold (this is me seeing the implied "but we can deal with it worse if startups fail" in this context, so you might want to correct me here).
As for a): as far as I can tell, people have not died yet. But we can see scenarios for how they could. I'll give examples with real-life incidents as inspiration.
I would ask you: how many people have to die, or how much damage needs to be caused, before we put some basic security standards into regulation for companies? Will one person suffice before I can make that argument?
As for this:
>Neither of us has considered all of the possible effects of either action, so we should both carefully examine the consequences of our respective opinions.
1. Yes we should, and then we will run out of lifetime because we cannot possibly think of everything. But I'll assume you didn't mean it literally.
2. Considering all relevant consequences I can think of (feel free to add any I forget):
If we create a lower bound on security and mandate that all "consumer/industry grade" devices need to live up to that lower limit or face heavy fines, I see the following things happening:
* lots of stuff will not get created, because the margin would be gone.
* there will be a new market for security middleware, or a strengthening of the existing one
* there would be a halt in the increase of the power of DDoS attacks
* certifying your product as consumer or industry grade would need to be an efficient process and would probably add onto the cost of developing a product
* GPL and open-source software would have to be treated carefully. But if you start with IoT and don't touch "classic" software, that can be managed. Probably the distinction between "consumer/industry grade" doesn't make sense for server software and would have to be shifted to the process, i.e. a company can use any software they want on their servers AS LONG AS they make use of certain key technologies and industry best practices and have a good process in place. (Yes, this means no more SaaS without a security team... or a founder willing to learn that shit, or a new company providing that service. You could make an exception for revenue below 50k/year or something if you really want.)
Those are the rough consequences I can see from regulation. Nothing too negative, imo.
Okay, but you are massively underestimating the cost. I feel bad responding to your comment with essentially a one-liner, so even though it's late, here you go: "certifying your product as consumer or industry grade would need to be an efficient process and would probably add onto the cost of developing a product" does not at all capture how thoroughly screwed a new startup will be if they have to devote $60k to a pentest before even getting off the ground. That would have sunk Apple, for example. I don't think most people recognize or appreciate how brittle startups are at the very beginning.
You're also assuming that there are two states: "secure" and "not secure." You're further assuming that there is a way to transition from one to the other, by "becoming secure" through some state-mandated process. But it just ain't so. No matter how much money you throw at it, you can only increase security, you cannot prevent security problems. If you've shipped code, you've probably introduced some security problems. We should still try to improve the situation, but it is nearly impossible to make software secure. And in the meantime, it's the perfect tool for competitors to stamp out competition, since only incumbents can afford to be labeled as "secure" (when they're not).
I don't think it would significantly curb the power of DDoS attacks, both because DDoS attacks are an inherent problem with the web's design and because hackers are always finding new and innovative ways to increase their DDoS power anyway. Some IoT devices aren't going to make a huge dent in their abilities in that regard.
What would help is if pentesting became a regular occurrence due to massive cost reduction. It shouldn't take $60k for a pentest, but it does. The way to achieve this is through open-market competition. Regulation will only raise the price.
Regarding your first point, I could take the easy route and say nothing. It's tragic when anyone dies. But it's also a fact of life. Many people will die to self-driving cars, yet there will be many thousands fewer deaths thanks to them. The idea that a single human life is more valuable than raising the standard of living for everybody is as strange to me as it is to you that a single life isn't the most valuable thing.
Obviously, my perspective would probably change if someone close to me died due to some fool's bad software. But in that circumstance, the justice system would be as available to me as it is to you, and it was designed for just such an occurrence. It would not offset the heartbreak I'd feel, but at least society has processes in place to do something.
> this is me seeing the implied "but we can deal with it worse if startups fail"
This seems self-evident: Many of the enjoyments we take for granted are thanks to startups. Our quality of life has dramatically improved due to the technology they help usher in. Technological progress is not nearly as inevitable as everyone would like to believe, and it's easy to forget how much better our lives are thanks to it.
I'm currently working at a startup (5 people full time) doing IoT devices. I am by no means a security expert; but as the primary software engineer (and being rather afraid of the internet), I've assumed the role of "security guy."
While I've certainly spent time on getting everything to a point that's "secure enough" to let me sleep at night, that amount of effort has in no way jeopardized our business.
And it's really simple things that cover the vast majority of problems:
- Encrypt everything.
- Use unique certificates/keys for every device, no master keys.
- Have (and use) automatic update capabilities.
- Don't use default credentials anywhere for anything.
- Disable unused protocols/close unused ports.
After the initial setup work, we incur a (very) minor additional cost of manufacturing to provision unique keys instead of flashing identical images. That's it.
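To make that concrete, the provisioning step is conceptually just this (a minimal sketch, assuming Python with the PyNaCl library; provision_device and the record layout are made up for illustration, not our actual tooling):

```python
# Hypothetical per-device provisioning step, run once on the factory line
# instead of flashing one identical image that shares a master key.
import json
import uuid

from nacl.encoding import HexEncoder
from nacl.signing import SigningKey


def provision_device():
    """Generate a unique identity and keypair for a single device."""
    device_id = str(uuid.uuid4())
    signing_key = SigningKey.generate()   # private key: stays on the device
    verify_key = signing_key.verify_key   # public key: registered with backend

    device_secrets = {
        "device_id": device_id,
        "signing_key": signing_key.encode(encoder=HexEncoder).decode(),
    }
    backend_record = {
        "device_id": device_id,
        "verify_key": verify_key.encode(encoder=HexEncoder).decode(),
    }
    return device_secrets, backend_record


if __name__ == "__main__":
    secrets_blob, record = provision_device()
    # In a real line the secrets would go into the device's secure storage and
    # the record into the fleet database; printing stands in for both here.
    print(json.dumps(record, indent=2))
```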
What this regulation would really do is extinguish a lot of the bullshit cheap/knockoff products coming out of China.
It's good that you secured your IoT device. But if this regulation were in effect, you would not be declared secure until you had a pentest. That means regardless of how much effort you put in, you'd need to pay a (relatively) massive amount of money.
The regulation I have in mind wouldn't require a pentest, at least not for all classes of devices. Something along the lines of self-regulation, or providing basic documentation that you've adhered to some set of guidelines, would be a good first step. If you're found in violation of those guidelines, though, by all means require a pentest going forward.
I honestly haven't a clue whether or not that might work in practice though.
>I feel bad responding to your comment with essentially a one-liner, so even though it's late,
ah, the strategic advantage of timezones :P
> Okay, but you are massively underestimating the cost.(... )here you go: "certifying your product as consumer or industry grade would need to be an efficient process and would probably add onto the cost of developing a product" does not at all capture how thoroughly screwed a new startup will be if they have to devote $60k to a pentest before even getting off the ground. That would have sunk Apple, for example. I don't think most people recognize or appreciate how brittle startups are at the very beginning.
1. Apple grew up in a different world.
2. For consumer wares we are talking about an inspection of whether or not they conform to basic security (SSL, an update mechanism in place, salting and strongly hashing passwords), not a pentest. A lot of people (including me) learned this stuff from lurking on the internet. It could even be a self-audit, making you financially liable if you cannot document your security when something breaks (we hereby certify that the product fulfills the following basic standards: strong hash and salt => using NaCl, etc.; see the hashing sketch after this list).
3. I have been involved in enough startups now that I know how fragile they are. And bluntly put: so what. Society makes rules, businesses conform. If it is properly enforced, it just becomes another thing everybody has to do, not a competitive disadvantage (long term even an advantage if your economy runs on secure devices)
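Since I pointed at NaCl for the "strong hash and salt" part, here is roughly what that baseline looks like in code (a minimal sketch, assuming Python with the PyNaCl library; hash_password and check_password are made-up names for illustration):

```python
# Minimal password-storage sketch using PyNaCl's pwhash helpers; the library
# chooses the salt and work parameters itself, so there is little to get wrong.
import nacl.pwhash
from nacl.exceptions import InvalidkeyError


def hash_password(password: str) -> bytes:
    """Return a self-describing hash string that is safe to store."""
    return nacl.pwhash.str(password.encode("utf-8"))


def check_password(stored_hash: bytes, password: str) -> bool:
    """Verify a login attempt against the stored hash."""
    try:
        return nacl.pwhash.verify(stored_hash, password.encode("utf-8"))
    except InvalidkeyError:
        return False


if __name__ == "__main__":
    stored = hash_password("correct horse battery staple")
    assert check_password(stored, "correct horse battery staple")
    assert not check_password(stored, "wrong password")
```

If a company can document that it does something equivalent to this, that is the level of self-certification I have in mind, nothing more exotic.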
>You're also assuming that there are two states: "secure" and "not secure." You're further assuming that there is a way to transition from one to the other, by "becoming secure" through some state-mandated process. But it just ain't so. No matter how much money you throw at it, you can only increase security, you cannot prevent security problems. If you've shipped code, you've probably introduced some security problems. We should still try to improve the situation, but it is nearly impossible to make software secure. And in the meantime, it's the perfect tool for competitors to stamp out competition, since only incumbents can afford to be labeled as "secure" (when they're not).
No. I am assuming there is "obviously insecure", "not obviously broken", and a spectrum of "secure" which depends on your threat model. For the latter I agree with you: it cannot be defined. But you CAN define a bottom and bitch slap anyone into bankruptcy who thinks they can screw with that.
>I don't think it would significantly curb the power of DDoS attacks, both because DDoS attacks are an inherent problem with the web's design and because hackers are always finding new and innovative ways to increase their DDoS power anyway. Some IoT devices aren't going to make a huge dent in their abilities in that regard.
> What would help is if pentesting became a regular occurrence due to massive cost reduction. It shouldn't take $60k for a pentest, but it does. The way to achieve this is through open-market competition. Regulation will only raise the price.
Moot, because I specified earlier that we are not talking about full pentesting. But even if not: first force EVERYONE to get it, drive up the prices, then people will start learning it, drive the prices down heavily, and start to automate. Look at web developers. Once prized, now a dime a dozen (compared to before). Pentesting is no different from anything else: it can be tiered, with the lower tiers more and more automated and pressed into frameworks.
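To make the "lower tiers automated" point concrete, the bottom tier is the kind of thing you can already script (a rough sketch, Python standard library only; the host and the port list are placeholders, not a proposed standard):

```python
# Rough sketch of a bottom-tier automated check: the service port speaks TLS
# with a certificate the system trust store accepts, and legacy plaintext
# services like telnet/ftp are not listening.
import socket
import ssl

LEGACY_PORTS = [21, 23]  # ftp, telnet -- placeholders for "should be closed"


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def tls_ok(host: str, port: int = 443) -> bool:
    """True if the host presents a certificate the default trust store accepts."""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=3.0) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() is not None
    except OSError:  # ssl.SSLError is an OSError subclass
        return False


def baseline_report(host: str) -> dict:
    return {
        "tls": tls_ok(host),
        "legacy_ports_closed": not any(port_open(host, p) for p in LEGACY_PORTS),
    }


if __name__ == "__main__":
    print(baseline_report("example.com"))  # placeholder host
```

Higher tiers obviously still need humans, but this is the part I mean by "pressed into frameworks".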
> Regarding your first point, I could take the easy route and say nothing. It's tragic when anyone dies. But it's also a fact of life. Many people will die to self-driving cars, yet there will be many thousands fewer deaths thanks to them. The idea that a single human life is more valuable than raising the standard of living for everybody is as strange to me as it is to you that a single life isn't the most valuable thing.
We are not only talking about some people dying vs. a higher quality of life. We are talking about millions of people possibly being vulnerable to extortion, data theft, degradation of services through DDoS, etc., ON TOP of people dying vs. a slightly slower rollout of a higher quality of life.
> Obviously, my perspective would probably change if someone close to me died due to some fool's bad software. But in that circumstance, the justice system would be as available to me as it is to you, and it was designed for just such an occurrence. It would not offset the heartbreak I'd feel, but at least society has processes in place to do something.
Can't comment
>this is me seeing the implied "but we can deal with it worse if startups fail"
>
> This seems self-evident: Many of the enjoyments we take for granted are thanks to startups. Our quality of life has dramatically improved due to the technology they help usher in. Technological progress is not nearly as inevitable as everyone would like to believe, and it's easy to forget how much better our lives are thanks to it.
This is one of the biggest myths... startups are good at developing products. Established companies, universities, and institutes are good at developing new technology and driving science forward.
The latter is non-obvious, fragile, and in decline. The former is driven by making money and is a lot more self-determining.
"From scratch" Startups have given us (if we simplify a lot) PCs, Amazon 1.0, numerous SaaS,etc. all making previously very cumbersome and clutchy tech viable for the market. Which is a huge risk, tremendous work and deserves all the money they make.
But the PC was done before at Xerox, amazon could only start doing research after it had become the retail giant it is now and Intel, Google and almost all other highly innovative Companies had their roots in academia, being either spin offs or PhDs applying their knowledge into business. Silicon valley exist mainly because of the US military (not only but especially because of DARPA), not just because of some "self made businessmen" taking risks (they made huge contributions I don't want to disparage, but "great person" history is strong in our cycles). And some of the largest contributions to our way of life had nothing to do with business at all (Tim Werners Lee, Linus Torvalds and above all Richard Stallman don't get nearly enough recognition IMO. Especially RMS created a whole new world of value creation by popularizing the idea of software freedom and making sure generations could learn freely). Startups don't contribute to this nearly as much as they like to claim.
Heck, even Peter Thiel is moaning about startups not really tackling high tech issues any more. SpaceX and Tesla are some of the rare shining innovative lights, but a lot of the value that startups provide comes from applying existent tech to old processes (Salesforce, netflix, uber, airbnb, zenefits...). IoT will probably be in the second category, not the first.
So if they are only going to modernize existing processes with new tech, lets take the time to force them to do it right from the beginning
> For consumer wares we are talking about an inspection of whether or not they conform to basic security (SSL, an update mechanism in place, salting and strongly hashing passwords), not a pentest.
This is a pentest. Whether you call it something else is irrelevant.
> Moot, because I specified earlier that we are not talking about full pentesting. But even if not: first force EVERYONE to get it, drive up the prices, then people will start learning it, drive the prices down heavily, and start to automate. Look at web developers. Once prized, now a dime a dozen (compared to before). Pentesting is no different from anything else: it can be tiered, with the lower tiers more and more automated and pressed into frameworks.
This isn't true. Pentesting is fundamentally different from programming, in ways that are subtle and non-obvious. It's not something that can be automated, for the same reason you can't determine whether an arbitrary program will halt.