> Every user needs to know their devices and vendor before purchasing.
This completely ignores the reality of how the majority of people buy and use their computers. Sure, I personally (as the HN user typing this comment) can portscan my router or solder on a serial port to get at the system console... but that's not something that can reasonably be required of the average Joe.
Negligence on that level should be dealt with just like live cables exposed on the outside of an appliance: have some entity go into stores, buy the box, check it for the worst vulnerabilities, declare it unfit for sale, and ban the vendor.
We do this for children's toys, electrical appliances, and more, so why not do it for IT equipment? Just to weed out the worst offenders in terms of security holes.
Be careful what you wish for... giving more control to the authorities is something that may have quite oppressive long-term effects.
You might solve this problem in the short term, but end up only being able to buy approved devices that contain more obscure, government-sponsored backdoors instead.
Yeah, I mean, who hasn't heard about the time the UL people snuck a backdoor into someone's lighting so the DEA could kill pot farms? Or that time when Consumer Reports forced car manufacturers to give the highway patrol a backdoor to prevent speeding?
Ignoring the history of non-government regulation, there's a reason why the FCC isn't the agency which recorded everyone's phone calls. There are huge civil liberties issues right now but they're orthogonal to consumer protection; we'll need to rein in the NSA no matter how or if we decide to do anything about companies shipping shoddy devices.
I know you fear losing control and utility of your computing devices (I do too!) "because security", but I'm with GP on this one. This level of negligence is something the public needs to be protected from. It's akin to a company manufacturing toys from toxic materials. Whether by accident or cost-cutting, the product is dangerous and should be taken off the shelves, and the producer should be liable for any damage it caused. It's a matter of consumer protection.
Like all those lemon laws. Used car dealers' reputation might someday stop being synonymous with fraudsters who try to peddle broken machines to gullible people.
I must say you are taking quite a leap from "do not sell devices that are liable to harm consumers" to "sell only approved devices that have government-sponsored backdoors in them". Should consumers be scared that the government will try to do the same with food safety?
I'm sure somebody will point out to me why this is a terrible idea, but I've thought that ISPs could probably detect traffic known to be coming from or going to botnets and then send a warning letter to the customer. They can do it for supposedly illegal file sharing, so why not do something helpful and let customers know that fishy traffic is coming from their system?
I don't mean they should be able to shut you off or anything, but I personally wouldn't mind a notification letting me know that I may have been hacked.
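A rough sketch of how that detection might work in practice, assuming a hypothetical feed of known command-and-control addresses and a simplified flow-log format (the file names, columns, and notification step are all placeholders, not anything an ISP actually ships):

```python
# Sketch: match flow records against a feed of known botnet C&C addresses
# and print a notification per affected customer. File names, record format,
# and the "notify" step are hypothetical.

import csv

def load_blocklist(path="cnc_addresses.txt"):
    """One known C&C IP address per line (hypothetical feed format)."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def flag_customers(flow_log="flows.csv", blocklist=None):
    """Yield (customer_ip, remote_ip) pairs whose traffic touches the blocklist."""
    blocklist = blocklist if blocklist is not None else load_blocklist()
    with open(flow_log, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: customer_ip, remote_ip
            if row["remote_ip"] in blocklist:
                yield row["customer_ip"], row["remote_ip"]

if __name__ == "__main__":
    for customer, cnc in flag_customers():
        print(f"notify {customer}: traffic observed to known C&C host {cnc}")
```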
They are allowed to look into the traffic only as much as they need to in order to maintain quality service. They notice stuff like botnets and piracy because both of these activities have the potential to generate abnormal amounts of traffic. Another reason they are likely to notice these things is because a third party will often notify them about the activity. They would have to monitor your connection/activity in a way that's highly unethical and possibly illegal in order to detect anything that isn't overly noisy.
That makes sense. I'd imagine though that it shows up on the ISP's radar when major DoS attacks happen. It seems like they could set up alarms for such things. Maybe it could be an opt-in thing.
It wouldn't even be that hard: Just require the vendors to publish the state machine diagram, and if an examiner discovers a hidden “weird machine”[1], they recommend a sanction that depends on how powerful the weird machine is. If it’s Turing equivalent, then it gets banned outright. If the weird machine was knowingly created and hidden, then we start talking about fines and other penalties.
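As a toy illustration of what an examiner could check, here's a hedged sketch that compares transitions observed during testing against a vendor-published transition table and flags anything undocumented; the states and inputs are made up for the example:

```python
# Sketch: any transition observed in testing that the published state machine
# diagram omits is a candidate "weird machine" entry point worth examining.
# All state and input names below are illustrative.

PUBLISHED = {
    ("idle", "login_ok"): "authenticated",
    ("idle", "login_fail"): "idle",
    ("authenticated", "logout"): "idle",
}

def undocumented_transitions(observed):
    """Return transitions seen in testing that the published diagram does not define."""
    return [(s, i, n) for (s, i, n) in observed
            if PUBLISHED.get((s, i)) != n]

observed = [
    ("idle", "login_ok", "authenticated"),
    ("idle", "magic_packet", "debug_shell"),  # hidden behavior: flag it
]
print(undocumented_transitions(observed))
```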
Most IT equipment is not built in such a way that its behavior is formally verifiable, or even formally describable as a state machine - for the most part, we're talking about a full Linux installation that comes with the vendor's preferred web server (for administration) and a default routing configuration.
Sorry, I should have made it more explicit that I'm just talking about the stuff at the edges. For instance, we can do this with something like a TCP library just by evaluating it against the state machine published in the RFC. Similarly, custom protocols are ultimately just state machines, and it's easier to secure a protocol when you limit it to a regular or context-free grammar[1]. But, if the vendor insists that their protocol needs a more powerful grammar, then at least we can start talking about the security/feature trade-offs in formal terms (i.e. vendors would have to justify their decisions with math and computer science instead of feature checkboxes and pricing psychology).
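For a concrete (and heavily simplified) illustration of "evaluate it against the state machine published in the RFC", here's a sketch that replays an implementation's transition trace against a small subset of the RFC 793 connection states; the event names, the trace format, and the idea of an implementation emitting such a trace are assumptions for the example:

```python
# Sketch: conformance check of an observed transition trace against a tiny
# subset of the TCP connection state machine from RFC 793.

RFC793_SUBSET = {
    ("CLOSED", "passive_open"):  "LISTEN",
    ("CLOSED", "active_open"):   "SYN_SENT",
    ("LISTEN", "recv_syn"):      "SYN_RCVD",
    ("SYN_SENT", "recv_synack"): "ESTABLISHED",
    ("SYN_RCVD", "recv_ack"):    "ESTABLISHED",
    ("ESTABLISHED", "close"):    "FIN_WAIT_1",
}

def conforms(trace, spec=RFC793_SUBSET, start="CLOSED"):
    """True iff every (event, claimed_next_state) step is allowed by the spec."""
    state = start
    for event, claimed_next in trace:
        expected = spec.get((state, event))
        if expected is None or expected != claimed_next:
            return False
        state = expected
    return True

# Traces captured from a hypothetical implementation under test:
print(conforms([("active_open", "SYN_SENT"), ("recv_synack", "ESTABLISHED")]))  # True
print(conforms([("active_open", "SYN_SENT"), ("recv_rst", "DEBUG_MODE")]))      # False
```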
Having documentation or source code is nice, but it's not going to help on its own, because you have to verify that the system in question actually conforms to the documentation. With software this is possible, to a certain extent, via static analysis, but with hardware it's damn near impossible.
Nonetheless, having solid formalizations for network protocols is a good idea. But it just won't happen.
Sure, that works for a restricted set of vulnerabilities. But security vulnerabilities generally don't stem from bad protocol implementations; they stem from misconfigurations of otherwise-functioning software and bad policy choices. (For example, this vulnerability was just an extra hardcoded admin account in the router's web console.)
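To illustrate the kind of simple audit that catches this class of bug, here's a hedged sketch that tries a short list of known default or reportedly hardcoded credentials against a router's web console; the URL, form field names, and credential list are all hypothetical placeholders, not details of any specific device:

```python
# Sketch: flag any suspect credential pair that the admin web console accepts.
# URL, form fields, and the success check are naive placeholders.

import urllib.parse
import urllib.request

SUSPECT_CREDENTIALS = [
    ("admin", "admin"),
    ("admin", ""),           # blank password
    ("support", "support"),  # the sort of hidden account described above
]

def check_hardcoded_accounts(base_url="http://192.168.1.1/login"):
    """Return the credential pairs that the web console appears to accept."""
    accepted = []
    for user, password in SUSPECT_CREDENTIALS:
        data = urllib.parse.urlencode({"user": user, "pass": password}).encode()
        try:
            with urllib.request.urlopen(base_url, data=data, timeout=5) as resp:
                if resp.status == 200:  # naive success check, fine for a sketch
                    accepted.append((user, password))
        except OSError:
            continue  # unreachable host or rejected login
    return accepted

if __name__ == "__main__":
    print(check_hardcoded_accounts())
```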
If I know what the state machine/protocol looks like, I can have a better idea about how much stuff should be present.
For example, if the vendor claims that the product is limited to a regular grammar (a “plus” for security), but the implementation contains extra stuff in the data section, then an audit can flag it since a finite state machine should (theoretically) only need code.
Since IPv4 and TCP each have a “length field”, they're stuck being context-sensitive, I suppose. But, on the other hand, since the length fields are bounded, you could treat each value as a separate token (though it would create a huge state machine).
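To make the "treat each bounded length value as a separate token" idea concrete, here's a hedged sketch of a recognizer for a toy message with a 1-byte length prefix; the counter simulates the unrolled finite state machine (one counting state per possible length value), which is exactly where the state blow-up comes from:

```python
# Sketch: a length-prefixed format is context-sensitive in general, but with a
# bounded 1-byte length field it unrolls into a finite state machine whose
# states are "how many payload bytes remain".

MAX_LEN = 255  # bound on the length field (one byte)

def accepts(message: bytes) -> bool:
    """FSM-style recognizer for <length byte><exactly that many payload bytes>."""
    if not message:
        return False
    remaining = message[0]        # the length token picks the counting state
    if remaining > MAX_LEN:
        return False
    for _ in message[1:]:
        if remaining == 0:
            return False          # extra bytes beyond the declared length
        remaining -= 1            # move to the next counting state
    return remaining == 0         # accept only if the count ran out exactly

print(accepts(bytes([3, 0x41, 0x42, 0x43])))  # True: length 3, three payload bytes
print(accepts(bytes([3, 0x41])))              # False: truncated message
```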
But routers are not the only edge devices that worry me. Smart TVs|phones|toasters|… are also edge devices with respect to the topology of your “personal area network”.
Yes, exactly. In the US, this is the thing the NSA should be going after ("pentesting for the people"), unleashing their wrath on vendors who sell products like this. (How many politicians use a vulnerable router like that? How many public services? It could endanger lives.) This is the thing the EFF should be making easy money on.