Does anyone know why router manufacturers aren't financially responsible for the exploits that allow their devices to be hacked?
At the very least there should be some kind of policy or standard that allows someone on the inside of the network to know if the password or software has been changed. If the FBI can tell from the outside, then how in the world are people still in the dark about this?
If the exploit wasn't put there intentionally, then we're talking about a bug in the software. Do you really want liability for software bugs? The consequences of that would be substantial. Imagine if Apache or PHP were liable for their bugs used on websites across the internet. The projects would shut down immediately; no one could fund the potential liability.
I realize the implications of this are significant.
I don't think the solution is "all bugs cost every company money for every product", but some software clearly carries more risk than other software, and we are well past the point of negligence with router manufacturers; the vulnerabilities we see from them are absolutely absurd.
This is going to be really, really hard without turning into a mess. Software is complex, and bad software even more so, and an integrated hardware/software system is even worse. Even finding the vulnerabilities is hard already, because lots of systems are snowflakes and each needs to be analyzed individually, and usually in individual ways.
And even assuming we have a definition of 'infrastructure software' and a way to reliably enumerate a set of vulnerabilities, attribution of liability is even harder:
- Is the distributor of the router liable for a vulnerability in a used library? Surely they could vet and review libraries.
- What happens if that library is openssl and almost all webservers on the internet are vulnerable?
- What happens if the library is used in an insecure way? For example, if you seed openssl or libressl with weak random numbers, it is possible to attack algorithms provided by the library.
- Conversely, if the author of a library is liable, what happens if I use a company's library and intentionally build something vulnerable with it?
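The insecure-use bullet above has a concrete shape. Here's a minimal, hypothetical Python sketch (using the standard `random` module rather than openssl itself) of why generating secrets from a seedable, deterministic PRNG is attackable:

```python
import random
import secrets

def weak_token() -> str:
    # INSECURE: random is a deterministic PRNG; if an attacker can guess
    # the seed (boot time, PID, a fixed constant), they can replay the
    # whole sequence and recover the "secret" token offline.
    random.seed(12345)  # stand-in for a low-entropy seed
    return "".join(random.choice("0123456789abcdef") for _ in range(32))

def strong_token() -> str:
    # secrets draws from the OS CSPRNG; there is no caller-visible seed.
    return secrets.token_hex(16)

# The weak token is fully reproducible from the seed alone:
assert weak_token() == weak_token()
```

The library primitives are fine in both cases; the vulnerability comes entirely from how the integrator feeds them, which is what makes attributing liability for "insecure use" so messy.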
I dislike being so negative about it, but I wouldn't want to get sued because I stuck an MIT license on a silly project 10 years ago that someone necro'd and stuck into a router, so to say.
When an aircraft crashes and the NTSB gets involved to investigate, does the aircraft manufacturer get sued? I don't know. Why can't we set up something like the NTSB for critical infrastructure, at least? I can see how setting it up for consumer stuff would suck. But the purpose of the org would be to make stuff safer and develop best practices, not assign blame. I'm probably naive, as I don't know all the good and bad details of how NTSB investigations work. But I figure that's as good a place as any to find inspiration for something that could work?
OSS libraries are almost always shipped with a no-warranty license. Your router, however, is not. The vendor shipping a hardware product IS responsible for the quality and safety of that product. By shipping no-warranty libraries as part of their product to lower their costs, they absorb the liability themselves.
Sounds like it would be smartest for all software to come with a warning that it may be vulnerable to malicious interference. It'll be the software version of a Prop 65 warning.
There's no such thing as absolutely safe software.
We have fairly complex rules already in place when it comes to hardware safety. Manufacturers aren't liable if they can show that they performed due diligence in designing their product. Why can't we have something similar for software?
Except we can forbid by law attaching such devices to public networks. If you want to attach a router with no security guarantees, buy a router with a security guarantee and plug it in ahead of the vulnerable one.
Perhaps a “UL Labs” type of solution for software, so that if your software and organization are certified according to the current standard, then your liabilities would be reduced?
And yes, organizations and versions of software would have to be recertified on a regular basis.
You would want software versions to be able to be certified quickly and through an automated process, but there is already some best practice in this space — it’s just unevenly distributed.
If a certification authority says the software is secure and security flaws are later found, then who is liable?
IMHO, we should use an optimistic check: any public network device must carry a vendor guarantee to fix any remote vulnerability within 30 days of its discovery by an independent security organization; otherwise the vendor is liable for the damage done by the device.
> Even finding the vulnerabilities is hard already, because lots of systems are snowflakes and each needs to be analyzed individually, and usually in individual ways.
When it comes to SOHO routers it's not as hard as it should be, by a long shot. Tons of hardcoded creds and pretty surface-level vulns in them.
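The hardcoded-credentials pattern looks roughly like this in firmware login code; a hypothetical Python sketch, with names and credentials invented for illustration:

```python
# Hypothetical example of the backdoor-credential antipattern often found
# in SOHO router firmware. Anyone who unpacks the firmware image can read
# these constants and log in to every deployed device.
BACKDOOR_USER = "admin"
BACKDOOR_PASS = "support123"

def login(user: str, password: str, user_db: dict) -> bool:
    # The vendor "support" backdoor bypasses the user database entirely,
    # so changing your admin password does not lock this path out.
    if user == BACKDOOR_USER and password == BACKDOOR_PASS:
        return True
    return user_db.get(user) == password
```

This is exactly the class of bug that shows up as "hardcoded creds" in router advisories: trivially findable by running `strings` over the firmware image, which is why it reads as negligence rather than an honest mistake.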
> - Is the distributor of the router liable for a vulnerability in a used library? Surely they could vet and review libraries.
Yes.
> - What happens if that library is openssl and almost all webservers on the internet are vulnerable?
Everyone deploying it is liable.
> - What happens if the library is used in an insecure way? For example, if you seed openssl or libressl with weak random numbers, it is possible to attack algorithms provided by the library.
The company doing so is liable.
> - Conversely, if the author of a library is liable, what happens if I use a company's library and intentionally build something vulnerable with it?
You are liable.
As in, the person who produces the product is liable for what they put in the product.
But, as I said elsewhere, this is all off the cuff and relies on a way to properly classify software, which is extremely hard.
But yeah, to your points, none of those feel hard to deal with at all.
I'm not a lawyer. This is not an area I'm very familiar with. But we already have some controls for credit card info, and extending those further would be hard but a good start. This situation is extremely out of hand, whether I have the solution today or not.
I have a very hard time believing you’ve developed software that has been released. I’ve always done my very best to release robust and stable software, and I’ve still shipped bugs. Should I be sued out of existence?
We don’t need hardware and software costs spiraling out of control like healthcare because of the liability. If device makers would just support their products (bug fixes) for 10(?) years, I think that would do it.
It seems to me this is one of those situations where the programmer mindset does not properly interface with the lawyer mindset.
In programming, you have true and false, and generally things fall into one or the other category with no human input.
In law, you have concepts like "reasonable", and a whole lot of human input, by design.
So my expectation would be that if software vendors were to be held responsible for bugs in their products, the standard they would be expected to adhere to would be "reasonable expectation of proper functioning", with humans interpreting what "reasonable" means.
On the contrary, it's the legal perspective that's most worrying. If every software bug carries the potential for liability, there's no way your legal department will let you have a widely-visible bug tracker, or easily report bugs at all. It'd be much like copyright violations are treated today, where there's a formal process to raise the issue and everyone's specifically trained not to discuss them openly.
What makes software so special in engineering, aside from the amazingly terrible culture we've crafted around it?
Change the phrase "software bug" to "engineering error". Then consider the liabilities involved with the manufacture of any real-world-might-kill-someone product. The lawyer's view starts to make a helluva lot more sense.
Whilst that is true, it doesn't have to be that way. If, from day one, liability for software bugs had led back to whoever wrote the code, then the world would be a much different place.
For one thing, we would be much more conservative in how we wrote code. Libraries would be vetted, with insurance contracts attached to them. Programming languages would not allow for dynamic data, type coercion, or weak typing. In the 80s, rather than C rising to prominence, Ada would have. Haskell would be our generation's JavaScript, and XHTML would have won over HTML5 simply because guessing how rendering should work would open up a browser maker to high fines and lawsuits.
We'd have to rewrite our entire software stack from the ground up. It's not impossible, but we'd have to view it as a multi-decade transition like how the chemical industry was slowly forced to not use heavily polluting procedures.
Other engineering disciplines are very conservative. Given software's relative youth, the comparison doesn't hold up well.
There is tradition, but much of it is military in origin, not civil: the ARPANET, Grace Hopper and the first bug, Turing and the Enigma. Steelworking as a military secret is, by comparison, thousands of years old. And so far, per the official record, leaked DB content has not killed anyone ... perhaps a few astronauts, but none of the people responsible were positioned to fix it, as would be the case in a family business of incorporated electricians.
Most routers are shit. Manufacturers slap together a version of linux, some crappy web ui, and ship it. It is unlikely to receive any updates or patches.
Manufacturers should be liable for the poor quality of the devices they make. Software vulnerabilities are a fact of life, because there is no driving force to be better. Strict liability would force the industry to be more like other engineering disciplines.
I've been shipping production software for years... I think you've misread my post if you think I said that every bug should lead to a lawsuit. It's right in the first part of my first post that we should not be holding every product equally liable for every bug.
There is a line somewhere, and beyond that line is negligence. A developer exposing a potential vulnerability in an internal service that does not handle sensitive information is clearly not across that line; a company that creates routers that constantly have serious holes and that handle sensitive information seems clearly on the other side.
Deciding exactly where that line exists is obviously complex.
>>>> - What happens if that library is openssl and almost all webservers on the internet are vulnerable?
>>> Everyone deploying it is liable.
> I've been shipping production software for years...
Have you ever shipped software which depends on openssl? If not, then pretend that you have. Since you believe that you are liable, can you give me a ballpark of how much money you think you personally should be sued for because you deployed something using openssl?
> Since you believe that you are liable, can you give me a ballpark of how much money you think you personally should be sued for because you deployed something using openssl?
This is a really ridiculous question. I've already stated that these things are complicated - you're asking for a hard number?
Companies should take responsibility for their users data, which includes understanding the risk involved in third party libraries they use.
If they're concerned about fees, invest in the security of the project you're using.
But this is all based on some hypothetical, undefined 'law', so arguing about the specific mechanics is pointless.
No, I am not asking for a hard number, I specifically asked for a ballpark.
> invest in the security of the project you're using
I agree, but slapping fines on developers for using openssl to enhance security makes it a bit hard for anyone to afford putting any extra money towards security
> arguing about the specific mechanics is pointless
Agreed, my goal is not to flesh out the mechanics, it is to demonstrate the pitfalls of such a law. I'm only asking for a ballpark (again, not a hard number) so that you can personally understand why you and every other serious developer would be sued out of existence unless you propose obscenely small fines (which would make the whole idea useless, because then developers could easily afford to be negligent without getting too much of an increase in fines).
> No, I am not asking for a hard number, I specifically asked for a ballpark.
Yes, and it's a fake law that doesn't exist. How would I possibly answer this?
Offhandedly, I'd say that the fine could really range depending on a lot of things. Was this an outdated version of OpenSSL that they just didn't patch? Was it a programmer error using the library? A 0day? All of these things would probably make a big difference; charging companies for 0days in 3rd party code, in at least many cases, would not make sense.
> I agree, but slapping fines on developers for using openssl to enhance security makes it a bit hard for anyone to afford putting any extra money towards security
At some point if companies can't afford to keep users safe maybe they just shouldn't be companies. And if we're talking about router companies, they have the cash.
> why you and every other serious developer would be sued out of existence unless you propose obscenely small fines
I would imagine instead that companies would have insurance around these issues to cover developers, but again, the legal components of this are not something I'd want to get into since I'm not qualified to.
>charging companies for 0days in 3rd party code, in at least many cases, would not make sense.
Well you've just made the workaround trivial. Open source the majority of everything under a separate org and then use that stuff from the product being shipped. Therefore any vulns are not their problem.
>And if we're talking about router companies, they have the cash
I don't think you understand how tiny the margins are in consumer networking gear. Lowest price dominates.
>the legal components of this are not something I'd want to get into since I'm not qualified to.
If vendors refuse to update a vulnerable OpenSSL library when a fix is provided for free, putting their customers at risk, then they should be punished until they change their attitude. Volunteers are doing that for free. Why can't businesses pay the OpenWrt project for 10 years of service for their products?
Why would routers be handling sensitive information? You're doing something seriously wrong. Perhaps you should be fined for not encrypting your communications?
Do you sell the product? If you don't, why is this a concern? I think the parent's arguments are for software that's being sold, not just random silly project that you don't care about (which is obvious if you don't sell it).
Software is a mess by choice. Let those who made it a mess burn.
Nobody has half a fucking clue where the libraries they're slapping together come from, nor how they're maintained, nor how they're vulnerable. It gets worse every day with trash like DockerHub, and has no relief in sight.
So yeah -- let the folks who won't adhere to proper engineering burn.
Are there no laws w.r.t. negligence that can be used to punish negligent actors? If a door manufacturer is negligent in their construction of the door and someone gets robbed as a result, in violation of how they expected their door to work, is there nothing currently in the law that could help them?
Just as a door being breakable by sufficient force doesn't necessarily mean that the manufacturer is negligent, the fact that some software isn't perfect (i.e. contains bugs) doesn't necessarily mean that the developers are negligent.
> Just as a door being breakable by sufficient force doesn't necessarily mean that the manufacturer is negligent, the fact that some software isn't perfect (i.e. contains bugs) doesn't necessarily mean that the developers are negligent.
So you consider well-known and well-understood design limitations to be comparable to unknown defects?
I propose that hardware manufacturers be forced to divulge admin methods and encryption keys to their products 6 months after their software updates end.
At least users can apply workarounds in that condition. As it stands, there are no options for the owner of the device.
> Yeah, definitely. Especially for infrastructure.
The problem is that this is entirely useless.
There are basically two classes of software company.
The first is the likes of Google or Mozilla. They, as a rule, do the right thing. All humans make mistakes but the mistakes are understandable and there isn't really much we can expect to incentivize them to do that they aren't already doing.
The second is Fly By Night IoT Device Corporation. They make garbage, it has a million vulnerabilities, but they're judgment proof. If you sue them they just file for bankruptcy. Many of them don't even exist within your jurisdiction and the ones that do are likely to have gone out of business by the time you get around to filing a lawsuit. You might as well pass a law imposing liability on raccoons for spilling garbage.
There is a much better solution to all of this. Fund a government agency to search for vulnerabilities in popular products and report the vulnerabilities to the developers. Then remove products from the market that have had known unpatched vulnerabilities for more than a limited amount of time, and require updates to be offered to any product sold in the past X number of years.
Because it's a lot easier to get a company to spend $5000 in developer time to fix their garbage than to stop them from dodging a twelve billion dollar lawsuit by filing for bankruptcy -- which only leaves all their customers in the lurch with hardware that will then never be patched.
> I didn't say they did... I was providing two examples.
But Cisco (i.e. Talos) are the ones finding the vulnerabilities in routers made by other companies in this case.
> Maybe if companies building software can't afford to keep it safe... they shouldn't be companies? Is that so controversial?
They still would be companies though. That's the point. If they expect to be out of business by then regardless, or they're outside of your jurisdiction, or they know they're judgment proof, it doesn't change their behavior.
It's like trying to address homelessness by allowing the victims of panhandling to sue the perpetrators. There is no blood to be had from that stone.
All you do is make the problem worse, because every company you destroy is a company which is no longer around to patch their installed base of devices. Meanwhile they're immediately replaced in the market by another company which is no better.
Regulation and liability only works against monopolies and other huge companies. When you actually have a competitive market like this, you need to use the carrot rather than the stick.
>Regulation and liability only works against monopolies and other huge companies.
That's not true at all. Many industries that are very competitive and are full of small companies are effectively regulated.
>There is no blood to be had from that stone.
There's a simple fix to this problem. You require companies to carry insurance to cover the problems you're talking about. We do it with general contractors, doctors, tree removal companies etc...
The regulation says it's illegal to manufacture, sell, or distribute non-licensed routers, and part of the requirement for licensing is insurance.
I'm not saying that this is necessarily the best course of action in this instance, but there are definitely time tested solutions for the problem you're describing.
> That's not true at all. Many industries that are very competitive and are full of small companies are effectively regulated.
There are many industries that are very competitive, full of small companies, and subject to regulations, but what most commonly happens in those cases is that the regulations are rarely enforced, which nobody much minds, because the competition is preventing abusive practices regardless.
> There's a simple fix to this problem. You require companies to carry insurance to cover the problems you're talking about. We do it with general contractors, doctors, tree removal companies etc...
None of those things happen at scale. When a doctor makes a mistake, it affects one patient. A single security vulnerability can affect millions of people.
That's the problem with this. Typically what regulators try to do with risk is to find a deep pocket to stick it to that can absorb it with minimal consequences, but there isn't one here because the risk is large compared to the (inexpensive) cost of the device.
It's also a poor thing to try to insure because the main risk factor is code quality but insurance companies are generally not equipped to evaluate that. It doesn't help anybody to triple the price of every device just so the customer can still get pwned because once the insurance is covering it the developers lose the incentive the liability was supposed to be giving them to improve their security.
>There are many industries that are very competitive, full of small companies, and subject to regulations, but what most commonly happens in those cases is that the regulations are rarely enforced, which nobody much minds, because the competition is preventing abusive practices regardless.
Again, I don't think this is true at all. Some counterexamples: restaurants, electricians, general contractors, engineering firms, beauty salons, and tanning salons.
>None of those things happen at scale. When a doctor makes a mistake, it affects one patient. A single security vulnerability can affect millions of people.
A single bridge failure can affect an entire city, and a large building collapse could cost billions in payouts--yet engineering and construction firms can and do buy insurance to cover these things.
Magnitude isn't a problem here, even insurance companies buy insurance from larger insurance companies.
>It's also a poor thing to try to insure because the main risk factor is code quality but insurance companies are generally not equipped to evaluate that.
Insurance companies are able to evaluate risk factors for every industry--they are better able to evaluate risk than anyone else, and it's not like they wouldn't hire domain experts.
There's nothing special about software in this regard--it's a complex system, but the insurance industry regularly insures against damage resulting from far more complex systems than software--weather, for instance.
I'm not sure if requiring router manufacturers to buy insurance would lead to a net benefit or not, but the problems you're creating have already been solved by other industries. There's nothing magic about software that makes it impossible to insure.
I'm not sure what sort of liability you're wanting here. Criminal culpability for this sort of thing is simply against the American social contract. You go to jail for specific things that have been previously made illegal, not just for causing public ills. And you can already sue companies if you want civil liability.
I agree that it's kind of hard to claim criminal negligence for software bugs; after all, it's very hard to make something bug-free, and software is that weird edge case where you can tell someone exactly what it does, but not in a way that is useful for preventing problems.
That said in extreme cases like e.g. airplanes I think it would be fair to consider it negligent when code is just brought into production without any kind of debugging or testing.
I guess the stakes decide when something is or isn't negligent.
For paid products? Yes. If you are selling a device, you should be liable for it, just like a car manufacturer would have liability if the brakes failed because they were improperly installed.
If you install someone's free plastic brake pads, you're at fault for whatever happens - and if someone offers to give you something for free and you agree, they don't have to follow through. If you buy them, there's an implied contract that you will receive them. In that implied contract, our society has also inserted "and they won't kill you." The idea that every time a dollar changes hands an implied contract is made underlies US business law. Related to this idea is how rights holders will be offered money for photographs they're giving away for free - because the person who wants to use the image wants to be protected by the implied contract. In this system money is the magic spice that makes agreements real. (If someone gives you their car for free, the courts will sometimes let them take it back. Not so if they sell it for a dollar.)
That's not how liability works at all. That's an idealistic interpretation of the law that has never actually existed.
It does not matter if you charge for your service or product, nor how much you charge, you can still be found criminally negligent or reckless if it kills people and your actions played an important role. There are very few exceptions to that.
In civil terms it's far more straight forward: if your free brakes kill people, you will be sued for it (almost guaranteed). From there they will attempt to prove that you were negligent regarding the quality of the free brakes you created.
You give away thousands of free chicken sandwiches which, without your knowledge, happen to contain bacteria that cause food poisoning and end up killing several people. You did a very poor, careless job at food prep (similar to the dangerously manufactured brakes). You're almost guaranteed to be pursued criminally and civilly for the deaths.
The security levels of most electronics products connected to the Internet, be it IoT or routers, can only be described as criminal negligence. There is no financial incentive for the manufacturers to invest into proper security, and they typically just add some junk to the firmware they get from the SoC vendor and ship.
I'm not sure I agree that making the vendors liable in case of vulnerabilities is the right way. It should probably suffice to require vendors to define a guaranteed product support period, where they are obliged to provide security patches, and then hold them liable if they fail that.
When you use free software, you are tied by its license which says:
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
No matter which component caused the bug, the device vendor that made the final product should always be responsible, unless it has a contract outsourcing the responsibility for a specific component to another entity. If the vendor simply used open source libraries, then it needs to review and take responsibility for the code, or hire a contractor to do so. This might also incentivize them to provide timely security updates, in order to minimize damages.
You need to bundle mandatory, sizable insurance as a requirement for getting the license to sell; selling without it would be illegal (i.e., treat the device like insurance sold by a provider not licensed to sell insurance, not like the practice of an unlicensed medical doctor).
The insurance would make the vendor fix his shit. And he can't just chicken out. Make some way for sufficiently large companies to self-insure, or they will be mad. If they don't get mad, they probably like the reduction in competition.
> If the vendor simply used open source libraries, then it needs to review and take responsibility for the code,
I can imagine a lot of licensing hassle here. Worked for Technicolor in Edegem, not on routers / STBs but know the challenges. There are tons of libraries used by virtually every device in the field that no company will touch b/c of licensing hell.
One difference: you can repair your brakes, or replace them, without having to get a new car. Most routers do not have updates available after the first one or two patches, and you can't even install your own OS on most. Your only options are: live with it, or buy a complete new router.
Depends on the nature of the bug, certain kinds of security holes are well known and should come with liability. Examples: SQL injection, default passwords, no encryption of sensitive information on the wire, no permission checks.
Some of these kinds of bugs are so well known that I don't see how someone could argue against liability.
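The first item on that list is easy to make concrete. A minimal Python/sqlite sketch (table and data invented for illustration) of the injectable query versus the parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name: str):
    # INSECURE: string interpolation lets attacker input rewrite the query.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name: str):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
# The injected predicate dumps every row via the unsafe path...
assert lookup_unsafe(payload) == [("s3cret",)]
# ...while the parameterized version treats it as a literal (nonexistent) name.
assert lookup_safe(payload) == []
```

The fix has been standard practice for decades and costs one line of code, which is what makes shipping the unsafe version look like negligence rather than an unavoidable bug.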
There shouldn't be liability for bugs; there should be liability for negligence. If you ship network-aware code you are negligent if you don't take reasonable steps to prevent bugs and have a reasonable process to patch bugs.
Companies should be liable for deficiencies in their commercial products... that's basic consumer protection.
Samsung had to recall and repair dangerously defective hardware - why not Cisco? Does it matter whether the public risk is in the battery or router memory?
And, imo, it follows that free OSS organizations are not liable for vulnerabilities. No money, no consumers. I think it's fair that businesses should expect to do their due diligence before blindly reusing public domain IP...
In Samsung's case, the defects could be directly responsible for damage to human lives. That's a far cry from software vulnerabilities. Not to mention that software bugs can be incredibly difficult to identify, and sometimes impossible until new exploitation methods are discovered, often well after the creation of the software, and sometimes only because of new software and hardware tools that become available later.
If you ever wanted an example of how to stifle innovation, read the comment I'm replying to.
My admittedly ambiguous threshold is "significant impact to quality of life"
Battery fires can kill, so even a handful is significant, but a security vulnerability that impacts thousands of routers has a lower but wider impact. Some companies will be targeted for DDoS or have the routers used to probe and infect company infrastructure... some consumers will end up paying ransomware, or having their finances hacked, or personal info leaked, or bandwidth siphoned.
Consider Target or Equifax: when they get hacked, they are court ordered to notify their customers. Does someone want to maintain a list of customers for every piece of connected hardware, in case something goes awry?
GDPR has shown that inconveniencing tech companies with legal consequences for their negligence can be a net boon to society. So yes, liability for software bugs. Hell, we probably need to start licensing programmers.
Because the market doesn't demand it. If you truly care, speak with your money and buy a WiFi system from a vendor that provides software/security updates and patches. There are a few out there doing this, but it's not cheap.
Yeah admin panels should never hit the internet unless you truly know wth you're doing in which case you can figure out what to do to put it online anyway.
This guy was given a table saw with the guard already removed, and was using it on the floor (a table saw should be used at table height, so that you can have a foot forward to prevent falling into the blade). He was apparently not using push-sticks.
Somehow, the table saw manufacturer was found 65% liable in the case, because technology exists to reduce the likelihood of injury when flesh contacts the blade. Specifically, SawStop, which I believe senses capacitance and fires an aluminum block into the blade.
Here's a rather biased and snarky rundown of the whole thing that links some important bits of the backstory:
One thing he misses is that PTI, et al had explored flesh sensing technology and pretty much didn't feel like doing it.
They also have opposed almost all safety standards.
They are also multi billion dollar conglomerates, often in countries with little respect for IP, so I imagine one reason he patented so much was to protect himself.
They are also a lobbying org, and he misses all the things they did, yet points out what sawstop did.
PTI has done a great job of lobbying here, unfortunately.
I am generally not a fan of the CPSC (what they did on magnets was beyond the pale), but here he also missed the biggest point they made: the cost of table saw injuries, to the government and insurers, is greater than the value of the table saw market!
I could go on.
He seems like a mostly reasonable person (moreso than most on this issue), but yeah.
There really are two sides to that part. Neither side is deserving of adoration, honestly.
The court case is much simpler.
For defective design liability in a lot of states, it's enough to show a feasible alternative design that would not have impacted function or price in a serious way.
Most of the argument was about the latter.
This stuff is now very cheap.
They are talking about the jobsite and home market, and if you compare that to what these videos show, yeah it's wildly unsafe
Most of these devices are insecure not because the attackers are hyper-sophisticated, but because the software is rushed and a second-thought to the hardware. There is no one (in power) at these companies that cares about crafting quality software. They just care about crafting the bare minimum to make their devices work.
I wager that "security" is something fairly far from their mind when they craft this software, which I consider especially negligent for any company that is dealing in networked devices.
It's even worse: most of these companies don't even write the software for the low end consumer hardware. They just license it from a third party, usually in Asia, and pay them to turn features on or off and skin the UI for their branding.
It's no surprise that routers from competing manufacturers are vulnerable, since it's all the same under the hood. The companies that sell the finished product have zero insight into how secure the software is.
It would probably be best to let the market regulate by itself but the issue here might be that it’s not visible to users that their router is infected.
Not true. Product liability lawsuits have been around for ages. It's just that the tech industry has been able to escape them, by and large. I think one of the greater injustices in business was Microsoft's avoidance of lawsuits over their spate of Windows malware from roughly 2003-2010. They just sat on their hands and let for-profit A/V companies and nonprofit volunteers secure their platform, while consumers lost millions of hours and billions of dollars simply to be able to use their computers again after being rooted by 4 lines of JavaScript.
That's probably an exaggeration, but I agree with the sentiment of what they are saying. It's not crazy hard to write quality software. Most corporations don't bother because they know they don't have to, since they can escape liability laws. It's gotten to the point where most software developers somehow view this situation as "normal".
Do you have a QA department? What do they do? If you're like most software companies, they do testing. This is not QA, though, it's QV. The role of QA is to assure that the quality of your product is high. By the time you're testing it, the product is "done" - all of the defects are already there in the product. There is now nothing you can do to affect the quality of the product - all you can do is test and measure the overall quality of the product.
I mean, your tests will find problems, and you can fix those problems. But, if you keep track of how many problems you find, and how much test effort was involved to find those problems, you can plot a weibull regression and make a prediction about how many defects you haven't found yet. On any project I've been on where we ran this analysis, it was obvious that testing was finding almost none of the problems - barely skimming the surface. And, when you ran this analysis you could predict how many defects you will find if you spend X days testing, and on any project where I've seen this analysis run, those predictions have been highly accurate.
Really good automated tests can help with this, because they can do huge amounts of test effort "for free", but really QA should be about the process you follow when you write software - making sure you have a repeatable process, and then figuring out how to improve it. Almost no one does this.
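The defect-prediction idea described above can be sketched roughly like this. This is a toy illustration, not anyone's production QA tooling: the data, the fixed shape parameter, and the crude grid search are all invented. The idea is to fit a Weibull-style discovery curve to cumulative defects found versus test effort, then extrapolate how many defects remain:

```python
import math

def weibull_cdf(t, shape, scale):
    """Expected fraction of all defects found after t units of test effort."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def fit_defect_model(efforts, cumulative_found, shape=1.0):
    """Crude least-squares grid search for the total defect count and time
    scale that best explain the observed cumulative defect counts."""
    max_found = cumulative_found[-1]
    best = None
    for n_total in range(max_found, max_found * 4):
        for scale in (s / 2 for s in range(1, 400)):
            err = sum((c - n_total * weibull_cdf(t, shape, scale)) ** 2
                      for t, c in zip(efforts, cumulative_found))
            if best is None or err < best[0]:
                best = (err, n_total, scale)
    return best[1], best[2]

# toy data: the discovery rate is flattening well before testing stops
efforts = [10, 20, 30, 40]       # e.g. tester-days
found   = [12, 19, 23, 25]       # cumulative defects found
n_total, scale = fit_defect_model(efforts, found)
remaining = n_total - found[-1]  # predicted defects still undiscovered
print(n_total, remaining)
```

With real project data you would fit the shape parameter too (or use a proper fitter such as scipy.stats.weibull_min), but even this toy version makes the commenter's point: extrapolating the fitted curve predicts how many more defects X additional days of testing would surface.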
This sounds like it's really expensive, and hard to prove the benefits of to most purchasers. I doubt we'll get the sort of industry change this would require without some strong formalization of the discipline.
I won't provide a source but i'll provide a modified real life example, since i'm pretty sure this vuln is still in the wild.
I was doing a pen test on a router whose manufacturer decided it would be an OK idea to use GET requests to launch their ping diagnostic tool on their router's unauthenticated QA web interface.
You could exploit this from literally any website on the internet, and since it's a GET request and we don't care about what it returns, CORS won't save your router. I think that counts as a javascript one-liner, but you get the idea how fucking awful some of these routers are.
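As a sketch of how trivial that class of bug is to abuse (the endpoint path, parameter name, and addresses here are hypothetical, not the actual vendor's): browsers will issue cross-origin GET requests for images without any user interaction, and CORS only restricts reading the response, not sending the request, so a side-effecting unauthenticated GET endpoint is exploitable from any page on the internet.

```python
from urllib.parse import urlencode

ROUTER = "http://192.168.1.1"  # typical LAN router address; hypothetical

def csrf_image_tag(target_host):
    """HTML an attacker could embed on any website: the victim's browser
    fires the GET at the router on their LAN, and since the attacker never
    needs to read the response, the same-origin policy doesn't help."""
    query = urlencode({"host": target_host})
    return f'<img src="{ROUTER}/diag/ping?{query}" style="display:none">'

print(csrf_image_tag("198.51.100.7"))
```

The fix is the usual one: require authentication, use POST with a CSRF token for anything with side effects, and never expose a diagnostic interface unauthenticated in the first place.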
I'm not sure whether you think a line of JS calling out to load a remote procedure would have been extraordinary, a root via a browser would have been extraordinary, any remote rootkit load would have been extraordinary, or whether the fact that it used to be possible to root Windows seems extraordinary. I'm no expert, but none of this seems extraordinary to me. Things were loose in those days.
I'm not sure what the OP was referring to, but off the top of my head there was a security vulnerability in Windows disclosed last year which seems to fit that description[1][2].
Why aren’t door manufacturers held responsible when a home is burgled? The locks can be picked, and the door is weak enough to kick open. The fact is you can’t make an unpickable lock or an impenetrable door. You also can’t make a sophisticated device like a router, with many features, unhackable.
It makes no sense from a capital perspective. The software provider would need to hold risk capital (money in the back pocket) like a bank or insurance company, but in proportion to the revenues of the companies they sell the software to, not the software company itself.
Do we have hard attribution to a particular nation-state? I know it's being implied that "Fancy Bear did it" but that seems like something anyone who didn't look closer would just buy regardless of truth.
Headline is a bit incorrect - a reboot will interfere with the malware by restarting its C&C process, which the FBI now controls. This does not eliminate the malware, but it will stop its data collection and make it more difficult for an adversary to activate it on a large scale.
As I say below, a reboot will force the router to hit the now-sinkholed domain. This will let the ISP identify customers with affected equipment and notify them.
The core message here that everyone should reboot their router is simple enough to survive on Twitter and be understood, whereas specific instructions about which devices are bad will likely be screwed up.
Presumably if they intended to use this maliciously, they wouldn't have told you about it. But in most cases, the FBI having control is still better than a random malicious actor having control, unless you belong to a certain high risk segment of the population.
In the long term, you want a fix for your router, or you want a new router.
Mine is similar to one of the affected units, enough so that it's likely vulnerable. I'm looking at replacing it.
> 100% of the US population [has an adversarial relationship with the government]
You're going many steps beyond simple exaggeration and pushing into extreme hyperbolic territory.
> classifying 66% of houses as constitution-free border crossings
You're inventing that, such a thing has not been classified by the US Government. If the government - local, state or federal - wants to search your residence in NYC or Los Angeles, they still need a warrant or equivalent court approval. If you were right, that wouldn't be the case.
> holding citizens for years without charges or trial
Show me the specific figures you have on how many times that has occurred in relation to the total number of people that have been arrested over a relevant time frame. It's extraordinarily rare in fact. Using events with very few instances to argue a premise of widespread occurrence, is an immense logic fail.
> a for-profit prison system that engages in de facto forced labor
The government prison complex (the supposedly non-profit oriented mass incarceration machine) is and has been dramatically worse. Over 95% of all people that have been put into prison in the last 40 years, during the war on drugs and mass incarceration phase, have gone into government prisons. During the epic Reagan and Clinton prison boom, the private prison industry had a single digit share of the prison inmates.
And now the incarceration rate is rapidly declining and has been for a decade. We're also pursuing the end of mass incarceration policies, with wide bi-partisan support. And we're also pursuing the end of the war on drugs, via legalization and decriminalization policies all over the US. If I were to use your argumentation approach, that means the expansion of private prisons is causing all of those things and is a good thing: as the private prisons have expanded their market share the last decade, all of those good things have finally started to happen.
> criminalizing mental health issues and withholding psychiatric care from insured people
What share of the population has suffered from the criminalization of which mental health issues? How many insured people are being kept from psychiatric care? Being vague doesn't support your topline premise, it detracts from it.
You've made an extraordinary claim and you didn't support it with much of anything.
>But in most cases, the FBI having control is still better than a random malicious actor having control, unless you belong to a certain high risk segment of the population.
That's like saying liver cancer is better than pancreatic cancer - true but not comforting.
In one case they went to court to get an OK from a judge to use the malware and seized domain(s) to then activity remove it from people's computers. They asked a judge because while they can seize a botnet, actually altering people's computers might be seen as bad so it seems they went about it the right way.
If they were going to be all evil I doubt they'd be talking about rebooting routers and such ;)
I would hope and expect that, at least, they would need a separate court order for every modem.
Also, chances are courts would deny them such orders on the grounds that it would be easier for about everyone involved if the FBI just asked the router’s owner to restore its firmware (why would the FBI need a court order against foo because bar hacked its modem?)
Heh. I'm so old, I can remember that only time, ever, that a vendor sent me an upgrade hardware and a box to return the old one in. (Don't remember which vendor, only that it wasn't a rich one.)
According to Talos, a reboot will remove stages 2 and 3 but not stage 1.
It's not clear how stage 1 installs. Does it go into the (hidden) base Linux install via rc.local or the like, or into the bootloader/firmware of the device?
Feds take aim at potent VPNFilter malware allegedly unleashed by Russia.
[..] to counter Russian-engineered malware that has infected hundreds of thousands devices.
I'm interested in the evidence for this attribution. Both ars and dailybeast [0] are pointing to Russia, but the only specific hints are that it's targeting Ukraine (which might also have to do with the prevalence of vulnerable devices there, we don't know that), and that it shares code with the BlackEnergy bot builder toolkit[1], which apparently can be bought on the black market for a decade already.
Neither of the original articles [2,3] mention Russia or any of the Russian APTs, so I'm genuinely interested in better attribution data.
Makes you wonder how much is protected sources and methods. One good mole (digital or human, really) would make technical analysis a secondary consideration.
Another article mentioned an rc4 implementation that had been tied to a previous Russian State sponsored cyber attack. (Sorry, am mobile, don't have the link).
You are right. The not-quite-RC4 implementation is mentioned in the Talos post, and it is originating from BlackEnergy. Talos is referencing a US-CERT report of APT28/29[0], which links an F-Secure whitepaper on APTs using "crimeware"[1]:
BlackEnergy is a toolkit that has been used for years by various criminal outfits. In the summer of 2014, we noted that certain samples of BlackEnergy malware began targeting Ukrainian government organizations for information harvesting. These samples were identified as being the work of one group, referred to in this document as “Quedagh”, which has a history of targeting political organizations.
The only way I see how anybody could conclude from "APT uses a black market toolkit" to "Anybody using this toolkit is that APT" is: clickbait.
Clickbait is among the more innocent explanations. Reporters who dutifully parrot what the TLAs tell them get more opportunities to do so. Reporters who don't, in short order aren't writing this sort of article. This is such a basic situation, repeated hundreds of times, yet the resolutely naive will bitterly deny that it ever occurs.
"There's no easy way to determine if a router has been infected. It's not yet clear if running the latest firmware and changing default passwords prevents infections in all cases."
Antivirus provider Symantec issued its own advisory Wednesday that identified the targeted devices as:
Mikrotik devices were reportedly affected as well, although I haven't seen any specific model identified (they all run pretty much the same software, although various models are based on different CPU architectures).
The original Cisco article mentions that these aren't all of the vulnerable models, just the ones they've seen. There isn't anything fundamentally different between a 1009 and a 1016.
As GP says, MT boxes all run the same software. The latest release (6.42.3) dated May 24 has suspiciously few bugfixes listed in the changelog. Probably worth updating on the basis the vulnerability fix is in there too.
Is it known that the vulnerability is in the web UI? I ask because the CERT report advised to disable/ACL the web UI, but it didn't (afaik) say that this was the attack vector. They might have just thrown that in as sound general advice.
What's a good affordable router well supported by Tomato/OpenWRT, these days? (put differently: 2018's version of the Linksys WRT54G :)
From what I understand, alternative firmwares like Tomato & OpenWRT are not inherently safe from VPNFilter, but it seems to me the rate at which they are maintained make them less easy targets (?). So this new flaw made me think now is a good time to replace my crappy router and its unmaintained vendor firmware with something more solid running Tomato/OpenWRT. Disagreements?
I have a Netgear WNDR3700v4 that works pretty well with OpenWRT/LEDE. You just have to be careful about which version of the WNDR3700 you get, because some of them use unsupported chipsets, iirc.
I've stopped trying to find routers with OSS support, it's too much of a pain in the ass. Instead, I got a Core 2 Duo based used Dell Optiplex from my local university's surplus store for $10, threw a second NIC in it, and installed pfSense. Sure, it's bigger than a consumer router, but pfSense is battle tested and it's really fast. I use a Ubiquiti UniFi access point to provide wireless capability.
Wrt1200ac, supported by openwrt and I can personally tell you that I installed it with no problems. All i did was upload the firmware via the default wrt1200ac ui
IMO both are obsolete, especially if your connection is >50Mbps. If you must DIY, use pfSense on an x86 machine with an Intel NIC and low idle power draw. Otherwise use a Ubiquiti EdgeRouter or Mikrotik.
Ubiquiti is overpriced and Mikrotik is underpowered. There are good consumer routers that have 802.11ac for the price of a wired-only Ubiquiti router. If you're comfortable installing OpenWRT, it still offers more capabilities for a lower price than those "prosumer" brands that pretend to be real enterprise-grade stuff.
In my experience, a lot of the MikroTik hardware has been underpowered (struggling to get decent routing and IPSEC performance), so I’ll agree with you on that point.
But I’ve found a lot of Ubiquiti hardware to be extremely high quality given how cheap it is. At my office, we installed seven new 802.11ac Ubiquiti access points for as much as it would have cost to add one more 802.11n to our Cisco system (apart from wanting 802.11ac, we also decided to decommission the Cisco because the controller would periodically crash every two months or so).
To get the number of 10GbE interfaces and performance the EdgeRouter Infinity has (for $1600) in a Cisco would cost multiple times the price there too.
I don’t know if I’d trust it for service-provider infrastructure, but we’ve replaced a lot of enterprise Cisco stuff in our office networks with Ubiquiti and only had a good experience.
Pointing out that Ubiquiti equipment is cheaper than Cisco isn't saying much. With regards to the products that are actually relevant to this discussion—the stuff that's a reasonable alternative to typical consumer networking equipment—Ubiquiti definitely isn't the more affordable choice than the competition.
I was at $150 a year ago for an edgemax router + one of the long-range access points (i added a second ap, but for comparison's sake, that was the cost for those 2 components), which gave me an open-source router os (vyos) out of the box. I'm not sure what the cheaper option is once figuring in your own time-cost to hack openwrt in, but it's hard to imagine it would be some dollar-sum that really deserves this much angst.
> I'm not sure what the cheaper option is once figuring in your own time-cost to hack openwrt in, but it's hard to imagine it would be some dollar-sum that really deserves this much angst.
WTF? Angst!?
It takes minutes to install and configure OpenWRT on supported hardware. You upload the OpenWRT firmware like any manufacturer-provided firmware update, and after it's been flashed the router reboots into OpenWRT. The added time cost compared to learning and configuring any other router OS is negligible.
Compute power, mostly. The hEX and RouterBoard products are mostly single-core MIPS processors in the 600-800MHz range, with a few using the dual-core 880MHz MIPS chip from Mediatek. You have to go all the way up to the $180 RB3011 to get a decent dual-core 1.4GHz ARM, or you could get that same CPU in a TP-Link router for $125 and also get dual-band WiFi (though admittedly, half as many Ethernet ports, but the second 5-port switch in the RB3011 certainly isn't worth the price difference).
The WRT54 is dog slow by today's standards. My residential Comcast service is faster than it can handle; my max down almost doubled when I swapped my WRT54GL for an AC3200
A few years ago I used my 54G to DL a 15GB file at (a reported max of) 60Mb/s. Obviously took a while ... without hiccups. I'm guessing most US customers aren't getting service that fast ... so there's still plenty of use for them. (Still use one all day at 25Mb/s.)
At least get a WRT841N, they are like $15 plus sales tax, unless you get the wrong vendor. Less if you find refurbs or buy bulk. They are the main workhorse for our local mesh network, with WRT1043 devices handling encrypted uplinks due to the lack of speed with chacha/poly running on the former (think under 10 Mbit/s). Don't worry, they do handle advanced mesh routing algorithms at line rate, e.g. 2x2 MIMO 802.11n and 100BASE-T (4+1 ports).
E.g., they handle mesh domains of about a thousand nodes before one needs to split, and mostly due to L2 traffic starting to hog the slower links (it's an L2 mesh).
At that speed, if you don't need fancy buffering and have the necessary 3.3-ish volts ready, check out some esp8266-based mesh/repeater tech. Possibly using wires on their (quad-)SPI bus to pair two, potentially over quite a sizable line length. Saves airtime.
The US agencies have a very close relationship with router manufacturers. TCP 32764, for example, was a backdoor many suggest they used covert ops to create and exploit.
that's quite the stretch since the TCP 32764 issue was unique to sercomm routers, which is a Taiwanese router ODM. they write the firmware and create the hardware, then rebrand the UI for however linksys/netgear/whoever wants it.
Modem-routers often have subtly different models (sometimes even under the same model number and SKU!) in different regions. "Subtly different" can mean in this instance: completely different OS, different chipset/hardware/board supplier etc.
It's possible that the models affected by this particular attack aren't sold in other places, or perhaps they are, but are actually still different (enough). Or they are just not widely used.
I say this because govt. organisations have often issued warnings and recommendations like this in similar circumstances, e.g. a while ago some modem-routers widely used in this country were attacked, and a warning very much like this has been issued.
I can find some articles about Interpol/Europol taking down botnets.[1] The US and Russia are probably more advanced than the EU in this regard. I wouldn’t put the UK or China too far behind.
Maybe the FBI works closer to manufacturers or victims? I have a friend who got some WordPress sites he managed infected. He was able to trace it back to a professor in Turkey and call the FBI. The FBI came and interviewed him.
Yes and how do we find out if we're infected? It can't be that hard, now can it? Once you know what you have to be looking for - it has to be some kind of file or altered binary since stage 1 is persistent.
I'm amazed at how many people here think that a company should be default responsible for what is essentially a third party tampering with their product. Unless the problem is a result of negligence, it's unreasonable to say that a company should be automatically responsible, except perhaps if they decide not to address the problem in future products.
There is a _huge_ business opportunity for the entrepreneurial mind. Auto update of firmware with proper monitoring and health checks as the roll out continues.
Yes, technically; the reason it doesn't exist is that "proper monitoring" would highlight how atrocious everyone's development practices are. Oh, and this would accidentally plug all the holes "everyone" variously uses when they're found helpful... generally you want to attract military funding, not scare it away :D
Heh, probably. I don't know if i agree on your point about companies' development practices. Basic monitoring is very simple to add.
I thought about what a very basic monitoring & release setup for a self-driving car would look like:
The metrics below could be measured as an A/B experiment (control: 1% on the old release, experiment: 1% on the new release).
1. Number of miles driven
2. Number of user interventions.
3. Score rating (assuming users give a rating for their comfort after each ride).
4. Number of rides completed.
5. Average/median speed driven
6. Average/median change in G-force (e.g. braking too hard subjects the user to a G-force spike).
7. Average/median time-to-destination etc.
The data is already available and collected, what's missing is a way that's plug&play for these companies to push the data and necessary dimensions and integrate them into their roll outs.
You can find such metrics for almost any internet-connected device.
Make a dashboard out of this, give a way to slice data, give visualization tools and a way to query it out, and this is a winner.
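A minimal sketch of the kind of per-cohort rollup such a dashboard would compute; the field names and numbers here are invented for illustration:

```python
from statistics import median

# hypothetical per-ride records pushed by the fleet
rides = [
    {"cohort": "control",    "miles": 12.0, "interventions": 1, "rating": 4},
    {"cohort": "control",    "miles": 8.0,  "interventions": 0, "rating": 5},
    {"cohort": "experiment", "miles": 11.0, "interventions": 0, "rating": 5},
    {"cohort": "experiment", "miles": 9.0,  "interventions": 0, "rating": 4},
]

def summarize(rides, cohort):
    """Roll one cohort's rides up into headline release-health metrics."""
    rs = [r for r in rides if r["cohort"] == cohort]
    total_miles = sum(r["miles"] for r in rs)
    return {
        "rides": len(rs),
        "miles": total_miles,
        "interventions_per_100mi":
            100 * sum(r["interventions"] for r in rs) / total_miles,
        "median_rating": median(r["rating"] for r in rs),
    }

control = summarize(rides, "control")
experiment = summarize(rides, "experiment")
print(control["interventions_per_100mi"], experiment["interventions_per_100mi"])
```

Comparing the two dicts side by side (and slicing by region, time of day, etc.) is essentially what the proposed dashboard would automate across rollouts.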
Is it strange that I'm now feeling it would be acceptable for the FBI to counter-hack these devices via the seized staging domains and somehow patch them?
It’s the most accessible way to do it. Most people don’t buy their routers, they lease from the ISP. Major ISPs replacing 100k+ devices will take a while and they’ll drag their feet. FBI commandeering the C&C domain and instructing users to reboot is not the long term fix but it is most effective
It'd be a lot better of a situation if router boards were designed to accept firmware upgrades at a low level. After an attack, you often need to use the software updater to reset it. That can no longer be trusted if it's compromised. Consumer-level routers have been very low-quality for quite a while.
I'm partly with you - they don't seem to know what the initial vulnerability was, so just blocking this version of the malware is not sufficient; but by the same token, they'll want to at least find out what the vulnerability was and have a fix before advising consumers to replace modems. Clearly, we need laws outlawing the cheap-ass modems we now have and mandating far more secure, easily firmware-updated ones.
My router started acting weirdly about three weeks ago (intermittent disconnections, slow connection etc. ) and then stopped working all of a sudden.
I asked for a replacement from the provider and they changed it last week. Now I am freaked out, because if the device was compromised I will need to change all my passwords which is a real pain in the a
How can I verify some malicious code is actually present on my router? What does this code do? Could the FBI put their own malicious code on the router, via this supposed exploit? Why should I trust the FBI?
Excuse my ignorance, but I'm still going to ask these types of questions.
EDIT: After reading a bit - it seems the control is somehow "transferred" to the FBI rather than the malicious actor - any other external agent controlling my software and hardware should be considered a malicious actor from a defensive standpoint, right?
Also, I don't buy the "FBI is better" argument, because I'm a skeptic.
EDIT 2: Moved the 'Why should I trust the FBI?' question to the end of my opening paragraph because I just want to know more about how a layman should approach verification of this vulnerability other than just "trust the powers that be"
Explain how the FBI would leverage an advantage by telling you to reboot. Explain it in a way that doesn't depend on an unprovable.
The best I can come up with is a false sense of security, which given they actually expect you to also patch and upgrade and proffer advice to patch and upgrade, is a bit weak. Basically, I cannot construct a scenario where there is a significant, could-not-be-found-by-white-hat reason they'd do this, to secure some advantage.
I suppose I just don't know what exploits could be implanted -- are there forms of rootkits that can go undetected? Or have all of these infected firmware been reverse engineered and the exploit in question cataloged?
According to ArsTech in this article (https://arstechnica.com/information-technology/2018/05/hacke...) the VPNFilter exploit can survive a reboot - so how can a simple reboot disinfect if the only delta is the owner of [one of] the second stage callback IP addresses? I haven't seen any mechanics explained that would actually disinfect the router.
I appreciate your response with actual critical thinking tips and not just flippancy - I don't know where else to have these types of discussions.
The attack had three components: infection, sign-in with an initiator head-end, and then second/third stage download.
As I understand it, from reading around: The FBI took over an "initiator" headend which bootstraps a simpler infection into the actual threat/attack code.
The low-level infection can't be removed simply; that demands new code from the maker or an OpenWRT-type source. The FBI took over the domain name behind a service which acts as the sign-in site. The attack-mode code is not in your firmware; it has to be re-downloaded. If you block the initiator login, you aren't "clean", but the malware cannot complete the download of attack code to mount the DDoS.
If you reboot, the low level infection tries to sign in, and is blocked, and so can't get the second/third stage downloads.
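A toy model of that control flow (the domain name is invented, and the real mechanics are far more involved): stage 1 survives the reboot, but it must re-fetch stage 2 on startup, and the seized domain now serves nothing while recording who called in.

```python
# invented domain standing in for the seized C&C infrastructure
SINKHOLED = {"cnc.example.net"}

def fetch_stage2(domain):
    """Pretend download of the non-persistent stage 2 payload."""
    if domain in SINKHOLED:
        return None  # sinkhole serves no payload, but logs the victim's IP
    return b"stage2-payload"

def boot(cnc_domain):
    """What the persistent stage 1 does every time the router restarts."""
    stage2 = fetch_stage2(cnc_domain)  # stage 1 phones home on boot
    return "armed" if stage2 else "stage1-only, visible to the sinkhole"

print(boot("cnc.example.net"))
```

Before the seizure, `boot()` would return "armed"; afterwards every reboot strands the device at stage 1 and hands the sinkhole operator a list of infected IPs to pass to ISPs.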
Ok, so it's kind of like burning a line in a forest fire - the fire is still fire, but it's controlled and used in such a way that it should stop the bigger blaze from crossing said line?
Thanks for this insightful response. I know a lot of readers would just tell me to do my own research but this was really enlightening.
Nah.. I don't like that metaphor. I think I like this one better.
Back in the day, cable TV was encrypted, and people had to have cable TV decoder cards with a key that fit a slot in the receiver. So, in the UK, somebody worked out how to decode the keypair, and you could buy a keycard in the pub for like GBP50, instead of paying the cable company GBP100/mo. But the cards have a fixed life. They don't last forever; you have to keep coming back for more.
The real fix is obviously to fix the crypto, but there are a million receivers out there. Nobody has time to go round each one. So what the cops did, is find where the faked out keycards are being printed and shut down the print house, so imagine... if you then get the city electric company to power cycle every house, when its receiver reboots, it needs a new keycard, but they can't get one any more, 'cept from the cable company. Fixed? No, but you cut the problem off at the knees.
Oh wait: we all wanted those sweet stolen keycards. I gotta think of a better metaphor :-)
I don't think the FBI Special Agent job description is one line of "make humans trust you" - I believe it's closer to "protect the country from foreign and domestic threats," and I think just because the FBI tells me to jump doesn't mean I should jump...
Your second point is not clear to me. Space aliens aren't an extant authority on our planet (afaik)
>I think just because the FBI tells me to jump doesn't mean I should jump
You can find independent corroboration of this this malware with little effort. And if your gear is compromised, it's most likely doing something you don't want. So "jumping" is the smart move here unless you just want to be contrary.
The second point is: if you're assuming a conspiracy based on zero evidence, why not go big?
I'm honestly far from a network expert or even engineer - but it seems if there's a vuln that the bad-actor had control over, then the ping-home domain of that vuln is controlled by the FBI, then the FBI is telling me to reboot my router, there is a non-negligible possibility that the FBI has an interest in using that vuln in their favor.
Granted I don't know many specifics here but that's why I'm employing the elenctic method.
I'm just trying to learn through discussion - I thought that HN would be a good place to do it, and that may be my mistake. I'm an interpersonal learner and admittedly ignorant on this topic.
I have no personal fantasies; I want to understand the truth and that's my only motive.
Asus is not affected by this, but I still updated the firmware because it was affected by multiple other vulnerabilities... I wonder if computing will become secure before I die...
I was wondering about this so that is good to know, thanks. My Asus router was also updated for multiple other vulnerabilities. I really think Asus has been doing a great job with pushing regular updates.
As a MikroTik devotee, I love the active development and patches being pushed for their packages and RouterBoard. If anyone maintains a MikroTik router and/or switches and hasn't heard about the vulnerability and actively patched their systems, then they're completely at fault and putting themselves, and possibly their companies, at risk.
I can't say definitively, but I couldn't find a public posting on this by MikroTik at the time Cisco made their information public on Cisco's Talos blog. So I contacted MT support through email and was told that they were notified of the vulnerability May 18, 2018, but had already patched it in March of 2017. I looked at the changelog for MT's RouterOS and several vulnerabilities were indeed patched back then.
Cloud Key, USG, Switch 8 POE, and two AP-Lites. The UniFi console makes management easy compared to the edge router UI. I also have one of those but it’s just sitting right now.
I'd be wary of any Ubiquiti network kit. I only trust their APs. The ERL for example is an awful router - it has had a firmware issue for years now where it causes persistent packet loss due to reordering incoming packets. I discovered these problems in my own testing, and there is a giant thread on the forums about it which I helped kick off. The ER-X is the only thing that seems to work properly. Their switches are bad too (small buffers). UI is nice and they are cheap, but not worth the poor performance.
I sold most of my Ubiquiti gear and bought a Netgate 2440 a couple years ago, put Debian on it with Shorewall and auto updates, and haven't looked back.
How large do you want the switch buffers to be? Where I am, if I download something from Google Drive, I get _major_ bufferbloat on the downstream, I suspect because everything except the last PHY link to my laptop (on a LAN port) handles at least 1Gbit, but the switch is still a little old-ish and won't do more than Fast Ethernet.
Please, for god's sake, either drop packets or use fq_codel or something similar, but don't use a large-ish, blind FIFO that only listens to its explicit QoS settings.
I want my mosh session to be responsive even if there is a large _inbound_ file transfer.
(note that this is textbook bufferbloat, just in the reverse direction from the usual residential situation)
To support a certain bandwidth on a switch you need big enough buffers. The bandwidth-delay product is what you want: latency * bandwidth. You need a buffer of at least that size to sustain full throughput, and you multiply that by the number of ports to support concurrent bandwidth. The ToughSwitches were engineered with very small buffers, such that they couldn't even support reasonable bandwidth on a LAN with low latency.
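The arithmetic above is easy to sketch. The numbers below (1 Gbit/s, 0.5 ms RTT, 8 ports) are illustrative assumptions, not measurements of any particular switch:

```python
# Back-of-the-envelope bandwidth-delay product (BDP) for switch buffer sizing.
# All figures here are illustrative assumptions, not vendor specs.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Minimum buffer (in bytes) needed to keep a link of the given
    bandwidth busy at the given round-trip time."""
    return bandwidth_bps * rtt_s / 8  # divide by 8: bits -> bytes

# One 1 Gbit/s LAN port at a 0.5 ms round-trip time:
per_port = bdp_bytes(1e9, 0.0005)
print(f"per-port BDP: {per_port:.0f} bytes")   # 62500 bytes, ~61 KiB

# Scale by port count for concurrent full-rate traffic on an 8-port switch:
print(f"8-port total: {8 * per_port:.0f} bytes")
```

Even a modest LAN needs tens of kilobytes per port by this measure; a switch with only a few KiB of shared buffer can't hold a single port's BDP.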
I don't like pfSense because it's difficult to automate and lacks bufferbloat-fighting AQM. I use Ansible configs checked into source control; it's easy to rebuild and tweak, and works great. And with Linux you get access to fq_codel and cake. These are easy to set up and work extremely well, with 1-5ms of bufferbloat even under throughput saturation. I couldn't manage to get anything similar out of pfSense's QoS.
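For what it's worth, the cake setup on a Linux router is only a few `tc` commands. This is a sketch, not a drop-in config: the interface name `eth0` and the 95mbit shaper rate are placeholders for your own WAN interface and a rate just below your actual line speed.

```shell
# Sketch: CAKE on a Linux router's WAN interface (run as root).
# "eth0" and "95mbit" are placeholder assumptions; substitute your own.

# Egress: replace the root qdisc with CAKE, shaped just under line rate
# so the queue builds here (where CAKE manages it), not in the modem.
tc qdisc replace dev eth0 root cake bandwidth 95mbit

# Ingress shaping needs an IFB device to redirect inbound traffic through:
modprobe ifb
ip link add ifb0 type ifb
ip link set ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root cake bandwidth 95mbit ingress
```

The `ingress` keyword on the last line tells CAKE to account for the fact that the packets have already traversed the bottleneck.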
That's your source? I advise you look up Citron's track record and history. That outfit is run by Andrew Left, an activist short-seller. He even targeted companies such as Shopify etc.
"Looking at Citron’s track record, Barron reports that, on average, companies that Left writes about see their value drop by ten per cent in a year. “And some drop as much as 95 per cent,” he writes. But that’s of little consolation to those who followed Citron’s advice and shorted NVIDIA, Motorola or Mobileye."
Ubiquiti makes solid networking hardware for prosumers and small businesses that costs considerably less.
More and more, it seems I should run my own FreeBSD or CentOS router, turn on the full complement of security (SELinux, AIDE, firewalld, etc.), and put OpenSCAP on there with the US government configuration baseline remediation.
It'd be painful to set up initially, but once set up it should be rock solid.
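The initial setup on a CentOS box might look something like the following. This is a hedged sketch, not a complete hardening guide: the zone name, `eth1` LAN interface, and the SSG profile/datastream paths are assumptions that vary by install.

```shell
# Sketch: locking down a self-built CentOS router (run as root).
# "eth1", the zone choice, and the SCAP content path are placeholders.

# firewalld: deny by default, then allow only what the LAN zone needs
firewall-cmd --set-default-zone=drop
firewall-cmd --permanent --zone=internal --change-interface=eth1
firewall-cmd --permanent --zone=internal --add-service=ssh
firewall-cmd --permanent --zone=internal --add-service=dns
firewall-cmd --reload

# Confirm SELinux is actually enforcing, not just installed
getenforce

# OpenSCAP: evaluate against a hardening profile and auto-remediate
# (profile ID and datastream file come from the scap-security-guide package)
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_standard \
    --remediate /usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml
```

The `--remediate` flag applies the baseline's fixes as it scans, which is most of what "use the US government configuration base remediation" amounts to in practice.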
Google WiFi would give you a more secure solution. Google WiFi has a hardware token that is tied to the stored image, which in turn is tied to the running image.
So things like this hack would be exposed.
Google also keeps them up to date. Beyond that, Google has found Shellshock, Heartbleed, Meltdown, Cloudbleed, and Spectre, among several other big ones, and invests far more in security than just about anyone else.
They find these problems before anyone else does. Plus they write them up; I highly recommend a few of their blog posts.
A great one is their write up on the Broadpwn vulnerability.
According to Google, their consumer routers get automatic security updates.[1] While I don't necessarily trust everything that Google says, I do believe in their ability to secure devices better than most other consumer router vendors.