I think at a certain point someone needs to be held criminally liable for situations such as this one.
A VW engineer is likely going to jail over dieselgate.
We have already seen that such insecure internet-connected devices can be easily and quickly assembled into a botnet. The operators of such a network can direct attacks at healthcare institutions, national infrastructure, and other safety-critical systems.
At some point, there have to be stricter consequences for companies than simply a fine. C-levels won't start paying attention to security until there's a real possibility that their ass will end up in jail for this kind of insecurity.
Great question, I agree. Criminal negligence might be one possibility; fraud in selling something not suitable for purpose might be another. Obviously it's going to be very difficult to draw such lines in the (continuous) sand, but we might as well start now. How many corners can you cut before the courts need to be involved (beyond civil lawsuits)? How much of the net do you need to imperil to trigger criminal consequences?
Maybe something like certification for electronics is needed, and possible, here? Where manufacturers pay for a fairly decent inspection of their work in return for a mark of inspection?
(I don't mean to take a side re this particular crap-storm.)
They even require the damn things on their gigabit fiber service (there is a separate media converter, so it's not about the physical interface), and enforce using it by doing wired 802.1x.
Currently, in the US, there is legally no way that I'm aware of for liability to flow upwards. It's all gated and derisked behind "if anything bad happens, the end user is to blame."
In order for this to change, two things need to happen. Things that I think even libertarians would get behind (because of end results).
1) Hold manufacturers of networked consumer devices liable for all damages caused to third parties by their devices (e.g. if they are used in a botnet). The reasoning: the buyer willfully chose the device, but a third party did not.
2) Create a waiver to this liability if (and only if) the manufacturer includes a standards-compliant, end-user-usable firmware upgrade facility AND releases the documentation necessary for third parties to build firmware files.
I agree with all of your points except one, the severity of your listed "corollary" situation:
Dieselgate is tentatively assessed to have resulted in thousands of human deaths due to air pollution. AT&T modem vulnerabilities have not been assessed similarly.
While I concur with everything else you've said, I encourage selecting a different example with a more evenly-matched fatality rate.
Why is fatality rate the only legitimate metric on which to judge prison sentences? People are put in prison for all sorts of lesser offences all the time.
[Original question altered to break apart "why?" and "it shouldn't be".]
Q: Is fatality rate the only legitimate metric on which to judge prison sentences?
No. Fatality rate is not the sole legitimate metric.
Q: Then why is fatality rate relevant? People are put in prison for all sorts of lesser offences all the time.
It's a "hot-button" metric. Its use inflames emotion and skews human behavior away from rational.
When drawing an analogy between two things, introducing "fatality rate" when not already present muddies the waters of conversation. If done accidentally, this can lead to serious "foot in mouth" scenarios. If done intentionally, it's typically used to encourage fearful decisions instead of careful decisions.
Agreed. I think that point was also reached with the tens of thousands of Dodge/Chrysler/Jeep car radios/head units that were wide open to the Internet over cellular.
The staggering incompetence responsible up and down the chain for that should have been investigated fully and publicly, and certainly would have been if anyone had been injured/killed.
It's the internet of vulnerable things! Most of these devices will never see an update to resolve the security vulns that crop up over time, let alone receive proper software-stack maintenance for more than a brief period.
We don't yet have cybersecurity expertise in government that approaches the environmental science expertise in the EPA (or California's equivalent). And thus far we have a Congress that doesn't recognize the importance, or see a role for the government to regulate these aspects, so they just let the industry write their own rules. And so we've got exactly the system everybody who isn't a consumer wants us to have.
Update: 4 of 535 members of Congress have computer science degrees.
Or follow the automotive model: put greater liability on the end user. End user is then obliged to pay for insurance, incentivized to pick safer suppliers, and feels the consequences of their own bad behaviour (it's not the ISP's fault that you keep opening .exe email attachments).
How's the customer going to pick when their market, if they're lucky, has only two fixed-link ISPs? Most have only one (a really crappy cable one) that, if they're lucky, provides more than 25Mbit down and who-knows-what up. (Yeah, it's nearly 2018, and those speeds would have been 'OK' 15 years ago.)
A lot of the reason Americans are in this mess is the complete lack of competition in 99+% of the country. /That/ lack of competition comes from a broken investment model. Like roads, water/sewer, and other systems regulated as utilities, the physical plant is a natural monopoly; it isn't effective for society or companies to build parallel infrastructures. If the base platform (the last-mile 'wires') were owned by the community and competition occurred on top of it (like with package delivery), then the context of your comment would make more sense.
>If the base platform (last mile 'wires') were owned by the community
It's not necessary for the wires to be owned by the people. They could be owned by a private, highly regulated, capital-intensive infrastructure utility company that is a legally separate entity from the ISPs who compete to provide your data services.
Private is not necessarily better than community ownership, but may be more politically viable.
Even in areas with heavy fiber penetration like Seattle, where CenturyLink has 70+% of the city covered with fiber, Comcast is still capping users at 1TB and CenturyLink feels little competition. Hell, they're trying to kill off their TV service right now and pitching DirecTV (with a much slower guide, missing multi-channel preview, and with a dish bolted to your house) as the new thing.
The automotive model is split. The end user pays insurance because he's the least cost avoider for the most obvious types of accidents. And you want him to internalize the cost of such easily avoidable errors.
But the producer is also regulated and required to include certain safety features, because individual consumers are poorly equipped to select cars based on complex safety features. And in any event most consumers are very price sensitive; many people (perhaps most, actually) don't have the luxury of choosing a car based on safety features. Seat belts, air bags, crumple zones, ABS, and rear-view cameras have all become mandatory thanks to regulation (in some cases voluntarily, with the understanding that they'd be involuntary if industry didn't cooperate). Collision avoidance systems are already scheduled to be mandatory, again by voluntary agreement.
When it comes to tech, consumers just aren't sophisticated enough to know how to choose products. And the insurance industry doesn't know how to solve that problem, either. It's really only the _commercial_ insurance industry where the insurers work with the policy purchasers to help them select the safest products and procedures. Anyone who has worked in tech support knows that it's a lost cause trying to educate individual consumers.
What's the incentive for the vendor? Trying to do the best and hoping the customer picks them? Or just maintaining the status quo because the customer has insurance and is protected?
I hear ya. The flaw is, there isn't enough competition in enough markets. Many rural areas only have one ISP.
Bruce Schneier has speculated about creating mandatory legal liability for software vendors and service providers. Liability regimes are a major reason why physical products are so safe.
I've actually had discussions with several people, separately and at a legal summit that specifically discussed the topic of software liability, who proposed a good solution to this:
Make the liability inversely proportional to the degree to which you've given the user the means to find and fix any issues that might arise. If you give someone FOSS software to solve some particular problem, even a potentially-high-liability problem, you've given them everything they need to both analyze the software for potential issues and fix them themselves. (Auto manufacturers like FOSS because it means they can always support it themselves if the vendor won't, and they have far longer support lifetimes than many other products.) So disclaiming all liability there seems reasonable, for anything short of intentional malice (e.g. backdoors).
On the other hand, if you give someone a piece of opaque proprietary software, then you're selling them a solution to a problem, and it'd better work because they can't do anything but go to you if it doesn't. They also can't introspect it, and black-box testing only goes so far. So this should increase liability. Even more so if you supply it in an obfuscated form that's difficult to reverse-engineer or test, or if you lock it down to make it irreplaceable.
The same thing would go for changing software yourself, on a high-liability device. If you don't change it, it's not your fault; if you change it, it might be your fault.
New Zealand consumer law has a requirement that goods must be 'fit for purpose'. This means that if a company sells you a clothes washer for example it must clean clothes, but if you try to wash your shoes and it breaks it's not the manufacturer's fault (unless they said it could of course).
I think something like this could apply very well to software. If someone uploads source code to GitHub, they haven't sold it to you with any purpose in mind; but commercial sales come with functionality promises, so sellers could be held to them.
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# This file is part of Ansible
# Ansible is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
# Ansible is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
What I'm saying is: if sold by Ansible to a consumer, these disclaimers are completely ineffective in a bunch of states.
The UCC (which is what governs these transactions) is model legislation, and provides a few options that states can choose among in various parts. One option allows disclaimers of implied warranty in all transactions. One option allows disclaimers of implied warranty in all transactions except those with consumers. One option disallows disclaimers of implied warranties entirely.
Depending on which option a given state chose, the language you are citing would have different effectiveness.
Why hasn't the open source community figured this out? Seems like a bad idea to ever distribute source code at below the cost of liability insurance. Or should we be geo-restricting open source software? "Sorry, Linux is not available in your state."
Thanks for pointing this out. I imagine most people who publish open source software aren't aware of this risk, and will stop when they learn about it.
1. If you aren't selling it, the UCC doesn't apply. Most people don't sell their open source software.
2. Most folks are aware of this risk when they start companies to sell open source software.
3. Most do not sell the software directly to consumers, and outside of the few states that don't allow disclaiming at all, most states are fine with disclaiming warranties in b2b transactions, just not b2c.
Most don't sell the software directly at all, and those that do are often selling support anyway :)
In summary, there wouldn't be liability for open source developers because there is no business contract. But if you run a website with open source software, you of course would still be liable for anything that happens to your customers' data. So you would probably want to buy that same open source software from someone (e.g. Red Hat), who would also be liable.
Which means hosting/supporting open source would quickly become not worth it for any major company representing a large target for opportunistic lawyers. When you make millions from a product, you can afford a team of lawyers setting up contracts just right and fighting liability trolls. When you make zero profit, you cut your losses. Such a law would spell a very quick end to any corporate support of free and open source software, for liability reasons.

There's a reason why all such software is accompanied by "NO WARRANTY" texts, and enforcing liability would make the corporate world run from it. Nobody wants to be sued because they use OpenSSL and there was a vulnerability in it - and that's exactly what would happen to any commercial vendor using OpenSSL if liability laws were introduced. It doesn't matter that the company didn't write OpenSSL; as soon as they use it, they are on the hook. IBM was sued not because they made Linux, but because they used it and had tons of money.

And I do not see any business model that could allow companies to charge enough to cover such liabilities. Maybe for established powerhouses like Linux, or for corporate foundational projects like Chrome and Darwin, but not for any lesser projects, and surely not for any open source just starting up with unclear revenue potential. It wouldn't kill all open source, but it would severely hurt the ecosystem and split it into two worlds: pure hobbyist geekery which nobody with money would touch, and formally open-source projects under strict corporate governance with no ecosystem beyond the founding corporation.
One of the worst ideas I've heard lately, and I am genuinely baffled how a person as smart and experienced as Schneier could support it.
I'm not sure you're envisioning the same thing I am. You would only be liable for something that you sell or otherwise make money from. You would be free to publish software, open source or not, without incurring liability as long as you don't make money from it. Whoever then uses that software for a commercial product would be incurring the full liability, and would not be able to turn around and sue up the chain.
The currently widespread practice of trusting sensitive user data to open source code without an audit (either internally or via a third-party, e.g. Red Hat) is horrifying and incredibly negligent.
> You would only be liable for something that you sell or otherwise make money from
If your business includes software in any meaningful function, you are making money from it, and any competent lawyer would be able to successfully argue that. Charging money for a license is not the only way; otherwise everybody would just switch to charging for a "consultancy service" which coincidentally comes with a free software license, and avoid any liability.
> You would be free to publish software, open source or not, without incurring liability as long as you don't make money from it.
You as a private person would be. That's my point: that would be the only way to do open source. Any corporate support of open source projects would imply full liability, which would be impossible for a product the company gets no revenue from. It would be much harder for a business to justify supporting an open source project with liability costs added to the equation.
> The currently widespread practice of trusting sensitive user data to open source code without an audit (either internally or via a third-party, e.g. Red Hat) is horrifying
Audits cost money. Tons of money. And they don't guarantee anything - bugs in OpenSSL went undiscovered for years despite thousands of people using the code, poring over it, and billions depending on it. There's no magic in an "audit" that makes code bug-free afterwards - if there were such a magical procedure, people would already be using it, but there's no indication anybody has invented an "audit" procedure that eliminates all bugs. The existing, flawed procedures are already in use - every company that produces software that I've ever heard of uses them - and they are not enough. So what would happen is drastically higher costs (to the point where having a website would no longer be affordable to an average person) without significantly improved security.
> Audits cost money. Tons of money. And they don't guarantee anything - bugs in OpenSSL went undiscovered for years despite thousands of people using the code, poring over it, and billions depending on it.
Any reasonable audit of OpenSSL would have said, "Don't use it."
And instead use... what? Let's say you are creating a company that needs a website to sell stuff. On that website, you need a TLS implementation to process user data & credit cards. After an expensive security audit that consumed most of what your angel investors can give you, you decide that anything based on OpenSSL can't be safely used. Now what?
Forks of OpenSSL. After Heartbleed, dozens of forks were made. One I think is really promising is LibreSSL, which is managed by the same people who work on OpenBSD.
So, the premise there is "we couldn't find the bugs by the whole community in twenty years, but if we split the community into a dozen independent projects which do not cooperate, surely then we'll find the bugs that eluded us for two decades". Right.
How do you know? It's not a magic law of the universe that "someone" always creates things because someone wants them. Sometimes problems are hard and just don't get solved, for a long time, even though everybody wants them solved.
But this isn't some intractable problem. For aerospace and critical infrastructure projects, for example, there is plenty of meticulously written and audited high-quality code developed at higher cost in response to the demand created by legal requirements.
Did I make my point clear or do I need to google for another 2 minutes to find a dozen more examples of "aerospace and critical infrastructure projects" with software problems?
So why didn't you do that already? I mean, there are very few problems as important to the software world and, frankly, to a major part of the world economy, as robust electronic commerce. And encryption plays a vital role in this process. Somebody releasing a 100% secure crypto implementation, guaranteed no bugs, would be doing humanity a huge service. What prevents you from doing it now, today, this minute? Please do it right now!
>>But if you run a website with open source software, you of course would still be liable for anything that happens to your customers' data. So you would probably want to buy that same open source software from someone (e.g. Red Hat), who would also be liable.
As others have said, this idea is really bad.
Red Hat will start charging obscene amounts to support the legal side of the license, especially if it is used in eCommerce platforms. What about the wife and husband who want to sell hand-knitted socks online, or small businesses who do less than $250k/year online? Will they be able to afford an alternative to the LAMP stack and fully shield themselves from legal liability and the horde of lawyers who will gladly step into any loophole?
Like many, my servers were affected by Heartbleed. So if I ran OpenSSL and someone found out before I patched it (it took me 24h to do so), I could be sued in that window if I hadn't bought the license from Red Hat. Oh, and how about all the licenses to all the open source software that depends on it underneath, OpenSSL being just one of about a hundred such projects? Do we license the GNU toolchain? What if buffer overflow exploits are found in various tools?
> What about the wife and husband who want to sell hand-knitted socks online, or small businesses who do less than $250k/year online? Will they be able to afford an alternative to the LAMP stack and fully shield themselves from legal liability and the horde of lawyers who will gladly step into any loophole?
If mom and pop want to sell hand-knitted socks on the internet, I usually recommend they use a hosted shop solution such as Shopify and its ilk. They want to sell socks, not become experts in hosting a LAMP stack. This is how liability works for brick-and-mortar stores as well: they're liable if a customer electrocutes himself because some dork attached the wrong wire to the wrong metal part. That's why mom-and-pop stores in the brick-and-mortar world usually don't do the electrical installation or any part that's covered under a building code. They hire people who are supposed to be experts in that field to do that work and, in turn, they get to discharge the liability to them. It's about time we treat software the same. If you want to host something, either own up to it or hire someone to do it.
Another way, with less chilling effect, would be to have an Underwriters Laboratories (UL) or CE mark for software. Any government, company or customer could then have that as a requirement.
Certification programs, when they exist, tend to become political creatures and tools of destruction. You wind up with byzantine and arbitrary rules, often put in place for horse-trading reasons, often to exclude competition and drive up prices.
Want a seat at the standards table? Pony up tons of time, plenty of money for travel to cities where meetings are held, and be prepared for the big software consulting firms to crush you anyway with requirements like ISO-<somethingStartingWith9> before you can make a project on github public. Want to contribute to Python? Sure hope you have that degree, plus an engineering certificate from your local/state/national government. Your dues are paid up and you've passed the most recent set of exams, right?
Oh, and let's talk about your toolchain. You can license a certified compiler for $15,000 a seat. Per year.
I can't imagine anything more chilling, other than an outright ban on people writing software.
That's what would effectively emerge from the liability system: companies would buy insurance against liability, and the insurance companies would demand some kind of certification process.
Step 2 would be the creation of an actual software engineering profession complete with the equivalent of the 'iron ring' and a pledge to go with it. Maybe the ring could be made of ferrite ;)
Step 3 is attaching a figure to compensating victims of breaches over and beyond some credit score bs.
Step 4 would be actual legal liability for service providers and shrink wrap software manufacturers which they could not get you to waive.
Step 5 would be criminal liability for producers of faulty software and especially the management layers above them.
Applied sequentially until it starts to hurt, these steps would improve software quality in a hurry, and would most likely result in massive retraining of a large number of people now employed as programmers, as well as their management.
> Step 2 would be the creation of an actual software engineering profession complete with the equivalent of the 'iron ring' and a pledge to go with it. Maybe the ring could be made of ferrite ;)
This is already possible. If you graduate with a B.Eng in Canada you can become a P.Eng with the appropriate training/certification. Many Computer/Software Engineering programs in Canada are B.Eng programs (the alternative is B.Sc though those are less common).
Software/Computer Engineers who graduate with a B.Eng are eligible to receive the iron ring, and many do decide to participate in the Ritual of the Calling of an Engineer.
I think it's much less common to find a P.Eng who is a software or hardware engineer, versus say a Civil or Mechanical Engineer, but the option exists.
I'd be very surprised if Pratt & Whitney Canada didn't have a P.Eng to sign off on their turbofan control software. But Aerospace is different, because it's a highly regulated industry and people's lives can be quickly and obviously at risk due to a mistake.
I think part of the problem, especially in software, is that not having a P.Eng designation is now self-fulfilling. So few people in software have one that the number of people who can get one is very small.
I think the solution would be to find a way to induct an initial group to get the process kick started. The challenge is verifying the training of that initial group as one needs some way other than the current method of working for 4 years under an existing P.Eng.
> Step 2 would be the creation of an actual software engineering profession complete with the equivalent of the 'iron ring' and a pledge to go with it. Maybe the ring could be made of ferrite ;)
It already exists in many European countries, and yes in Portugal there is also a ring to go with it.
And although not enforced as much as the Engineers Association would like it to be, if you are signing projects as <whatever> architect it is advisable to be a registered member.
No, you're fine to do whatever you want. But companies will likely want to hire people that reduce their exposure to lawsuits that will stick.
So if your software is going to affect lives and you feel that you can handle the fall-out in case it does not and you are not certified then you're more than welcome to make that combination work.
Apple isn't in the business of providing critical software where such liability would make much sense, beyond cases where:
(1) their phones, given enough charge, refuse to call 911 (as mandated by law at present anyway)
(2) their authentication details are leaked
(3) their operating systems suffer security bugs
At most other levels the damage could easily be contained. Note that even at present they could be sued for any and all of the above; whether such a lawsuit is winnable is beyond my expertise to evaluate, but it would certainly be interesting to see the arguments both sides put forth.
So there is plenty of opportunity for 'non-certified programmers' to be employed. Note that almost every big software provider already has a certification program of sorts, but at the moment these are just used as either a revenue stream or a way to get people to invest in the ecosystem.
Just like in other licensed fields like medicine.. no wait, I meant construction... sorry no, I meant a field with fewer consequences.. like hair dressing.. no wait, that doesn't work either.
Think of it as a coming of age of sorts. Software is now so important to the functioning of the world that we can no longer pretend our actions are without consequences or that we as an industry can exempt ourselves from the results of malpractice.
Just like you're free to build a hut in your backyard you are not free to construct a home and then to sell it if it is not up to code. That makes pretty good sense to me.
I would love for software engineering to have the same licensing requirements as civil and structural engineering.
For simple stuff, like building a deck, you don't need an engineer. For a lot of basic-enough construction projects, you can just buy supplies and hammer them together. It's only where structural or safety issues kick in that you need to have someone sign off and approve your plans, and that makes sense.
Likewise, putting up a brochure-ware Wordpress site, just do it yourself or have some kid do it. But once you start collecting customer/financial/personally-identifying information, you should need to have a professional either do it, or review your code and sign off on it. There's a risk involved, and for too long we just shrug our shoulders and push the risk off onto the consumer, onto Visa, onto whatever.
I am looking forward to when the software world leaves this wild-west phase.
No, the underlying systems need serious rearchitecting to avoid these issues to begin with. A third-party trying to verify your identity should never have enough information to impersonate you afterwards.
CTO is criminally liable for technical breaches involving their company. They're supposed to be responsible for this anyway, but given that at worst they receive a slap on the wrist and their company gets fined, clearly more needs to be done.
You might counter: well no one will be willing to be CTO if it involves such personal risk.
I would counter: companies will find a way, through certification or lawyers, of ensuring that their products are secure enough, or they can show sufficient due diligence that they won't end up in jail.
I personally think making execs criminally liable for this kind of insecurity would really force companies to start spending money on ensuring they ship something secure and not a minimum viable product full of holes like this.
> CTO is criminally liable for technical breaches involving their company
Which means CTOs would have a huge incentive to conceal such breaches and persecute anybody who reports them. Also, CTOs would require huge pay to cover the risks, and probably wide insurance coverage like medical malpractice insurance, which is a five-figure number at least. I wonder which startup could afford that.
> I would counter: companies will find a way, through certification or lawyers, of ensuring that their products are secure enough
They surely can spend money on lawyers writing tricky contracts and acquiring various expensive certifications. As for whether it'd make their software any more secure - this is much more doubtful.
> I personally think making execs criminally liable for this kind of insecurity would really force companies to start spending money on ensuring they ship something secure
That implicitly assumes the problem right now is that software is insecure because not enough money is spent on it, and that if software were more expensive, it would be more secure. This assumption does not seem to be true.
Laws that stripped resources from shareholders.
Blaming a programmer for a culture they are not responsible for creating or maintaining is far too late in the process.
If a venture has no reason to exist unless it can produce secure products, a culture of building secure products will emerge. It may be that not every MBA with an NDA will be able to afford the competent programmers required, but as they are weeded out we will have fewer less-than-useless things.
It goes a lot further than that: the VW saga in Germany is intricately intertwined with the political situation, and since 20% of the shares are held by Lower Saxony, they'd essentially be inviting scrutiny of their own position in the whole ordeal.
That led to my firing at a previous job :/ I think people should be held accountable for ordering engineers to do questionably ethical things like in this situation.
Sure. I'll never argue against holding people accountable. But even when that doesn't happen, engineers don't get to say "I was just following orders".
It's always annoyed me that ISPs seem to like giving customers these horribly overcomplex modems as well as other "value-added features" like "inject advertisements into the user’s unencrypted web traffic" --- especially since customers are already paying them for the service.
My vision for an ideal modem is more like a dumb Ethernet to coax/fiber/etc. adapter, and is otherwise as unobtrusive as possible. Ditto for an ideal ISP: just sell access to the raw, unfiltered Internet, and nothing else.
In fairness, Comcast (big US ISP) allows you to buy your own cable modem. However, they are terrible for multiple other reasons that would not fly in Europe (data caps, lobbying to resell browsing history, opaque pricing schemes, etc.).
Is it though? Freedom and regulatory overreach arguments are being used right now, with a straight face, to eliminate net neutrality rules that prohibit ISPs from screwing over their customers. Is it really sarcasm if it's actually happening?
Pretty much. Cable modem hacking is extremely uncommon, and even an owned modem is out of your control. Look at all the Intel PUMA cable modems with bufferbloat issues, where the cable ISP refuses to update the modem to fix the software bug causing it.
Not if you're on a Comcast Business plan with static IPs; you're limited to what they'll give you, even though people have shown that their static IP-setup works with customer provided gear.
Well, there are some reasons for this, such as making sure the equipment is good and can easily be checked on/configured by support to meet service level guarantees (which are a thing with some business contracts). It's just not a very good reason.
I don't know, my friends from Croatia would disagree, as would I. But I'm not in the EU, only in Europe (Serbia): you get a black-box modem/router from the provider which they can access in any way, and it has open Telnet and HTTP admin with credentials like "admin/ztonpk; admin/tzlkisonpk; admin/telekom; telekom/telekom; and let's not forget admin/admin"... I think it's just in the large EU countries that you don't get shitty modems full of security holes... but I'm not even sure of that anymore...
Europe is not a country. The only FTTH provider in Athens gives you a locked-down Huawei GPON modem/router where you can't even enable bridge mode to bypass its routing/NAT
The modems, by default, do WiFi, NAT, and have a 4-port switch on the back. The default WLAN name and password is printed on a sticker that's on the modem, they're unique per modem. Same for the admin user. I don't know if they allow SSH and if the SSH password is unique per modem however.
If you ask for the modem to be put into bridge mode, which they will happily do, the WiFi and NAT get disabled and whatever you plug in to the modem gets assigned a real IP address. When I upgraded my service and required a new modem they actually asked if I wanted it in bridge mode. All of this is configured on their side and the modem seems to pull the configuration when you first boot it.
Dumbish DOCSIS cable modems and dumb DSL modems exist; they're just not the default because most people aren't technically literate enough to deploy them. However, cable ISPs can often update the firmware of DOCSIS modems, so who knows what potential is there for backdoors, malware, NSA diodeing and weak security. DSL modem management may vary.
Treat all networks and network termination equipment as compromised. Make your general case a special case of your corner case and you won't have to deal with multiple logic paths - at worst you'll have to do some extra work but at least you won't find things broken when you need them most.
My experience has been that ISPs are downright dishonest about it. AT&T and Verizon have both lied to me about how their locked-down crummy routers are supposedly superior.
You'll love The Netherlands. Your ISP is forced, on your request, to put your modem in bridge mode. I.e., the ISP isn't able to force customers to use a certain modem as a router.
TBH, in Serbia, Telekom does give you a modem that you have full control over; it's just a shame that everyone else who types in your IP gets the same control too... Basically, they can change any setting that you can, but you can request they give you the xDSL config params and buy your own modem which doesn't have routing functionality...
That's what I have via my FTTH provider: plain DHCP-enabled ethernet; a media converter is all that is needed. After that, the customer gets to decide what equipment to use/own.
Not nearly as large as AT&T U-verse, but I found a similar vulnerability in the modem I was provided by a rural DSL provider a few years ago.
It all started when I called to get the admin credentials so that I could open a port. They refused, stating that they use the same password on all of them so they couldn't provide it to me.
After a day or 2 I found a vulnerability in the WebUI that dumped the password to my browser. I did a Shodan scan (the kind of query sketched below) and found hundreds of these modems connected to the internet. What they said was true: that password worked on the 2-3 I tried, just out of curiosity.
I tried reporting my findings to them but they didn't seem to care. So I just changed the password on the one provided to me and let it be.
Now I live elsewhere and use my own purchased modem/firewall/wap. Can't trust ISPs to care about your security.
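In case anyone wonders what such a survey looks like, here's a minimal sketch with the shodan Python library. The API key and banner string are made-up placeholders (the real query would be whatever uniquely identifies that modem's web UI), and of course you should only probe hardware you own or have permission to test:

    # Hedged sketch: count internet-exposed modems matching a web UI banner.
    # Requires `pip install shodan`; key and query below are hypothetical.
    import shodan

    API_KEY = "YOUR_SHODAN_API_KEY"          # placeholder, not a real key
    QUERY = '"SomeVendor DSL Router login"'  # placeholder banner for the model

    api = shodan.Shodan(API_KEY)
    results = api.search(QUERY)

    print(f"{results['total']} matching devices exposed")
    for match in results["matches"][:10]:
        # Each match carries the IP, port, and the raw banner Shodan saw.
        print(match["ip_str"], match["port"])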
You SHOULDN'T trust an ISP to care about your security - just like you shouldn't trust the water company to select which faucet/shower head you install in your bathrooms.
Raw pipes to info == raw pipes to water. (Interesting aside: the Mayans always used water as a symbol for thought.)
I am paying the water company for pipes to my house; I choose which faucets/shower-heads to use and what the water is consumed for.
Imagine if the water company charged me a different rate for Kohler Faucets used in the kitchen for washing my dishes, vs a Home Depot Hose used in the garden to water my plants? I pay the water company for the volume of water consumed. I pay the ISP for the bandwidth (volume of data) consumed.
Further, if the ISP is ostensibly providing my security for literally anything, then, by contract, aren't they assuming some of the risk? If "what we do is for your protection", then they assume full/some liability.
The water company provides zero such assurances. A broken pipe/leak/flooding/damage has no effect on the water company or my agreement/bill with them.
Further, the water company isn't injecting "paid supplements" into my water supply against my will (ads) - aside from fluoride, which we can equate to NSA backdoors in this example. They don't feed me a % of Gatorade in my water supply because Gatorade has a deal with the main faucet, or fertilizer into the garden hose because of a deal with Monsanto.
We're talking about the water equivalent of having the feed to your house that first goes through an open rain barrel at the front of your house, something anyone passing by could lob cigarette butts or other garbage into.
You'd ask the water company "can't I provide my own connection to the water" and they'd say "No". Then you'd want another water company, but no such company exists because they're a monopoly.
At that point you'd be better off collecting water from your roof and filtering it yourself. The water company is not helping.
If the water company is the only water provider and they require you to use a specific faucet or they don't give you water, good luck arguing with them using fancy words and ideals.
In reality, water is considered a utility but internet is not. Therefore the water company can do much less than an ISP can.
If the INTERNET company is the only INTERNET provider and they require you to use a specific MODEM or they don't give you INTERNET, good luck arguing with them using fancy words and ideals.
EDIT: Nevermind, I have (bonded) VDSL running through a 5268AC so I don't think I'll be able to do it. If it was "normal" Ethernet it would be possible.
Theoretically, it should be possible for VDSL too. If you can find something to do bonded VDSL in real bridge mode, you could probably hook up the 5268AC to that via its Ethernet WAN port, and if that works, you could proxy the 802.1x auth there too.
How much bandwidth do you lose? I have AT&T fiber and I want as close to 1Gb as I can get. Someone else I saw online did something similar with an EdgeRouter and he lost a ton of speed.
So there are two ways to add your own equipment. I don't know what method the person you're talking about used.
The first: you can put the modem in 'DMZ Plus' mode, which is the closest you'll get to a bridge mode. This is where you'll lose bandwidth, but it's easier to set up.
The second, which I recommend, is to connect your router to the ONT directly and use their modem as a client on your network. You have to set up some rules to pass the 802.1X traffic through, but otherwise the AT&T modem is no longer in the picture. I haven't lost any bandwidth, and I can't imagine that AT&T's cheapo box would be faster than an EdgeRouter.
The DMZ Plus mode doesn't really kill much bandwidth. I get pretty close to 1 Gbit through it. Maybe 960-980 Mbit. Good enough for me. The real problem with the DMZ Plus mode is that it basically sets up a NAT to your router and the state table of the modem is somewhat limited. I've never had any problems but supposedly it might choke if you have tons of open connections.
DMZ Plus mode is nowhere near bringing your own hardware and plugging it into the ONT. Everything still has to go through the AT&T gateway.
Comcast will let you bring your own modem and plug it into the coaxial. I've heard Google Fiber will let you bring your own stuff too.
DMZ plus is much different than setting up an EdgeRouter to forward the authentication to the gateway. You are in much more control of your network if you don't use DMZ plus.
Wait, why? I never understood why people with ultrafast connections care so much about this. It's not like you are doing anything that remotely uses 100% of that speed most of the time.
If I had even close to a 1Gb symmetric link I wouldn't care too much if I lost a couple megabits here or there (especially in the name of security or privacy). If I had a 30mbit link that only had 1mbit uplink I'd be upset not to use the whole tube but complaining about 980mbits vs 1000 is just a waste of time.
Simply for the fact that I pay for 1Gb symmetrical and I want the full speed, which is closer to something like 930mbits in practice. If I lose a hundred or more just to use my own hardware, then I'm going to be upset.
I'll do some tests myself to see what the bandwidth loss is once I get an EdgeRouter.
But why would you need a custom modem for it to work? Can't you use a different modem and just get them to give you xDSL credentials? Login and password, enter VPI/VCI and plug it into the telephone line..? What is the "802.1X auth"?
No, you can't. It doesn't use a username/password for authentication, it uses a protocol known as 802.1X, which uses certificates (and the associated private key) that's stored on the device.
Here[1] is an 802.1x proxy you can use to hide your incredibly vulnerable residential gateway behind a firewall of your choosing. It allows the EAP packets to pass through (core idea sketched below).
I honestly knew this was going to be a problem when I first port-scanned my residential gateway and saw who-knows-what exposed ports, but for symmetrical 1Gb internet at $79.99 a month, what can you do?
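For the curious, the heart of such a proxy is small: EAPOL frames have their own ethertype (0x888E), so the relay only has to copy those frames between the ONT-facing port and the gateway-facing port while everything else from the gateway stays firewalled off. Here's a rough sketch of that idea in Python with scapy - not the linked project, and the interface names are placeholder assumptions; a real relay needs more care around startup ordering and capture direction:

    # Rough sketch of an EAPOL (802.1x) relay between two NICs using scapy.
    # Run as root on a Linux box with one port wired to the ONT and one to
    # the residential gateway. Interface names below are assumptions.
    from scapy.all import sniff, sendp

    ONT_IF = "eth0"  # hypothetical: port cabled to the ONT
    RG_IF = "eth1"   # hypothetical: port cabled to the residential gateway

    def relay(pkt):
        # Re-emit each authentication frame, unmodified, out the other port.
        out_iface = RG_IF if pkt.sniffed_on == ONT_IF else ONT_IF
        sendp(pkt, iface=out_iface, verbose=False)

    # "ether proto 0x888e" matches only 802.1x/EAPOL frames; "inbound" keeps
    # us from re-capturing the frames we just injected ourselves.
    sniff(iface=[ONT_IF, RG_IF], filter="ether proto 0x888e and inbound",
          prn=relay, store=False)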
I had the exact same thought after port-scanning my own U-verse GigaPower connection, so seeing real exploits actually found on it is not surprising in the slightest to me.
Though instead of going with the 802.1x proxy approach, it's also possible to spoof the MAC address of the RG with your router and swap it in place after 802.1x authentication has occurred; see the sketch after this comment. (You have to swap without the link to the ONT going down, however, and the easiest way to do that is a switch with VLAN support. You put the RG and ONT on one VLAN, and then once the connection is up, you swap your router in place of the RG.)
Then you can unplug the RG and put it in your closet (until you have a power outage and have to do it again, which is the main drawback to this approach. However, since AT&T provides a UPS for the ONT, if you have a UPS for your router you should be good there too.)
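If you'd rather script the MAC-cloning half of the swap than do it by hand, it's a few lines with pyroute2. This is only a sketch under stated assumptions - the port name and MAC are placeholders, and the VLAN juggling on the switch still happens separately:

    # Hedged sketch: clone the residential gateway's MAC onto your router's
    # switch-facing port so the already-authorized 802.1x session keeps
    # working. Name and address below are placeholders.
    from pyroute2 import IPRoute

    RG_MAC = "aa:bb:cc:dd:ee:ff"  # hypothetical: MAC of the AT&T gateway
    WAN_IF = "eth0"               # hypothetical: router port facing the switch

    ipr = IPRoute()
    idx = ipr.link_lookup(ifname=WAN_IF)[0]

    # Cycling our own port is fine; what must never drop is the switch<->ONT
    # link, since that's what keeps the authenticated session alive.
    ipr.link("set", index=idx, state="down")
    ipr.link("set", index=idx, address=RG_MAC)
    ipr.link("set", index=idx, state="up")
    ipr.close()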
> a kernel module whose sole purpose seems to be to inject advertisements into the user’s unencrypted web traffic
Ugh…why is this even a thing? Like who thought it would be okay to add this "feature" to a modem, let alone at the kernel level where it would be difficult to disable and easier to be compromised?
I have AT&T Gigapower with a Pace 5268AC, so not one of the modems discussed here.
I don't use its wifi and I have it configured for pass-thru mode. When I got service early this year, I briefly investigated bypassing it entirely. It turns out you need the modem to periodically respond to authentication packets from the AT&T network. But with some ingenuity, you can hang the router off an extra port on your own router and use it only for authentication purposes:
I eventually decided not to do this because it's somewhat brittle and I didn't otherwise have any issues with the Pace. Its performance is fine.
But, given this disclosure, I'm going to revisit my decision. First, it seems like it's just a matter of time before the Pace has a similar security issue. Second, that kernel module for injecting HTTP advertisements: just the idea of it bothers me.
Update: I've moved the 5268AC behind my EdgeRouter Lite. I wasn't happy with any of the 802.1x proxies other folks wrote and/or they weren't working for me and/or I just wanted to write it in Python, so I wrote my own:
What really ticks me off about this is that for FTTN access around here, you have to use their crappy routers. This is true even if you go through a third party like sonic.net.
Worse, their routers seem to do something to defeat attempts at two-level NAT setups.
I thought one of the network neutrality principles said you couldn't discriminate against compatible network hardware. Too bad Pai is in now.
This is one reason why I really like FiOS. I get ethernet straight into the house. Sometimes you get coax instead, but you can use a MoCA adapter to convert it to ethernet. And their provided modem is actually pretty decent! I had planned to bypass it when I signed up for the service, but after using it for a bit I decided to just keep it in place.
Can someone please use this to lift the EAP certificate so that any individual can authenticate themselves with AT&T instead of having to put the gateway in the very entrance to the home network?
I have AT&T U-verse (the gigabit fiber product) at home and I believe that I am not vulnerable to public internet attacks because I've configured my modem in pass-through mode. The AT&T pass-through is pretty weak and is really only a 1:1 NAT, not a bridge, but as far as I can tell, the modem does not answer to the internet when configured in this mode.
I also have a 5268AC (on Uverse) and have been unable to replicate these issues (but, TBH, I haven't tried very hard at all), although I have mine in "almost-bridged-mode" ("real" bridged mode can't be done, but I have it as close to bridged mode as I can get it).
So since Sonic was highly reviewed and hyped here (San Jose), I checked them out and tried to switch to them. For my address the only option from them was the one on AT&T's IP network. For that service, I must use the modem they (AT&T) give me. There's a modem rental fee, and there's a router feature that I cannot turn off. That was the dealbreaker for me, so I stopped trying. I'm glad that I didn't go that route, and I hope Sonic can do better here.
But still, there's a router feature on that modem that I cannot turn off, so I cannot bring my own router (unless I want to do double NAT), and there are still very likely some hardcoded remote-access passwords.
This is why, if I can get it, I ask for bridge mode on the ISP-provided device and put in an OpenWRT router. I am still on 802.11n, but who cares; most of my devices are not 802.11ac either.
I want to know what runs on my router, damnit! It's my biggest vulnerability if done wrong and one of the more important security features of the home network if done right.
I disagree; it is totally reasonable to be terrified when the device at the heart of your network has an internet-facing root exploit that you can't patch.
Has anyone with this modem in the wild actually confirmed this update was pushed to them, and if so, is SSH listening on their public IP? So far in the article comments (I haven't looked through the HN comments yet), those with this modem have verified that SSH is not listening.
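If you want to check your own line, one quick way (from a host outside your network, e.g. a cheap VPS or a phone off wifi) is a plain TCP connect to port 22 on your public IP. A sketch; the address is a placeholder:

    # Hedged sketch: test whether anything answers on your modem's public
    # SSH port. Run from OUTSIDE your own network; the IP is a placeholder.
    import socket

    PUBLIC_IP = "203.0.113.7"  # hypothetical: your modem's public address

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        listening = s.connect_ex((PUBLIC_IP, 22)) == 0

    print("ssh is listening" if listening else "no listener on port 22")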
Another way around this is to just request a more modern modem. I recognize that modem as an older model, and especially if it is having hardware problems, you can just call up AT&T and ask for their newer model, the 5268ACFXN (not sure if it is still the newest).
I just installed a Ubiquiti UniFi Security Gateway and AP behind my U-verse modem and turned everything off, including the radios. Thank God. I'm sure I could still be MITM'd, so I'm running all traffic through a VPN now.
I am not joking when I ask this question: will this open me up to potential CFAA charges if I run the commands to check if I'm vulnerable and run the self-mitigation commands?
In a world where users should have control over their own computers, it is the customer who should have SSH access, not AT&T. But as you all know, most times the customer does not own the modem. One well-known workaround is for the customer to use their own modem or to use their own router as a gateway to the modem. But does this really give the user more control over the modem/router?
These user-owned modems/routers usually do not encourage SSH access by the user, if they even provide it. Instead they promote a "web interface": indirect control of the settings. Better than SSH? That is for you to decide.
The "market" seems to love the "web interface". But this often the easiest vector for successful attacks. Less control, and less safety. Is the tradeoff still worth it? That is for you to decide.
Relying on "Keep up to date with patches" or "Enable updates" as a strategy to improve the safety of a product that was unsafe to begin with is a bit of cognitive dissonance given that its safety was deemed "good enough" for the renter/purchaser at the time of rental/purchase. To achieve a safer product requires not only manufacturers to set new priorities but also consumers as well.
How important is that "web interface"? More important than safety? And why not configure using SSH instead? Whatever the reasons, tradeoffs have consequences re: safety.
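For what it's worth, scripting a box over SSH isn't much harder than clicking through a web UI. A hedged sketch with paramiko - the host, credentials, and command are placeholders for whatever your own device actually accepts (the example command is OpenWrt's uci):

    # Hedged sketch: read/change a router setting over SSH instead of the
    # web UI. Host, credentials, and command are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
    client.connect("192.168.1.1", username="admin", password="CHANGE_ME")

    # Hypothetical example; real syntax depends on the router's firmware.
    stdin, stdout, stderr = client.exec_command("uci show wireless")
    print(stdout.read().decode())
    client.close()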
I applaud Hutchins' choice to use full disclosure instead. Let that be a lesson to all Big Corps: there are White Hats out there who won't be bullied by your "responsible disclosure" propaganda. The writing is on the wall, Big Corps: take responsibility and secure your gear by design, or be pilloried.