A look at home routers, and a surprising bug in Linux/MIPS (cyber-itl.org)
140 points by walterbell on Dec 15, 2018 | 79 comments



The paper was laborious to read.

The short of it seems to be that Linux used to perform floating point emulation on MIPS by writing the instructions to be executed onto the usermode stack. The stack therefore had to be executable. This is known to be a bad idea.

A couple of years ago, this code was moved from the stack to a new segment, but this segment is still writable and executable, and the address is fixed. This is known to be a bad idea.

And of course the stacks are still executable because the commonly-used compilers haven’t been updated to request non-executable stacks.

The paper does not propose a solution.


The “obvious” solution is to map that page RX and have the kernel write to it anyway because the kernel has magic kernel powers.


Or map it twice. Once RX for usermode, once W at a random address for the kernel to write to. Some JITs use the same tactic.
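
For what it's worth, the dual-mapping trick is easy to demo from plain userspace C. A minimal sketch (this is how some JITs avoid W|X pages, not what the MIPS kernel does; assumes Linux with memfd_create, x86-64 for the example machine code, error checking omitted):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* x86-64 machine code for: mov eax, 42; ret */
        static const unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };
        long page = sysconf(_SC_PAGESIZE);

        /* one anonymous backing object, two views of the same pages */
        int fd = memfd_create("jit", 0);
        ftruncate(fd, page);
        unsigned char *w = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        unsigned char *x = mmap(NULL, page, PROT_READ | PROT_EXEC,  MAP_SHARED, fd, 0);

        memcpy(w, code, sizeof code);        /* write through the writable view */
        int (*fn)(void) = (int (*)(void))x;  /* execute through the executable view */
        printf("%d\n", fn());                /* prints 42 */
        return 0;
    }

On a non-x86 target you'd also need to flush the instruction cache before jumping into the page.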


Some of the more extreme SELinux and similar policies don't like this either.


The permissions are managed by the processor or the mmu. The kernel has magical powers to modify the permissions but can’t just ignore them.


The kernel can use a second mapping that is marked as being accessible from kernel mode only to write to the page.


On MIPS32, there's a couple fixed RWE mappings in the higher half that the kernel runs out of.


How about changing the ABI to something like the ARM soft float ABI. Yeah, it's got a few problems, but it's way better than all this crap.


I think that ship has sailed long ago, softfloat MIPS targets are almost certainly legacy hardware these days.


Nope, 1004k/1074k is still the default vs the 1004kf/1074kf that includes the FPU.

MIPS never really got out of its low-gate-count niche and into parts where an FPU would be a drop in the ocean.


Interesting, I'll admit that I haven't worked on a MIPS-based platform in a while. I suppose that with ARM becoming the de-facto standard for powerful embedded chips MIPS can only stay relevant in specific niches.


It is more widespread than you may assume. A lot of Ubiquiti gear uses MIPS. Secondary chips on the board, apart from the main processor, might also be ARM or MIPS.


I think the only reason that MIPS is still found in WiFi gear is because 802.11ac was a 5GHz-only standard. Current WiFi routers and APs can get by with an old MIPS-based SoC with 802.11n 2.4GHz, plus a separate 802.11ac 5GHz NIC connected by PCIe. With 802.11ax, the MIPS-based WiFi SoCs will finally be obsolete, and all the newer WiFi SoCs are ARM-based.

Ubiquiti and a few others are still using older Cavium Octeon network processors that are MIPS-based and don't have built-in WiFi, so WiFi isn't what's driving their obsolescence. However, I think 2.5Gb/5Gb Ethernet will eventually push those products to adopt the newer ARM-based Octeon processors, leaving MIPS very dead in the networking world.


Thanks for the detailed, informative post. I'll be selling my Ubiquiti networking gear ASAP to upgrade to ARM-based and x86-64-based networking gear.


> The short of it seems to be that Linux used to perform floating point emulation on MIPS by writing the instructions to be executed onto the usermode stack. The stack therefore had to be executable.

Why is this necessary? I see no reason why you can’t emulate floating point arithmetic in software with no writable and executable segments at all.


There's a pretty clear explanation in https://lore.kernel.org/patchwork/patch/506101/ :

MIPS floating point support requires that any instruction that cannot be directly executed by the FPU, be emulated by the kernel. Part of this emulation involves executing non-FPU instructions that fall in the delay slots of FP branch instructions. Since the beginning of MIPS/Linux time, this has been done by placing the instructions on the userspace thread stack, and executing them there, as the instructions must be executed in the MM context of the thread receiving the emulation.

i.e., the kernel needs to find some place in the userspace process to stick the emulation code, because the emulation code needs to happen in the context of the userspace process (e.g., memory access permissions etc.), and ~20 years ago a convenient place the kernel could write to in the userspace process was to push things onto the stack and nobody realized that was weird. Historically the stack was executable everywhere, and when we realized that was a bad idea because exploits can put shellcode on the stack and stopped doing it on most architectures, we weren't able to stop on MIPS because of this reason.

If we were rearchitecting this from scratch today I bet there would be something like a vDSO for this, i.e., the kernel maps a read-only executable page at a random address when the process starts, and when you try to execute a floating-point instruction it sets some registers (or pushes data and only data onto the stack) and updates the instruction pointer to point to that page. But the simplest fix to the existing architecture was apparently to specify a non-stack area for the kernel to use, which still involves a W|X segment, but at least it isn't the stack.


> If we were rearchitecting this from scratch today I bet there would be something like a vDSO for this

This is what I was getting at with my question. There is a way to do this; sure, branch-delay slots are annoying but they’re not impossible to work around.

> But the simplest fix to the existing architecture was apparently to specify a non-stack area for the kernel to use, which still involves a W|X segment, but at least it isn't the stack.

I mean, you protect yourself against trivial buffer overflows, but a good write primitive is still enough to get code execution. It’s more of a band-aid than an actual fix.


If I'm reading the patch right, you allocate a buffer at a random address and pass a pointer to the kernel, and then you don't have to hold on to a pointer to that address any more, right? So unlike with the stack, it should be hard for an attacker who can only write to figure out where to write. (Maybe this isn't true in a 2 GB address space given enough attempts?)


Yeah, I missed the part where the address is given by the user space thread to the kernel instead of the other way around. That being said, I’m not sure as to the amount of entropy the address can have: it might end up being bruteforceable on 32-bit without having to query the full address space (the memory region might be a whole page?). The best solution, of course, is to not have any RWX segments at all.
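
Back of the envelope, assuming a 2 GiB usermode address space and page-aligned placement (both assumptions; the real MIPS32 layout may constrain it further):

    #include <stdio.h>

    int main(void) {
        /* illustrative numbers only: 2 GiB of user address space, 4 KiB pages */
        unsigned long user_space = 1UL << 31;
        unsigned long page_size  = 1UL << 12;
        printf("%lu candidate pages\n", user_space / page_size);  /* 524288 */
        return 0;
    }

Half a million guesses per process isn't nothing, but against a service that respawns after a crash it's not obviously out of reach either.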


It wasn't ever "necessary". There are perfectly working IEEE emulation libraries written in portable C that the compiler could have hooked instead. But they're likely several times slower than what was picked.
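
To make "portable C emulation" concrete, here's a toy sketch of the building block such libraries start from: picking an IEEE-754 single apart with plain integer operations. A real soft-float runtime (e.g. the __addsf3 family in libgcc) also handles rounding, NaNs, infinities and subnormals:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        float f = -6.25f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);           /* well-defined type pun */

        uint32_t sign     = bits >> 31;
        uint32_t exponent = (bits >> 23) & 0xff;  /* biased by 127 */
        uint32_t mantissa = bits & 0x7fffff;      /* implicit leading 1 for normals */

        printf("sign=%u exp=%d mantissa=0x%06x\n",
               sign, (int)exponent - 127, mantissa);
        return 0;
    }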

NX stacks are a comparatively new idea, from an era when even embedded CPUs have lots of spare cycles. It hasn't always been that way.

The lesson I took away from this isn't about architecture, it's that Linux on MIPS has a pretty severe shortfall in maintainer bandwidth for this to have gone unnoticed and unfixed. I mean, the first question in review of this patch should have been "OK, who's going to fix the stack permissions now?"


It's not emulating the FP code itself that used this technique.

The issue is that the emulated FP code can include FP branch instructions, and those FP branch instructions have branch delay slots, and the instruction in the branch delay slot can be an ordinary integer instruction like a load or store - that has to be done with user permissions - and it can even be an instruction from some weird extension to the ISA that the kernel might not have heard of.

It's executing those branch delay slot instructions, in the branch-taken case, that uses this W|X page.
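
The shape of what gets written there is tiny; a simplified sketch of the idea (the real structure lives in arch/mips/math-emu/dsemul.c and differs in detail):

    #include <stdint.h>
    #include <stdio.h>

    /* The kernel copies the instruction that sat in the FP branch's delay slot
     * into a page visible to the thread, follows it with a trapping instruction,
     * and points the thread's PC at the frame, so the delay-slot instruction runs
     * with the thread's own address space and permissions. */
    struct emuframe_sketch {
        uint32_t delay_slot_insn;   /* the arbitrary user instruction to run */
        uint32_t trap_back_insn;    /* a BREAK-style instruction to re-enter the kernel */
    };

    int main(void) {
        struct emuframe_sketch frame = {
            .delay_slot_insn = 0xafbf0000u,  /* e.g. "sw ra, 0(sp)", illustrative */
            .trap_back_insn  = 0x0000000du,  /* MIPS BREAK, illustrative encoding */
        };
        printf("%08x %08x\n", frame.delay_slot_insn, frame.trap_back_insn);
        return 0;
    }

It's tiny, but it has to land somewhere the thread can execute, which is how you end up needing a W|X page.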


I'm guessing that they want to avoid the overhead of handling the fault, context switching to the kernel, emulating the instruction (which may be tricky if it involves a memory access), and context switching back to the application. Instead they can just emit the code into the address space of the process and patch the instruction with a jump to it. Maybe they can also "JIT" the instruction to emit optimized code for a particular invocation.

That's just a guess though, I read sideways through the paper and I don't think they really explain that. They link to this page but it doesn't really give any details: https://www.linux-mips.org/wiki/Floating_point#The_Linux_ker...

In particular I'm not sure why you'd want to put it in the stack instead of some allocated page dedicated to that endeavor (besides "it was already there so we used it").


I mean, TP-Link could enable all of these basic security features and be lauded in the paper, and then you look for 5 minutes and find a "system(<web interface input>)" level bug in the firmware.


ASLR, stack protector, etc. are all enabled in recent OpenWrt as far as I know; not sure about DEP though.


What are the best alternatives? I've read about some open source router alternatives (both hardware and software) in the past but forget the names. Very good hardware with hardened open software would also be great. Any information?


For myself, I switched everything over to Google WiFis, precisely because they auto-update, and having worked previously on a security-focused team at Google, I trust them to have a competent security team and actually stay on top of the patches. I don't miss fussing with manually updating router firmware. Life is too short.

On the other hand, my Nest thermostats were bricked after a software update this week, so maybe today I'm starting to see a crack in my "auto update is best" dogma...


> On the other hand, my Nest thermostats were bricked after a software update this week, so maybe today I'm starting to see a crack in my "auto update is best" dogma...

It works best when the vendor can be trusted to only push security updates and occasional quality of life improvements. In general, automatic updates tend to be a vector for bloat, user-hostile features (e.g. spyware), and user-hostile business practices (e.g. remote bricking).


Which vendors fit these criteria for you?

I am not aware of any, except—somewhat ironically—for some open source projects.


> Which vendors fit these criteria for you?

None. Hence, personally, I dislike auto updates.


MikroTik has a "bugfix" security update channel that can be selected in their routers' management interface. They also provide a "current/stable" channel which delivers new features.


> ...maybe today I'm starting to see a crack in my "auto update is best" dogma...

For me that crack came when an automatic update to Android removed Exchange server support from my tablet (around 2015). I no longer had a good workflow for keeping up with work communication, which ultimately, as a sometimes-remote dev at the time, cost me a lot of productivity and reputation at that job.

Now, anything that involves my productivity or quality of life (e.g. thermostat) is on a manual update process as much as possible.


I just checked my OnHub. It is the original, now over 3 years old. It got five updates this year, none of which I noticed. It's too bad that it's nearly guaranteed that they will kill this product line at some point in the future.


Same reason I've also switched to G-WiFi; lately I've been having issues with iOS devices disconnecting randomly, and after a couple of seconds the connection would be established again.

I've tried playing around with DHCP but nothing changed; maybe it's my ISP's router, which is a pita to work with... (Vodafone).


If you have a 1500+ sqft house, want an easy setup & excellent coverage, and want pro-level capabilities, try a Ubiquiti UniFi setup. Troy Hunt did this and is converted: https://www.troyhunt.com/ubiquiti-all-the-things-how-i-final...

If you've got less square footage (or don't need top speeds everywhere), want to spend less money, and want something really easy to set up, try Ubiquiti AmpliFi. Here's Troy's writeup on that: https://www.troyhunt.com/how-i-finally-fixed-my-parents-dodg...

Personally, I am amply served by a quad-core HP thin client ($70 on eBay) running pfSense (w/ Intel server NIC, $25 on eBay) and a Ubiquiti UniFi UAP-AC-LR ($75-90 used on eBay). My living space is 1500 sqft. Internet speed is 150Mb/s up and down, and I get that speed of WiFi nearly all over the house. Total cost was less than $200.


Given the posted article is about the poor security posture of most home routers, has anyone inspected ubiquiti?

I gather they have a bug bounty which is a good start, but so do Netgear and their routers are still full of bad vulns.


It is worth remembering that, unlike those home routers, Ubiquiti's EdgeOS is based on a major production network OS, Vyatta (as is VyOS), and therefore gets a lot more scrutiny and bug fixing than those other things, which are often made even more complicated by the lack of FOSS hardware or standardized ways to do things (many routers are stuck on whatever the manufacturer put on them at the factory). I'm sure Ubiquiti's code is probably ripe for fuzzing, etc., but I would put a well-configured EdgeOS device against any Cisco, Cumulus, ExtremeOS, RouterOS, Junos, pfSense, IPFire, etc. device any day. The main benefit of something like Ubiquiti is ASIC offload, which wouldn't exist if you ran something like pfSense on COTS x86.

The real irony of how vulnerable these devices are is that they are often based on FOSS tech that has updates to fix those issues, but it has been carefully packaged up inside a black box the consumer doesn't get to control and therefore doesn't get those updates.

Once again, why we need a "right to root".

For national security!


Yes, nice advertisement, and before I read this article I'd have agreed, but the bad news is I just verified that my 2 EdgeOS products don't have NX support. Heck, one of them uses the Cavium processor mentioned in the PDFs.


> Ubiquiti's EdgeOS is based on a major production network OS, Vyatta (as is VyOS), and therefore gets a lot more scrutiny and bug fixing than those other things

Apparently you’ve not really looked at EdgeOS. https://www.theregister.co.uk/2017/03/16/ubiquiti_networking...

Vyatta was acquired by Brocade in 2012. Ubiquiti forked after that.


EdgeOS was really based on VyOS... which was based on Vyatta, which was still being used on major production systems even before the Brocade acquisition. I have used it in prod myself. I should have said VyOS, but still, the point gets across to those not looking to nitpick.


I got finally frustrated with the flaky WiFi reception at my parents-in-law's and bought a UniFi AP for them. I am very satisfied - the WiFi reception is now either 5/5 or 4/5 in every room, while the previous Asus WiFi router had no reception in some rooms.

But I was rather surprised by the lack of almost any configuration options in the UniFi. No dedicated WebUI, just a few basic configuration options (SSID + WPA2 password) in the mobile app.

Do I need to procure a Windows machine and download the Windows app to configure and take full advantage of the AP? What extra options can I set up with the desktop app in addition to the basic configuration options in the mobile app? As said though - it works really well as it is.


The desktop app serves up the same web UI that the UniFi Cloud Key does. So you need either the cloud key or a computer on which to run the web app (even a Raspberry Pi is reportedly capable of serving it; I just fire it up on my Macbook Pro when it's time for an update or config change).

There is a dizzying amount of information and options in the web UI, but it's designed well so the basics are well-presented and accessible.


The UniFi Cloud Key is basically a piece of hardware akin to the Raspberry Pi, but only serving that purpose. The UniFi Controller software is written in Java so it will run on any machine which runs Java. You can even run it in Docker. You can also use SSH to connect to the AP.

The UniFi line is rather easy to configure with the UniFi Controller. The EdgeMAX line can utilise UNMS (which can be run in a VM or Docker) for configuration but also HTTP(S). EdgeMAX devices are much more powerful, allowing fine-grained configuration via SSH when HTTP(S)/UNMS doesn't suffice. For UniFi line you should be OK with the UniFi Controller software.

TL;DR: The UniFi line has a lower barrier of entry.

Something else of note: it appears Ubiquiti wants users to be able to use UNMS for UniFi products in the future. That's good news, because right now you need two different controller applications for two product lines from the same company.


I've happened on a similar solution after years of testing --- pfSense in a VM (with a dedicated NIC) on my NAS + backup box, and two Ubiquiti AC Pros. By far the fastest, most robust, and easy to configure system, plus nice benefits like pfBlockerNG (similar in functionality to a Pi Hole) and OpenVPN. Tutorials and documentation are generally pretty good for pfSense and UniFi is extremely intuitive.


I use a fanless PC running OpenBSD. It's a bit of work to configure, but I think I can trust it.


PC Engines APU2 fanless PC has multiple NICs, optional LTE/SSD, AMD CPU and coreboot open-source firmware. Widely used with pfSense and OpenBSD. TPM available via LPC bus.


Passively-cooled Apollo Lake x86_64 computer (e.g. from Compulab) with wifi and two Ethernet ports running a mainstream distro with proper security updates (e.g. Ubuntu). Self-configured dnsmasq, ufw and hostapd.


Another way to look at this: executable stacks, ASLR bypasses, etc. only make an exploit easier, so reducing your attack surface in the first place is a decent approach.

The internet-facing side of my router does nothing interesting (I haven't enabled the onboard VPN, remote management, etc.) and I trust everyone with access to the LAN side of my home wifi not to attempt to exploit my router in the first place. That said, I do wonder how much attack surface is exposed to websites making permissible cross-domain requests (blind GETs from image loads, blind POSTs from <form action="http://192.168.0.1">, etc.).


There are a ton of ARM based routers out on the market, both in user-friendly products (most current high end routers), and geek-friendly products (EspressoBIN, Turris Mox).

I'm personally running OpenWRT on an EspressoBIN with an Atheros card, but I keep eyeing the Turris pretty hard.


ARM-based products may be stuck on whatever kernel snapshot the board support package had. With x86_64 one can expect proper kernel security updates.


The boards I suggested are well supported in mainline Linux (though there are a couple of minor patches that haven't yet been merged).


There is insufficient demand for software warranties. It doesn't seem unreasonable to ask for a warranty of at least two years, which means one would get two years plus however many additional years the product continues to be sold.

It could even encourage security via read-only flash, where only a reboot is needed to clear any viruses.


Microsoft has proposed a model where they provide Linux security updates for 10 years, licensed separately from a device vendor that may go out of business. Let's see if Azure Sphere routers make it to market in 2019 - and at what price point. Maybe they can bundle the hardware in a single subscription service, or partner directly with cable companies.


You don't need a warranty to get kernel security patches _in practice_ if you run a mainstream distro on x86_64 hardware (which is available passively-cooled with low power consumption).


EspressoBin is really bad; they didn't implement the PCIe drivers properly. I would stay away and use an Intel Atom board from the 2010s.


Yeah -- that's probably a good point. Don't run the vendor kernel. That's honestly a good rule for most ARM boards -- you should avoid the vendor provided kernel tree.

To clean up the odd PCIe behavior on the Espressobin with mainline 4.17, there are 6 kernel patches required. Arch and OpenWRT both already ship them, and I'm in the process of getting them in to Buildroot. I'd expect they'll land in the mainline eventually.

https://github.com/jrb/buildroot/tree/espressobin/board/glob...

edit: sorry; 3 PCI patches. The other 3 mitigate other board/chip oddities.


You know that FreeBSD and pfSense run on the EspressoBin, right?


Not surprised: I remember that MIPS has instructions to trap on integer overflow 'for free' (no need to pollute your icache with branch-on-overflow like on other ISAs), but compilers don't use these instructions :-( Conclusion: to a first approximation, nobody cares about security.


They don’t use those instructions because unsigned integers in C are supposed to wrap.


I was talking about signed integers of course.
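
To make it concrete (the C below isn't MIPS-specific; it's just the language rules that tie the compiler's hands for unsigned but not for signed):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        unsigned int u = UINT_MAX;
        u = u + 1;           /* must wrap to 0 by the C standard, so the compiler
                                has to emit the non-trapping addu, never add */
        printf("%u\n", u);   /* prints 0 */

        int s = INT_MAX;
        (void)s;
        /* s = s + 1;           signed overflow is undefined behaviour, so emitting
                                the trapping MIPS add here would be legal; in
                                practice GCC/Clang still emit addu unless you opt
                                into checking with -ftrapv or UBSan, and even then
                                they tend to use branches or library calls */
        return 0;
    }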


I am not familiar with the services all those routers provide to their companies but I am not bringing one online without first slapping OpenWRT on it. Well tested and to my knowledge reliable. Functional for everything I need it for.


I have a small embedded device (that runs OpenBSD) that serves as the router here at home. It works wonderfully for me and serves my needs well but it definitely wouldn't be a good choice for 99% of "home router administrators".

Likewise, OpenWRT (and similar open-source router firmware) is a big step up in quality than probably pretty much anything any router manufacturer ships.

Here's the thing... <rant>

You and I -- and, I'd wager, the overwhelming majority of HN readers -- are easily capable of replacing our stock firmware, locking it down, keeping it up-to-date, and so on. Unfortunately, the average person (who likely just buys one of the cheapest routers they can find on the shelf at Walmart or Best Buy or similar) isn't.

The average consumer simply doesn't understand that many of these devices they buy -- especially all of the new "Internet of Things" devices that have been popping up the last few years -- are completely insecure pieces of trash. Hell, many of them don't even care -- well, until it directly affects them personally, at least -- so long as "it works". They have no desire to learn a ton of stuff about computing, networking, or security -- they just want the ability to monitor what's going on in their house while they're away or whatever -- and they cannot understand why they should be required to (and I, personally, don't blame them). They don't know about the whole "convenience versus security" continuum or just how far away from the "security" side of that continuum that these devices they're buying to make life more convenient are.

The average consumer (rightfully) expects that these devices that are available for them to purchase and install in their homes are (reasonably) "secure". They simply aren't aware of the sorry state of (in)security in the software industry.

I think that within the next few years we'll begin to see (in the U.S., at least) some regulation with regard to security and software. I don't think any of us really WANT this to happen (it would be much better if the industry were "self-policing", of course) but it has become apparent that those who are producing these devices simply aren't going to devote the resources required to improve the security of their products until they are forced to. </rant>

(Related: for the last seven years (until very recently) I worked for a small ISP. I was amazed at how very little many of the employees -- including the ones responsible for all of the networking gear! -- knew or even cared about security. With the exception of myself (and a recent college graduate who we hired as, basically, my "junior") nobody even thought about security unless or until "something happened" that required them to. Having experienced that, it became clear to me that the average person REALLY isn't gonna give a damn.)


I think part of it is that there also aren't that many hackers actually using exploits. Sure, there is automated stuff like botnets, but there seems to be little manual hacking done. Chances are, if you have lousy security, you're still not going to get hacked (unless you're literally holding money, or have blueprints of military hardware). If this is correct, it means that security is usually a waste of time.

I really care about security, but I think it's mostly because the aesthetics of insecure code bothers me, and not because of a carefully considered cost-benefit analysis.


I rather think the hackers are simply nice, not vandalizing wildly; instead they stay under the radar, and we don't even begin to understand what they might do with that power (close the holes, I hope).


Note that this does not affect all MIPS-based systems. The standard has included hardware floating point for many years now; it just seems some OEMs have neglected to upgrade, or cut it for cost savings.


Given the OEMs cited, would it be a reasonable estimation to suggest that the _majority_ of MIPS-based systems are affected?


Yeah, most consumer routers I've dealt with don't have a FPU. They're more likely to be dual core than have an FPU.


Consumers care about speed and don't care about security, so it's a reasonable choice by vendors.


Consumers don't really care about speed, they care about checkboxes being checked. I'd be surprised if dual core really did anything for getting packets to and from the gateway interface any quicker. If anything I bet it makes it go slower with a naive implementation.


Consumers care about security if they’re hacked.


They aren't hacked yet when they make the decision about buying the router, and anyway they cannot differentiate based on security promises marketed to them by vendors.


> they cannot differentiate based on security promises marketed to them by vendors

If $VENDOR has a reputation of being hacked, it might deter people from buying their products.


It won't make a difference. Consumers don't care about security on their devices. That is why I quite frankly like things like BrickerBot. The internet is an ecosystem, and people's devices should be bricked if they are having a negative effect due to blatant negligence by device manufacturers.

I hope a new round of BrickerBots continues killing devices that are unsupported and incredibly insecure. It is a public service (albeit a felony, so don't get caught).



Rebranding is cheap these days.


We've updated the submission from a tweet (https://mobile.twitter.com/dotMudge/status/10736765802899456...) to its linked article.


Hmm. The tweet and the thread on Twitter have more actual information on the vulnerability than the page you linked to. You have to click through to the actual paper to get the additional information. I feel like the Twitter link gives a better quick summary.


I think clicking through to actual papers is something HN is up for!


Then why not just link the paper?

I personally like the summaries. Papers are typically written to sound smart, not to read nicely. HTML scales, users can configure the font, you can click pictures to enlarge... but instead we choose to use PDFs. Why? I speculate it is because everyone else who sounds smart does it. This particular paper is not as bad as most (no serif font, 8pt text, small and unreadable diagrams, references instead of practical links) so it's definitely not all directed at this particular paper, but I do question the usefulness of linking to a longer text instead of to a useful summary that has a link to the longer text.

GP is right in that the first 15 words of the tweet told me more than the whole 269 word blog post. I also recognized the author on Twitter but not on the blog. I'm happy the tweet is still linked in the comments.


Yes, my point is, either leave the link to the Twitter thread (which also contains a link to the paper), or link directly to the paper. Linking to this summary page, which contains a brief abstract that contains more words but less information than the Twitter post, seems to be worse than any of the other options.





