WiFi chips and baseband processors are particularly attractive targets for exploitation: they are a network entry point for devices, and they run systems that have probably been less investigated (at least publicly).
Very nice research and writeup. For those who haven't seen it a couple of years ago, Project Zero also had a series of articles about exploiting Broadcom's WiFi stack [0].
Can someone please do a crowdfunding for a fully open source 802.11ac chipset and mini PCI express device?
For security purposes we do not want any binary drivers, blobs, bloated boot loaders, or other fancy non-security in the hardware. This is a really, really basic security requirement.
802.11-2016 is a 3000+ page (very dense) technical standard, you'd need bigger backing than a crowdfunding campaign to create an open hardware+open software solution for it.
> Can someone please do a crowdfunding for a fully open source 802.11ac chipset and mini PCI express device?
A few RF transceivers and an FPGA like an Artix-7 (which has PCIe capability) might do the trick. It wouldn't be as cheap as a mass produced chipset, but a completely open 802.11ac chipset is unlikely to be mass produced anyway.
We already have examples of LTE base stations being run with SDR hardware like the LimeSDR, which is just an RF transceiver and an Altera FPGA, with a USB3 connection to the FPGA fabric.
In fact there are some SDR/FPGA dev kits that are Mini PCIe size and intended for use inside a laptop, specifically designed with LTE in mind[1].
So WiFi seems doable, even if you end up with a soft core CPU in the FPGA to do the same jobs WiFi chipset firmware is doing right now, at least you'd have full control over it and the firmware running on it.
Unfortunately the FPGA ecosystem is even more closed and open-source-unfriendly than the WiFi hardware one. You aren't allowed to know anything about the chips, how code runs on them, or how to upload your own code, and you have to use vendor-specific IDEs and language extensions that you're lucky if they work anywhere outside of Windows.
Current market FPGAs definitely aren't some shining beacon alternative to shitty hardware vendors, they are amongst the worst of the lot.
A few years ago that was true, and commercial tools are still horrid and closed and necessary for certain FPGA families.
But as of right now you can use[1] the Lattice iCE40 (small, 8k LUTs), Lattice UltraPlus (5k LUTs, DSPs) and Lattice ECP5[2] (~85k LUTs, with 5G SerDes and PCIe Gen 2) with completely open tools. The ECP5 in particular would be well suited for it.
And there is a productive effort[3] to do the same for the Artix-7 and other Xilinx 7 Series parts.
Even for those parts that are still very much closed, you can load an existing bitstream on them using open tools. Intel Max10, which is the part found on the LimeSDR Mini, is one of those even though we don't have open bitstream documentation for it yet.
The major commercial FPGA tools all work on Linux at this point too; I use most of them on Ubuntu routinely, including Lattice Diamond, Altera/Intel Quartus, and Xilinx ISE/Vivado.
This is a lot harder to achieve than it sounds. Open source hardware projects that go as far as chip design - are there any of these that have been successful?
It needs a lot of money. You'd have to sell people a free virtual spaceship with the thousand-dollar tiers, maybe. And persuade people to accept higher per-unit costs than the cheap Chinese equivalents.
There are a lot of open-source SDR projects around for things like LTE - you'll never get the per-unit costs of fixed function taped-out designs but it should be doable. SDR stacks already exist for WiFi monitoring and analysis although most of the stacks are CPU based and therefore too slow to associate with networks because they can't send ACKs in time. With an FPGA based system the latency requirements are probably achievable.
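The ACK-timing point above can be made concrete with a back-of-the-envelope check. The SIFS values come from the 802.11 standard (roughly 10 µs for 2.4 GHz DSSS, 16 µs for 5 GHz OFDM); the host round-trip figure is only an assumed, order-of-magnitude number for a USB SDR front end plus a userspace stack:

```python
# Why a CPU-based SDR stack can't associate: the ACK must start one SIFS
# after the received frame ends. SIFS values are from 802.11; the host
# round-trip latency below is an assumption, not a measurement.

SIFS_US = {"2.4GHz_DSSS": 10, "5GHz_OFDM": 16}

def ack_budget_ok(host_round_trip_us: float, band: str) -> bool:
    """True if the host can get an ACK on the air within one SIFS."""
    return host_round_trip_us <= SIFS_US[band]

# A USB3 SDR plus a userspace stack is typically hundreds of microseconds
# to milliseconds round trip (assumed figure):
host_latency_us = 500.0

for band, sifs in SIFS_US.items():
    print(band, "SIFS =", sifs, "us, CPU stack makes it:",
          ack_budget_ok(host_latency_us, band))
```

With an FPGA generating the ACK in the datapath, the response latency is deterministic and well under the SIFS budget, which is why the FPGA approach looks viable.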
The better approach is probably a fully open firmware for an existing ASIC - let someone else subsidize your production costs. Obviously there's still attack surface in the fixed-function ASIC components but the attack surface is way smaller and the boundary could probably be audited fairly well.
Better leverage would probably be demand side -- get a couple of major buyers (maybe for systems which can safely use older tech, like industrial or government or finance) to demand a baseline which requires open source/verifiable hardware, at least for certain systems.
Given that Wifi chips already have their own ARM CPU, at this point I'd rather have that CPU which already runs its own OS to just present as a network device to do NAT. Connect it to the fixed network, use a serial link - anything will do.
At least, I'd rather have anything but the current alternative: a device on the PCI bus having DMA with a firmware I can't audit.
Not speaking specifically of the OP case, but the CPU gets less and less involved in the datapath beyond a certain maximum-throughput requirement. Insisting on traffic still going through the CPU would raise the bar on the CPU (and, as a consequence, mean more fast RAM, increased overall power consumption, and shorter battery life).
> with a firmware I can't audit.
In modern fast datapaths, there is a good deal of hardware acceleration involved, the firmware code would probably be incomprehensible without intimately knowing these.
My understanding is that all modern OSes now use the IOMMU to protect system memory from rogue devices. Of course that protection is only as good as the PCIe implementation and the drivers doing the mapping and operating on the mapped structures.
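On Linux you can at least see whether the kernel has brought up any IOMMU units at all, since it registers them under sysfs. A minimal sketch, assuming the conventional `/sys/class/iommu` path; on other OSes, or on systems where the IOMMU is absent or disabled, the list is simply empty:

```python
# Quick Linux-only check for active IOMMU units: the kernel populates
# /sys/class/iommu with one entry per initialised IOMMU. The path is a
# Linux sysfs convention; an empty result means no IOMMU is visible
# (absent, disabled in firmware, or a non-Linux system).
import os

def active_iommus(sysfs_root: str = "/sys/class/iommu") -> list:
    """Return the IOMMU instances the kernel has registered, if any."""
    if not os.path.isdir(sysfs_root):
        return []
    return sorted(os.listdir(sysfs_root))

if __name__ == "__main__":
    units = active_iommus()
    print("IOMMU units:", units if units else "none visible")
```

An empty list is exactly the situation the parent worries about: any DMA-capable device on the bus then has unrestricted access to system memory.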
TL;DR
Researcher finds a super cool RCE over (unconnected) WiFi in the Marvell Avastar Wi-Fi chipset family firmware, and another to (locally) exploit the AP device driver.
List of impacted devices includes PS4, Xbox One, Samsung Chromebooks, and Microsoft Surface devices.
Nicely written paper from Embedi researcher Denis Selianin himself:
Good. I hope that vulnerabilities like this continue to surface until legislators take notice. Morally bankrupt vendors will never stop locking down hardware unless governments get involved. Fuck each and every company that does this. Fuck them all to hell.
While I agree with you that vendors need to be held accountable for shipping crap, we also have to beware that we don't end up in a world of devices we cannot do anything on.
All kinds of jailbreaks, no matter if for the first generations of iPhones, for consoles, or for rooting Android devices, are based on vendors implementing shoddy security. Take it away and whoops, now we as users are fully in the death grip of what vendors and RIAA/MAFIAA allow us to do.
we also have to beware that we don't end up in a world of devices we cannot do anything on.
IMHO it's already gotten a bit too far in that direction, and if there's no mass revolt (which is itself quite unlikely), it's only going to get worse. The old Franklin quote has never been so relevant... people these days value "safe" so highly over "free" that they don't realise they're building prisons around themselves.
An actionable way to discourage this outcome is to use the General Public License version 3 or later, which contains "right to repair/right to tinker" provisions: https://www.gnu.org/licenses/quick-guide-gplv3.en.html
In the United States, you can also join The Repair Association advocacy group: https://repair.org/
We're actually on the same page here. I should have worded my comment more clearly: governments should get involved and force vendors to allow device owners full control over their devices. Not only to secure themselves from vendor mistakes, but also to repurpose the device to fit their needs.
Remember that your computer is really made of multiple computers that run bare-metal code or can have their own OSes. For example, Intel's Management Engine runs MINIX inside the chipset.
>> After the memory controller, bus bridges, GPUs, and DMA controller, it's probably the largest one
Probably best to distinguish between processing-heavy and throughput-heavy; apart from the GPU, all those are throughput-heavy but do very little to the data, and I think would not normally have a recognisable processor or an OS. There's the additional problem that they're required to boot, so nowhere convenient to load the OS from unless you scatter SPI flash across the board.
GPUs on the other hand have a full task-switching operating system.
It means it has a large processor that can run an RTOS. It does not automatically mean it has an RTOS.
I would be surprised if the faster bus and memory controllers had it (because of latency problems). I would expect something like it on a USB controller, GPU, network or disk interface. I really have no idea what to expect from a DMA controller.
It's on the card itself, fairly similar to how controllers work in HDDs/SSDs: basically they abstract the physical storage medium, which allows them to use a wide range of flash memory on the backend while maintaining compatibility with the SD format.
While you can achieve the same abstraction in hardware, it's much easier and cheaper to simply pick a small microcontroller and do all the black magic you need in software. That matters especially because the flash on some of these cards can be really, really bad: it's often the lowest-grade flash, or worse, recycled memory, which means you end up with chips that are 50-80% defective. The controller ensures those sectors/cells aren't used.
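The defective-cell masking described above is essentially a logical-to-physical remap. A toy sketch of the idea (real controllers also do wear levelling and ECC; none of this reflects any particular vendor's firmware):

```python
# Toy sketch of what an SD controller's translation layer does: map logical
# blocks onto only the physical blocks that passed the factory scan, so a
# card built from heavily defective flash still presents a clean, contiguous
# address space. Purely illustrative; real firmware adds wear levelling,
# ECC, and dynamic bad-block handling.

def build_remap(total_blocks: int, defective: set) -> list:
    """Logical block i maps to the i-th good physical block."""
    return [p for p in range(total_blocks) if p not in defective]

remap = build_remap(10, defective={1, 3, 4, 8})
print(remap)            # logical 0 -> physical 0, logical 1 -> physical 2, ...
print(len(remap))       # usable capacity, in blocks
```

The host only ever sees `len(remap)` clean blocks; the defect rate just shrinks the advertised capacity.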
Wow. I wonder if my mental image of an RTOS on the devices is correct? I would've imagined there's some sort of firmware inside any device that communicates with a host, but when I hear RTOS I think of something fairly generic that would be deployed across a range of device types (like a Linux system but real time) rather than specialized firmware... is that accurate?
For the WiFi device mentioned in the original article, they disassembled the firmware and found out that it uses "ThreadX": https://en.wikipedia.org/wiki/ThreadX
> "ThreadX provides priority-based, preemptive scheduling, fast interrupt response, memory management, interthread communication, mutual exclusion, event notification, and thread synchronization features. Major distinguishing technology characteristics of ThreadX include preemption-threshold, priority inheritance, efficient timer management, picokernel design, event-chaining, fast software timers, and compact size. The minimal footprint of ThreadX on an ARM processor is on the order of 2KB"
ie it's generic code that gets built into a unikernel-style image with the device-specific code that actually implements the various tasks required of a wifi controller.
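The "preemption-threshold" feature the quote mentions is one of ThreadX's distinguishing ideas, and it's simple to sketch: a ready thread preempts the running one only if its priority beats the running thread's *threshold*, not merely its priority. A minimal illustration (following ThreadX's convention that a lower number means higher priority; this is the idea only, not the real kernel logic):

```python
# Sketch of ThreadX-style preemption-threshold scheduling. A thread runs at
# some priority but can raise the bar for being preempted: only work whose
# priority is better (numerically lower) than its *threshold* interrupts it.
# This lets a thread block out mid-priority work while staying preemptible
# by genuinely urgent work, without a full critical section.

def should_preempt(running_threshold: int, ready_priority: int) -> bool:
    """True if a newly ready thread preempts the currently running one."""
    return ready_priority < running_threshold

# A thread at priority 10 running with threshold 5:
print(should_preempt(running_threshold=5, ready_priority=7))  # False: blocked out
print(should_preempt(running_threshold=5, ready_priority=3))  # True: urgent enough
```

It's a middle ground between ordinary priority preemption and disabling preemption entirely, which is why it shows up in firmware where interrupt latency still matters.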
Embedded microcontrollers are very common these days in a wide range of ICs. Most of them aren't disclosed or accessible to their users; their sole role is to abstract the physical IC and present what the customer expects. This applies to things like timers, flash memory, or even microcontrollers themselves, as well as to more complex ICs that need to implement some defined PHY, like network or serial controllers.
Which ones? The ones on SD cards are there for one reason only and that is to provide the hardware abstraction and PHY compatibility to allow SD cards to be manufactured more cheaply.
I haven't seen any evidence that any of these MCs do anything beyond that.
An RTOS is not really an OS, just a super-fast way of dealing with I/O streams. More driver/firmware than OS, if you ask me. But some companies need fancy words for marketing, I guess.
> not really an OS, just a super-fast way of dealing with I/O streams
Saying this without any animosity, but you would probably be interested in reading about the history of operating systems. Desktop OSes are a (very visible) minority, and it's the opposite way in my opinion: a desktop OS is an OS + a large suite of tools + a shell.
It's literally something that operates the system: it frees every program from handling all the low-level I/O itself, it enables task management, etc.
RTOS literally means "OS with real-time capabilities", you can't say in general that an RTOS is not really an OS. QNX is very clearly an OS. Linux with real-time modifications is clearly an OS. And even with really small variants, like ThreadX in this case, they have many markers of an OS: It provides threads, with scheduling, synchronization and memory isolation (if the hardware supports that). It has a networking stack and file system abstractions. What exactly is it missing that makes it clearly "not an OS"?
[0] https://googleprojectzero.blogspot.com/2017/04/over-air-expl...