I'm a little bit confused as to what this site actually is.
Is nommu a fork of the linux kernel like uclinux? If so, where is the source? Or is it just a guide for programming on MMU-less systems (e.g. uclinux).
I'm planning a MC68020 SBC project. My current goal is to get uclinux running, but I'd be interested to see what else I can run on it.
Anyone wanting a scheduler on a microcontroller based system would not want the baggage of a linux-level general purpose OS. You use a simple scheduler such as FreeRTOS/RTX/etc.
These days the only systems without full MMUs are microcontroller-based ones, for which this is not appropriate. No embedded system is going to be dynamically loading applications like this; it's all bare-metal, all built into the one image.
No-MMU linux did have a use back when say the 68000 based lines were still in wide use, but these days all I can think of is for user-space linux where everything is simulated in a single process under a full general-purpose OS.
There are still CPUs without MMU where one would like to run Linux (e.g. J-core or other FPGA soft cores).
One good reason for using Linux instead of any of the RTOS is better networking support. I've used VxWorks, RTEMS, Nucleus, FreeRTOS, eCos, etc and their networking stacks are all pretty slow and crappy.
I recently did a project with uClinux on an LPC1788 (Cortex-M3). Getting the USB host stack working is always a pain in the ass, but it was dirt simple on the Linux port.
Same for LCD framebuffer, GPIO, touchscreen (unique animals every time), I2C, SPI, etc etc etc. And I can pull anything I want off the kernel tree and not have to spend a week getting it working on another OS.
> Anyone wanting a scheduler on a microcontroller based system would not want the baggage of a linux-level general purpose OS.
but that baggage also includes such juicy things as a hardware abstraction layer with plenty of drivers, a network stack with 6LoWPAN and a familiar programming interface.
True, so this comes down to the scale of embedded system we're talking about.
Typically, the scale of embedded systems runs from very BoM cost sensitive (i.e. making millions of the things), to very cost insensitive (i.e. tens of high value items).
Microcontroller-based systems tend to be on the very-cost-sensitive end of the scale, because if cost isn't an issue, you just stick an i7 in there instead!
If you're cost sensitive regarding the hardware, then you want the most efficient software possible to reduce the hardware requirements, and hence would not want the baggage of a general-purpose OS for a build-time-configured, static system.
There are plenty of hardware abstraction layers in the embedded world (e.g. CMSIS), and familiarity is just a case of what you're used to; that argument could quite easily be turned on its head with a different set of people.
IP stacks also exist and are well ported to many RTOSs, (e.g. LwIP).
For example, a (very) typical embedded system that would be familiar to a lot of embedded devs would be:
- Cortex-Mx (CMSIS for HAL)
- FreeRTOS (scheduler)
- LwIP (IP Stack)
- Eclipse IDE.
There are always exceptions to this; e.g. human-resource and development-time issues can change the direction you head in.
BoM cost is not the only cost: the larger your processor, the more complex things like power sequencing, routing to DRAM, boot sequence and bringup, etc. are going to be. Those affect the overall design time and cost even if you're only building one item.
I spent a couple of years building a system to run in the "Sierra Wireless OpenAT" RTOS environment (like those ubiquitous SIM900 modules, but you could add your own code to them). It was pretty terrible and I'm glad they now offer modules that run Linux.
I'm not aware of any drivers out there that work purely based on CMSIS; mostly they are vendor specific (e.g. ST will provide drivers for their chips based on the stm32 HAL, TI will have Tiva-based drivers for TI chips, etc.), if they exist at all.
I'd love to see more code reuse here.
I agree that nommu Linux is a bit too heavy for most cases, as it will always require external RAM, and at that point you might as well consider a cheap Cortex-A chip (though they are not so easy to source and aren't documented as well as, e.g., an stm32). But I'd be more than happy if something like RIOT or ChibiOS got more traction.
We used it on microwave link products and satellite tracking antennas (on MicroBlaze microcontrollers in Xilinx FPGAs) for many years until we moved to Zynq (FPGAs with a dual core Cortex-A9 bolted on) in 2013.
curious... why would you take the overhead of a general-purpose OS for a system like that?
If it's just for an IP stack, there are easier, more standard, less resource-intensive ways of doing that than using a version of Linux with all its drawbacks.
I wasn't there when the decision was made, but it was not too bad an environment. It ran a web server with quite a complex web application, multiple daemons monitoring and controlling various devices, SNMP, various DMAs into the FPGA's block memory to shunt data back and forth for processing in various ways.
I probably would have looked at lighter options like VxWorks or QNX or something but for our new products with a dual core A9 Linux suits us really well.
If you are creating a complicated communications product it can be helpful to be able to develop on a desktop machine then use basically the same source code on the embedded target.
Linux kernel as a Library (LKL)[0] uses nommu mode. I think mostly just because it was easier to get working fast than trying to emulate an MMU. They've attempted to upstream a patch to XFS to work in nommu mode[1], but met some resistance from some XFS developers.
I don't know about this project, but having all processes in the same address space can simplify interoperation: calling code and accessing data of another process. But that would also require some novel protection (access-rights) principle other than address-space separation. The project mentions nothing like this.
Virtual address translation can introduce logarithmic slowdowns for certain memory access patterns. Other than that I don't know why you would want to eschew it.
> But I guess you could also use a special malloc that randomly returns NULL to do that..
On Linux, set the sysctl vm.overcommit_memory = 2.
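As a sketch (Linux-specific; mode 2 enables strict accounting, and changing it needs root):

```shell
# Show the current overcommit policy:
#   0 = heuristic (default), 1 = always overcommit, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory

# With strict accounting, malloc() can genuinely return NULL (root required):
sudo sysctl -w vm.overcommit_memory=2
```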
You can also usually instruct libc to trash freed memory as a debugging mode. But I don't know how to get glibc to do so. Valgrind is good at tracking this kind of thing as well, although slower.
Only if you have to. Not all processors come with an MMU, and using an OS is not always necessary. There are size thresholds above which an MMU makes sense, so memory management does not always imply a memory hierarchy that requires one. An MMU may also not make sense in systems with deterministic, hard real-time requirements.
I had an IP camera running uclinux [1] on ARM7 years ago. The camera was connected to the CPU with a USB interface internally. I guess the benefit of using Linux was the free IP, USB, and UVC stacks.
Kinda. The best I can find is that AmigaOS didn't have memory protection because the 68k initially didn't have an MMU, though later variants got one (68030 onwards?).
I think there were similar issues with (classic) Mac OS and Windows 9x.
Classic MacOS dealt with memory fragmentation in a pretty neat way, though.
All code was relocatable to start with, and pretty much every memory block you'd allocate was also relocatable by using one further level of indirection called a Handle.
A Handle was basically a pointer to a pointer. A Handle defaulted to 'unlocked' so the actual memory block could move at any time; unless you Locked it, and Unlocked it afterward.
This allowed the OS to compact the memory heap, moving all the relocatable blocks into one corner to allow further contiguous blocks to be allocated.
Of course, this is a primitive concept these days, but it allowed amazing pieces of software to exist on very, very small memory systems.
I fondly remember playing with Photoshop 2.5 on my Color Classic with its 16MHz CPU with 10MB RAM... It was amazing what you could do on such a "primitive" machine.
Photoshop was a little bit of a masterpiece back then (it arguably still is, deep down) -- it had its own 'swap' system for large pixmaps that allowed you to work on images that were massively bigger than the onboard memory; and it wasn't even that slow (unless you were applying filters on the whole image).
If you were lucky enough to have an Amiga with a CPU that did have an MMU, you could use a piece of freely-downloadable software (called GigaMem) to implement paged virtual memory - albeit still with a single address space. One of those few operating systems that could have virtual memory implemented outside the OS kernel.
Why: Even 10 years ago some routers/set-top boxes ran on MMU-less SoCs (Coldfire, Lexra LX4180), and stayed on uclinux even after switching to new processors with an MMU.
It reminds me of the ELKS project to run Linux on the i8086-i286. What would be the advantages or disadvantages of this compared to uClinux? For example, perhaps its raison d'être is that you can run it on a Cortex-M0 for ultra low power.
IMHO checking the return value is only useful if you can actually do something about it.
For example, you might want to gracefully write a database to disk before terminating the program.
It's definitely bad practice, but if you run out of memory during the initialisation of a program, quitting with a segfault is a fairly reasonable error. Of course, catching the error and printing something meaningful would be much better, but if you're in a rush to get something to work for a personal project it's not much of an issue.
Agreed. The more expected case would be: your httpd is already handling XXX million clients, but the next one comes in and you fail to malloc a tracking structure. Closing the connection and dropping the one client makes more sense than aborting the daemon and dropping all clients.
The article states that malloc only fails when you run out of virtual address space, which is basically never going to happen if you're on a 64 bit architecture. So you can go for good practice and check all your mallocs for failure, but if the failure never happens why bother?
I check them in all of my code because, if memory allocation fails, I would rather look at recovery options or terminate the program gracefully myself than let the program keep running, performing potentially dangerous operations with bad data, until it fails somewhere down the line.
Sure, maybe that code branch will never get executed but I'll immediately know if it ever does and won't have to try and figure out where the null pointer came from.