> anything moderately large would need to be put into a special memory block that the OS could rearrange at will, and one would need to lock the block's handle to keep it stable while accessing it
Didn't 16-bit Windows and classic Mac OS do something similar? If you're doing multitasking on a system without an MMU then I think that kind of live heap defragmentation would have been practically required.
Yes. The idea wasn't to get away with not having an MMU, though - it was to get away with shipping the Mac with an ungodly low amount of RAM for a machine with a GUI. I believe the original idea was to ship with like 64k or something?
Obviously, given the state of mobile hardware back then, relocatable blocks were similarly necessary to save RAM.
For anyone wondering, no, this isn't the thing that made classic Mac OS unfit for multitasking. The MMU is necessary to keep applications from writing over other apps' heaps, not to do memory defragmentation. You can do some cool software-transparent defragmentation tricks with MMUs, but if you're deciding what the ABI looks like ahead of time, then you can just make everyone carry double-pointers to everything.
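For the curious, the "double pointer" trick looks roughly like this. A minimal sketch in C with made-up names (the real Toolbox calls were NewHandle/HLock/HUnlock): the app holds the address of a master pointer, and the OS rewrites the master pointer whenever compaction moves the block, so every handle stays valid.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* The app never holds the block's address directly; it holds the
       address of a master pointer that the memory manager updates
       whenever compaction moves the block. */
    typedef char **Handle;

    Handle new_handle(size_t size) {
        char **master = malloc(sizeof *master); /* master pointer: never moves */
        *master = malloc(size);                 /* the relocatable block       */
        return master;
    }

    /* What the OS does during compaction: move the block and fix the
       master pointer. Every handle the app holds stays valid. */
    void os_move_block(Handle h, size_t size) {
        char *dst = malloc(size);
        memcpy(dst, *h, size);
        free(*h);
        *h = dst;
    }

    int main(void) {
        Handle h = new_handle(32);
        strcpy(*h, "hello");   /* dereference twice to reach the data  */
        os_move_block(h, 32);  /* block moves behind the app's back... */
        printf("%s\n", *h);    /* ...but the handle still prints hello */
        return 0;
    }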
Well, there's also the fact that the MC68000 in the original Mac didn't have an MMU, and it was difficult to add an external MMU to a 68000 system [1]. You could use an MMU sanely starting with the MC68010, and it wasn't until I think the MC68030 that the CPU came with an integrated MMU.
[1] Because exceptions on the 68000 didn't save enough information to restart the faulting instruction. You could get around this, but it involved using two 68000s, an insane hack ...
Just to muddy the waters some more, there was also an EC variant¹ of the 030 without the MMU.
The EC variant was available right through to the 060, and I'd be curious to know how prevalent the line was. I suspect the EC versions far outnumbered the "full" chips, because they appeared in all kinds of industrial systems. I'm basing that entirely on working for a company that was still shipping products with MMU-less 68k and ColdFire parts this century, not on any real data.
And there's more mud to be found! The 'EC' version of the 68040 & 68060 was "no MMU, no FPU", and there was an 'LC' variant of the 68040 & 68060 that was "MMU, no FPU".
There were huge numbers of embedded 68k family chips shipped, although I've never seen the actual numbers. Folks went from 68000 to 68ec020 to 68ec060 as a (sorta) easy upgrade path. They're still made if you count the 68sec000, and the 68300 line is the spiritual successor.
> You could use an MMU sanely starting with the MC68010
Whether the Motorola MMU for the 68010 (the 68451) was sane or not is a matter of some debate. The 68451 was definitely slow and limited (segments, not pages); in the end, most vendors rolled their own MMU out of static RAM and PALs.
> For anyone wondering, no, this isn't the thing that made classic Mac OS unfit for multitasking
Yeah, the way to port classic MacOS apps to native OS X was called Carbon, and it was basically 80% of the classic MacOS toolbox just ported to OS X, Handles and QuickDraw and all. Classic MacOS apps written to CarbonLib would "just run" natively in OS X (and the same binary would run in classic MacOS). Carbon even kept working on Intel MacOS, but they finally killed it with the 32-bit deprecation a year or two before Apple Silicon was released.
Apple could have worked multitasking into classic MacOS if they really wanted to, but their management was totally dysfunctional in the '90s: there was no point seen in investing in boring old MacOS since there was always a revolution just around the corner in the form of Pink, Taligent, Copland, etc., projects which, due to the aforementioned management, never went anywhere.
> Apple could have worked in multitasking in classic MacOS if they really wanted to
They did, and they couldn’t. Most users had some code running that patched system calls locally or globally or that peeked into various system data structures, and all applications assumed the system used cooperative multitasking. Going from there to a system with preemptive multitasking would mean breaking a lot of code, or a Herculean effort to (try to) hack around all issues that it caused with existing applications. I think that would have slowed down the system so much that it wasn’t worthwhile making the effort.
Having said that, MacOS 9 had a preemptive multitasking kernel. It ran all ‘normal’ Mac applications cooperatively in a single address space, though. Applications could run ‘tasks’ preemptively, but those tasks couldn’t do GUI stuff (https://developer.apple.com/library/archive/documentation/Ca...)
Microsoft actually sort of did that with Windows 9x.
There was a lot of Windows 3.1 and DOS software and drivers that they wanted to keep compatible, and those relied on DOS quirks. So they kept a copy (copies) of DOS always resident in RAM, mostly unused. All DOS syscalls (interrupts) were hooked so that they would call Windows instead. If a program added its own hooks, Windows would detect that and switch to 16-bit mode for the relevant operation. When the custom hook completed, it would call the next hook in the chain, which was the one that went back to Windows. Of course, this relied on Windows understanding DOS's internal data structures and keeping them in sync with what Windows was doing. A similar technique was used for drivers: if Windows found a driver it didn't understand, it would let that driver run in 16-bit mode.
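To make the chaining idea concrete, here's a rough sketch in portable C (hypothetical names; real INT 21h hooking was done in 16-bit assembly through the interrupt vector table): each hook remembers whatever handler was installed before it and passes calls down the chain, and under Win9x the last link is the one that re-enters 32-bit Windows.

    #include <stdio.h>

    /* Each hook remembers the previously installed handler and chains
       to it; the final link is the handler that re-enters Windows. */
    typedef void (*handler_t)(int function);

    static handler_t chain_next; /* whoever was installed before us */

    static void windows_handler(int function) {
        printf("32-bit Windows services DOS function %02Xh\n", function);
    }

    static void app_hook(int function) {
        if (function == 0x3D)    /* say this app only hooks "open file" */
            printf("app hook sees the open-file call first\n");
        chain_next(function);    /* then pass it down the chain */
    }

    int main(void) {
        chain_next = windows_handler;  /* Windows hooked INT 21h first  */
        handler_t int21 = app_hook;    /* the app hooked on top of that */

        int21(0x3D);  /* seen by the app's hook, then by Windows */
        int21(0x40);  /* falls straight through to Windows       */
        return 0;
    }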
There was a company that re-implemented the MacOS toolbox on various Unixes. I hope somebody on HN worked there and can fill in more details.
TL;DR from [0]:
The company was originally Quorum Software Systems, Inc., which became the Latitude Group in 1994 and was bought by Metrowerks in 1996-7?.
The original product was called Equal, and allowed Microsoft Word 5.1a and Excel 4.0 (the MacOS versions) to run on UNIX with native [Motif?] look and feel and good performance, without needing source code.
They later made a library called 'Latitude' so MacOS app developers could easily port their apps to Unix, which is how the Adobe apps (Photoshop, Illustrator, etc.) became available for Unix... and Latitude also apparently implemented a lot of QuickDraw.
"At the heart of Latitude is our own Portable Toolbox Implementation Layer. This layer is completely platform independent. It presents the Mac Toolbox API to the application, answers these calls through a trap table mechanism, and relies on other toolbox calls within the layer whenever possible. When a native system facility is needed, such as the display of a window or control, or some graphical rendering, this layer calls out to one of Latitude's platform dependent modules through an internal, well defined API. The toolbox layer doesn't know what kind of system lies underneath, only that calling this function will display a window or that function will draw a line, etc.
.. Because we've mapped native system facilities to Mac calls, the running application is an equal citizen on the desktop. The application's windows, menus, and control items are native system objects. Cutting and pasting between apps is facilitated by the native system's clipboard mechanism. Fonts come from the system font server -- including the default system font, which means the dialogs come up in something other than Chicago! Application windows are native windows -- not some rendering of a window inside of another. The performance hit is minimal. Latitude is merely mapping the Mac calls to the native system. There is very little processing going on in between. "
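In case the trap-table bit is abstract, here's a toy sketch of the dispatch structure it describes (names like NewWindowImpl and the X11 backend are my invention, not Latitude's actual code): toolbox entry points are looked up in a table, and the platform-independent implementations call down into a small platform-specific backend.

    #include <stdio.h>

    /* Platform-dependent backend: one implementation per host system. */
    struct platform_ops {
        void (*show_window)(const char *title);
        void (*draw_line)(int x1, int y1, int x2, int y2);
    };

    static void x11_show_window(const char *title) {
        printf("[X11 backend] creating native window '%s'\n", title);
    }
    static void x11_draw_line(int x1, int y1, int x2, int y2) {
        printf("[X11 backend] line (%d,%d)-(%d,%d)\n", x1, y1, x2, y2);
    }

    static struct platform_ops backend = { x11_show_window, x11_draw_line };

    /* Platform-independent "toolbox" layer: answers Mac-style calls,
       delegating native work to the backend through the internal API. */
    static void NewWindowImpl(void) { backend.show_window("Untitled"); }
    static void LineToImpl(void)    { backend.draw_line(0, 0, 100, 100); }

    /* The trap table: toolbox calls dispatched by trap number. */
    typedef void (*trap_fn)(void);
    static trap_fn trap_table[] = { NewWindowImpl, LineToImpl };

    int main(void) {
        trap_table[0](); /* the application "calls" NewWindow */
        trap_table[1](); /* the application "calls" LineTo    */
        return 0;
    }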
More saliently than that, Palm started out as a vendor of Newton apps, before it started making its own Newton-killer hardware.
Palm's Graffiti started out as an alternate text input system for the Newton. It was an Apple software vendor long before it was an Apple rival, and its design is influenced by the Newton more proximally than the Mac.
Well - it is even more than that. They basically used Apple-style code resources to define PalmOS apps. You used to be able to compile Think Pascal code on a Mac, and some guy worked out how to convert the compiled MacOS code into a PalmOS app just by tweaking its code resources. It was quite mind-bending to me as a 20-something PalmOS fanboy with a day job doing Delphi. This was in like, 1998/1999 or so. I even went as far as emulating MacOS just to play with it. I don't know if his code still exists online, but the tool was called SARC (Swiss Army Resource Compiler) if anyone cares to search for it.
I don't think Palm did a lot to change the exe format in the early days. And they used the same CodeWarrior 68K compiler that also targeted MacOS at the time.
Classic MacOS did, but it's definitely not something needed for multitasking without an MMU. For instance AmigaOS didn't do this, but instead effectively had a single shared heap.
Mac OS, Win16, and PalmOS all have shared heaps too. This is precisely why you need defragmentation (after an application quits, the heap is a fragmented mess, full of holes) and therefore some system so that the other applications keep "movable handles" to heap blocks instead of raw pointers (which would become invalid after the heap undergoes one round of defragmentation).
If an OS does not do this you are basically indirectly setting a limit to its uptime, as eventually this global heap's fragmentation will prevent launching any new programs.
Having local heaps does not solve this, as you still have to allocate those local heaps from somewhere. Having an MMU allows you to do transparent defragmentation without handles, as raw pointers (virtual addresses) become your handles. Having an MMU with a fixed page size allows you to avoid the need for defragmentation outright.
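A toy illustration of why compaction forces handles (assumed names, not any real OS's allocator): compaction slides live blocks down over the holes and rewrites every master pointer, so a raw pointer saved before compaction would dangle.

    #include <stdio.h>
    #include <string.h>

    #define HEAP_SIZE 64
    static char heap[HEAP_SIZE];

    /* Master pointers live in a fixed table; a handle is an index (or
       pointer) into this table, never into the heap itself. */
    struct master { char *ptr; size_t len; int live; };
    static struct master table[8];

    static void compact(void) {
        char *dst = heap;
        for (int i = 0; i < 8; i++) {
            if (!table[i].live) continue;
            memmove(dst, table[i].ptr, table[i].len); /* slide block down   */
            table[i].ptr = dst;                       /* fix master pointer */
            dst += table[i].len;
        }
    }

    int main(void) {
        /* Three 16-byte blocks; the middle one was freed, leaving a hole. */
        table[0] = (struct master){ heap +  0, 16, 1 };
        table[1] = (struct master){ heap + 16, 16, 0 };
        table[2] = (struct master){ heap + 32, 16, 1 };
        strcpy(table[2].ptr, "still here");

        char *raw = table[2].ptr; /* raw pointer, as a naive app might keep */
        compact();
        printf("via master pointer: %s\n", table[2].ptr); /* still valid */
        printf("raw pointer is now stale: %p vs %p\n",
               (void *)raw, (void *)table[2].ptr);
        return 0;
    }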
Mac OS didn’t. It had a system heap and a heap for the running application. Once it supported running multiple applications simultaneously, each of them had its own heap (https://www.folklore.org/Switcher.html: “One fundamental decision was whether or not to load all of the applications into a single heap, which would make optimal use of memory by minimizing fragmentation, or to allocate separate "heap zones" for each application. I decided to opt for separate heap zones to better isolate the applications, but I wasn't sure that was right.”)
That’s why MultiFinder had to know how much RAM to give to each application. https://en.wikipedia.org/wiki/MultiFinder#MultiFinder: “MultiFinder also provides a way for applications to supply their memory requirements ahead of time, so that MultiFinder can allocate a chunk of RAM to each according to need” (Wikipedia doesn’t mention it, but MultiFinder also allowed users to increase those settings)
It's in fact one of the biggest issues with AmigaOS that made it incredibly hard to add proper MMU support. The OS is heavily message-passing based, and it's not at all always clear "from the outside" who the owner of a given structure passed via a message port (which is little more than a linked list) is, and so the OS doesn't even know which task (process/thread - the distinction was pretty meaningless due to the lack of memory protection) owns a given piece of memory.
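For those who haven't seen it, Exec's message passing is roughly this shape (heavily simplified from the real structs in exec/ports.h, with my own names): a port is little more than a linked list, and nothing in the system records who owns a message's memory, or the memory its payload points at, once it has been sent.

    #include <stdio.h>

    struct Message {
        struct Message *next;    /* linkage: the port is just a list    */
        void           *payload; /* often points into sender-owned data */
    };

    struct MsgPort {
        struct Message *head, *tail;
    };

    /* Append a message to a port. Ownership of msg -- and of whatever
       payload points at -- has now implicitly "moved" to the receiver.
       Or has it? Each message protocol defines that by convention;
       the OS itself records nothing. */
    void put_msg(struct MsgPort *port, struct Message *msg) {
        msg->next = NULL;
        if (port->tail) port->tail->next = msg;
        else            port->head = msg;
        port->tail = msg;
    }

    int main(void) {
        struct MsgPort port = { NULL, NULL };
        int shared = 42; /* lives in the "sender's" memory */
        struct Message m = { NULL, &shared };
        put_msg(&port, &m);
        printf("receiver reads %d through a raw pointer\n",
               *(int *)port.head->payload);
        return 0;
    }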
Later versions added some (optional) resource tracking to make it easier to ensure resources are freed, but if an application crashed or was buggy you'd frequently leak memory, and eventually have to reboot. It was not great, but usually less awful than it sounds with sufficiently defensive strategies.
[I have at various points, e.g. when doing some work on AROS way back, argued that it is quite likely possible to largely untangle this: partially because for a lot of cases the ownership changes are clear and rules that fit actual uses can be determined; partially because the set of extant AmigaOS apps is small enough that you could "just" add some new calls that do ownership tracking, declare the old ones legacy, and map ownership changes for the rest one by one and either patch them or, say, add a data file for the OS to use to apply heuristics. Had the remaining userbase been larger, maybe it'd have been worth it.]
That situation doesn't prevent an MMU and virtual memory. It prevents multiple address spaces. A separate address space per process is not a requirement for virtual memory as such; it is a requirement for getting some of the protection benefits of virtual memory. Not all the benefits. With a single address space for all applications, there can still be user/kernel protection: userland not being able to trash kernel data structures. (Of course, with important system functions residing in various daemons, when those processes get trashed, it's as good as the system being trashed.)
It doesn't "prevent" an MMU and virtual memory, you're right, but it does severely limits what you can do with it hence why I wrote "proper" MMU support. There are virtual memory solutions for AmigaOS, though rarely used. There are also limited MMU tools like Enforcer, but it was almost only used by developers. AmigaOS4 has some additional MMU use, and there has been work on trying to add some more protection elsewhere as well, but it is all fairly limited.
Specifically, in terms of the comment I replied to: you categorically cannot automatically free memory when a task (process/thread) ends in AmigaOS without application-specific knowledge, without risking crashes, because some memory "handoffs" are intentional.
> With a single address space for all applications, there can still be user/kernel protection: userland not being able to trash kernel data structures.
Yes, you could if the OS was designed for it, and it was done at a point where most of the application developers were still around to fix the inevitable breakage.
The problem with doing this in AmigaOS without significant API changes or auditing/patching of old code is that there is no clear delineation of ownership for a lot of things.
This includes memory in theory "owned" by the OS, that a lot of applications have historically expected to be able to at least read, and often also write to.
You also e.g. can't just redefine the "system calls" for manipulating lists and message queues to protect everything because those are also documented as ways to manipulate user-level structures - you can define your own message ports and expect them to have a specific memory layout.
More widely, it includes every message sent to or received from the OS, where there's no general rule of who owns which piece of the message sent/received. E.g. a message can - and will often - include pointers to other structures where inclusion in the message may or may not imply an ownership change or "permission" to follow pointers and poke around in internals.
To address this would mean defining lifecycle rules for every extant message type, figuring out which applications break those assumptions, and figuring out how to deal with them. It's not a small problem.
16-bit Windows did, but it required an MMU anyway, at least since Windows 3; that was its big feature: 16-bit protected mode and a VM mode for running MS-DOS.
> Windows 16 bit did, but it required a MMU anyway, at least since Windows 3
Windows 3.0 supported three modes of operation: real mode (8086 minimum), standard mode (286 minimum), and 386 Enhanced mode (386 minimum). Real mode was pretty limited, and a lot of apps could not fit in its rather limited memory, but it was not completely useless. I believe real-mode Windows apps could use EMS, although I’m not sure if many actually did.
In Windows 3.1, real mode was removed, and only standard and 386 Enhanced modes were supported. So, 3.1 was the first version to “require an MMU”, if by that you mean a 286 or higher.