Was Acorn's RISC OS an under-appreciated pearl of OS design? (liam-on-linux.livejournal.com)
162 points by lproven on June 28, 2020 | 114 comments



Really interesting comment from the linked article by one of the Acorn developers, Paul Fellows:

... we had one of the machines that we just could not get this thing to boot reliably. You could boot it, turn it off, reboot it and sometimes it would work and sometimes it wouldn't. It turned out, it would boot fine if you left it long enough, but if you didn't turn it off for very long then it didn't reset properly, and this was because the fan on the board was still spinning and the back EMF on the fan was enough power to keep the ARM running. And that's why you've got an ARM in your phone today: it took so little power that a 3 micron ARM with 25,000 transistors would run for 30 seconds off the energy stored in the fan.


Paul is one of the lesser known players in the Cambridge tech story, and deserves greater recognition.


Oops. I was using PACE-based STBs by 2002 and they had booting problems. I have blamed the OS for 18 years. But that explanation about back EMF makes sense.


And the CPU is the only part being powered by that fan? Never mind the 12V and 5V voltage regulators (or was the 3µm ARM already 3V3?)... Seems to be a sport among the distinguished engineers to take the kids for a ride.


Well the fan could have been downstream of the 5V regulator... but I agree that this story is pushing the limits of plausibility.

Generally fans are a very effective way of dumping energy, not storing it. They're literally used for this purpose in exercise bikes.


I always thought the beauty of RISC OS was in the UX not the underlying OS. The three most important things were:

1. Task bar / dock thing. This was well before both Windows 95 and Mac OS X adopted the feature.

2. There was a dedicated menu button on the mouse. This was the only way to get a menu; there were no menu bars anywhere, freeing screen real estate. The other two mouse buttons were select, which behaved like a normal mouse button, and adjust, which did the sort of things you need to resort to shift-click for on other platforms.

3. There was no file open dialog. You just double clicked a file to open it, or dragged it to the app that you wanted to open it with if you wanted something other than the default. To save, you would be presented with an icon which you could change the name of, and then drag to a finder window. No duplication of the folder system structure in a finder and in every app that handles files.


The drag-to-save feature is incredibly under appreciated today. You could even drag the "file" from a save dialog into another app, rather than into a directory, resulting in a graphical way to do pipes.


How is that better? You have to have already opened the directory where you want to save the file, or open it manually when you want to save, instead of the application doing it automatically for you.


Because trackpads are getting popular. Dragging is comfortable with a mouse, but dragging long distances is not good on a trackpad.


Absolutely. The UX was wonderful - pervasive scalable elements, lovely font hinting, and everything just felt "right"; the problem was that, like Windows 3.x and System 7, it was a co-operative façade laid over a single-tasking kernel.

If AmigaOS had the UX of RiscOS, or RiscOS had the multitasking of AmigaOS, it would have been a beautiful thing.

[on edit] Other people have mentioned the Basic implementation, and it was very impressive - far beyond most versions, and very much a useful programming language with an accessible syntax.


I was going to say that they probably borrowed the dock from NeXTstep, but now I see the initial release of NeXTstep was in 1989, while the initial release of RISC OS ("Arthur") was in 1987 and already had a form of the dock, or at least a dock-like launcher.


Windows 1.0 had an icon bar at the bottom:

https://docs.microsoft.com/pt-br/archive/msdn-magazine/2009/...


[OP/article author here]

No, not really.

Remember that Windows 1.0 was a primitive sort of tiling window system, because Microsoft was afraid of being sued by Apple and did not dare implement overlapping windows.

What you are seeing is a strip of the desktop background, which contains icons representing running programs. The tiling algorithm leaves a strip of "desktop" visible so that you can get to the program icons. This is how you switched between programs.

Windows 2 introduced overlapping windows, but had no desktop icons as such -- again, it was afraid of being too like classic MacOS. This meant that you could rearrange windows, and so see the desktop. That in turn made it clearer that programs minimized to the desktop, as nothing else could be on the desktop.

This can be seen clearly if you look at screenshots of the work-in-progress test versions of "Windows Chicago", e.g. https://sites.google.com/site/chicagowin95/index -- this is what was to be Windows 4.0 but was renamed Windows 95.

For instance, in this one: https://sites.google.com/site/chicagowin95/index/chicago40

... you can see some running programs' icons sitting _on top of_ the prototype taskbar.

There was no separate container for program icons. The taskbar was invented _de novo_ in Windows 95, and subsequent to the invention of the taskbar, icons representing running programs were moved into it.


I agree with you in terms of implementation. But having used Windows 1 for a while and reading about Arthur/RISCOS at the time my impression was that the user experience was similar. If I had been able to actually try RISCOS I might have had a different opinion.

While Windows 1 tiles might have been due to wanting to avoid upsetting Apple (though in terms of APIs and resource management it was mostly a copy), there were those who preferred tiles, like Wirth's Oberon (which I also used at the time, along with GEM and GEOS).


Isn't that just the minimized windows, like in Windows 3.1? Otherwise they must have removed the taskbar between version 1 and 3 but added it back in version 4 (i.e. Windows 95).


That looks more like a task bar to me, and in the open Write document on the right it mentions "Rudamentary[sic] taskbar" which is what it's likely referring to.


Yeah, the key thing with the icon bar in RISC OS was that it showed running apps on the right, devices on the left. There wasn't really a 'launcher' component to it, and the blurred distinction between launcher and running-app selector bothered me a bit on Mac OS X (until I got used to it, I suppose).


How did you typically launch apps? Through the file manager kind of like classic Mac OS?


Yes. Most apps were 'installed' as a self-contained directory beginning with a '!', wherever you wanted. Double-clicking on such a directory ran the '!Run' script within it, which started the app. You could open up the app to see its component files by holding shift while double-clicking.
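
To give a flavour (a rough sketch from memory, with the app name '!MyApp' invented for illustration): a '!Run' file was typically just a short Obey command script along these lines. The Set/IconSprites/WimpSlot/Run commands and the <Obey$Dir> variable were standard, though the details varied from app to app.

  | !Run file for a hypothetical application !MyApp
  Set MyApp$Dir <Obey$Dir>
  IconSprites <MyApp$Dir>.!Sprites
  WimpSlot -min 128K -max 128K
  Run <MyApp$Dir>.!RunImage %*0

A companion '!Boot' file, run when the Filer first displayed the directory, usually did just the Set and IconSprites steps so the app's icon appeared without launching it.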

It was really quite a lovely system (very few installer programs, install/uninstall very clean with minimal dependencies), but it managed as well as it did mainly because nearly everything was statically linked and did its own thing for config. So centralised management of preferences was difficult, and what shared code there was (like the C library) broke the model a bit by needing to be kept in a separate '!System' directory.

As systems got more complex and ended up on the Internet, there was a greater concern for shared state between applications and security. Consequently the amount of stuff that started from disc on startup, the number of shared libraries and config files etc. went up, and some of the advantages of booting from ROM and the minimal 'copy to install' model went away a bit.


Yes it was called the Filer. Pretty much like the Finder on the Mac.


The bar at the bottom in Windows is called the taskbar even today.

The chief difference between the Windows task bar and a MacOS dock is that some subset of icons on the dock are present even if the application hasn't been started.


The Windows taskbar can pin things to always be on it as well, even if they're not running.


That's the case on Windows as well, since Windows 98 I think.


[OP/article author here]

You could not pin icons to the taskbar in Windows 98, no.

Windows 98 integrated IE4, which contained a feature called "Active Desktop".

This was not a Win98 innovation -- if you installed IE4 onto Windows 95 or NT 4, you got the same features.

Active Desktop included dockable "toolbars" that could be pinned to the desktop or to any edge of the screen. One of these became part of the taskbar, and was called the "quick launch" bar.

It's a separate area for program icons. If you launch an app, you end up with _two_ icons: one in the quick launch bar, _plus_ an icon in the taskbar as well.

The Quick Launch toolbar is still present in Windows 10 today, in 2020, and can be turned back on very easily: https://mywindowshub.com/add-quick-launch-toolbar-windows-10...

Pinning launcher icons directly to the taskbar was a new feature in Windows 7 and was not possible before that: https://social.technet.microsoft.com/Forums/SharePoint/en-US...


Yes. Also extremely good native support for outline fonts and vector graphics.


Real-time spellcheck, years before Office had it, always impressed me.

Plus I have a soft spot for co-operative multitasking.


The built-in !DRAW app was surprisingly powerful. It was clunky, but with practice it was possible to do some sophisticated stuff with it.


Yes, and it was implemented in terms of an underlying vector drawing module which could be used by 3rd party apps and even from BBC BASIC fairly easily.


> There was no file open dialog. You just double clicked a file to open it, or dragged it to the app that you wanted to open it with if you wanted something other than the default. To save, you would be presented with an icon which you could change the name of, and then drag to a finder window. No duplication of the folder system structure in a finder and in every app that handles files.

Seems like a pain when you're working in a program and want to open a file.


not really, you just find the file (using the full features of your finder) and open it the same way you always would. contrast that with windows, where you choose “open”, then find the file and open it.


Another thing I just remembered was that if you dragged the bottom corner handle off the side or bottom of the screen, the other side of the window would grow. You could only grab the bottom right corner to change size, but this didn't matter because of the expanding feature.


Not sure how useful that save thing is in practice. Great for dragging into other apps, but to save to disk you still had to navigate through the filesystem anyway, just outside of the application.


Which sounds terrible, but only because we've been conditioned by apps that don't work well together. On Mac and Windows, workflows where you have multiple apps active at the same time are rare, and elements of the UI (e.g. sticking the menu for a single active application at the top of the screen) actively discourage you from using multiple apps together. In RISCOS the filer was pleasant to use, and the UI pushed you towards workflows that had different apps including the filer working together. It wasn't perfect, but it also wasn't as bad as you might think if you're used to the mainstream operating systems around today.

I was surprised when I switched from RISCOS to Windows just how much Windows wants you to work in a 'single task' mode.


This is possibly the single biggest failure in all the modern OSs. Even though they multitask internally, single-task mode is baked into the UI.

Not only is there no easy way to compose application chains, the UX discourages you from thinking that applications chains might be possible.

Automation is incredibly difficult - it's impossible for most people, and not easy even if you have developer-level skills. But worse, your data is locked into these monolithic applications. Project interchange isn't usually a thing at all, and data interchange formats tend to be lossy and/or unreliably implemented.

Pipes in a command line shell are a poor substitute for the wonders that would be possible with a common and easy to use composable and automatable UI/API framework and data interchange system.


Data formats may be the biggest blocker here. Think of all the ways copy paste goes sideways today. I've grown accustomed to always using the paste-plain-text way.


Data formats were one of the interesting benefits of RISCOS. Since there was a bitmap editor and a vector editor built into the ROM, any program where it made any kind of sense could work with the vector format supported by the vector editor and the bitmap format supported by the system bitmap editor. As you point out, those standard interchange formats were definitely part of what enabled and encouraged cross-application workflows.


In addition, Mac and Windows always lift the most recently clicked window to the top, whereas RISC OS would not, making it much easier to set up a screen layout in which several apps cooperate as equals. I still find this to be a pain on my Mac today.


> Which sounds terrible, but only because we've been conditioned by apps that don't work well together.

Nothing to do with "conditioning". I had no experience of any other desktop environment at the time.


Aside from the task bar, whose specifics I don't know, those examples aren't that great:

- A dedicated menu button on the mouse is hard to discover. Mac's brilliance is that you can start exploring the UX right away without any prior knowledge of which button does what. It also lets you examine your options without needing to click anything. I don't think the saved screen real estate is worth it. Mac & Windows did away with always-visible menu bars just fine. Remember how Windows 8 tried to hide the start menu and how it turned into the worst UX experience of the last decade. Visibility is important for discoverability.

- "Drag & drop" has too much user friction. It's hard to discover, it's hard to apply properly (many beginner users think that leaving button mid-drag is okay for a short little while). It's impractical because you usually use your apps maximized (for the maximum real estate, remember?). It requires you to have the file visible and easily accessible (consider cluttered desktop icons). I stopped using drag&drop a long time ago, and resort to Copy/Paste function of Windows Explorer for copying files, which works brilliant.

"Drag & drop to open" has other issues too. For one, there is no orthogonality between open & close. Do I drag away the icon to close the file or is there a standalone "close" option without an open?

They might be novel ideas for their time, but I don't think they provide good UX.


> A dedicated menu button on the mouse is hard to discover.

For what it's worth, it really wasn't. The meaning of at least the left and menu buttons was literally the first thing you would find out when being shown how to use the machine, and was on the intro sheet in the (extensive and excellent) printed guides.

The difficult-to-discover part was the right button ('adjust'), defined as doing whatever the left button did but a bit differently. So the left button might select a file, while the right button might add a file to the existing selection. Or a left click on the scroll bar 'down' arrow scrolls down, while a right click scrolls up (which was actually quite useful in pre-scroll wheel days as it saved a lot of precision mouse movements when searching documents etc.)

This is really analogous to the discoverability problems that GUIs have with modifier keys - the right button is basically a sort of ctrl-click without needing to put your hands on the keyboard.

But the menu button was just how you used menus. It was available and worked the same in essentially all applications and was really pleasant to use.


Ctrl-click is equally hard to explore. A beginner user's experience is entirely different.


> Mac & Windows did away with always-visible menu bars just fine.

is this sarcasm? it appears disjointed from your comment and disagrees with your thesis.

> Remember how Windows 8 tried to hide the start menu and how it turned into the worst UX experience of the last decade.

part of that is surely that triggering start in windows 8/8.1 closed (hid/whatever) what you were working on before, forcing an unnecessary context switch. i can't print or save documents in office anymore for the same reason - if you choose “File”, it closes your current file.

note that mac os, windows, and to a much greater extent iOS and Android have abandoned the visibility principle. having a dedicated button vs a “force click” feature? i'd prefer the button.

your objections to dnd depend on it not being well implemented. but when it's core, it is well implemented. i use dnd, even sometimes to copy/paste on windows.

° "Drag & drop to open" has other issues too. For one, there is no orthogonality between open & close. Do I drag away the icon to close the file or is there a standalone "close" option without an open?

i don't understand your objection here. is there orthogonality between open/close on other systems? i have never thought so. i close by clicking x (windows, gnome) or just going back home (android). but i open by double clicking the file (linux) or doing something ad hoc (windows, android)


> is this sarcasm? it appears disjointed from your comment and disagrees with your thesis.

Not sure how it contradicts my points. Can you clarify?

> is there orthogonality between open/close on other systems? i have never thought so.

Yes, there is always a "Close" menu option right in the same menu with the "Open". The close "X" is for closing windows, not opening them, and it also has orthogonality. You open the window by clicking on an icon, and you close it by clicking on another icon.

Going back isn't "closing" on Android. It's just the opposite of navigating forward which also makes sense in a navigational context.


> [Mac] also lets you examine your options without needing to click anything.

But it bundles selection-specific and system-general commands up in a remote menu bar that requires a lot of to and fro to operate. The beauty of right-click is that it gives a contextual menu specific to the object right there. I'd argue that it's fantastic UX.

> "Drag & drop" has too much user friction

Yet it's universally used for moving windows around the UI; click-drag on the title bar.


The OS design was barely there, which made it easy to understand and hack. But the GUI had a lot to recommend it.

e.g. both Windows and Mac OS came to the same "icon bar" design that RISC OS had in 1990. It had "magic" application directories for self-contained installation from floppies & downloads. It had a pretty nice vector drawing engine built in, sprite plotters etc. The OS was documented really comprehensively in an £80 set of programmer's reference manuals. They wrote a really clear style guide, and (eventually) a well-understood higher-level GUI toolkit which produced great results with little code.

The most heinous part of the OS was the lack of shared libraries in userspace. If you wanted to share code between apps, you wrote it as a kernel extension module and made up new system calls! So this very common facility became a risky, advanced topic for programmers. (Also, because BASIC + assembler were free and the C compiler was not, loads of popular libraries were hand-written in assembler.)
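
Just to illustrate the mechanism (with an entirely made-up module and SWI name, since the point is the calling convention rather than any real library): once such a module was loaded, its code was reached through named SWIs, exactly like the OS's own calls. From BBC BASIC that looked something like:

  REM Hypothetical example of calling a SWI provided by a loaded module.
  REM "MyLib_Frobnicate" is an invented name; a real module registered
  REM its own SWI chunk and names with the OS.
  DIM buffer% 256
  SYS "MyLib_Frobnicate", 42, buffer% TO result%

Which is part of why writing a module felt like extending the OS itself rather than shipping a library.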


Though to be fair, it was a similar situation with DOS TSRs (Terminate & Stay Resident) filling in interrupt vectors or hooking existing DOS ones for extra functionality; DOS extenders for memory most notably, but things like audio drivers, CD support etc.


By 1989 RISC OS was awesome, but by 1995 it lacked features that other famous OSs had (I will not name the other OS to avoid a flame war).

RISC OS used cooperative multitasking and it lacked virtual memory protection. By 1995 Linux or BSD were better platforms. MS released Windows 95 and preemptive multitasking arrived in millions of homes.

I still have a 2002 PACE-based RISC OS STB in the basement. It boots with bootp+NFS. It was part of a kiosk that was replaced by a BSD box. No booting or runtime issues anymore. We even designed a serial watchdog to ensure a reboot if the STB hung, because RISC OS hung even while booting. No issues with the BSD.

In my humble opinion, by the mid-1990s preemptive multitasking and memory protection were mandatory in a general-purpose OS, and RISC OS lacked them.


This.

I think the author is making the mistake of comparing it with what I would call "new world" OSs.

Very broadly I would categorize most OSs of the 90s into old-world and new-world, common traits of the old-world being:

  - Minimal or no memory protection
  - Minimal or no real user separation
  - Usually desktop-focussed
  - Cooperative multitasking at best
  - Heavily reliant on assembly and hardware specifics
with vendors one by one realizing they had to make a big compatibility-breaking leap to introduce the features now expected for a desktop OS. (cue tens of replies pointing out exceptions, sure)

As an old-world OS, in 1991, I do still think that RISC OS did a pretty good job from a user's perspective. My memories are of it being fast, attractive (anti-aliased fonts?!) and intuitive (often ingenious). With an extremely powerful inbuilt BASIC implementation, to boot.

The more I've looked in to RISC OS more recently, the more I've realized how many compromises had to be made to achieve this, and I'm sure they were running out of headroom to evolve it towards a new-world OS. But really this was true for most old-world OSs I can think of. (again, cue exceptions)


Ok, so I'll take you up on the challenge with MiNT: GEM and TOS weren't really developed much by Atari, but they did buy in MiNT, which had preemptive multitasking and memory protection. With the AES being retargetable (graphics cards), GDOS vector fonts and PostScript printing, TCP and LAN networking stacks, a shell, a global shortcut system, POSIX compatibility and multi-user capabilities, it managed to evolve TOS into something quite modern!

https://en.m.wikipedia.org/wiki/MiNT


I think part of it is that RiscOS had some new-world features, and like the other 'straddling' OSs of the early 90s (OS/2, DESQview/GEM) maybe felt they didn't need to make a big leap to adapt to changes in user expectation, and failed because they had a clunkiness as a result.


My suspicion is that, by the time it became obvious that such things would be essential (maybe 1994 onwards), the writing was already on the wall. RISC OS was a relatively niche system even in the late 1980s, and it was clear even to those of us who were enthusiasts as early as 1993 or so that it was going to be left behind. Acorn had weak sales as PCs became standard in schools (its previous strength), the OS was tightly tied to the existing hardware, and there was no money to do whatever radical overhauls would be necessary to make the 'big leap' for a small market.

I used a RISC OS machine as my primary computer all the way through to 2004, but it was seven or so years past its best by then, and probably twelve since RISC OS was even a potential world-beater.


Keep in mind, most people were using operating systems that were on par with RISC OS in 1995. Most homes and offices still used DOS or Windows 3.1, Windows 95 was just introduced, Mac OS would not see preemptive multitasking or memory protection for another five years, the uptake of OS/2 and Windows NT was rather low, Commodore was bankrupt, Atari was in serious trouble, BeOS was still in development, and NeXTSTEP was targeted at high end workstations.


Someone with a more detailed knowledge of RISC OS may correct me, but to the best of my knowledge there _was_ virtual memory protection between applications (the article carefully says "worthwhile memory protection")

Virtual memory was undermined by large portions of the OS being implemented as, effectively, kernel modules. That is because this was the provided mechanism for 'shared' libraries; there's no dynamic linker as such, just syscalls. So perhaps it can be attributed not so much to the lack of virtual memory as to the lack of a mechanism for shared libraries (and IPC).

It is certainly true that, as features from multi-user systems began to make an appearance on PC desktops, RISC OS started to get left behind in this area.

But also recall that wasn't the focus (for a self-selected user base, of course.) An OS geared to productivity, with features like consistent widgets (and style guide), drag and drop between apps, consistent filesystem interaction throughout and standardised anti-aliased font and vector graphic rendering as part of the OS. It's 2020 and it doesn't feel like any of the OSes I use today have anything like the same cohesion (let alone web apps)

They were, however, the early victories of RISC OS. It does look like in the latter years the 'easy' stuff was worked on - things like new window decor and textured backgrounds - perhaps diverting the attention (of either the RISC OS developers or the users) from the core.

I'm rather interested in what Acorn's take was on the porting of NetBSD to Acorn hardware (which I first saw at a show in about 1995). With the benefit of hindsight, perhaps RISC OS could have been layered atop this: a BSD kernel with a RISC OS syscall interface, windowing system and BASIC interpreter to provide compatibility for the wealth of existing software, and a transition to a 'modern' OS. Was this considered at the time? Was anyone at Acorn paying attention to BSD? Was there enough people-power to make it feasible? Was Acorn already in trouble by then? The world could have been a very different place.


> Was anyone at Acorn paying attention to BSD?

Acorn sold BSD-derived RISC iX workstations (the hardware was just a rebadged Archimedes) from 1988.

https://en.wikipedia.org/wiki/RISC_iX


Had one of those at my VIth Form - think it was an ex-display model Acorn had demo'd at a trade show (suspect it was an A680; it definitely had SCSI). Can't overstate the lure of that mysterious "real" OS it would boot into. My first exposure to 'nix.


My first exposure to Unix was RiscBSD, which was NetBSD/arm32 running on a RISC PC. Installed from a pile of floppies downloaded at school. Eventually I got an old Elonex 386 that I could install Slackware on, and with a bit of coax I had a 10base2 home network and was being amazed at the concept of running X clients remotely.


They also made a full UNIX workstation called the Unicorn.


How were the R-series workstations not ‘full UNIX workstations’? Can you post a link to a description of the Unicorn project?


The term 'UNIX workstation' at the time referred to a specific class of machines exemplified by the Apollo and Sun machines, though in truth there was all kinds of software running on them. The Torch Unicorn was launched to target that specific market; the idea was to use the 6502-based BBC Micro as the graphical front-end and then to have a 68K in a separate enclosure running either UNIX System III or a bunch of other OSs. I've seen a demo version of that machine long ago. I don't have any pictures - I didn't think any of this was worth recording at the time - though it was definitely impressive.

Please note I used the word 'also', I did not mean to imply the R-series weren't full UNIX workstations, but they came quite a few years after the Unicorn.

A later version called the Triple-X ran System V and came with a GUI on the native processor.

Edit: found a link with a better description of the Unicorn:

http://chrisacorns.computinghistory.org.uk/docs/Torch/Torch_...

One more with some pics:

http://chrisacorns.computinghistory.org.uk/docs/Mags/AU/AU_S...


Multitasking in RISC OS was based on swapping the page tables for the application part of the address space starting at 0x8000. Applications were not relocatable, so memory protection was more like a side effect of the address space layout than a deliberate security feature.

There was some memory protection to make pages read-only for user code, notably page zero (to trap null pointer dereferences, and to protect the interrupt vectors) but my memory is that the system heap (containing the OS modules) was either directly writable from userspace, or it was trivial to make a system call to get write access.


The heritage of RISC OS as an evolution of the BBC Micro MOS is really clear, at the lower levels. Not just the excellent BASIC, but the service calls, vector indirection, VDU system, command line interface and so on. Even the relocatable modules, one of the main unifying concepts in RISC OS, were sort of based on the sideways ROMs from the 8-bit machine. How different it would all have been had the ARX project been successful, although it was an incredibly 'optimistic' design from all I've ever heard about it.

RISC OS was a terrifically fun environment in which to learn programming. Of course it lacked lots of the fundamentals of 'proper' operating systems (memory protection, preemptive task switching, multiple users etc etc) but the compromises they made were cleverly chosen and lots of parts of the design were very elegant. The user interface was great and there were lots of other very sophisticated features that were ahead of their time, discussed elsewhere in these comments (outline fonts, vector graphics, fast native video compression/decompression, the list goes on). The operating system was very easy to extend, so it was easy and fun to write little utilities, and this led to a vibrant hobbyist/'freeware' scene, as I suppose it was called in those days.

The fact that the OS was all implemented in assembler was reflected in the API, and (coupled with the inherent pleasure of working with the ARM instruction set of those days) led to the majority of third party apps being written in assembler too, often with bits of the very efficient built-in BASIC to bind it all together. This made the whole user experience very fast and slick.

The documentation was also superb, though expensive for a youngster! I can't bring myself to get rid of the Programmer's Reference Manuals as they're so good and were such a holy text back in the day.


One nice thing about RiscOS on Archimedes was that you could write fully-fledged GUI applications using just the BASIC version built into ROM. This made it very accessible vs the need to buy development tools on Windows.
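
As a rough sketch from memory (so treat the details as approximate rather than gospel, and the task name as invented), the skeleton of a minimal desktop task in BBC BASIC was little more than Wimp_Initialise, a Wimp_Poll loop and Wimp_CloseDown:

  REM Minimal RISC OS desktop task in BBC BASIC (sketch from memory)
  DIM block% 256 : REM event block for Wimp_Poll
  SYS "Wimp_Initialise", 200, &4B534154, "MyApp" TO version%, task%
  quit% = FALSE
  WHILE NOT quit%
    SYS "Wimp_Poll", 0, block% TO reason%
    REM reason 17/18 = user message received; action code 0 = Message_Quit
    IF reason% = 17 OR reason% = 18 THEN
      IF block%!16 = 0 THEN quit% = TRUE
    ENDIF
  ENDWHILE
  SYS "Wimp_CloseDown", task%, &4B534154

Everything else (windows, icons, menus) was driven by filling in parameter blocks and making further SYS "Wimp_..." calls, which is why so many small desktop utilities never needed to leave BASIC.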



I almost used the term WIMP but I thought it was too obscure a reference at this point :)

I still have "BASIC WIMP programming on the Acorn" by Alan Senior on my bookshelf for no good reason. I can't even find any cover photos online for it.


In the late 80s I worked at a little BBC department writing multimedia Mac & Windows software (it had moved on from the BBC Micro software it used to write). This was a team descended from the team that did the Domesday Project years before.

One day we had a visitation from Acorn trying to get us to write Archimedes software. They asked us what they could do to make their platform more appealing to us as developers and to users.

We said we liked their CPU performance, but didn't like the Archimedes graphics hardware which was too low res and low depth, and could not be upgraded. For example, the Risc OS system did anti-aliased fonts (which they were very proud of), but at such a horribly low resolution that the result managed to be blurry and blocky at the same time.

They didn't really get our complaints, didn't think the bad graphics were important. They had no intention of ever making a machine with enough video memory to do a better job.

We said their system UI was odd, for example unlike Mac and Windows there was no copy and paste. We gave other examples of the UI being weird just for the sake of it, but I can't remember the details after all these years.

They said all their UI was fine, and copy and paste was stupid.

That department never did publish anything for the Archimedes.


> We said we liked their CPU performance, but didn't like the Archimedes graphics hardware which was too low res and low depth, and could not be upgraded.

I'm surprised about that! Looking at https://en.wikipedia.org/wiki/Acorn_Archimedes it says the machine could do 640 × 512 with 2, 4, 16 or 256 possible colours, or 800 × 600 with 2, 4 or 16 possible colours. That's pretty spectacular for the late eighties; far superior to the Amiga (previously considered the king of graphics) and even superior to VGA, let alone the Mac.

What am I missing?


Domesday was awesome.


I can't take this author seriously if he says FreeBSD is an "ugly old UNIX OS". In my experience with the *BSDs none of them are ugly under the surface, they are pretty cohesive, well put together, no nonsense. I could see how one would prefer Mac for UI, but for those core pieces I would prefer them over the Darwin/XNU components in most cases.


I think it's fair. Unix has a lot of legacy baggage. It's largely terminal-based with three character folder names and two character commands, a monolithic kernel, the C programming language. In contrast, Acorn's ARX was fully graphical, used a microkernel and was written in Modula 2+/3. It sounds like the hardware wasn't quite ready for it though.


But he was contrasting with Darwin/XNU in the statement. You may say it's visually pleasing but in many other senses it's uglier than any given *BSD.


Heh.

OK, I concede, I gave in to a tiny trollette there.

I've been using *nix professionally since 1989. I work for a Linux vendor. I know my way around.

But for the last 25 years, I've mainly worked with Linux. I quite like Linux, although it's getting a bit long in the tooth and bloated these days.

But what strikes me every time that I try any of the BSDs is how little they have learned from Linux and what has made Linux so successful.

Linux natively uses PC partitioning. BSD doesn't; you have to install into special BSD "disk slices" which must be in a primary partition. (I confess I have not tried on a UEFI box yet. I am sure GPT changes this, but I don't know how.)

Linux natively uses the PC keyboard and framebuffer. It adapts to my screen size. It uses colours. BSD doesn't. I still see '80s nonsense like ^H sometimes.

Linux usually detects and uses all my hardware. Even on quite recent kit, I've had BSD not recognise my network card, or my wifi adaptor, or whatever.

Linux feels like a native OS on PCs and handles PC hardware gracefully. BSD, to me, doesn't - it feels like a 1970s minicomputer OS, reluctantly running on an alien platform.

I started out on SCO Xenix 286 on an IBM PC-AT, when that was still fairly modern kit. FreeBSD doesn't feel much more advanced than that, and NetBSD and OpenBSD are worse.

That is what I am getting at.

There have been multiple efforts to incorporate some of the nice bits of OS X over the years, notably NeXTBSD. They failed completely. It seems to me that the BSD folks don't want nasty new tech. If it worked 50 years ago, it's still good enough.


I can't say where, but there is still a RISC OS based real-time radar processing and target tracking system running, providing live data feeds for safety critical systems.

It was a joy to program, but it got killed (in my industry) because customers had been screwed over by vendors locking them into specific hardware, then raising prices once the customer had no way out. So the contracts always specified COTS hardware ... "Commercial Off-the-Shelf". This was always, always interpreted as meaning PC hardware.

We transitioned because we had to, but we lamented the loss of the elegant, simple, clean designs we could use on RISC OS.

I still do.


RiscOS lived on a bit in ROX-Filer and OroboROX under Unix. Truly unappreciated by the people.

ROX-Filer had a panel, a pinboard with shortcuts, and a Python/GTK2 API (ROX-Lib) with lots of drag-and-drop applets.

Besides XFCE, it was a nice alternative to GNOME. It's even lighter than XFCE, and when used with Fluxbox it would probably be faster and snappier than LXDE.


No mention of the StrongARM processor card for the Acorn Risc PC. It probably came too late but was a massive speed upgrade over the supplied ARM6 card

https://en.wikipedia.org/wiki/StrongARM


And honestly it was probably DEC's implementation that set ARM on its trajectory for world domination.


Agreed, plus Thumb, which got ARM into Nokia mobile phones.

Dave Jaggar has some interesting comments on the history around DEC, StrongARM and the Austin ARM team.

https://www.youtube.com/watch?v=_6sh097Dk5k


I saw that video. He only came in long after the Archimedes era and he doesn't seem to know anything about it. No Archimedes, no ARM. You have to know your history; ask George Santayana.

I would mention that that video was _wildly_ contentious over in the RISC OS world. :-D


I can see why the video would have been contentious - the general impression given was that the IP that ARM (the company) inherited wasn't that great, which doesn't seem entirely fair.

Also RISC OS was a desktop OS and Thumb etc was all about getting into mobile devices and so in effect abandoning that legacy.

In fairness though without Thumb ARM would be unlikely to have reached its market position today so credit due I think to all those who contributed to the evolution of the architecture (including the team that developed A64).


And supposedly Thumb mode was designed at the request of Nintendo, for use in the Game Boy Advance. (I think I read this in one of the Acorn oral histories, maddeningly I can't find a source now.)


Hm that would be rather surprising. GBA was released in 2001; Thumb was introduced in 1994 with ARM7.


Surprising but true! Dave Jaggar makes it clear in this video that it was to keep code within smaller Nintendo ROM size constraints. Seven year gap between ARM / Nintendo discussions and GBA appearance!

https://youtu.be/_6sh097Dk5k?t=1657


He mentions only Nintendo, I must have been mistaken about the GBA. Given the year, this must have been for the Virtual Boy. Not a surprise that they don't tell the story much. In the end the Virtual Boy shipped with a 32-bit NEC CPU.


Possibly, but the Virtual Boy was announced in Nov 1994 and demoed at the start of 1995 - so a very short period for development if they were still debating the processor during 1994. Maybe Nintendo were concerned about code size given their experience with VB and that led to the challenge to the ARM team?

Plus seems like the GBA was paused for a while in the 1990s too.


I remember seeing the specs on the Itsy and wondering how it had more MHz than a Pentium Pro. I was developing for the Palm Pilot at the time. (16MHz Motorola 68328)


No, I didn't mention it at all, because TBH I didn't think it was relevant -- I was mainly discussing the OS and comparing it to other contemporaneous OSes.

Why and how do you think it would have fit?


The first 'real' computer I used was an Acorn A3020 (after a Commodore 64 and an Amstrad 6128+), but I remember using an IBM PC at the time and being quite unimpressed. The Acorn just felt so modern at the time; I have quite a fondness for those machines.

Btw: they were used to generate live on screen graphics for the BBC for quite a few shows https://www.youtube.com/watch?v=exW-LbLRJV0


You can still run it today on a Raspberry Pi (amongst other things):

https://www.riscosopen.org/


Yeah, how's that work? I have an older 3B which I'm not using right now (a Pi 4 4GB replaced it) and I might try it. Maybe I should blog it and put it up on HN.


Well, even the Pi 1 is vastly faster than the fastest Acorn workstation.

The two major flies in the ointment are that RISC OS has never had a wifi stack (use ethernet or an external wireless access point) and that some of the old software hasn't survived the shift from the ARM2/3's 26-bit addressing mode to the more modern 32-bit standard. But if you don't have old games you want to be nostalgic about, that's less of a problem!


[OP/article author here]

RISC OS recently changed ownership and it is now fully open-source. The new owners have sponsored an improved version with more bundled software, called RISC OS Direct: https://www.riscosdev.com/direct/


That's interesting. Digging around a bit for RISC OS Developments (the new owner), I found this interview with TechRadar that mentioned they were working with Pinebook on porting to that platform [0] (6th question down).

Now that would be interesting, a ~$200 laptop running RISC OS!

[0] https://www.techradar.com/uk/news/arms-original-operating-sy...


Perfect use case: I have repurposed an RPi 3 for RISCOS with an old DVI monitor connected via an HDMI to DVI converter. Now it sits in my workshop as an "internet terminal". Ideal for looking up data sheets and pinouts for connectors etc.


Works very well on a RP3B. Support for the RP4 is not yet complete.


Back in the 90s my high school only had Acorns for the students, although there was a lab of them with the PC cards we'd play Scorched Earth on.

I had an Amiga at home and was so frustrated at not having access to x86 PCs.

In hindsight I realise how fortunate I was to learn programming and be exposed to these different very forward thinking OSes very early.

RISC OS began my experience of hacking software by being able to click into the application and edit stuff. Trying to bypass the security to do more on them honed my early development and exploration techniques too.


Same here. You could open up application bundles and inspect and modify the contents. You could even turn a folder into an executable and run a script whenever anyone tried to open it.

I had hours of fun on the school computers making weird stuff happen whenever anyone tried to open an application or access their documents folder.

Being able to drop into a BASIC shell at any point by hitting F12 was cool too.


Remember that RISC OS as we know it today didn't appear until mid-1989, 2 years after the first Archimedes. During those first couple of years it was the BASIC-written Arthur 1.20 desktop that you got by typing '*desktop'. You swapped out the ROM chips for RISCOS 2.0 in 1989. FWIW, Arthur 2.0 was meant to be the proper name for RISCOS 2.0, but that was torpedoed by the movie Arthur 2 being released.


Ha, I remember doing that, and the RISCOS 2 to 3 upgrade a few years later. Very exciting, for a nine-year-old.

Kept the old Arthur ROM chips for years afterwards as a sort of prize.


I encountered RiscOS a couple of times in the mid-nineties, first on an Archimedes a teacher had brought in (my middle school was populated by Apple Macintosh LC II machines running System 7), and later the RiscPC was an exotic object of desire I lusted over (before eventually choosing a dual-PowerPC BeBox).

I remember being extremely disappointed by the OS: firstly, storing the system on ROM made obvious sense to me but it also nagged me because this wasn’t an embedded system and the ideology of downloading frequent updates from the internet to patch the deluge of security vulnerabilities and other assorted bugs being discovered on a regular basis was already developing.

Secondly, as the article points out, it actually struck me as a rather rudimentary OS with a fairly ugly (and foreign) GUI. It lacked preemptive multitasking, memory protection, and the notion of multiuser. It felt technically on the level of Windows 3.11 but loaded from ROM, on an (admittedly very fascinating) exotic architecture.


Having an OS which boots from hardware is always a feature to me - no matter how badly you screw up your system, you just do a hard reset (boot holding R) and get a working OS and GUI. You can still patch it by defining a boot app, but the core system is always there, safe from mishap and malware.

And yes, the OS was starting to show its age (both graphically and with its cooperative multitasking) by the time Acorn was broken up, but at its peak it was miles ahead of the competition, and was definitely an under-appreciated pearl.


I remember hand tuning the boot sequence on my A5000. I could make it appear to boot and be ready in a few seconds.

In reality it was still loading all its OS patches from disc.

That along with clever use of 16 colours (8 greyscale) and antialiased fonts really gave the impression of a much more advanced machine for the time.

Really loved that machine. :D


[OP/article author here]

Well, that was sort of my point. If you follow the link to the ROUGOL talk on its history, it explains some of the reasons why.

It's important to consider that the history of RISC OS has an important commonality with the history of AmigaOS: in both cases, the companies were planning something far more sophisticated and clever -- CAOS for Commodore, ARX for Acorn -- but the projects over-ran and under-delivered. So, they fell back on almost-skunkworks projects that used existing tech to get something good enough out the door when the hardware was ready to ship.

In Commodore's case, they used the existing TRIPOS OS, written in BCPL: https://en.wikipedia.org/wiki/TRIPOS ... and the existing Rexx language: https://en.wikipedia.org/wiki/Rexx

The result was AmigaDOS (as it was originally called, before it was renamed to AmigaOS).

In Acorn's case, there had already been a 16-bit port of the BBC Micro MOS and BASIC, for the ill-fated Acorn Communicator, based on the same 65C816 CPU as the Apple ][GS: http://chrisacorns.computinghistory.org.uk/Computers/Communi...

So Acorn ported its existing OS and language to ARM, and wrote a fairly basic desktop GUI in BASIC. The result was Arthur: https://everything2.com/title/Arthur+OS

Once it had shipped, it was finished and renamed RISC OS. Allegedly the name change is because of the Dudley Moore film which had just taken the name "Arthur 2": https://en.wikipedia.org/wiki/Arthur_2:_On_the_Rocks

So, yes, it was a hasty, last-minute exercise, based on an outdated core, an early-1980s, single-user, single-tasking command-line OS for a low-end machine -- just like Windows 3 and DOS.

On the other hand, it worked and worked very well -- arguably far better than Windows 3, I'd say.


You got me at the "Only Amiga" video. That's like "condensed 80's" in a can!


Heh. :-)

I aspired to an Amiga as a student in 1986 or so, but by the time I could afford such a thing, I had the option of something much faster... ;-)


since I hijacked your attention I will mention that the Atari 1020ST was awesome because it had built-in midi ports. you could sequence synths with this right out of the box.


1040st :-)


After reading this, I feel like I'm missing something. I don't see much here to say why it should be considered under-appreciated.


Nice to see a livejournal link/blog in 2020. ;-)


Ha!

My main (non-tech) blog is on there too and it's 18 years old now. I have just been too lazy to move so far.


This applies.

>Any headline that ends in a question mark can be answered by the word no.

https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...


[OP/article author here]

Yes, that was entirely intentional. :-) I used to be the staff technical writer at PC Pro magazine. Ian Betteridge was a colleague on the staff of our sister magazine, MacUser, for which I also wrote sometimes. We're still in touch today.


That's awesome. The way it immediately answers its titular question in a resounding negative certainly got a chuckle out of me. Good stuff.


Damn, what the hell is Livejournal doing now? This page kept on reloading, reloading, reloading, reloading.


Sorry about that. I'm on a free account. A good ad-blocker is important. :-(


I think it might have been interacting with my ad-blocking, tbh. I dunno, I haven't touched LJ since they changed the TOS to say all gayness is forbidden.



