liam_on_linux: (Default)
[personal profile] liam_on_linux
Yes, the Amiga offered a GUI with pre-emptive multitasking, as early as 1985 or so. And it was affordable: you didn't even need a hard disk.

The thing is, that's only part of the story.

There's a generation of techies who are about 40 now who don't remember this stuff well, and some of the older ones have forgotten with time but don't realise it. Recently I had a greybeard angrily telling me that floppy drives were IDE. Senile idiot.

Anyway.

Preemptive multitasking is only part of the story. Lots of systems had it. Windows 2.0 could do preemptive multitasking -- but only of DOS apps, and only in the base 640kB of RAM, so it was pretty useless.

It sounds good but it's not, because the other key ingredient is memory protection. You need both, together, to have a compelling deal. The Amiga and Windows 2.x/3.x only had the preemption part; they had no hardware memory management or protection to go with it. (Windows 3.x, when running on a 386 with more than 2MB of RAM, could do some, for DOS apps, but not much.)

Having multiple pre-emptive tasks is relatively easy if they all share the same memory space, but it's horribly, horribly unstable.

Also see: microkernels. In size terms, AmigaOS was a microkernel, but a microkernel without memory protection is not such a big deal. The hard part of a microkernel is the interprocess communication, and if tasks can do that just by reading and writing each other's RAM, it's trivially easy -- but also trivially insecure and trivially unstable.
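The distinction can be sketched in miniature. This is a toy illustration in Python, not AmigaOS code, and all the names are mine: copy-based message passing hands the receiver its own copy, so the sender can't touch it afterwards, while pointer-passing in one flat address space is faster but leaves the "message" exposed.

```python
# Toy contrast between copy-based IPC and flat-address-space "IPC".
# Illustrative only -- these are not real AmigaOS or microkernel APIs.

from queue import Queue

# Style 1: copy-based message passing. The receiver gets its own copy,
# so the sender cannot corrupt it after sending.
inbox = Queue()
msg = {"op": "open", "path": "disk:file"}
inbox.put(dict(msg))          # enqueue a *copy* of the message
msg["path"] = "corrupted"     # sender scribbles on its own copy afterwards
received = inbox.get()
print(received["path"])       # still "disk:file" -- isolation preserved

# Style 2: shared-memory "message passing" as on a flat-address-space
# system: fast (no copy), but the sender can still reach into the message.
shared = {"op": "open", "path": "disk:file"}
pointer = shared              # the "message" is just a pointer
shared["path"] = "corrupted"  # sender scribbles after "sending"
print(pointer["path"])        # now "corrupted" -- no protection at all
```

The second style is exactly why pointer-passing IPC was so fast on the Amiga, and exactly why one misbehaving task could corrupt another.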

RISC OS had pre-emptive multitasking too... but only of text-only command-line windows, and there were few CLI RISC OS apps, so it was mostly useless. At least on 16-bit Windows there were lots of DOS apps, so it was vaguely useful -- if they'd fit into memory. Which only trivial ones would. Windows 3 came along very late in the DOS era, and by then, most DOS apps didn't fit into memory even one at a time. I made good money optimising DOS memory around 1990-1992 because I was very good at it, and without it most DOS apps no longer fitted into 500-550kB. So two of them in 640kB? Forget it.

Preemption is clever. It lets apps that weren't designed to multitask do it.

But it's also slow, which is why RISC OS didn't do it. Cooperative switching is much quicker, which is why OSes like RISC OS and 16-bit Windows chose it for their GUI apps: GUI apps strained the resources of late-1980s/very-early-1990s computers. So you had two choices:

• The Mac and GEM way: don't multitask at all.

• The 16-bit Windows and RISC OS way: multitask cooperatively, and hope nothing goes wrong.
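That second choice can be sketched in a few lines. This is a toy round-robin cooperative scheduler built on Python generators -- the names `scheduler` and `app` are mine, not any real Windows or RISC OS API. Every task runs until it volunteers to yield, which is both the source of the speed and the source of the fragility.

```python
# A toy cooperative scheduler in the spirit of 16-bit Windows and RISC OS:
# each task runs until it voluntarily yields control. Illustrative only.

def scheduler(tasks):
    """Round-robin over generator-based tasks until all finish."""
    trace = []
    while tasks:
        task = tasks.pop(0)
        try:
            trace.append(next(task))   # run the task until it yields
            tasks.append(task)         # re-queue the well-behaved task
        except StopIteration:
            pass                       # task finished; drop it
    return trace

def app(name, steps):
    """A polite app that yields regularly."""
    for i in range(steps):
        yield f"{name}:{i}"

print(scheduler([app("A", 2), app("B", 2)]))
# -> ['A:0', 'B:0', 'A:1', 'B:1']

# The catch: a task that never yields -- say, a busy loop with no `yield`
# in it -- never returns control, and every other "app" hangs with it.
# That is the "hope nothing goes wrong" part.
```

There is no timer interrupt and no context to save beyond the generator's own state, which is why this style was so cheap on 1980s hardware.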

Later, notably, Mac OS 7/8/9 and the Atari Falcon's MultiTOS/MiNT/MagiC etc. added cooperative multitasking to single-tasking GUI OSes. I used Mac OS 8.x and 9.x a lot and I really liked them. They were extraordinarily usable, to an extent Mac OS X has never matched and never will.

But the good thing about owning a Mac in the 1990s was that at least one thing in your life was guaranteed to go down on you every single day.               

(Repurposed from a HN comment.)
 
 

Date: 2024-04-02 12:59 pm (UTC)
tpear: (Default)
From: [personal profile] tpear
Not entirely sure why co-operative multi-tasking has significantly less overhead than pre-emptive. Each of these OSes had to make a different set of compromises based on hardware costs -- and there wasn't really an MMU for the 68k until much later in its life, unless you were prepared to design your own (e.g. as Apple did with the Lisa).

The ARM-based Acorn range did have something (it's a stretch of my memory to recall the detail), but its RISC OS made a different set of compromises for other reasons.

I remember an article in Byte (I think; might've been PCW) comparing the architecture of the Amiga vs the Mac, and how MacOS offloaded an awful lot of the work of a GUI app onto the programmer compared with AmigaOS; certainly the Amiga mapped onto the 'state of the art' OS design I had recently learnt in college at the time, with message passing etc.

Anyway, I think -- similar to your description in another article of OS design being locked into the Unix-like architecture -- MMU and context-switch support are similarly locked in: basically a supervisor/user security boundary (although newer architectures have a couple more levels, for hypervisors and so on) and virtual-memory paging mechanisms tied tightly to security permissions. There's no room to innovate in system software without overhead.

Date: 2024-04-28 10:50 pm (UTC)
tpear: (Default)
From: [personal profile] tpear
TBH I found this comment very hard to follow. I'll try.

Sorry, my bad. I was writing with a number of distractions going on. My underlying point should've been that there was a lot more to the design of these late-1980s/early-1990s OSes than purely the nature of their task switching.

Not entirely sure why co-operative multi-tasking has significantly less overhead than pre-emptive.


I think it's connected with other things in this thread.

Coop tends to mean no memory protection, which tends to mean a single memory space, no use of an MMU, etc.

That in turn typically means:

• much smaller code and a much more compact OS;

• fewer context switches, fewer ring transitions, etc., as all code runs in one ring in one context.

It's smaller, simpler, and quicker... as long as nothing goes wrong. If an app freezes, crashes, or locks up waiting for input, there's a problem: no other app can take over, and you lose the whole OS and all data in all apps. The only way out is a reboot.

Some OSes let you try to kill the offending task, but then you had to save all your work quickly, reboot, and hope the app's memory wasn't corrupted and you weren't saving junk on top of good files.


Sorry, I was not clear. I meant that, from my point of view as someone who worked on debugging task switching on bare metal a long time ago, I cannot see where there's significantly more overhead in pre-emptive/time-slicing vs co-operative switching. The amount of work required by the implementation is fairly similar, and even an MMU isn't -that- bad.

Back when we entered the (pre-PC) 16/32-bit home computer age, we had the likes of the Amiga with pre-emptive multi-tasking but no MMU, the likes of RISC OS with co-operative multitasking and some limited use of the MMU, and I daresay all kinds of other combinations.

The ARM-based Acorn range did have something (it's a stretch of my memory to recall the detail), but its RISC OS made a different set of compromises for other reasons.


Can you explain this?


(From the PoV of someone who's only had glimpses into RISC OS, so may well be talking rubbish)

RISC OS's compromises revolved around the need to bring something to market after Acorn's original OS project failed. As such, its core is very much an evolution of the 8-bit BBC Micro MOS, and the co-operative multi-tasking is a function of the GUI sitting on top of MOS, not of MOS itself. RISC OS does use a limited amount of MMU protection and mapping (not sure if this is in the OS core or the GUI), but I don't think this extends to protecting applications from each other.

I remember an article in Byte (I think; might've been PCW) comparing the architecture of the Amiga vs the Mac, and how MacOS offloaded an awful lot of the work of a GUI app onto the programmer compared with AmigaOS; certainly the Amiga mapped onto the 'state of the art' OS design I had recently learnt in college at the time, with message passing etc.


I'd love to see that.


I was able to find this on Archive.org -- goodness knows how this nearly-40-year-old memory managed to survive! Byte magazine, Sept. 1986, with the article on p. 249 and the particular picture that's stayed in my head all these years as Figure 1 on p. 250. This link might work: https://archive.org/details/byte-magazine-1986-09/page/n260

Well, I suppose a thesis of my personal research of the last decade or two is that we could cut 99% of the bloat in current systems, not by designing something new and better, but by resurrecting forgotten working products from the 20th century.


I completely agree that we could cut out much of the bloat. Sadly, Unix (for example) has forgotten its roots as a reaction against what its creators saw as overkill in Multics. On the other hand, is a system evolved from 1960s/1970s design really the way forward for the challenges of technology today? There has been much innovation in between, of course, with some very imaginative systems along the way, but are these up to the challenges of modern computing -- most especially, the security challenges? Still, even if not directly, a simpler system would find it much easier to face those security challenges than a complex one.
