The thing is, that's only part of the story.
There's a generation of techies who are about 40 now who don't remember this stuff well, and some of the older ones have forgotten with time but don't realise it. I recently had some greybeard angrily telling me that floppy drives were IDE. Senile idiot.
Anyway.
Preemptive multitasking is only part of the story. Lots of systems had it. Windows 2.0 could do preemptive multitasking -- but only of DOS apps, and only in the base 640kB of RAM, so it was pretty useless.
It sounds good, but it's not, because the other key ingredient is memory protection. You need both together to have a compelling deal. Amiga and Windows 2.x/3.x only had the preemption part: they had no hardware memory management or protection to go with it. (Windows 3.x, when running on a 386 and given >2MB RAM, could do some, for DOS apps, but not much.)
Having multiple pre-emptive tasks is relatively easy if they are all in the same memory space, but it's horribly horribly unstable.
Also see: microkernels. In size terms, AmigaOS was a microkernel, but a microkernel without memory protection is not such a big deal, because the hard part of a microkernel is the interprocess communication, and if they can just do that by reading and writing each other's RAM it's trivially easy but also trivially insecure and trivially unstable.
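To make that concrete, here's a tiny sketch in plain C (nothing Amiga-specific; every name in it is invented for illustration). In one shared, unprotected address space, "sending a message" can be nothing more than writing into a buffer the other task also sees, and a stray pointer can corrupt the other task's private data exactly as cheaply.

```c
/* Toy illustration: "IPC" in a single, unprotected address space.
 * Both "tasks" are just functions here; every name is invented. */
#include <stdio.h>
#include <string.h>

static char mailbox[64];        /* shared by every task: fast to use...      */
static char task_b_state[64];   /* ...but nothing stops task A touching this */

static void task_a_send(const char *msg)
{
    /* "Sending a message" is just a copy into shared RAM: no kernel call,
     * no context switch, no crossing of any protection boundary. */
    strncpy(mailbox, msg, sizeof mailbox - 1);

    /* An off-by-one or wild pointer is just as cheap, and just as unchecked:
     * writing through a bad pointer here would silently trash task B's state. */
}

static void task_b_receive(void)
{
    printf("task B got: %s (private state: %s)\n", mailbox, task_b_state);
}

int main(void)
{
    strcpy(task_b_state, "task B private data");
    task_a_send("hello from task A");
    task_b_receive();
    return 0;
}
```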
RISC OS had pre-emptive multitasking too... but only of text-only command-line windows, and there were few CLI RISC OS apps, so it was mostly useless. At least on 16-bit Windows there were lots of DOS apps, so it was vaguely useful -- if they'd fit into memory, which only trivial ones would. Windows 3 came along very late in the DOS era, and by then most DOS apps didn't fit into memory on their own, one at a time. I made good money optimising DOS memory around 1990-1992, because I was very good at it, and without that tuning most DOS apps no longer fit into 500-550kB. So two of them in 640kB? Forget it.
Preemption is clever. It lets apps that weren't designed to multitask do it.
But it's also slow, which is why RISC OS didn't do it. Co-op is much quicker, which is also why OSes like RISC OS and 16-bit Windows chose it for their GUI apps: GUI apps strained the resources of late-1980s/very-early-1990s computers. So you had two choices:
• The Mac and GEM way: don't multitask at all.
• The 16-bit Windows and RISC OS way: multitask cooperatively, and hope nothing goes wrong (a sketch of this model follows below).
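For anyone who never programmed one of these systems, here's a rough sketch of that cooperative model in plain C (all names invented; real systems like 16-bit Windows built this around a message loop rather than raw function pointers). Each app is just code the system calls in turn and trusts to return promptly; if one of them doesn't, nothing else ever runs again.

```c
/* Toy cooperative "scheduler": a round-robin loop over app callbacks.
 * Every name here is invented for illustration only. */
#include <stdio.h>

typedef void (*app_step_fn)(void);

static void word_processor_step(void) { printf("word processor handles one event\n"); }
static void spreadsheet_step(void)    { printf("spreadsheet handles one event\n"); }

/* If this handler ever entered an endless loop, the whole machine would appear
 * to hang: control would never return to the loop in main(), so no other app
 * could run. */
static void buggy_app_step(void)      { printf("buggy app handles one event\n"); }

int main(void)
{
    app_step_fn apps[] = { word_processor_step, spreadsheet_step, buggy_app_step };
    const int napps = (int)(sizeof apps / sizeof apps[0]);

    for (int round = 0; round < 3; round++)   /* a few rounds, just for the demo */
        for (int i = 0; i < napps; i++)
            apps[i]();                        /* each app must give control back voluntarily */

    return 0;
}
```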
Later, notably, MacOS 7-8-9 and Falcon MultiTOS/MiNT/MagiC etc. added coop multitasking to single-tasking GUI OSes. I used MacOS 8.x and 9.x a lot and I really liked them. They were extraordinarily usable, to an extent that Mac OS X has never caught up with and never will.
But the good thing about owning a Mac in the 1990s was that at least one thing in your life was guaranteed to go down on you every single day.
no subject
Date: 2024-03-24 05:53 pm (UTC)

Now I want to look into ways to reduce that slowness. There's a way for the cache: cache by physical rather than virtual addresses. SPARC did that, I think. As for the MMU, I don't know, but I want to find out.
no subject
Date: 2024-03-27 07:06 pm (UTC)

Nice. :-)
Is that not what the late great Jochen Liedtke spent a lot of time trying to do?
no subject
Date: 2024-03-28 05:52 pm (UTC)

1. QNX-style message-passing, which lets a client call a server and get a response without the overhead of a pass through the task scheduler (there's a sketch of this after the list).
2. Single address space operating system, so you don't have to invalidate the cache or the TLB.
3. As a consequence of (2), memory protection information stored and enforced separately from the page table, so that it can be changed readily on a process switch. This likely makes processes something defined in the hardware architecture. The trick is doing this without adding lots of complexity.
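To make (1) a bit more concrete, here's a toy sketch in plain C with pthreads, loosely modelled on QNX's MsgSend/MsgReceive/MsgReply, with all names invented. It only shows the synchronous send/receive/reply choreography; the real win on QNX, the kernel handing the CPU straight from the blocked client to the receiving server instead of taking another trip through the scheduler, is exactly the part a user-space demo can't show.

```c
/* Toy model of QNX-style synchronous message passing (send / receive / reply),
 * built on pthreads purely to show the choreography.  In a real microkernel the
 * kernel would hand the CPU straight from the sending client to the receiving
 * server rather than going back through the scheduler; a user-space demo like
 * this cannot show that part.  All names below are invented. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct channel {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int  have_request, have_reply;
    char request[64], reply[64];
};

static struct channel chan = {
    .lock = PTHREAD_MUTEX_INITIALIZER,
    .cond = PTHREAD_COND_INITIALIZER,
};

/* Client side: blocks until the server has replied (compare QNX MsgSend()). */
static void msg_send(struct channel *ch, const char *req, char *rep, size_t replen)
{
    pthread_mutex_lock(&ch->lock);
    strncpy(ch->request, req, sizeof ch->request - 1);
    ch->have_request = 1;
    pthread_cond_broadcast(&ch->cond);
    while (!ch->have_reply)
        pthread_cond_wait(&ch->cond, &ch->lock);
    strncpy(rep, ch->reply, replen - 1);
    rep[replen - 1] = '\0';
    ch->have_reply = 0;
    pthread_mutex_unlock(&ch->lock);
}

/* Server side: waits for a request (compare MsgReceive()), then replies. */
static void *server_thread(void *arg)
{
    struct channel *ch = arg;
    pthread_mutex_lock(&ch->lock);
    while (!ch->have_request)
        pthread_cond_wait(&ch->cond, &ch->lock);
    ch->have_request = 0;
    snprintf(ch->reply, sizeof ch->reply, "reply to '%s'", ch->request);
    ch->have_reply = 1;
    pthread_cond_broadcast(&ch->cond);            /* compare MsgReply() */
    pthread_mutex_unlock(&ch->lock);
    return NULL;
}

int main(void)
{
    pthread_t srv;
    char reply[64];

    pthread_create(&srv, NULL, server_thread, &chan);
    msg_send(&chan, "open file", reply, sizeof reply);
    printf("client got: %s\n", reply);
    pthread_join(&srv, NULL);
    return 0;
}
```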
no subject
Date: 2024-04-03 10:39 am (UTC)

This sounds fascinating, although TBH it's over my head from the brief description here.
If you don't have time to implement it then please, at least, write some detailed notes and publish them somewhere so that someone can try to do it?
no subject
Date: 2024-04-05 09:28 am (UTC)

no subject
Date: 2024-04-12 03:09 pm (UTC)

A hybrid kernel contains what looks a lot like a fully implemented microkernel operating system, with a full set of servers for the various functions. However, they are then linked into one binary, and all run in the same address space and with the same memory protection. The division of code between servers is done at the programming language level, rather than the process level.
This gives the code structuring and easier development advantages of a microkernel OS. It avoids the overheads of inter-process communication, because there isn't any such communication.
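As a rough sketch (plain C, invented names, emphatically not the actual NT or XNU code): the "server" is still written as its own self-contained module, but calling into it is just an ordinary function call within one address space, not a message to another process.

```c
/* Hybrid-kernel flavour, as a toy: the "VFS server" is a separate module at
 * the source level, but it is linked into the same binary and called directly.
 * All names are invented for illustration. */
#include <stdio.h>
#include <string.h>

/* --- vfs_server.c: conceptually its own "server" ------------------------- */
static long vfs_read(const char *path, char *buf, long len)
{
    /* In a true microkernel this request would arrive as a message from
     * another process; here it is an ordinary call, so the "IPC" costs
     * exactly one function call. */
    snprintf(buf, (size_t)len, "contents of %s", path);
    return (long)strlen(buf);
}

/* --- some other kernel component, in the same address space -------------- */
int main(void)
{
    char buf[64];
    long n = vfs_read("/etc/motd", buf, (long)sizeof buf);
    printf("read %ld bytes: %s\n", n, buf);
    return 0;
}
```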
It does not give you the crash resistance of a microkernel, because a bad pointer can trample memory belonging to any component of the kernel. Nor does it make replacing OS components as easy as a microkernel does, but that's not always a good idea: see RISC OS for the problems that can happen with that.
A hybrid kernel can and does run on current hardware with good performance. The Windows NT kernel and the macOS/iOS/etc kernels are hybrid kernels and are quite successful.
The Linux kernel is monolithic, and could be considered old-fashioned on those grounds. It seems that it's very configurable at build time, and that is one reason why it can run on a very wide range of hardware.
Implementing my idea would require an unconventional memory management system, for single address space and protection settings separate from memory mapping. Its only advantage is that it would allow a purist microkernel OS, and that's very unlikely to justify the hardware development costs.
no subject
Date: 2024-03-29 10:52 am (UTC)

no subject
Date: 2024-04-03 10:46 am (UTC)

The ST was an amazing machine for the price in the 1980s, yes indeed, and I don't think it deserves the relative obscurity it fell into later on.
Although I have never explored the MiNT OS in detail, it does seem like it found ways around a lot of the ST's limitations, while retaining a degree of compatibility.
The Amiga did amazing things with tiny resources, but that cleverness meant it was virtually impossible to expand and enhance the OS to use the better facilities provided by later, more capable CPUs, and its media chipset was so closely tied to the CPU that much of its power was lost if the sound and video chips were replaced.
In other words, its high level of integration and functionality crippled later upgrades.
(The same is largely true of Classic MacOS.)
Whereas the ST had less special hardware cleverness, and less amazing functionality, and so in time it was possible to improve and enhance it -- and yet still run the original apps.
Nowadays there is EmuTOS, a modernised, all-FOSS replacement ST OS, and part of it is based on the original DR PC GEM source code which Caldera made FOSS. That makes me really happy.
I wish someone somewhere would find, open up and share the source code of any of DR's later multitasking CP/M derivatives. DR grew CP/M into a proper multitasking networked 32-bit OS with a GUI, and it seems to be almost all lost, because it was all proprietary.
no subject
Date: 2024-04-02 12:59 pm (UTC)

The ARM-based Acorn range did have something (my memory's stretched for the detail), but its RISC OS made a different set of compromises for other reasons.
I remember an article in Byte (I think; might've been PCW) comparing the architecture of the Amiga vs MacOS, and how the Mac offloaded an awful lot of the work of a GUI app onto the programmer compared with the Amiga OS; certainly the Amiga mapped onto the 'state of the art' OS design I had recently learnt in college at the time, with message passing etc.
Anyway, I think that, similar to your description in another article of OS design being locked into the UNIX-like architecture, MMU and context-switch support are similarly locked in: basically a supervisor/user security boundary (although newer designs have a couple more levels for hypervisors and so on) and virtual-memory paging mechanisms tied tightly to security permissions. There's no room to innovate in system software without overhead.
no subject
Date: 2024-04-03 11:09 am (UTC)

TBH I found this comment very hard to follow. I'll try.
I think it's connected with other things in this thread.
Coop tends to mean no memory protection, which tends to mean a single memory space, no use of an MMU, etc.
That in turn typically means:

• Much smaller code and a much more compact OS.
• Fewer context switches, fewer ring transitions, etc., as all code is in 1 ring in 1 context.
It's smaller, simpler, and quicker... as long as nothing goes wrong. If an app freezes or crashes or locks up waiting for input, problem: no other apps can take over, you lose the whole OS and all data in all apps. The only way out is a reboot.
Some OSes let you try to kill the offending task, but then you had to save all your work quickly and reboot and hope the apps' memory wasn't corrupted and you weren't saving junk on top of good files.
Agreed.
Can you explain this?
I'd love to see that.
Well, I suppose a thesis of my personal research of the last decade or two is that we could cut 99% of the bloat in current systems, not by designing something new and better, but by resurrecting forgotten working products from the 20th century.
no subject
Date: 2024-04-28 10:50 pm (UTC)

Sorry, my bad. I was writing with a number of distractions going on. My underlying point should've been that there was a lot more to the design of these late-1980s/early-1990s OSes than purely the nature of their task switching.
Sorry, I was not clear. I meant that, from my point of view as someone who worked on debugging task switching on a bare-metal machine a long time ago, I cannot see where there's significantly more overhead in pre-emptive/time-slicing versus co-operative switching. The amount of work required by the implementation is fairly similar, and even the MMU isn't -that- bad.
Back when we entered the (pre PC) 16/32-bit home computer age, we had the likes of Amiga with pre-emptive multi-tasking but no MMU, the likes of RISC OS with co-operative multitasking and some limited use of MMU, and I daresay all kinds of other combinations.
(From the PoV of someone who's only had glimpses into RISC OS, so may well be talking rubbish)
RISC OS's compromises revolved around the need to bring something to market after Acorn's original OS project failed. As such, its core is very much an evolution of the 8-bit BBC Micro MOS, and the co-operative multi-tasking is a function of the GUI sat on top of MOS, not of MOS itself. RISC OS does use a limited amount of MMU protection and mapping (not sure if this is the OS core or the GUI), but I don't think this extends to protecting applications from each other.
I was able to find this on Archive.org -- goodness knows how this near 40 year old memory managed to survive! Byte Magazine Sept. 1986, with the article on pg. 249 and the particular picture that's stayed in my head all these years in Figure 1 on pg. 250. This link might work: https://archive.org/details/byte-magazine-1986-09/page/n260
I completely agree that we could cut out much of the bloat. Sadly, Unix (for example) has forgotten its roots as a reaction against what its creators saw as overkill in Multics. On the other hand, is a system evolved from 1960s/1970s design really the way forward for the challenges of technology today? There has been much innovation in between, of course, with some very imaginative systems along the way, but are these up to the challenges of modern computing -- most especially, the security challenges? Still, even if not directly, a simpler system would find it much easier to face those security challenges than a complex one.
no subject
Date: 2024-09-11 07:02 pm (UTC)