liam_on_linux: (Default)
[personal profile] liam_on_linux
Yes, the Amiga offered a GUI with pre-emptive multitasking, as early as 1985 or so. And it was affordable: you didn't even need a hard disk.

The thing is, that's only part of the story.

There's a generation of techies, now around 40, who don't remember this stuff well, and some of the older ones have forgotten it with time but don't realise. Recently I had some greybeard angrily telling me that floppy drives were IDE. Senile idiot.

Anyway.

Preemptive multitasking is only part of the story. Lots of systems had it. Windows 2.x (in its Windows/386 flavour) could do preemptive multitasking -- but only of DOS apps, and only in the base 640kB of RAM, so it was pretty useless.

It sounds good, but it's not enough on its own, because the other key ingredient is memory protection. You need both, together, to have a compelling deal. The Amiga and Windows 2.x/3.x only had the preemption part; they had no hardware memory management or protection to go with it. (Windows 3.x, running in 386 Enhanced Mode with more than 2MB of RAM, could do a little of it for DOS apps, but not much.)

Having multiple pre-emptive tasks is relatively easy if they are all in the same memory space, but it's horribly horribly unstable.

Also see: microkernels. In size terms, AmigaOS was a microkernel, but a microkernel without memory protection is not such a big deal, because the hard part of a microkernel is the interprocess communication, and if the components can just do that by reading and writing each other's RAM, it's trivially easy -- but also trivially insecure and trivially unstable.
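
To make "reading and writing each other's RAM" concrete, here's a minimal sketch in the shape of AmigaOS Exec message ports. PutMsg(), GetMsg(), WaitPort() and ReplyMsg() are the real exec.library calls, but port creation, node setup and error handling are omitted, and MyMsg, client() and server() are invented names, so treat it as an illustration rather than working code:

    /* Sketch of AmigaOS Exec message passing (simplified). Both
       tasks share one address space, so "sending" is just linking
       the sender's own struct onto the receiver's list:
       fast, zero-copy, and completely unprotected. */
    #include <exec/ports.h>
    #include <proto/exec.h>

    struct MyMsg {
        struct Message msg;   /* standard Exec message header */
        char payload[64];     /* the receiver reads this directly */
    };

    void client(struct MsgPort *server_port, struct MsgPort *reply_port)
    {
        struct MyMsg m;
        m.msg.mn_ReplyPort = reply_port;
        m.msg.mn_Length    = sizeof m;
        PutMsg(server_port, &m.msg);  /* no copy: the pointer is the message */
        WaitPort(reply_port);         /* sleep until the server replies */
        GetMsg(reply_port);           /* now the struct is ours again */
    }

    void server(struct MsgPort *port)
    {
        WaitPort(port);
        struct MyMsg *m = (struct MyMsg *)GetMsg(port);
        m->payload[0] = '!';          /* writing the client's own RAM:
                                         fast, but nothing stops a stray
                                         write landing anywhere at all */
        ReplyMsg(&m->msg);
    }

The speed and the fragility are the same thing: the "message" is just an address.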

RISC OS had pre-emptive multitasking too... but only of text-only command-line windows, and there were so few CLI RISC OS apps that it was mostly useless. At least on 16-bit Windows there were lots of DOS apps, so it was vaguely useful -- if they'd fit into memory, which only trivial ones would. Windows 3 came along very late in the DOS era, and by then most DOS apps barely fit into memory even one at a time. I made good money optimising DOS memory around 1990-1992, because I was very good at it, and without that work most DOS apps no longer fit into the 500-550kB that was left free. So, two of them in 640kB? Forget it.

Preemption is clever. It lets apps that weren't designed to multitask do it.

But it's also slow, which is why RISC OS didn't do it for its GUI. Co-operative multitasking is much quicker, which is why OSes like RISC OS and 16-bit Windows chose it for their GUI apps: GUI apps already strained the resources of late-1980s/very-early-1990s computers. So you had two choices (with a toy sketch of the second after this list):

• The Mac and GEM way: don't multitask at all.

• The 16-bit Windows and RISC OS way: multitask cooperatively, and hope nothing goes wrong.
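
To see how cheap that second option is -- and how it fails -- here's a toy co-operative "scheduler" in plain C. Nothing in it comes from any real OS; every name is invented:

    /* Toy co-operative round-robin. Each "task" does a little work
       and returns; returning is the yield. */
    #include <stdio.h>

    typedef void (*task_fn)(void);

    void word_processor(void) { puts("wp: handled one keystroke"); }
    void spreadsheet(void)    { puts("ss: recalculated one cell"); }
    void buggy_app(void)      { for (;;) ; }   /* never yields... */

    int main(void)
    {
        task_fn tasks[] = { word_processor, spreadsheet, buggy_app };
        int n = sizeof tasks / sizeof tasks[0];

        /* The "scheduler" is one loop with no way to take the CPU
           back: the moment buggy_app() runs, everything else is dead.
           Pre-emption fixes that with a timer interrupt that forcibly
           saves the running task's state, and that interrupt plus the
           register save/restore (plus cache/TLB fallout, with an MMU)
           is exactly the overhead the co-op systems were avoiding. */
        for (int i = 0; ; i = (i + 1) % n)
            tasks[i]();
    }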

Later, notably, MacOS 7, 8 and 9 added co-operative multitasking to a formerly single-tasking GUI OS, and the Atari Falcon world gained MultiTOS/MiNT/MagiC and the like. I used MacOS 8.x and 9.x a lot and I really liked them. They were extraordinarily usable, to an extent Mac OS X has never matched and never will.

But the good thing about owning a Mac in the 1990s was that at least one thing in your life was guaranteed to go down on you every single day.               

(Repurposed from a HN comment.)

Date: 2024-03-24 05:53 pm (UTC)
history_monk: (Default)
From: [personal profile] history_monk
You've given me an idea. The slow part of microkernels, and the slow part of pre-emptive multitasking, have the same cause: context switching is comparatively slow. You have to invalidate the cache and let it refill, and you have to load new settings into the MMU.

Now I want to look into ways to reduce that slowness. There's a way for the cache: cache by physical rather than virtual addresses. SPARC did that, I think. As for the MMU, I don't know, but I want to find out.
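
As a back-of-envelope illustration (all numbers assumed for the sake of the arithmetic, not measured): say a 64-entry TLB, and a hardware page-table walk costing somewhere between 20 and 100 cycles per miss. A switch that flushes the whole TLB then costs roughly

    64 entries × 20..100 cycles per refill ≈ 1,300..6,400 cycles

smeared over the next stretch of execution, before counting the cache refills on top. (I believe tagged TLBs, with an address-space ID on each entry, are the usual way to dodge the flush itself, on the architectures that have them.)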

Date: 2024-03-28 05:52 pm (UTC)
history_monk: (Default)
From: [personal profile] history_monk
I think I can see how to do this, but it's complicated, and I doubt I'll ever get anywhere. Switching processes is slow for the MMU, because you have to invalidate the TLB and let it refill. The pieces seem to be:

1. QNX-style message-passing, which lets a client call a server and get a response without the overhead of a pass through the task scheduler (a sketch follows this list).

2. Single address space operating system, so you don't have to invalidate the cache or the TLB.

3. As a consequence of (2), memory protection information stored and enforced separately from the page table, so that it can be changed readily on a process switch. This likely makes processes something defined in the hardware architecture. The trick is doing this without adding lots of complexity.
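
For piece (1), here is the shape of it, using the QNX Neutrino call names as I remember them; the surrounding code is invented and simplified, so take the details as approximate:

    /* QNX-style synchronous message passing (sketch). MsgSend()
       blocks the client and can hand the CPU straight to the server,
       so one request/reply needn't go through the general scheduler. */
    #include <sys/neutrino.h>

    void client(int coid)        /* coid: connection to the server */
    {
        char req[16] = "ping", rep[16];
        /* one call = send + block + collect the reply */
        MsgSend(coid, req, sizeof req, rep, sizeof rep);
    }

    void server(int chid)        /* chid: channel we receive on */
    {
        char msg[16];
        for (;;) {
            int rcvid = MsgReceive(chid, msg, sizeof msg, NULL);
            /* ...handle the request... */
            MsgReply(rcvid, 0, "pong", 5);  /* unblocks the client */
        }
    }

(Piece (3) sounds a little like the "protection key" schemes some CPUs offer, where a small set of protection domains can be switched without rewriting the page tables, though I don't think anything mainstream takes it as far as this idea needs.)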

Date: 2024-04-05 09:28 am (UTC)
history_monk: (Default)
From: [personal profile] history_monk
I have never implemented an OS, and it's a pretty big job. Some of these concepts require hardware support that doesn't exist in any current architecture that I'm aware of. I'll put them onto Usenet comp.arch, which is a pretty active newsgroup discussing the subject. They'll tell me what's wrong with them.

Date: 2024-04-12 03:09 pm (UTC)
history_monk: (Default)
From: [personal profile] history_monk
I started writing a fuller description and doing some research, and found out that this effectively exists already, in the form of "Hybrid Kernels."

A hybrid kernel contains what looks a lot like a fully implemented microkernel operating system, with a full set of servers for the various functions. However, they are then linked into one binary, and all run in the same address space and with the same memory protection. The division of code between servers is done at the programming language level, rather than the process level.

This gives you the code-structuring and ease-of-development advantages of a microkernel OS. It avoids the overheads of inter-process communication, because there isn't any such communication.
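
As a sketch of the difference (invented code, not from any real kernel): in a true microkernel the call below would be a message to another process; in a hybrid kernel the "server" boundary survives only in the source tree, and the call compiles to an ordinary function call.

    /* Two "servers" living as modules in one kernel binary. */
    #include <string.h>

    /* fs "server": its own module, its own private state. */
    static char fs_cache[512];

    int fs_read(int fd, void *buf, unsigned len)
    {
        (void)fd;
        if (len > sizeof fs_cache) len = sizeof fs_cache;
        memcpy(buf, fs_cache, len);
        return (int)len;
    }

    /* net "server": a separate module, but the same address space. */
    int net_send_file(int sock, int fd)
    {
        char buf[128];
        (void)sock;
        /* In a microkernel this would be IPC to the fs process.
           Here it compiles to a plain call: no copy, no context
           switch -- and no isolation, since a wild pointer in this
           module can trample fs_cache directly. */
        return fs_read(fd, buf, sizeof buf);
    }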

It does not give the system crash resistance of a microkernel, because a bad pointer can trample memory belonging to any component of the kernel. Nor does it make replacing OS components as easy as a microkernel does, but that's not always a good idea: see RISC OS for the problems that can happen with that.

A hybrid kernel can and does run on current hardware with good performance. The Windows NT kernel and the macOS/iOS/etc kernels are hybrid kernels and are quite successful.

The Linux kernel is monolithic, and could be considered old-fashioned on those grounds. It seems that it's very configurable at build time, and that is one reason why it can run on a very wide range of hardware.
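
For instance, a hand-picked fragment of a build-time .config (the option names are real; the selection and annotations are mine):

    CONFIG_SMP=y
    CONFIG_PREEMPT=y
    # CONFIG_MMU is not set

That last line is the genuine syntax for a disabled option: Linux really can be built for MMU-less chips, trading away memory protection entirely, which is the same trade-off this whole thread is about.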

Implementing my idea would require an unconventional memory management system, for single address space and protection settings separate from memory mapping. Its only advantage is that it would allow a purist microkernel OS, and that's very unlikely to justify the hardware development costs.

Date: 2024-03-29 10:52 am (UTC)
flaviomatani: (computery)
From: [personal profile] flaviomatani
Ah, 1989, with me doing music and music-lesson stuff on an Atari ST. Yep, no multitasking (although some desk accessories allowed you to do one or two little things in addition to the open app). I didn't miss it at the time; I didn't know it could be a thing for personal computers. Like so many people, I bought the Atari for the MIDI ports and the monochrome monitor, as I was not going to be playing games on it (dixit flavio, but I did waste many hours on Super BreakOut). Maybe I missed things by not choosing an Amiga -- in reality I had 'chosen' a Mac, but there was no way I could afford one at the time, therefore Atari. And emulators....

Date: 2024-04-02 12:59 pm (UTC)
tpear: (Default)
From: [personal profile] tpear
Not entirely sure why co-operative multi-tasking has significantly less overhead than pre-emptive. Each of these OSes had to make a different set of compromises based on hardware costs — and there wasn't really an MMU for the 68k until much later in its life, unless you were prepared to design your own (e.g. as Apple did with the Lisa).

The ARM-based Acorn range did have something (memory's a stretch on the detail), but its RISC OS made a different set of compromises for other reasons.

I remember an article in Byte (I think; it might've been PCW) comparing the architecture of the Amiga vs the Mac, and how MacOS offloaded an awful lot of the work of a GUI app onto the programmer compared with AmigaOS; certainly the Amiga mapped onto the 'state of the art' OS design I had recently learnt in college, with message passing etc.

Anyway, I think that — similar to your description in another article of OS design being locked into the UNIX-like architecture — MMU and context-switch support are similarly locked in: basically a supervisor/user security boundary (newer designs have a couple more levels, for hypervisors and so on), with the virtual-memory paging mechanisms tied tightly to the security permissions. There's no room to innovate in system software without overhead.

Date: 2024-04-28 10:50 pm (UTC)
tpear: (Default)
From: [personal profile] tpear
> TBH I found this comment very hard to follow. I'll try.

Sorry, my bad. I was writing with a number of distractions going on. My underlying point should've been that there was a lot more to the design of these late-1980s/early-1990s OSes than purely the nature of their task switching.

>> Not entirely sure why co-operative multi-tasking has significantly less overhead than pre-emptive.

> I think it's connected with other things in this thread.
>
> Coop tends to mean no memory protection, which tends to mean a single memory space, no use of an MMU, etc.
>
> That in turn typically means:
> • much smaller code and a much more compact OS
> • fewer context switches, fewer ring transitions, etc., as all the code is in one ring in one context
>
> It's smaller, simpler, and quicker... as long as nothing goes wrong. If an app freezes or crashes or locks up waiting for input, problem: no other apps can take over, and you lose the whole OS and all data in all apps. The only way out is a reboot.
>
> Some OSes let you try to kill the offending task, but then you had to save all your work quickly and reboot, and hope the apps' memory wasn't corrupted and you weren't saving junk on top of good files.


Sorry, I was not clear. I meant that, from my point of view as someone who worked on debugging task switching on bare-metal machines a long time ago, I cannot see where there's significantly more overhead in a pre-emptive/time-slicing design than in a co-operative one. The amount of implementation work required is fairly similar, and even the MMU isn't -that- bad.

Back when we entered the (pre-PC) 16/32-bit home computer age, we had the likes of the Amiga with pre-emptive multitasking but no MMU, the likes of RISC OS with co-operative multitasking and some limited use of the MMU, and I daresay all kinds of other combinations.

>> The ARM-based Acorn range did have something (memory's a stretch on the detail), but its RISC OS made a different set of compromises for other reasons.

> Can you explain this?

(From the PoV of someone who's only had glimpses into RISC OS, so may well be talking rubbish.)

RISC OS's compromises revolved around the need to bring something to market after Acorn's original OS project failed. As such, its core is very much an evolution of the 8-bit BBC Micro MOS, and the co-operative multitasking is a function of the GUI sat on top of MOS, not of MOS itself. RISC OS does use a limited amount of MMU protection and mapping (I'm not sure if this is in the OS core or the GUI), but I don't think it extends to protecting applications from each other.

>> I remember an article in Byte (I think; it might've been PCW) comparing the architecture of the Amiga vs the Mac, and how MacOS offloaded an awful lot of the work of a GUI app onto the programmer compared with AmigaOS; certainly the Amiga mapped onto the 'state of the art' OS design I had recently learnt in college, with message passing etc.

> I'd love to see that.

I was able to find this on Archive.org -- goodness knows how this nearly-40-year-old memory managed to survive! Byte magazine, Sept. 1986, with the article on pg. 249 and the particular picture that has stayed in my head all these years as Figure 1 on pg. 250. This link might work: https://archive.org/details/byte-magazine-1986-09/page/n260

> Well, I suppose a thesis of my personal research of the last decade or two is that we could cut 99% of the bloat in current systems, not by designing something new and better, but by resurrecting forgotten working products from the 20th century.

I completely agree that we could cut out much of the bloat. Sadly, Unix (for example) has forgotten its roots as a reaction against what its creators saw as overkill in Multics. On the other hand, is a system evolved from 1960s/1970s designs really the way forward for the challenges of technology today? There has been much innovation in between, of course, with some very imaginative systems along the way, but are these up to the challenges of modern computing -- most especially, the security challenges? Still, even if not directly, a simpler system would find it much easier to face those security challenges than a complex one.

Date: 2024-09-11 07:02 pm (UTC)
From: [personal profile] fromarcanum
Maybe he was talking about the LS-120 drive; it has an IDE (ATAPI) connector.
