I like cheap Chinese phones. I am on my 3rd now. First was an iRulu Victory v3, which came with Android 5.1: the first 6.5" phablet I ever saw, plasticky and not very hi-res, but well under €200, and it had dual SIMs, a µSD slot and a replaceable battery. No compass, though.

Then a PPTV King 7, an amazing device for the time, which came with Android 5 as well, but half of it in Chinese. I rooted it and put CyanogenMod on it, which got me Android 6. Retina-class screen, dual SIM (or one SIM plus µSD), and fast.

Now, an Umidigi F2, which came with Android 10. Astonishing spec for about €125. Dual SIM + µSD, 128GB flash, fast, superb screen.

But with all of them, typically, you get one ROM update ever -- normally the first time you turn the phone on -- and then that's it. The PPTV was a slight exception, as a 3rd-party ROM got me a newer version, but with penalties: the camera autofocus failed and all images were blue-tinged, the mic mostly stopped working, and the compass became a random-number generator.

They are all great for the money, but the chipset will never get a newer Android. This is normal. It's the price of getting a £150 phone with the specification of a £600+ phone.

In contrast, I bought my G/F a Xiaomi Mi A2. It's great for the money -- a £200 phone -- though it wasn't high-end when new. But the build quality is good, the OS has little bloatware (because it's Android One), at 3 years old the battery still lasts a day, there are no watermarks on photos, etc.

It had 3 major versions of Android (7, then 8, then 9) and then some updates on top.

This is what you get with Android One and a big-name Chinese vendor.

Me, I go for the amazing deals from little-known vendors, and I accept that I'll never get an update.

MediaTek are not one of those companies that maintain their Android ports for years. In return, they're cheap and the spec is good when the chips are new. They just move on to new products. Planet persuaded 'em to put Android 8 on it, and they deserve kudos for that, not complaining. It's an obsolete product; there's no reason to buy a Gemini when you could have a Cosmo, other than cost.

No, these are not £150 phones. They're £500 phones, because of the unique form-factor: a clamshell with the best mobile keyboard ever made.

But Planet Computers are a small company making an almost-bespoke device: i.e. in tiny numbers by modern standards. So, yes, it's made from cheap parts from the cheapest possible manufacturers, because the production run is thousands. A Chinese phone maker like Xiaomi would consider a production run of only 20 million units to be a failure. (Source: interview with former CEO.) 80 million is a niche product to them.

Planet Computers' production is below prototype scale for these guys. It's basically a weird little niche hand-made item.

For that, £500 is very good. Compare with the F(x)tec Pro1, still not shipping a good 18 months after I personally enquired about one, which is about £750 -- for a poorer keyboard and a device with fewer adaptations to landscape use.

This is what you get when one vendor -- Google -- provides the OS, another does the port, another builds products around it, and often, another sells the things. MediaTek design and build the SoC, and port one specific version of Android to it... a bit of work from the integrator and OEM builder, and there's your product.

This is one of the things you sometimes get if you buy a name-brand phone: OS updates. But the Chinese phones I favour are ½-⅓ of the price of a cheap name-brand Android and ¼ of the price of a premium brand such as Samsung. So I can replace the phone 2-3× more often and keep more current that way... and still be a lot less worried about having it stolen, or breaking it, or the like. Win/win, from my perspective.

Part of this is because the ARM world is not like the PC world.

For a start, in the x86 world, you can rely on there being system firmware to boot your OS. Most PCs used to use a BIOS; the One Laptop Per Child XO-1 used Open Firmware, like NewWorld PowerMacs. Now, we all get UEFI.

(I do not like UEFI much, as regular readers, if I have a plural number of those, may have gathered.)

ARM systems have no standard firmware. No bootloader, nothing at all. The system vendor has to do all that stuff themselves. And with a SoC (System On A Chip), the system vendor is the chip designer/fabricator.

(For instance, the Raspberry Pi's ARM cores are actually under the control of the GPU, which runs its own OS -- a proprietary RTOS called ThreadX. When a RasPi boots, the *GPU* loads the "firmware" from the SD card, which boots ThreadX, and then ThreadX starts the ARM core(s) and loads an OS into them. That's why there must be that special little FAT partition: it is what ThreadX reads. That's also why RasPis do not use GRUB or any other bootloader. The word "booting" is a reference to Baron Münchausen lifting himself out of a swamp by his own bootstraps. A computer loading its own software is an apparent contradiction in terms: it lifts itself into running condition by its own bootstraps. I.e. it boots up.

Well, RasPis don't. The GPU boots, loads ThreadX, and then ThreadX initialises the ARMs and puts an OS into their memory for them and tells them to run it.)

So each and every ARM system (i.e. each device built around a particular SoC, unless it's very weird) has to have a new native port of every OS. You can't boot one phone off the Android build from another.

A Gemini is a cheapish very-low-production-run Chinese Android phone, with an additional keyboard wired on, and the screen forced to landscape mode in software. (A real landscape screen would have cost too much.)

The Cosmo piggybacks a separate little computer in the lid, much as the Touch Bar on a MacBook Pro is a separate little ARM computer running its own OS -- like a tiny, very long, thin iPad.

The Astro Slide will do away with this again, so the fancy hinge should make for a simpler, less expensive design... Note, I say should...

[Repurposed from Stack Exchange, here]
The premise in the question is incorrect. There were such chips. The question also fails to allow for the way that the silicon-chip industry developed.
Moore's Law basically said that every 18 months, it was possible to build chips with twice as many transistors for the same amount of money.
The 6502 (1975) is a mid-1970s design. In the '70s it cost a lot to use even thousands of transistors; the 6502 succeeded partly because it was very small and simple and didn't use many, compared to more complex rivals such as the Z80 and 6809.
The 68000 (1979) was from the same decade. It became affordable in the early 1980s (e.g. the Apple Lisa) and slightly more so by 1984 (the Apple Macintosh). However, note that Motorola also offered a version with an 8-bit external bus, the 68008, as used in the Sinclair QL. This reduced performance, but it was worth it for cheaper machines, because it was so expensive to have a 16-bit chipset and 16-bit memory.
Note that just 4 years separate the 6502 and the 68000. That's how much progress was being made then.
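
As a back-of-the-envelope sketch (my arithmetic, using the folk 18-month formulation above, nothing vendor-specific), a 4-year gap multiplies the affordable transistor budget by more than six:

    # Transistor budget growth under "doubling every 18 months" (illustrative).
    def budget_multiplier(years, doubling_period=1.5):
        return 2 ** (years / doubling_period)

    print(f"{budget_multiplier(4):.1f}x")  # ~6.3x over the 6502-to-68000 gap
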
The 65C816 was a (partially) 16-bit successor to the 6502. Note that WDC also designed a 32-bit successor, the 65C832. Here is a datasheet: https://downloads.reactivemicro.com/Electronics/CPU/WDC%2065C832%20Datasheet.pdf
However, this was never produced. As a 16-bit extension to an 8-bit design, the 65C816 was compromised and slower than pure 16-bit designs. A 32-bit design would have been even more compromised.

Note, this is also why Acorn succeeded with the ARM processor: its clean 32-bit-only design was more efficient than Motorola's combination 16/32-bit design, which was partly inspired by the DEC PDP-11 minicomputer. Acorn evaluated the 68000, 65C816 (which it used in the rare Acorn Communicator), NatSemi 32016, Intel 80186 and other chips and found them wanting. Part of the brilliance of the Acorn design was that it effectively used slow DRAM and did not need elaborate caching or expensive high-speed RAM, resulting in affordable home computers that were nearly 10x faster than rival 68000 machines.
The 68000 was 16-bit externally but 32-bit internally: that is why the Atari machine that used it was called the ST, short for "sixteen/thirty-two".
The first fully 32-bit 680x0 chip was the 68020 (1984). It was faster but did not offer a lot of new capabilities, and its successor, the 68030, was more successful, partly because it integrated a memory management unit. Compare with the Intel 80386DX (1985), which did much the same: 32-bit bus, integral MMU.
The 80386DX struggled in the market because of the expense of making 32-bit motherboards with 32-bit wide RAM, so was succeeded by the 80386SX (1988), the same 32-bit core but with a half-width (16-bit) external bus. This is the same design principle as the 68008.
Motorola's equivalent was the fairly rare 68EC020.
The reason was that around the end of the 1980s, when these devices came out, 16MB of memory was a huge amount and very expensive. There was no need for mass-market chips to address 4GB of RAM — that would have cost hundreds of thousands of £/$ at the time. Their 32-bit cores were for performance, not capacity.
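
The power-of-two arithmetic behind that is worth spelling out, since each extra address line doubles the reachable memory; a quick sketch:

    # Reachable memory per address-bus width.
    # 24 address lines (68000-class) reach 16MB; a full 32 lines reach 4GB.
    for bits in (24, 32):
        print(f"{bits} address bits -> {2**bits:,} bytes")
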
The 68030 was followed by the 68040 (1990), just as the 80386 was followed by the 80486 (1989). Both also integrated floating-point coprocessors into the main CPU die. The progress of Moore's Law had now made this affordable.
The line ended with the 68060 (1994), still 32-bit -- and, like Intel's 80586 family, by then called "Pentium" because Intel couldn't trademark numbers, it had Level 1 cache on the CPU die.
The reason was that, at this time, fabricating large chips with millions of transistors was still expensive, and these chips could already address more RAM than it was remotely affordable to fit into a personal computer.
So the priority at the time was to find ways to spend a limited transistor budget on making faster chips: 8-bit → 16-bit → 32-bit → integrate MMU → integrate FPU → integrate L1 cache
This line of development somewhat ran out of steam by the mid-1990s. This is why there was no successor to the 68060.
Most of the industry switched to the path Acorn had started a decade earlier: dispensing with backwards compatibility with now-compromised 1970s designs and starting afresh with a stripped-down, simpler, reduced design — Reduced Instruction Set Computing (RISC).
ARM chips supported several OSes: RISC OS, Unix, Psion EPOC (later renamed Symbian), Apple NewtonOS, etc. Motorola's supported more: LisaOS, classic MacOS, Xenix, ST TOS, AmigaDOS, multiple Unixes, etc.
No single one was dominant.
Intel was constrained by the success of Microsoft's MS-DOS/Windows family, which sold far more than all the other x86 OSes put together. So backwards-compatibility was more important for Intel than for Acorn or Motorola.
Intel had tried several other CPU architectures: iAPX-432, i860, i960 and later Itanium. All failed in the general-purpose market.
Thus, Intel was forced to find a way to make x86 quicker. It did this by breaking x86 instructions down into RISC-like "micro-operations", re-sequencing them for faster execution, running them on a RISC-like core, and then retiring the results in x86 program order afterwards. This appeared first in the Pentium Pro, which only did it efficiently for 32-bit x86 instructions -- at a time when many people were still running Windows 95/98, an OS family built from a lot of 16-bit x86 code and running a lot of 16-bit apps.
Then came the Pentium II, an improved Pentium Pro with better 16-bit x86 performance and cheaper cache packaging -- and a few years later, the PC market moved to Windows 2000 and then XP, fully 32-bit OSes.
In other words, even by the turn of the century, the software was still moving to 32-bit and the limits of 32-bit operation (chiefly, 4GB RAM) were still largely theoretical. So, the effort went into making faster chips with the existing transistor budget.
Only by the middle of the first decade of the 21st century did 4GB become a bottleneck, leading to the conditions for AMD to create a 64-bit extension to x86.
The reasons that 64-bit happened did not apply in the 1990s.
From the 1970s to about 2005, 32 bits were more than enough, and CPU makers spent their transistor budgets on integrating more go-faster parts into CPUs. Eventually, this strategy ran out of road, when CPUs included the integer core, a floating-point core, a memory management unit, a small amount of L1 cache and a larger amount of slower L2 cache.
Then there was only one way to go: integrate a second CPU onto the chip -- first as a second CPU die in the same package, then as true dual-core dies. Luckily, by this time, NT had replaced Win9x, and NT and Unix could both support symmetric multiprocessing.
So, dual-core chips, then quadruple-core chips. After that, a single user on a desktop or laptop gets little more benefit. There are many CPUs with more cores but they are almost exclusively used in servers.
At the same time, the CPU industry was reaching the limits of how fast silicon chips can run, and of how much heat they emit when doing so. The megahertz race ended.
So the emphasis changed to two new priorities, as the limiting factors became:

  • the amount of system memory

  • the amount of cooling they required

  • the amount of electricity they used to operate

These last two things are two sides of the same coin, which is why I said two not three.
Koomey's Law has replaced Moore's Law.

The first computer I owned was a Sinclair ZX Spectrum, and I retain a lot of fondness for these tiny, cheap, severely-compromised machines. I just backed the ZX Spectrum Next Kickstarter, for instance.

But after I left university and got a job, I bought myself my first "proper" computer: an Acorn Archimedes. The Archie remains one of the most beautiful computers [PDF] to use and to program that I've ever known. This was the machine for which Acorn developed the original ARM chip. Acorn also had an ambitious project to develop a new, multitasking, better-than-Unix OS for it, written in Modula-2 and called ARX. It never shipped, and instead, some engineers from Acornsoft, Acorn's in-house publishing arm, did an inspired job of updating the BBC Micro OS to run on the new ARM hardware. The result was called Arthur. Version 2 was renamed RISC OS [PDF].

(Incidentally, Dick Pountain's wonderful articles about the Archie are why I bought one and why I'm here today. Some years later, I was lucky enough to work with him on PC Pro magazine and we're still occasionally in touch. A great man and a wonderful writer.)

Seven or eight years ago, on a biker mailing list, Ixion, I mentioned RISC OS as something interesting to do with a Raspberry Pi, and a chap replied "a friend of mine wrote that!" Some time later, that passing comment led to me facilitating one of my favourite talks I've ever attended, at the RISC OS User Group of London. The account is well worth a read for the historical context.

(Commodore had a similar problem: the fancy Commodore Amiga Operating System, CAOS, was never finished, and some engineers hastily assembled a replacement around the TRIPOS research OS. That's what became AmigaOS.)

Today, RISC OS runs on a variety of mostly small and inexpensive ARM single-board computers: the Raspberry Pi, the BeagleBoard, the (rather expensive) Titanium, the Pinebook and others. New users are discovering this tiny, fast, elegant little OS and becoming enthusiastic about it.

And that's led to two different but cooperating initiatives that hope to modernise and update this venerable OS. One is backed by a new British company, RISC OS Developments, who have started with a new and improved distribution of the Raspberry Pi version, called RISC OS Direct. I have it running on a RasPi 3B+ and it's really rather nice.

The other is a German project called RISC OS Cloverleaf.

What I am hoping to do here is to try to give a reality check on some of the more ambitious goals for the original native ARM OS, which remains one of my personal favourites to this day.

Even back in 1987, RISC OS was not an ambitious project. At heart, it vaguely resembles Windows 3 on top of MS-DOS: underneath, there is a single-tasking, single-user, text-mode OS built to an early-1980s design, and layered on top of that, a graphical desktop which can cooperatively multitask graphical apps -- although it can also pre-emptively multitask old text-mode programs.

Cooperative multitasking is long gone from mainstream OSes now. It means that programs must voluntarily surrender control to the OS, which then runs the next app for a moment; when that app gives up control of the computer, a third gets a turn, then a fourth, and so on. It has one partial advantage: it's a fairly lightweight, simple system, and it doesn't need much hardware assistance from the CPU to work well.

But the crucial weakness is in the word "cooperative": it depends on all the programs being good citizens and behaving themselves. If one app grabs control of the computer and doesn't let go, there's nothing the OS can do. Good for games and for media playback -- unless you want to do something else at the same time, in which case, tough luck -- but bad news if an app does something demanding, like rendering a complex model or applying a big filter or something. You can't switch away and get on with anything else; you just have to wait and hope the operation finishes and doesn't run out of memory, or fill up the hard disk, or anything. Because if that one app crashes, then the whole computer crashes, too, and you'll lose all your work in all your apps.
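
To make the mechanism concrete, here is a minimal toy sketch of cooperative scheduling in Python (invented app names, nothing RISC OS-specific). Each task runs only until it voluntarily yields -- and a task that never yields hangs the whole loop, which is exactly the failure mode described above:

    # A toy cooperative multitasking scheduler (illustrative only).
    def app(name):
        while True:
            print(f"{name}: does a little work...")
            yield  # voluntarily hands control back to the scheduler

    def scheduler(tasks, rounds=3):
        # Round-robin: each task gets a turn, but only because it yields.
        for _ in range(rounds):
            for task in tasks:
                next(task)

    scheduler([app("Editor"), app("Mail"), app("Renderer")])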

Classic MacOS worked a bit like this, too. There are good reasons why everyone moved over to Windows 95 (or Windows NT if they could afford a really high-end PC) -- because those OSes used the 32-bit Intel chips' hardware memory protection facilities to isolate programs from one another in memory. If one crashed, there was a chance you could close down the offending program and save your work in everything else.

Unlike under MacOS 7, 8 or 9, or under RISC OS. Which is why Acorn and Apple started to go into steep decline after 1995. For most people, reliability and robustness are worth an inferior user experience and a bit of sluggishness. Nobody missed Windows 3.

Apple tried to write something better, but failed, and ended up buying NeXT Computer in 1996 for its Unix-based NeXTstep OS. Microsoft already had an escape plan -- to replace its DOS-based Windows 9x and get everyone using a newer, NT-based OS.

Acorn didn't. It was working on another ill-fated all-singing, all-dancing replacement OS, Galileo, but like ARX, it was too ambitious and was never finished. I've speculated on this blog before about what might have happened if Acorn had done a deal with Be for BeOS, but it would never have happened while Acorn was dreaming of Galileo.

So Acorn kept working on RISC OS alongside its next-gen Risc PC, codenamed Phoebe: a machine with PCI slots and the ability to take two CPUs -- not that RISC OS could use more than one. The new OS version added support for larger hard disks, built in video encoding and decoding, and gained some other nice features, but it was an incremental improvement at best.

Meanwhile, RISC OS had found another, but equally doomed, niche: the ill-fated Network Computer initiative. NCs were an idea before their time: thin, lightweight, simple computers with no hard disk, but always-on internet access. Programs wouldn't -- couldn't -- get installed locally: they'd just load over the Internet. (Something like a Chromebook with web apps, 20 years later, but with standalone programs.) The Java cross-platform language was ideal for this. To pursue it, Acorn licensed RISC OS to Pace, a UK company that made satellite and cable-TV set-top boxes.

Acorn's NC was one of the most complete and functional, although other companies tried, including DEC, Sun and Corel. The Acorn NC ran NCOS, based on, but incompatible with, RISC OS. Sadly, the NC idea was ahead of its time -- this was before broadband internet was common, and it just wasn't viable on dial-up.

Acorn finally acknowledged reality and shut down its workstation division in 1998, cancelling the Phoebe computer after production of the cases had begun. Its ARM division went on to become huge, and the other bits were sold off and disappeared. The unfinished RISC OS 4 was sold off to a company called RISC OS Ltd. (ROL), who finished it and sold it as an upgrade for existing Acorn owners. Today, it's owned by 3QD, the company behind the commercial Virtual Acorn emulator.

A different company, Castle Technology, continued making and selling some old Acorn models, until 2002 when it surprised the RISC OS world with a completely new machine: the Iyonix. It had proved impossible to make new ARM RISC OS machines, because RISC OS ran in 26-bit mode, and modern ARM chips no longer supported this. Everyone had forgotten the Pace NC effort, but Castle licensed Pace's fork of RISC OS and used it to create a new, 32-bit version for a 600MHz Intel ARM chip. It couldn't directly run old 26-bit apps, but it was quite easy to rewrite them for the new, 32-bit OS.

The RISC OS market began to flourish again in a modest way, with Castle selling fast, modern RISC OS machines to old RISC OS enthusiasts. Some companies still used RISC OS as well, and rumour said that a large ongoing order for thousands of units from a secret buyer was what made this worthwhile for Castle.

ROL, meantime, was very unhappy. It thought it had exclusive rights to RISC OS, because everyone had forgotten that Pace had a licence too. I attempted to interview its proprietor, Paul Middleton, but he was not interested in cooperating.

Meantime, RISC OS Ltd continued modernising and improving the 26-bit RISC-OS-4-based branch of the family, and selling upgrades to owners of old Acorn machines.

So by early in the 21st century, there were two separate forks of RISC OS:

  • ROL's edition, derived from Acorn's unfinished RISC OS 4, marketed as Select, Adjust and finally "RISC OS SIX", running on 26-bit machines, with a lot of work done on modularising the codebase and adding a Hardware Abstraction Layer to make it easier to move to different hardware. This is what you get with VirtualAcorn.

  • And Castle's edition, marketed as RISC OS 5, for modern 32-bit-only ARM computers, based on Pace's branch as used to create NCOS. This is the basis of RISC OS Open and thus RISC OS Direct.

When Castle was winding down its operations selling ARM hardware, it opened up the source code to RISC OS 5 in the form of RISC OS Open (ROOL). It wasn't open source -- if you made improvements, you had to give them back to Castle Technology. However, this caused RISC OS development to speed up a little, and led to the version that runs on other ARM-based computers, such as the Raspberry Pi and BeagleBoard.

Both are still the same OS, though, with the same cooperative multitasking model. RISC OS does not have the features that make 1990s 32-bit OSes (such as OS/2 2.0, Windows NT, Apple Mac OS X, or the multiple competing varieties of Unix) more robust and stable: hardware-assisted memory management and memory protection, pre-emptive multitasking, support for multiple CPUs in one machine, and so on.

There are lightweight, simpler OSes that have these features -- the network-centric successor to Unix, called Plan 9, and its processor-independent successor, Inferno; the open-source Unix-like microkernel OS, Minix 3; the commercial microkernel OS, QNX, which was nearly the basis for a next-generation Amiga and was the basis of the next-generation Blackberry smartphones; the open-source successor to BeOS, Haiku; Pascal creator Niklaus Wirth's final project, Oberon, and its multiprocessor-capable successor A2/Bluebottle -- which ironically is pretty much exactly what Acorn ARX set out to be.

In recent years, RISC OS has gained some more minor modern features. It can talk to USB devices. It speaks Internet protocol and can connect to the Web. (But there's no free Wifi stack, so you need to use a cable. It can't talk Bluetooth, either.) It can handle up to 2GB of memory -- four thousand times more than my first Archimedes.

Some particular versions or products have had other niceties. The proprietary Geminus allowed you to use multiple monitors at once. Aemulor allows 32-bit computers to run some 26-bit apps. The Viewfinder add-on adaptor allowed Risc PCs to use ATI AGP graphics cards from PCs, with graphics acceleration. The inexpensive Pinebook laptop has wifi support under RISC OS.

But these are small things. Overcoming the limitations of RISC OS would be a lot more difficult. For instance, Niall Douglas implemented a pre-emptive multitasking system for RISC OS. As the module that implements cooperative multitasking is called the WIMP, he called his replacement Wimp2. It's still out there, but it has drawbacks -- the issues are discussed here.

And the big thing that RISC OS has is legacy. It has some 35 years of history, meaning many thousands of loyal users, and hundreds of applications, including productivity apps, scientific, educational and artistic tools, internet tools, games, and more.

Sibelius, generally regarded as the best tool in the world for scoring and editing sheet music, started out as a RISC OS app.

People have a lot of investment in RISC OS. If you have been using a RISC OS app for three decades to manage your email, or build 3D models, or write or draw or paint or edit photos, or you've been developing your own software in BBC BASIC -- well, that means you're probably quite old by now, and you probably don't want to change.

There are enough such users paying for RISC OS to keep a small market going, offering incremental improvements.

But even if someone can raise the money to pay the programmers -- and adding wifi, or Bluetooth, or multi-monitor support with graphical acceleration, or hardware-accelerated video encoding and decoding, would be relatively easy to do -- that still leaves you with a 1980s OS design:

  • No pre-emptive multitasking

  • No memory protection or hardware-assisted memory management

  • No multi-threading or multiple CPU support

  • No virtual memory, although that's less important as a £50 computer now has four times more RAM than RISC OS can support.

Small, fast, pleasant to use -- but with a list of disadvantages to match:

  • Unable to take full advantage of modern hardware.

  • Unreliable -- especially under heavy load.

  • Unable to scale up to more processors or more memory.

The problem is the same one that Commodore and Atari faced in the 1990s. To make a small, fast OS for an inexpensive computer with little memory, no hard disk, and a single CPU with no fancy features, you have to do a lot of low-level work, close to the metal. You need to write a closely-integrated piece of software, much of it in assembly language, which is tightly coupled to the hardware it was built for.

The result is something way smaller and faster than big lumbering modular PC operating systems, which have to work with a huge variety of hardware from hundreds of different companies -- so the OS is not closely integrated with the hardware. But conversely, that modular design has advantages, too: because it is adaptable to new devices, as the hardware improves, the OS can improve.

So when you ran Windows 3 on a 386 PC with 4MB of RAM -- a big deal in 1990! -- it could use the hardware 16-bit virtualisation of the 386 processor to pretend to be 2, 3 or 4 separate DOS PCs at the same time -- so you could keep your DOS apps when you moved to Windows. They didn't look or feel like Windows apps, but you already knew how to use them and you could still access all your data and continue to work with it.

Then when you got a 486 in 1995 (or a Pentium with Windows NT if you were rich) it could pretend to be multiple 386 computers running separate copies of 16-bit Windows as well as still running those DOS apps. And it could dial into the Internet using new 32-bit apps, too. By the turn of the century, it could use broadband -- the apps didn't know any difference, as it was all virtualised. Everything just went faster.

Six or seven years after that, your PC could have multiple cores, and multiple 32-bit apps could be divided up and run across two or even four cores, each one at full speed, as if it had the computer to itself. Then, a few years later, you could get a new 64-bit PC with 64-bit Windows, which could still pretend to be a 32-bit PC for 32-bit apps.

When these things started to appear in the 1990s, the smaller OSes that were more tightly integrated with their hardware couldn't be adapted so easily when that hardware changed. When more capable 68000-series processors appeared, such as the 68030 with built-in memory management, Atari's TOS, Commodore's AmigaOS and Apple's MacOS couldn't use the new facilities. They could only use the new chips as faster 68000s.

This is the trap that RISC OS is in. Amazingly, by being a small fish in a very small pond -- and thanks to Castle's mysterious one big customer -- it has survived into its fourth decade. The only other end-user OS to survive since then has been NeXTstep, or macOS as it's now called, and it's had a total facelift and does not resemble its 1980s incarnation at all: a 32-bit 68030 OS became a PowerPC OS, which became an Intel 32-bit x86 OS, which became a 64-bit x86 OS and will soon be a 64-bit ARM OS. No 1980s or 1990s NeXTstep software can run on macOS today.

When ARM chips went 32-bit only, RISC OS needed an extensive rewrite, and all the 26-bit apps stopped working. Now, ARM chips are 64-bit, and soon, the high-end models will drop 32-bit support altogether.

As Wimp2 showed, if RISC OS's multitasking module was replaced with a pre-emptive one, a lot of existing apps would stop working.

AmigaOS is now developed by a company called Hyperion, who have ported it to PowerPC -- although there aren't many PowerPC chips around any more.

It's too late for virtual memory, and we don't really need it any more -- but the hardware mechanisms that allow virtual memory, letting programs spill over onto disk if the OS runs low on memory, are the same ones that enforce the protection of each program's RAM from all other programs.

Just like Apple did in the late 1990s, Hyperion have discovered that if they rewrite their OS to take advantage of PowerPC chips' hardware memory-protection, then it breaks all the existing apps whose programmers assumed that they could just read and write whatever memory they wanted. That's how Amiga apps communicate with the OS -- it's what made AmigaOS so small and fast. There are no barriers between programs -- so when one program crashes, they all crash.

The same applies to RISC OS -- although it does some clever trickery to hide programs' memory from each other, they can all see the memory that belongs to the OS itself. Change that, and all existing programs stop working.

To make RISC OS able to take advantage of multiple processors, the core OS itself needs an extensive rewrite to allow all its modules to be re-entrant -- that is, for different apps running on different cores to be able to call the same OS modules at the same time and for it to work. The problem is that the design of the RISC OS kernel dates back to about 1981 and a single eight-bit 6502 processor. The assumption that there's only one processor doing one thing at a time is deeply written into it.

That can be changed, certainly -- but it's a lot of work, because the original design never allowed for this. And once again, all existing programs will have to be rewritten to work with the new design.
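
A tiny illustration of what re-entrancy means, as an invented Python example rather than real RISC OS code: the first function keeps its working state in a single shared global, so two simultaneous callers on two cores would corrupt each other; the second keeps all state local, so any number of concurrent callers are safe:

    # Non-re-entrant: one global scratch buffer, like an OS module that
    # assumes it only ever has a single caller at a time.
    scratch = []

    def shout_unsafe(text):
        scratch.clear()
        for ch in text:
            scratch.append(ch.upper())  # two concurrent callers interleave here
        return "".join(scratch)

    # Re-entrant: all state is local to each call, so concurrent callers
    # on different cores cannot trample on each other.
    def shout_safe(text):
        return "".join(ch.upper() for ch in text)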

Linux, to pick an example, focuses on source code compatibility. Since it's open source, and all its apps are open source, then if you get a new CPU, you just recompile all your code for the new chip. Linux on a PowerPC computer can't run x86 software, and Linux on an ARM computer can't run PowerPC software. And Linux on a 64-bit x86 computer doesn't natively support 32-bit software, although the support can be added. If you try to run a commercial, proprietary, closed-source Linux program from 15 or 20 years ago on a modern Linux, it won't even install and definitely won't function -- because all the supporting libraries and modules have slowly changed over that time.

Windows does this very well, because Microsoft have spent tens of billions of dollars on tens of thousands of programmers, writing emulation layers to run 16-bit code on 32-bit Windows, and 32-bit code on 64-bit Windows. Windows embeds layers of virtualisation to ensure that as much old code as possible will still work -- only when 64-bit Vista arrived in 2006 did Windows finally drop support for DOS programs from the early 1980s. Today, Windows on ARM computers emulates an x86 chip so that PC programs will still work.

In contrast, every few versions of macOS, Apple removes any superseded code. The first x86 version of what was then called Mac OS X was 10.4, which was also the last version that ran Classic MacOS apps. By version 10.6, OS X no longer ran on PowerPC Macs, and OS X 10.7 no longer ran PowerPC apps. OS X 10.8 only ran on 64-bit Macs, and 10.15 won't run 32-bit apps.

This allows Apple to keep the OS relatively small and manageable, whereas Microsoft is struggling to maintain the vast Windows codebase. When Windows 10 came out, Microsoft announced that 10 was the last-ever major new version of Windows.

It would be possible to rewrite RISC OS to give it pre-emptive multitasking -- but either all existing apps would need to be rewritten, or it would need to incorporate some kind of emulator, like Aemulor, to run old apps on the new OS.

Pre-emptive multitasking -- which is a little slower -- would make multi-threading a little easier, which in turn would allow multi-core support. But that would need existing apps to be rewritten to use multiple threads, which allows them to use more than one CPU core at once. Old apps might still work, but not get any faster -- you could just run as many as you have CPU cores side-by-side with only a small drop in speed.
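
Sketched in Python with an invented CPU-bound workload (illustrative only, not how RISC OS apps are written): the unmodified app is one serial job that can occupy only one core, while the rewritten app splits its work into independent chunks that the OS can schedule across several cores:

    # One serial job vs. parallel chunks (illustrative; invented workload).
    from concurrent.futures import ProcessPoolExecutor

    def count_primes(limit):
        # Deliberately naive, CPU-heavy work.
        return sum(all(n % d for d in range(2, n)) for n in range(2, limit))

    if __name__ == "__main__":
        serial = count_primes(4000)  # the "old app": one core, however many exist

        with ProcessPoolExecutor() as pool:  # the "rewritten app": chunks per core
            parallel = list(pool.map(count_primes, [4000, 4000, 4000, 4000]))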

Then, a rewrite of RISC OS for 64-bit ARM chips would require a 32-bit emulation layer for old apps to run -- and they would run very slowly at that, once ARM chips no longer execute 32-bit code directly. A software emulation of 32-bit ARM would be needed, with perhaps a 10x performance drop.

All this, on a codebase that was never intended to allow such things, and done by a tiny crew of volunteers. It will take many years. Each new version will inevitably lose some older software which will stop working. And each year, some of those old enthusiasts who are willing to spend money on it will die. I got my first RISC OS machine in 1989, when I was 21. I'm 52 now. People who came across from the previous generation of Acorn computers, the BBC Micro, are often in their sixties.

Once the older users retire, who will spend money on this? Why would you, when you can use Linux, which does far more and is free? Yes, it's slower and it needs a lot more memory -- but my main laptop is from 2011, cost me £129 second-hand in 2017, and is fast and reliable in use.

To quote an old joke:
"A traveller stops to ask a farmer the way to a small village. The farmer thinks for a while and then says: 'If you want to go there, I would not start from here.'"

There are alternative approaches. Linux is one. There's already a RISC OS-like desktop for Linux: it's called ROX Desktop (ROX as in "RISC OS on X"), and it's very small and fast. It needs a bit of an update, but nothing huge.

ROX has its own system for single-file applications, like RISC OS's !Apps, with an associated packaging system called 0install -- but this never caught on. However, there are other such systems -- my personal favourite is called AppImage, but there are also Snap apps and Flatpak. Supporting all of them is perfectly doable.

There is also an incomplete tool for running RISC OS apps on Linux, called ROLF... and a project to run RISC OS itself as an app under Linux.

Not all Linux distributions have the complicated Linux directory layout -- one of my favourites is GoboLinux, which has a much simpler, Mac-like layout.

It would be possible to put together a Linux distribution for ARM computers which looked and worked like RISC OS, had a simple directory layout like RISC OS, including applications packaged as single files, and which, with some work, could run existing RISC OS apps.

No, it wouldn't be small and fast like RISC OS -- it would be nearly as big and slow as any other Linux distro, just much more familiar for RISC OS users. This is apparently good enough for all the many customers of Virtual Acorn, who run RISC OS on top of Windows.

But it would be a lot easier to do than the massive rewrite of RISC OS needed to bring it up to par with other 21st century OSes -- and which would result in a bigger, slower, more complex RISC OS anyway.

The other approach would be to ignore Linux and start over with a clean sheet. Adopt an existing open-source operating system, modify it to look and work more like RISC OS, and write some kind of emulator for existing applications.

My personal preference would be A2/Bluebottle, which is the stepchild of what Acorn originally wanted as the OS for the Archimedes. It would need a considerable amount of work, but Professor Wirth designed the system to be tiny, simple and easy to understand. It's written in a language that resembles Delphi. It's still used for teaching students at ETH Zürich, and is very highly regarded [PDF] in academic circles.

It would be a big job -- but not as big a job as rewriting RISC OS...

I was a huge Archimedes fan and still have an A310, an A5000, a RiscPC and a RasPi running RISC OS.

But no, I have to disagree. RISC OS was a hastily-done rescue effort after Acorn PARC failed to make ARX work well enough. I helped to arrange this talk by the project lead a few years ago.

RISC OS is a lovely little OS and a joy to use, but it's not very stable. It has no worthwhile memory protection, no virtual memory, no multi-processor support, and true preemptive multitasking is a sort of bolted-on extra (the Task Window). When someone tried to add pre-emption, it broke a lot of existing apps.

It was not some industry-changing work of excellence that would have disrupted everything. It was just barely good enough. Even after 33 years, it doesn't have wifi or bluetooth support, for instance, and although efforts are going on to add multi-processor support, it's a huge amount of work for little gain. There are a whole bunch of memory size limits in RISC OS as it is -- apps using >512MB RAM are very difficult and that requires hackery.

IMHO what Acorn should have done is refocus on laptops for a while -- they could have made world-beating thin, light, long-life, passively-cooled laptops in the late 1990s -- and meanwhile worked with Be on BeOS for a multiprocessor Risc PC 2. I elaborated on that here on this blog.

But RISC OS was already a limitation by 1996 when NT4 came out.

I've learned from Reddit that David Braben (author of Elite and the Archimedes' stunning "Lander" demo and Zarch game) offered to add enhancements to BBC BASIC to make it easier to write games. Acorn declined. Apparently, Sony was also interested in licensing the ARM and RISC OS for a games console -- probably the PS1 -- but Acorn declined. I had no idea. I thought the only 3rd party uses of RISC OS were NCs and STBs. Acorn's platform was, at the time, almost uniquely suitable for this -- a useful Internet client on a diskless machine.

The interesting question, perhaps, is the balance between pragmatic minimalism as opposed to wilful small-mindedness.

I really recommend the Chaos Computer Congress Ultimate Archimedes talk on this subject.

There's a bunch of stuff in the original ARM2/IOC/VIDC/MEMC design (no DMA, for instance, or the 26-bit program counter) that looks odd but reflects pragmatic decisions about simplicity and cost above all else... but, a bit like the Amiga design, one year's inspired design decision may turn out, a few years later, to be a horrible millstone around the team's neck. Even the cacheless design, which was carefully tuned to the access speeds of mid-1980s fast-page-mode DRAM, fits that pattern.

They achieved greatness by leaving a lot out -- but not just from some sense of conceptual purity. Acorn's Steve Furber said it best: "Acorn gave us two things that nobody else had. No people and no money."

Acorn implemented their new computer on four small, super-simple chips and a minimalist design, not because they wanted to, but because it was a design team of about a dozen people with almost no budget. They found elegant workarounds and came up with a clever design because that's all they could do.

I think it may not be a coincidence that a design based on COTS parts and components, assembled into an expensive, limited whole, eventually evolved into the backbone of the entire computer industry. It was poorly integrated, but that meant that parts could be removed and replaced without breaking the whole: the CPU, the display, the storage subsystems, the memory subsystem, and in the end the entire motherboard logic and expansion bus.

I refer, of course, to the IBM PC design. It was poor then, but now it's the state of the art. All the better-integrated designs with better CPUs are gone, all the tiny OSes with amazing performance and abilities in a tiny space are gone.

When someone added proper pre-emptive multitasking to RISC OS, it could no longer run most existing apps. If CBM had added 68030 memory management to AmigaOS, it would have broken inter-app communication.

Actually, the much-maligned Atari ST's TOS got further, with each module re-implemented by different teams in order to give it better display support, multitasking etc. while remaining compatible. TOS gained MiNT -- "MiNT Is Not TOS" -- which Atari later adopted as the kernel of the multitasking MultiTOS. There was also MagiC, a proprietary TOS-compatible OS-in-a-VM for Mac and PC, and later, volunteers integrated 3rd-party modules to create a fully free edition, AFROS.

But it doesn't take full advantage of later CPUs and so on -- partly because Atari didn't.
Apple famously tried to improve MacOS into something with proper multitasking, nearly went bankrupt doing so, bought their co-founder's company NeXT and ended up totally dumping their own OS, frameworks, APIs and tooling -- and most of the developers -- and switching to a UNIX.

Sony could doubtless have done wonderful stuff with RISC OS on a games console -- but note that the PlayStation 4 runs Orbis OS, which is based on FreeBSD 9, and none of Sony's improvements have made it back to FreeBSD.

Apple's macOS is also in part based on FreeBSD, and none of its improvements have made it back upstream either. macOS has a better init system, launchd, a networked metadata directory, NetInfo, and a fantastic PDF-based display server, Quartz, as well as some radical filesystem tech.
You won't find any of that in FreeBSD. It may have gained some driver code, but the PC version is the same ugly old UNIX OS.

If Acorn had made its BASIC into a games engine, that would have reduced its legitimacy in the sciences market. Gamers don't buy expensive kit; universities and laboratories do. Games consoles sell at a loss, like inkjet printers -- the makers earn a profit on the games or ink cartridges. It's called the Gillette razors model.

As a keen user, it greatly saddened me when Acorn closed down its workstation division, but the OS was by then a huge handicap, and there simply wasn't an available replacement. As I noted in that blog post I linked to, they could have made attractive laptops, but it wouldn't have helped workstation sales, not back then.

The Phoebe, the cancelled RISC PC 2, had PCI and dual-processor support. Acorn could have sold SMP PCs way cheaper than any x86 vendor, for most of whom the CPU was the single most expensive component. But it wasn't an option, because RISC OS couldn't use 2 CPUs and still can't. If they'd licensed BeOS, and maybe saved Be, who knows -- a decade as the world's leading vendor of inexpensive multiprocessor workstations doesn't sound so bad -- well, the resultant machines would have been very nice, but they wouldn't be RISC PCs because they wouldn't run Archimedes apps, and in 1998 the overheads of running RISC OS in a VM would have been prohibitive. Apple made it work, but some 5 years later, when it was normal for a desktop Mac to come with 128MB or 256MB of RAM and a few gigs of disk, and it was doable to load a 32-64MB VM with another few hundred megs of legacy OS in it. That was rather less true in 1997 or 1998, when a high-end PC had 32 or 64MB of RAM, a gig of disk, and could only take a single CPU running at a couple of hundred megahertz.

I reckon Acorn and Be could have done it -- BeOS was tiny and fast, RISC OS was positively minute and blisteringly fast -- but whether they could have done it in time to save them both is much more doubtful.
I'd love to have seen it. I think there was a niche there. I'm a huge admirer of Neal Stephenson and his seminal essay In The Beginning Was The Command Line is essential reading. It dissects some of the reasons Unix is the way it is and accurately depicts Linux as the marvel it was around the turn of the century. He lauds BeOS, and rightly so. Few ever saw it but it was breathtaking at the time.

Amiga fans loved their machine, not only for its graphics and sound, but multitasking too. This rather cheesy 1987 video does show why...


Just a couple of years later, the Archimedes did pretty much all that and more and it did it with raw CPU grunt, not fancy chips. There are reasons its OS is still alive and still in use. Now, it runs on a mass-market £25 computer. AmigaOS is still around, but all the old apps only run under emulation and it runs on niche kit that costs 5-10x more than a PC of comparable spec.

A decade later, PCs had taken over and were stale and boring. Sluggish and unresponsive despite their immense power. Acorn computers weren't, but x86 PCs were by then significantly more powerful, had true preemptive multitasking, built-in networking and WWW capabilities and so on. But no pizazz. They chugged. They were boring office kit, and they felt like it.

But take a vanilla PC and put BeOS on it, and suddenly, it booted in seconds, ran dozens of apps with ease without flicker or hesitation, played back multiple video streams while rendering them onto OpenGL 3D solids. And, like the Archimedes did a decade before, all in software, without hardware acceleration. All the Amiga's "wow factor" long after we'd given up ever seeing it again.

This, at a time when Linux hadn't even got a free desktop GUI yet, required hand-tuning thousands of lines of config files (like OS/2 at its worst), and had no productivity apps.

But would this have been enough to keep Acorn and Be going until mass-market multi-core x86 chips came along and stomped them? Honestly, I really doubt it. If Apple had bought Be, it would have got a lovely next-gen OS, but it wouldn't have got Steve Jobs, and it wouldn't have been able to tempt classic MacOS devs to the new OS with amazing next-gen dev tools. I reckon it would have died not long after.

If Acorn and Be had done a deal, or merged, or whatever, would the cheapest dual-processor RISC workstation in the industry, with amazing media abilities, have had enough appeal? (Presumably followed, soon after, by quad-CPU and even 6- or 8-CPU boxes.)

I hate to admit it, but I really doubt it.

Acorn pulled out of making desktop computers in 1998, when it cancelled the Risc PC 2, the Acorn Phoebe.

The machine was complete, but the software wasn't. The OS was later finished and released as RISC OS 4, an upgrade for existing Acorn machines, by RISC OS Ltd.

By that era, ARM had lost the desktop performance battle. If Acorn had switched to laptops by then, I think it could have remained competitive for some years longer -- 486-era PC laptops were pretty dreadful. But the Phoebe shows that what Acorn was actually trying to build was a next-generation powerful desktop workstation.

Tragically, I must concede that they were right to cancel it. If there had been a default version with 2 CPUs, upgradable to 4, and that had been followed by 6- and 8-CPU models, they might have made it, but RISC OS couldn't do that, and Acorn didn't have the resources to rewrite RISC OS to do it. A dedicated Linux machine in 1998 would have been suicidal -- Linux didn't even have a FOSS desktop in those days. If you wanted a desktop Unix workstation, you still bought a Sun or the like.

(I wish I'd bought one of the ATX cases when they were on the market.)

More retrocomputing meanderings -- whatever became of the ST, Amiga and Acorn operating systems?

The Atari ST's GEM desktop also ran on MS-DOS, DR's own DOS+ (a forerunner of the later DR-DOS) and today is included with FreeDOS. In fact the first time I installed FreeDOS I was *very* surprised to find my name in the credits. I debugged some batch files used in installing the GEM component.

The ST's GEM was the same environment. ST GEM was derived from GEM 1; PC GEM from GEM 2, crippled after an Apple lawsuit. Then they diverged. FreeGEM attempted to merge them again.

But the ST's branch prospered, before the rise of the PC killed off all the alternative platforms. Actual STs can be quite cheap now, or you can even buy a modern clone:

http://harbaum.org/till/mist/index.shtml

If you don't want to lash out but have a PC, the Aranym environment gives you something of the feel of the later versions. It's not exactly an emulator, more a sort of compatibility environment that enhances the "emulated" machine as much as it can using modern PC hardware.

http://aranym.org/

And the ST GEM OS was so modular that different 3rd parties cloned every component, separately -- some commercially, some as FOSS. The Aranym team basically put together a sort of "distribution" of as many FOSS components as they could, to assemble a nearly-complete OS, then wrote the few remaining bits to glue it all together into a functional whole.

So, finally, after the death of the ST and its clones, there was an all-FOSS OS for it. It's pretty good, too. It's called AFROS, Atari Free OS, and it's included as part of Aranym.

I longed to see a merger of FreeGEM and Aranym, but it was never to be.

The history of GEM and TOS is complex.

Official Atari TOS+GEM evolved into TOS 4, which included the FOSS MiNT multitasking layer, and which isn't much like the original ROM version in the first STs.

The underlying TOS OS is not quite like anything else.

AIUI, CP/M-68K was a real, if rarely-seen, OS.

However, it proved inadequate to support GEM, so it was discarded. A new kernel was written using some of the tech from what was later to become DR-DOS on the PC -- something less like CP/M and more like MS-DOS: directories separated with backslashes, FAT-format disks, multiple executable types, 8.3 filenames, all that stuff.

None of the command-line elements of CP/M or any DR DOS-like OS were retained -- the kernel booted the GUI directly and there was no command line, like on the Mac.

This is called GEMDOS, and AIUI it inherits both from CP/M-68K and from DR's x86 DOS-compatible OSes.

The PC version of GEM also ran on Acorn's BBC Master 512, which had an Intel 80186 coprocessor. It was a very clever machine, in a limited way.

Acorn's series of machines are not well-known in the US, AFAICT, and that's a shame. They were technically interesting, more so IMHO than the Apple II and III, TRS-80 series etc.

The original Acorns were 6502-based, but with good graphics and sound, a plethora of ports, a clear separation between OS, BASIC and add-on ROMs such as the various DOSes, etc. The BASIC was, I'd argue strongly, *the* best 8-bit BASIC ever: named procedures, local variables, recursion, inline assembler, etc. Also the fastest BASIC interpreter ever, and quicker than some compiled BASICs.

Acorn built for quality, not price; the machines were aimed at the educational market, which wasn't so price-sensitive, a model that NeXT emulated. Home users were welcome to buy them & there was one (unsuccessful) home model, but they were unashamedly expensive and thus uncompromised.

The only conceptual compromise in the original BBC Micro was that there was provision for ROM bank switching, but not RAM. The 64kB memory map was 50:50 split ROM and RAM. You could switch ROMs, or put RAM in their place, but not have more than 64kB. This meant that the high-end machine had only 32kB RAM, and high-res graphics modes could take 21kB or so, leaving little space for code -- unless it was in ROM, of course.
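
The arithmetic, using the rough figures above, shows how tight that was:

    # RAM left over on a 32kB BBC Micro in a high-resolution screen mode,
    # using the rough figures quoted above.
    total_ram = 32 * 1024
    screen = 21 * 1024   # hi-res framebuffer, give or take
    print(f"{(total_ram - screen) // 1024} kB left for program and data")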

The later BBC B+ and BBC Master series fixed that. They also took ROM cartridges, rather than bare chips inserted in sockets on the main board, and added a numeric keypad.

Acorn looked at the 16-bit machines in the mid-80s, mostly powered by Motorola 68000s of course, and decided they weren't good enough and that the tiny UK company could do better. So it did.

But in the meantime, it kept the 6502-based, resolutely-8-bit BBC Micro line alive with updates and new models, including ROM-based terminals and machines with a range of built-in coprocessors: faster 6502-family chips for power users, Z80s for CP/M, Intel's 80186 for kinda-sorta PC compatibility, the NatSemi 32016 with PANOS for ill-defined scientific computing, and finally, an ARM copro before the new ARM-based machines were ready.

Acorn designed the ARM RISC chip in-house, then launched its own range of ARM-powered machines, with an OS based on the 6502 range's. Although limited, this OS is still around today and can be run natively on a Raspberry Pi:

https://www.riscosopen.org/content/

It's very idiosyncratic -- the filesystem, the command line and the default editor are all totally unlike anything else. The file-listing command is CAT, the directory separator is a full stop (i.e. a period), and the root directory is called $. The editor is a very odd dual-cursor thing. It's fascinating: totally unrelated to the entire DEC/MS-DOS family and to the entire Unix family. There is literally and exactly nothing else even slightly like it.

It was the first GUI OS to implement features that are now universal across GUIs: anti-aliased font rendering, and full-window dragging and resizing (as opposed to dragging an outline). Significantly, it was also the first graphical desktop to implement a taskbar, before NeXTstep and long before Windows 95.

It supports USB, can access the Internet and WWW. There are free clients for chat, email, FTP, the WWW etc. and a modest range of free productivity tools, although most things are commercial.

But there's no proper inter-process memory protection, GUI multitasking is cooperative, and consequently it's not amazingly stable in use. It does support pre-emptive multitasking, but via the text editor, bizarrely enough, and only of text-mode apps. There was also a pre-emptive multitasking version of the desktop, but it wasn't very compatible, didn't catch on and is not included in current versions.

But all that said, it's very interesting, influential, shared-source, entirely usable today, and it runs superbly on the £25 Raspberry Pi, so there is little excuse not to try it. There's also a FOSS emulator which can run the modern freeware version:

http://www.marutan.net/rpcemu/

For users of the old hardware, there's a much more polished commercial emulator for Windows and Mac which has its own, proprietary fork of the OS:

http://www.virtualacorn.co.uk/index2.htm

There's an interesting parallel with the Amiga. Both Acorn and Commodore had ambitious plans for a modern multitasking OS, which both companies described as Unix-like. In both cases, the project didn't deliver, and the ground-breaking, industry-redefining hardware instead shipped with a much less ambitious OS. Both of those stand-in OSes were nonetheless widely loved, and both still survive today, 30 years later, in multiple actively-maintained forks -- even though Unix in fact caught up and long surpassed these 1980s oddballs.

AmigaOS, based in part on the academic research OS TRIPOS, has 3 modern forks: the FOSS AROS, on x86, and the proprietary MorphOS and AmigaOS 4 on PowerPC.

Acorn RISC OS, based in part on Acorn MOS for the 8-bit BBC Micro, has 2 contemporary forks: RISC OS 5, owned by Castle Technology but developed by RISC OS Open, shared source rather than FOSS, running on the Raspberry Pi, BeagleBoard and some other ARM boards, plus some old hardware and RPCEmu; and RISC OS 4, now owned by the company behind VirtualAcorn, run by an ARM engineer who apparently made good money selling software ARM-on-x86 emulators to ARM Holdings.

Commodore and the Amiga are both long dead and gone, but the name periodically changes hands and reappears on various bits of modern hardware.

Acorn is also long dead, but its scion ARM Holdings designs the world's most popular series of CPUs, totally dominates the handheld sector, and outsells Intel, AMD & all other x86 vendors put together something like tenfold.

Funny how things turn out.
liam_on_linux: (Default)
A friend of mine who is a Commodore enthusiast commented that if the company had handled it better, the Amiga would have killed the Apple Mac off.

But I wonder. I mean, the $10K Lisa ('83) and the $2.5K Mac ('84) may only have been a year or two before the $1.3K Amiga 1000 ('85), but in those years, chip prices were plummeting -- maybe rapidly enough to account for the discrepancy.

The 256kB Amiga 1000 was half the price of the original 128kB Mac a year earlier.

Could Tramiel's Commodore have sold Macs at a profit for much less? I'm not sure. Later, yes -- but Mac prices fell later too, and anyway, Apple has long been a premium-products-only sort of company. Besides, the R&D process behind the Lisa and the Mac was long, complex and expensive. (Yes, true, there was a long R&D effort behind the Amiga chipset too, but less so behind the OS -- the original CAOS got axed, remember, and the TRIPOS-based replacement was a last-minute stand-in, as was Arthur/RISC OS on the Acorn Archimedes.)

The existence of the Amiga also pushed development of the Mac II, the first colour model. (Although I think it probably more directly prompted the Apple ][GS.)

It's much easier to copy something that someone else has already done. Without the precedent of the Lisa, the Mac would have been a much more limited 8-bit machine with a 6809. Without the precedent of the Mac, the Amiga would have been a games console.


I think the contrast between the Atari ST and the Sinclair QL, in terms of business decisions, product focus and so on, is more instructive.

The QL could have been one of the important second-generation home computers. It was launched a couple of weeks before the Mac.

But Sinclair went too far with its hallmark cost-cutting on the project, and the launch date was too ambitious. The result was a 16-bit machine that was barely more capable than an 8-bit one from the previous generation. Most of the later 8-bit machines had better graphics and sound; some (Memotech, Elan Enterprise) had as much RAM, and some (e.g. the SAM Coupé) also supported built-in mass storage.

But Sinclair's OS, QDOS, was impressive: an excellent BASIC, front and centre as on an 8-bit machine, but also full multitasking and a modular design that readily handled new peripherals -- though no GUI by default.

The Mac, similarly RAM-deprived and with even poorer graphics, blew it away. Also, with the Lisa and the Mac, Apple had spotted that the future lay in GUIs, which Sinclair had missed -- the QL didn't get its "pointer environment" until later, and when it did, it was primitive-looking. Even the modern version still is.

Atari, entering the game a year or so later, had a much better idea of where to spend the money. The ST was an excellent demonstration of cost-cutting. Unlike the bespoke custom chipsets of the Mac and the Amiga, or Sinclair's manic focus on cheapness, Atari took off-the-shelf hardware and off-the-shelf software and assembled something that was good enough: a decent GUI, an OS that worked well in 512kB, graphics and sound that were good enough, a marginally faster CPU than the Amiga's, and a floppy format interchangeable with PCs.

Yes, the Amiga was a better machine in almost every way, but the ST was good enough, and at first, significantly cheaper. Commodore had to cost-trim the Amiga to match, and the first result, the Amiga 500, was a good games machine but too compromised for much else.

The QL was built down to a price, and suffered for it. Later replacement motherboards and third-party clones such as the Thor fixed much of this, but it was no match for the GUI-based machines.

The Mac was in some ways a sort of cut-down Lisa, an attempt to get that ten-thousand-dollar machine down to a more affordable quarter of the price. Sadly, this meant losing the hard disk and the innovative multitasking OS; both were added back later in compromised form, and the latter's compromises cursed the classic MacOS until it was replaced with Mac OS X at the turn of the century.

The Amiga was a no-compromise games machine, later cleverly shoehorned into the role of a very capable multimedia GUI computer.

The ST was also built down to a price, but learned from the lessons of the Mac. Its spec wasn't as good as the Amiga's, and its OS wasn't as elegant as the Mac's, but it was good enough.

The result was that games developers aimed at both, limiting the quality of Amiga games to the capabilities of the ST. The Amiga wasn't differentiated enough -- yes, Commodore did high-end three-box versions, but the basic machines remained too low-spec. The third-generation Amiga 1200 had a faster 68020 chip, which the OS didn't really utilise, and provision for a built-in hard disk -- but the disk itself was an optional extra. AmigaOS was a pain to use with only floppies, like the Mac's OS, whereas the ST's ROM-based OS was fairly usable with a single drive. A dual-floppy-drive Amiga was the minimum usable spec, really, and it benefited hugely from a hard disk -- but Commodore didn't fit one.

The ST killed the Amiga, in effect. By providing an experience that was nearly as good in the important, visible ways, it forced Commodore to price-cut the Amiga to keep it competitive, hobbling the lower-end models. And as games were written to be portable between the two without too much work, they mostly didn't exploit the Amiga's superior abilities.

Acorn went its own way with the Archimedes -- it shared almost no apps or games with the mainstream machines, and while its OS is still around, it hasn't kept up with the times and is mainly a curiosity. Acorn kept its machines a bit higher-end, having affordable three-box models with hard disks right from the start, and focused on the educational niche where it was strong.

But Acorn's decision to go its own way was entirely vindicated -- its ARM chip is now the world's best-selling CPU. Both Microsoft and Apple OSes run on ARMs now. In a way, it won.

The poor Sinclair QL, of course, failed in the market and Amstrad killed it off when it was still young. But even so, it inspired a whole line of successors -- the CST Thor, the ICL One-Per-Desk (AKA Merlin Tonto, AKA Telecom Australia ComputerPhone), the Qubbesoft Aurora replacement main board, and later the Q40 and Q60 QL-compatible PC-style motherboards. It had the first-ever multitasking OS for a home computer, QDOS, which evolved into SMSQ/E and moved over to the ST platform instead. It's now open source, too.

And Linus Torvalds owned a QL, giving him a taste for multitasking so that he wrote his own multitasking OS when he got a PC. That, of course, was Linux.

The Amiga OS is still limping along, now running on a CPU line -- PowerPC -- that is also all but dead. The open-source version, AROS, is working on an ARM port, which might make it slightly more relevant, but it's hard to see a future or purpose for the two PowerPC versions, MorphOS and AmigaOS 4.

The ST's OS also evolved: into MagiC, a multitasking TOS-compatible environment for PCs and Macs, and into AFROS, a multitasking FOSS version running on Aranym, a PC-hosted emulator. A great and very clever little project, but it went nowhere, as did PC GEM, sadly.

All of these clever OSes -- AROS, AFROS, QDOS AKA SMSQ/E -- went FOSS too late and are all but forgotten. Me, I'd love Raspberry Pi versions of any and all of them to play with!

In its final death throes, a flailing Atari even embraced the Transputer. The Atari ABAQ could run Perihelion's Helios, another interesting long-dead OS. Acorn's machines ran one of the most amazing OSes I've ever seen, TAOS, which nearly became the next-generation Amiga OS. That could have shaken up the industry -- it was truly radical.

And in a funny little side-note, the next next-gen Amiga OS after TAOS was to be QNX. It didn't happen, but QNX added a GUI and rich multimedia support to its embedded microkernel OS for the deal. That OS is now what powers my BlackBerry Passport smartphone. BlackBerry 10 is now all but dead -- BlackBerry has conceded the inevitable and gone Android -- but BB10 is a beautiful piece of work, way better than its rivals.

But all the successful machines that sold well? The ST and Amiga lines are effectively dead. The Motorola 68K processor line they used is all but dead, too. So is its successor, PowerPC.

So it's the two niche machines that left the real legacy. In a way, Sinclair Research did have the right idea after all -- but prematurely. It thought that the justification for 16-bit home/business computers was multitasking. In the end, it was, but only in the later 32-bit era: the defining characteristic of the 16-bit era was bringing the GUI to the masses. True robust multitasking for all followed later. Sinclair picked the wrong feature to emphasise -- even though the QL post-dated the Apple Lisa, so the writing was on the wall for all to see.

But in the end, the QL inspired Linux and the Archimedes gave us the ARM chip, the most successful RISC chip ever and the one that could still conceivably drive the last great CISC architecture, x86, into extinction.

Funny how things turn out.
liam_on_linux: (Default)
Symbian was OK. EPOC, its progenitor, was in some ways better. (I write as a Psion owner, user and -- TBH -- fan.)

AIUI -- and I do not have good solid references on this -- EPOC was a very early adopter of C++, as opposed to plain old C, and as a result it did many things in extremely nonstandard ways compared to later C++ practice. Its string handling, error handling and all sorts of other things were very weird and proprietary compared to the way the wider C++ community ended up doing them.
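
For a flavour, here's a minimal sketch of classic EPOC/Symbian-style C++, written from memory and simplified. It shows two of the oddities: descriptors (_LIT, TBuf, TDes) instead of char pointers or std::string, and "leaves" caught with TRAP/TRAPD instead of C++ exceptions. _LIT, TBuf, TRAPD and User::Leave are the real Symbian names, but the functions here are my own illustrative ones, and this won't build without the Symbian SDK headers:

    #include <e32base.h>  // Symbian base classes: descriptors, error codes, TRAP

    _LIT(KGreeting, "Hello, EPOC");  // a compile-time literal descriptor, not a char*

    // By convention, a function that can fail "leaves" -- hence the L suffix --
    // rather than returning an error code or throwing a standard C++ exception.
    void BuildGreetingL(TDes& aOutput)
        {
        if (aOutput.MaxLength() < KGreeting().Length())
            {
            User::Leave(KErrOverflow);  // the EPOC equivalent of 'throw'
            }
        aOutput.Copy(KGreeting);        // bounds-aware descriptor copy
        }

    void Demo()
        {
        TBuf<32> buf;  // a string with a fixed maximum length, on the stack
        // TRAPD declares 'err' and catches any leave -- the equivalent of try/catch.
        TRAPD(err, BuildGreetingL(buf));
        if (err != KErrNone)
            {
            // handle the error code here
            }
        }

Even the indentation style, with the braces indented to the level of the code they enclose, was Symbian's own.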
