liam_on_linux: (Default)
[Another recycled mailing list post]

I was asked what options there were for blind people who wish to use Linux.

The answer is simple but fairly depressing: basically every blind computer user I know, personally or via friends of friends, uses Windows or a Mac. There is a significant move from Windows to Mac.

Younger computer users -- by which I mean people who started using computers in the 1990s or later, once internet use was widespread, i.e. most of them -- tend to expect graphical user interfaces, menus and so on, and not to be happy with command-line-driven programs.

This applies every bit as much to blind users.

Linux can work very well for blind users if they use the terminal. The Linux shell is the richest and most powerful command-line environment there is or ever has been, and one can accomplish almost anything one wants to do using it.

But it's still a command line, and a notably unfriendly and unhelpful one at that.

In my experience, for a lot of GUI users, that is just too much.

For instance, a decade or so back, the Register ran some articles I wrote on switching to Linux. They were, completely intentionally, what is sometimes today called "opinionated" -- that is, I did not try to present balance or a spread of options. Instead I presented what were, IMHO, the best choices.


Multiple readers complained that I included a handful of commands to type in. "This is why Linux is not usable! This is why it is not ready for the real world! Ordinary people can't do this weird arcane stuff!" And so on.

Probably some of these remarks are still there in the comments pages.

In vain did some others try to reason with them.

But it was 10x quicker to copy-and-paste these commands!
-> No, it's too hard.

He could give GUI steps but it would take pages.
-> Then that's what he should have done, because we don't do this weird terminal nonsense.

But then the article would have been 10x longer and you wouldn't read it.
-> Well then the OS is not ready, it's not suitable for normal people.

If you just copy-and-paste, it's like 3 mouse clicks and you can't make a typing error.
-> But it's still weird and scary and I DON'T LIKE IT.

You can't win.

This is why Linux Mint succeeded -- partly because when Ubuntu introduced its non-Windows-like desktop after Microsoft threatened to sue, Mint hoovered up those users who wanted it Windows-like.

But also because Mint didn't make you install the optional extras. It bundled them, and so what if that makes it illegal to distribute in some countries? It Just Worked out of the box, and it looked familiar, and that won them millions of fans.

Mac OS X has done extremely well partly because users never, ever need to go near a command line, for anything, ever. You can if you want, but you never, ever need to.

If that means you can't move your swap file to another drive, so be it. If that means that a tonne of the classic Unix configuration files are gone, replaced by a networked configuration database, so be it.

Apple is not afraid to break things in order to make something better.

The result: Apple became the first trillion-dollar computer company, with hundreds of millions of happy customers.

Linux gives you choices, lets you pick what you want and work the way you want... and despite being free, the result has been about 1% of the desktop market and basically zero of the tablet and smartphone markets.

Ubuntu made a valiant effort to make a desktop of Mac-like simplicity, and it successfully went from a new entrant in a busy marketplace in 2004 to being the #1 desktop Linux within a decade. It has made virtually no dent on the non-Linux world, though.

After 20 years of this, Google (after *bitter* internal argument) introduced ChromeOS, a Linux which takes away all your choices. It only runs on Google-approved hardware, has no apps, no desktop, no package management, no choices at all. It gives you a dead cheap, virus-proof computer that gets you on the Web.

In less time than Ubuntu took to win about 1% of the Windows market over to Linux, Chromebooks persuaded about one third of the world laptop-buying market to switch to Linux. More Chromebooks sell every year -- tens of millions -- than Ubuntu has gained users in total since it launched.

What effect has this had on desktop Linux? Zero. None at all. If that is the price of success, they are not willing to pay it. What Google has done is so unspeakably foul, so wrong, so blasphemous, that they don't even talk about it.

What effect has it had on Microsoft? A lot. Cheaper Windows laptops than ever, new low-end editions of Windows, serious efforts to reduce the disk and memory usage...

And little success. The cheap editions lose what makes Windows desirable, and ultra-cheap Windows laptops make poorer slower Chromebooks than actual Chromebooks.

Apple isn't playing. It makes its money in the high-end.

Unfortunately a lot of people are very technologically conservative. Once they find something they like, they will stay with it at all costs.

This attitude is what has kept Microsoft immensely profitable.

A similar one is what has kept Linux as the most successful server OS in the world. It is just a modernised version of a quick and dirty hack of an OS from the 1960s, but it's capable and it's free. "Good enough" is the enemy of better.

There are hundreds of other operating systems out there. I listed 25 non-Linux FOSS OSes in this piece, and yes, FreeDOS was included.

There are dozens that are better in various ways than Unix and Linux.

  • Minix 3 is a better FOSS Unix than Linux: a true microkernel which can cope with parts of itself failing without crashing the computer.

  • Plan 9 is a better UNIX than Unix. Everything really is a file and the network is the computer.

  • Inferno is a better Plan 9 than Plan 9: the network is your computer, with full processor and OS-independence.

  • Plan 9's UI is based on Oberon: an entire mouse-driven OS in 10,000 lines of rigorous, type-safe code, including the compiler and IDE.

  • A2 is the modern descendant of Oberon: real-time capable, a full GUI, multiprocessor-aware, internet- and Web-capable.

(And before anyone snarks at me: they are all niche projects, direly lacking polish and not ready for the mass market. So was Linux until the 21st century. So was Windows until version 3. So was the Mac until at the very least the Mac Plus with a hard disk. None of this in any way invalidates their potential.)

But almost everyone is too invested in the way they know and like to be willing to start over.

So we are trapped, the monkey with its hand stuck in a coconut shell full of rice, even though it can see the grinning hunter coming to kill and eat it.

We are facing catastrophic climate change that will kill most of humanity and most species of life on Earth, this century. To find any solutions, we need better computers that can help us to think better and work out better ways to live, better cleaner technologies, better systems of employment and housing and everything else.

But we can't let go of the single lousy handful of rice that we are clutching. We can't let go of our broken political and economic and military-industrial systems. We can't even let go of our broken 1960s and 1970s computer operating systems.

And every day, the hunter gets closer and his smile gets bigger.
liam_on_linux: (Default)
My talk should be on in about an hour and a half from when I post this.

«

A possible next evolutionary step for computers is persistent memory: large capacity non-volatile main memory. With a few terabytes of nonvolatile RAM, who needs an SSD any more? I will sketch out a proposal for how to build a versatile, general-purpose OS for a computer that doesn't need or use filesystems or files, and how such a thing could be built from existing FOSS code and techniques, using lessons from systems that existed decades ago and which inspired the computers we use today.

Since the era of the mainframe, computers have used hard disks and at least two levels of storage: main memory, or RAM, and secondary or auxiliary storage -- disk drives, accessed over some form of disk controller, with a filesystem to index the contents of secondary storage for retrieval.

Technology such as Intel's 3D Xpoint -- sold under the brand name Optane -- and HP's future memristor storage will render this separation obsolete. When a computer's permanent storage is all right there in the processors' memory map, there is no need for disk controllers or filesystems. It's all just RAM.

It is very hard to imagine how existing filesystem-centric OSes such as Unix could be adapted to take full advantage of this, so fundamental are files and directories and metadata to how they operate. I will present the outline of an idea how to build an OS that natively uses such a computer architecture, based on existing technology and software, that the FOSS community is ideally situated to build and develop.
»
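
To give a concrete flavour of the "it's all just RAM" idea, here is a minimal sketch, not taken from the talk: keeping a live data structure directly in a memory-mapped persistent region, with no filesystem, no read()/write() and no serialisation. The device path /dev/pmem0 and the notebook structure are purely illustrative assumptions.

    /* A sketch of keeping a live data structure directly in persistent memory:
     * no filesystem, no read()/write(), no serialisation.  The device path and
     * the structure are purely illustrative. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct notebook {
        unsigned magic;               /* recognise an already-initialised region */
        unsigned count;
        char     entries[64][80];
    };

    int main(void)
    {
        int fd = open("/dev/pmem0", O_RDWR);      /* illustrative pmem device */
        if (fd < 0) { perror("open"); return 1; }

        struct notebook *nb = mmap(NULL, sizeof *nb, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
        if (nb == MAP_FAILED) { perror("mmap"); return 1; }

        if (nb->magic != 0xB0071E5u) {            /* first run: initialise in place */
            memset(nb, 0, sizeof *nb);
            nb->magic = 0xB0071E5u;
        }

        /* "Saving" is just an ordinary store; the heap itself is non-volatile.
         * (A real OS would also need cache write-back and ordering guarantees.) */
        unsigned slot = nb->count % 64;
        snprintf(nb->entries[slot], sizeof nb->entries[slot], "note %u", nb->count);
        nb->count++;

        printf("%u notes kept so far\n", nb->count);

        munmap(nb, sizeof *nb);
        close(fd);
        return 0;
    }

The real OS-design work, which the talk addresses, is in what replaces the filesystem as the way such structures are named, found and protected.
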


It talks about Lisp, Smalltalk, Oberon and A2, and touches upon Plan 9, Inferno, Psion EPOC, Newton, Dylan, and more.

You can download the slides (in PDF or LO ODP format) from the FOSDEM programme entry for the talk.
It is free to register and to watch.

I will update this post later, after it is finished, with links to the video, slides, speaker's notes, etc.

UPDATE:

In theory you should be able to watch the video on the FOSDEM site after the event, but it seems their servers are still down. I've put a copy of my recording on Dropbox where you should be able to watch it.

NOTE: apparently Dropbox will only show the first 15 minutes in its preview. Download the video and play it locally to see the whole 49-minute thing. It is an MP4 encoded with H.264.
Unfortunately, in the recording, the short Steve Jobs video is silent. The original clip is below. Here is a transcript:
I had three or four people who kept bugging me that I ought to get my rear over to Xerox PARC and see what they were doing. And so I finally did. I went over there. And they were very kind and they showed me what they were working on.

And they showed me really three things, but I was so blinded by the first one that I didn’t even really see the other two.

One of the things they showed me was object-oriented programming. They showed me that, but I didn’t even see that.

The other one they showed me was really a networked computer system. They had over a hundred Alto computers, all networked using email,
et cetera, et cetera. I didn’t even see that.

I was so blinded by the first thing they showed me, which was the graphical user interface. I thought it was the best thing I'd ever seen in my life.

Now, remember, it was very flawed. What we saw was incomplete. They’d done a bunch of things wrong, but we didn’t know that at the time. And still, though, the germ of the idea was there and they’d done it very well. And within, you know, 10 minutes, it was obvious to me that all computers would work like this someday. It was obvious.


liam_on_linux: (Default)
The first computer I owned was a Sinclair ZX Spectrum, and I retain a lot of fondness for these tiny, cheap, severely-compromised machines. I just backed the ZX Spectrum Next kickstarter, for instance.

But after I left university and got a job, I bought myself my first "proper" computer: an Acorn Archimedes. The Archie remains one of the most beautiful computers [PDF] to use and to program that I've ever known. This was the machine for which Acorn developed the original ARM chip. Acorn also had an ambitious project to develop a new, multitasking, better-than-Unix OS for it, written in Modula-2 and called ARX. It never shipped, and instead, some engineers from Acorn's in-house publisher, AcornSoft, did an inspired job of updating the BBC Micro OS to run on the new ARM hardware. The result was called Arthur. Version 2 was renamed RISC OS [PDF].

(Incidentally, Dick Pountain's wonderful articles about the Archie are why I bought one and why I'm here today. Some years later, I was lucky enough to work with him on PC Pro magazine and we're still occasionally in touch. A great man and a wonderful writer.)

Seven or eight years ago on a biker mailing list, Ixion, I mentioned RISC OS as something interesting to do with a Raspberry Pi, and a chap replied "a friend of mine wrote that!" Some time later, that passing comment led to me facilitating one of my favourite talks I ever attended at the RISC OS User Group of London. The account is well worth a read for the historical context.

(Commodore had a similar problem: the fancy Commodore Amiga Operating System, CAOS, was never finished, and some engineers hastily assembled a replacement around the TRIPOS research OS. That's what became AmigaOS.)

Today, RISC OS runs on a variety of mostly small and inexpensive ARM single-board computers: the Raspberry Pi, the BeagleBoard, the (rather expensive) Titanium, the PineBook and others. New users are discovering this tiny, fast, elegant little OS and becoming enthusiastic about it.

And that's led to two different but cooperating initiatives that hope to modernise and update this venerable OS. One is backed by a new British company, RISC OS Developments, who have started with a new and improved distribution of the Raspberry Pi version called RISC OS Direct. I have it running on a Rasπ 3B+ and it's really rather nice.

The other is a German project called RISC OS Cloverleaf.

What I am hoping to do here is to try to give a reality check on some of the more ambitious goals for the original native ARM OS, which remains one of my personal favourites to this day.

Even back in 1987, RISC OS was not an ambitious project. At heart, it vaguely resembles Windows 3 on top of MS-DOS: underneath, there is a single-tasking, single-user, text-mode OS built to an early-1980s design, and layered on top of that, a graphical desktop which can cooperatively multitask graphical apps -- although it can also pre-emptively multitask old text-mode programs.

Cooperative multitasking is long gone from mainstream OSes now. What it means is that programs must voluntarily surrender control to the OS, which then runs the next app for a moment; when that app gives up control of the computer, a third runs, then a fourth, and so on. It has one partial advantage: it's a fairly lightweight, simple system. It doesn't need much hardware assistance from the CPU to work well.

But the crucial weakness is in the word "cooperative": it depends on all the programs being good citizens and behaving themselves. If one app grabs control of the computer and doesn't let go, there's nothing the OS can do. Good for games and for media playback -- unless you want to do something else at the same time, in which case, tough luck -- but bad news if an app does something demanding, like rendering a complex model or applying a big filter or something. You can't switch away and get on with anything else; you just have to wait and hope the operation finishes and doesn't run out of memory, or fill up the hard disk, or anything. Because if that one app crashes, then the whole computer crashes, too, and you'll lose all your work in all your apps.
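
To make the model concrete, here is a toy round-robin scheduler in C. It is a sketch only -- the task names and structure are mine, not the RISC OS Wimp API. Control comes back to the loop only when a task chooses to return, so one task that never returns starves everything else:

    #include <stdbool.h>
    #include <stdio.h>

    typedef bool (*task_fn)(void);   /* a task returns false when it has finished */

    static bool well_behaved(void)
    {
        static int n;
        printf("well-behaved task: step %d, yielding\n", ++n);
        return n < 3;                /* three short steps, yielding after each */
    }

    static bool greedy(void)
    {
        /* A real offender would sit in `for (;;) render();` and never return,
         * freezing the whole desktop; here it just takes one long turn. */
        puts("greedy task: doing one enormous chunk of work before yielding...");
        return false;
    }

    int main(void)
    {
        task_fn tasks[]   = { well_behaved, greedy };
        bool    live[]    = { true, true };
        int     remaining = 2;

        /* Round-robin: each task runs only when the previous one *chooses*
         * to hand control back by returning.  The scheduler never takes it by force. */
        while (remaining > 0) {
            for (int i = 0; i < 2; i++) {
                if (live[i] && !tasks[i]()) {
                    live[i] = false;
                    remaining--;
                }
            }
        }
        return 0;
    }

Pre-emptive multitasking removes that dependence on good manners: a timer interrupt lets the OS take the CPU back whether the running program likes it or not.
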

Classic MacOS worked a bit like this, too. There are good reasons why everyone moved over to Windows 95 (or Windows NT if they could afford a really high-end PC) -- because those OSes used the 32-bit Intel chips' hardware memory protection facilities to isolate programs from one another in memory. If one crashed, there was a chance you could close down the offending program and save your work in everything else.

Unlike under MacOS 7, 8 or 9, or under RISC OS. Which is why Acorn and Apple started to go into steep decline after 1995. For most people, reliability and robustness are worth an inferior user experience and a bit of sluggishness. Nobody missed Windows 3.

Apple tried to write something better, but failed, and ended up buying NeXT Computer in 1996 for its Unix-based NeXTstep OS. Microsoft already had an escape plan -- to replace its DOS-based Windows 9x and get everyone using a newer, NT-based OS.

Acorn didn't. It was working on another ill-fated all-singing, all-dancing replacement OS, Galileo, but like ARX, it was too ambitious and was never finished. I've speculated on this blog before about what might have happened if Acorn had done a deal with Be for BeOS, but it would never have happened while Acorn was dreaming of Galileo.

So Acorn kept working on RISC OS alongside its next-gen RISC PC, codenamed Phoebe: a machine with PCI slots and the ability to take two CPUs -- not that RISC OS could use more than one. The OS gained support for larger hard disks, built-in video encoding and decoding and some other nice features, but it was an incremental improvement at best.

Meanwhile, RISC OS had found another, but equally doomed, niche: the ill-fated Network Computer initiative. NCs were an idea before their time: thin, lightweight, simple computers with no hard disk, but always-on internet access. Programs wouldn't -- couldn't -- get installed locally: they'd just load over the Internet. (Something like a Chromebook with web apps, 20 years later, but with standalone programs.) The Java cross-platform language was ideal for this. For the NC effort, Acorn licensed RISC OS to Pace, a UK company that made satellite and cable-TV set-top boxes.

Acorn's NC was one of the most complete and functional, although other companies tried, including DEC, Sun and Corel. The Acorn NC ran NCOS, based on, but incompatible with, RISC OS. Sadly, the NC idea was ahead of its time -- this was before broadband internet was common, and it just wasn't viable on dial-up.

Acorn finally acknowledged reality and shut down its workstation division in 1998, cancelling the Phoebe computer after production of the cases had begun. Its ARM division went on to become huge, and the other bits were sold off and disappeared. The unfinished RISC OS 4 was sold off to a company called RISC OS Ltd. (ROL), who finished it and sold it as an upgrade for existing Acorn owners. Today, it's owned by 3QD, the company behind the commercial Virtual Acorn emulator.

A different company, Castle Technology, continued making and selling some old Acorn models, until 2002 when it surprised the RISC OS world with a completely new machine: the Iyonix. It had proved impossible to make new ARM RISC OS machines, because RISC OS ran in 26-bit mode, and modern ARM chips no longer supported this. Everyone had forgotten the Pace NC effort, but Castle licensed Pace's fork of RISC OS and used it to create a new, 32-bit version for a 600MHz Intel ARM chip. It couldn't directly run old 26-bit apps, but it was quite easy to rewrite them for the new, 32-bit OS.

The RISC OS market began to flourish again in a modest way, selling fast, modern RISC OS machines to old RISC OS enthusiasts. Some companies still used RISC OS as well, and rumour said that a large ongoing order for thousands of units from a secret buyer was what made this worthwhile for Castle.

ROL, meantime, was very unhappy. It thought it had exclusive rights to RISC OS, because everyone had forgotten that Pace had a licence too. I attempted to interview its proprietor, Paul Middleton, but he was not interested in cooperating.

Meantime, RISC OS Ltd continued modernising and improving the 26-bit RISC-OS-4-based branch of the family, and selling upgrades to owners of old Acorn machines.

So by early in the 21st century, there were two separate forks of RISC OS:

  • ROL's edition, derived from Acorn's unfinished RISC OS 4, marketed as Select, Adjust and finally "RISC OS SIX", running on 26-bit machines, with a lot of work done on modularising the codebase and adding a Hardware Abstraction Layer to make it easier to move to different hardware. This is what you get with VirtualAcorn.

  • And Castle's edition, marketed as RISC OS 5, for modern 32-bit-only ARM computers, based on Pace's branch as used to create NCOS. This is the basis of RISC OS Open and thus RISC OS Direct.

When Castle was winding down its operations selling ARM hardware, it opened up the source code to RISC OS 5 in the form of RISC OS Open (ROOL). It wasn't open source -- if you made improvements, you had to give them back to Castle Technology. However, this caused RISC OS development to speed up a little, and led to the version that runs on other ARM-based computers, such as the Raspberry Pi and BeagleBoard.

Both are still the same OS, though, with the same cooperative multitasking model. RISC OS does not have the features that make 1990s 32-bit OSes (such as OS/2 2, Windows NT, Apple Mac OS X, or the multiple competing varieties of Unix) more robust and stable: hardware-assisted memory management and memory protection, pre-emptive multitasking, support for multiple CPUs in one machine, and so on.

There are lightweight, simpler OSes that have these features -- the network-centric successor to Unix, called Plan 9, and its processor-independent successor, Inferno; the open-source Unix-like microkernel OS, Minix 3; the commercial microkernel OS, QNX, which was nearly the basis for a next-generation Amiga and was the basis of the next-generation Blackberry smartphones; the open-source successor to BeOS, Haiku; Pascal creator Niklaus Wirth's final project, Oberon, and its multiprocessor-capable successor A2/Bluebottle -- which ironically is pretty much exactly what Acorn ARX set out to be.

In recent years, RISC OS has gained some more minor modern features. It can talk to USB devices. It speaks Internet protocol and can connect to the Web. (But there's no free Wifi stack, so you need to use a cable. It can't talk Bluetooth, either.) It can handle up to 2GB of memory -- four thousand times more than my first Archimedes.

Some particular versions or products have had other niceties. The proprietary Geminus allowed you to use multiple monitors at once. Aemulor allows 32-bit computers to run some 26-bit apps. The Viewfinder add-on adaptor allowed RISC PCs to use ATI AGP graphics cards from PCs, with graphics acceleration. The inexpensive PineBook laptop has Wifi support under RISC OS.

But these are small things. Overcoming the limitations of RISC OS would be a lot more difficult. For instance, Niall Douglas implemented a pre-emptive multitasking system for RISC OS. As the module that implements cooperative multitasking is called the WIMP, he called his replacement Wimp2. It's still out there, but it has drawbacks -- the issues are discussed here.

And the big thing that RISC OS has is legacy. It has some 35 years of history, meaning many thousands of loyal users, and hundreds of applications, including productivity apps, scientific, educational and artistic tools, internet tools, games, and more.

Sibelius, generally regarded as the best tool in the world for scoring and editing sheet music, started out as a RISC OS app.

People have a lot of investment in RISC OS. If you have been using a RISC OS app for three decades to manage your email, or build 3D models, or write or draw or paint or edit photos, or you've been developing your own software in BBC BASIC -- well, that means you're probably quite old by now, and you probably don't want to change.

There are enough such users paying for RISC OS to keep a small market going, offering incremental improvements.

But even if someone can raise the money to pay the programmers -- and adding Wifi, or Bluetooth, or multi-monitor graphical acceleration, or hardware-accelerated video encoding or decoding, would be relatively easy to do -- that still leaves you with a 1980s OS design:

  • No pre-emptive multitasking

  • No memory protection or hardware-assisted memory management

  • No multi-threading or multiple CPU support

  • No virtual memory, although that's less important as a £50 computer now has four times more RAM than RISC OS can support.

Small, fast, pleasant to use -- but with a list of disadvantages to match:

  • Unable to take full advantage of modern hardware.

  • Unreliable -- especially under heavy load.

  • Unable to scale up to more processors or more memory.

The problem is the same one that Commodore and Atari faced in the 1990s. To make a small, fast OS for an inexpensive computer with little memory, no hard disk, and a single CPU with no fancy features, you have to do a lot of low-level work, close to the metal. You need to write a closely-integrated piece of software, much of it in assembly language, which is tightly coupled to the hardware it was built for.

The result is something way smaller and faster than the big, lumbering, modular PC operating systems, which have to work with a huge variety of hardware from hundreds of different companies -- and so are not closely integrated with the hardware. But conversely, that looser design has advantages, too: because it is adaptable to new devices, as the hardware improves, the OS can improve with it.

So when you ran Windows 3 on a 386 PC with 4MB of RAM -- a big deal in 1990! -- it could use the 386's hardware virtual-8086 mode to pretend to be 2, 3 or 4 separate DOS PCs at the same time -- so you could keep your DOS apps when you moved to Windows. They didn't look or feel like Windows apps, but you already knew how to use them and you could still access all your data and continue to work with it.

Then when you got a 486 in 1995 (or a Pentium with Windows NT if you were rich) it could pretend to be multiple 386 computers running separate copies of 16-bit Windows as well as still running those DOS apps. And it could dial into the Internet using new 32-bit apps, too. By the turn of the century, it could use broadband -- the apps didn't know any difference, as it was all virtualised. Everything just went faster.

Six or seven years after that, your PC could have multiple cores, and multiple 32-bit apps could be divided up and run across two or even four cores, each one at full speed, as if it had the computer to itself. Then a few years later, you could get a new 64-bit PC with 64-bit Windows, which could still pretend to be a 32-bit PC for 32-bit apps.

When these things started to appear in the 1990s, the smaller OSes that were more tightly integrated with their hardware couldn't be adapted so easily when that hardware changed. When more capable 68000-series processors appeared, such as the 68030 with built-in memory management, Atari's TOS, Commodore's AmigaOS and Apple's MacOS couldn't use the new features. They could only use the new CPU as a faster 68000.

This is the trap that RISC OS is in. Amazingly, by being a small fish in a very small pond -- and thanks to Castle's mysterious one big customer -- it has survived into its fourth decade. The only other end-user OS to survive since then has been NeXTstep, or macOS as it's now called, and it's had a total facelift and does not resemble its 1980s incarnation at all: a 32-bit 68030 OS became a PowerPC OS, which became an Intel 32-bit x86 OS, which became a 64-bit x86 OS and will soon be a 64-bit ARM OS. No 1980s or 1990s NeXTstep software can run on macOS today.

When ARM chips went 32-bit only, RISC OS needed an extensive rewrite, and all the 26-bit apps stopped working. Now, ARM chips are 64-bit, and soon, the high-end models will drop 32-bit support altogether.

As Wimp2 showed, if RISC OS's multitasking module was replaced with a pre-emptive one, a lot of existing apps would stop working.

AmigaOS is now owned by a company called Hyperion, who have ported it to PowerPC -- although there aren't many PowerPC chips around any more.

It's too late for virtual memory, and we don't really need it any more -- but the programming methods that allow virtual memory, letting programs spill over onto disk if the OS runs low on memory, are the same as those that enforce the protection of each program's RAM from all other programs.

Just like Apple did in the late 1990s, Hyperion have discovered that if they rewrite their OS to take advantage of PowerPC chips' hardware memory-protection, then it breaks all the existing apps whose programmers assumed that they could just read and write whatever memory they wanted. That's how Amiga apps communicate with the OS -- it's what made AmigaOS so small and fast. There are no barriers between programs -- so when one program crashes, they all crash.

The same applies to RISC OS -- although it does some clever trickery to hide programs' memory from each other, they can all see the memory that belongs to the OS itself. Change that, and all existing programs stop working.
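
For readers who have only ever lived with protected memory, here is roughly what turning it on feels like to an old-style app. This is a generic POSIX sketch, nothing to do with actual RISC OS or AmigaOS code: a page the program used to scribble on freely is marked read-only, and the next direct write is stopped by the MMU.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void on_fault(int sig)
    {
        (void)sig;
        /* Only async-signal-safe calls belong in a handler. */
        write(STDOUT_FILENO, "SIGSEGV: the MMU blocked the write\n", 35);
        _exit(1);
    }

    int main(void)
    {
        long pagesz = sysconf(_SC_PAGESIZE);
        char *page = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(page, "system workspace");     /* fine while the page is writable */

        signal(SIGSEGV, on_fault);
        mprotect(page, pagesz, PROT_READ);    /* now treat it as OS-owned, read-only */

        printf("reading still works: %s\n", page);
        fflush(stdout);

        page[0] = 'X';                        /* hardware protection kicks in here */
        puts("never reached");
        return 0;
    }

An app written on the assumption that all memory is its own to poke dies exactly like this on the day the OS starts enforcing protection.
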

To make RISC OS able to take advantage of multiple processors, the core OS itself needs an extensive rewrite to allow all its modules to be re-entrant -- that is, for different apps running on different cores to be able to call the same OS modules at the same time and for it to work. The problem is that the design of the RISC OS kernel dates back to about 1981 and a single eight-bit 6502 processor. The assumption that there's only one processor doing one thing at a time is deeply written into it.

That can be changed, certainly -- but it's a lot of work, because the original design never allowed for this. And once again, all existing programs will have to be rewritten to work with the new design.
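
As a sketch of what "re-entrant" means in practice -- illustrative C, not real RISC OS module code, with function names of my own invention -- the first routine below bakes the one-thing-at-a-time assumption into a shared static buffer, so two cores calling it simultaneously would scribble over each other's results; the second keeps no hidden state, so any number of cores can be inside it at once.

    #include <stddef.h>
    #include <stdio.h>

    /* NOT re-entrant: one static buffer is shared by every caller.  On a
     * one-thing-at-a-time machine this is fine; with two cores calling it
     * simultaneously, each corrupts the other's result. */
    static const char *format_error_legacy(int code)
    {
        static char buf[32];                  /* one buffer for the whole system */
        snprintf(buf, sizeof buf, "Error &%X", code);
        return buf;
    }

    /* Re-entrant: the caller supplies the storage and the routine keeps no
     * hidden state, so concurrent calls cannot interfere.  (A routine that
     * genuinely must share state needs a lock around that state instead.) */
    static void format_error_reentrant(int code, char *buf, size_t len)
    {
        snprintf(buf, len, "Error &%X", code);
    }

    int main(void)
    {
        char mine[32];
        format_error_reentrant(0x1E6, mine, sizeof mine);
        printf("%s / %s\n", format_error_legacy(0x2C0), mine);
        return 0;
    }

Doing that conversion across every module of a 1981-vintage kernel is exactly the "lot of work" in question.
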

Linux, to pick an example, focuses on source code compatibility. Since it's open source, and all its apps are open source, then if you get a new CPU, you just recompile all your code for the new chip. Linux on a PowerPC computer can't run x86 software, and Linux on an ARM computer can't run PowerPC software. And Linux on a 64-bit x86 computer doesn't natively support 32-bit software, although the support can be added. If you try to run a commercial, proprietary, closed-source Linux program from 15 or 20 years ago on a modern Linux, it won't even install and definitely won't function -- because all the supporting libraries and modules have slowly changed over that time.

Windows does this very well, because Microsoft have spent tens of billions of dollars on tens of thousands of programmers, writing emulation layers to run 16-bit code on 32-bit Windows, and 32-bit code on 64-bit Windows. Windows embeds layers of virtualisation to ensure that as much old code as possible will still work -- only when 64-bit Vista arrived in 2006 did Windows finally drop support for DOS programs from the early 1980s. Today, Windows on ARM computers emulates an x86 chip so that PC programs will still work.

In contrast, every few versions of macOS, Apple removes any superseded code. The first x86 version of what was then called Mac OS X was 10.4, which was also the last version that ran Classic MacOS apps. By version 10.6, OS X no longer ran on PowerPC Macs, and OS X 10.7 no longer ran PowerPC apps. OS X 10.8 only ran on 64-bit Macs, and 10.15 won't run 32-bit apps.

This allows Apple to keep the OS relatively small and manageable, whereas Microsoft is struggling to maintain the vast Windows codebase. When Windows 10 came out, Microsoft announced that 10 was the last-ever major new version of Windows.

It would be possible to rewrite RISC OS to give it pre-emptive multitasking -- but either all existing apps would need to be rewritten, or it would need to incorporate some kind of emulator, like Aemulor, to run old apps on the new OS.

Pre-emptive multitasking -- which is a little slower -- would make multi-threading a little easier, which in turn would allow multi-core support. But that would need existing apps to be rewritten to use multiple threads, which allows them to use more than one CPU core at once. Old apps might still work, but not get any faster -- you could just run as many as you have CPU cores side-by-side with only a small drop in speed.

Then a rewrite of RISC OS for 64-bit ARM chips would require a 32-bit emulation layer for old apps to run at all -- and they would run very slowly at that, once ARM chips no longer execute 32-bit code directly. A software emulation of 32-bit ARM would be needed, with perhaps a 10x performance drop.

All this, on a codebase that was never intended to allow such things, and done by a tiny crew of volunteers. It will take many years. Each new version will inevitably lose some older software which will stop working. And each year, some of those old enthusiasts who are willing to spend money on it will die. I got my first RISC OS machine in 1989, when I was 21. I'm 52 now. People who came across from the previous generation of Acorn computers, the BBC Micro, are often in their sixties.

Once the older users retire, who will spend money on this? Why would you, when you can use Linux, which does far more and is free? Yes, it's slower and it needs a lot more memory -- but my main laptop is from 2011, cost me £129 second-hand in 2017, and is fast and reliable in use.

To quote an old joke:
"A traveller stops to ask a farmer the way to a small village. The farmer thinks for a while and then says "If you want to go there I would not start from here."

There are alternative approaches. Linux is one. There's already a RISC OS-like desktop for Linux: it's called ROX Desktop, and it's very small and fast. It needs a bit of an update, but nothing huge.

ROX has its own system for single-file applications, like RISC OS's !Apps, called 0install -- but this never caught on. However, there are others -- my personal favourite is called AppImage, but there are also Snap apps and Flatpak. Supporting all of them is perfectly doable.

There is also an incomplete tool for running RISC OS apps on Linux, called ROLF... and a project to run RISC OS itself as an app under Linux.

Not all Linux distributions have the complicated Linux directory layout -- one of my favourites is GoboLinux, which has a much simpler, Mac-like layout.

It would be possible to put together a Linux distribution for ARM computers which looked and worked like RISC OS, had a simple directory layout like RISC OS, including applications packaged as single files, and which, with some work, could run existing RISC OS apps.

No, it wouldn't be small and fast like RISC OS -- it would be nearly as big and slow as any other Linux distro, just much more familiar for RISC OS users. This is apparently good enough for all the many customers of Virtual Acorn, who run RISC OS on top of Windows.

But it would be a lot easier to do than the massive rewrite of RISC OS needed to bring it up to par with other 21st century OSes -- and which would result in a bigger, slower, more complex RISC OS anyway.

The other approach would be to ignore Linux and start over with a clean sheet. Adopt an existing open-source operating system, modify it to look and work more like RISC OS, and write some kind of emulator for existing applications.

My personal preference would be A2/Bluebottle, which is the step-child of what Acorn originally wanted as the OS for the Archimedes. It would need a considerable amount of work, but Professor Wirth designed the system to be tiny, simple and easy to understand. It's written in a language that resembles Delphi. It's still used for teaching students at ETH Zürich, and is very highly-regarded [PDF] in academic circles.

It would be a big job -- but not as big a job as rewriting RISC OS...
liam_on_linux: (Default)
This is a repurposed CIX comment. It goes on a bit. Sorry for the length. I hope it amuses.

So, today, a friend of mine accused me of getting carried away after reading a third-generation Lisp enthusiast's blog. I had to laugh.

The actual history is a bit bigger, a bit deeper.

The germ was this:

https://www.theinquirer.net/inquirer/news/1025786/the-amiga-dead-long-live-amiga

That story did very well, amazing my editor, and he asked for more retro stuff. I went digging. I'm always looking for niches which I can find out about and then write about -- most recently, it has been containers and container tech. But once something goes mainstream and everyone's writing about it, then the chance is gone.

I went looking for other retro tech news stories. I wrote about RISC OS, about FPGA emulation, about OSes such as Oberon and Taos/Elate.

The more I learned, the more I discovered how much the whole spectrum of commercial general-purpose computing is just a tiny and very narrow slice of what's been tried in OS design. There is some amazingly weird and outré stuff out there.

Many of them still have fierce admirers. That's the nature of people. But it also means that there's interesting in-depth analysis of some of this tech.

It's led to pieces like this which were fun to research:

http://www.theregister.co.uk/Print/2013/11/01/25_alternative_pc_operating_systems/

I found 2 things.

One, most of the retro-computers that people rave about -- from mainstream stuff like Amigas or Sinclair Spectrums or whatever -- are actually relatively homogeneous compared to the really weird stuff. And most of them died without issue. People are still making clone Spectrums of various forms, but they're not advancing it and it didn't go anywhere.

The BBC Micro begat the Archimedes and the ARM. Its descendants are everywhere. But the software is all but dead, and perhaps justifiably. It was clever but of no great technical merit. Ditto the Amiga, although AROS on low-cost ARM kit has some potential. Haiku, too.

So I went looking for obscure old computers. Ones that people would _not_ read about much. And that people could relate to -- so I focussed on my own biases: I find machines that can run a GUI, or at least do something with graphics, more interesting than ones from before that era.

There are, of course, tons of the things. So I needed to narrow it down a bit.

Like the "Beckypedia" feature on Guy Garvey's radio show, I went looking for stuff of which I could say...

"And why am I telling you this? Because you need to know."

So, I went looking for stuff that was genuinely, deeply, seriously different -- and ideally, stuff that had some pervasive influence.

Read more... )
And who knows, maybe I’ll spark an idea and someone will go off and build something that will render the whole current industry irrelevant. Why not? It’s happened plenty of times before.

And every single time, all of the most knowledgeable experts said it was a pointless, silly, impractical flash-in-the-pan. Only a few nutcases saw any merit to it. And they never got rich.
liam_on_linux: (Default)
I recently received an email from a reader -- a rare event in itself -- following my recent Reg article about educational OSes.

They asked for more info about the OS. So, since there's not a lot of this about, here is some more info about the Oberon programming language, the Oberon operating system written in it, and the modern GUI version, Bluebottle.

It is the final act in the life's work of Professor Niklaus Wirth, inventor of Pascal and later Modula-2. Oberon is what Pascal evolved into; probably, he should have called them all Pascal:

  1. Pascal 1 (i.e. Pascal & Delphi)

  2. Modula

  3. Modula-2 (basis of the original Acorn Archimedes OS, among others)

  4. Oberon

IgnoreTheCode has a good overview. This is perhaps the best place to start for a high-level quick read.

The homepage for the FPGA OberonStation went down for a while. Perhaps it was the interest driven by my article. ;-)

It is back up again now, though.

Perhaps the seminal academic paper is Oberon - the Overlooked Jewel by Michael Franz of the University of California at Irvine.

A PDF is here: https://pdfs.semanticscholar.org/d48b/ecdaf5c3d962e2778f804e...

This is essential reading to understand its relevance in computer science.

There are two software projects called "Oberon": a programming language, and an operating system -- or family of OSes -- written in that language.

There's some basic info on Wikipedia about both the OS and the programming language.

Professor Wirth worked at ETH Zurich, which has a microsite about the Oberon project. However, this has many broken links and is unmaintained.

And the Oberon Book, the official bible of the project, is online.

Development did not stop on the OS after Prof Wirth retired. It continued and became AOS, which has a rather different type of GUI called a zooming UI. The AOS zooming UI is called "Bluebottle", and newer versions of the OS are thus referred to as "A2" or "Bluebottle" (or both; the name "AOS" also remains in wide use).

There is a sort of fan page dedicated to A2/Bluebottle.

Here's the OS project on GitHub.

There is a native port for x86 PCs. I have this running under VirtualBox, as an app under 64-bit Linux, and natively on the metal of a Thinkpad X200.
