I read this wonderful article on mainframe OSes.
I've been meaning to do something like it for years, but I may use this as a jumping off point.
I think, for me, what I find intriguing about mainframe OSes in the 21st century is this:
On the one hand, there have been so many great OSes and languages and interfaces and ideas in tech history, and most are forgotten. Mainframes were and are expensive. Very, very expensive. Minicomputers were cheaper – that’s why they thrived, briefly, and are now totally extinct – and microcomputers were very cheap.
All modern computers are microcomputers. Maybe evolved to look like minis and mainframes, like ostriches and emus and cassowaries evolved to look a bit like theropod dinosaurs, but they aren’t. They’re still birds. No teeth, no claws on their arms/wings, no live young. Still birds.
One of the defining characteristics of micros is that they are cheap, built down to a price, and there’s very little R&D money.
But mainframes aren’t. They cost a lot, and rental and licensing costs a lot, and running them costs a lot… everything costs a lot. Meaning you don’t use them if you care about costs that much. You have other reasons. What those are doesn’t matter so much.
Which means that even serving a market of just hundreds of customers can be lucrative, and be enough to keep stuff in support and in development.
Result: in a deeply homogeneous modern computing landscape, where everything is influenced by pervasive technologies and their cultures – Unix, C, the general overall DEC mini legacy that pervades DOS and Windows and OS/2 and WinNT and UNIX, that deep shared “DNA” – mainframes are other.
There used to be lots of deeply different systems. In some ways, classic Mac and Amigas and Acorn ARM boxes with RISC OS and Lisp Machines and Apollo DomainOS boxes and so many more – were deeply and profoundly unlike the DEC/xNix model. They were, by modern standards, profoundly strange and alien.
But they’re all dead and gone. A handful persist in emulation or as curiosities, but they have no chance of being relevant to the industry as a whole ever again. Some are sort of little embedded parasites, living in cysts, inside a protective wall of scar tissue, persisting inside an alien organism. Emacs and its weird Lispiness. Smalltalk. Little entire virtual computers running inside very very different computers.
Meantime, mainframes tick along, ignored by the industry as a whole, unmoved and largely uninfluenced by all the tech trends that have come and gone.
They have their own deeply weird storage architectures, networking systems, weird I/O controllers, often weird programming languages and memory models… and yes, because they have to, they occasionally sully themselves and bend down to talk to the mainstream kit. They can network with it; if they need to talk to each other, they’ll tunnel their own strange protocols over TCP/IP or whatever.
But because they are the only boxes that know where all the money is and who has which money where, and who gets the tax and the pensions, and where all the aeroplanes are in the sky and who’s on them, and a few specialised but incredibly important tasks like that, they keep moving on, serene and untroubled, like brontosauri placidly pacing along while a tide of tiny squeaky hairy things scuttle around their feet. Occasionally a little hairy beast jumps aboard and sucks some blood, or hitches a ride… A mainframe runs some Java apps, or it spawns a VM that contains a few thousand Linux instances – and the little hairy beasts think they’ve won. But the giant plods slowly along, utterly untroubled. Maybe something bit one ankle but it didn’t matter.
Result: the industry ignores them, and they ignore the industry.
But although, in principle, we could have had, oh, say, multi-processor BeOS machines in the late 1990s, or smoothly-multitasking 386-based OS/2 PCs in the late 1980s, or smoothly multitasking 680x0 Sinclair clones instead of Macs, or any one of hundreds of other tech trends that didn’t work out… they were all microcomputer-based, so the R&D money wasn’t there.
Instead, we got lowest-common-denominator systems. Not what was best, merely what was cheapest, easiest, and just barely good enough – the “minimum viable product” that an industry of shysters and con-men think is a good thing.
And a handful of survivors who keep doing their thing.
What is funny about this, of course, is that it’s cyclic. All human culture is like this, and software is culture. The ideas of late-20th-century software, things that are now assumptions, are just what was cheap and just barely good enough. They’ve now been replaced and there’s a new layer on top, which is even cheaper and even nastier.
And if we don’t go back to the abacus and tally sticks in a couple of generations, this junk – which those who don’t know anything else believe is “software engineering”, rather than merely fossilised accidents of exigency – will be the next generation’s embedded, expensive, emulated junk.
What sort of embedded assumptions? Well, the lower level is currently this (quote marks to indicate mere exigencies, with no real profound meaning or importance):
“Low-level languages” which you “compile” to “native binaries”. Use these to build OSes, and a hierarchy of virtualisation to scale up something not very reliable and not very scalable.
Then on top of this, a second-level ecosystem built around web tech, of “dynamic languages” which are “JITted” in “cross-platform” “runtimes” so they run on anything, and can be partitioned up into microservices, connected by “standard protocols”, so they can be run in the “cloud” at “web scale”.
A handful of grumpy old gits know that if you pick the right languages, and the right tools, you can build something to replace this second-level system with the same types of tools as the first-level system, and that you don’t need all the fancy scaling infrastructure, because one modern box can support a million concurrent users no problem, and a few such boxes can support hundreds of millions of them, all in the corner of one room, with an uptime of decades and no need for any cloud.
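As a rough illustration of what the grumpy old gits mean – and this is only a minimal sketch of my own, not anything from the article – here is the sort of thing a single compiled binary can do. It assumes Go, a trivial line-echo protocol, and an arbitrary port 9000; the point is simply that one lightweight goroutine per connection is cheap enough that one well-tuned box can hold a huge number of mostly-idle connections with no second-level scaffolding at all.

```go
// Minimal sketch: a plain TCP line-echo server, one goroutine per
// connection. The protocol and the port are arbitrary choices for
// illustration, not anything from the original post.
package main

import (
	"bufio"
	"log"
	"net"
)

// handle echoes lines back to a single client. An idle goroutine costs
// a few kilobytes, so very large numbers of mostly-idle connections can
// live on one machine.
func handle(conn net.Conn) {
	defer conn.Close()
	r := bufio.NewReader(conn)
	for {
		line, err := r.ReadBytes('\n')
		if err != nil {
			return // client disconnected
		}
		if _, err := conn.Write(line); err != nil {
			return
		}
	}
}

func main() {
	ln, err := net.Listen("tcp", ":9000") // port 9000 is arbitrary
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go handle(conn) // one cheap goroutine per connection
	}
}
```

Whether one box really sustains a million concurrent users depends on file-descriptor limits, kernel tuning and how much real work each connection does – but none of it needs a framework, an orchestrator or a cloud.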
But it’s hard to do it that way, and it’s much easier to slap it together in a few interpreted languages and ginormous frameworks.
And twas ever thus.
no subject
Date: 2022-09-24 04:33 pm (UTC)
I'd argue that two of the greatest failures in microcomputer OSes have come about when companies have tried to do large-scale R&D with high budgets:
The first was IBM's development of OS/2 from the basic v1.0 to the full thing, with Presentation Manager, Communications Manager, and so on. They used up a lot of staff in the process - my friend who worked there has never really recovered his mental health - because they didn't seem to be able to get organised. They had more formal methods and radical ideas than anyone could keep track of, and they still failed to produce anything that could compete for usefulness and functionality with Windows 3.x and Classic MacOS.
The second was Taligent/Pink, which made even less progress.
The trick isn't spending lots of money. It's proceeding in achievable steps. Linux has been gradually re-engineered from a pretty basic kernel for a single platform to something very widely capable and portable, and a lot of that is due to doing the job in a sensibly incremental manner.
The other OSes that are being successful at present are Windows NT and macOS/iOS; the latter was done by fitting together existing parts, plus Apple writing the part they're good at, which is the GUI. Windows did benefit from a large-scale R&D project, but it wasn't done by Microsoft. It was done at DEC for the PRISM and MICA projects, and when DEC cancelled them, Microsoft scooped up the ideas and the people from the MICA OS project.
My job is doing porting and platforms for a large and long-lived mathematical modeller. In the past, it ran on Apollo Domain/OS, and Data General AOS, and VMS, and the code wasn't significantly different for any of them. It did not take well to classic MacOS, because it wasn't designed to deliberately yield control at short regular intervals, so we never shipped that, and nobody has ever wanted it on a mainframe. Code is code, once you dig in a bit.
no subject
Date: 2022-09-25 06:55 pm (UTC)
Whoops -- wrong account.
Well, I've written about OS/2 too much here before.
ISTM, still, that if IBM had listened to MS and made it 386-specific and able to multitask DOS apps from v1.0 -- which MS had already demonstrated it had the tech to do -- it probably would have been a rousing success.
Whether that would have been better for the industry is an entirely different question. :-)
That, IMHO, was the core error that doomed IBM.
Inasmuch as there is simply no way they could ever spend themselves out of the resultant mess, no matter how many people they threw at it -- 100% agree.
The point about incremental progress is a strong one. OTOH, with that sort of safe, cautious progress, it's very very hard to ever make big steps. And ISTM that contemporary computing is caught in several difficult blind alleys, to do with large-scale distributed systems built from poorly-chosen tech, and that will require giant leaps to escape.
As in, Plan 9 had better answers to clustering than anything Linux can offer, even if you bolt on vast, incomprehensible complexity. Inferno had better answers to cross-platform binaries than WebAssembly or anything in any interpreted, JITted scripting language.
But you can't get there from here.
I like your point about DEC and MICA. I've only recently been digging into the history of the decline and fall of DEC and finding out about this. Tragic, really.
NT wasn't just MICA, though. There was an injection of OS/2 3, AKA OS/2 NT, in there too, and some Windows influence as well.
I liked NT, before the marketing folks got their claws too deeply into it. I liked Win 95, too. What it did, with the technology and the software, hardware and drivers of the time, was nothing short of amazing, really.
I bought OS/2 2.0 for cash. Never done that with any other OS in my life. I got "Windows Chicago" as a free beta via PC Pro. It pains me to admit it, but for all practical uses, W95 was better in every measurable or demonstrable way than OS/2 2.x.
It was prettier, faster and much more compatible, it had a much better UI, and in real life it was about as stable. It was no contest.
no subject
Date: 2022-09-27 12:34 pm (UTC)
If the job is impossible, they'll be able to prove that, as DEC did when they tried to keep VAX performant, and showed that it was impossible, leading to Alpha.
If the job can't be done within the existing software, they'll discover that, with enough time available to plan a drastic change.
What you don't do is put your new development team in an ivory tower. That leads to disasters like Acorn's ARX or Apple's Copeland. You need people who really understand the system you're trying to improve. In the best case I've seen, this was someone returning to full-time work after being part-time while her children were young. She was experienced, and not involved in any current projects.