Windows 3.0 came out in May 1990, 2 whole years earlier. It already had established an ecosystem before 32-bit OS/2 appeared.
Secondly, OS/2 2.0 really wanted a 386DX, 4MB of RAM, and a quality PC with name-brand parts. I owned it. I ran it on clones. I had to buy a driver for my mouse. From another CONTINENT.
Windows 3.0 ran on any old random junk PC, even a PC/XT-class box with EGA. At first, only users of high-end, executive-class 386 machines got the fun of 386 Enhanced Mode, but that class of hardware was the minimum OS/2 2.0 would run on at all.
OS/2's fate was sealed back in the 1.x era, when it was a high-end OS with low-end features, and a cheapo low-end 386SX PC with 1 or 2MB of RAM, running MS-DOS and DESQview (not DESQview/X, just plain old text-mode DESQview), could outperform it.
(Remember the 386SX came out in 1988 and was common by the time Windows 3.0 shipped.)
But as soon as OS/2 1.x flopped, MS turned its attention back to Windows, and even before the first betas of Windows 3.0 there were rumours in the tech press -- widely discussed at the time -- that MS was going to abandon OS/2.
In my then-job, around 1989, my boss sent me on a training course for 3Com's new NOS, 3+Open, which was based on OS/2 1.0.
I did not realise at the time that this was a clever evaluation strategy. He knew I might have enthused about it if simply given a copy to play with; instead, being trained on it, I was told about some of its holes and weaknesses.
I came back and told them it was good but only really delivered with OS/2 clients, with no compelling features for DOS clients. They were very pleased -- and the company went on to sell Novell Netware instead.
Looking back, that was a good choice. In the late 1980s Netware 2 and 3 were superb server OSes for DOS clients, and OS/2 with LAN Manager wasn't.
But yes, I think from soon after OS/2 launched, it was apparent to IBM, and widely reported, that MS was not happy, and as soon as MS started talking about launching a new, updated version of Windows -- circa 1989 -- it was very clear to IBM that MS was preparing a lifeboat and would soon abandon ship. Soon after Windows 3.0 came out, MS left the project; its consolation prize was to keep Portable OS/2, also called OS/2 3.0, later OS/2 NT.
This wasn't secret, and everyone in the industry knew about it. The embarrassment to IBM was considerable and I think that's why IBM threw so many people and so much money at OS/2 2.x. It was clear early on that although it was a good product, it wasn't good enough.
NT 3.1 launched in July 1993, just a couple of months after OS/2 2.1, and NT made it pretty clear that OS/2 2.x was toast.
I deployed NT 3.1 in production and supported it. Yes, it was big and it needed a serious PC. OS/2 2.0 was pretty happy on a 486 in 4MB of RAM and ran well in 8MB. NT 3.1 needed 16MB to be useful and really wanted a Pentium.
But NT was easier to install. For instance, you could copy the files from CD to hard disk and then run the installer from MS-DOS. OS/2 had to boot to install, and it had to boot with CD drivers to install from CD. That was not trivial to achieve: ATAPI CD-ROM drives hadn't been invented yet, so it meant an expensive SCSI drive plus a driver for your SCSI card, or a proprietary interface with a proprietary driver -- and many of those drivers were DOS-only.
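From memory, the DOS-hosted NT route looked something like this. The drive letters, the directory name and the absence of switches here are purely illustrative, and the exact procedure varied between NT versions:

    REM Copy the i386 installation tree from the CD-ROM (here D:) to the hard disk,
    REM then launch the DOS-hosted installer from the local copy.
    XCOPY D:\I386 C:\NTINST /S /E
    C:
    CD \NTINST
    WINNT

The point being that any old DOS CD-ROM driver was good enough to get the files onto the machine; NT's own setup never had to read the CD itself.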
NT didn't have OS/2's huge, complicated CONFIG.SYS file. NT had networking and sound and so on integrated as standard, while they were paid-for optional extras on OS/2.
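To give a flavour of what "huge and complicated" meant: an OS/2 2.x CONFIG.SYS ran to many dozens of statements along these general lines. This is a paraphrased-from-memory excerpt, not a working config, and the paths and parameters are illustrative:

    REM Filesystem driver and its cache
    IFS=C:\OS2\HPFS.IFS /CACHE:1024 /CRECL:4
    REM Presentation Manager and the Workplace Shell
    PROTSHELL=C:\OS2\PMSHELL.EXE
    SET RUNWORKPLACE=C:\OS2\PMSHELL.EXE
    REM Where the system finds its DLLs
    LIBPATH=.;C:\OS2\DLL;C:\OS2\APPS\DLL;
    REM Base device drivers for the disk controller and disks
    BASEDEV=IBM1S506.ADD
    BASEDEV=OS2DASD.DMD

Every driver, filesystem and chunk of the DOS/Windows compatibility layer got its own line or three in that file, and a mistake in it could leave the machine unbootable. NT kept the equivalent settings out of sight, in the Registry and its boot loader.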
And NT ran Windows 3 apps better than OS/2, because each Windows 3 app had its own resource heaps under NT. Since the 64kB heap was the critical limitation on Win3 apps, NT ran them better than actual Windows 3.
If you could afford the £5000 PC to run NT, it was a better OS. OK, its UI was the clunky (but fast) Windows 3 Program Manager, but it worked. OS/2's fancy Workplace Shell was more powerful but harder to use. E.g. why on earth did some IBMer think needing to use the right mouse button to drag an icon was a good idea?
I owned OS/2, I used it, and I liked it. I am still faintly nostalgic for it.
But aside from the fancy front-end, NT was better.
NT 3.5 was smaller, faster and better still. NT 3.51 was smaller, faster and more stable again, and was in some ways the high point of NT performance. It ran well in 8MB of RAM and very well in 16MB. With 32MB of RAM, if you were that rich, we NT users could poke fun at people with £20K-£30K UNIX workstations, because a high-end PC was just as fast and as stable, and had a lot more apps and a much easier UI.
Sad to say, but the writing was on the wall for OS/2 by 1989 or so, before 2.0 even launched. By 1990 Windows 3.0 was a hit. By 1992 Windows 3.1 was a bigger hit and by 1993 it was pretty clear that it was all over bar the shouting.
There was a killer combination that had a chance, but not a good one: Novell Netware for OS/2. Netware ran on OS/2 and that made it a useful non-dedicated server. IBM could have bought Novell, combined the products, and had a strong offering. Novell management back then were slavering to outcompete Microsoft; that's why Caldera happened, and why Novell ended up buying SUSE.
(For whom I worked until last year.)
OS/2 plus Netware as a server platform had real potential, and IBM could have focussed on server apps. It had (once it bought Lotus) cc:Mail and Notes for email, it had the DB2 database, and soon it would have WebSphere. It had the products to bundle to make OS/2 a good deal as a server, but it wanted to push the client.
I read this wonderful article on mainframe OSes.
I've been meaning to do something like it for years, but I may use this as a jumping-off point.
I think, for me, what I find intriguing about mainframe OSes in the 21st century is this:
On the one hand, there have been so many great OSes and languages and interfaces and ideas in tech history, and most are forgotten. Mainframes were and are expensive. Very, very expensive. Minicomputers were cheaper – that’s why they thrived, briefly, and are now totally extinct – and microcomputers were very cheap.
All modern computers are microcomputers. Maybe they’ve evolved to look like minis and mainframes, the way ostriches and emus and cassowaries evolved to look a bit like theropod dinosaurs, but they aren’t. They’re still birds. No teeth, no claws on their arms/wings, no live young. Still birds.
One of the defining characteristics of micros is that they are cheap, built down to a price, and there’s very little R&D money.
But mainframes aren’t. They cost a lot, and rental and licensing costs a lot, and running them costs a lot… everything costs a lot. Meaning you don’t use them if you care about costs that much. You have other reasons. What those are doesn’t matter so much.
Which means that even serving a market of just hundreds of customers can be lucrative, and be enough to keep stuff in support and in development.
Result: in a deeply homogeneous modern computing landscape, where everything is influenced by pervasive technologies and their cultures – Unix, C, the overall DEC minicomputer legacy that pervades DOS and Windows and OS/2 and WinNT and UNIX, that deep shared “DNA” – mainframes are other.
There used to be lots of deeply different systems. In some ways, classic Mac and Amigas and Acorn ARM boxes with RISC OS and Lisp Machines and Apollo DomainOS boxes and so many more were profoundly unlike the DEC/xNix model. They were, by modern standards, strange and alien.
But they’re all dead and gone. A handful persist in emulation or as curiosities, but they have no chance of being relevant to the industry as a whole ever again. Some are sort of little embedded parasites, living in cysts, inside a protective wall of scar tissue, persisting inside an alien organism. Emacs and its weird Lispiness. Smalltalk. Little entire virtual computers running inside very very different computers.
Meantime, mainframes tick along, ignored by the industry as a whole, unmoved and largely uninfluenced by all the tech trends that have come and gone.
They have their own deeply weird storage architectures, networking systems, weird I/O controllers, often weird programming languages and memory models… and yes, because they have to, they occasionally sully themselves and bend down to talk to the mainstream kit. They can network with it; if they need to talk to each other, they’ll tunnel their own strange protocols over TCP/IP or whatever.
But because they are the only boxes that know where all the money is and who has which money where, and who gets the tax and the pensions, and where all the aeroplanes are in the sky and who’s on them, and that handle a few other specialised but incredibly important tasks like that, they keep moving on, serene and untroubled, like brontosauri placidly pacing along while a tide of tiny squeaky hairy things scuttle around their feet. Occasionally a little hairy beast jumps aboard and sucks some blood, or hitches a ride… A mainframe runs some Java apps, or it spawns a VM that contains a few thousand Linux instances – and the little hairy beasts think they’ve won. But the giant plods slowly along, utterly untroubled. Maybe something bit one ankle but it didn’t matter.
Result: the industry ignores them, and they ignore the industry.
But although, in principle, we could have had, oh, say, multi-processor BeOS machines in the late 1990s, or smoothly-multitasking 386-based OS/2 PCs in the late 1980s, or smoothly multitasking 680x0 Sinclair clones instead of Macs, or any one of hundreds of other tech trends that didn’t work out… all of those were microcomputer-based, so the R&D money wasn’t there.
Instead, we got lowest-common-denominator systems. Not what was best, merely what was cheapest, easiest, and just barely good enough – the “minimum viable product” that an industry of shysters and con-men think is a good thing.
And a handful of survivors who keep doing their thing.
What is funny about this, of course, is that it’s cyclic. All human culture is like this, and software is culture. The ideas of late-20th-century software, things that are now assumptions, are just what was cheap and just barely good enough. They’ve now been replaced and there’s a new layer on top, which is even cheaper and even nastier.
And if we don’t go back to the abacus and tally sticks in a couple of generations, this junk – which those who don’t know anything else believe is “software engineering”, rather than merely fossilised accidents of exigency – will be the next generation’s embedded, expensive, emulated junk.
What sort of embedded assumptions? Well, the lower level is currently this (quote marks to indicate mere exigencies, with no real profound meaning or importance):
“Low-level languages” which you “compile” to “native binaries”. Use these to build OSes, and a hierarchy of virtualisation to scale up something not very reliable and not very scalable.
Then on top of this, a second-level ecosystem built around web tech, of “dynamic languages” which are “JITted” in “cross-platform” “runtimes” so they run on anything, and can be partitioned up into microservices, connected by “standard protocols”, so they can be run in the “cloud” at “web scale”.
A handful of grumpy old gits know that if you pick the right languages and the right tools, you can build something to replace this second-level system with the same kinds of tools as the first-level system, and that you don’t need all the fancy scaling infrastructure, because one modern box can support a million concurrent users no problem, and a few such boxes can support tens or hundreds of millions of them, all in the corner of one room, with an uptime of decades and no need for any cloud.
But it’s hard to do it that way, and it’s much easier to slap it together in a few interpreted languages and ginormous frameworks.
And ’twas ever thus.