liam_on_linux: (Default)
A HN poster questioned the existence of 80286 computers with VGA displays.

In fact, the historical link between the 286 and VGA is significant, and represents one of the most important events in the history of x86 computers.

The VGA standard, along with PS/2 keyboard and mouse ports, 1.44MB 3.5" floppies, and even 72-pin SIMMs, was introduced with IBM's PS/2 range of computers in 1987.

The original PS/2 range included:

• Model 50 -- desktop 286.

• Model 60 -- tower 286.

• Model 70 -- desktop 386DX.

• Model 80 -- tower 386DX. (I still have one. One of the best-built PCs ever made.)

All had the Microchannel (MCA) expansion bus, and VGA as standard.

Note, I am not including the Model 30, as it wasn't a true PS/2: no MCA, and no VGA, just MCGA.

IBM promised buyers that their machines would be able to run OS/2, the new operating system it was then developing with Microsoft.

This is the reason why IBM insisted OS/2 must run on the 286: to provide it to the many tens of thousands of customers it had sold 286 PS/2 machines to.

Microsoft wanted to make OS/2 specific to the newer 32-bit 386 chip. This had hardware-assisted multitasking of 8086 VMs, meaning the new OS would be able to multitask DOS apps with excellent compatibility.

But IBM had promised customers OS/2 and IBM is the sort of company that takes such promises seriously.

So, OS/2 1.x was a 286 OS, not a 386 OS. That meant it could only run a single DOS session and compatibility wasn't great.

This is why OS/2 flopped. That in turn is why MS developed Windows 3, which could multitask DOS apps, and was a big hit. That is why MS had the money to headhunt the MICA team from DEC, headed by Dave Cutler, and give them Portable OS/2 to finish. That became OS/2 NT (because it was developed on Intel's i860 RISC chip, codenamed N-Ten.) That became Windows NT.

That is why Windows ended up dominating the PC industry, not OS/2 (or DESQview/X or any of the other would-be DOS enhancements or replacements).

Arguably, although I admit this is reaching a bit, that's what led to the 386SX, and later to VESA local bus computers, and Win95 and a market of VGA-equipped PCI machines: the fertile ground in which Linux took root and flourished.

PCs got multitasking combined with a GUI because of Windows 3 and its successors. (It's important to note that there were lots of text-only multitasking OSes for PCs: DR's Concurrent DOS, SCO Xenix, QNX, Coherent, TSX-32, PC-MOS, etc.) The killer feature was combining DOS, a GUI, and multitasking of DOS apps. That needed a 386SX or DX.

These things only happened because OS/2 failed, and OS/2 failed because there were lots of 286-based PS/2 machines and IBM promised OS/2 on them.

The 286 and VGA went closely together, and indeed, IBM later made the ISA-bus "PS/2" Model 30-286 in response to the relative failure of MCA.

It was a pivotal range of computers, and it shaped the future of the PC industry long after PS/2s themselves largely disappeared. They introduced the standards that dominated the PC world throughout the 1990s and into the 2000s: PS/2 ports, VGA sockets, 72-pin RAM, 1.44MB floppies and so on. Only the expansion bus and the planned native OS failed. All the external ports, connectors, media and so on became the new industry standards.

Earlier today, I saw a link on the ClassicCmp.org mailing list to a project to re-implement the DEC VAX CPU on an FPGA. It's entitled "First new vax in ...30 years? 🙂"

Someone posted it on Hackernews. One of the comments said, roughly, that they didn't see the significance and could someone "explain it like I'm a Computer Science undergrad." This is my attempt to reply...

Um. Now I feel like I'm 106 instead of "just" 53.

OK, so, basically all modern mass-market OSes of any significance derive in some way from 2 historical minicomputer families... and both were from the same company.

Minicomputers are what came after mainframes, before microcomputers. A microcomputer is a computer whose processor is a microchip: a single integrated circuit containing the whole processor. Before the first one, the Intel 4004, appeared in 1971, processors were made from discrete logic: lots of little silicon chips.

The main feature distinguishing minicomputers from micros is that the early micros were single-user: one computer, one terminal, one user. No multitasking or anything.

Minicomputers appeared in the 1960s and peaked in the 1970s, and cost just tens to hundreds of thousands of dollars, while mainframes cost millions and were usually leased. So minicomputers could be afforded by a company department, not an entire corporation... meaning that they were shared, by dozens of people. So, unlike the early micros, minis had multiuser support, multitasking, basic security and so on.

The most significant minicomputer vendor was a company called DEC: Digital Equipment Corporation. DEC made multiple incompatible lines of minis, many called PDP-something -- some with 12-bit logic, some with 16-bit, 18-bit, or 36-bit logic (and an unreleased 24-bit model, the PDP-2).

One of its early big hits was the 12-bit PDP-8. It ran multiple incompatible OSes, but one was called OS/8. This OS is long gone but it was the origin of a command-line interface (largely shared with TOPS-10 on the later, bigger and more expensive, 36-bit PDP-10 series) with commands such as DIR, TYPE, DEL, REN and so on. It also had a filesystem with 6-letter names (all in caps) with semi-standardised 3-letter extensions, such as README.TXT.

This OS and its shell later inspired Digital Research's CP/M OS, the first industry-standard OS for 8-bit micros. CP/M was planned to be the OS for the IBM PC, too, but IBM got a cheaper deal from Microsoft for what was essentially a clean-room re-implementation of CP/M, which IBM called "PC DOS" and Microsoft called "MS-DOS".

So DEC's PDP-8 and OS/8 directly inspired the entire PC-compatible industry, the whole x86 computer industry.

Another DEC mini was the 18-bit PDP-7. Like almost all DEC minis, this too ran multiple OSes, both from DEC and others.

A 3rd-party OS hacked together as a skunkworks project on a disused spare PDP-7 at AT&T's research labs was UNIX.

More or less at the same time as the computer industry gradually standardised on the 8-bit byte, DEC also made 16-bit and 32-bit machines.

Among the 16-bit machines, the most commercially successful was the PDP-11. This is the machine to which UNIX's creators first ported it, and in the process they rewrote it in a new language called C.

The PDP-11 was a huge success, so DEC was under commercial pressure to make an improved successor model. It did this by extending the 16-bit PDP-11 instruction set to 32 bits, creating the VAX. For this machine, the engineer behind the most successful PDP-11 OS, RSX-11, led a small team that developed a new pre-emptive multitasking, multiuser OS with virtual memory, called VMS.

(When it gained a POSIX-compliant mode and TCP/IP, it was renamed from VAX/VMS to OpenVMS.)

OpenVMS is still around: it was ported to DEC's Alpha, the first 64-bit RISC chip, and later to the Intel Itanium. Now it has been spun out from HP and is being ported to x86-64.

But the VMS project leader, Dave Cutler, and his team, were headhunted from DEC by Microsoft.

At this time, IBM and Microsoft had very acrimoniously fallen out over the failed OS/2 project. IBM kept the x86-32 version of OS/2 for the 386, which it completed and sold as OS/2 2 (and later 2.1, 3, 4 and 4.5; it is still on sale today under the name Blue Lion from Arca Noae).

At Microsoft, Cutler and his team were given the very incomplete OS/2 version 3, a planned CPU-independent portable version. Cutler et al. finished it, porting it to the new Intel RISC chip, the i860, which was codenamed "N-Ten". The resultant OS was initially called OS/2 NT, later renamed – due to the success of Windows 3 – Windows NT. Its design owes as much to DEC's VMS as it does to OS/2.

Today, Windows NT is the basis of Windows 10 and 11.

So the PDP-7, PDP-8 and PDP-11 directly influenced the development of CP/M, MS-DOS, OS/2, Windows 1 through to Windows ME.

A different line of PDPs directly led to UNIX and C.

Meanwhile, the PDP-11's 32-bit successor directly influenced the design of Windows NT.

When micros grew up and got to be 32-bit computers themselves, and vendors needed multitasking OSes with multiuser security, they turned back to 1970s mini OSes.

This project is a FOSS re-implementation of the VAX CPU on an FPGA. It is at least the 3rd such project but the earlier ones were not FOSS and have been lost.
Someone asked me if I could describe how to perform DOS memory allocation. It's not the first time, either. It's a nearly lost art. To try to illustrate that it's a non-trivial job, I decided to do something simpler: describe how DOS allocates drive letters.

I have a feeling I've done this before somewhere, but I couldn't find it, so I tried writing it up as an exercise.

Axioms:

  • DOS only understands FAT12, FAT16 and in later versions FAT32. HPFS, NTFS and all *nix filesystems will be skipped.

  • We are only considering MBR partitioning.

So:

  • Hard disks support 2 partition types: primary and logical. Logical drives must go inside an extended partition.

  • MBR supports a legal max of 4 primaries per drive.

  • Only 1 primary partition on the 1st drive can be marked "active" and the BIOS will boot that one _unless_ you have a *nix bootloader installed.

  • You can only have 1 extended partition per drive. It counts as a primary partition.

  • To be "legal" and to support early versions of NT and OS/2, only 1 DOS-readable primary partition per drive is allowed. All other partitions should go inside an extended partition.

  • MS-DOS, PC DOS and NT will only boot from a primary partition. (I think DR-DOS is more flexible, and I don't know about FreeDOS.)

Those are our "givens". Now, after all that, how does DOS (including Win9x) assign drive letters?

  1. It starts with drive letter C.

  2. It enumerates all available hard drives visible to the BIOS.

  3. The first *primary* partition on each drive is assigned a letter.

  4. Then it goes back to the start and starts going through all the physical hard disks a 2nd time.

  5. Now it enumerates all *logical* partitions on each drive and assigns them letters.

  6. So, all the logicals on the 1st drive get sequential letters.

  7. Then all the logicals on the next drive.

  8. And so on through all logicals on all hard disks.

  9. Then drivers in CONFIG.SYS are processed and if they create drives (e.g. DRIVER.SYS) those letters are assigned next.

  10. Then drivers in AUTOEXEC.BAT are processed and if they create drives (e.g. MSCDEX) those are assigned next.
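The two passes above can be sketched in a few lines of Python. This is a hypothetical illustration, not real DOS code: the function name and the way disks are encoded (a list of partitions tagged "primary" or "logical") are invented for the example.

```python
def assign_dos_letters(disks):
    """disks: one list per physical disk, each partition tagged
    "primary" or "logical". Returns (letter, disk, partition) tuples."""
    letters = iter("CDEFGHIJKLMNOPQRSTUVWXYZ")
    assignment = []
    # Pass 1: the first primary partition on each physical disk.
    for i, disk in enumerate(disks):
        for j, kind in enumerate(disk):
            if kind == "primary":
                assignment.append((next(letters), i, j))
                break
    # Pass 2: every logical partition, disk by disk.
    for i, disk in enumerate(disks):
        for j, kind in enumerate(disk):
            if kind == "logical":
                assignment.append((next(letters), i, j))
    # Drivers from CONFIG.SYS (e.g. DRIVER.SYS) and then AUTOEXEC.BAT
    # (e.g. MSCDEX) would take the next letters after this point.
    return assignment

# Disk 0: a primary plus two logicals. Disk 1: a primary plus one logical.
# C and D go to the primaries; E, F, G to the logicals, disk by disk.
print(assign_dos_letters([["primary", "logical", "logical"],
                          ["primary", "logical"]]))
```

Note how the second primary-ish pass is what surprises people: adding a second hard disk renumbers every logical drive on the first one.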

So you see... it's quite complicated. :-)

Assigning upper memory blocks is more complicated.

NT changes this and I am not 100% sure of the details. From observation:

  • NT 3 did the same, but with the addition of HPFS and NTFS drives.

  • NT 4 does not recognise HPFS at all but the 3.51 driver can be retrofitted.

  • NT 3, 4 & 5 (Win2K) *require* that partitions are in sequential order.

Numbers may be missing but you can't have, say:
[part № 1] [part № 2] [part № 4] [part № 3]

They will blue-screen on boot if you have this. Linux doesn't care.
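That ordering rule is easy to express: the partition numbers must be strictly increasing in on-disk order, with gaps allowed. A tiny hypothetical check (the function name is mine):

```python
def nt_boot_safe(numbers_in_disk_order):
    # NT 3/4/5 tolerate gaps in the partition numbering, but not
    # reordering: each number must exceed the one physically before it.
    return all(a < b for a, b in zip(numbers_in_disk_order,
                                     numbers_in_disk_order[1:]))

print(nt_boot_safe([1, 2, 4, 5]))  # gap after 2: fine
print(nt_boot_safe([1, 2, 4, 3]))  # 3 after 4: NT 3/4/5 blue-screen on boot
```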

Riders:

  1. The NT bootloader must be on the first primary partition on the first drive.

  2. (A 3rd party boot-loader can override this and, for instance, multi-boot several different installations on different drives.)

  3. The rest of the OS can be anywhere, including a logical drive.

NT 6 (Vista) & later can handle out-of-order partitions, but this is because MS rewrote the drive-letter allocation algorithm. (At least I think this is why, but I do not know for sure; it could be a coincidence.)

Conditions:

  • The NT 6+ bootloader must be in the same drive as the rest of the OS.

  • The bootloader must be on a primary partition.

  • Therefore, NT 6+ must be in a primary partition, a new restriction.

  • NT 6+ must be installed on an NTFS volume; therefore it can no longer dual-boot with DOS on its own, and a 3rd-party bootloader is needed.

NT 6+ just does this:

  1. The drive where the NT bootloader is becomes C:

  2. Then it allocates all readable partitions on drive 1, then all those on drive 2, then all those on drive 3, etc.
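As a sketch, the NT 6+ scheme is a single pass rather than the two-pass DOS dance. Again this is hypothetical illustrative code, with an invented encoding (each partition marked readable or not):

```python
def assign_nt6_letters(disks, boot):
    """disks: per-disk lists marking each partition readable (True) or not.
    boot: (disk, partition) holding the NT bootloader."""
    letters = iter("CDEFGHIJKLMNOPQRSTUVWXYZ")
    assignment = {boot: next(letters)}  # the bootloader's volume is always C:
    # One pass: every other readable partition, in plain disk order.
    for i, disk in enumerate(disks):
        for j, readable in enumerate(disk):
            if readable and (i, j) != boot:
                assignment[(i, j)] = next(letters)
    return assignment

# Bootloader on disk 1: that volume becomes C:, then everything else
# follows in simple disk order.
print(assign_nt6_letters([[True], [True, True]], boot=(1, 0)))
```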

So just listing the rules is quite complicated. Turning it into a step-by-step how-to guide would be significantly longer and more complex. As an example, the much simpler process of cleaning up Windows 7/8.x/10 in preparation for dual-booting took me several thousand words, and I skipped some entire considerations to keep it that "short".

Errors & omissions excepted, as they say. Corrections and clarifications very welcome. To comment, you don't need an account — you can sign in with any OpenID, including Facebook, Twitter, UbuntuOne, etc.
EDIT: this post has attracted discussion and comments on various places, and some people are disputing its accuracy. So, I've decided to make some edits to try to clarify things.

When Windows 2 was launched, there were two editions: Windows, and Windows/386.

The ordinary "base" edition of Windows 2.0x ran on an XT-class computer: that is, an Intel 8088 or 8086 CPU. These chips can only directly access a total of 1MB of memory, of which the highest 384kB was reserved for ROM and I/O: so, a maximum 640kB of RAM. That was not a lot for Windows, even then. But both DOS and Windows 2.x did support expanded memory (Lotus-Intel-Microsoft-specification EMS). I ran Windows 2 on 286s and 386s at work, and on 386 machines I used Quarterdeck's QEMM386 to turn the extended memory that Windows 2 couldn't see or use into expanded memory that it could.

The Intel 80286 could access up to 16MB of memory. But all except the first 640kB was basically invisible to DOS and DOS apps. Only native 16-bit protected-mode programs could access it, and there were barely any — Lotus 1-2-3 r3 was one of the few, for instance.

There was one exception to this: due to a quirk in the 286 (real-mode addresses no longer wrapped around at 1MB as they had on the 8086), the first 64kB of memory above 1MB (less 16 bytes) could be accessed in DOS's Real Mode. This was called the High Memory Area (HMA). 64kB wasn't much even then, but still, it added 10% to the amount of usable memory on a 286. DOS 3 couldn't do anything with this – but Windows 2 could.
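The arithmetic behind the HMA is simple segment maths: a real-mode address is segment × 16 + offset. On an 8086 the 21st bit of the result was discarded, wrapping it back to zero; on a 286 (with the A20 line enabled) it isn't. A small Python illustration of my own, not anything from DOS itself:

```python
def linear(seg, off, wrap_1mb=False):
    """Compute the physical address for a real-mode segment:offset pair.
    wrap_1mb=True models the 8086, which truncated to 20 address bits."""
    addr = (seg << 4) + off
    return addr & 0xFFFFF if wrap_1mb else addr

print(hex(linear(0xFFFF, 0x0010)))                 # first HMA byte on a 286
print(hex(linear(0xFFFF, 0xFFFF)))                 # top of the HMA
print(hex(linear(0xFFFF, 0x0010, wrap_1mb=True)))  # wraps to zero on an 8086
```

This is also why the HMA is 64kB less 16 bytes: at segment 0xFFFF, offsets 0x0010 through 0xFFFF span 0x100000 to 0x10FFEF, which is 65,520 bytes.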

Windows 2 and 2.01 were not successful, but some companies did release applications for them – notably, Aldus' PageMaker desktop publishing (DTP) program. So, Microsoft put out some bug-fix releases: I've found traces of 2.01, 2.03, 2.11 and finally 2.12.


When Windows 2.1x was released, MICROS~1 did a little re-branding. The "base" edition of Windows 2.1 was renamed Windows/286. In some places, Microsoft itself claims that this was a special 286 edition of Windows 2 that ran in native 80286 mode and could access all 16MB of memory.

But some extra digging by people including Mal Smith has uncovered evidence that Windows/286 wasn't all it was cracked up to be. For one thing, without the HIMEM.SYS driver, it runs perfectly well on 8088/8086 PCs – it just can't access the 64kB HMA. Microsoft long ago erased the comments to Raymond Chen's blog post, but they are on the Wayback Machine.

So the truth seems to be that Windows/286 didn't really have what would later be called Standard Mode and didn't really run in the 286's protected mode. It just used the HMA for a little extra storage space, giving more room in conventional memory for the Windows real-mode kernel and apps.

So, what about Windows/386?


The new 80386 chip had an additional mode on top of 8/16-bit (8088/8086-compatible) and fully-16-bit (80286-compatible) modes. The '386 had a new 32-bit mode – now called x86-32 – which could access a vast 4GB of memory. (In 1985 or so, that would have cost hundreds of thousands of dollars, maybe even $millions.)

However, this was useless to DOS and DOS apps, which could still only access 640kB (plus EMS, of course).

But Intel learned from the mistake of the 286 design. The 286 needed new OSes to access all of its memory, and even they couldn't give DOS apps access to that RAM.

The 386 "fixed" this. It could emulate, in hardware, multiple 8086 chips at once and even multitask them. Each got its own 640kB of RAM. So if you had 4MB of RAM, you could run 6 separate full-sized DOS sessions (6 × 640kB = 3.75MB) and still have 256kB left over for a multitasking OS to manage them. DOS alone couldn't do this!

There were several replacement OSes to allow this. At least one of them is now FOSS -- it's called PC-MOS 386.

Most of these 386 DOS-compatible OSes were multiuser OSes — the idea was you could plug some dumb terminals into the RS-232 ports on the back of a 386 PC and users could run text-only DOS apps on the terminals.

But some were aimed at power users, who had a powerful 386 PC to themselves and wanted multitasking while keeping their existing DOS apps.

My personal favourite was Quarterdeck DESQview. It worked with the QEMM386 memory manager and let you multitask multiple DOS apps, side by side, either full-screen or in resizable windows. It ran on top of ordinary MS-DOS.

Microsoft knew that other companies were making money off this fairly small market for multitasking extensions to DOS. So, it made a third special edition of Windows 2, called Windows/386, which supported 80386 chips in 32-bit mode and could pre-emptively multitask DOS apps side-by-side with Windows apps.

Windows programs, including the Windows kernel itself, still ran in 8086-compatible Real Mode and couldn't use all this extra memory, even on Windows/386. All Windows/386 did was provide a loader that converted all the extra memory above 1MB in your high-end 386 PC – that is, extended (XMS) memory – into expanded (EMS) memory that both Windows and DOS programs could use.

The proof of this is that it's possible to launch Windows/386 on an 8086 computer, if you bypass the special loader. Later on, this loader became the basis of the EMM386 driver in MS-DOS 4, which allowed DOS to use the extra memory in a 386 as EMS.


TBH, Windows/386 wasn't very popular or very widely-used. If you wanted the power of a 386 with DOS apps, then you probably were fine with or even preferred text-mode stuff and didn't want a GUI. Bear in mind this is long before graphics accelerators had been invented. Sure you could tile several DOS apps side-by-side, but then you could only see a little bit of each one -- VGA cards and monitors only supported 640×480 pixels. Windows 2 wasn't really established enough to have special hi-res superVGA cards available for it yet.*

Windows/386 could also multitask DOS apps full-screen, and if you used graphical DOS apps, you had to run them full-screen. Windows/386 couldn't run graphical DOS apps inside windows.

But if you used full-screen multitasking, with hotkeys instead of a mouse, then why not use something like DESQview anyway? It used way less disk and memory than Windows, and it was quicker and had no driver issues, because it didn't support any additional drivers.

The big mistake MS and IBM made when they wrote OS/2 was that they should have targeted the 386 chip, instead of the 286.

Microsoft knew this – it even had a prototype OS/2 1 for the 386, codenamed "Sizzle" and "Football" – but IBM refused, because it had promised OS/2 to the many thousands of customers to whom it had sold 286 PS/2 machines. The customers didn't care; they didn't want OS/2. And this mistake cost IBM the entire PC industry.

If OS/2 1 had been a 386 OS, it could have multitasked DOS apps, and PC power users would have been all over it. But it wasn't; it was a 286 OS, and it could only run 1 DOS app at a time. For that, the expensive upgrade and the extra RAM it needed weren't worth it.

So OS/2 bombed. Windows 2 bombed too. But MS was so exasperated by IBM's intransigence that it went back to the dead Windows 2 product, gave it a facelift with the look-and-feel stolen from OS/2 1.2, and used some very clever hacks to combine the separate Windows (i.e. 8086), Windows/286 and Windows/386 programs into a single binary product. The WIN.COM loader looked at your system spec and decided whether to start the 8086 kernel (KERNEL.EXE), the 286 kernel (DOSX.EXE) or the 386 kernel (WIN386.EXE).

If you ran Windows 3 on an 8086 or a machine with only 640kB (i.e. no XMS), you got a Real Mode 8086-only GUI on top of DOS.

If you ran Win3 on a 286 with 1MB-1¾MB of RAM then it launched in Standard Mode and magically became a 16-bit DOS extender, giving you access to up to 16MB of RAM (if you were rich and eccentric).*

If you ran W3 on a 386 with 2MB of RAM or more, it launched in 386 Enhanced Mode and became a 32-bit multitasking DOS extender and could multitask DOS apps, give you virtual memory and a memory space of up to 4GB.

All in a single product on one set of disks.
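Roughly, the decision WIN.COM made at startup looks like this. This is a loose sketch based only on the description above: the thresholds and kernel file names are as given in the text, but the function itself is invented.

```python
def pick_mode(cpu, ram_kb):
    """cpu: "8086", "286" or "386"; ram_kb: total RAM in kB."""
    if cpu == "386" and ram_kb >= 2048:
        return "386 Enhanced Mode (WIN386.EXE)"
    if cpu in ("286", "386") and ram_kb >= 1024:
        return "Standard Mode (DOSX.EXE)"
    return "Real Mode (KERNEL.EXE)"

print(pick_mode("386", 4096))   # 386 Enhanced Mode (WIN386.EXE)
print(pick_mode("286", 1024))   # Standard Mode (DOSX.EXE)
print(pick_mode("8086", 640))   # Real Mode (KERNEL.EXE)
```

Note that a 386 with only 1MB falls through to Standard Mode, just as a 286 would, which matches how Windows 3.0 behaved on low-memory 386s.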

This was revolutionary, and it was a huge hit...

And that about wrapped it up for OS/2.

Windows 3.0 was very unreliable and unstable. It often threw what it called an Unrecoverable Application Error (UAE) – which even led to a joke T-shirt that said "I thought UAE was a country in Arabia until I discovered Windows 3!"... but when it worked, what it did was amazing for 1990.

Microsoft eliminated UAEs in Windows 3.1, partly by a clever trick: it renamed the error to "General Protection Fault" (GPF) instead.

Me, personally, always the contrarian, I bought OS/2 2.0 with my own money and I loved it. It was much more stable than Windows 3, multitasked better, and could do way more... but Win3 had the key stuff people wanted.

Windows 3.1 bundled the separate Multimedia Extensions for Windows and made it a bit more stable. Then Windows for Workgroups bundled all that with networking, too!

Note — in the DOS era, all apps needed their own drivers. Every separate app needed its own printer drivers, graphics drivers (if it could display graphics in anything other than the standard CGA, EGA, VGA or Hercules modes), sound drivers, and so on.

One of WordPerfect's big selling points was that it had the biggest and best set of printer drivers in the business. If you had a fancy printer, WordPerfect could handle it and use all its special fonts and so on. Quite possibly other mainstream offerings couldn't, so if you ran WordStar or MultiMate or something, you only got monospaced Courier in bold, italic, underline and combinations thereof.

This included networking. Every network vendor had their own network stack with their own network card drivers.

And network stacks were big and each major vendor used their own protocol. MS used NetBEUI, Novell used IPX/SPX, Apple used AppleTalk, Digital Equipment Corporation's PATHWORKS used DECnet, etc. etc. Only weird, super-expensive Unix boxes that nobody could afford used TCP/IP.

You couldn't attach to a Microsoft server with a Novell network stack, or to an Apple server with a Microsoft stack. Every type of server needed its own unique special client.

This basically meant that a PC couldn't be on more than one type of network at once. The chance of getting two complete sets of drivers working together was next to nil, and if you did manage it, there'd be no RAM left to run any apps anyway.

Windows changed a lot of things, but shared drivers were a big one. You installed one printer driver and suddenly all your apps could print. One sound driver and all your apps could make noises, or play music (or if you had a fancy sound card, both!) and so on. For printing, Windows just sent your printer a bitmap — so any printer that could print graphics could suddenly print any font that came with Windows. If you had a crappy old 24-pin dot-matrix printer that only had one font, this was a big deal. It was slow and it was noisy but suddenly you could have fancy scalable fonts, outline and shadow effects!

But when Microsoft threw networking into this too, it was transformative. Windows for Workgroups broke up the monolithic network stacks. Windows drove the card, then Windows protocols spoke to the Windows driver for the card, then Windows clients spoke to the protocol.

So now, if your Netware server was configured for AppleTalk, say — OK, unlikely, but it could happen, because Macs only spoke AppleTalk — then Windows could happily access it over AppleTalk with no need for IPX.

The first big network I built with Windows for Workgroups, I built dual-stack: IPX/SPX and DECnet. The Netware server was invisible to the VAXen, and vice versa, but WfWg spoke to both at once. This was serious black magic stuff.

This is part of why, over the next few years, TCP/IP took off. Most DOS stuff never really used TCP/IP much — pre-WWW, very few of us were on the Internet. So, chaos reigned. WfWg ended that. It spoke to everything through one stack, and it was easy to configure: just point-and-click. Original WfWg 3.1 didn't even include TCP/IP as standard: it was an optional extra on the disk which you had to install separately. WfWg 3.11 included 16-bit TCP/IP but later Microsoft released a 32-bit TCP/IP stack, because by 1994 or so, people were rolling out PC LANs with pure IP.



* Disclaimer: this is a slight over-simplification for clarity, one of several in this post. A tiny handful of SVGA cards existed, most of which needed special drivers, and many of which only worked with a tiny handful of apps, such as one particular CAD program, or the GEM GUI, or something obscure. Some did work with Windows 2, but if they did, they were all-but unusable because Windows 2's core all had to run in the base 640kB of RAM and it very easily ran out of memory. Windows 3 was not much better, but Windows 3.1 finally fixed this a bit.

So if you had an SVGA card and Windows/286 or Windows/386 or even Windows 3.0, you could possibly set some super-hires mode like 1024×768 in 16 colours... and admire it for whole seconds, then launch a few apps and watch Windows crash and die. If you were in something insane like 24-bit colour, you might not even get as far as launching a second app before it died.

Clarification for the obsessive: when I said 1¾MB, that was also a simplification. The deal was this:

If you had a 286 & at least 1MB RAM, then all you got was Standard Mode, i.e. 286 mode. More RAM made things a little faster – not much, because Windows 3 didn't have its own disk cache, relying on DOS to do that. If you had 2 MB or 4 or 8 or 16 (not that anyone sane would put 16MB in a 286, as it would cost $10,000 or something) it made no odds: Standard Mode was all a 286 could do.

If you had a 386 and 2MB or more RAM, you got 386 Enhanced Mode. This really flew if you had 4MB or more, but very few machines came with that much except some intended to be servers, running Unix of one brand or another. Ironically, the only budget 386 PC with 4MB was the Amstrad 2386, a machine now almost forgotten by history. Amstrad created the budget PC market in Europe with the PC1512 and PC1640, both 8086 machines with 5.25" disk drives.

It followed this with the futuristic 2000 series. The 2086 was an unusual PC – an ISA 8086 with VGA. The 2286 was a high-end 286 for 1988: 1MB RAM & a fast 12.5MHz CPU.

But the 2386 had 4MB as standard, which was an industry-best and amazing for 1988. When Windows 3.0 came out a couple of years later, this was the only PC already on the market that could do 386 Enhanced Mode justice, and easily multitask several DOS apps and big high-end Windows apps such as PageMaker and Omnis. Microsoft barely offered Windows apps yet – early, sketchy versions of Word and Excel, nothing else. I can't find a single page devoted to this remarkable machine – only its keyboard.

The Amstrad 2000 series bombed. They were premature: the market wasn't ready and few apps used DOS extenders yet. Only power users ran OS/2 or DOS multitaskers, and power users didn't buy Amstrads. Nor did people who wanted a server for multiuser OSes such as Digital Research's Concurrent DOS/386.

Amstrad's other bold design move was gambling on 5.25" floppies going away, replaced by 3.5" diskettes. They were right, of course – and so the 2000 series had no 5.25" bays, allowing for a sleek, almost aerodynamic-looking case. But Amstrad couldn't foresee that soon CD-ROM drives would be everywhere, then DVDs and CD burners, and the 5.25" bay would stick around for another few decades.
My previous post was an improvised and unplanned comment. I could have structured it better, and it caused some confusion on https://lobste.rs/

Dave Cutler did not write OS/2. AFAIK he never worked on OS/2 at all in the days of the MS-IBM pact -- he was still at DEC then.

Many sources focus on only one side of the story -- the DEC side. This is important, but it is only half the tale.

IBM and MS got very rich working together on x86 PCs and MS-DOS. They carefully planned its successor: OS/2. IBM placed restrictions on this which crippled it, but it wasn't apparent at the time just how bad this would turn out to be.

In the early-to-mid 1980s, it seemed apparent to everyone that the most important next step in microcomputers would be multitasking.

Even small players like Sinclair thought so -- the QL was designed as the first cheap 68000-based home computer. No GUI, but multitasking.

I discussed this a bit in a blog post a while ago: http://liam-on-linux.livejournal.com/46833.html

Apple's Lisa was a sideline: too expensive. Nobody picked up on its true significance.

Then, 2 weeks after the QL, came the Mac. Everything clever but expensive in the Lisa stripped out: no multitasking, little RAM, no hard disk, no slots or expansion. All that was left was the GUI. But that was the most important bit, as Steve Jobs saw and nobody much else did.

So, a year later, the ST had a DOS-like OS but a bolted-on GUI. No shell, just a GUI. Fast-for-the-time CPU, no fancy chips, and it did great. It had the original, uncrippled version of DR GEM. Apple's lawsuit meant that PC GEM was crippled: no overlapping windows, no desktop drive icons or trashcan, etc.


Windows NT was allegedly partly developed on OS/2. Many MSers loved OS/2 at the time -- they had co-developed it, after all. But there was more to it than that.

Windows NT was partly based on OS/2. There were 3 branches of the OS/2 codebase:

[a] OS/2 1.x – at IBM’s insistence, for the 80286. The mistake that doomed OS/2 and IBM’s presence in the PC industry, the industry it had created.

[b] OS/2 2.x – IBM went it alone with the 80386-specific version.

[c] OS/2 3.x – Portable OS/2, planned to be ported to multiple different CPUs.

After the “divorce”, MS inherited Portable OS/2. It was a skeleton and a plan. Dave Cutler was hired from DEC, which had refused to let him pursue his PRISM project for a modern CPU and a successor to VMS. Cutler was given the Portable OS/2 project to complete. He did, fleshing it out with concepts and plans derived from his experience with VMS and his designs for PRISM.

