Separate processors in storage, in terminals, in networking controllers, in printers, in everything. Typically the machine cannot actually drive any output or input directly (e.g. mouse movements, or keystrokes, or anything): peripherals do that, collect and encode the results, and send them over a network. So as someone else commented, a mainframe isn't really a computer, it's a whole cluster of closely-coupled computers, many with dedicated functionality. Quite possibly all implemented with different architectures, instruction sets, programming languages, bit widths, everything.
Here's an article I wrote about a decade back on how IBM added the new facility of "time sharing" -- i.e. multiple users on terminals, all interacting with the host at the same time -- by developing the first hypervisor and running 1 OS per user in VMs, because the IBM OSes of the time simply could not handle concepts like "terminal sessions".
Minicomputer: the 2nd main style of computer. Smaller, cheaper, so affordable for a company department to own one, while mainframes were and are mostly leased to whole corporations. Typically 1 CPU in early ones, implemented as a whole card cage full of multiple boards: maybe a few boards with registers, one with an adder, etc. The "CPU" is a cabinet with thousands of chips in it.
Hallmarks: large size; inherently multitasking, so that multiple users can share the machine, accessing it via dumb terminals on serial lines. User presses a key, the keystroke is sent over the wire, the host displays it. Little to no networking. One processor, maybe several disk or tape drives, also dumb and controlled by the CPU. No display on the host. No keyboard or other user input on the host. All interaction is via terminals. But because they were multiuser, even the early primitive ones had fairly smart OSes which could handle multiple user accounts and so on.
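For what it's worth, here's a tiny sketch of what that host-side interaction boils down to: read a byte per keystroke, echo it back, and only act when a whole line has arrived. The function and parameter names are invented for this illustration; real terminal drivers did far more (line editing, flow control, escape sequences).

```python
# Minimal sketch of the host side of a dumb-terminal session on a serial line.
# Illustrative only: names are made up, and this is not any particular mini's driver.

def terminal_session(read_byte, write_bytes, handle_line):
    """Echo each keystroke back to the terminal and hand completed lines to the host.

    read_byte:   callable returning one byte from the line (b'' on hangup)
    write_bytes: callable sending bytes back to the terminal
    handle_line: callable invoked with each completed command line as a string
    """
    line = bytearray()
    while True:
        ch = read_byte()
        if not ch:                       # line dropped / user hung up
            break
        if ch in (b'\r', b'\n'):         # Return ends the command line
            write_bytes(b'\r\n')
            handle_line(line.decode('ascii', errors='replace'))
            line.clear()
        else:
            write_bytes(ch)              # "remote echo": the host paints the character
            line.extend(ch)


if __name__ == '__main__':
    # Stand-in for a real serial port object; a real host would read from the UART.
    import io
    incoming = io.BytesIO(b'DIR\r')      # pretend the user typed "DIR" then Return
    terminal_session(
        read_byte=lambda: incoming.read(1),
        write_bytes=lambda b: None,      # discard the echo in this demo
        handle_line=lambda cmd: print('host received command:', cmd),
    )
```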
Gradually acquired networking and so on, but later.
In some classic designs, like some DEC PDPs, adding new hardware actually adds new instructions to the processor's instruction set.
Wildly variable and diverse architectures. Word lengths of 8 bit, 9 bit, 12 bit, 18 bit, 24 bit, 32 bit, 36 bit and others. Manufacturers often had multiple incompatible ranges, and maybe several different OSes per model range depending on the intended task, offering dozens of totally different and incompatible OSes across half a dozen models.
Microcomputer: the simplest category. The entire processor is implemented on a single silicon chip, a microprocessor. Early machines very small and simple, driven by 1 terminal with 1 user. No multitasking, no file or other resource sharing, no networking, no communications except typically 1 terminal and maybe a printer. Instead of 1 computer per department, 1 computer per person. Facilities added by standardised expansion cards.
This is the era of standardisation and commoditisation. Due largely to microcomputers, things like the size of bytes, their encoding and so on were fixed. 8 bits to a byte, ASCII coding, etc.
Gradually grew larger: 16-bit, then 32-bit, etc. In the early '80s they gained onboard ROM, typically with a BASIC interpreter, plus on-board graphics and later sound. By the mid-'80s they went to 16-bit, with multicolour graphics (256+ colours) and stereo sound. Lots of incompatible designs, but usually 1 OS per company, used for everything. All single-user boxes.
These outperformed most minis, and minis died out. Some minis gained a hi-res graphical display and turned into single-user deskside "workstations", keeping their multitasking OS, usually a UNIX by this point. Prices remained an order of magnitude higher than PCs', and the processors were proprietary, closely-guarded secrets, sometimes still implemented across multiple discrete chips. Gradually these got integrated into single-chip devices, but they usually weren't very performance-competitive and got displaced by RISC processors, built to run compiled C quickly.
In the '90s, generalising wildly, networking became common, and 32-bit designs became affordable. Most of the 16-bit machines died out and the industry standardised on MS Windows and classic MacOS. As internet connections became common in the late '90s, multitasking and a GUI were expected along with multimedia support.
Apple bought NeXT, abandoned its proprietary OS and switched to a UNIX.
Microsoft headhunted DEC's team from the cancelled MICA project, merged it with the Portable OS/2 project, and got them to finish OS/2 NT, later Windows NT: first on the "N-Ten" CPU (the Intel i860, a RISC chip), then on MIPS, and later on x86-32 and other CPUs. This was the first credible commercial microcomputer OS that could be both a client and a server, and it ultimately killed off all the proprietary dedicated-server OSes, of which the biggest was Novell Netware.
That is a vastly overlong answer but it's late and I'm just braindumping.
Mainframe: big tightly-clustered bunch of smart devices, flying in close formation. Primary role, batch driven computing, non-interactive; interactivity bolted on later.
Mini: departmental shared computer with dumb terminals and dumb peripherals, interactive and multitasking from the start. Mostly text-only, with interactive command-line interfaces -- "shells" -- and multiprogramming or multitasking OSes. Few had batch capabilities; no graphics, no direct I/O, maybe rare graphical terminals for niche uses. Origins of the systems that inspired CP/M, MS-DOS, VMS, UNIX, and NT.
Micro: single-chip CPU, single-user machines, often with graphics and sound early on. Later gained GUIs, then later than that networking, and evolved to be servers as well.
If the machine can control and be controlled by a screen and keyboard plugged into the CPU, it's a micro. If its CPU family has always been a single chip from the start, it's a micro. If it boots into some kind of firmware OS loader, it's probably a micro. The lines between micros and UNIX workstations are a bit blurred.
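To make those rules of thumb concrete, here's a toy sketch of the heuristic as a wee classifier. The attribute names are invented for this illustration, and real machines (workstations especially) won't always fit so neatly.

```python
# A toy encoding of the rules of thumb above, just to make the taxonomy concrete.
# Illustrative assumptions only; real hardware history is much messier.

from dataclasses import dataclass

@dataclass
class Machine:
    keyboard_and_screen_on_cpu: bool   # console plugged straight into the CPU box?
    cpu_always_single_chip: bool       # CPU family single-chip from the very start?
    boots_via_firmware_loader: bool    # boots into a firmware OS loader?

def classify(m: Machine) -> str:
    if m.keyboard_and_screen_on_cpu or m.cpu_always_single_chip:
        return "micro"
    if m.boots_via_firmware_loader:
        return "probably a micro"
    return "mini or mainframe -- look at the peripherals and the OS"

# A 1980s home computer ticks every box; a VAX-11/780 ticks none of them.
print(classify(Machine(True, True, True)))
print(classify(Machine(False, False, False)))
```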
no subject
Date: 2022-10-06 08:50 pm (UTC)
So today's workstations run Windows NT (Windows 10 or 11), with Linux and macOS as alternatives that are strongest in specific sub-markets.
no subject
Date: 2022-10-07 04:17 pm (UTC)
Indeed so.
Minis, arguably, evolved into workstations, and then were replaced by micros that evolved into very slightly different, and rather more standard, workstations.
It's a sort of convergent evolution.
no subject
Date: 2022-10-07 07:49 pm (UTC)
Silicon Graphics turns out to have an even more interesting history. Their very first products were terminals for what were definitely minicomputers (e.g. VAXes), but they rapidly evolved into workstations and servers based around microprocessors, taking the same approach as Sun had.
no subject
Date: 2022-10-07 09:25 pm (UTC)
Interesting. OK, fair point.
How would you revise the taxonomy then? Most workstations were micros, but a few were derived from minicomputer tech? Or the other way round?
As history_monk says above, a valid interpretation is that high-end PCs basically evolved into things that were workstations in all but ancestry.
no subject
Date: 2022-10-07 10:11 pm (UTC)
This replacement included classical minicomputer companies like DEC. DEC still sold server VAXes, but they were increasingly implemented with single-chip VAX CPUs, not with discrete components (although I think they remained physically big). And these single-chip VAX CPUs let DEC make VAX workstations that had the small form factor of Sun, SGI, and so on workstations, a form factor that wasn't possible with 'minicomputer' discrete components. At one point we had both DEC Ultrix VAX and MIPS workstations, and you mostly couldn't tell them apart from the outside.
In the 1990s, the rising tide of x86 capability and volume erased any remaining hardware advantage that Unix vendors had, first in workstation-size machines and later in servers. By the end of the 1990s, x86 desktop machines were objectively better and cheaper than Unix vendor workstations for anything except expensive, high-end graphics work. In 1999, we evaluated replacements for our 1996-era SGI Indys, and x86 desktops running Linux crushed the competition (to a degree that was sad to see), even though Sun's workstations of the era used a lot of PC interfaces and hardware. And obviously in the modern era the 'minicomputer' form factor is pretty much dead, because you can get a very powerful machine into the small '1U' rack form factor.
no subject
Date: 2022-10-09 12:10 pm (UTC)
I take your point.
Workstations do confuse the whole question. :-(
no subject
Date: 2022-10-24 11:49 am (UTC)
Delineating 'micros' as a single category is problematic, as some have said already, especially with Workstations spanning the era from late minis to higher-powered desktops. Some late minis made use of better silicon integration, using bit-slice chips in their implementation, as did early Workstations like the Alto and PERQ. Some mini CPUs found their way into silicon, maybe requiring one or a few chips to implement; the PDP-11 was implemented in silicon as the LSI-11, for example.
As things moved into the desktop age, 'micros' started to require additional chips for what we might now consider basic CPU functions: memory controllers, MMUs, FPUs. How much, during the PC era, are Desktops and Servers true single-chip CPUs in the same way as 8-bit Home computers, given that they can't run without a specifically designed chipset?
And then how much does a modern Desktop or Server fall into the category of 'mainframe' above? There are so many processors in a modern PC — not just the overt ones like CPUs and GPUs, but the microcontrollers embedded here, there and everywhere.
And then we start to think about the regression to mainframe/computer bureau architecture of cloud computing, where the super-computer on our desks merely provides an old mainframe-style terminal to applications running on a big central mainframe-style remote computing service.
So personally, I see a computer lineage and categorisation chart at best like a diagram of a complicated railway and at worst like a bowl of spaghetti 😊