liam_on_linux: (Default)
OS/2 2.0 came out in April 1992.

Windows 3.0 came out in May 1990, 2 whole years earlier. It already had established an ecosystem before 32-bit OS/2 appeared.

Secondly, OS/2 2 really wanted a 386DX and 4MB of RAM, and a quality PC with quality name-brand parts. I owned it. I ran it on clones. I had to buy a driver for my mouse. From another CONTINENT.

Windows 3.0 ran on any old random junk PC, even on a PC XT class box with EGA. At first only high-end users of high-end executive-class workstations got the fun of 386 Enhanced Mode, but that was all OS/2 2.0 could run on at all.

OS/2 died when OS/2 1.x turned out to be a high-end OS with low-end features, one that a cheapo low-end 386SX PC with 1 or 2MB of RAM, running MS-DOS and DESQview (not DESQview/X, just plain old text-mode DESQview), could outperform.

(Remember the 386SX came out in 1988 and was common by the time Windows 3.0 shipped.)               

But as soon as OS/2 1.x was a flop, MS turned its attention back to Windows, and before even the first betas of Windows 3.0, there were rumours in the tech press that MS was going to abandon OS/2 -- something that was widely discussed at the time.

In my then-job, around 1989, my boss sent me on a training course for 3Com's new NOS, 3+Open, which was based on OS/2 1.0.

At the time I did not realise it was a clever evaluation strategy. He knew that if he'd simply given me a copy to play with, I might have come back enthusing about it. Instead, being trained on it, I got told about some of the holes and weaknesses.

I came back, told them it was good but only delivered with OS/2 clients and had no compelling features for DOS clients, and they were very pleased -- and the company went on to start selling Novell Netware instead.

Looking back that was a good choice. In the late 1980s Netware 2 and 3 were superb server OSes for DOS, and OS/2 with LAN Manager wasn't.

But yes, I think from soon after OS/2 launched, it was apparent and widely reported to IBM that MS was not happy, and as soon as MS started talking about launching a new updated version of Windows -- circa 1989 -- it was very clear to IBM that MS was preparing a lifeboat and would soon abandon ship. By the time Windows 3.0 came out MS had left the project. Its consolation prize was to keep Portable OS/2, also called OS/2 3.0, later OS/2 NT.

This wasn't secret, and everyone in the industry knew about it. The embarrassment to IBM was considerable and I think that's why IBM threw so many people and so much money at OS/2 2.x. It was clear early on that although it was a good product, it wasn't good enough.

NT 3.1 launched in July 1993, just a couple of months after OS/2 2.1, and NT made it pretty clear that OS/2 2.x was toast.

I deployed NT 3.1 in production and supported it. Yes, it was big and it needed a serious PC. OS/2 2.0 was pretty happy on a 486 in 4MB of RAM and ran well in 8MB. NT 3.1 needed 16MB to be useful and really wanted a Pentium.

But NT was easier to install. For instance, you could copy the files from CD to hard disk and then run the installer from MS-DOS. OS/2 had to boot to install, and it had to boot with CD drivers to install from CD. Not trivial to achieve: ATAPI CD-ROM drives hadn't been invented yet. It was expensive SCSI drives and a driver for your SCSI card, or proprietary interfaces and proprietary drivers, and many of those were DOS-only.

NT didn't have OS/2's huge, complicated CONFIG.SYS file. NT had networking and sound and so on integrated as standard, while they were paid-for optional extras on OS/2.

And NT ran Windows 3 apps better than OS/2, because each Windows 3 app had its own resource heaps under NT. Since the 64kB heap was the critical limitation on Win3 apps, NT ran them better than actual Windows 3.

If you could afford the £5000 PC to run NT, it was a better OS. OK, its UI was the clunky (but fast) Windows 3 Program Manager, but it worked. OS/2's fancy Workplace Shell was more powerful but harder to use. E.g. why on earth did some IBMer think needing to use the right mouse button to drag an icon was a good idea?

I owned OS/2, I used it, and I liked it. I am still faintly nostalgic for it.

But aside from the fancy front-end, NT was better.

NT 3.5 was smaller, faster and better still. NT 3.51 was even smaller, faster and stabler than that, and was in some ways the high point of NT performance. It ran well in 8MB of RAM and very well in 16MB. On 32MB of RAM, if you were that rich, NT users like me could poke fun at people with £20K-£30K UNIX workstations, because a high-end PC was as fast, as stable, and had a lot more apps and a much easier UI.

Sad to say, but the writing was on the wall for OS/2 by 1989 or so, before 2.0 even launched. By 1990 Windows 3.0 was a hit. By 1992 Windows 3.1 was a bigger hit and by 1993 it was pretty clear that it was all over bar the shouting.

There was a killer combination that had a chance, but not a good one: Novell Netware for OS/2. Netware ran on OS/2 and that made it a useful non-dedicated server. IBM could have bought Novell, combined the products, and had a strong offering. Novell management back then were slavering to outcompete Microsoft; that's why Caldera happened, and why Novell ended up buying SUSE.

(For whom I worked until last year.)

OS/2 plus Netware as a server platform had real potential, and IBM could have focussed on server apps. IBM had cc:Mail and Lotus Notes for email, it had the DB2 database, and soon it would have WebSphere. It had the products to bundle to make OS/2 a good deal as a server, but it wanted to push the client.

liam_on_linux: (Default)
A HN poster questioned the existence of 80286 computers with VGA displays.

In fact the historical link between the 286 and VGA is significant, and represents one of the most important events in the history of x86 computers.

The VGA standard, along with PS/2 keyboard and mouse ports, 1.4MB 3.5" floppies, and even 72-pin SIMMs, was introduced with IBM's PS/2 range of computers in 1987.

The original PS/2 range included:

• Model 50 -- desktop 286.

• Model 60 -- tower 286.

• Model 70 -- desktop 386DX.

• Model 80 -- tower 386DX. (I still have one. One of the best-built PCs ever made.)

All had the Microchannel (MCA) expansion bus, and VGA as standard.

Note, I am not including the Model 30, as it wasn't a true PS/2: no MCA, and no VGA, just MCGA.

IBM promised buyers that they would be able to run the new OS/2 operating system it was working on with Microsoft at the time.

This is the reason why IBM insisted OS/2 must run on the 286: to provide it to the many tens of thousands of customers it had sold 286 PS/2 machines to.

Microsoft wanted to make OS/2 specific to the newer 32-bit 386 chip. This had hardware-assisted multitasking of 8086 VMs, meaning the new OS would be able to multitask DOS apps with excellent compatibility.

But IBM had promised customers OS/2 and IBM is the sort of company that takes such promises seriously.

So, OS/2 1.x was a 286 OS, not a 386 OS. That meant it could only run a single DOS session and compatibility wasn't great.

This is why OS/2 flopped. That in turn is why MS developed Windows 3, which could multitask DOS apps, and was a big hit. That is why MS had the money to headhunt the MICA team from DEC, headed by Dave Cutler, and give them Portable OS/2 to finish. That became OS/2 NT (because it was developed on Intel's i860 RISC chip, codenamed N-Ten.) That became Windows NT.

That is why Windows ended up dominating the PC industry, not OS/2 (or DESQview/X or any of the other would-be DOS enhancements or replacements).

Arguably, although I admit this is reaching a bit, that's what led to the 386SX, and later to VESA local bus computers, and Win95 and a market of VGA-equipped PCI machines: the fertile ground in which Linux took root and flourished.

PCs got multitasking combined with a GUI because of Windows 3 and its successors. (It's important to note that there were lots of text-only multitasking OSes for PCs: DR's Concurrent DOS, SCO Xenix, QNX, Coherent, TSX-32, PC-MOS, etc.) The killer feature was combining DOS, a GUI, and multitasking of DOS apps. That needed a 386SX or DX.

These things only happened because OS/2 failed, and OS/2 failed because there were lots of 286-based PS/2 machines and IBM promised OS/2 on them.

The 286 and VGA went closely together, and indeed, IBM later made the ISA-bus "PS/2" Model 30-286 in response to the relative failure of MCA.

It was a pivotal, hugely important range of computers. It shaped the future of the PC industry long after PS/2s themselves largely disappeared, and it introduced the standards that dominated the PC world throughout the 1990s and into the 2000s: PS/2 ports, VGA sockets, 72-pin RAM, 1.4MB floppies, etc. Only the expansion bus and the planned native OS failed. All the external ports, connectors, media and so on became the new industry standards.

liam_on_linux: (Default)

I read this wonderful article on mainframe OSes.

I've been meaning to do something like it for years, but I may use this as a jumping off point.

I think, for me, what I find intriguing about mainframe OSes in the 21st century is this:

On the one hand, there have been so many great OSes and languages and interfaces and ideas in tech history, and most are forgotten. Mainframes were and are expensive. Very, very expensive. Minicomputers were cheaper – that’s why they thrived, briefly, and are now totally extinct – and microcomputers were very cheap.

All modern computers are microcomputers. Maybe evolved to look like minis and mainframes, like ostriches and emus and cassowaries evolved to look a bit like theropod dinosaurs, but they aren’t. They’re still birds. No teeth, no claws on their arms/wings, no live young. Still birds.

One of the defining characteristics of micros is that they are cheap, built down to a price, and there's very little R&D money.

But mainframes aren’t. They cost a lot, and rental and licensing costs a lot, and running them costs a lot… everything costs a lot. Meaning you don’t use them if you care about costs that much. You have other reasons. What those are doesn’t matter so much.

Which means that even serving a market of just hundreds of customers can be lucrative, and be enough to keep stuff in support and in development.

Result: in a deeply homogenous modern computing landscape, where everything is influenced by pervasive technologies and their cultures – Unix, C, the general overall DEC mini legacy that pervades DOS and Windows and OS/2 and WinNT and UNIX, that deep shared “DNA” – mainframes are other.

There used to be lots of deeply different systems. In some ways, classic Mac and Amigas and Acorn ARM boxes with RISC OS and Lisp Machines and Apollo DomainOS boxes and so many more – were deeply and profoundly unlike the DEC/xNix model. They were, by modern standards, profoundly strange and alien.

But they’re all dead and gone. A handful persist in emulation or as curiosities, but they have no chance of being relevant to the industry as a whole ever again. Some are sort of little embedded parasites, living in cysts, inside a protective wall of scar tissue, persisting inside an alien organism. Emacs and its weird Lispiness. Smalltalk. Little entire virtual computers running inside very very different computers.

Meantime, mainframes tick along, ignored by the industry as a whole, unmoved and largely uninfluenced by all the tech trends that have come and gone.

They have their own deeply weird storage architectures, networking systems, weird I/O controllers, often weird programming languages and memory models… and yes, because they have to, they occasionally sully themselves and bend down to talk to the mainstream kit. They can network with it; if they need to talk to each other, they’ll tunnel their own strange protocols over TCP/IP or whatever.

But because they are the only boxes that know where all the money is and who has which money where, and who gets the tax and the pensions, and where all the aeroplanes are in the sky and who's on them, and a few specialised but incredibly important tasks like that, they keep moving on, serene and untroubled, like brontosauri placidly pacing along while a tide of tiny squeaky hairy things scuttle around their feet. Occasionally a little hairy beast jumps aboard and sucks some blood, or hitches a ride… A mainframe runs some Java apps, or it spawns a few thousand Linux instances in VMs – and the little hairy beasts think they've won. But the giant plods slowly along, utterly untroubled. Maybe something bit one ankle but it didn't matter.

Result: the industry ignores them, and they ignore the industry.

But whereas, in principle, we could have had, oh, say, multi-processor BeOS machines in the late 1990s, or smoothly-multitasking 386-based OS/2 PCs in the late 1980s, or smoothly multitasking 680x0 Sinclair clones instead of Macs, or any one of hundreds of other tech trends that didn’t work out… they were microcomputer-based, so the R&D money wasn’t there.

Instead, we got lowest-common-denominator systems. Not what was best, merely what was cheapest, easiest, and just barely good enough – the “minimum viable product” that an industry of shysters and con-men think is a good thing.

And a handful of survivors who keep doing their thing.

What is funny about this, of course, is that it’s cyclic. All human culture is like this, and software is culture. The ideas of late-20th-century software, things that are now assumptions, are just what was cheap and just barely good enough. They’ve now been replaced and there’s a new layer on top, which is even cheaper and even nastier.

And if we don't go back to the abacus and tally sticks in a couple of generations, this junk – which those who don't know anything else believe is "software engineering" and not merely fossilised accidents of exigency – will be the next generation's embedded, expensive, emulated junk.

What sort of embedded assumptions? Well, the lower level is currently this… quote marks to indicate mere exigencies with no real profound meaning or importance:

“Low-level languages” which you “compile” to “native binaries”. Use these to build OSes, and a hierarchy of virtualisation to scale up something not very reliable and not very scalable.

Then on top of this, a second-level ecosystem built around web tech, of “dynamic languages” which are “JITted” in “cross-platform” “runtimes” so they run on anything, and can be partitioned up into microservices, connected by “standard protocols”, so they can be run in the “cloud” at “web scale”.

A handful of grumpy old gits know that if you pick the right languages, and the right tools, you can build something to replace this second-level system in the same types of tools as the first-level system, and that you don't need all the fancy scaling infrastructure, because one modern box can support a million concurrent users no problem, and a few such boxes can support tens or hundreds of millions of them, all in something in the corner of one room, with an uptime of decades and no need for any cloud.

But it’s hard to do it that way, and it’s much easier to slap it together in a few interpreted languages and ginormous frameworks.

And twas ever thus.

liam_on_linux: (Default)
https://emutos.sourceforge.io/

I have an ST, and an Amiga, but I didn't use either back in the day. But I think this is amazing work and really impressive.

So I stuck it on HN and some pillock went "yah boo TOS sucks Amiga is better" like it was 1986. I paraphrase. I am unimpressed.

In fact, while I don't want to be mean, you're missing several different points... among which are the reasons I posted this link.

[1] It's not a question of whether TOS is less advanced than AmigaOS. Yes, it is, and anyone who knows them realises that, but that's not the issue here. The issue is that this FOSS project has brought these two platforms together after about 35 years, and that's both really technologically impressive and also just plain fun.

[2] It means in principle that Amiga owners can run Atari apps, and the ST had some impressive apps.

[3] AROS is great but it's an x86 OS. It doesn't readily run on classic Amigas, or even especially well on the handful of later PowerPC Amigas, AFAIK. It also doesn't run natively on modern RISC hardware, like say the Raspberry Pi.

[4] But because it doesn't, that's prompted the creation of another really cool FOSS project, Emu68 -- a native 68K emulation environment for Arm, something comparable to Apple's nanokernel for running Classic MacOS on PowerMacs.

https://github.com/michalsc/Emu68

[5] Creating an OS that's as good as or even better than the original, while running on the original hardware, is impressive. Improved localisation opens it up to more people. That's good. It enables reviving vintage kit more easily, and expanding it. That's great.

You were so busy mocking something that you didn't stop to consider all the good sides.

[6] We know TOS was limited. We all know that. OTOH its simplicity enabled this. Its simplicity also was part of why the ST survived as a musicians' tool of choice for decades after it went out of production: super low latencies for music, and so on.

But others knew that TOS was limited, which drove a 3rd party OS market, with products such as MagiC:

https://en.wikipedia.org/wiki/MagiC

And MagiC is now FOSS:

https://gitlab.com/AndreasK/Atari-Mac-MagiC-Sources

Which is good, but OTOH, it's not attracted much interest or development, AFAICS...

Whereas EmuTOS is now on v 1.21 and is seeing new releases several times a year. This is great, and is one reason I posted it.

[7] The limitations of TOS are also what prompted the development of MiNT, and that's FOSS too, and it's quite mature:

https://github.com/totalspectrum/atari-mint

And it has distros, such as AFROS:

https://aranym.github.io/afros.html

Which you can run on x86 kit:

https://aranym.github.io/

All of which is amazing work.

So, yes, while you just wanted to do some advocacy, you missed a huge amount of great work by a committed community.

Not cool, dude.

Leave the Amiga-v-ST hate in the 1980s where it belonged. It wasn't very welcome then. They're both great computers. But hey, then the fans were children, so they can be excused.

In 2022, they can't.
liam_on_linux: (Default)
I did this on a pro basis a little while ago:
https://www.theregister.com/2022/05/20/freebsd_131/

It went easier than I expected, so I thought I'd have a go on my sacrificial half-decent Thinkpad.

(It is a T420: only a Core i5, with a wobbly screen hinge, a new-but-dead battery, and until recently a tiny, decade-plus-old SSD. Then I pulled the mSATA SSD that was giving errors from my X220, replaced it with a new, bigger one -- which had only been waiting for a couple of years -- and bunged the old, possibly-flaky one into the sacrificial T420. I tried it with Ubuntu Kylin, the international remix. Kylin worked well, although there were numerous little glitches.)

This machine's primary SSD has ChromeOS Flex on it. I rather like it. It's slick, it's fast, it does its one job very well. I am using a Debian container to run Firefox 'cos I am perverse like that. Firefox is slow to start and has issues with maximisation, but works well.

Tonight, because the radio has gone Very Very Solemn And Dull And Worthy, I spent a while faffing with DR-DOS VMs and then I decided on something that might be more educational.

I nuked Kylin and put FreeBSD 13.1 on it.

Now recently I tried GhostBSD on my multiboot testbed. It was a nightmare. Hard, fiddly, falls over in its own install process, and the end result has an ugly theme. Not impressed.

So I went with the vanilla version.

First boot: it finds the wifi card, but can't see any WLANs. That is not much cop.

Tried again. Found a hint online on my phone, switched vconsoles, did `ifconfig wlan0 down` and then `ifconfig wlan0 up`. Went back to the installer, and lo! It sees WLANs! Pick mine, connect, and now I can install stuff.

So, I violated the recommended process: since it's booted off a DVD image via Ventoy, I figured that if I rebooted, I wouldn't be able to readily mount and install stuff off that. So let's do it while I'm booted off the install medium. The installer leaves you in a `chroot` console, so it should be installing stuff onto the hard disk.

So, I followed this:
https://unixcop.com/how-to-install-xfce-in-freebsd-13/

Install xorg, then install slim, then install xfce, then manually write some config files because FreeBSD is a bit BDSM like that.
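For the record, the gist of it is roughly this -- package names as they appear in the FreeBSD repos, with the `sysrc` lines being the hand-written-config part (give or take the `.xinitrc` that bites me further down):

```sh
# install X, Xfce and the SLiM login manager from packages
pkg install xorg xfce slim

# enable the services they need at boot (these land in /etc/rc.conf)
sysrc dbus_enable=YES
sysrc slim_enable=YES
```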

Reboot... no GUI. `startx` fails with an error about not being able to use the framebuffer. 

Google. Find some vague hints. This seems to be the BSD way: no straight answers, nothing current, sort it out newbie.

This thread has info. And people arguing, and disagreeing, and telling each other they're wrong. 

https://forums.freebsd.org/threads/new-to-freebsd-startx-fails-with-cannot-run-in-framebuffer-mode-help.68882/

But it has some pointers. Install Intel video drivers. Reboot. Nope.

There are vague conflicting things about DRM-mod, whatever that is. Most of the packages don't exist, but `drm-kmod` does. It says it's something to do with Linux, but what the hell, it's a sacrificial system. Install it.
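(For anyone following along, the standard recipe seems to boil down to the following -- `i915kms` being, if I have it right, the module this Thinkpad's Intel graphics wants:)

```sh
pkg install drm-kmod

# load the Intel KMS driver at boot; IIRC the package's install
# message points you at this rc.conf setting
sysrc kld_list+="i915kms"
```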

It pulls in dozens and dozens of other things... AMD drivers, nVidia drivers, Intel drivers, hypothetical deities know what. Anyway. Let it finish, reboot.

And lo! A desktop!

Obviously my ordinary user can't log in, because I only hand-wrote an `.xinitrc` for root. Use that. It works. Switch consoles and handwrite one for `lproven` too, in Vi, because fsck you, newbie.
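For anyone playing along at home, the hand-written `.xinitrc` is more or less a one-liner -- roughly what I typed, assuming you went for Xfce as I did:

```sh
# ~/.xinitrc -- SLiM (and startx) runs this to start the session
exec startxfce4
```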

Luckily for me, I am not a newbie. I am a grumpy old git but I do have some skills and I did do this in a VM just a couple of months ago.

And stone me, but it works. I can log in.

No volume control. No sound. No network controls. No browser. It's all a bit minimal but it's there, and it works.

So I have just spent an hour or so installing Firefox, Chromium, LibreOffice, a PDF viewer -- watch out, the default install of `atril` tries to pull in all of MATE! Feck! Ctrl-C Ctrl-C! OK, now I know why `atril-lite` is there.

I have a decent shell. With some manual installing of Xfce packages, and a graphical package manager -- which in turn needed manual installation and configuration of `sudo`, but ahaha, I added my user to the `wheel` group during installation because this is not my first rodeo -- I have a perfectly nice desktop.
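The `sudo` bit, if you've never done it on FreeBSD, boils down to something like this (the sudoers file lives under /usr/local/etc because it comes from the package; swap in your own username for mine):

```sh
pkg install sudo

# let members of wheel use sudo: uncomment this line in
# /usr/local/etc/sudoers, using `visudo`:
#   %wheel ALL=(ALL:ALL) ALL

# and add your user to wheel, if you didn't do it at install time:
pw groupmod wheel -m lproven
```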

I had to manually install a fake network-manager, whose webpage implies it's an external thing, but it isn't: it's in the repos, and it works. And Chromium pulled in Pulseaudio, and now sound works. LibreOffice pulled in a JVM.

It only took a few hours' work, some jiggery-pokery to get it connected, but it works. And on a Core i5 with 6GB of RAM, running off an old and possibly knackered 128GB SSD, it's fast.

Grudgingly, I am impressed. It is much much too much hard work. There really ought to be an Ubuntu-like distro of this that doesn't fsck around with it and leaves everything on defaults -- which disqualifies GhostBSD and MidnightBSD, and FuryBSD is dead.

There's still a ports system, although interestingly it's now an optional install. Everything so far has been in the repos, and it remembered my wifi credentials from installation and still autoconnects. Updating is painless.
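(Where it remembers them from, as far as I can tell, is the config the installer wrote: the card gets wired up in /etc/rc.conf and the credentials land in /etc/wpa_supplicant.conf, roughly like this -- `iwn0` being this Thinkpad's Intel wifi, and the SSID and passphrase obviously made up:)

```sh
# /etc/rc.conf
wlans_iwn0="wlan0"
ifconfig_wlan0="WPA SYNCDHCP"

# /etc/wpa_supplicant.conf
network={
    ssid="my-network"
    psk="not-my-real-passphrase"
}
```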

FreeBSD has come quite a long way. This is a usable desktop OS. It most certainly wasn't last time I tried, on version 9 or 10. I failed to get 12 to install completely, although getting a networked text-only server was fairly easy.

Small stuff that pleases me: my brightness controls work, Lenovo's volume controls work, etc. The mouse works on the console.

And it doesn't take much RAM, and in 6GB of RAM, it feels very smooth and not at all like it's trying hard. I don't think I've heard the laptop's fans come on yet.

I haven't seen much noise out of the Hello System for a while now, which looked like the last best hope for an easy, clean, desktop FreeBSD. I know its developer, ProbonoPD, is not a fan of GNUstep, but I like it -- I reckon he should have adopted NeXTspace.
https://github.com/trunkmaster/nextspace

I have yet to try suspend and resume. I should. Power management seems to be on, though; the machine runs nice and cool.

A lot of people are getting very irked with Linux these days. GNOME is a mess and getting worse, Snap and Flatpak are grossly inefficient, systemd has lots of enemies, Btrfs isn't that trustworthy -- it ate my OS multiple times at SUSE -- Bcachefs and Stratis aren't ready and ZFS is only in Ubuntu and is controversial.

All of that goes away with FreeBSD. 

But currently, it is much much too hard to install, even if it's better than it was, and it is going to scare Linux people off unless they are seriously hardcore.

I've been running TrueNAS Core on my home NAS for a while now and I like it. So far it's been bulletproof.

There is potential in this OS, but it needs to work a bit harder on its outreach. Pick a nice simple default desktop, something that isn't mainstream in the Linux world for differentiation -- Xfce works great -- and a simple clean default set of apps. Something akin to an Ubuntu Minimal Install: desktop, file manager, maybe some viewers, text editor, and a good browser. 

liam_on_linux: (Default)

The problem with the Unix lowest-common-denominator model is that it pushes complexity out of the stack and into view -- complexity arising from stuff that other designs _thought_ about and worked to integrate.

It is very important never to forget the technological context of UNIX: a text-only OS for a tiny, desperately resource-constrained, standalone minicomputer. It was written for a machine that was already obsolete when the work started, and it shows.

No graphics. No networking. No sound. Dumb text terminals, which is why the obsession with text files being piped into other programs and filtered through tools that only handle text files.

At the same time as UNIX evolved, other, bigger OSes for bigger minicomputers were being designed and built to directly integrate things like networking, clustering, notations for accessing other machines over the network, filesystems mounted remotely over the network, file versioning and so on.

I described how VMS pathnames worked in this comment recently: https://news.ycombinator.com/item?id=32083900

People brought up on Unix look at that and see needless complexity, but it isn't.

VMS' complex pathnames are the visible sign of an OS which natively understands that it's one node on a network, that currently-mounted disks can be mounted on more than one network node, even if those nodes are running different OS versions on different CPU architectures. It's an OS that understands that a node name is a flexible concept that can apply to one machine, or to a cluster of them, and every command from (the equivalent of) `ping` to (the equivalent of) `ssh` can be addressed to a cluster, and the nearest available machine will respond, and the other end need never know it's not talking to one particular box.

50 years later and Unix still can't do stuff like that. It needs tons of extra work with load-balancers and multi-homed network adaptors and SANs to simulate what VMS did out of the box in the 1970s in 1 megabyte of RAM.

The Unix way only looks simple because the implementors didn't do the hard stuff. They ripped it out in order to fit the OS into 32 kB of RAM or something.

The whole point of Unix was to be minimal, small, and simple.

Only it isn't any more, because now we need clustering and network filesystems and virtual machines and all this baroque stuff piled on top.

The result is that an OS which was hand-coded in assembler and was tiny and fast and efficient on non-networked text-only minicomputers now contains tens of millions of lines of unsafe code in unsafe languages and no human actually comprehends how the whole thing works.

Which is why we've built a multi-billion-dollar industry constantly trying to patch all the holes and stop the magic haunted sand leaking out and the whole sandcastle collapsing.

It's not a wonderful inspiring achievement. It's a vast, epic, global-scale waste of human intelligence and effort.

Because we build a planetary network out of the software equivalent of wet sand.

When I look at 2022 Linux, I see an adobe and mud-brick construction: https://en.wikipedia.org/wiki/Great_Mosque_of_Djenn%C3%A9#/m...

When we used to have skyscrapers.

You know how big the first skyscraper was? 10 floors. That's all. This is it: https://en.wikipedia.org/wiki/Home_Insurance_Building#/media...

The point is that it was 1885 and the design was able to support buildings 10× as big without fundamental change.

The Chicago Home Insurance building wasn't very impressive, but its design was. Its design scaled.

When I look at classic OSes of the past, like in this post, I see miracles of design which did big complex hard tasks, built by tiny teams of a few people, and which still works today.

When I look at massive FOSS OSes, mostly, I see ant-hills. It's impressive but it's so much work to build anything big with sand that the impressive part is that it works at all... and that to build something so big, you need millions of workers, and constant maintenance.

If we stopped using sand, and abandoned our current plans, and started over afresh, we could build software skyscrapers instead of ant hills.

But everyone is so focussed on keeping our sand software working on our sand-hill OSes that they're too busy to learn something else and start over.
liam_on_linux: (Default)
"MS-DOS" is the MS version of what IBM sold as PC DOS. Microsoft produced that on very short notice by licensing (note, not buying) 86-DOS from Seattle Computer Products. That was originally called QDOS, for Quick’n’Dirty OS.

Tim Paterson wrote QDOS based on studying the docs for CP/M and CP/M-86. It was API compatible, but used a different disk filesystem: Paterson used the FAT format of MS’ standalone disk BASIC.

It was wholly-new code, but written to be closely compatible with DR’s published info about CP/M.

That is not even reverse-engineering. Indeed CP/M-86 was released late and it didn't even exist to be reverse-engineered yet, AFAIK. QDOS was written for and sold with SCP's 8086 cards in 1980; CP/M-86 did not ship until 1981.

Writing compatible code to a published API is what APIs are for. That’s why the info is published.

QDOS wasn’t a clone of CP/M-86; in fact, it is older than and predated CP/M-86.

It was a compatible OS written to info DR published. That is entirely legal. DR published the APIs intending this for app writers, not for people writing OSes compatible with DR OSes, but it’s not breaking any rules.

In fact in the late 1970s there were lots of CP/M clones out there, such as CPN and Cromemco CDOS and many others. Later MSX-DOS was a much-enhanced CP/M clone.

The difference is, most other companies cloned CP/M on 8080 or Z80. SCP did it on 8088/8086.

But while yes, it’s arguably something like a clone (for different hardware, with a different file system), it was just one of many and didn’t use anything illegal or violate any licenses.

The key thing is that QDOS ran on then-modern hardware with a future. Most of the others ran on what was rapidly becoming obsolete hardware. SCP QDOS became 86-DOS became PC DOS and MS-DOS, and sold in the tens of millions of copies, and made MS huge amounts of money.

DR and IBM made big bad mistakes and it cost them dominance of their industries and lots of money. MS was smart and got lucky and got very very rich.

Later on, MS abused that power repeatedly, stole code, copied ideas, unfairly pushed rivals out of business, and generally became a bully and a criminal. MS effectively killed Be, Netscape, and Central Point Software; it crippled Aldus and STAC; and many more.

But DR survived and briefly it staged a successful comeback, before being bought by Novell.

I entirely understand how angry Dr Gary Kildall was. It was justified. But he did make mistakes. Sadly some of them are only clear in hindsight. DR should have rushed to make CP/M-86 quickly for IBM, and reserved the rights to sell it to others, as Microsoft did. DR should have sold single-user single-tasking CP/M-86 cheaply, building the market, and made Concurrent CP/M the premium product. It should have sold GEM cheaply to get wide adoption. It should have made standalone single-user multitasking CP/M a desirable power-user OS, rather than aiming at the multiuser market, which was on the way out as PCs got cheaper and cheaper.

But as little as I personally like MS, in how it cornered the market and became rich, it did it by being clever, and fast, and outmaneuvering bigger, slower rivals, and there’s nothing wrong with that.

liam_on_linux: (Default)
 

My #1 annoyance these days, because it is so egregious, is Electron apps.

I guess because the only language some programmers know is Javascript, of which I know little but what little I know places it marginally above PHP in intrinsic horror.

So people write standalone apps in a language intended for tweaking web pages, meaning that to deploy those apps requires embedding an entire web browser into every app.

And entire popular businesses, for example Slack, do not as far as I can tell have an actual native client. The only way to access the service is via a glorified web page, running inside an embedded browser. Despite which, it can't actually authenticate on its own and needs ANOTHER web browser to be available to do that.

Electron apps make Java ones look lean and mean and efficient.

Apparently, expecting a language that can compile to native machine code that executes directly on a CPU, and which makes API calls to the host OS in order to display a UI, is quaint and retro now.

And it's perfectly acceptable to have a multi-billion-dollar business that requires a local client, but which does not in fact offer native clients of any form for any OS on the market.

It's enough to make me want to go back to DOS, it really is. Never mind "nobody will ever need more than 640kB"... if you can't do it in 640kB and still have enough room for the user's data, maybe you should reconsider what you are doing and how you are doing it.               

liam_on_linux: (Default)
 That is *what* it came from, yes, but not *why*.
 
The "why" part seems to be forgotten now: because Microsoft was threatening to sue all the Linux vendors shipping Windows 95-like desktops.
 
https://www.theregister.com/2006/11/20/microsoft_claims_linux_code
 
Microsoft invented the Win95 desktop from scratch. Its own previous OSes (e.g. Windows for Workgroups 3.11, Windows NT 3.51 and OS/2 1.x) looked nothing like it.
 
The task bar, the Start menu, the system tray, "My Computer", "Network Neighbourhood", all that: all original, *patented* Microsoft designs. There was nothing like it before. 
 
(The closest was Acorn's RISC OS, with an "icon bar" that works very differently, on the Archimedes computer. A handful of those were imported to North America, and right after, NeXT "invented" the Dock, and then Microsoft invented the task bar, which is quite a bit more sophisticated.)
 
One source: the team that programmed it. Here's me moderating a panel discussion by most of the surviving members of Acorn's programming team, on video from a month ago:
https://www.youtube.com/watch?v=P_SDL0IwbCc
 
SUSE signed a patent-sharing deal:
https://www.theregister.com/2006/11/03/microsoft_novell_suse_linux/
 
Note: SUSE is the biggest German Linux company. (Source: I worked for them until last year.) KDE is a German project. SUSE developers did a lot of the work on KDE. 
 
So, when SUSE signed up, KDE was safe.
 
Red Hat and Ubuntu refused to sign.
 
So, both needed *non*-Windows-like desktops, ASAP, without a Start menu, without a taskbar, without a window menu at top left and minimize/maximize/close at top right, and so on.
 
Red Hat is the main sponsor of GNOME development. (When KDE was first launched, Qt was not GPL, so Red Hat refused to bundle it or support it, and wrote its own environment instead.)
 
Ubuntu tried to get involved with the development of GNOME 3, and was rebuffed. So it went its own way with Unity instead: basically, a Mac OS X rip-off, only IMHO done better. Myself, I still use both Unity and macOS every day. They are like twins, and switching between them is very easy.
 
So both RH and Ubuntu switched to non-Windows-like desktops by default.
 
In the end MS did not sue anyone... but it got what it wanted: total chaos in the Linux desktop world.
 
Before the threats, almost everyone used GNOME 2. Even SUSE bundled GNOME because its corporate owner bought the main GNOME 3rd party developers, Ximian, and forcibly merged the company into SUSE:
 
https://www.theregister.com/2004/01/07/novell_marries_suse_to_ximian/
 
SUSE, Red Hat, Debian, Ubuntu, even Sun Solaris used GNOME 2. Everyone liked GNOME 2.
 
Then Microsoft rattled its sabre, and the FOSS UNIX world splintered in all directions.
 
RH uses GNOME 3. Ubuntu used Unity, alienated a lot of people who only knew how to use Windows-like desktops, and that made Mint a huge success. GNOME 2 got forked as MATE, and Mint adopted it, helping a lot. Mint also built its own fork of GNOME 3, Cinnamon. Formerly tiny niche desktops like Xfce and LXDE got a *huge* boost. Debian adopted GNOME 3 and systemd, annoying lots of its developers and causing the Devuan fork to happen.
 
Here's an analysis I wrote at the time:
 
https://www.theregister.com/2013/06/03/thank_microsoft_for_linux_desktop_fail/
 
Yes, Unity evolved out of the Ubuntu netbook desktop, but the reason _why_ it did is that Ubuntu was getting threatened.
 
(Xubuntu and Lubuntu and Kubuntu are not official and not the defaults, so they don't endanger it.)
 
 
liam_on_linux: (Default)
The odd thing for me, having tried more or less every single Linux desktop under the sun, including several that no longer exist, is that there's no one definition of "user friendly" that holds true for everyone.
 
In this story's comments, there are people saying Windows is the best, others saying certain particular versions are best, others saying they find it unusable or at least hard.
 
Yet this has been the best-selling desktop OS in history for about 35 years now, used by _billions_ of people, so it must be getting *something* right. 
 
Counter to that, there are also people castigating Macs and macOS. That's normal; there are as many biased fanboys *against* as there are *for*.
 
And yet, again, for nearly 40 years now, Apple has been *THE ONE COMPANY* to resist the rise of Microsoft, and has a fantastically loyal fan base and makes a lot of money.
 
I also have a number of blind friends, and they mostly tell me that Windows is the most accessible OS there is, that it has the best selection of assistive tech, that the apps are more accessible, and so on.
 
Some favour macOS. What macOS provides out of the box is *way* better, it's true. If you're a casual computer user -- bit of surfing, bit of online chat, very occasionally write a letter -- it's better for blind users than Windows.
 
If you have a job to do, in business, and need rich powerful apps, and need them to be accessible, my working blind mates tell me Windows easily trounces the Mac.
 
I am not blind so I must take their word for it.
 
But I can make Windows and macOS and my preferred Linux desktops, Unity and Xfce, stand on their heads and do back handsprings for me. I regularly read people telling me that any of these OSes just can't do X or can't do Y, when X and Y are things I do on a daily basis. 
 
What this really means is: they don't know how to do X or Y, and they haven't bothered to look for instructions or guidance. It doesn't do it -- whatever "it" is, it varies a lot -- and so they decide it can't, it doesn't work, and they move on.
 
Don't believe me? Look on Quora for the dozens of idiots asking "why can't Macs do cut and paste?"
 
In terms of the mass market, outside Xerox PARC, Apple *invented* the industry-standard method of C&P and defined the keystrokes every other OS now uses... for the Lisa and the Mac.  Of *course* they can.
 
What the idiots mean, but are too dim to know they mean, is that the *Finder* doesn't do cut and paste. No, it doesn't, for excellent, very solid UI and HCI reasons: cut-and-paste of files causes millions of dollars of data loss every year on Windows, and has done since 1995.
 
But it's symptomatic. 
 
People mostly don't know how to drive Windows and Windows-like interfaces with the standard keystrokes. They don't know how to search it, how to manage windows with the keyboard, how to manage virtual desktops, stuff like that. 
 
Because they don't know, most don't do it. 
 
Therefore most of the desktops for Linux are half-baked rip-offs of Windows that don't implement the clever stuff, because the people that implemented it didn't know the clever stuff. 
 
So it doesn't work. 
 
Along came GNOME and ripped all that out. If most people don't use it, then clearly, it's unnecessary so let's bin it. So it *forced* users into accessing the limited remaining functionality via defined keystrokes and gestures.
 
As a result, people have to learn the commands, and they can because there aren't many. 
 
And the end result of that is that they then praise GNOME for being "powerful" and "efficient" because they were forced to learn stuff.
 
Windows did that better 27 years ago, but because of good design -- and I am no MS fanboy! -- you didn't have to learn it. You could point and click your way and stumble across a way to do it.
 
It's sort of Perl vs Python. One gives you a dozen ways to accomplish something; the other has one way that's encouraged as being "natural" or "pythonic".
 
Perl fans loved it for its power as a result... but they can't read their own code, let alone anyone else's. In the end that's doomed the language.
 
Python is easy enough for almost anyone with Clue № 1 but many already-skilled people hate it as a result of its enforced rules.
 
Programmers know this stuff and accept it. They rail about it, but they accept it.
 
Programmers typically do not know desktop tech well. I lived with 2 and was engaged to one. They could out-program me drunk, but they were not techies. It's a different skill.
 
So when someone comes along and says "hey, you know what, I am an expert in this stuff and environments A, B and C have this large feature set and cover 75% or 80% of the functionality of the OS they were copied from," they are probably right.
 
Then someone who knows just 10% of that functionality uses Desktop D, which *only does that 10%* but forced them to learn how to use it properly, and they say "no, Desktop D is better because it works and it's really efficient and has 100% of the functionality I need, and I'm a programmer, I know this stuff, so this is all anyone needs!"
 
It is amazingly frustrating to hear this kind of advocacy, *know* that you know far better than the person doing it, but not be able to explain to Mr Loud-Confident-And-Wrong that there is stuff he hasn't considered and the big picture is a lot more complicated than that.
 
But what's worse than that is when the advocates of Desktop D are *so* loud and *so* confident that they persuade billion-dollar corporations to standardize on their fairly poor product... then they throw conferences where they pat each other on the back for their cleverness, and they patronize people online for being stupid from their position of smug, entrenched ignorance.
 
That is really infuriating.
 
 
liam_on_linux: (Default)
There were three products all called MS Word, only peripherally related:
 
Word for DOS, which I first saw at version 3, and of which I used 3, 4, 5, 5.5 (when it suddenly switched to CUA menus), and 6 (which, as with WordPerfect, was the last and best DOS version).
 
Word for Mac, which I first saw at version 4, and which in generally-held opinion peaked at v5.1.
 
Word for Windows, AKA WinWord, which went v1, v2, v6.
 
But there were legit reasons. 
 
MS was making an effort to harmonise and coordinate its versions. 
 
IIRC the story is that Gates met Paul Brainerd (founder of Aldus) at some event, and Brainerd told him that Aldus (creators of PageMaker, the ultimate DTP app in its day and the product that made the Mac a big success) was working on a wordprocessor for Windows, because there wasn't a good one. The product was codenamed "Flintstone" and was nearly ready for alpha test.
 
Gates panicked, and lied to Brainerd that they shouldn't waste their time, because MS was almost ready to launch its own and it'd be a killer app.
 
Brainerd went back to base and cancelled Flintstone. Gates went back to base and told his team to write a Windows word-processor ASAP because Aldus was about to kill them.
 
So, WinWord 1 was a rush job and was rather sketchy. 
 
WinWord 2 fixed a lot of issues and had a much better layout of menus, dialogs, toolbars etc.
 
Then MS decided to put out an Office suite and make the version numbers match across platforms. 
 
So, the native Mac Word was killed.
 
The Windows codebase was ported to the Mac, and got the next consecutive version number. Mac users hated it at first: it was much bigger, much slower and buggier, and felt Windows-like rather than Mac-like.
 
The Windows version was bumped to match the Mac one, which is sort of fair: there was a common codebase, and the Mac version couldn't jump backwards to v3.
 
The DOS version got a minor rejig to reorganize its menus and dialogs to have the same layout as the new v6 product, and the version number was bumped.
 
Word 6 for Windows is the classic version, IMHO. It looks much like all the later versions, works like them, etc.
 
The snags with it in the 21st century are twofold:
 
[1] The 16-bit version works fine in emulators and things but only does short 8.3 filenames, which is a PITA today.
 
[2] The very rare 32-bit version for NT is out there, and handles long filenames fine, but it's a port of a Windows 3 app. So, no proportional scrollbar thumbs, so you can't see how big the document is, something I use a lot. And no mouse scroll-wheel support, because scroll wheels hadn't been invented yet, which makes it feel very broken on a modern OS.
 
Otherwise, I would use it now, TBH. It's tiny and fast and has 100× the functions I need.
 
Word 95 fixes all that, but the snag with Word 95 is that it still uses the old Word 6-era file format, and modern apps don't support that format.
 
Word 97 uses a new file format, which remained the same until 2003. Office 2007 introduced new Zip-compressed XML files, and the Ribbon, and broke everything.
 
But it's a lot easier to load Word 97 .DOC files into any other modern app than Word 95 ones, or else I'd still use Word 95.
 
But yes, Word 6 harmonized the UI, the file format, and the version number across Win, Mac and DOS. And it *did* come after Word 5.5 for DOS and Word 5.1 for Mac.
 
 
DOS Word 5.5 is freeware now, but the differences from the Word 6 UI are annoying.
 
Word 6 for DOS fixes that and is a nice app to use, but it's not free. I wish MS made it freeware too, but I think the company knows that for a lot of professional writers, Word 6 for DOS is adequate to the task and it might actually hurt sales of Office.
 
OTOH Corel could free WordPerfect 6 for DOS and it might actually _help_ sales of the Windows version. It also should re-enter the Mac market, IMHO. But it's too late now.
 
liam_on_linux: (Default)
Way way back, before DOS and the PC and so on, the UCSD p-System was very widespread.
 
Borland's Turbo Pascal supplanted it, but TP on DOS was very different from the original CP/M TP, and indeed with Delphi on Windows it transformed again into something wholly different and much more powerful.
 
Delphi fused Turbo Pascal, its fast compiler and rich capable DOS IDE, with something much like NeXTstep's Interface Builder and a set of OOPS libraries for Pascal to construct GUIs.
 
Which inspired MS to copy it, taking the forms painter from the Ruby database tool, and an extended kinda-sorta BASIC, and some OLE/COM GUI controls, to make something... well, sprawling and unfocused and sluggish and overcomplicated.
 
Then, when MS was seriously afraid that its OS and apps divisions would be split up by the DoJ, the company forcibly transformed that tooling into .NET, so it would have a tool for asserting cross-platform apps dominance.
 
But the fierce and determined Judge Thomas Penfield Jackson was replaced with the conciliatory Judge Colleen Kollar-Kotelly, and she backed down and let MS get away with it.
 
So the big split never happened, and MS was left with a fancy cross-platform tool it no longer really needed.
 
The result has been decades of bloat and flab, a somewhat tokenistic FOSS version for Unix-like OSes, and a wasted opportunity... but which nonetheless strangled the 3rd party compiler/dev-tools market on Windows.
 
Mac OS X succeeded because it bundled great dev tools, therefore strangling the Mac dev tools market.
 
And the Linux world does its monastic Unix self-denial thing -- plain text, horrible 1970s text editors, because suffering is good for the soul -- and never goes anywhere much. C++ is an evil modern heresy! We can have 50-line "object" calls in plain C, and it remains clean and holy, just as Saint Ritchie and the prophet Stallman decreed.

(Repurposed from a comment in this discussion.)
liam_on_linux: (Default)

CP/M did not support subdirectories, so it did not have a directory separator.

Its design was derived from minicomputer OSes (principally DEC's OS/8 and TOPS-10), and so it had user areas instead:

A1:

B6:

C3:

 

I suppose it's fairly natural to assume from the POV of the 3rd decade of the 21st century that computers from 50 years ago had basic facilities like hierarchical directories... but not all of them did.

Big ones like high-end minicomputers did. I learned on VAX/VMS at university in the 1980s, and it did.

Cheap low-end minis didn't, and nor did the early 8-bit machines whose design was inspired by low-end minicomputers.

The main influences on CP/M were DEC OSes: OS/8 and TOPS-10.

The 2nd is mentioned in Wikipedia's history of CP/M.

I have never used either -- I'm not that old -- and I am not sure but I don't think either supported hierarchical subdirectories.

The closest thing that evolved into something that did was the DEC OS RSX-11, which influenced VMS. They used a filesystem called DEC Files-11. So that did end up supporting hierarchical directories, but there isn't a "directory separator" per se. That's a UNIX-ism that MS-DOS 2.0 copied.

A VMS file in its folder might have been called something like:

VAXA$DKA100::[USERS.LPROVEN.SOURCE.FORTRAN]WUMPUS.FOR;42

In other words, devices had multicharacter names that had meaning (e.g. what type of controller board, then which controller, then which disk), possibly after a cluster node name, then in square brackets a folder path separated by dots, then a filename, then another dot, then a 3-letter extension, then a semicolon, then a version number.

Looks baroque compared to the dead simple UNIX name style...

/home/lproven/source/fortran/wumpus

Every time you save a file, the version number increments automatically. And path specs combined with device names and cluster names mean that the filename could point to another disk on this controller, or another disk on a different controller, or a disk on another node in the cluster, or a disk on any node in a named cluster.

You can't do that so easily with the Unix naming system. So the sysadmin has to mount folders from other machines into this one's filesystem, and that means also setting up some kind of distributed authentication like NIS or YP or LDAP, and then some PAM modules or something... and it all gets very complicated.
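To make the contrast concrete: even the simplest Unix equivalent means something like this on every client -- hostname and paths made up for illustration -- and that's before you get anywhere near NIS or LDAP or PAM:

```sh
# one-off mount of a remote home directory over NFS:
mount -t nfs fileserver:/export/home /home

# or permanently, as a line in /etc/fstab:
# fileserver:/export/home  /home  nfs  rw  0  0
```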

In other words, the simplicity of the UNIX design doesn't take complexity away: it just hides it. It's still there but it becomes someone else's problem.

Whereas the DEC way of doing things sort of puts the complexity right there in front of you, but in return, you get rich facilities built right in.

Much less need for Git when files are versioned and you can go back to an older version before you broke something just using info encoded into the filename. Much less need for NFS mounts when the filesystem knows about controllers and networking and clusters and lets you address them. Much less need for bolted-on fancy authentication when that's built into the OS because the designers thought about stuff like networks, clusters, and authentication, when all that got taken _out_ of UNIX and then had to be bolted back on later.

So, yeah, the UNIX way is simpler, but OTOH that also means it's poorer. Poorer as in less rich. As in the DEC system was richer: being richer lets you do more.

CP/M evolved into multitasking multiuser OSes in time, but DR didn't get to re-invent all this stuff.

Maybe if DOS had never happened, DR would have prospered and bought DEC instead of Compaq buying DEC, and this stuff would have made it into PC OSes.

Who knows...               

liam_on_linux: (Default)
[Nicked from a Reddit comment to one of my own posts]

I love Unity and I still use it daily. I think it's the single most polished Linux desktop there's ever been, and although it is succumbing to bitrot a little now, it still works very well indeed.

Whereas GNOME is the canonical (pun intended) instantiation of Chesterton's Fence in desktop design.

https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence

They don't know what the top panel is for, but all the desktops they know have one, so they kept it. But they didn't know how to use title bars, so they removed them. Desktop icons were hard, so they removed those too.

It's a deep lack of understanding.

So, for example, title bars. First, you need to know how to use a 3-button mouse properly. The middle button is important. Middle-click a title bar and the window goes to the back of the Z-stack: behind all other windows.

Select text, then middle-click elsewhere, and it copies it. No formatting, just text, and without going through the clipboard.

So, you get the ability to copy & paste two things at once. One in the clipboard, one by middle-clicking. Copy a web page title and the URL in a single action.

But trackpad pilots don't know this, so they think the middle button isn't important, so they take its functionality away.

Secondly, once you know how to use the middle button, turn it into a scroll wheel. It's still a button. You can click it. All the above still works.

But now, you can scroll up on a title bar, and it collapses into just the title bar. It's an alternative to minimisation, called the windowblind or windowshade effect.

Scroll down, it unrolls again.

But GNOME folk didn't know how to do this. They don't know how to do window management properly at all. So they took away the title-bar buttons, then they said nobody needs title bars, so they took away title bars and replaced them with pathetic "CSD", which means that action buttons are now above the text to which they are responses. Good move, lads. By the way, every written language ever goes from top to bottom, not the reverse. Some go L to R, some go R to L, some do both (boustrophedon), but they all go top to bottom.

The guys at Xerox PARC and Apple who invented the GUI knew this. The clowns at Red Hat don't.

There are a thousand little examples of this. They are trying to rework the desktop GUI without understanding how it works, and for those of us who do know how it works, and also know of alternative designs these fools have never seen, such as RISC OS, which are far more efficient and linear and effective, it's extremely annoying.

liam_on_linux: (Default)
 A lot of history in computing is being lost. Stuff that was mainstream, common knowledge early in my career is largely forgotten now.

This includes simple knowledge about how to operate computers… which is I think why Linux desktops (e.g. GNOME and Pantheon) just throw stuff out: because their developers don’t know how this stuff works, or why it is that way, so they think it’s unimportant.

Some of these big companies have stuff they’ve forgotten about. They don’t know it’s historically important. They don’t know that it’s not related to any modern product. The version numbering of Windows was intentionally obscure.

Example: NT. First release of NT was, logically, 1.0. But it wasn’t called that. It was called 3.1. Why?

Casual apparent reason: well because mainstream Windows was version 3.1 so it was in parallel.

This is marketing. It’s not actually true.

Real reason: MS had a deal in place with Novell to include some handling of Novell Netware client drive mappings. Novell gave MS a little bit of Novell’s client source code, so that Novell shares looked like other network shares, meaning peer-to-peer file shares in Windows for Workgroups.

(Sound weird? It wasn’t. Parallel example: 16-bit Windows (i.e. 3.x) did not include TCP/IP or any form of dial-up networking stack. Just a terminal emulator for BBS use, no networking over modems. People used a 3rd party tool for this.

But Internet Explorer was supported on Windows 3.1x. So MS had to write its own all-new dialup PPP stack and bundle it with 16-bit IE. Otherwise you could download the MS browser for the MS OS and it couldn't connect, and that would look very foolish.

The dialup stack only did dialup and could not work over a LAN connection. The LAN connection could not do PPP or SLIP over a serial connection. Totally separate stacks.

Well, the dominant server OS was Netware and again the stack was totally separate, with different drivers, different protocols, everything. So Windows couldn’t make or break Novell drive mappings, and the Novell tools couldn’t make or break MS network connections.

Thus the need for some sharing of intellectual property and code.)

Novell was, very reasonably, super wary of Microsoft. MS has a history of stealing code: DoubleSpace contained stolen STAC code; Video for Windows contained stolen Apple QuickTime code; etc. etc.

The agreement with Novell only covered “Windows 3.1”. That is why the second, finished, working edition of Windows for Workgroups, a big version with massive changes, was called… Windows for Workgroups 3.11.

And that’s why NT was also called 3.1. Because that way it fell under the Novell agreement.

Postscript

A decade ago I wrote about the decline and fall of Netware:
https://www.theregister.com/Print/2013/07/16/netware_4_anniversary/

But I didn't mention another peculiarity of the uneasy Novell/MS relationship around the time of the launch of NT.

Novell did not really believe that a new MS OS had a chance. So, although MS kept asking, and provided Novell with betas, Novell did not write a Netware client for NT.

So MS wrote its own. It reverse-engineered the protocol and embedded its own Netware client into NT. It was initially able to connect to Netware 3 servers, but later gained basic authentication-only support for Netware 4's NDS as well.

Novell backpedalled and hastily wrote a client. If I recall correctly – it's more than 30 years ago now – it shipped after NT 3.1 came out. So it was initially buggy, and that meant it could crash the new crash-proof OS.

Meaning that they competed: admins, including me, had a choice. Run the functionally-limited but stable MS client, or the feature-rich Novell client that could destabilise your very expensive high-end workstations?

Worse was to come. Since they'd already reverse-engineered the client, MS implemented a server as well. NT could pretend to be a Netware server, and unmodified Netware client PCs (DOS, Windows 3, Windows for Workgroups, whatever) could connect to an NT box without changing the client. And since reconfiguring the DOS client stack was an elaborate job involving a lot of memory optimisation, not having to touch it helped.

The server emulation wasn't a deal-breaker, but it weakened the Novell position. But failing to write a client for what rapidly became a serious business workstation OS was a critical error and at that extremely risky time for Novell, it contributed to the company's fall.

liam_on_linux: (Default)
About five years ago, I got a job at a big FOSS vendor and needed a desktop email client. The company no longer maintained its own client for its own in-house email server.
 
I started with Thunderbird.
 
I found a problem -- later identified as being server-side -- and tried as many others as I could find in the distro's repos: Evolution, Sylpheed, Claws, KMail, Balsa, GNUstep Mail.app, Geary, and more.
 
Evolution is better than it was and isn't quite so determinedly Outlook-like any more. (I do not like Outlook.)
 
Claws is pretty good, but it isn't multithreaded, so it hangs when collecting mail. This is very annoying.
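(For anyone wondering why that matters, here's a rough sketch of what fetching mail on a background thread buys you. The server, account and password are placeholders, not any real setup; the point is that the stand-in main loop keeps running while the slow network work happens on another thread.)

    import imaplib
    import threading
    import time

    def fetch_mail():
        # The slow part: network round-trips to the mail server. Running this
        # on a worker thread means the rest of the program never stalls on it.
        conn = imaplib.IMAP4_SSL("imap.example.com")
        conn.login("user@example.com", "app-password")
        conn.select("INBOX", readonly=True)
        status, data = conn.search(None, "UNSEEN")
        print("Unseen messages:", len(data[0].split()) if data[0] else 0)
        conn.logout()

    worker = threading.Thread(target=fetch_mail, daemon=True)
    worker.start()

    # Stand-in for a GUI main loop: it carries on regardless of the fetch.
    while worker.is_alive():
        print("still responsive...")
        time.sleep(0.5)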
 
Claws and Sylpheed desperately need to merge again. They are basically the same app, but with slightly different feature sets. AIUI the author of Sylpheed, Yamamoto Hiroyuki, refuses to accept patches/PRs. He really needs to get over himself and learn to act a bit more like Linus Torvalds did. This intransigence is crippling both programs.
 
It is the 21st century and I do not want a CLI/text-mode email app. They have their place, for instance if you need to do email over ssh. I do not. But I want something that readily scales to a large window, has a CUA UI, can show basic formatting, etc. So, no Mutt/Neomutt/Pine for me.
 
In the end, I went back to Thunderbird and I still use it today. It is, after considerable research and experimentation, the best FOSS email app there is.
 
It is cross-platform: I can and do use the same app on Linux, Windows and macOS.
 
It talks to everything. I have or have had it connecting to Gmail, Hotmail, Yahoo, AOL, Exchange Server, Groupwise, CIX, and more different accounts and servers than I can remember.
 
It does address books and calendaring as well.
 
It has integration with handy features like Google's various chat and note-taking services.
 
It uses standard storage formats that can be accessed from other apps.
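(For instance, its local folders are -- by default, at least -- plain mbox files, so even Python's standard library can read them. A rough sketch; the path is illustrative and the profile name will differ on your machine:)

    import mailbox
    from pathlib import Path

    # Illustrative path only: the real file lives inside your Thunderbird profile.
    inbox_path = Path.home() / ".thunderbird" / "xxxxxxxx.default" / "Mail" / "Local Folders" / "Inbox"

    # Walk the mbox and print a one-line summary of each message.
    for msg in mailbox.mbox(str(inbox_path)):
        print(msg.get("Date"), "-", msg.get("Subject"))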
 
It's big, it is a bit sluggish, and like Firefox Quantum, some add-ons no longer work. This is a foolish decision of Mozilla's. However, it still has a useful range of add-ons.
 
It handles secure email and encryption well.
 
Snags: it really needs a working sync function.
 
But, after a lot of time and effort, it remains best-of-breed for my needs.
 
liam_on_linux: (Default)
I really hate it whenever I see someone calling Apple fans fanboys or attacking Apple products as useless junk that only sells because it's fashionable.

Every hater is 100% as ignorant and wrong as any fanatically-loyal fanboy who won't consider anything else.

Let me try to explain why it's toxic.

If someone, or some group, is not willing to make the effort to see why a very successful product/family/brand is successful, that refusal prevents them from learning any lessons from that success. It means the outgroup is unlikely ever to challenge that success.

In life it is always good to ask why. If this thing is so big, why? If people love it so much, why?

I use a cheap Chinese Android phone. It's my 3rd. I also have a cheap Chinese Android tablet that I almost never use. But last time I bought a phone, I had a Planet Computers Gemini on order, and I didn't want two new ChiPhones, so I bought a used iPhone. This was a calculated decision: the new model iPhones were out and dropped features I wanted. This meant the previous model was now quite cheap.

I still have that iPhone. It's a 6S+. It's the last model I'd want: it has a headphone socket and a physical home button. I like those. It's still updated and last week I put the latest iOS on it.

It allowed me to judge the 2020s iOS ecosystem. It's good. Most of the things I disliked about iOS 6 (the previous iPhone model I had) have been fixed now. Most of the apps can be replaced or customised. It's much more open than it was. The performance is good, the form factor is good, way better than my iPhone 4 was.

I don't use iPhones because I value things like expansion slots, multiple SIMs, standard ports and standard charging cables, and a customisable OS. I don't really use tablets at all.

But my main home desktop computer is an iMac. I am an expert Windows user and maintainer with 35 years' experience of the platform. I am also a fairly expert Linux user and maintainer with 27 years' experience. I am a full-time Linux professional and have been for nearly a decade... and it is because I am a long-term Windows expert that I choose not to use Windows any more.

My iMac (2015 Retina 27") is the most gorgeous computer I've ever owned. It looks good, it's a joy to use, it is near silent and trouble-free to a degree that any Windows computer can only aspire to be. I don't need expansion slots and so on: I want the vendor to make a good choice, integrate it well and for it to just work and keep just working, and it does.

It is slim, unobtrusive for a large machine, silent, and the picture (and sound) quality is astounding.

I chose it because I have extensive knowledge of building, specifying, benchmarking, reviewing, fixing, supporting, networking, deploying, and recycling old PCs. It is that three-plus decades of expert knowledge of PCs and Windows that led me to spend my own money on a Mac.

So every time someone calls Mac owners fanboys, I know they know less than me and therefore I feel entirely entitled to dump on their ignorance from a great height.

I do not use iDevices. I also do not use Apple laptops. I don't like their keyboards, I don't like their pointing devices, I don't like their hard-to-repair designs. I use old Thinkpads, like most experienced geeks.

But I know why people love them, and if you wish to pronounce edicts about Apple kit, you had better bloody well know your stuff.

I do not recommend them for everyone. Each person has their own needs and should learn and judge appropriately. But I also do not condemn them out of hand.

I have put in an awful lot of Windows boxes over the years. I have lost large potential jobs when I recommended Windows solutions to Mac houses, because it was the best tool for the job. I have also refused large jobs from people who wanted, say, Windows Server or Exchange Server when it *wasn't* the right tool for the job.

It was my job to assess this stuff.

Which equips me well to know that every single time someone decries Apple stuff, it means they haven't done the work I have. They don't know, and they can't be bothered to learn.
liam_on_linux: (Default)
But the dev teams are quite negative.

A common objection is that supporting ARM SBCs is hard, because there are so many.

This is true. There are indeed dozens, maybe hundreds, of ARM SBCs out there. Many don't have very good Linux support, which is why Armbian exists, for example.

Supporting them all is a massive undertaking for a small community-driven OS.

But the RasPi is not just another ARM SBC. It is *the* inexpensive SBC. They had already sold *38 million* of the things by this time last year:
https://www.tomshardware.com/news/raspberry-pi-9th-birthday

They sell about ⅔ of a million per month, around 1¾ million per quarter:
https://www.zdnet.com/article/raspberry-pi-sales-jump-heres-why-the-tiny-computers-in-demand-in-coronavirus-crisis/

There are more RasPis out there than anything since the Commodore 64 or Sinclair ZX Spectrum.

Yes, there are over 20 models, but basically, if one confines one's efforts to the latest model, then that's _still_ millions of units, there's basically just one chipset to support, and they're so cheap that potential users with a different SBC can just buy a RasPi instead for the cost of a modest restaurant meal.

Even the £5 Pi Zero has ½ gig of RAM: a very capable target for most hobbyist OSes. It costs less than a small SD card.

Yes, it's true, supporting all SBCs is very hard -- *so don't*. Support one or at most two models: the best-selling SBCs in the world. The RasPi family is in fact already the largest compatible hardware platform outside the x86 PC: it has used only two or three chipsets in a decade, and there are more of these highly mutually-compatible machines out there than there are x86 Apple Macs.

It's not a comparison of equals. All ARM hardware isn't alike. Yes, there is a vast profusion of ARM hardware. Even of ARM SBCs. Even of cheap consumer end-user ARM SBCs.

But even so, despite that, there is a very clear and obvious market leader, it's well-documented, and there are already multiple FOSS OSes that run on it, with drivers available for study. Not just Linux, which is all that basically every other ARM SBC gets.

Most RasPi round-ups of operating systems just list half a dozen Linux distros, but OSNews readers know that Linux is just one OS. Distros are cosmetics.

*Excluding* Linux, the RasPi runs FreeBSD, NetBSD, OpenBSD, Plan 9, Inferno, and RISC OS. (And Windows IoT but that's not FOSS.)

No other SBC in the world can run as many different OSes. In fact, aside from the PC, I don't think any other single model range of computers ever made can run as many different OSes as the RasPi.
liam_on_linux: (Default)
The 65C816 was a dead end, I'm afraid. It was a fairly poor 16-bit chip, and the notional successor, the 65C832, was never made. It only existed as a datasheet:

https://downloads.reactivemicro.com/Electronics/CPU/WDC%2065C832%20Datasheet.pdf

Backwards compatibility is really limiting. Look how long it took the PC to catch up with mid-1980s graphical computers, such as the Mac, Amiga or ST. Any of them was frankly far ahead of Windows 3.x, and it wasn't 'til Windows 95 that it could compare.

Innovation is hard. Everyone tends to overlook the Lisa, which is the machine that pioneered most of the significant concepts of the Mac: not just the GUI, but a rigorously and completely specified set of UI guidelines, plus a polished, 2nd-generation GUI.

(Xerox's original was very Spartan. No menu bars, no standardised window controls, no standardised dialog boxes, etc. It was a toolkit for writing GUI apps, and a fancy language to implement them in.)

Apple added a _lot_. But the first version was, just like the Xerox Star, way too complicated (hard disk! Multitasking!) and *way* too expensive.

It took a second system to get it right, and it took cutting it back *HARD* to make it affordable enough so people would notice. Yes, sure, 128 kB wasn't really enough. One single-sided floppy wasn't enough. But even so it was $2500. It had to be pared to the *bone* to get it down to a quarter of the price of the Lisa.

It was a trailblazer. It showed that a single-user standalone GUI machine was doable, and worth having, and could be just about affordable.

Just 9 months later, a 512 kB model was doable for only $700 more. Tech advanced fast back then.

They simply could not have done a IIGS at that kind of price point in 1984. It wasn't possible. The Mac was only barely possible. The other new 680x0 personal computer of 1984, the Sinclair QL, had 128 kB too.

If there'd been no Mac, the GS wouldn't have had its GUI. The GUI was a re-implementation of the Mac one. Without that, it would have just been a slow kinda-sorta 16-bit machine, released a year and 2 months after the Amiga 1000 – which was $1300 but which had much better graphics, comparable sound, a full multitasking GUI, and a 7.1 MHz 68000 – a much more capable chip.

Or the Atari ST, which was another full 68000 machine, with half a meg of RAM, and a GUI, and was (unlike the Amiga) usable with a single floppy because the OS was in ROM... and which was $800 in June 1985.

There is more to the universe than just Apple.

In the gap between the Lisa and the Apple IIGS, IBM released the PC-AT, which my friend Guy Kewney, perhaps the most famous IT journalist in the UK then, called "his first experience of Raw Computer Power". His caps.

The year after that, Intel released the 80386, a true 32-bit chip. The same month as the IIGS, Compaq released the Deskpro 386, the first true 32-bit PC. Sure, $6,500 -- but vastly more powerful and capable than a 65C816.

The IIGS was a gorgeous machine. I was at the UK launch. I wanted one very badly. But bear in mind that the Apple II was _not_ a successful machine in Europe -- it was too expensive. A $1000 computer in 1977 was no use to us: that was more than the price of a car. We got Sinclair ZX80s and ZX81s, the first £100 computers. :-)

So outside a few countries, the IIGS had no existing catalogue of software and so on. Neither did the Amiga or ST at launch, but they'd been around for over a year by the time the IIGS appeared, and they had amazing best-of-breed apps and games by then.
liam_on_linux: (Default)
Thanks to a fellow member on a mailing list, who sent me a replacement motherboard (complete with RAM!), I've been able to get my Microserver N40L working again (as well as to upgrade my recently-bought cheap 2nd-hand Microserver N54L to 8GB).

So far the N40L has no disks. I plan to reinstall the disks of its old RAID, built under Ubuntu 14.04 about 8 years ago by [personal profile] hobnobs.

Because the array is a Linux mdraid, which won't work on FreeBSD (I think), I put OpenMediaVault on a USB key. The web UI works although IMHO it does look, well, a little bit amateurish.
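(The attraction of a Linux-based NAS OS here is that the old array should simply re-assemble from its superblocks. A rough sketch of the check, assuming the member disks are attached and this is run as root; finding the devices is left to mdadm itself.)

    import subprocess

    # List the md superblocks mdadm can see on the attached disks,
    # and which array(s) they belong to.
    subprocess.run(["mdadm", "--examine", "--scan"], check=True)

    # Re-assemble every array it found; the result then shows up in /proc/mdstat.
    subprocess.run(["mdadm", "--assemble", "--scan"], check=False)
    print(open("/proc/mdstat").read())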

I've upgraded its firmware to the Oct 2013 version, which AFAIK is the latest.

The thing is, its fans run full-speed all the time. It didn't do that before.

Is there anything you can do to set the fans to automatic speed control rather than full? My N54L is virtually silent, but the N40L sounds like a vacuum cleaner and can be heard 2 rooms away.

Speaking of the N54L: I put TrueNAS Core on an old laptop HDD in an eSATA caddy (powered from a USB port). It works like a dream. It happily imported my old ZFS RAIDZ, created with 64-bit Ubuntu Server on a Raspberry Pi 4 a couple of years ago now. It allowed me to add a couple of shares, and without any faffing around with permissions, it Just Works™. I'm very impressed. It even has htop preinstalled. The web GUI is very smart and professional-looking.

It supports both SMB and AFP out of the box, and right now my fiancée's MacBook Pro is backing up to a Time Machine share on the TrueNAS box, over wifi.

But TrueNAS Core does want to be installed on a proper hard disk. The old FreeNAS and NAS4Free could be installed onto, and run from, a USB key. If you still want that, you might try XigmaNAS. It's a fork of FreeNAS before iXsystems renamed it, and it does still support booting from a USB key. I gave it a quick whirl in VirtualBox and found its installer a lot more complex and confusing, though, so I gave it a pass.
