liam_on_linux: (Default)
It's been a while since I have had any time to work on one of my pet projects...

So, herewith, step 1 of it: a downloadable FAT32 PC DOS 7.1 VirtualBox disk image.

This is the PC DOS 2000 disk image from Connectix Virtual PC – I described how I created that this time last year.

I've replaced the kernel files, COMMAND.COM and a few utilities with those from the freely-downloadable PC DOS 7.1 made available by IBM. I've described that and how to get it, too.

So what I did was make a new 8GB virtual drive, partition it with PC DOS 7.1's FDISK32 command, format it FAT32 with its FORMAT32 command, copy the system across from the Connectix FAT16 drive, and check that it boots. Here it is.
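
Spelled out as commands, the sequence is roughly this -- a sketch rather than a transcript. The drive letters assume you boot the existing Connectix image as C: with the new blank 8GB disk attached as the second drive (its new partition showing up as D:), and I'm assuming FORMAT32 takes FORMAT's usual /S switch; if it doesn't, running SYS D: afterwards does the same job.

  FDISK32                  (create a primary FAT32 partition on drive 2, then reboot)
  FORMAT32 D: /S           (format the new partition FAT32 and transfer the system files)
  XCOPY C:\*.* D:\ /S /E   (copy the rest of the OS and utilities across)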

Next planned step: add in the IBM Warp Server DOS LAN Services & IBM TCP/IP and make it able to talk to the VirtualBox host. Sadly, after so long away from this, it took me some hours to remember where I was up to and build this disk image.
liam_on_linux: (Default)
I recently replied to a deeply misguided comment on HN that claimed that Windows 98 was the beginning of integrated networking in Windows. Wolfie Pauli's (as I like to call him) famous line applies: "that is not only not right, that is not even wrong!"
But, to be fair, which I rarely feel the urge to be towards Microsoft, Win98 did have one killer networking feature: (effectively) unlimited IP addresses.
I do not know the internals, and the Web seems to have forgotten them if it ever knew... but the Win9x and NT networking stacks were very different.

The NT network stack was cleanly layered. It supported multiple adaptor types, protocols and clients all in one, and it was so complicated that, up to NT 4, when you finished making changes to the networking configuration and clicked "OK", a tiny embedded Prolog interpreter fired up and ran a single embedded Prolog program -- the only one in all of the DOS, OS/2 and Windows codebase that I'm aware of, and possibly the only one in any commercial OS anywhere. This Prolog code parsed your desired config, worked out how to interconnect all the layers of the NT network stack, and wrote the configuration file(s) and registry settings needed to give you what you wanted.

That's why a little progress dialog box popped up for a while.

I do not know if the Prolog has gone, but I suspect that the progress-indicator box has gone just because [a] everything defaults to TCP/IP now and the NetWare support stuff is mostly or all gone, and [b] computers are much faster now, so there's no perceptible pause and hence no need for a progress bar.
Win9x, on the other hand, was Windows 4, the successor to Windows for Workgroups 3.11, and so -- from what tiny bit I know from experience, plus some educated guesswork -- I strongly suspect that its network stack was just a modified version of the WfWg 3.11 stack.
In Win95/95B you could only have a maximum of 4 TCP/IP addresses. That's on all adaptors (Ethernet, modem, Direct Cable Connection, AOL dialup, etc.) put together.

Aside: yes, the AOL adaptor was its own device, not a Windows-managed modem device: AOL managed its own dial-up. Don't knock it, it worked: it was much easier to get online with AOL in the 1990s than it was with, well, anything else, because AOL did a lot of R&D to make it happen. I know this because I've had a free journo AOL account since the 1990s and it gave me toll-free worldwide dialup, so I used it a lot when visiting Oslo, which I did monthly for a couple of years at the start of the century. (Thanks, Ryanair!) No wifi or broadband back then!

Win95 bound all protocols to all adaptors. No need for Prolog here. I had a Thinkpad 701C -- the classic folding-keyboard Butterfly model -- and I had several PCMCIA cards for it. One had a 56K modem. One was a 10base-T network card. One was a 100base-T network card. And I had direct cable connection for linking to other PCs. That's four network connections, if you're counting.

The AOL dial-up adaptor made 5.

Awooga! Alert! That means five network adaptors, and Win95 didn't support more than four IP addresses, in total, for all devices. A dynamic address -- that is, an adaptor that does BootP or DHCP -- is still an address. An adaptor that's not plugged in at the moment was still an adaptor and had TCP/IP bound to it, so it needed a slot for an address.

(Oh, and forget about IPv6 – Win95 just didn't do that, period; it hadn't been invented yet.)

Result: problem. A basically-insoluble problem. Microsoft didn't really expect people to want so many IP addresses in those days.

Windows 98 allowed effectively unlimited IP addresses, so I could have all those adaptors at once.

Problem: the Thinkpad 701C is a 75MHz 486 with 40MB of RAM, and Windows 98 was too much for it. I used 98lite to trim it down, but it was still bulky and sluggish.

In desperation I did try Windows 2000 on the machine. Win2K on a 486 with 40MB of RAM is possible, just, but it's not much fun to use. Thankfully the old Thinkpad's hard disk tray was removable and I had a few drives and caddies, so I could switch Windows versions in half a minute.

Anyway: that was the sole significant improvement in networking in Win98 that I am aware of. The other thing 98 could do was drive multiple monitors, if you had one of an extremely limited selection of supported graphics cards. Otherwise, it was just 95B with more drivers and the odious Active Desktop built in.
liam_on_linux: (Default)
Someone on HN referred to "the arrival of built-in Windows Networking in Windows 98".

Since that is, IMHO, "not even wrong", I felt I had to reply...

This is not correct.

Windows 95 had built-in networking from launch and an email client on the desktop. (It did not have a bundled web browser at launch, but it had networking including TCP/IP and dial-up.)

Built-in networking arrived with Windows for Workgroups 3.1 in 1992, and became mainstream with Windows for Workgroups 3.11 in 1993, because of the performance enhancements of 32-bit File Access. From '93 on almost all PCs shipped with WfWg 3.11 as the sole default version.

WfWg 3.11 had an optional extra add-on delivering 32-bit TCP/IP, and Internet Explorer was a free download that gave it dial-up TCP/IP.

Also, Windows NT launched in 1993, with built-in networking including TCP/IP over wired and dial-up networks.

But networking does not and did not equal TCP/IP. DOS and Windows 3.x defaulted to NetBEUI, with optional IPX/SPX for Novell NetWare, which was the dominant PC networking standard from the late 1980s. Until the mid-1990s, TCP/IP was a niche protocol only needed if you wanted to communicate with expensive RISC-based UNIX™ workstations.

Microsoft used NetBEUI. Novell used IPX. Apple used AppleTalk. DEC used DECnet. IBM used lots of protocols including DLC but didn't use Ethernet all that much -- it had its own network system, Token Ring -- so you needed special hardware to talk to IBM kit and only IBM-centric businesses used it much.

LANs in offices were certainly entirely mainstream in the first half of the 1990s. When I started my first job in London in 1991, we had just one client who didn't have an office network. That was considered unusual, but it was a deliberate management decision, intended to slow the possible spread of malware and to increase real-life, face-to-face staff communication.

What wasn't mainstream was them being based on TCP/IP.

These days, networking and TCP/IP seem synonymous, but that's just how it happens to be this century. Networking, mostly over Ethernet, initially Thin Ethernet (10base-2), in wide use as a common office tool predated the rise of TCP/IP by a good 15 years or so. Some early adopters were using it 20+ years earlier.

Network protocols then were a bit like OSes are now. Many people use Windows but lots use Macs, some use *BSD, a few still use commercial UNIX, etc., and there are things like ChromeOS, thin clients over RDP, and stuff. It's not at all homogenous and it's hard to even say there's a clear majority for any one OS: Windows has the edge, but not by a lot any more.

Well, in the era of MS-DOS, Novell was the server OS of choice for almost everyone, with rivalry from 3Com and its MS-DOS-based server OS, 3+Share; 3+Share was related to MS LAN Manager, which ran on OS/2 and led to 3+Open. The 3Com and Microsoft products all used NetBEUI. LAN Manager also ran on VMS thanks to DEC Pathworks, running over DECnet, which was handy because it also supported terminal sessions -- remember, this is before SSH -- and X11 and DEC email and more.

Focussing on big businesses was Banyan VINES, with its own protocol derived from Xerox's XNS. This had the first network directory service, StreetTalk. Novell designed NetWare 4, with NDS, as a direct response to Banyan's StreetTalk, and in the NT 3.x era Novell kicked Microsoft's behind in the market; the tide only turned with NT 4.

Speaking of big enterprises, email was common long before LANs or TCP/IP. All big DEC users and IBM users had those companies' email systems. Small firms used dial-up to pre-Internet service providers -- I used CIX, which dominated in Britain. My 1991 CIX email address is still live and still works. Americans favoured CompuServe, AKA Compu$erve, but it was too expensive in Europe where we pay for local calls too.

This stuff is, ballpark, a quarter of a century older than mainstream TCP/IP and Internet-based networking -- and given that that is itself only about 25 years old, my point is that widespread LAN use didn't begin with Win98.

At the time, Win98 wasn't even a blip; it was nicknamed "GameOS" in enterprise IT circles and few companies even considered it. NT 4 was where it was at, and it launched with full TCP/IP support two whole years before Win98.

So no, Win98's networking didn't begin anything at all. It wasn't significant in any way, then or now. Win98 was a home OS for standalone PCs with dial-up, but it merely took over from Win95 which created that market.

The rise of TCP networking in business LANs arguably began with Windows NT, but NT 3.x wasn't very significant, and NT itself arrived about halfway through the lifetime, from its beginnings to now, of business use of machine-to-machine communications, email, groupware and so on.

If you want to argue that integrated networking in Windows was a significant turn, that I won't argue with. But it began 5 years before Win98, with Windows for Workgroups and Windows NT.

The fact that it now looks big and significant, as the point when TCP/IP became the default, is an emergent artifact of the current focus on IP. It didn't look that way at the time.

Email is a 1960s thing. The Internet started to become significant in the 1970s, long after email. Corporate LANs rose in prominence in the 1980s and by the 1990s were almost a given. Macintosh-based companies (mostly in design, print, repro etc.) did direct peer-to-peer comms over ISDN.

In the 1990s, for most people, TCP/IP only ran over dial-up modem connections, and it was contemporaneous with the industry moving to 10base-T: Ethernet over UTP replacing Ethernet over Coax.

For a time, the obvious successor to 10base-T looked to be ATM, which is a protocol as well as a cabling system; TCP/IP had to be tunnelled over it, but for a while ATM looked like the future. 100base-T (Fast Ethernet) was just one contender among several.

But actually, as it happened, TCP/IP rose vastly in importance, and networking switched to 100base-T and then wifi.

LANs switched to IP around the turn of the century, but by then they were a roughly 20-year-old, established, totally normal technology.
liam_on_linux: (Default)
Some companies sell laptops with Linux pre-installed. However, in some cases I have read about, there are significant caveats.

Some examples:

  • Dell pre-installed their own drivers for Ubuntu on their laptops, and if you format the machine and reinstall, or reinstall a different distro, you can't get the source of the drivers and build your own.

  • In other instances I've heard of, the machines work fine but some features are not supported on Linux. Or a feature only works on the vendor's supported distro and not on other distros. Or it works on Linux but not on -- say -- FreeBSD.

  • Or all features work, but you require Windows to update the firmware, or to update peripherals' firmware, such as docking stations.

  • Or the Linux models have slightly different specs, such as a specific WLAN card, and the generic Windows version of the same model is not 100% compatible.


The fact that someone offers one or two specific models with one particular Linux distro as an option is good, sure, but it doesn't automatically mean that that particular machine is a good choice if you run a different distro, or don't want the pre-installed OS, or didn't buy it with Linux and want to put it on later.

Long, long ago, in the mid-1990s, I ran the testing labs for a major UK computer magazine called PC Pro. In about 1996 I proposed, ran and edited a feature which the editors were very dubious about, but it proved to be a big hit.

The idea was very simple: at that time, all PCs shipped with Windows 95. As 95 was DOS-based at heart and had no concept of user space vs kernel space, drivers were quite easy. You could, at a push, use DOS drivers, or port drivers from Windows for Workgroups, which did terrible hacky direct-hardware-access stuff.

So my feature was: we want machines designed, built and supplied with Windows NT. At the time, that meant NT 4.

NT 4 was not at all like Win95; it just looked superficially like it. It needed its own, new, specially-written drivers for everything. It had built-in drivers for some things, for example EIDE (i.e. PATA) hard disks, but these did not use DMA, only programmed IO. (Not slow, but caused very high CPU usage; no problem on Win9x, but a performance-killer on NT.)

The PC vendors loved and hated us for it.

Some vendors...

  • promised machines then withdrew at the last minute;

  • promised machines, then changed the spec or price;

  • delivered machines with features not working;

  • delivered machines with expensive replacement hardware for built-in parts that didn't work with NT.


And so on. There was a huge delta in performance (while all Win9x machines performed pretty much alike: we could look at the parts list and predict the benchmark scores to within about 5%).

Many vendors didn't know about DMA hard disk drivers.

Some did but didn't know how to fix it. Some fitted SCSI hard disks as a way round this, not knowing that with the motherboard came a floppy disk with a free driver that would enable DMA on EIDE.

Some shipped CD burners that couldn't burn because the burner software didn't work on NT. Some shipped DVD drives which couldn't play movies on NT because the graphics adaptor's video playback acceleration didn't work on NT.

And so on.

Readers *loved* that feature because it separated the wheat from the chaff: it distinguished the cheap vendors whose PCs mostly worked but who didn't know how to tune them, from the solid vendors who knew what they were doing and how to make stuff work, and from the vendors who could build a great PC for the task but at double the price.

I got a lot of praise for that article, and it was well worth the work.

Some vendors thanked me because it was so educational for them!

Well, Linux on laptops is still a bit like that today. There is a whole pile of stuff that's easy and a given on Windows that is difficult or problematic on Linux and just plain impossible on any other FOSS OS.

  • Switchable GPUs are a problem

  • Proprietary binary graphics drivers are sometimes a problem

  • Displays on docking stations can be tricky


Interactions between these things are even worse; e.g. multiple displays on USB docking stations can be extra tricky.

For example, with openSUSE Leap I found that, with Intel graphics, two screens on a USB-C docking station were easy, but with nVidia Optimus, almost impossible.

With my own Latitude E7270, under KDE I can only drive 1 external screen; if I add 2 as well as the built-in one, then window borders disappear on the laptop screen and so windows can't be moved or resized. But under the lighter-weight Xfce, this is fine & all 3 screens can be used. And that's with an Intel GPU and a proper, PCIe-bus-attached dock.

But every time I un-dock or re-dock, it forgets the screen arrangement and the Display preferences have to be redone every single time.

Most apps can't remember what screen they were on and reopen on a random monitor every time. Possibly entirely offscreen if I have a different screen arrangement.

Even the same screens attached directly to the machine and via the dock confuse it. And I have both a full-size and mini dock. All the ports appear different.
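
One partial workaround for the forgotten arrangement -- a sketch, not something any distro does for you -- is to script the layout and re-run the script after docking. This assumes X11 rather than Wayland, and the output names (eDP-1, DP-1-1, DP-1-2) are only examples: run xrandr --query to see what yours are actually called.

  #!/bin/sh
  # docked.sh -- restore a three-screen layout after plugging into the dock (illustrative only)
  xrandr --output eDP-1  --mode 1920x1080 --pos 0x0 \
         --output DP-1-1 --mode 1920x1080 --pos 1920x0 \
         --output DP-1-2 --mode 1920x1080 --pos 3840x0

It doesn't help the apps that forget which screen they were on, but at least the arrangement itself comes back with one command rather than a trip through the Display settings.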

Linux on laptops is still complicated.

Just because things work for 1 person doesn't mean they'll work for everyone. Just because a vendor ships a model with Linux doesn't mean all models work. Just because a vendor ships 1 distro doesn't mean all distros work.

And when the machine is new, you can be fairly sure that there will be serious firmware issues with Linux, because the firmware was only tested against Windows, and sketchily even then. This is the era of Agile and minimum viable products, after all.

So do not take it as read that, because Dell ship 2 or 3 models with Ubuntu, all models will Just Work™ with any distro.

I absolutely categorically promise you they don't and they won't.
liam_on_linux: (Default)

I'm an AOL hacking historian. No doubt about it.

I remember intimate details about most of the major breaches that occurred between the mid '90s and 2000s. I was there and actively participating in most of it. America Online's security was being compromised nonstop. It was unbelievable. Corporate cybersecurity has a bad reputation now, but what was commonplace then is unthinkable now. Security was bad. This was especially the case with AOL.

Through phishing attacks, password cracking, social engineering, whatever it took - we were breaking into employee accounts and staff areas at scale. In the late 90s and early 2000s the golden goose was the customer records information system, or as it was most commonly known, "CRIS", which AOL employees used to action customer accounts.



Early AOL "Mac hackers," (Macintosh users) - many congregating in the the private chat "macfilez", were the first to access CRIS. Legendary early Mac hackers like "Happy Hardcore" were able to breach various internal accounts and gain access. this was before the keyword became LAN only, i.e. on-campus VPN use only. There is a misconception in cybersecurity. Massive corporations are expected to have the tightest security. The truth is that the more employees a corporation has, the less secure it is - and AOL is a perfect example.

AOL + loads of employees = tons of AOL accounts being hacked through CRIS, and later Merlin. This is in contrast to the AIM team, which was very small.

AIM + very few employees = very few AIM accounts being hacked outside of screen name exploits here and there. Nobody even knew the name of AIM's internal area(s).

Fast forward to autumn of 2003 and meet a couple of old friends of mine - Dime and Toast. That's when they discovered, and subsequently broke into, the LAN-only AIM admin web area. It was called WHAOPS. For the first time in history AOL hackers were finally able to learn the name of, and actually view, the elusive AIM administration panel.





Dime programmed a botnet web browser in Delphi in such a way that it let him take infected AOL employee computers and leverage them to connect to staff-only websites. Prior to finding WHAOPS, we'd just been hanging out and watching zombie AOL staff computers fill an IRC channel, casually surfing their internal networks to see what we could uncover.

Toast, Dime's twin brother, found WHAOPS. From there they started methodically targeting AIM team members, of which there were few. Eventually they hacked an AIM admin screen name, used the botnet web browser, logged into https://whaops.aol.com (iirc) and started raising hell. They reset the password to "OnlineHost", stole a slew of screen names, suspended and unsuspended other people's accounts at will - all types of fuckery.

It was an incredible night, and it was Dime's final performance. He pulled off the impossible and quit the scene permanently. I think it's unfortunate that only the hackers who got busted were immortalized by the internet. Dime and Toast are legends you'd otherwise never have heard of. This post is for them. {S GOODBYE

<3 pad

YCombinator Update Sept 23, 2021
First of all, Null, or "risk", Dime just said you're full of shit. Nobody helped him code anything. I say you're full of shit too.

The following comment was written by someone named Justin Perras aka Null who came into the scene after Dime had already left. This guy always pops up to lie on my reputation to discredit me (by, again, lying) whenever I write anything. The last time this happened was on DG last year. Today it was YCombinator.



The cognitive dissonance needed for Justin to call me a groupie is immeasurable. YCombinator ghosted my response post because it was submitted by a new account - here's that:



Hopefully that summarizes it but the tl;dr is that I'm a triple OG and Null is a coattail riding poor retard with abysmal grammar. You'll never make anything of yourself my guy. Hell, we're old. You just never pulled it off. While the upper echelon (i.e. not you) was pushing up 7 figure stats you were skidding about in the playground we left behind for you. You never managed to find yourself a seat at the table Justin. I talk to millionaire entrepreneurs all day and do very well for myself. How about you? You're a loser and that's nothing new - and you are a liar.

In other news - I managed to summon the backend programmer of WHAOPS. That was interesting.



L8Z

liam_on_linux: (Default)
I must be mellowing in my old age (possibly as opposed to bellowing) because I have been getting praise and compliments recently on comments in various places.

Don't worry, there are still people angrily shouting at me as well.

This was the earlier comment, I think... There was a slightly forlorn comment in the Reddit Lisp community, talking about this article, which I much enjoyed myself:

Someone was asking why so few people seemed interested in Lisp.

As an outsider – a writer and researcher delving into the history of OSes and programming languages far more than an actual programmer – my suspicion is that part of the problem is that this positively ancient language has accumulated a collection of powerful but also ancient tooling, and it's profoundly off-putting to young people used to modern tools who approach it.

Let me describe what happened and why it's relevant.

I am not young. I first encountered UNIX in the late 1980s, and UNIX editors at the same time. But I had already gone through multiple OS transitions by then:

[1] weird tiny BASICs and totally proprietary, very limited editors.

[2] early standardised microcomputer OSes, such as CP/M, with more polished and far more powerful tools.

[3] I personally went from that to an Acorn Archimedes: a powerful 32-bit RISC workstation with a totally proprietary OS (although it's still around and it's FOSS now) descended from a line of microcomputers as old as CP/M, meaning no influence from CP/M or the American mainstream of computers. Very weird command lines, very weird filesystems, very weird editors, but all integrated and very powerful and capable.

[4] Then I moved to the same tools I used at work: DOS and Windows, although I ran them under OS/2. I saw the strange UIs of CP/M tools that had come across to the DOS world run up against the new wave of standardisation imposed by (classic) MacOS and early Windows.

This meant: standard layouts for menus, contents of menus, for dialog boxes, for keystrokes as well as mouse actions. UIs got forcibly standardised and late-1980s/early-1990s DOS apps mostly had to conform, or die.

And they did. Even then-modern apps like WordPerfect gained menu bars and changed their weird keystrokes to conform. If their own weird UIs conflicted, then the standards took over. WordPerfect had a very powerful, efficient, UI driven by function keys. But it wasn't compatible with the new standards. It used F3 for help and Escape to repeat a character, command or macro. The new standards said F1 must be help and Esc must be cancel. So WordPerfect complied.

And until the company stumbled, porting to OS/2 and ignoring Windows until it was too late, it worked. WordPerfect remained the dominant industry-standard, even as its UI got modernised. Users adapted.

So why am I talking about this?

Because the world of tools like Emacs never underwent this modernisation.

Like it or not, for 30 years now, there's been a standard language for UIs and so on. Files, windows, the clipboard, cut, copy, paste. Standard menus in standard places and standard commands on them with standard keystrokes.

Vi ignores this. Its fans love its power and efficiency and are willing to learn its weird UI.

Emacs ignores this, for the same reasons. The manual and tutorial talk about "buffers" and "scratchpads" and "Meta keys" and dozens of things that no computer made in the last 40 years has had: a whole different language, from before the Mac and DOS and Windows transformed the world of computing.

The result of this is that if you read guides and so on about Lisp environments, they don't tell you how to use it with the tools you already know, in terms you're familiar with.

Instead they recommend really weird editors and weird add-ons and tools and options for those editors, all from long before this era of standardization. They don't discuss using Sublime Text or Atom or VS Code: no, it's "well you can use your own editor but we recommend EMACS and SLIME and just learn the weird UI, it's worth it. Trust us."

It's counter-productive and it turns people off.

I propose that a better approach would be to modernize some of the tooling and forcibly make it conform to modern standards. I'm not talking about trivial stuff like CUA-mode, but bigger changes, such as ErgoEmacs. By all means leave the old UI there and make it possible for those who have existing configs to keep it, but update the tools to use standard terminology, to use the names printed on actual 21st-century keyboards, and to work the same way as every single GUI editor out there.

Then once the barrier to entry is lowered a bit, start modernising it. Appearance counts for a lot. "You never get a second chance to make a first impression."

One FOSS tool that's out there is Interlisp Medley. There are efforts afoot to modernise this for current OSes.

How about just stealing the best bits and moving them to SBCL? Modernising its old monochrome GUI and updating its look and feel so it blends into a modern FOSS desktop?

Instead of pointing people at '70s tools like Emacs, assemble an all-graphical, multi-window, interactive IDE on top of the existing infrastructure and make it look pretty and inviting.

Keep the essential Lispiness by all means, but bring it into the 2020s and make it pretty and use standard terms and standard keystrokes, menu layouts, etc. So it looks modern and shiny, not some intimidating pre-GUI-era beast that will take months to learn.

Why bother? Who'll do it?

Well, Linux spent a decade or more as a weird, clunky, difficult and very specialist OS, which was just fine for its early user community... until it started catching up with Windows and Mac and growing into a pretty smooth, polished, quite modern desktop... partly fuelled by server advancements. Things like NetBSD are still at that stage, and have zero mainstream presence.

Summary: You have to get in there and compete with mainstream, play their game by their rules, if you want to compete.

I'd like to have the option to give Emacs a proper try, but I am not learning an entire new vocabulary and a different UI to do it. I learned dozens of 'em back in the 1980s and it was a breath of fresh air when one standard one swept them all away.

There were very modern Lisp environments around before the rise of the Mac and Windows swept all else away. OpenGenera is still out there, but we can't legally run it any more -- it's IP that belongs to the people who inherited Symbolics when its founders died.

But Interlisp/Medley is still there and it's FOSS now. I think hardcore Lispers see stuff like a Lisp GUI and natively-graphical Lisp editors as pointless bells and whistles – Emacs was good enough for John McCarthy and it still is for me! – but they really are not in 2021.

There were others, too. Apple's Dylan project was built in Lisp, as was the amazing SK8 development environment. They're still out there somewhere.

liam_on_linux: (Default)
A short extract of Neal Stephenson's seminal essay "In the Beginning... Was the Command Line" has been doing the rounds on HackerNews.


OK, fine, so let's go with it.

Since my impression is that HN people are [a] xNix fans, [b] often quite young, and therefore [c] have little exposure to other OSes, let me try to unpack what Stephenson was getting at, in context.

The Hole Hawg is a dangerous and overpowered tool for most non-professionals. It is big and heavy. It can take on big tough jobs with ease, but its size and its brute power mean that it is not suitable for precision work. It has relatively few safety features, so that if used inexpertly, it will hurt its operator.

DIY stores are full of smaller, much less powerful tools. This is for good reasons:

  • because for non-professional users, those smaller, less-powerful tools are much safer. A company which sells untrained users a tool that tends to maim or kill them will go out of business.

  • because smaller, less-powerful tools are better for smaller jobs, that a non-professional might undertake, such as hanging a picture, or putting up some shelves.

  • professionals know to use the right tool for the job. Surgeons do not operate with chainsaws (even though they were invented for surgery). Carpenters do not use axes.


The Hole Hawg, as described, is a clumsy tool that needs things attached to it in order to be used, and even then, you need to know the right way or it will hurt you.

Compare with a domestic drill with a pistol grip that is ready to use out of its case. Modern ones are cordless, increasing their convenience.

One is a tool for someone building a house; the other is a better tool for someone living in that house.

That's the drill part.

Now, let's discuss the OSes talked about in the rest of the 1999 piece from which that's a clipping [PDF].

There are:

  • Linux, before KDE, with no free complete desktop environments yet;

  • Windows, meaning Windows 98SE or NT 4;

  • Classic MacOS – version 9;

  • BeOS.

Stephenson points out that Linux is as powerful as any of them, cheaper, but slower, ugly and unfriendly.

He points out that MacOS 9 is as pretty, friendly, and comprehensible as OSes get, but it doesn't multitask well, it is not very stable, and when a program crashes, your entire computer probably goes with it.

He points out that Windows is overpriced, performs poorly, and is not the best option for anyone – but that everyone runs it and most people just conform with what the mainstream does.

He praises BeOS very highly, which was 100% justified at the time: it was faster than anything else, by a large margin. It had superb multimedia support and integration, better than anything else at the time. It was standards-compliant but not held back by that. For its time, it was a supermodern OS, eliminating tonnes of legacy cruft.

But it didn't have many apps so it was mainly for people in narrow niches, such as music production or maybe video editing.

It was manifestly the future, though. But we're living in the future and it wasn't. This was 23 years ago, nearly a quarter of a century, before KDE and GNOME, before Windows XP, before Mac OS X. You need to know that.

What Unix people interpret as praise here is in fact criticism.

That Unix is very unfriendly and can easily hurt its user. (Think `rm -rf /` here.)

That Unix has a great deal of raw power but maybe more than most people need.

That Unix is, frankly, kinda ugly, and only someone who doesn't care about appearances would choose it.

That something of this brute power is not suitable for fine precision work. (Which it still mostly isn't -- Mac OS X is Unix, tuned and polished, and that's what the creative pros use now.)

Here's a response from 17 years ago.
liam_on_linux: (Default)
Earlier today, I saw a link on the ClassicCmp.org mailing list to a project to re-implement the DEC VAX CPU on an FPGA. It's entitled "First new vax in ...30 years? 🙂"

Someone posted it on Hackernews. One of the comments said, roughly, that they didn't see the significance and could someone "explain it like I'm a Computer Science undergrad." This is my attempt to reply...

Um. Now I feel like I'm 106 instead of "just" 53.

OK, so, basically all modern mass-market OSes of any significance derive in some way from 2 historical minicomputer families... and both were from the same company.

Minicomputers are what came after mainframes, before microcomputers. A microcomputer is a computer whose processor is a microchip: a single integrated circuit containing the whole processor. Before the first one appeared in 1971, processors were made from discrete logic: lots of little silicon chips.

The main distinguishing feature of minicomputers from micros is that the early micros were single-user: one computer, one terminal, one user. No multitasking or anything.

Minicomputers appeared in the 1960s and peaked in the 1970s, and cost just tens to hundreds of thousands of dollars, while mainframes cost millions and were usually leased. So minicomputers could be afforded by a company department, not an entire corporation... meaning that they were shared, by dozens of people. So, unlike the early micros, minis had multiuser support, multitasking, basic security and so on.

The most significant minicomputer vendor was a company called DEC: Digital Equipment Corporation. DEC made multiple incompatible lines of minis, many called PDP-something -- some with 12-bit logic, some with 16-bit, 18-bit, or 36-bit logic (and an unreleased 24-bit model, the PDP-2).

One of its early big hits was the 12-bit PDP-8. It ran multiple incompatible OSes, but one was called OS/8. This OS is long gone but it was the origin of a command-line interface (largely shared with TOPS-10 on the later, bigger and more expensive, 36-bit PDP-10 series) with commands such as DIR, TYPE, DEL, REN and so on. It also had a filesystem with 6-letter names (all in caps) with semi-standardised 3-letter extensions, such as README.TXT.

This OS and its shell later inspired Digital Research's CP/M, the first industry-standard OS for 8-bit micros. CP/M was planned to be the OS for the IBM PC, too, but IBM got a cheaper deal from Microsoft for what was essentially a clean-room re-implementation of CP/M, which IBM called "PC DOS" and Microsoft called "MS-DOS".

So DEC's PDP-8 and OS/8 directly inspired the entire PC-compatible industry, the whole x86 computer industry.

Another DEC mini was the 18-bit PDP-7. Like almost all DEC minis, this too ran multiple OSes, both from DEC and others.

A 3rd-party OS hacked together as a skunkworks project on a disused spare PDP-7 at AT&T's research labs was UNIX.

More or less at the same time as the computer industry gradually standardised on the 8-bit byte, DEC also made 16-bit and 32-bit machines.

Among the 16-bit machines, the most commercially successful was the PDP-11. This is the machine to which UNIX's creators first ported it, and in the process, they rewrote it in a new language called C.

The PDP-11 was a huge success, so DEC was under commercial pressure to make an improved successor model. It did this by extending the 16-bit PDP-11 instruction set to 32 bits: the result was the VAX. For this machine, the engineer behind the most successful PDP-11 OS, RSX-11, led a small team that developed a new pre-emptive multitasking, multiuser OS with virtual memory, called VMS.

(When it gained a POSIX-compliant mode and TCP/IP, it was renamed from VAX/VMS to OpenVMS.)

OpenVMS is still around: it was ported to DEC's Alpha, the first 64-bit RISC chip, and later to the Intel Itanium. Now it has been spun out from HP and is being ported to x86-64.

But the VMS project leader, Dave Cutler, and his team, were headhunted from DEC by Microsoft.

At this time, IBM and Microsoft had very acrimoniously fallen out over the failed OS/2 project. IBM kept the x86-32 version, OS/2 for the 386, which it completed and sold as OS/2 2.0 (and later 2.1, 3, 4 and 4.5); it is still on sale today under the name Blue Lion, from Arca Noae.

At Microsoft, Cutler and his team got given the very incomplete OS/2 version 3, a planned CPU-independent portable version. Cutler et al finished this, porting it to the new Intel RISC chip, the i860. This was codenamed the "N-Ten". The resultant OS was initially called OS/2 NT, later renamed – due to the success of Windows 3 – as Windows NT. Its design owes as much to DEC VMS as it does to OS/2.

Today, Windows NT is the basis of Windows 10 and 11.

So the PDP-8, via OS/8 and CP/M, directly influenced the development of MS-DOS, OS/2, and Windows 1 through to Windows ME.

A different line of PDPs -- the PDP-7 and then the PDP-11 -- directly led to UNIX and C.

Meanwhile, the PDP-11's 32-bit successor, the VAX, directly influenced the design of Windows NT.

When micros grew up and got to be 32-bit computers themselves, and vendors needed multitasking OSes with multiuser security, they turned back to 1970s mini OSes.

This project is a FOSS re-implementation of the VAX CPU on an FPGA. It is at least the 3rd such project but the earlier ones were not FOSS and have been lost.
liam_on_linux: (Default)
[Another recycled mailing list post]

I was asked what options there were for blind people who wish to use Linux.

The answer is simple but fairly depressing: basically every blind person I know personally or via friends of friends who is a computer user, uses Windows or Mac. There is a significant move from Windows to Mac.

Younger computer users -- by which I mean people who started using computers since the 1990s and widespread internet usage, i.e. most of them -- tend to expect graphical user interfaces, menus and so on, and not to be happy with command-line-driven programs.

This applies every bit as much to blind users.

Linux can work very well for blind users if they use the terminal. The Linux shell is the richest and most powerful command-line environment there is or ever has been, and one can accomplish almost anything one wants to do using it.

But it's still a command line, and a notably unfriendly and unhelpful one at that.

In my experience, for a lot of GUI users, that is just too much.

For instance, a decade or so back, the Register ran some articles I wrote on switching to Linux. They were, completely intentionally, what is sometimes today called "opinionated" -- that is, I did not try to present balance or a spread of options. Instead I presented what were, IMHO, the best choices.


Multiple readers complained that I included a handful of commands to type in. "This is why Linux is not usable! This is why it is not ready for the real world! Ordinary people can't do this weird arcane stuff!" And so on.

Probably some of these remarks are still there in the comments pages.

In vain did some others try to reason with them.

But it was 10x quicker to copy-and-paste these commands!
-> No, it's too hard.

He could give GUI steps but it would take pages.
-> Then that's what he should have done, because we don't do this weird terminal nonsense.

But then the article would have been 10x longer and you wouldn't read it.
-> Well then the OS is not ready, it's not suitable for normal people.

If you just copy-and-paste, it's like 3 mouse clicks and you can't make a typing error.
-> But it's still weird and scary and I DON'T LIKE IT.

You can't win.

This is why Linux Mint succeeded -- partly because when Ubuntu introduced its non-Windows-like desktop after Microsoft threatened to sue, Mint hoovered up those users who wanted it Windows-like.

But also because Mint didn't make you install the optional extras. It bundled them, and so what if that makes it illegal to distribute in some countries? It Just Worked out of the box, and it looked familiar, and that won them millions of fans.

Mac OS X has done extremely well partly because users never ever need to go near a command line, for anything, ever. You can if you want, but you never, ever need to.

If that means you can't move your swap file to another drive, so be it. If that means that a tonne of the classic Unix configuration files are gone, replaced by a networked configuration database, so be it.

Apple is not afraid to break things in order to make something better.

The result has been to become the first trillion-dollar computer company, and hundreds of millions of happy customers.

Linux gives you choices, lets you pick what you want, work the way you want... and despite offering the results for free, the result has been about 1% of the desktop market and basically zero of the tablet and smartphone markets.

Ubuntu made a valiant effort to make a desktop of Mac-like simplicity, and it successfully went from a new entrant in a busy marketplace in 2004 to being the #1 desktop Linux within a decade. It has made virtually no dent on the non-Linux world, though.

After 20 years of this, Google (after *bitter* internal argument) introduced ChromeOS, a Linux which takes away all your choices. It only runs on Google-approved hardware, has no apps, no desktop, no package management, no choices at all. It gives you a dead cheap, virus-proof computer that gets you on the Web.

In less time than Ubuntu took to win about 1% of the Windows market over to Linux, Chromebooks persuaded about one third of the world laptop-buying market to switch to Linux. More Chromebooks sell every year -- tens of millions of them -- than Ubuntu has gained users in total since it launched.

What effect has this had on desktop Linux? Zero. None at all. If that is the price of success, they are not willing to pay it. What Google has done is so unspeakably foul, so wrong, so blasphemous, they don't even talk about it.

What effect has it had on Microsoft? A lot. Cheaper Windows laptops than ever, new low-end editions of Windows, serious efforts to reduce the disk and memory usage...

And little success. The cheap editions lose what makes Windows desirable, and ultra-cheap Windows laptops make poorer slower Chromebooks than actual Chromebooks.

Apple isn't playing. It makes its money in the high-end.

Unfortunately a lot of people are very technologically conservative. Once they find something they like, they will stay with it at all costs.

This attitude is what has kept Microsoft immensely profitable.

A similar one is what has kept Linux as the most successful server OS in the world. It is just a modernised version of a quick and dirty hack of an OS from the 1960s, but it's capable and it's free. "Good enough" is the enemy of better.

There are hundreds of other operating systems out there. I listed 25 non-Linux FOSS OSes in this piece, and yes, FreeDOS was included.

There are dozens that are better in various ways than Unix and Linux.

  • Minix 3 is a better FOSS Unix than Linux: a true microkernel which can cope with parts of itself failing without crashing the computer.

  • Plan 9 is a better UNIX than Unix. Everything really is a file and the network is the computer.

  • Inferno is a better Plan 9 than Plan 9: the network is your computer, with full processor and OS-independence.

  • Plan 9's UI is based on Oberon: an entire mouse-driven OS in 10,000 lines of rigorous, type-safe code, including the compiler and IDE.

  • A2 is the modern descendant of Oberon: real-time capable, a full GUI, multiprocessor-aware, internet- and Web-capable.

(And before anyone snarks at me: they are all niche projects, direly lacking polish and not ready for the mass market. So was Linux until the 21st century. So was Windows until version 3. So was the Mac until at the very least the Mac Plus with a hard disk. None of this in any way invalidates their potential.)

But almost everyone is too invested in the way they know and like to be willing to start over.

So we are trapped, the monkey with its hand stuck in a coconut shell full of rice, even though it can see the grinning hunter coming to kill and eat it.

We are facing catastrophic climate change that will kill most of humanity and most species of life on Earth, this century. To find any solutions, we need better computers that can help us to think better and work out better ways to live, better cleaner technologies, better systems of employment and housing and everything else.

But we can't let go of the single lousy handful of rice that we are clutching. We can't let go of our broken political and economic and military-industrial systems. We can't even let go of our broken 1960s and 1970s computer operating systems.

And every day, the hunter gets closer and his smile gets bigger.
liam_on_linux: (Default)
In fact, there are two free versions of WordPerfect: one for Classic MacOS, made freeware when WordPerfect discontinued Mac support, and a native Linux version, of which Corel offered a free, fully-working demo.

But there is a catch – of course: they're both very old and hard to run on a modern computer. I'm here to tell you how to get them and how to install and run them.

WordPerfect came to totally dominate the DOS wordprocessor market, crushing pretty much all competition before it, and even today, some people consider it to be the ultimate word-processor ever created.

Indeed the author of that piece maintains a fan site that will tell you how to download and run WordPerfect for DOS on various modern computers, if you have a legal copy of it. And, of course, if you run Windows, then the program is still very much alive and well and you can buy it from Corel Corp.

Sadly, the DOS version has never been made freeware. It still works – I have it running under PC-DOS 7.1 on an old Core 2 Duo Thinkpad, and it's blindingly fast. It also works fine on dosemu. It is still winning new fans today. Even the cut-down LetterPerfect still cost money. The closest thing to a free version is the plain-text-only WordPerfect Editor.

Edit: I do not know if Corel operates a policy like Microsoft's, where owning a new version allows you to run any older version. It may be worth asking.

But WordPerfect was not, originally, a DOS or a PC program. It was originally developed for a Data General minicomputer, and only later ported to the PC. In its heyday, it also ran on classic MacOS, the Amiga, the Atari ST and more. I recall installing a text-only native Unix version on SCO Xenix 386 for a customer. In theory, this could run on Linux using iBCS2 compatibility.

When Mac OS X loomed on the horizon, WordPerfect Corporation discontinued the Mac version – but when they did so, they made the last ever release, 3.5e, freeware.

WordPerfect 3.5e 
(Image source.)

Of course, this is not a great deal of use unless you have a Mac that can still run Classic – which today means a PowerPC Mac with Mac OS X 10.4 or earlier. However, hope springs eternal: there is a free emulator called SheepShaver that can emulate classic MacOS on Intel-based Macs, and the WPDOS site has a downloadable, ready-to-use instance of the emulator all set up with MacOS 9 and WordPerfect for Mac.

To be legal, of course, you will need to own a copy of MacOS 9 – that, sadly, isn't free. Efforts are afoot to get it to run natively on some of the later PowerMac G4 machines on which Apple disabled booting the classic OS. I must try this on my Mac mini G4 and iBook G4.

The non-Windows version of WordPerfect that lived the longest, though, was the Linux edition. Corel was very keen on Linux. It had its own Linux distro, Corel LinuxOS, which had a very smooth modified KDE and was the first distro to offer graphical screen-resolution setting. Corel made its own ARM-based Linux desktop, the NetWinder, as reviewed in LinuxJournal.

And of course it made WordPerfect available for Linux.

Edit: Sadly, though, Microsoft intervened, as it is wont to do. The programs in WordPerfect Office originally came from different vendors. Some reviews suggested that the slightly different looks and feels of the different apps would be a problem, compared to the more uniform look and feel of MS Office. (The Microsoft apps in Office 4 were very different from one another. Office 95 and Office 97 had a lot of effort put in to make them more alike, and not much new functionality.)

Corel was persuaded to license the MS Office look-and-feel – the button bars and designs – and the macro language (Visual BASIC for Applications) and incorporate them into WordPerfect Office.

But the deal had a cost above the considerable financial one: Corel had to discontinue all its Linux efforts. So it sold off Corel LinuxOS, which became Xandros. It sold its NetWinder hardware, which became independent. It killed off its native Linux app, and ended development of WordPerfect Office for Linux, which was a port of the then-current Windows version using Winelib. In fact, Corel contributed quite a lot of code to the WINE Project at this time in order to bring WINE up to a level where it could completely and stably support all of WordPerfect Office.


I'm not sure if the text-only WordPerfect for Unix ever had a native Linux version – I didn't see it if it did – but a full graphical version of WordPerfect 8 was included with Corel LinuxOS and also sold at retail. Corel offered both a free edition, with fewer bundled fonts, and a paid version.

This is still out there – although most of its mirrors are long gone, the Linux Documentation Project has it. It's not trivial to install a 20-year-old program on a modern distro, but luckily, help is at hand. The XWP8Users site has offered some guidance for many years, but I confess I never got it to work except by installing a very old version of Linux in a VM. For instance, it's easy enough to get it running on Ubuntu 8.04 or 8.10 – Corel LinuxOS was a Debian-derivative, and so is Ubuntu.

The problem is that even in these days of containers for everything, Ubuntu 8 is older than anything supports. Linux containers came along rather later than 2008. In fact, in 2011 I predicted that containers were going to be the Next Big Thing. (I was right, too.)

So I've not been able to find any easy way to create an Ubuntu 8.04 container on modern Ubuntu. If anyone knows, or is up for the challenge, do please get in touch!

But the "Ex WP8 Users" site folk have not been idle, and a few months ago, they released a big update to their installation instructions. Now, there's a script, and all you need to do is download the script, grab the WordPerfect 8.0 Downloadable Personal Edition (DPE), put them in a folder together and run the script, and voilá. I tried it on Ubuntu 20.04 and it works a treat so long as I run it as root. I have not seen any reports from anyone else about this, so it might be just my installation.

Read about it and get the script here.

Edit:

For more info, read the WordPerfect for Linux FAQ. This includes instructions on adding new fonts, fixing the MS Word import filter and some other useful info.

From the discussion on Hackernews and the FAQ, I should note that there are terms and conditions attached to the free  WP 8.0 DPE. It is only free for personal, non-commercial use, and some people interpret Corel's licence as meaning that although it was a free download, it is not redistributable. This means that if you did not obtain it from Corel's own Linux site (taken down in 2003) or from an authorised re-distributor (such as bundled with SUSE Linux up to 6.1 and early versions of Mandrake Linux, and the "WordPerfect for Linux Bible" hardcopy book, and a few resellers) then it is not properly licensed.

I dispute this: as multiple vendors did re-distribute it and Corel took no action, I consider it fair play. I also very much doubt that anyone will use this in a commercial setting in 2021.

If you are interested in the more complete WordPerfect 8.1, I note that it was included in Corel LinuxOS Deluxe Edition and that this is readily downloaded today, for example from the Internet Archive or from ArchiveOS. However, unless you bought a licence to this, this is not freeware and does not include a licence for use today.



r/linux - A blast from the past: native WordPerfect 8 for Linux running on Fedora 13. It still works! [pic]
(Image source.)

Postscript

If you really want a free full-function word-processor for DOS, which runs very well under DOSemu on Linux, I suggest Microsoft Word 5.5. MS made this freeware at the turn of the century as a free Y2K update for all previous versions of Word for DOS.

How to get it:
Microsoft Word for DOS — it’s FREE

Sadly, MS didn't make the last ever version of Word for DOS free. It only got one more major release, Word 6 for DOS. This has the same menu layout and the same file format as Word 6 for Windows and Word 6 for Mac, and also Word 95 in Office 95 (for Win95 and NT4). It's a little more pleasant to use, but it's not freeware — although if you own a later version of Word, the licence covers previous versions too.

Here is a comparison of the two:
Microsoft Word 5.5 And 6.0 In-depth DOS Review With Pics
liam_on_linux: (Default)
I like cheap Chinese phones. I am on my 3rd now: first an iRulu Victory v3, which came with Android 5.1. It was the first 6.5" phablet I ever saw: plasticky, not very hi-res, but well under €200, and it had dual SIMs, a µSD slot and a replaceable battery. No compass though.

Then a PPTV King 7, an amazing device for the time, which also came with Android 5, but half of it in Chinese. I rooted it and put CyanogenMod on it, getting me Android 6. Dual SIM or one SIM plus µSD, fast, with an amazing Retina-class screen.

Now, an Umidigi F2, which came with Android 10. Astonishing spec for about €125. Dual SIM + µSD, 128GB flash, fast, superb screen.

But with all of them, typically, you get 1 ROM update ever, normally the first time you turn it on, then that's it. The PPTV was a slight exception as a 3rd party ROM got me a newer version, but with penalties: the camera autofocus failed and all images were blue-tinged, the mic mostly stopped working, and the compass became a random-number generator.

They are all great for the money, but the chipset will never get a newer Android. This is normal. It's the price of getting a £150 phone with the specification of a £600+ phone.

In contrast, I bought my G/F a Xiaomi Mi A2. It's great for the money – a £200 phone – but it wasn't high-end when new. The build quality is good, though, the OS has little bloatware (because Android One), at 3 years old the battery still lasts a day, and there are no watermarks on photos, etc.

It had 3 major versions of Android (7, then 8, then 9) and then some updates on top.

This is what you get with Android One and a big-name Chinese vendor.

Me, I go for the amazing deals from little-known vendors, and I accept that I'll never get an update.

MediaTek are not one of those companies that maintain their Android ports for years. In return, they're cheap and the spec is good when they're new. They just move on to new products. Planet persuaded 'em to put Android 8 on it, and they deserve kudos for that, not complaining. It's an obsolete product; there's no reason to buy a Gemini when you could have a Cosmo, other than cost.

No, these are not £150 phones. They're £500 phones, because of the unique form-factor: a clamshell with the best mobile keyboard ever made.

But Planet Computers are a small company making an almost-bespoke device: i.e. in tiny numbers by modern standards. So, yes, it's made from cheap parts from the cheapest possible manufacturers, because the production run is thousands. A Chinese phone maker like Xiaomi would consider a production run of only 20 million units to be a failure. (Source: interview with former CEO.) 80 million is a niche product to them.

PlanetComp production is below prototype scale for these guys. It's basically a weird little niche hand-made item.

For that, £500 is very good. Compare with the F(x)tec Pro1, still not shipping a good 18 months after I personally enquired about one, which is about £750 – for a poorer keyboard and a device with fewer adaptations to landscape use.

This is what you get when one vendor -- Google -- provides the OS, another does the port, another builds products around it, and often, another sells the things. MediaTek design and build the SoC, and port one specific version of Android to it... a bit of work from the integrator and OEM builder, and there's your product.

This is one of the things you sometimes get if you buy a name-brand phone: OS updates. But the Chinese phones I favour are ½ to ⅓ of the price of a cheap name-brand Android and ¼ of the price of a premium brand such as Samsung. So I can replace the phone 2-3× more often and keep more current that way... and still be a lot less worried about having it stolen, or breaking it, or the like. Win/win, from my perspective.

Part of this is because the ARM world is not like the PC world.

For a start, in the x86 world, you can rely on there being system firmware to boot your OS. Most PCs used to use a BIOS; the One Laptop Per Child XO-1 used Open Firmware, like NewWorld PowerMacs. Now, we all get UEFI.

(I do not like UEFI much, as regular readers, if I have a plural number of those, may have gathered.)

ARM systems have no standard firmware. No bootloader, nothing at all. The system vendor has to do all that stuff themselves. And with a SoC (System On A Chip), the system vendor is the chip designer/fabricator.

(For instance, the Raspberry Pi's ARM cores are actually under the control of the GPU which runs its own OS -- a proprietary RTOS called ThreadX. When a RasPi boots, the *GPU* loads the "firmware" from SD card, which boots ThreadX, and then ThreadX starts the ARM core(s) and loads an OS into them. That's why there must be the special little FAT partition: that is what ThreadX reads. That's also why RasPis do not use GRUB or any other bootloader. The word "booting" is a reference to Baron Münchausen lifting himself out of a swamp by his own bootstraps. The computer loads its own software, a contradiction in terms: it lifts itself into running condition by its own bootstraps. I.e. it boots up.

Well, RasPis don't. The GPU boots, loads ThreadX, and then ThreadX initialises the ARMs and puts an OS into their memory for them and tells them to run it.)
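To make that concrete: mount the first, FAT, partition of a Pi's SD card and you will typically see something like this (file names from memory; they vary a bit between Pi models and OS versions):

bootcode.bin – the second-stage loader that the GPU's boot ROM fetches
start.elf    – the VideoCore firmware proper, i.e. the ThreadX blob
fixup.dat    – memory-split data used by start.elf
config.txt   – plain-text settings telling the firmware which kernel image to load
cmdline.txt  – the kernel command line that gets handed to Linux
kernel.img / kernel7.img / kernel8.img – the ARM kernel image(s)
*.dtb and overlays/ – device trees describing the hardware

Everything above the kernel line is consumed before any ARM code runs at all.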

So each and every ARM system (i.e. device built around a particular SoC, unless it's very weird) has to have a new native port of every OS. You can't boot one phone off another phone's Android.

A Gemini is a cheapish very-low-production-run Chinese Android phone, with an additional keyboard wired on, and the screen forced to landscape mode in software. (A real landscape screen would have cost too much.)

Cosmo piggybacks a separate little computer in the lid, much like the "touchbar" on a MacBook Pro is a separate little ARM computer running its own OS, like a tiny, very long thin iPad.

AstroSlide will do away with this again, so the fancy hinge should make for a simpler, less expensive design... Note, I say should...
liam_on_linux: (Default)
I want to get this down before I forget.

I use Gmail. I even pay for Gmail, which TBH really rankles, but I filled up my inbox, and I want that stuff as a searchable record (not a Zip archive or something). Any other way of storing dozens of gigs of email means either a lot more work, or moving to other paid storage (i.e. a lot of work and paying), so paying for more storage is the least-effort strategy.

I also use a few other web apps, mostly Google: calendar, contacts (I've been using pocket computers since the 1980s, so I have over 5000 people in my address book, and yes, I do want to keep them all), Keep for notes (because Evernote crippled their free tier and don't offer enough value to me to make it worth buying). I very occasionally use Google Sheets and Docs, but would not miss them.

But by and large, I hate web apps. They are slow, they are clunky, they have poor UI with very poor keyboard controls, they often tie you to specific browsers, and modern browsers absolutely suck and are getting worse with time. Firefox was the least worst, but they crippled it with Firefox Quantum and since then it just continues to degenerate.

I used Firefox because it was customisable. I had a vertical tab bar (one addon) – no, not tree-style, that's feature bloat – and it shared my sidebar with my bookmarks (another addon), and the bookmarks were flattened not hierarchical (another addon) because that's bloat too.

Some examples of browser bloat:

  • I have the bookmarks menu for hierarchical bookmark search & storage. I don't need it in the bookmarks sidebar too; that's duplication of functionality.

  • I don't need hierarchical tabs; I have multiple windows for that. So TreeStyleTabs is duplication of functionality – but it's an add-on, so I don't mind too much, so long as I have a choice.

  • I don't need user profiles; my OS has user profiles. That should be an add-on, too.

  • Why do all my browsers include the pointless bloat of web-developer tools? I and 99.99% of the Web's users never ever will need or use them. Why are they part of the base install?

I don't think Mozilla thought of this. I think the Mozilla dev team don't do extensive customisation of their browser. So when they went with multi-process rendering (a good thing) and Rust for safer rendering (also a good thing), they looked at XUL and all the fancy stuff it did, and they compared it with Chrome's (crippled, intrusive) WebExtensions, and decided to copy Chrome and ripped out their killer feature, because they didn't know how to use it effectively themselves.

We all have widescreens now. It's hard to get anything but widescreens. Horizontal toolbars are the enemy of widescreens. Vertical pixels are in short supply; horizontal ones are cheap. The smart thing to do is "spend" cheap, plentiful horizontal space on toolbars, and save valuable, "expensive" vertical space for what you are working on.

The original Windows 95 Explorer could do a vertical taskbar, which is superb on widescreens -- although they hadn't been invented yet. But the Cinnamon and Mate FOSS desktops, both copies of the Win95 design, can't do this. KDE does it so badly that for me it's unusable.

It's the Mozilla syndrome: don't take the time to understand what the competition does well. Just copy the obvious bits from the competition and hope.

Hiding stuff from view and putting it on keystrokes or mouse gestures is not the smart answer. That impedes discoverability and undoes the benefits of nearly 40 years of work on graphical user interfaces. It's fine to do that as well as good GUI design, but not instead of it. If your UI depends on things like putting a mouse into one corner, then slapping a random word in there as a visual clue (e.g. "Activities") is poor design. GUIs were in part about replacing screensful of text with easy, graphical cues.

Chrome has good things. Small ones. The bookmarks toolbar that only appears in new tabs and windows? That's a good thing. In 15 years, Firefox never copied that, but it ripped out its rich extensions system and copied Chrome's broken one.

Tools like these extensions are local tools that do things I need. Mozilla tried to copy Chrome's simplicity by ripping out the one thing that kept me using Firefox. They didn't look at what was good about what they had; they tried to copy what was good about what their competitor had. I used Firefox because it wasn't Chrome.

For now, I switched to Waterfox, because I can keep my most important XUL extensions.

I run 2 browsers because I need some web apps. I use Chrome for Google's stuff, and Waterfox for everything else. Why them? Because they are cross-platform. I use macOS on my home desktop, Linux on my work desktop and my laptops. I rarely use Windows at all, but both browsers work there too. I don't care if Safari has killer features, because it doesn't work on Windows (any more) or Linux, so it's no use to me. Any Apple tool gets replaced with a cross-platform tool on my Macs.

I also use Thunderbird, another of its own tools Mozilla doesn't understand. It's my main work email client, and I use it at home to keep a local backup of my Gmail. But I don't use it as my home email client, partly because I switch computers every day. My email is IMAP so it's synched – all clients see the same email. But my filters are not. IMAP doesn't synch filters. I have over 100 filters for home use and a few dozen for work use. I get hundreds of emails every day, and I only see half a dozen in my inbox because the rest are filtered into subfolders.

We have a standard system for cross-platform email storage (IMAP), that replaced minimal mail retrieval for local storage (POP3), but nobody's ever extended it to try to compete with systems, such as Gmail or MS Outlook and Exchange Server, that offer more, such as rules, workflow, rich calendaring, rich contacts storage. And so local email clients are fading away and more and more companies use webmail.

Why have web apps prospered so when rich local tools can do more? Because only a handful of companies grok that rich local tools can be strong selling points and keep enhancing them.

I used to use Pidgin for all my chat stuff. Pidgin talked to all the chat protocols: AIM (therefore Apple iMessage too), Yahoo IM, MSN IM, ICQ, etc. Now, I use Franz, because it can talk to all the modern chat systems: Facebook Messenger, WhatsApp, Slack, Discord, etc. It's good: a big help. But it's bloated and sluggish, needs gigs of RAM, and each tab has a different UI inside it, because it's an Electron app. Each tab is a dedicated web browser.

Pidgin, via libpurple plugins, can do some of these – FB via a plugin FB keeps trying to block; Skype; Telegram, the richest and most mature of the modern chat systems. But not all, so I need Franz too. Signal, because it's a nightmare cluster of bad design by cryptology and cryptocurrency nerds, doesn't even work in a web browser.

Chat systems, like email, are a failure, both of local rich app design to keep up, and of protocol standardisation in order to compete with proprietary solutions.

Email is not a hard problem. This is a more than fifty-year-old tool.

Chat is not hard either. This is a more than forty-year-old tool.

But groupware is different. Groupware builds on these but adds forms, workflow, organisation-wide contacts and calendar management. Groupware never got standardised.

Ever see Engelbart's "mother of all demos"? Also more than half a century ago. It included collaborative file editing. But it was a server-based demo, because microcomputers hadn't been invented yet. So, yes, for some things, like live editing by multiple users, a web app can do things local apps can't easily do.

But for most things, a native app should always be able to outcompete a web app. Web apps grew because they exploited a niche: if you have lots of proprietary protocols, then that implies lots of rich proprietary clients for lots of local OSes. Smartphones and tablets made that hard – lots of duplication of functionality in what must be different apps because different OSes need different client apps – so the functionality was moved into the server, enabled by Javascript.

Javascript was and is a bad answer. It was a hastily-implemented, poorly-specified language, which vastly bloats browsers, needs big expensive just-in-time-compiling runtimes and locks lower-powered devices out of the web.

The web is no longer truly based on HTML and formatting enhancements such as CSS. Now, the Web is a delivery medium for Javascript apps.

Why?

Javascript happened because the protocols for common usage of internet communications were inadequate.
This meant one company could obtain a stranglehold via proprietary communications tools.

That was a bad thing.

FOSS xNix arose because of standardisation.

xNix is a shorthand for Unix. Unix™ does not mean "an OS based on AT&T UNIX code". It has not meant this since 1993, when Novell gave the Unix trademark to the Open Group. Since then, "Unix" means "any OS that passes Open Group Unix compatibility testing." Linux has passed these tests, more than once: both the Inspur K-UX and Huawei EulerOS distros passed them. This means that Linux is a Unix these days. Accept it and move on; it is a matter of legal fact and record. The "based on AT&T code" thing has not been true for nearly thirty years. It is long past time to let it go.

The ad-hoc, informal IBM PC compatibility standard meant that any computer that wanted to be sold as "IBM compatible" had to run IBM software, not just run MS-DOS. All the other makes of MS-DOS computer couldn't run MS Flight Simulator and Lotus 1-2-3, so they died out. Later, that came to include powerful 32-bit 80386DX computers, which allowed 32-bit OSes to come to the PC. Later still, the 80386SX made 32-bit computers cheap and widespread, and that allowed 32-bit OSes to become mainstream. Some, like FreeBSD, stuck to their own standards (e.g. their own partitioning schemes), and permissive licenses meant people could fork them or take their code into proprietary products. Linux developed on the PC and from its beginning embraced PC standards, including PC partitioning, PC and Minix filesystems and so on... and its strict licence largely stopped people building Linux into proprietary software. So it throve in ways no BSD ever did.

Because of standards. Standards, even informally-specified ad-hoc ones, are good for growth and very good for FOSS.

The FOSS world does have basic standards for email retrieval and storage, but they're not rich enough, which means proprietary groupware systems had an edge and thrived.

Web apps are the third-party software-vendor world's comeback against Windows, Office and Exchange. They let macOS and iOS and Android and ChromeOS (and, as a distant outlier, other Linux distros) participate in complex workflows. Smartphones and tablets and ChromeOS have done well because their local apps are, like Franz's tabs, mostly embedded single-app web browsers.

Web apps use a loose ad-hoc standard – web browsers and Javascript – to offer the rich functionality previously dominated by one vendor's proprietary offerings.

But they delivered their rich cross-platform functionality using a kludge: an objectively fairly poor, mostly-interpreted language, in browsers.

Even the browser makers themselves haven't learned the lessons of rich local clients.

Standards too need to evolve and move and keep up with proprietary tech, and they haven't. XMPP and Jabber were pretty good for a while, and originally, FB Messenger and Google Chat were XMPP-based... but they didn't cover some use cases, so they got extended and replaced.

I've read many people saying Slack is just enhanced IRC: multi-participant chat. XMPP doesn't seem to handle multi-participant chat very well. And that's a moving target: Slack adds formatting, emoticons, animated attachments, threading...

The FOSS answer should be to make sure that open standards for this stuff keep up and can be implemented widely, both by web apps and by local clients. Standards-based groupware. There are forgotten standards such as NNTP that are relevant to this.

But standards go both ways. Old-fashioned internet-standard email – plain text, inline quoting, and so on – has compelling advantages that "business-style" email cannot match. Rich clients (local or web-based) need to enforce this stuff and help people learn to use it. Minimal standards that everyone can use are good for accessibility, good for different UI access methods (keyboard+mouse, or keyboard + touchscreen, or tablet + keyboard, or smartphone).

Richer local apps aren't enough. Chandler showed that. Standards, so that multiple different clients can use the same functionality, are needed too.

What I am getting at here is that there is important value in defining minimum viable functionality and ensuring that it works in an open, documented way.

The design of Unix largely predates computer graphics and totally predates GUIs. The core model is "everything is a file", that files contain plain text, markup is also plain text, and that you can link tools together by exchanging text:

ls -la | grep ^d | less
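(That one-liner takes a detailed directory listing, keeps only the lines beginning with "d" – i.e. the subdirectories – and pages the result: three unrelated tools cooperating purely by passing text to each other.)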

This is somewhat obsolete in the modern era of multimedia and streaming, but the idea was sound back then. It's worth remembering that Windows now means Windows NT, and the parents of the core Windows NT OS (not the Win32 GUI part) were the processor-portable OS/2 3 and DEC VMS. VMS had a richer content model than Unix, as it should – its design is nearly a decade younger.

Dave Cutler, lead architect of both VMS and NT, derided the Unix it's-all-just-text model by reciting "Get a byte, get a byte, get a byte byte byte" to the tune of the finale of Rossini's William Tell Overture.

Defined protocols for communication mean that different apps from different teams can interoperate – just as an email client receives messages from an email server and so can download your mail; this then evolved into clients being able to ask the server to show you your mail while the messages stay on the server. This is immensely powerful, and now we are neglecting it.

We can't force companies to open their protocols. We can reverse-engineer them and so write unauthorised clients, like many libpurple plugins for Pidgin, but that's not ideal. What we need to do is look at the important core functionality and make sure that FOSS protocols can do that too. Email clients ask for POP version 3, not just any old POP. IMAP v4 added support for multiple mailboxes (i.e. folders). I propose that it's time for something like IMAP v5, adding server-side filters... and maybe IMAP v6, that grandfathers in some part of LDAP for a server-side contact list too. And maybe IMAP v7, which adds network calendar support.

Got a simple email client that doesn't do calendaring? No problem, stick with IMAP 4. So long as the server and client can negotiate a version both understand, it's all good.
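That negotiation mechanism already exists, incidentally: the first thing a client normally asks a server is CAPABILITY, and it then only uses what the server advertises. You can watch it happen by hand with openssl – the hostname below is a placeholder, the capability list varies from server to server, the a1/a2 lines are what you type, and the * and OK lines are the server's replies:

openssl s_client -quiet -crlf -connect imap.example.com:993
* OK IMAP server ready
a1 CAPABILITY
* CAPABILITY IMAP4rev1 IDLE NAMESPACE QUOTA UIDPLUS
a1 OK Capability completed.
a2 LOGOUT

A hypothetical IMAP v5 or v6 would simply advertise the extra features in that same list, and older clients would carry on ignoring them.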

Ditto XMPP: extend that so it supports group chats.

NNTP and RSS have relevance to Web fora and syndication.

What's needed is getting together and talking, defining minimum acceptable functionality, and then describing a standard for it. Even if it's an unofficial standard, not ratified by any body, it can still work.

But by the same token, I think it's time to start discussing how we could pare the Web back to something rich and useful but which eliminates Javascript and embedded apps inside web pages. Some consensus for something based on most of HTML5 and CSS, and make it a fun challenge to see how much interactivity programmers can create without Javascript.

What's the minimum useful working environment we could build based on simple open standards that are easily implemented? Not just email + IRC, but email with basic text formatting – the original *bold* _underline_ /italic/ ~strikethrough~ that inspired Markdown, plus shared local and global address books, plus local and global calendars, plus server-side profile storage – so when you enter your credentials, you don't just get your folders, you also get your filters, your local and organisation-wide address books, your local and org-wide calendar, too. If you wish, via separate apps. I don't particularly want them all in one, myself.

Ditto shared drives: safe, encrypted drive mounts and shares over the internet. I can't use my Google Drive or my MS OneDrive from Linux. Why not? Why isn't there some FOSS alternative mechanism?

Is there any way to get out of the trap of proprietary apps running on top of open-ish standards (Web 2.0) and help rich local software get more competitive? I am not sure. But right now, we seem to be continuing up a blind alley and everyone's wondering why the going is so slow...


The year of Linux on the desktop came, and the Linux industry didn't notice. It's ChromeOS. Something like 30 million ChromeBooks sold in 2020. No Linux distro ever came close to that many units.

But ChromeOS means webapps for everything.

I propose it's time for a concerted effort at a spec for a set of minimal clean local apps and open protocols to connect them to existing servers. As a constraint, set a low ceiling: e.g. something that can run on some $5 level device, comparable to the raw power of a Raspberry Pi Zero: 1GB RAM and 1GHz of CPU in 1 core. Not enough for web apps, but more than enough for a rich capable email client, for mounting drives, for handling NNTP and internet news.

Something that can be handed out to the couple of billion people living in poverty, with slow and intermittent Internet access at best. This isn't just trying to compete with entrenched businesses: it should be philanthropic, too.
liam_on_linux: (Default)
Someone asked me if I could describe how to perform DOS memory allocation. It's not the first time, either. It's a nearly lost art. To try to illustrate that it's a non-trivial job, I decided to do something simpler: describe how DOS allocates drive letters.

I have a feeling I've done this before somewhere, but I couldn't find it, so I tried writing it up as an exercise.

Axioms:

  • DOS only understands FAT12, FAT16 and in later versions FAT32. HPFS, NTFS and all *nix filesystems will be skipped.

  • We are only considering MBR partitioning.

So:

  • Hard disks support 2 partition types: primary and logical. Logical drives must go inside an extended partition.

  • MBR supports a legal max of 4 primaries per drive.

  • Only 1 primary partition on the 1st drive can be marked "active" and the BIOS will boot that one _unless_ you have a *nix bootloader installed.

  • You can only have 1 extended partition per drive. It counts as a primary partition.

  • To be "legal" and to support early versions of NT and OS/2, only 1 DOS-readable primary partition per drive is allowed. All other partitions should go inside an extended partition.

  • MS-DOS, PC DOS and NT will only boot from a primary partition. (I think DR-DOS is more flexible, and I don't know about FreeDOS.)

Those are our "givens". Now, after all that, how does DOS (including Win9x) assign drive letters?

  1. It starts with drive letter C.

  2. It enumerates all available hard drives visible to the BIOS.

  3. The first *primary* partition on each drive is assigned a letter.

  4. Then it goes back to the start and starts going through all the physical hard disks a 2nd time.

  5. Now it enumerates all *logical* partitions on each drive and assigns them letters.

  6. So, all the logicals on the 1st drive get sequential letters.

  7. Then all the logicals on the next drive.

  8. And so on through all logicals on all hard disks.

  9. Then drivers in CONFIG.SYS are processed and if they create drives (e.g. DRIVER.SYS) those letters are assigned next.

  10. Then drivers in AUTOEXEC.BAT are processed and if they create drives (e.g. MSCDEX) those are assigned next.

So you see... it's quite complicated. :-)
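A quick worked example, purely to illustrate the ordering: take two hard disks, each with one primary partition plus an extended partition holding two logical drives. DOS letters them like this:

Disk 1, primary    → C:
Disk 2, primary    → D:
Disk 1, logical 1  → E:
Disk 1, logical 2  → F:
Disk 2, logical 1  → G:
Disk 2, logical 2  → H:
CONFIG.SYS drivers, then MSCDEX → I: onwards

Which is also why adding a second hard disk to an existing DOS setup used to shuffle every letter after C: and break things.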

Assigning upper memory blocks is more complicated.

NT changes this and I am not 100% sure of the details. From observation:

  • NT 3.x did the same, but with the addition of HPFS and NTFS drives (HPFS support lasted up to 3.51).

  • NT 4 does not recognise HPFS at all but the 3.51 driver can be retrofitted.

  • NT 3, 4 & 5 (Win2K) *require* that partitions are in sequential order.

Numbers may be missing but you can't have, say:
[part № 1] [part № 2] [part № 4] [part № 3]

They will blue-screen on boot if you have this. Linux doesn't care.

Riders:

  1. The NT bootloader must be on the first primary partition on the first drive.

  2. (A 3rd party boot-loader can override this and, for instance, multi-boot several different installations on different drives.)

  3. The rest of the OS can be anywhere, including a logical drive.

NT 6 (Vista) & later can handle it, but this is because MS rewrote the drive-letter allocation algorithm. (At least I think this is why but I do not know for sure; it could be a coincidence.)

Conditions:

  • The NT 6+ bootloader must be in the same drive as the rest of the OS.

  • The bootloader must be on a primary partition.

  • Therefore, NT 6+ must be in a primary partition, a new restriction.

  • NT 6+ must be installed on an NTFS volume; therefore it can no longer dual-boot with DOS on its own, and a 3rd-party bootloader is needed.

NT 6+ just does this:

  1. The drive where the NT bootloader is becomes C:

  2. Then it allocates all readable partitions on drive 1, then all those on drive 2, then all those on drive 3, etc.

So just listing the rules is quite complicated. Turning it into a step-by-step how-to guide is significantly longer and more complex. As an example, the much simpler process of cleaning up Windows 7/8.x/10 if preparing to dual-boot took me several thousand words, and I skipped some entire considerations to keep it that "short".

Errors & omissions excepted, as they say. Corrections and clarifications very welcome. To comment, you don't need an account — you can sign in with any OpenID, including Facebook, Twitter, UbuntuOne, etc.
liam_on_linux: (Default)
The story of why A/UX existed is simple but also strangely sad, IMHO.

Apple wanted to sell to the US military, who are a huge purchaser. At that time, the US military had a policy that they would not purchase any computers which were not POSIX compliant – i.e. they had to run some form of UNIX.

So, Apple did a UNIX for Macs. But Apple being what they are, they did it right – meaning they integrated MacOS into their Unix: it had a Mac GUI, making it the most visually-appealing UNIX of its time by far, and it could network with MacOSs and run (some) MacOS apps.

It was a superb piece of work, technically, but it was a box-ticking exercise: it allowed the military to buy Macs, but in fact, most of them ran MacOS and Mac apps.

For a while, the US Army hosted its web presence on classic MacOS. It wasn't super stable, but it was virtually unhackable: there is no shell to access remotely, however good your 'sploit. There's nothing there.

The irony and the sad thing is that A/UX never got ported to PowerPC. This is at least partly because of the way PowerPC MacOS was done: MacOS was still mostly 68K code and the whole OS ran under an emulator in a nanokernel running underneath it. This would have made A/UX-style interoperability, between a PowerPC-native A/UX and 68K-native MacOS, basically impossible without entirely rewriting MacOS in PowerPC code.

But around the same time that the last release of A/UX came out (3.1.1 in 1995), Apple was frantically scrabbling around for a new, next-gen OS to compete with Win95. If A/UX had run on then-modern – i.e. PowerPC- and PCI-based – Macs by that time, it would have been an obvious candidate. But it didn't and it couldn't.

So Apple spent a lot of time flailing around with Copland and Gershwin and Taligent and OpenDoc, wasted a lot of money, and in the end merged with NeXT.

The irony is that in today's world, spoiled with excellent development tools, everyone has forgotten that late-1980s and early-to-mid 1990s dev tools were awful: 1970s text-mode tools for writing graphical apps.

Apple acquired NeXT because it needed an OS, but what clinched the deal was the development tools (and the return of Jobs, of course.) NeXT had industry-leading dev tools. Doom was written on NeXTs. The WWW was written on NeXTs.

Apple had OS choices – modernise A/UX, or buy BeOS, or buy NeXT, or get bought and move to Solaris or something – but nobody else had Objective-C and Interface Builder, or the NeXT/Sun foundation classes, or anything like them.

The meta-irony being that if Apple had adapted A/UX, or failing that, had acquired Be for BeOS, it would be long dead by now, just a fading memory for middle-aged graphical designers. Without the dev tools, they'd never have got all the existing Mac developers on board, and never got all the cool new apps – no matter how snazzy the OS.

And we'd all be using Vista point 3 by now, and discussing how bad it was on Blackberries and clones...
liam_on_linux: (Default)
Wow. This is possibly the nerdiest talk I have ever seen, but it is very relevant to my own interests, especially my FOSDEM 2018 talk.

The talk takes very quick looks at Symbolics Genera and OpenGenera and then compares it to Interlisp-D – or as they compare them, "west coast and east coast takes on the Lisp Machine context". That's a powerful comment right there. They draw comparisons between Interlisp-D and Smalltalk, although I do not see a lot of direct resemblance myself, but that is an interesting point. Another interesting factoid is that Interlisp-D is now open source, and efforts are afoot to modernise it.

Then it moves on to BTRON, which I'd never met before. BTRON is still available. It's the desktop iteration of the TRON family, which is doubtless by far the most widely-used operating system you've never heard of. iTRON is used in millions of embedded roles in Japanese consumer electronics and there are also real-time and server products. It has tens to hundreds of millions of instances out there.

And it concludes with IBM i, formerly known as IBM OS/400 for the AS/400 minicomputer range. This is the only surviving single-level store OS in the world (as far as I know; I welcome corrections!) and although it's very much a niche server OS it therefore is also a pointer to a future of PMEM-only computers which just have nonvolatile RAM and dispense with the 1960s concept of "disk drives" and "second level storage" – i.e. the concept behind every other OS you've ever heard of, of any form whatsoever.


Direct link if the embedded video doesn't work.
liam_on_linux: (Default)
I just finished doing up an old white MacBook from 2008 (note: not MacBook Pro) for Jana's best friend, back in Brno.

I hit quite a few glitches along the way. Partly for my own memory, partly in case anyone else hits them, here are the work-arounds I needed...

BTW, I have left the links visible and in the text so you can see where you're going. This is intentional.

Picking a distribution and desktop

As the machine is maxed out with 4GB of RAM, and only has a fairly feeble Intel GMA X3100 GPU, I went for Xfce as a lightweight desktop that's very configurable and doesn't need hardware OpenGL. (I just wish Xfce had the GNOME 2/MATE facility to lock controls and panels into place.)

Xubuntu (18.10, later upgraded to 19.04) had two peculiar and annoying errors.

  1. On boot, NumLock is always on. This is a serious snag because a MacBook has no NumLock key, nor a NumLock indicator to tell you, and thus no easy way to turn it off. (Fn+F6 twice worked on Xubuntu 18/19, but not on 20.04.) I found a workaround: https://help.ubuntu.com/community/AppleKeyboard#Numlock_on_Apple_Wireless_Keyboard

  2. Secondly, Xubuntu sometimes could not bring the wifi connection up. Rebooting into Mac OS X and then warm-booting into Xubuntu fixed this.

For this and the webcam issue below, I really strongly recommend keeping a bootable Mac OS X partition available and dual-booting between both Mac OS X and Linux. OS X Lion (10.7) is the latest this machine can run. Some Macs – e.g. MacBook Pro and iMac models –  from around this era can run El Cap (10.11) which is probably still somewhat useful. My girlfriend's MacBook Pro is a 2009 model, just one year younger, and it can run High Sierra (10.13) which still supports the latest Firefox, Chrome, Skype, LibreOffice etc without any problem.

By the way: there are "hacks" to install newer versions of macOS onto older Macs which no longer support them. Colin "dosdude1" Mistr has a good list, here: http://dosdude1.com/software.html

However quite a few of these have serious drawbacks on a machine this old. For instance, my 2008 MB might be able to run Mountain Lion (10.8) but probably nothing newer, and if it did, I would have no graphics acceleration, making the machine slow and maybe unstable. Similarly, my 2011 Mac Mini maxes out at High Sierra. Mojave (10.14) and Catalina (10.15) apparently work well, but Big Sur (11) again has no graphics acceleration and is thus well-nigh unusable. But if you have a newer machine and the reports are that it works well as a hack, this may make it useful again.

I had to reinstall Lion. Due to this, I found that the MacBook will not boot Lion off USB; I had to burn a DVD-R. This worked perfectly first time. There are some instructions here:
https://www.lifewire.com/install-os-x-lion-using-bootable-dvd-2260333

Beware, retail Mac OS X DVDs are dual-layer. If the image is more than 5GB, it may not fit on an ordinary single-layer DVD-R.

If I remember correctly, Lion was the last version of Mac OS X that was not a free download. However, that was 10 years and 8 versions ago, so I hope Apple will forgive me helping you to pirate it. A Bittorrent can be found here.

Incidentally, a vaguely-current browser for Lion is ParrotGeeks Firefox Legacy. I found this made the machine much more useful with Lion, able to access Facebook, Gmail etc. absolutely fine, which the bundled version of Safari cannot do. If you disable all sharing options in OS X and only use Firefox, the machine should be reasonably secure even today. OS X is immune to all Windows malware. Download Firefox Legacy from here:
https://parrotgeek.com/fxlegacy.html

However, saying all that, Linux Mint does not suffer from either of these Xubuntu issues, so I recommend Linux Mint Xfce. I found Mint 20 worked well and the upgrade to Mint 20.1 was quick and seamless.

Installation

If you make a 2nd partition in Disk Utility while you're (re-)installing Mac OS X, you can just reformat that as ext4 in the Linux setup program. This saves messing around with Linux disk partitioning on a UEFI MacBook, which I am warning you is not like doing it on a PC. (I accidentally corrupted the MacBook's hard disk trying to copy a Linux partition onto it with gparted, then remove it using fdisk. That's why I had to reinstall. Again, I strongly recommend doing any partitioning with Mac OS X's Disk Utility, and not with Linux.) All Intel Macs have UEFI, not a BIOS, and so they all use only GPT partitioning, not MBR.

I set aside 48GB for Lion and all the rest for Mint. (Mint defaults to using a swapfile in the root partition, just like Ubuntu. This means that 2 partitions are enough. I was trying to keep things as simple as possible.)

If you use Linux fdisk, or Gparted, to look at the disk from Linux, remember to leave the original Apple EFI System Partition ("ESP") alone and intact. You need that even if you single-boot Linux and nothing else.

Wifi doesn't work out of the box on Mint. You need to connect to the Internet via Ethernet, then open the Software and Drivers settings program and install the Broadcom drivers. That was enough for me; more info is here:
https://askubuntu.com/questions/55868/installing-broadcom-wireless-drivers
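(If you'd rather do it from a terminal, the package the Driver Manager normally pulls in on Ubuntu/Mint is the one below – though a different Broadcom chip may need the b43 firmware instead, so treat this as the usual case rather than a guarantee:)

sudo apt update
sudo apt install bcmwl-kernel-source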

While connected with a cable, I also did a full update:

sudo -s
apt update
apt full-upgrade -y
apt autoremove -y
apt purge
apt clean


Glitches and gotchas

Startup or shutdown can take ages, or freeze the machine entirely, hanging during shutdown. The fan may spin up during this. The fix is a simple edit to add an extra kernel parameter to GRUB, described here:
https://forums.linuxmint.com/viewtopic.php?t=284960

(Aside: hoping to work around this, I installed kexec-tools for faster reboots. It didn't work. I don't know why not. Perhaps it's something to do with the machine using UEFI, not a BIOS. I also installed the Ubuntu Hardware Enablement stack with its newer kernel, in case that helped, but it didn't. It didn't seem to cause any problems, though, so I left it.)

GRUB shows an error about not being able to find a Mok file, then continues because SecureBoot is disabled. This is non-fatal but there is a fix here:
https://askubuntu.com/questions/1279602/ubuntu-20-04-failed-to-set-moklistrt-invalid-parameter

While troubleshooting the Mok error above, I found that the previous owner of this machine had Fedora on it at some point, and even though I removed and completely reinstalled OS X Lion in a new partition, the UEFI boot entry for Fedora was still there and was still the default. I removed it using the instructions here:
https://www.linuxbabe.com/command-line/how-to-use-linux-efibootmgr-examples

NOTE: I suggest you don't set a boot sequence. Just set the ubuntu entry as the default and leave it at that. The Apple firmware very briefly displays a no-bootable-volume icon (a folder with a question mark on it) as it boots. I think this is why, when I used efibootmgr to set Mint as the default then OS X, it never loaded GRUB but went straight into OS X.
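For reference, the efibootmgr incantations I mean are roughly these; the entry numbers here are made up, so check the output of the first command for your own:

sudo efibootmgr -v             # list the boot entries and their numbers
sudo efibootmgr -b 0003 -B     # delete a stale entry, e.g. the leftover Fedora one
sudo efibootmgr -o 0000        # set just one entry (the "ubuntu" one here) as the default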

(Mint have not renamed their UEFI bootloader; it's still called "ubuntu" from the upstream distro. I believe this means that you cannot dual-boot a UEFI machine with both Ubuntu and Mint, or multiple versions of either. This reflects my general impression that UEFI is a pain in the neck.)

The Apple built-in iSight Webcam requires a firmware file to work under Linux, which you must extract from Mac OS X:
https://help.ubuntu.com/community/MactelSupportTeam/AppleiSight
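(If I remember rightly, the Ubuntu/Mint package that does the extraction is called isight-firmware-tools; on installation it asks where to find the AppleUSBVideoSupport file, which you copy over from the OS X partition:)

sudo apt install isight-firmware-tools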

Both Xubuntu and Mint automatically install entries in the GRUB boot menu for Mac OS X. For Lion, there are 2: one for the 32-bit kernel, one for the 64-bit kernel. These will not work. To boot into macOS, hold down the Opt key as the machine powers on; this will display the firmware's graphical boot-device selection screen. The Linux partition is described as "EFI Boot". Click on "macOS" or whatever you called your Mac HD partition. If you want to boot into Linux, just power-cycle it and then leave it alone – the screen goes grey, then black with a flashing cursor, then the GRUB menu appears and you can pick Linux. The Linux partition is not visible from macOS and you can't pick it in the Startup Disk system preference-pane.

Post-install fine-tuning

I also added the ubuntu-restricted-extras package to get some nicer web fonts, a few handy codecs, and so on. Remember when installing this that you must use the cursor keys and Enter/Return to say "yes" to the Microsoft free licence agreement. The mouse won't work – use your keyboard. I also added Apple HFS support, so that Linux can easily manipulate the Mac OS X partition.

I installed Google Chrome and Skype, direct from their vendors' download pages. Both of these add their own repositories to the system, so they will automatically update when the OS does. I also installed Zoom, which does not have a repo and so won't get updated. This is an annoyance; we'll have to look at that later if it becomes problematic. I also added VLC because the machine has a DVD drive and this is an easy way to play CDs and DVDs.

As this machine and the old Thinkpad I am sending along with it are intended for kids to use, I installed the educational packages from UbuntuEd. I added those that are recommended for pre-school, primary and secondary schoolchildren, as listed here:
https://discourse.ubuntu.com/t/ubuntu-education-ubuntued/17063

I enabled unattended-upgrades (and set the machine to install updates at shutdown) as described here:
https://www.cyberciti.biz/faq/set-up-automatic-unattended-updates-for-ubuntu-20-04/
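The short version, if you skip the install-at-shutdown part of that guide, is just:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades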

While testing the webcam, I discovered that Mint doesn't include Cheese, so I installed that too; this one command covers it along with the restricted extras, HFS support and VLC mentioned above:
sudo apt install -y ubuntu-restricted-extras hfsprogs vlc cheese
liam_on_linux: (Default)

My occasional project to resurrect DR-DOS and make something vaguely useful from it continues, and in the spirit of "release early, release often", I thought that someone somewhere might enjoy having a look at some of my work-in-progress snapshots.

So while there is nothing vastly new here, building a bootable DOS VM is not completely trivial without what is now some very old knowledge, so I thought these might help someone.

The story so far...

In the OpenDOS Enhancement Project, Udo Kuhnt took Caldera's FOSS release of DR-DOS 7.01 (which they had renamed OpenDOS) and added in FAT32 support and some other things. Caldera spin-off Lineo (later DeviceLogics) implemented these in later, closed-source versions of DOS, but they were not officially FOSS. They also used bits of FreeDOS and were later withdrawn. DeviceLogics has since gone out of business.

Udo's disk images are on Archive.org but they aren't bootable. I've made bootable images you can download. I have a bootable VM of DR-DOS 7.01-08 but I need to clean it up and give it some spit and polish. I also added back the ViewMax GUI from DR-DOS 6.

Meantime, what I have uploaded here are three Zip-compressed VirtualBox VDI files. A VDI is the hard disk of a VirtualBox VM. These contain FAT16 hard disks.

The quick way to use them:


  1. Download the image.

  2. Run VirtualBox. Create a new VM. Call it (e.g.) "DR-DOS 6". You must have "DOS" in the name for Virtualbox to correctly configure the new VM for DOS! Otherwise you must manually do that part.

  3. When you get to the "create or add hard disk stage", stop!

  4. Switch to the file manager. Unzip the file. Put it in the newly-created VM's directory.

  5. Go back to VirtualBox. Pick "add an existing hard disk". Browse to the file you just moved into place. Click it, and click "Add".

  6. Now you're back at the "choose a disk" dialog. Pick the newly-added one.

  7. Finish VM setup.

Now you can start the new DOS VM and enjoy.
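(If you prefer the command line, the same setup can be scripted with VirtualBox's own VBoxManage tool. This is just a sketch – the VM name and the path to the unzipped VDI are whatever you chose above:)

VBoxManage createvm --name "DR-DOS 6" --ostype DOS --register
VBoxManage modifyvm "DR-DOS 6" --memory 32
VBoxManage storagectl "DR-DOS 6" --name IDE --add ide
VBoxManage storageattach "DR-DOS 6" --storagectl IDE --port 0 --device 0 --type hdd --medium /path/to/the-unzipped.vdi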
liam_on_linux: (Default)
Something interesting that has come out of Caldera's release of the original DR GEM code as FOSS 20 years ago, and I totally missed it...



This is a great ~40min intro to EmuTOS.

Nowadays there are two different all-FOSS OSes for STs, compatibles & ST emulators.

I knew about AFROS and have played with it -- it's a compilation of various ST GEM enhancements and replacement modules and so on, mostly based on the FreeMINT multitasking OS, to create a complete multitasking GEM OS for advanced STs.

It mainly targets the ARANYM emulator.

The one bit that wasn't free was basically the ST ROM – TOS itself. TOS shared ancestry with both DR's CP/M-68K and what later became DR-DOS. A very rough description is a DOS-like kernel and drivers for the ST hardware, with floppy drive support, just enough to launch the GEM desktop. No command line.

The AFROS project wrote their own ROM, and back when I was actively looking at ARANYM, they described it as something like "just enough ROM to boot our OS, and not very compatible with actual ST software".

Well what I didn't know until this evening is that the EmuTOS project has taken on a life of its own and they released v1.0 about 6 months ago. It's a complete single-tasking GEM replacement for STs: in other words, a whole replacement ROM. It replaces the BIOS and OS kernel and all of the GEM stack, and that part is based on Caldera's GEM code.

They have something that is built with GCC, can just about fit into the smallest ST ROM chip (192kB) and is broadly compatible with Atari TOS 3. For later models it can go into a bigger ROM chip, which gives you a command line and even multi-language support.

Or you can boot it from floppy, or you can load it as an app from real Atari TOS if you have enough memory. You can even boot it on Amigas, with some restrictions currently.

I'm really impressed. I found this very interesting viewing.

Source etc. is on GitHub. There's a slightly dated Wikipedia article too.

There are or were other ST OSes around. A popular one was called MagiC, and at least part of this has been made FOSS recently. It came with emulators to allow it to run on macOS and Windows. Snag: it's largely in assembler, apparently.

But EmuTOS is slightly different from things like AFROS, FreeMINT or MagiC, inasmuch as it's able to run on original unmodified STs (and the Amiga!) and can be freely distributed with emulators.

A company called Atari still exists and still holds the old copyrights, so the original Atari ROMs are not strictly distributable.

Incidentally, I found this via the m68k.info page, which hosted another presentation this weekend, on the Sinclair QL OS descendants Minerva and SMSQ/E.



Not really any relevance to GEM etc. but may be of interest to folk – it was to me.

I found that because I was asking if there were any 16-bit homebrew computers these days, and was told about the amazing Kiwi 68K.
liam_on_linux: (Default)

  1. Oh, so GNOME is Reinventing tabs.

    Oh yay! Another reason not to use GNOME! Thanks, guys! I had enough already, but every additional one makes the decision easier and easier!

    • Titlebars. I liked title bars. You merged them with toolbars. I don’t want that. No thank you. But it’s not a choice, it’s mandatory! Well, thanks folks, but your desktop isn’t mandatory, either!

    • Menu bars. Menu bars are fast and efficient. You got rid of them & patronise me like I’m on some crippled smartphone app, where a tiny, hard-to-find hamburger menu is enough for the paltry selection of choices in a crippled mobile app. GNOME brings this to the desktop, so now, my decisions are easier! Not only do I not want to run GNOME, I don’t want to run any GNOME app, either!

    • You got rid of all the menus, but for some weird reason, you kept the menu bar. That’s a centimetre of my widescreen gone forever. Gee, thanks! Oh, and the clock is front and centre. Why? Do you want me to stamp in and out of my desktop with a card clock, as well? At least I can have app indicators for Skype and Dropbox and stuff ins– oh, you’re taking those away too? But they were useful! Not to you? But isn’t the customer always right? I’m not paying for it so I’m not a customer? Well if I am not paying, why do you want a gig of RAM? I’m paying for that! Can I have it back, please? No? Oh. Guess I will go use Xfce then.

    • Hey, where are my desktop icons? What do you mean they’re a legacy feature? They were useful!

    • At least the virtual desktops down the right were handy. Shame they are always on the right of my primary screen, not of the whole desktop, but I get that you don’t want me to use multiple screens. I don’t know why, but– oh, you’ve moved them? Where to? To the bottom? But I have a 16:9 monitor! I have nearly twice as much width as depth! Can I move it where I want? No? Why not? Whaddya mean you know what I want better than me? Hey, I got news for you, buddy… Meet my friends Mr rm -rf and Mr fdisk. They got a message for you.



liam_on_linux: (Default)

Back in December, as part of the #DOScember event across Reddit and other places, I converted the IBM PC DOS 2000 (that is, PC DOS 7.01) virtual disk that was included with Connectix (later Microsoft) VirtualPC into VirtualBox format. You can still download VirtualPC if you wish — here's the 2004 version. The final version was 2007 SP1 as far as I know.


I made the PC DOS VDI available for download and wrote a blog post announcing it.


This was an unmodified installation of PC DOS 7.01, complete with non-working Connectix "guest additions" for DOS. I just converted the format.


Well, I have now gone a very small way towards improving that image for VirtualBox users.



  • I've tweaked the memory management for VirtualBox (I am using the latest version at the time of writing, 6.1.18), so now DOS has Upper Memory Blocks available and both EMS and XMS memory – and a whopping 618kB of conventional memory available (that is a lot for DOS!);

  • I have installed IBM's own IDE CD driver, plus MSCDEX and the SmartDrive disk cache;

  • I've enabled a mouse driver;

  • I've enabled ANSI.SYS so that you can change the number of rows shown on-screen. 
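(For the curious, the memory-management and driver part of that boils down to a handful of CONFIG.SYS and AUTOEXEC.BAT lines roughly like the ones below. Treat this as a sketch from memory, not a copy of the actual files in the image – driver names and paths may differ slightly:)

CONFIG.SYS:
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE RAM
DOS=HIGH,UMB
DEVICEHIGH=C:\DOS\IBMIDECD.SYS /D:IDECD001
DEVICEHIGH=C:\DOS\ANSI.SYS

AUTOEXEC.BAT:
LH C:\DOS\MSCDEX.EXE /D:IDECD001
LH C:\DOS\MOUSE.COM
LH C:\DOS\SMARTDRV.EXE

HIMEM gives you XMS, EMM386 with the RAM switch gives you EMS plus the Upper Memory Blocks, DOS=HIGH,UMB moves most of DOS out of conventional memory, and DEVICEHIGH/LH load the drivers into the UMBs – which is how you end up with that much conventional memory free.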


So here is a compressed disk image, 2.4MB in size. Just unzip it (it is only 11MB uncompressed) and create a default DOS VM in VirtualBox — if you call it something like "PC DOS 2000", VirtualBox will spot the word "DOS" in the name and configure the VM for DOS, with 32MB of RAM and so on. The Virtual Disk Image is called PC_DOS_2000.VDI and contains a FAT16-format C drive of 2GB.


Share and enjoy.
