liam_on_linux: (Default)
2025-05-20 07:17 pm

I am so sick and tired of "AI"

Spent much of today getting a Live AROS USB key working, which wasn't trivial... it needed a USB 3 key, which I had to go and buy specially.
 
But after that... I am so tired. I want to write about new stuff in software, but there seems to be no area not contaminated with "AI".

I get the depressing feeling that computing is just being eaten up by bloody "AI". Virtually every press release I've seen this week has been AI. Mozilla adopts new AI search engine. Red Hat releases RHEL 10 with built in AI chat bot to help clueless PFYs admin the thing. Windows bloody Notepad has AI built in. AI in Google Docs. AI boosters in my mentions telling me and my friends that AI is helping them read antique books or whatever. 
 
Is there anywhere outside of retrocomputing that doesn't have AI in it?
 
The emperor has no clothes. LLM bots are not artificial and they are not intelligent. Not at all, not even a little bit, not even if you redefine the words "artificial" or "intelligent".  

"AI" is not "AI". The liars and the shills redefined what people used to mean by "AI" as "AGI", artificial _general_ intelligence, so they could market their stupid plagiarism bots as AI. 

AGI is not real. It doesn't exist. It will probably never exist. Hell, at the rate humanity is going, we won't be able to build new computers any more by 2050 and the survivors at the poles will be nostalgic for electricity.

AI is a scam. It's a hoax. It's fake news. There is no AI, and what is being sold as AI is such an incredibly poor fake that it is profoundly disheartening that so many people are stupid enough to be deceived into thinking it is AI.

The blockchain is a scam. Everything to do with it is a scam. 

Alternative medicine is a scam. All of it. There is no such thing. If it's called "alternative" that means it's been proved not to work.

All religions are scams. No exceptions. 

People have made billions from selling scams for my whole lifetime.

Meanwhile, other scams, like plastics being recyclable -- they aren't, it's a lie -- or biofuels -- also a lie -- mean our civilisation is on the verge of collapse. We are killing the planetary ecosphere that keeps us alive with plastic and pollution from burning stuff. We have to stop burning everything, stop cutting down trees, and stop making all forms of single-use products. No more jet planes. No more private cars. No more foreign holidays. We can't afford it.

And with all that we are almost certainly still doomed.

But I kind of want to see my industry go first, if all it's got now is AI.

Snag is, I need a job. I have Ada to pay for.


liam_on_linux: (Default)
2025-05-20 03:38 pm

My little USB-DOS project

Since I know there are some folks who read this but may not read my Register stuff.

It's here:

https://github.com/lproven/usb-dos

I added a "Buy me a coffee" tip jar effort. :-)

I am considering updates. Robert Sawyer was kind enough to send a list of suggestions and I should act on them.

I have also received requests for PC-Write. Copies are still out there.

Any other writer-oriented apps that anyone would like to see, or other functionality?

I discovered the Reg editors did sneak in a mention on the end of this at the start of the year:

https://www.theregister.com/2024/12/23/svardos_drdos_reborn/

I wrote up how I built it later:

https://www.theregister.com/2025/04/26/dos_distraction_free_writing/
 
liam_on_linux: (Default)
2025-05-03 11:42 am

On becoming living history

It is one of the oddest things in computing, to me – a big kid heading for 60 years old, but one who still feels quite young and enjoys learning and exploring – that the early history of Linux, a development that came along mid-career for me, and indeed of Unix, which was taking shape when I was a child, is now mysterious lost ancient history to those working in the field.

It’s not that long ago. It’s well within living memory for lots of us who are still working with it in full time employment. Want to know why this command has that weird switch? Then go look up who wrote it and ask him. (And sadly yes there’s a good chance it’s a “him”.)

Want to know why Windows command switches are one symbol and Unix ones another? Go look at the OSes the guys who wrote them ran before. They are a 2min Google away and emulators are FOSS. Just try them and you can see what they learned from.

This stuff isn’t hieroglyphics. It’s not carved on the walls of tombs deep underground.

The reason that we have Snap and Flatpak and AppImage and macOS .app is all stuff that happened since I started my first job. I was there. So were thousands of others. I watched it take shape.

But now, I write about how and why and I get shouted at by people who weren’t even born yet. It’s very odd.

To me it looks like a lot of people spend thousands of developer-hours flailing away trying to rewrite stuff that I deployed in production in my 30s and they have no idea how it’s supposed to work or what they’re trying to do. They’re failing to copy a bad copy of a poor imitation.

Want to know how KDE 6 should have been? Run Windows 95 in VirtualBox and see how the original worked! But no, instead, the team flops and flails adding 86 more wheels to a bicycle and then they wonder why people choose a poor-quality knock-off of a 2007 iPhone designed by people who don’t know why the iPhone works like that.

I am, for clarity, talking about GNOME >3. And the iPhone runs a cut down version of Mac OS X Tiger’s “Dashboard” as its main UI. 
liam_on_linux: (Default)
2025-04-04 02:33 pm

How come Linux replaced Unix? What happened to proprietary Unix?

The personal histories involved are highly relevant and they are one of the things that get forgotten in boring grey corporate histories.

Bill Gates didn't get lucky: he got a leg up from mum & dad, and was nasty and rapacious and fast, and clawed his way to industry dominance. On the way he climbed over Gary Kildall of Digital Research and largely obliterated DR.

 

Ray Noorda of Novell was the big boss of the flourishing Mormon software industry of Utah. (Another big Utah company was WordPerfect.)

Several of them were in the Canopy Group:

https://en.wikipedia.org/wiki/Canopy_Group

Ray Noorda owned the whole lot, via NFT Ventures Inc., which stood for "Noorda Family Trust".

https://en.wikipedia.org/wiki/Ray_Noorda

Caldera acquired the Unix business from SCO, as my current employers reported a quarter of a century ago:

 

https://www.theregister.com/2000/08/02/caldera_goes_unix_with_sco/

Noorda managed to surf Gates's and Microsoft's wave. Novell made servers, with their own proprietary OS, and workstations, with their own OS, and the network in between. As Microsoft s/w on IBM-compatible PCs became dominant, Novell strategically killed off its workstations first and pivoted to network cards for PCs and client software for DOS. Then it ported its server OS to PC servers, and killed its server hardware. Then it was strong and secure and safe for a while, growing fat on the booming PC business.

But Noorda knew damned well that Gates resented anyone else making good money off DOS systems. In the late 1980s, when DR no longer mattered, MS screwed IBM because IBM fumbled OS/2. MS got lucky with Windows 3.

MS helped screw DEC, headhunting DEC's head OS man Dave Cutler and his core team and giving him the leftovers of the IBM divorce: "Portable OS/2", the CPU-independent version. Cutler turned Portable OS/2 into what he had planned to turn DEC VMS into: a cross-platform Unix killer. It ended up being renamed "OS/2 NT" and then "Windows NT".

Noorda knew it was just a matter of time 'til MS had a Netware-killer. He was right. So, he figured 2 things would help Novell adapt: embrace the TCP/IP network standard, and Unix.

And Novell had cash.

So, Novell bought Unix and did a slightly Netwarified Unix: UnixWare.

He also spied that the free Unix clone Linux would be big and he spun off a side-business to make a Linux-based Windows killer, codenamed "Corsair" -- a fast-moving pirate ship.

Corsair became Caldera and Caldera OpenLinux. The early version was expensive and had a proprietary desktop, but it also had a licensed version of Sun's WABI. Before WINE worked, Caldera OpenLinux could run Windows apps.

Caldera also bought the rump of DR so it also had a good solid DOS as well: DR-DOS.

Then Caldera were the first corporate Linux to adopt the new FOSS desktop, KDE. I got a copy of Caldera OpenLinux with KDE from them. Without a commercial desktop it was both cheaper and better than the earlier version. WABI couldn't run much but it could run the core apps of MS Office, which was what mattered.

So, low end workstation, Novell DOS; high end workstation, Caldera OpenLinux (able to connect to Novell servers, and run DOS and Windows apps); legacy servers, Netware; new open-standards app servers, UnixWare.

Every level of the MS stack, Novell had an alternative. Server, network protocol, network client/server, low end workstation, high end workstation.

Well, it didn't work out. Commercial Unix was dying; UnixWare flopped. Linux was killing it. So Caldera snapped up the dying PC Unix vendor, SCO, and renamed itself "the SCO Group". Now that its corporate ally, the also-Noorda-owned-and-backed Novell, owned the Unix source code, the SCO Group tried to kill Linux by showing it was based on stolen Unix code – and later, when that failed, that it contained stolen Unix code.

Caldera decided DOS wasn't worth having and open sourced it. (I have a physical copy from them.) Lots of people were interested. It realised DOS was still worth money, reversed course, and made the next version non-FOSS again. It also offered me a job. I said no. I like drinking beer. Utah is dry.

The whole sorry saga of the SCO Group and the Unix lawsuits was because Ray Noorda wanted to outdo Bill Gates.

Sadly Noorda got Alzheimer's. The managers who took over tried to back away, but bits of Noorda's extended empire started attacking things which other bits had been trying to exploit. It also shows the danger and power of names.

Now the vague recollection in the industry seems to be "SCO was bad".

No: SCO were good guys and SCO Xenix was great. It wasn't even x86-only: an early version ran on the Apple Lisa, alongside 2 others.
 

The SCO Group went evil. SCO was fine. SCO != SCO Group.

 

Caldera was an attempt to bring Linux up to a level where it could compete with Windows, and it was a good product. It was the first desktop Linux I ran as my main desktop OS for a while. 

Only one company both owned and sold a UNIX™ and had invested heavily in Linux and had the money to fight the SCO Group: IBM.

IBM set its lawyers on the SCO Group lawsuit and it collapsed.

Xinuos salvaged the tiny residual revenues to be had from the SCO and Novell UnixWare product lines.

Who owns the Unix source code? Micro Focus, because it owns Novell.

Who sells actual Unix? Xinuos.

Who owns the trademark? The Open Group. "POSIX" (a name coined by Richard Stallman) became UNIX™.

Who owns Bell Labs? AT&T spin-off Lucent, later bought by Alcatel, later bought by Nokia.

Was Linux stolen? No.

Does anyone care now? No.

Did anyone ever care? No, only Ray Noorda with a determined attempt to out-Microsoft Microsoft, which failed. 
liam_on_linux: (Default)
2025-04-02 11:30 am

Why FOSS OSes often don't have power management as good as proprietary ones

(Especially Haiku.)

It may seem odd but it's not.

Haiku is a recreation of a late-1990s OS. News for you: in the 1990s and before, computers didn't do power management.

The US government had to institute a whole big programme to get companies to add power management.

https://en.wikipedia.org/wiki/Energy_Star

Aggressive power management is only a thing because silicon vendors lie to their customers. Yes, seriously.

From the mid-1970s for about 30 years, adding more transistors meant computers got faster. CPUs went from 4-bit to 8-bit to 16-bit to 32-bit. Then there was a pause while they gained onboard memory management (Intel 80386/Motorola 68030 generation), then scalar execution and onboard hardware floating point (80486/68040 generation), then onboard L1 cache (Pentium), then superscalar execution and near-board L2 cache (Pentium II), then onboard L2 (Pentium III). Then they ran out of ideas to spend CPU transistors on, so the transistor budget went on RAM instead, meaning we needed 64-bit CPUs to track it.

The Pentium 4 was an attempt to crank this as high as it would go by running as fast as possible and accepting a low IPC (instructions per clock). It was nicknamed the fanheater. So Intel US pivoted to Intel Israel's low-power laptop chip with aggressive power management. Voilà, the Core and then Core 2 series.

Then, circa 2006-2007, big problem. 64-bit chips had loads of cache on board, they were superscalar, decomposing x86 instructions into micro-ops and resequencing them for optimal execution with branch prediction, they had media and 3D extensions like MMX, SSE and SSE2, they were 64-bit with lots of RAM, and there was nowhere left to spend the increasing transistor budget.

Result, multicore. Duplicate everything. Tell the punters it's twice as fast. It isn't. Very few things are parallel.

With an SMP-aware OS, like NT or BeOS or Haiku, 2 cores make things a bit more responsive but no faster.

Then came 3 and 4 cores, and onboard GPUs, and then heterogeneous cores, with "efficiency" and "performance" cores... but none of this makes your software run faster. It's marketing.
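
To put a rough number on that claim: Amdahl's law says the speedup you get from n cores is capped by the fraction of the work that can actually run in parallel. A back-of-the-envelope sketch in Python -- my own illustration, assuming a generously parallel desktop workload of 50%, not a measurement of any real program:

# Amdahl's law: speedup = 1 / ((1 - p) + p/n), where p is the parallel fraction
def amdahl(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8):
    print(cores, "cores:", round(amdahl(0.5, cores), 2), "x")
# Roughly 1.33x, 1.6x and 1.78x -- nowhere near "twice as fast" per doubling of cores.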

You can't run all the components of a modern CPU at once. It would burn itself out in seconds. Most of the chip is turned off most of the time, and there's an onboard management core running its own OS, invisible to user code, to handle this.

Silicon vendors are selling us stuff we can't use. If you turned it all on at once, instant self-destruction. We spend money on transistors that must spend 99% of the time turned off. It's called "dark silicon" and it's what we pay for.

In real life, chips stopped getting Moore's Law speed increases 20 years ago. That's when we stopped getting twice the performance every 18 months.

All the aggressive power management and sleep modes are to help inadequate cooling systems stop CPUs instantly incinerating themselves. Hibernation is to disguise how slowly multi-gigabyte OSes boot. You can't see the slow boot if it doesn't boot so often.

For 20 years the CPU and GPU vendors have been selling us transistors we can't use. Power management is the excuse.

Update your firmware early and often. Get a nice fast SSD. Shut it down when you're not using it: it reboots fast.

Enjoy a fast responsive OS that doesn't try to play the Win/Lin/Mac game of "write more code to use the fancy accelerators and hope things go faster".   

liam_on_linux: (Default)
2025-03-12 02:12 pm

Fun with the Sharp Zaurus SL-5500 (and an unexpected OpenBSD link)

The Zaurus SL-5500 was an early, tiny, Linux pocket computer-cum-PDA. I had one. Two, in fact. They got stolen from my house. :-(

It had a CF card slot, so you could even remove your storage card and insert a CF Wifi card instead, and have mobile Internet in your pocket, 20 years ago!  

But if you did, you got a free extra with a wifi adaptor – a battery life of about 15-20 minutes.

It was clever, but totally useless. With the wifi card in, you couldn’t have external storage any more, so there was very little room left.

I had to check: https://uk.pcmag.com/first-looks/30821/sharp-zaurus-sl-5500

64MB RAM, 16MB flash, and a 320x240 screen. Or rather 240x320 as it was portrait.

The sheer amount of thought and planning that went into the Linux-based Zaurus was shown by the fact that the tiny physical keyboard had no pipe symbol. Bit of a snag on an xNix machine, that.

Both mine were 2nd hand, given to me by techie mates who’d played with them and got bored and moved on. I'm told others got better battery life on Wifi. Maybe their tiny batteries were already on the way out or something.

Fun side-note #1: I do not remember the battery pack looking like this one, though. I feel sure I would have noticed.

https://www.amazon.co.uk/Battery-Zaurus-SL-5500-900mAh-Li-ion/dp/B007K0DRIU

Fun side-note #2: both came with Sharp’s original OS, version 1.0. I had an interesting time experimenting with alternative OS builds, new ROMs etc. Things did get a lot better, or at least less bad, after the first release. But the friend who gave me my first unit swore up and down that he’d updated the ROM. I can’t see any possible mechanism for flash memory to just revert to earlier contents on its own, though.

With replacement OS images you had to decide how to partition the device’s tiny amount of storage: some as read-only for the OS, some as read-write, some as swap, etc. The allocations were fixed and if you got it wrong you had to nuke and reload.

This would have been much easier if the device had some form of logical volume management, and dynamically-changeable volume sizes.

Which is a thought I also had repeatedly around 2023-2024 when experimenting with OpenBSD. It uses an exceptionally complex partitioning layout, and if you forcibly simplify it, you (1) run up against the limitations of its horribly primitive partitioning tool and (2) reduce the OS’s security.

I have got just barely competent enough with OpenBSD that between writing this in early 2022 and writing this in late 2024, two and a half years later, I went from “struggling mightily just to get it running at all in a VM” to “able with only some whimpering and cursing to get it dual-booting on bare metal with XP64, NetBSD, and 2 Linux distros.”

But it’s still a horrible horrible experience and some form of LVM would make matters massively easier.

Which is odd because I avoid Linux LVM as much as possible. I find it a massive pain when you don’t need it. However, you need it for Linux full-disk encryption, and one previous employer of mine insisted upon that.

In other words: I really dislike LVM, and I am annoyed by Linux gratuitously insisting on it in situations where it should not strictly speaking be needed – but in other OSes and other situations, I have really wanted it, but it wasn’t available.


liam_on_linux: (Default)
2025-03-03 12:03 pm

Basic MS-DOS memory management for beginners

From a Reddit post

     A very brief rundown:

  1. If you are using Microsoft tools, you need to load the 386 memory manager, emm386.exe, in your CONFIG.SYS file.

  2. But, to do that, you need to load the XMS manager, HIMEM.SYS, first.

  3. So your CONFIG.SYS should begin with the lines:

DEVICE=C:\WINDOWS\HIMEM.SYS
DEVICE=C:\WINDOWS\EMM386.EXE
DOS=HIGH,UMB

4. That's the easy bit. Now you have to find free Upper Memory Blocks to tell EMM386 to use.

5. Do a clean boot with F5 or F8 -- telling it not to process CONFIG.SYS or run AUTOEXEC.BAT. Alternatively boot from a DOS floppy that doesn't have them.

6. Run the Microsoft Diagnostics, MSD.EXE, or a similar tool such as Quarterdeck Manifest. Look at the memory usage between 640kB and 1MB. Note, the numbers are in hexadecimal.

7. Look for unused blocks that are not ROM or I/O. Write down the address ranges.

8. An example: if you do not use monochrome VGA you can use the mono VGA memory area: 0xB000-0xB7FF.

9. One by one, tell EMM386 to use these. First choose if you want EMS (Expanded Memory Specification) support or not. It is useful for DOS apps, but not for Windows apps.

10. If you do, you need to tell it:

DEVICE=C:\WINDOWS\EMM386.EXE RAM

And set aside 64kB for a page frame, for example by putting this on the end of the line:

FRAME=E000

Or, tell it not to use one:

FRAME=none

11. Or disable EMS:

DEVICE=C:\WINDOWS\EMM386.EXE NOEMS

12. Important: add these parameters one at a time, and reboot and test, every single time, without exception.

13. Once you've told it which you want, you now need to tell it which RAM blocks to use, e.g.

DEVICE=C:\WINDOWS\EMM386.EXE RAM FRAME=none I=B000-B7FF

Again, reboot every time to check. Any single letter wrong can stop the PC booting. Lots of testing is vital. Every time, run MSD and look at what is in use or is not in use. Make lots of notes, on paper.

14. If you find EMM386 is trying to use a block that it mustn't you can eXclude it:

DEVICE=C:\WINDOWS\EMM386.EXE RAM X=B000-B7FF

The more blocks you can add, the better.

15. After this -- a few hours' work -- now you can try to populate your new UMBs.

16. Device drivers: do this by prefixing lines in CONFIG.SYS with DEVICEHIGH instead of DEVICE.

Change:

DEVICE=C:\DOS\ANSI.SYS

To:

DEVICEHIGH=C:\DOS\ANSI.SYS

17. Try every driver, one by one, rebooting every time.

18. Now move on to loadable Terminate and Stay Resident (TSR) programs. Prefix lines that run a program in AUTOEXEC.BAT with LH, which is short for LOADHIGH.

Replace:

MOUSE

With:

LH MOUSE

Use MSD and the MEM command -- MEM /c /p -- to identify all your TSRs, note their sizes, and load them all high.
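
To see how the pieces fit together, here is the sort of thing you might end up with after all that testing. The memory ranges and the drivers loaded high below are purely illustrative -- yours will depend entirely on what MSD shows on your machine.

CONFIG.SYS:

DEVICE=C:\WINDOWS\HIMEM.SYS
DEVICE=C:\WINDOWS\EMM386.EXE RAM FRAME=none I=B000-B7FF X=C800-CFFF
DOS=HIGH,UMB
DEVICEHIGH=C:\DOS\ANSI.SYS

AUTOEXEC.BAT:

LH MOUSE
LH DOSKEY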

This is a day or two's work for a novice. I could do it in only an hour or two and typically get 625kB or more base memory free, and I made good money from this hard-won skill.   


liam_on_linux: (Default)
2025-01-01 08:58 pm

DOS live USB image with tools for writers

 I finally got round to publishing a version 1.0 of my long-running hobby project: a bootable DOS live USB image with tools for writers, providing a distraction-free writing environment.

github.com/lproven/usb-dos

This is very rushed and the instructions are incomplete. Only FAT16 for now; FAT32 coming real soon now.

liam_on_linux: (Default)
2024-12-31 02:53 pm

Why are hobbyist 21st century 8-bit computers so constrained?

I learned about a DIY machine that was new to me: the Cody Computer.

It looks kind of fun, but once again, it does make me wonder why it’s so constrained. Extremely low-res graphics, for instance. TBH I would have sneered at this for being low-end when I was about 13 years old. (Shortly before I got my first computer, a 48K ZX Spectrum.)

Why isn’t anyone trying to make an easy home-build high-end eight-bit? Something that really pushes the envelope right out there – the sort of dream machine I wanted by about the middle of the 1980s.

In 1987 I owned an Amstrad PCW9512:

  • 4MHz Z80A
  • 512 kB RAM, so 64kB CP/M 3 TPA plus something over 400kB RAMdisc as drive M:
  • 720 x 256 monochrome screen resolution, 90 x 30 characters in text mode

Later in 1989 I bought an MGT SAM Coupé:

  • 6MHz Z80B
  • 256 kB RAM
  • 256 x 192 or 512 x 192 graphics, with 1/2/4 bits per pixel

Both had graphics easily outdone by the MSX 2 and later Z80 machines, but those had a dedicated GPU. That might be a reach, but given the limits of a 64 kB memory map, maybe a good one.

Another aspirational machine was the BBC Micro: an expandable, modular OS called MOS; an excellent BASIC, BBC BASIC, with structured flow and named procedures with local variables, enabling recursive programming, and inline assembly language, so if you graduated to machine code you could just enter and edit it in the BASIC line editor. (Which was weird, but powerful – for instance, 2 independent cursors, one source and one destination, eliminating the whole “clipboard” concept.) Resolution-independent graphics, and graphics modes that cheerfully used most of the RAM, leaving exploitation as an exercise for the developer. Which they rose to magnificently.

The BBC Micro supported dual processors over the Tube interface, so one 6502 could run the OS, the DOS, and the framebuffer, using most of its 64 kB, and Hi-BASIC could run on the 2nd 6502 (or Z80!) processor, therefore having most of 64 kB to itself.

In a 21st century 8-bit, I want something that comfortably exceeds a 1980s 8-bit, let alone a 1990s 8-bit.

(And yes, there were new 8-bit machines in the 1990s, such as the Amstrad CPC Plus range, or MSX Turbo R.)

So my wish list would include…

  • At least 80-column legible text, ideally more. We can forget analog TVs and CRT limitations now. Aim to exceed VGA resolutions. 256 colours in some low resolutions but a high mono resolution is handy too.
  • Lots of RAM with some bank-switching mechanism, plus mechanisms to make this useful to BASIC programmers not just machine code developers. A RAMdisc is easy. Beta BASIC on the ZX Spectrum 128 lets BASIC declare arrays kept in the RAMdisc, so while a BASIC program is limited to under 30 kB of RAM, it can manipulate 100-odd kB of data in arrays. That’s a simple, clever hack.
  • A really world-class BASIC with structured programming support.
  • A fast processor (double-digit megahertz doesn’t seem too much to ask).
  • Some provision for 3rd party OSes. There are some impressive ones out there now, such as SymbOS, Contiki, and Fuzix. GEOS is open source now, too.
liam_on_linux: (Default)
2024-12-21 01:57 pm

Plan 9 is a bicycle

Someone on Reddit asked how easy it was to do "simple stuff" on 9front.

This is not a Linux distribution. It is an experimental research OS.

Look, all Linux distros are the same kernel with different tools slapped on top. Mostly the GNU tools and a bunch of other stuff. Linux is one operating system.

Linux is a GPL implementation of a simple monolithic 1970s Unix kernel. All the BSDs are BSD-licensed implementations of a simple monolithic 1970s Unix kernel.

Taking a high-level view they are different implementations of the same design.

So it's very easy to port the same apps to all of them. All run Firefox and Thunderbird and LibreOffice. They are slightly different flavours of a single design.

They are all just Unixes.

Solaris and AIX and HP-UX are the same design. All just Unixes.

Now we get to outliers. Some break up the kernel into different programs that work together. This is called a microkernel design. Mac OS X/macOS, Minix 3, QNX, Coyotos, KeyKOS. Still pretty much Unixes but weird ones.

The big names among them, like macOS, still run the same apps. Firefox, LibreOffice, etc.

Still UNIX.

9front is a distro of Plan 9. Plan 9 is NOT a Unix.

A small team -- originally 2 guys, Dennis Ritchie and Ken Thompson -- designed Unix and C.

It caught on. Lots of people built versions of it. Some of them changed the design a bit. Doesn't really matter. It is all just Unix.

Modern Unix takes the core design and adds a million layers of junk on top, implemented by well-meaning people who just had jobs to do and stuff to get working, so now it's huge and vastly complex... but it's just Unix.

It's an ancient tradition to compare computers to vehicles. Unix is a car. Lots of people make cars. It's surprisingly hard to define what a "car" is but it's a box on wheels, probably with a roof (but maybe not), probably with windows (but maybe not), on wheels (probably 4, maybe 3, could be 6) with an engine.

All Unixes are types of car. You can't take the gearbox of a Ford and just bolt it into a Honda. Won't fit. But you can take a Ford and take a Honda and put 4 people in it and drive it on the same road to the same shop and buy stuff and carry it home.

Windows is... not a car, but it's close. Let's say it's a bus. Still a box on wheels, still carries people (but lots of them.) You can buy a bus to yourself and drive to the shops, with 40 friends instead of 4, but you wouldn't want to. It's big and slow and hard to drive and expensive. But you could do.

Plan 9 is not Unix. Plan 9 is what the guys who invented Unix did next.

Plan 9 is not a car.

You are only thinking of cars. We are not talking about cars any more.

Plan 9 is, say, a bicycle. (I know, bicycles came before cars. Sue me, it's a metaphor not a history lecture.)

It still has wheels. It still goes places. You can sit on it, and ride it, and go hundreds of miles. You can go to the shops and do your shopping and take it home, but no, 4 of you can't. You can't put the shopping in the boot. It doesn't have a boot. You need a backpack or panniers.

Stop thinking of cars. We have left car-land behind. There are a hundred other types of "things that have wheels and go" that aren't cars. There are motorbikes and roller skates and skateboards and go-karts and racing cars and unicycles and roller blades and cross-country-skiing roller-trainers and wheely shoes and loads more.

You're asking what kind of car a bicycle is. It isn't.

> I'm just wondering how easy it would be to load this on a cheap laptop and get up and running.

It's doable. A few hours' work, maybe.

> Does it require a lot of tweaking to get simple things working?

You do not define "simple things". But downthread you do.

You will never usefully browse the web on 9front. It doesn't really have a web browser. There are some kinda sorta things that do 1% of what a mainstream web browser does but you won't like them.

It doesn't really have "apps". Nobody ever wrote any. (With rounding errors. There is a tiny bit of 3rd party software, but you won't recognise anything.)

Plan 9 is a bicycle. It can take you places but you can't drive it if you only know how to drive cars. Never mind that it has a manual gear shift and there are 27 gears in 2 different gearing systems and no clutch and you need to memorise all the combinations you need to climb a hill and speed along the flat.

Also, you know, you need to pedal.

There's no engine.

"I want to write Markdown text and print it to a laser printer."

Right, well, you'll need to find a dozen separate tools, learn how to work them, and learn how to link them together... Or, you'll need to write your own.

Plan 9 is not the end point of the story, either.

Plan 9 was a step on the road to Inferno. Inferno is not a car and it's not a bicycle. It is, in extremely vague and general terms, a cross between an operating system, and Java, and the JVM. All in one.

It's... a pedal-powered aeroplane. You can't ride it to the shops but it is in its way even more amazing than a bicycle... it can fly.

What you call "simple stuff" is car stuff. You can't do it. It is not as "simple" as you think it is.
liam_on_linux: (Default)
2024-12-05 11:27 am

Raw Computer Power (with apologies to Guy Kewney)

First Unix box I ever touched, in my first job, here on the Isle of Man, 36 years ago.

It was a demo machine, but my employers, CSL Delta, never sold any AFAIK. It sat there, running but unused, all day every day. Our one had a mono text display on it, and no graphics ability that I know of.

I played around, I wrote "Hello, world!" in C and compiled it and it took me a while to find that the result wasn't called "hello" or "hello.exe" or anything but "a.out".

If I'd had the knowledge then, I'd have written a Mandelbrot generator or something and had it sit there cranking them out -- but I was not skilled enough. It was not networked to our office network, but it had a synchronous modem allowing it to access some IBM online service which we used to look up tech support info.

Synchronous modem comms, or sync serial comms in general, are very different indeed from the familiar Unix asynchronous serial comms used on RS-232 connections for terminals and things. Sync comms are a mainframe thing, more than a microcomputer thing.

https://wiki.radioreference.com/index.php/Asynchronous_vs_Sy...

That modem was a very specialised bit of kit that cost more than a whole PC -- when PCs cost many thousands each -- and it couldn't talk to anything else except remote IBM mainframes, basically.

The RT/PC felt more powerful than a high-end IBM PC compatible of the time, but only marginally. It had a bit of the feeling of Windows NT about 6-7 years later: when you were typing away and you did something demanding, the hard disk cranked up and you could hear, and even feel the vibrations, that the machine was working hard, but it stayed responding to you the same as ever. It's a bit hard to describe because all modern OSes work like this, but it was not normal in the 1980s.

Then, OSes didn't multitask or they did it badly, and things like hard disk controllers of the time took over the CPU completely when reading or writing. So on MS-DOS, or PC-DOS or OS/2 1.x or DR Concurrent DOS, when you typed commands or interacted with programs, the computer responded right away as fast as it could. But if you gave a command that made the machine work hard, like asked for a print preview or a spell-check of a multi-page document, or sorted a spreadsheet of thousands of rows, or asked it to draw a graph from hundreds of points of data, the computer locked up on you. The hard disks span up, you heard the read/write heads chattering away as it worked, but it was no longer listening to you and anything you pressed or typed was lost. Or, worse, buffered, and when it was done, then it tried to do those commands, and quite possibly did something very much not what you wanted, like deleted loads of work.

(Decades later something similar happened with cooling fans, and now that's going away too. But with hearing the fans spin up, there's a hysteresis: it takes time, and tens of billions of CPU cycles, for the CPU to heat up, so the fans come on later, and maybe stay on for a while after it's done. A PC locking up as the hard disk went crazy was immediate.)

The RT/PC was a Unix box. It didn't do that. No idea how much RAM or disk ours had: maybe 4MB if that, perhaps 100-200MB disk. A lot for 1988! But if I did, say,

cd /
ls -laR

... then it would sit there for several minutes with the HDD chuntering away, listing files to screen... but what was remarkable was that you could switch to another virtual console and it stayed perfectly responsive as if nothing were happening. That hard disk was SCSI of course, so it didn't use loads of CPU under heavy disk load.

The machine always felt a little slower, a little less responsive than DOS, but it never slowed down even when working hard. You had the feeling of sitting behind the wheel of a Rolls Royce with some massive engine there, but pulling a massive weight, so it didn't accelerate or brake fast, but could just keep accelerating slowly and steadily 'til you ran out of road... and you'd make an impressively large crater.

We sold a lot of IBM PS/2 machines with Xenix, and it was a Unix too and felt the same... but limited by the puny I/O buses of even high-end 1980s IBM PS/2 kit, so it sssslllloooowwwweeeedddd way down doing that big directory listing.

Whereas contemporary PC OSes responded quicker but just locked up when working hard. This included Windows 2, 3.x, 95, 98 and ME, and also OS/2 1.x, 2.x, and Warp. The kernels did not support multithreading and background I/O very well, so it didn't matter that the hardware didn't either.

Then Windows NT 4.0 came along, and it did. Suddenly the hardware mattered. But if you had a Pentium 1 machine, with an Intel Triton chipset on the motherboard, there was an innocent looking driver floppy in the box. On that was a busmastering DMA driver for the Intel PIIX EIDE controller. Install it on Win9x and it could see a CD-ROM on the PATA bus. Handy but not world-shattering.

Install it on an NT machine and once the kernel booted, the sound of the hard disk changed because the kernel was now using busmastering to load stuff from disk into RAM. As the machine booted the mouse pointer kept moving smoothly, with no jerkiness. When the login screen appeared it blinked onto the screen and you could press Ctrl-Alt-Del and start typing and your username appeared slowly but smoothly. The stars representing your password, the same.

It suddenly had that "massive computer power being used to keep the machine responsive" feeling of an RT/PC the decade before. Like that PIIX driver had made the machine's £100 cheapo IDE disk into a £400 SCSI disk.
liam_on_linux: (Default)
2024-10-12 11:39 am

Outliner notes


Word is a nightmare.

«
RT ColiegeStudent on Twitter 
 
using microsoft word
 
*moves an image 1 mm to the left*
 
all text and images shift. 4 new pages appear. in the distance, sirens.
»

But there's still a lot of power in that festering ball of 1980s code.

In 6 weeks in 2016, I drafted, wrote, illustrated, laid out and submitted a ~330-page technical maintenance manual for a 3D printer, solo, entirely in MS Word from start to finish. I began in Word 97 & finished it in Word 2003, 95% of the time running under WINE on Linux... and 90% of the time, using it in Outline Mode, which is a *vastly* powerful writer's tool to which the FOSS world has nothing even vaguely comparable.

But as a novice... Yeah, what the tweet said. It's a timeless classic IMHO.

Some Emacs folks told me Org-mode is just as good as an outliner. I've tried it. This was my response.

Org mode compared to Word 2003 Outline View is roughly MS-DOS Edlin compared to Emacs. It's a tiny fragmentary partial implementation of 1% of the functionality, done badly, with a terrible *terrible* UI.

No exaggeration, no hyperbole, and there's a reason I specifically said 2003 and nothing later.

 

I've been building and running xNix boxes since 1988. I have often tried both Vi and Emacs over nearly 4 decades. I am unusual in terms of old Unix hands: I cordially detest both of them.

The reason I cite Word 2003 is that that's the last version with the old menu and toolbar UI. Everything later has a "ribbon" and I find it unusable.

Today, the web-app/Android/iOS versions of Word do not have Outline View, no. Only the rich local app versions do.

But no, org-mode is not a better richer alternative; it is vastly inferior, to the point of being almost a parody.

It's really not. I tried it, and I found it a slightly sad crippled little thing that might be OK for managing my to-do list.

Hidden behind Emacs' *awful* 1970s UI which I would personally burn in a fire rather than ever use.

So, no, I don't think it's a very useful or capable outliner from what I have seen. Logseq has a better one.

To extend my earlier comparison:

Org-mode to Word's Outline View is Edlin to Emacs.

Logseq to Outline View is MS-DOS 5 EDIT to Emacs: it's a capable full-screen text editor that I know and like and which works fine. It's not very powerful but what it does, it does fine.

Is Org-mode aimed at something else? Maybe, yes. I don't know who or what it's aimed at, so I can't really say.
 

Word Outline Mode is the last surviving 1980s outliner, an entire category of app that's disappeared.

outliners.com/default.html

It's a good one but it was once one among many. It is, for me, *THE* killer feature of MS Word, and the only thing I keep WINE on my computers for.

It's a prose writer's tool, for writing long-form documents in a human language.

Emacs is a programmer's editor for writing program code in programming languages.

So, no, they are not the same thing, but the superficial similarity confuses people.
 

I must pick a fairly small example as I'm not very familiar with Emacs.

In Outline Mode, a paragraph's level in the hierarchy is tied with its paragraph style. Most people don't know how to use Word's style sheets, but think of HTML. Word has 9 heading levels, like H1...H9 on the Web, plus Body Text, which is always the lowest level.

As you promote or demote a paragraph, its style automatically changes to match.

(This has the side effect that you can see the level from the style. If that bothered you, in old versions you could turn off showing the formatting.)

As you move a block of hierarchical text around the outline all its levels automatically adopt the correct styles for their current location.

This means that when I wrote a manual in it, I did *no formatting by hand* at all. The text of the entire document is *automatically* formatted according to whether it's a chapter heading, or section, or subsection, or subsubsection, etc.

When you're done Word can automatically generate a table of contents, or an index, or both, that picks up all those section headings. Both assign page numbers "live", so if you move, add or delete any section, the ToC and index update immediately with the new positions and page numbers.
 

I say a small example as most professional writers don't deal with the formatting at all. That's the job of someone else in a different department.

Or, in technical writing, this is the job of some program. It's the sort of thing Linux folks get very excited about LaTeX and LyX for, or for which documentarians praise DocBook or DITA, but I've used both of those and they need a *vast* amount of manual labour -- and *very* complex tooling.

XML etc are also *extremely* fragile. One punctuation mark in the wrong place and 50 pages of formatting is broken or goes haywire. I've spent days troubleshooting one misplaced `:`. It's horrible.

Word can do all this automatically, and most people *don't even know the function is there.* It's like driving an articulated lorry as a personal car and never noticing that it can carry 40 tonnes of cargo! Worse still, people attach a trailer and roofrack and load them up with stuff... *because they don't know their vehicle can carry 10 cars already* as a built in feature.

I could take a sub sub section of a chapter and promote it to a chapter in its own right, and adjust the formatting of 100 pages, in about 6 or 8 keystrokes. That will also rebuild the index and redo the table of contents, automatically, for me.
 

All this can be entirely keyboard driven, or entirely mouse driven, according to the user's preference. Or any mixture of both, of course. I'm a keyboard warrior myself. I can live entirely without a pointing device and it barely slows me down.

You can with a couple of clicks collapse the whole book to just chapter headings, or just those and subheadings, or just all the headings and no body text... Any of 9 levels, as you choose. You can hide all the lower levels, restructure the whole thing, and then show them again. You can adjust formatting by adjusting indents in the overview, and then expand it again to see what happened and if it's what you want.

You could go crazy... zoom out to the top level, add a few new headings, indent under the new headings, and suddenly in a few clicks, your 1 big book is now 2 or 3 or 4 smaller books, each with its own set of chapters, headings, sub headings, sub sub headings etc. Each can have its own table of contents and index, all automatically generated and updated and formatted.
 

I'm an xNix guy, mainly. I try to avoid Windows as much as possible, but the early years of my career were supporting DOS and then Windows. There is good stuff there, and credit where it's due.

(MS Office on macOS also does this, but the keyboard UI is much clunkier.)

Outliners were just an everyday tool once. MS just built a good one into Word, way back in the DOS era. Word for DOS can do all this stuff too and it did it in like 200kB of RAM in 1988!

Integrating it into a word processor makes sense, but they were standalone apps.

It's not radical tech. This is really old, basic stuff. But somehow in the switch to GUIs on the PC, they got lost in the transition.

And no, LibreOffice/Abiword/CalligraWords has nothing even resembling this.
 

There are 2 types of outliner: intrinsic and extrinsic, also known as 1-pane or 2-pane.

en.wikipedia.org/wiki/Outliner

There are multiple 2-pane outliners that are FOSS.

But they are tools for organising info, and are almost totally useless for writers.

There are almost no intrinsic outliners in the FOSS world. I've been looking for years. The only one I know is Logseq, but it is just for note-taking and it does none of the formatting/indexing/ToC stuff I mentioned. It does handle Markdown but with zero integration with the outline structure.

So it's like going from Emacs to Notepad. All the clever stuff is gone, but you can still edit plain text.

 

liam_on_linux: (Default)
2024-10-12 10:44 am

Inferno notes

Plan 9 is Unix but more so. You write code in C and compile it to a native binary and run it as a process. All processes are in containers all the time, and nothing is outside the containers. Everything is virtualised, even the filesystem, and everything really is a file. Windows on screen are files. Computers are files. Disks are files. Any computer on the network can load a program from any other computer on the network (subject to permissions of course), run it on another computer, and display it on a third. The whole network is one giant computer.
 
You could use a slower workstation and farm out rendering complicated web pages to nearby faster machines, but see it on your screen.
 
But it's Unix. A binary is still a binary. So if you have a slow Arm64 machine, like a Raspberry Pi 3 (Plan 9 runs great on Raspberry Pis), you can't run your browser on a nearby workstation PC because that's x86-64. Arm binaries can't run on x86, and x86 binaries can't run on Arm.
 
Wasm (**W**eb **AS**se**M**bly) is a low-level bytecode that can run on any OS on any processor so long as it has a Wasm runtime. Wasm is derived from asm.js, which was an earlier effort to write compilers that could target the Javascript runtime inside web browsers, while saving the time it takes to put Javascript through a just-in-time compiler.
 
https://en.wikipedia.org/wiki/WebAssembly
 
eBPF (extended Berkeley Packet Filters) is a language for configuring firewall rules, that's been extended into a general programming language. It runs inside the Linux kernel: you write programs that run _as part of the kernel_ (not as apps in userspace) and can change how the kernel works on the fly. The same eBPF code runs inside any Linux kernel on any architecture. 
 
https://en.wikipedia.org/wiki/EBPF
 
Going back 30 years, Java runs compiled binary code on any CPU because code is compiled to JVM bytecode instead of CPU machine code... But you need a JVM on your OS to run it.
 
https://en.wikipedia.org/wiki/List_of_Java_virtual_machines
 
All these are bolted on to another OS, usually Linux.
 
But the concept works better if integrated right into the OS. That's what Taos did.
 
https://wiki.c2.com/?TaoIntentOs
 
Programs are compiled for a virtual CPU that never existed, called VP.
 
https://en.wikipedia.org/wiki/Virtual_Processor
 
They are translated from that to whatever processor you're running on as they're loaded from disk into RAM. So *the same binaries*  run natively on any CPU. X86-32, x86-64, Arm, Risc-V, doesn't matter.
 
Very powerful. It was nearly the basis of the next-gen Amiga.
 
http://www.amigahistory.plus.com/deplayer/august2001.html
 
But it was a whole new OS and a quite weird OS at that. Taos 1 was very skeletal and limited. Taos 2, renamed Int**e**nt (yes, with the bold), was much more complete but didn't get far before the company went under.
 
Inferno was a rival to Java and the JVM, around the time Java appeared.
 
It's Plan 9, but with a virtual processor runtime built right into the kernel. All processes are written in a safer descendant of C called Limbo (it's a direct ancestor of GoLang) and compiled to bytecode that executes in the kernel's VM, which is called Dis.
 
Any and all binaries run on all types of CPU. There is no "native code" any more. The same compiled program runs on x86, on Risc-V, on Arm. It no longer matters. Run all of them together on a single computer. 
 
Running on a RasPi, all your bookmarks and settings are there? No worries, run Firefox on the headless 32-core EPYC box in the next building, displaying on your Retina tablet, but save on the Pi. Or save on your Risc-V laptop's SSD next to your bed. So long as they're all running Inferno, it's all the same. One giant filesystem and all computers run the same binaries.
 
By the way, it's like 1% of the size of Linux with Wasm, and simpler too.
 
liam_on_linux: (Default)
2024-09-30 09:35 pm

Chris da Kiwi's personal history of computers

This is Chris's "Some thoughts on Computers" – the final, edited form.

 

The basic design of computers hasn't changed much since the mechanical one, the Difference Engine, invented by Charles Babbage in 1822 – but not built until 1991.  

Ada Lovelace was the mathematical genius who saw the value in Babbage’s work, but it was Alan Turing who invented computer science, and the ENIAC in 1945 was arguably the first electronic general-purpose digital computer. It filled a room. The Micral N was the world's first “personal computer,” in 1973.

Since then, the basic design has changed little, other than to become smaller, faster, and on occasions, less useful.

The current trend to lighter, smaller gadget-style toys – like cell phones, watches, headsets of various types, and other consumer toys – is an indication that the industry has fallen into the clutches of mainstream profiteering, with very little real innovation now at all.

 

I was recently looking for a new computer for my wife and headed into one of the main laptop suppliers, only to be met with row upon row of identical machines, at various price points arrived at by that mysterious breed known as "marketers". In fact, the only difference in the plastic on display was how much drive space the engineers had fitted in, and how much RAM it had. Was the case a pretty colour that appealed to the latest 10-year-old girl, or to a rugged he-man who was hoping to make the school whatever team? In other words, rows of blah.

 

Where was the excitement of the early Radio Shack "do-it-yourself" range: the Sinclair ZX80, the Commodore 8-bits (PET and VIC-20), which ran the CP/M operating system (one of my favorites), later followed by the C64? What has happened to all the excitement and innovation? My answer is simple: the great big clobbering machine known as "Big Tech".

 

Intel released its first 8080 processor in 1974 and later followed up with variations on a theme, eventually leading to the 80286, the 80386, the 80486 (getting useful), and so on. All of these variations needed an operating system, which basically was a variation of MS-DOS, believed to have been based on QDOS, or "Quick and Dirty Operating System," the work of developer Tim Paterson at a company called Seattle Computer Products (SCP). It was later renamed 86-DOS, after the Intel 8086 processor, and this was the version that Microsoft licensed and eventually purchased. Or, alternatively, the newer FOSS option: FreeDOS.

Games started to appear, and some of them were quite good. But the main driver of the computer was software.


In particular, word-processors and spreadsheets. 


At the time, my lost computer soul had found a niche in CP/M, which on looking back was a lovely little operating system – but quietly disappeared into the badlands of marketing. 


Lost and lonely I wandered the computerverse until I hooked up with Sanyo – itself now long gone the way of the velociraptor and other lost prehistoric species.
 

The Sanyo brought build quality, the so-called "lotus card" to make it fully compatible with the IBM PC, and later, an RGB colour monitor and a 10 meg hard drive. The basic model was still two 5¼" floppy drives, which they pushed up to 720kB, and later the 3½" 1.25MB floppy drives. Ahead of its time, it too went the way of the dinosaur.


These led to the Sanyo AT-286, which became a mainstay, along with the Commodore 64. A pharmaceutical company had developed a software system for pharmacies that included stock control, ordering, and sales systems. I vaguely remember that machine and software bundle being about NZ$15,000, which was far too rich for most, although I sold many of them over my time.


Then the computer landscape began to level out, as the component manufacturers began to settle on the IBM PC-AT as a compatible, open-market model of computer that met the Intel and DOS standards. Thus, the gradual slide into 10000 versions of mediocrity.


The consumer demand was for bigger and more powerful machines, whereas the industry wanted to make more profits. A conflict to which the basic computer scientists hardly seemed to give a thought.

I was reminded of Carl Jung's dictum that “greed would destroy the West.” 


A thousand firms sprang up, all selling the same little boxes, whilst the marketing voices kept trumpeting the bigger/better/greater theme… and the costs kept coming down, as businesses became able to afford these machines, and head offices began to control their outlying branches through the mighty computer. 


I headed overseas, to escape the bedlam, and found a spot in New Guinea – only to be overrun by a mainframe which was to be administered from Australia, and was going to run my branch – for which I was responsible, but without having any control.


Which side of the fence was I going to land on? The question was soon answered by the Tropical Diseases Institute in Darwin, which diagnosed dengue fever… and so I returned to NZ.


For months I battled this recurring malady, until I was strong enough to attend a few hardware and programming courses at the local Polytechnic, eventually setting up my own small computer business, building up 386 machines for resale, followed by 486s, and eventually a Texas Instruments laptop agency. That was about 1992, from my now fragile memory. I also dabbled with the Kaypro as a personal beast, and it was fun, but not as flexible as the Sanyo AT I was using.

The Texas Instruments laptop ran well enough and I remember playing Doom on it, but it had little battery life, and although the batteries were rechargeable, they needed to be charged every two or three hours. At least the WiFi worked pretty consistently, and for the road warrior, gave a point of distinction.

Then the famous 686 arrived, and by the use of various technologies, RAM began to climb up to 256MB, and in some machines 512MB.


Was innovation happening? No – just more marketing changes. As in, some machines came bundled with software, printers or other peripherals, such as modems, scanners, or even dot matrix printers. 

As we ended the 20th century, we bought bigger and more powerful machines. The desktop was being chased by the laptop, until I stood in my favorite computer wholesaler staring at a long row of shiny boxes that were basically all the same, wondering which one my wife would like… knowing that it would have to connect to the so-called "internet", and in doing so, make all sorts of decisions inevitable – such as how to secure a basically insecure system, which would require third-party programs of dubious quality and cost.


Eventually I chose a smaller Asus, with 16GB of main RAM and an NVIDIA card, and retreating to my cottage, collapsed in despair. Fifty years of computing and wasted innovation left her with a black box that, when she opened it, said “HELLO” against a big blue background that promised the world – but only offered more of the same. As in, a constant trickle of hackers, viruses, Trojans, and barely anything useful – but now including several new perversions called chat-bots or “AI”.


I retired to my room in defeat.

 

We have had incremental developments, until we have today's latest chips from Intel and AMD based on the 64-bit architecture first introduced around April 2003.

 

So where is the 128-bit architecture – or the 256 or the 512-bit?

 

What would happen if we got really innovative? I still remember Bill Gates saying "Nobody will ever need more than 640k of RAM." And yet, it is common now to buy machines with 8 or 16 or 32GB of RAM, because the poor quality of operating systems fills the memory with badly coded garbage that causes memory leaks, stack-overflow errors and other memory issues.

 

Then there is Unix, which I started using at my courses at Christchurch Polytechnic – a DEC 10, from memory – which also introduced me to the famous, or infamous, BOFH.
 

I spent many happy hours chuckling over the BOFH’s exploits. Then came awareness of the twin geniuses, Richard Stallman and Linus Torvalds, and of GNU/Linux: a solid, basic series of operating systems, and programs from various vendors, that simply do what they are asked, and do it well.

  

I wonder where all this could head, if computer manufacturers climbed onboard and developed, for example, a laptop with an HDMI screen, a rugged case with a removable battery, a decent sound system, with a good-quality keyboard, backlight with per-key colour selection. Enough RAM slots to boost the main memory up to say 256GB, and video RAM to 64GB, allowing high speed draws to the screen output.

 

Throw away the useless touch pads, and gimmicks like second mini screens built into the chassis. With the advent of Bluetooth mice, they are no longer needed. Instead, include an 8TB NVMe drive, then include a decent set of controllable fans and heat pipes that actually keep the internal temperatures down, so as to not stress the RAM and processors.


I am sure this could be done, given that some manufacturers, such as Tuxedo, are already showing some innovation in this area. 


Will it happen? I doubt it. The clobbering machine will strike again.

- - - - -

Having found that I could not purchase a suitable machine for my needs, I wandered throughout the computerverse until I discovered, in a friend's small computer business, an Asus ROG Windows 7 model, in about 2004. It was able to have a RAM upgrade, which I duly carried out, with 2 × 8GB SO-DIMMs plus 4GB of SDDR2 video RAM, and 2×500GB WD 7200RPM spinning-rust hard drives. This was beginning to look more like a computer. Over the time I used it, I was able to replace the spinning-rust drives with 500GB Samsung SSDs, and as larger sticks of RAM became available, increased that to the limit as well. I ran that machine, which was Linux-compatible, throwing away the BSOD [Blue Screen Of Death – Ed.] of Microsoft Windows, and putting one of the earliest versions of Ubuntu with GNOME on it. It was computing heaven: everything just worked, and I dragged that poor beast around the world with me.


While in San Diego, I attended Scripps University and lectured on cot death for three months as a guest lecturer. 

Scripps at the time was involved with IBM in developing a line-of-sight optical network, which worked brilliantly on campus. It was confined to a couple of experimental computer labs, but you had to keep your fingers off the mouse or keyboard, or your machine would overload with web pages if browsing. I believe it never made it into the world of computers for ordinary users, as the machines of the day could not keep up.


There was also talk around the labs of so-called quantum computing, which had been talked about since the 1960s on and off, but some developments appeared in 1968.

The whole idea sounds great – if it could be made to work at a practicable user level.  But in the back of my mind, I had a suspicion that these ideas would just hinder investment and development of what was now a standard of motherboards and BIOS-based systems. Meanwhile, my Tux machine just did what was asked of it.


Thank you, Ian and Debra Murdock, who developed the Debian version of Linux – on which Ubuntu was based.

I dragged that poor Asus around the Americas, both North and South, refurbishing it as I went. I found Fry's, the major technology shop in San Diego, where I could purchase portable hard drives and so on at a fraction of the cost of elsewhere in the world, as well as just about any computer peripheral I dreamed of. This shop was a tech's heaven, so to speak – and totally addictive to someone like me.


Eventually, I arrived in Canada, where I had a speaking engagement at Calgary University – which also had a strong Tux club – and I spent some time happily looking at a few other distros. Distrowatch had been founded about 2001, which made it easy to keep up with Linux news, new versions of Tux, and what system they were based on. Gentoo seemed to be the distro for those with the knowledge to compile and tweak every little aspect of their software.


Arch attracted me at times. But eventually, I always went back to Ubuntu –  until I learned of Ubuntu MATE. The University had a pre-release copy of Ubuntu MATE 14.10, along with a podcast from Alan Pope and Martin Wimpress, and before I could turn around I had it on my Asus. It was simple, everything worked, and it removed the horrors of GNOME 3.


I flew happily back to New Zealand and my little country cottage.


Late in 2015, my wife became very unwell after a shopping trip. Getting in touch with some medical friends, they were concerned she’d had a heart attack. This was near the mark: she had contracted a virus which had destroyed a third of her heart muscle. It took her a few years to die, and a miserable time it was for her and for us both. After the funeral, I had rented out my house and bought a Toyota motor home, and I began traveling around the country. I ran my Asus through a solar panel hooked up to an inverter, a system which worked well and kept the beast going.


After a couple of years, I decided to have a look around Australia. My grandfather on my father's side was Australian, and had fascinated us with tales of the outback, where he worked as a drover in the 1930s and ’40s.


And so, I moved to Perth, where my brother had been living since the 1950s. 


There, I discovered an amazing thing: a configurable laptop based on a Clevo motherboard – and not only that, the factory of the manufacturer, Metabox, was just up the road in Fremantle.


Hastily, I logged on to their website, and in a state of disbelief, browsed happily for hours at all the combinations I could put together. These were all variations on a theme by Windows 7 (to misquote Paganini), and there was no listing of ACPI records or other BIOS information with which to help make a decision.


I looked at my battered old faithful, my many-times-rebuilt Asus, and decided the time had come. I started building. Maximum RAM and video RAM, latest NVIDIA card, two SSDs, their top-of-the-line WiFi and Bluetooth chipsets, sound cards, etc. Then, as my time in Perth was at an end, I got it sent to New Zealand, as I was due to fly back the next day. 


That was the first of four Metabox machines I have built, and is still running flawlessly using Ubuntu MATE. I gave it to a friend some years ago and he is delighted with it still.


I had decided to go to the Philippines and Southeast Asia to help set up clinics for distressed children, something I had already done in South America, and the NZ winter was fast approaching. Hastily, I arranged with a church group in North Luzon to be met at Manila airport. I had already contacted an interpreter, an English teacher who was fluent in Visayan and Tagalog, so we arranged to meet at the airport and go on from there.

Packing my trusty Metabox, I flew out of Christchurch into a brand-new world. 

The so-called job soon turned out to be a scam, and after spending a week or so in Manila, I suggested that rather than waste the visa, we have a look over some of the country. Dimp, my interpreter, pointed out that her home was on the next island over and would make a good base to move from.

So we ended up in Cagayan de Oro – the city of the river of gold! After some months of traveling around, we decided to get married, and so I began the process of getting a visa for Dimp to live in NZ. This was a very difficult process, but with the help of a brilliant immigration lawyer, and many friends, we managed it, and next year Dimp becomes a NZ citizen.

My next Metabox was described as a Windows 10 machine, but I knew that it would run Linux beautifully – and so it did. A few tweaks around the ACPI subsystem and it computed away merrily, with not a BSOD in sight. A friend of mine who had popped in for a visit was so impressed with it that he ordered one too, and that arrived about three months later. A quick wipe of the hard drive (thank you, Gparted!), and both these machines are still running happily, with not a cloud on the horizon.

One – a Win 10 machine – I gave to my stepson about three months back, and he has taken it back with him to the Philippines, where he reports it is running fine in the tropical heat.

My new Metabox arrived about six weeks ago, and I decided – just out of curiosity – to leave Windows 11 on it. A most stupid decision, but as my wife was running Windows 11 and had already blown it up once, needing a full reset (which, to my surprise, worked), I proceeded to charge it for the recommended 24 hours, and next day, switched it on. “Hello” it said, in big white letters, and then the nonsense began… a torrent of unwanted software proceeded to fill up one of my 8TB NVMe drives, culminating after many reboots with a Chatbot, an AI “assistant”, and something called “Co-pilot”. 

“No!” I cried, “not in a million years!” – and hastily plugging in my Ventoy stick, I rebooted it into Gparted, and partitioned my hard drive as ext4 for Ubuntu MATE.


So far, the beast seems most appreciative, and it hums along with just a gentle puff of warm air out of the ports. I needed to do a little tweaking, as the latest NVIDIA cards don’t seem to like Wayland as a graphics server, plus the addition of acpi=off to GRUB – and another flawless computer is on the road.
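
[For anyone wanting to repeat that tweak, here is a minimal sketch of where such a kernel parameter goes on an Ubuntu-family install – assuming the stock GRUB layout, and bearing in mind that acpi=off is a blunt instrument which also switches off ACPI power management, so it is a last resort rather than a recommendation. – Ed.]

    # /etc/default/grub  – edit as root, then run "sudo update-grub" and reboot
    # ("quiet splash" are Ubuntu's defaults; acpi=off is the added parameter)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off"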


Now, if only I could persuade Metabox to move to a 128-bit system, and could get delivery of that on the other side of the great divide, my future would be in computer heaven.


Oh, if you’re wondering what happened to the Asus? It is still on the kitchen table in our house in the Philippines, in pieces, where I have no doubt it is waiting for another rebuild! Maybe my stepson Bimbo will do it and give it to his niece. Old computers never die – they just get recycled.


— Chris Thomas

In Requiem 

03/05/1942 — 02/10/2024 



 
liam_on_linux: (Default)
2024-09-23 06:23 pm

The second and final part of Chris' personal history with Linux

This is the second, and I very much fear the last, part of my friend Chris "da Kiwi" Thomas' recollections about PCs, Linux, and more. I shared the first part a few days ago.

Having found that I could not purchase a suitable machine for my needs, I discovered the Asus ROG Windows 7 model, in about 2004. It was able to have a RAM upgrade, which I duly carried out, with 2 × 8GB SO-DIMMs, plus 4GB of SDDR2 video RAM, and 2×500GB WD 7200RPM hard drives. This was beginning to look more like a computer. Over the time I used it, I was able to replace the spinning-rust drives with 500GB Samsung SSDs, and as larger sticks of RAM became available, increased that to the limit as well. I ran that machine, which was Tux-compatible [“Tux” being Chris’s nickname for Linux. – Ed.], throwing away the BSOD [Blue Screen Of Death – that is, Microsoft Windows. – Ed.] and putting one of the earliest versions of Ubuntu with GNOME on it. It was computing heaven: everything just worked, and I dragged that poor beast around the world with me.


While in San Diego, I attended Scripps and lectured on cot death for three months as a guest. Scripps at the time was involved with IBM in developing a line-of-sight optical network, which worked brilliantly on campus. It was confined to a couple of experimental computer labs, but you had to keep your fingers off the mouse or keyboard, or your machine would overload with web pages if browsing. I believe it never made it into the world of computers for ordinary users, as the machines of the day could not keep up.


There was also talk around the labs of so-called quantum computing, which had been talked about since the 1960s on and off, but some developments appeared in 1968.

The whole idea sounds great – if it could be made to work at a practicable user level. But in the back of my mind, I had a suspicion that these ideas would just hinder investment and development of what was now a standard of motherboards and BIOS-based systems. Meanwhile, my Tux machine just did what was asked of it.


Thank you, Ian and Debra Murdock, who developed the Debian version of Tux – on which Ubuntu was based.

I dragged that poor Asus around the Americas, both North and South, refurbishing it as I went. I found Fry's, the major technology shop in San Diego, where I could purchase portable hard drives and so on at a fraction of the cost of elsewhere in the world.


Eventually, I arrived in Canada, where I had a speaking engagement at Calgary University – which also had a strong Tux club – and I spent some time happily looking at a few other distros. Distrowatch had been founded about 2001, which made it easy to keep up with Linux news, new versions of Tux, and what system they were based on. Gentoo seemed to be the distro for those with the knowledge to compile and tweak every little aspect of their software.


Arch attracted me at times. But eventually, I always went back to Ubuntu –  until I learned of Ubuntu MATE. The University had a pre-release copy of Ubuntu MATE 14.10, along with a podcast from Alan Pope and Martin Wimpress, and before I could turn around I had it on my Asus. It was simple, everything worked, and it removed the horrors of GNOME 3.


I flew happily back to New Zealand and my little country cottage.


Late in 2015, my wife became very unwell after a shopping trip. Getting in touch with some medical friends, they were concerned she’d had a heart attack. This was near the mark: she had contracted a virus which had destroyed a third of her heart muscle. It took her a few years to die, and a miserable time it was for her and for us both. After the funeral, I had rented out my house and bought a Toyota motorhome, and I began traveling around the country. I ran my Asus through a solar panel hooked up to an inverter, a system which worked well and kept the beast going.


After a couple of years, I decided to have a look around Australia. My grandfather on my father's side was Australian, and had fascinated us with tales of the outback, where he worked as a drover in the 1930s and ’40s.


And so, I moved to Perth, where my brother had been living since the 1950s. 


There, I discovered an amazing thing: a configurable laptop based on a Clevo motherboard – and not only that, their factory was just up the road in Fremantle.



Hastily, I logged on to their website, and in a state of disbelief, browsed happily for hours at all the combinations I could put together. These were all variations on a theme by Windows 7, and there were no listing of ACPI records or other BIOS information.


I looked at my battered old faithful, my many-times-rebuilt Asus, and decided the time had come. I started building. Maximum RAM and video RAM, latest nVidia card, two SSDs, their top-of-the-line WiFi and Bluetooth chipsets, sound cards, etc. Then, I got it sent to New Zealand, as I was due back the next day.


That was the first of four Metabox machines I have built, and is still running flawlessly using Ubuntu MATE. 


My next Metabox was described as a Windows 10 machine, but I knew that it would run Tux beautifully – and so it did. A few tweaks around the ACPI subsystem and it computed away merrily, with not a BSOD in sight. A friend of mine who had popped in for a visit was so impressed with it that he ordered one too, and that arrived about three months later. A quick wipe of the hard drive (thank you, Gparted!), both these machines are still running happily, with not a cloud on the horizon.


One, I gave to my stepson about three months back, and he has taken it back with him to the Philippines, where he reports it is running fine in the tropical heat.


My new Metabox arrived about six weeks ago, and I decided – just out of curiosity – to leave Windows 11 on it. A most stupid decision, but as my wife was running Windows 11 and had already blown it up once, needing a full reset (which, to my surprise, worked), I proceeded to charge it for the recommended 24 hours, and next day, switched it on. “Hello” it said, in big white letters, and then the nonsense began… a torrent of unwanted software proceeded to fill up one of my 8TB NVMe drives, culminating after many reboots with a Chatbot, an AI “assistant”, and something called “Co-pilot”. 


“No!” I cried, “not in a million years!” – and hastily plugging in my Ventoy stick, I rebooted it into Gparted, and partitioned my hard drive for Ubuntu MATE.


So far, the beast seems most appreciative, and it hums along with just a gentle puff of warm air out of the ports. I needed to do a little tweaking, as the latest nVidia cards don’t seem to like Wayland as a graphics server, plus the addition of acpi=off to GRUB – and another flawless computer is on the road.


Now, if only I could persuade Metabox to move to a 128-bit system, and can get delivery of that on the other side of the great divide, my future will be in computer heaven.



Oh, if you’re wondering what happened to the Asus? It is still on the kitchen table in our house in the Philippines, in pieces, where I have no doubt it is waiting for another rebuild! 


Chris Thomas

In Requiem 

03/05/1942 — 02/10/2024 

 
liam_on_linux: (Default)
2024-09-21 12:14 am
Entry tags:

Guest post: "Some thoughts on computers", by Chris da Kiwi

A friend of mine via the Ubuntu mailing list for the last couple of decades, Chris is bedbound now and tells me he's in his final weeks of life. He shared with me a piece he's written. I've lightly edited it before sharing it, and if he's feeling up to it, there is some more he wants to say. We would welcome thoughts and comments on it.

                                                  Some thoughts on Computers

 

The basic design of computers hasn't changed much since the mechanical one, the Difference Engine, invented by Charles Babbage in 1822 – but not built until 1991. Alan Turing invented computer science, and the ENIAC in 1945 was arguably the first electronic general-purpose digital computer. It filled a room. The Micral N was the world's first “personal computer,” in 1973.

 

Since then, the basic design has changed little, other than to become smaller, faster, and on occasions, less useful.

 

The current trend to lighter, smaller gadget-style toys – like cell phones, watches, headsets of various types, and other consumer toys – is an indication that the industry has fallen into the clutches of mainstream profiteering, with very little real innovation now at all.

 

I was recently looking for a new computer for my wife and headed into one of the main laptop suppliers, only to be met with row upon row of identical machines, at various price points arrived at by that mysterious breed known as "marketers". In fact, the only difference in the plastic on display was how much drive space the engineers had fitted in, and how much RAM it had. Was the case a pretty colour that appealed to the latest 10-year-old girl, or rugged enough for the he-man who was hoping to make the school's whatever team? In other words, rows of blah.

 

Where was the excitement of the early Radio Shack "do-it-yourself" range: the Sinclair ZX80, the Commodore 8-bits (PET and VIC-20), later followed by the C64? What has happened to all the excitement and innovation? My answer is simple: the great big clobbering machine known as "Big Tech".

 

Intel released its 8080 processor in 1974 and later followed up with variations on a theme [PDF], eventually leading to the 80286, the 80386, the 80486 (getting useful), and so on. All of these variations needed an operating system, which basically was a variation of MS-DOS or, more flexibly, PC DOS. Games started to appear, and some of them were quite good. But the main driver of the computer was software.


In particular, word-processors and spreadsheets. 


At the time, my lost computer soul had found a niche in CP/M, which on looking back was a lovely little operating system – but quietly disappeared into the badlands of marketing. 


Lost and lonely I wandered the computerverse until I hooked up with Sanyo – itself now long gone the way of the velociraptor and other lost prehistoric species.
 

The Sanyo brought build quality, the so-called "lotus card" to make it fully compatible with the IBM PC, and later, an RGB colour monitor and a 10 gig hard drive. The basic model still had two 5¼" floppy drives, which they pushed up to 720kB, and later the 3½" 1.25MB floppy drives. Ahead of its time, it too went the way of the dinosaur.


These led to the Sanyo AT-286, which became a mainstay, along with the Commodore 64. A pharmaceutical company had developed a software system for pharmacies that included stock control, ordering, and sales systems. I vaguely remember that machine and software bundle was about NZ$ 15,000, which was far too rich for most.


Then the computer landscape began to level out, as the component manufacturers began to settle on the IBM PC-AT as a compatible, open-market model of computer that met the Intel and DOS standards. Thus, the gradual slide into 100 versions of mediocrity.


The consumer demand was for bigger and more powerful machines, whereas the industry wanted to make more profits. A conflict to which the basic computer scientists hardly seemed to give a thought.

I was reminded of Carl Jung's dictum: that “greed would destroy the West.” 


A thousand firms sprang up, all selling the same little boxes, whilst the marketing voices kept trumpeting the bigger/better/greater theme… and the costs kept coming down, as businesses became able to afford these machines, and head offices began to control their outlying branches through the mighty computer. 


I headed overseas, to escape the bedlam, and found a spot in New Guinea – only to be overrun by a mainframe run from Australia, which was going to run my branch – for which I was responsible, but without any control.


Which side of the fence was I going to land on? The question was soon answered by the Tropical Diseases Institute in Darwin, which diagnosed dengue fever… and so I returned to NZ.


For months I battled this recurring malady, until I was strong enough to attend a few hardware and programming courses at the local Polytechnic, eventually setting up my own small computer business, building up 386 machines for resale, followed by 486s, and eventually a Texas Instruments laptop agency.


These ran well enough, but had little battery life, and although they were rechargeable, they needed to be charged every two or three hours. At least the WiFi worked pretty consistently, and for the road warrior, gave a point of distinction.


[I think Chris is getting his time periods mixed up here. —Ed.]


Then the famous 686 arrived, and by the use of various technologies, RAM began to climb up to 256MB, and in some machines 512MB.


Was innovation happening? No – just more marketing changes. As in, some machines came bundled with software, printers or other peripherals, such as modems.

As we ended the 20th century, we bought bigger and more powerful machines. The desktop was being chased by the laptop, until I stood at a long row of shiny boxes that were basically all the same, wondering which one my wife would like… knowing that it would have to connect to the so-called "internet", and in doing so, make all sorts of decisions inevitable.


Eventually I chose a smaller Asus, with 16GB of main RAM and an nVidia card, and retreating to my cottage, collapsed in despair. Fifty years of computing and wasted innovation left her with a black box that, when she opened it, said “HELLO” against a big blue background that promised the world – but only offered more of the same. As in, a constant trickle of hackers, viruses, Trojans and barely anything useful – but now including a new perversion called a chat-bot, or “AI”.


I retired to my room in defeat.

 

We have had incremental developments, until we have today's latest chips from Intel and AMD based on the 64-bit architecture first introduced around April 2003.

 

So where is the 128-bit architecture – or the 256 or the 512-bit?

 

What would happen if we got really innovative? I still remember Bill Gates saying "Nobody will ever need more than 640k of RAM." And yet, it is common now to buy machines with 8 or 16 or 32GB of RAM, because the poor quality of operating systems fills the memory with poorly-written garbage that causes memory leaks, stack-overflow errors and other memory issues.

 

Then there is Unix – or since the advent of Richard Stallman and Linus Torvalds, GNU/Linux. A solid, basic series of operating systems, by various vendors, that simply do what they are asked. 

 

I wonder where all this could head, if computer manufacturers climbed onboard and developed, for example, a laptop with an HDMI screen, a rugged case with a removable battery, a decent sound system, with a good-quality keyboard, backlit with per-key colour selection. Enough RAM slots to boost the main memory up to say 256GB, and video RAM to 64GB, allowing high speed draws to the screen output.

 

Throw away the useless touch pads. With the advent of Bluetooth mice, they are no longer needed. Instead, include an 8TB NVMe drive, then include a decent set of controllable fans and heatpipes that actually kept the internal temperatures down, so as to not stress the RAM and processors.


I am sure this could be done, given that some manufacturers, such as Tuxedo, are already showing some innovation in this area. 


Will it happen? I doubt it. The clobbering machine will strike again.



Friday September 20th 2024 

liam_on_linux: (Default)
2024-08-29 09:56 am

To a tiling WM user, apparently other GUIs are like wearing handcuffs

 This is interesting to me. I am on the other side, and ISTM that the tiling WM folks are the camp you describe.

Windows (2.01) was the 3rd GUI I learned. First was classic MacOS (System 6 and early System 7.0), then Acorn RISC OS on my own home computer, then Windows.

Both MacOS and RISC OS have beautiful, very mouse-centric GUIs where you must use the mouse for most things. Windows was fascinating because it has rich, well-thought-out, rational and consistent keyboard controls, and they work everywhere. In all graphical apps, in the window manager itself, and on the command line.

-- Ctrl + a letter is a discrete action: do this thing now.

-- Alt + a letter opens a menu

-- Shift moves selects in a continuous range: shift+cursors selects text or files in a file manager. Shift+mouse selects multiple icons in a block in a file manager.

-- Ctrl + mouse selects discontinuously: pick disconnected icons.

-- These can be combined: shift-select a block, then press ctrl as well to add some discontinuous entries.

-- Ctrl + cursor keys moves a word at a time (discontinuous cursor movement).

-- Shift + ctrl selects a word at a time.

In the mid-'90s Linux made Unix affordable and I got to know it, and I switched to it in the early '00s.

But it lacks that overall cohesive keyboard UI. Some desktops implement most of Windows' keyboard UI (Xfce, LXDE, GNOME 2.x), some invent their own (KDE), many don't have one.

The shell and editors don't have any consistency. Each editor has its own set of keyboard controls, and some environments honour some of them -- but not many because the keyboard controls for an editor make little sense in a window manager. What does "insert mode" mean in a file manager?

They are keyboard-driven windowing environments built by people who live in terminals and only know the extremely limited keyboard controls of the most primitive extant shell environment, one that doesn't honour GUI keyboard UI because it predates it and so in which every app invents its own.

Whereas Windows co-evolved with IBM CUA and deeply embeds it.

The result is that all the Linux tiling WMs I've tried annoy me, because they don't respect the existing Windows-based keystrokes for manipulating windows. GNOME >=3 mostly doesn't either: keystrokes for menu manipulation make little sense when you've tried to eliminate menus from your UI.

Even the growing-in-trendiness MiracleWM annoys me, because the developer doesn't use plain Ubuntu, he uses Kubuntu, and Kubuntu doesn't respect basic Ubuntu keystrokes like Ctrl+Alt+T for a terminal, so neither does MiracleWM.

They are multiple non-overlapping, non-cohesive, non-uniform keyboard UIs designed by and for people who never knew how to use a keyboard-driven whole-OS UI because they didn't know there was one. So they all built their own ones without knowing that there's 30+ years of prior art for this.

All these little half-thought-out attempts to build something that already existed but its creators didn't know about it.

To extend the prisoners-escaping-jail theme:

Each only extends the one prisoner cell that inmate knew before they got out, where the prison cell is an app -- often a text editor but sometimes it's one game.

One environment lets you navigate by only going left or straight. To go right, turn left three times! Simple!

One only lets you navigate in spirals, but you can adjust the size, and toggle clockwise or anticlockwise.

One is like Asteroids: you pivot your cursor and apply thrust.

One uses Doom/Quake-style WASD + mouse, because everyone knows that, right? It's the standard!

One expects you to plug in a joypad controller and use that.

liam_on_linux: (Default)
2024-07-26 06:20 pm
Entry tags:

Bring back distro-wide themes!

Someone on Reddit was asking about the Bluecurve theme on Red Hat Linux.

Back then, Red Hat Linux only offered KDE and GNOME, I think. The great thing about Bluecurve was that they looked the same and both of them had the Red Hat look.

Not any more. In recent years I've tried GNOME, Xfce, MATE, KDE, Cinnamon, and LXQt on Fedora.

They all look different. They may have some wallpaper in common but that's it. In any of them, there's no way you can glance from across a room (meaning, too far away to read any text or see any logos) and go "oh, yeah, that's Fedora."

And on openSUSE, I tried all of them plus LXDE and IceWM. Same thing. Wallpaper at best.

Same on Ubuntu: I regularly try all the main flavours, as I did here and they all look different. MATE makes an effort, Unity has some of the wallpapers, but that's about it.

If a vendor or project has one corporate brand and one corporate look, usually, time and money and effort went into it. Into logos, colours, tints, gradients, wallpaper, all that stuff.

It seems to me that the least the maintainers of different desktop flavours or spins could do is adopt the official theme and make their remixes look like they are the same OS from the same vendor.

I like Xfce. Its themes aren't great. Many, most, make window borders so thin you can't grab them to resize. Budgie is OK and looks colourful, but Ubuntu Budgie does not look like Ubuntu.

Kubuntu looks like Fedora KDE looks like Debian with KDE looks like anything with KDE, and to my eyes, KDE's themes are horrible, as they have been since KDE 1 -- yes I used 1.0, and liked it -- and only 3rd party distro vendor themes ever made KDE look good.

Only 2 of them, really: Red Hat Linux with Bluecurve, and Corel LinuxOS and Xandros.

Everyone else's KDE skins are horrible. All of them. It's one reason I can't use KDE now. It almost hurts my eyes. (Same goes for TDE BTW.) It is nasty.

Branding matters. Distros all ignore it now. They shouldn't.

And someone somewhere should bring back Bluecurve, or failing that, port GNOME's Adwaita to all the other desktops. I can't stand GNOME but its themes and appearance are the best of any distro in the West. (Some of the Chinese ones like Deepin and Kylin are beautiful, but everyone's afraid they're full of spyware for the Chinese Communist Party... and they might be right.)

liam_on_linux: (Default)
2024-07-24 05:49 pm

"Computer designs, back then": the story of ARRA, the first Dutch computer

ARRA was the first ever Dutch computer.
 
There's an account of its creation entitled 9.2 Computers ontwerpen, toen ("Computer Designs, then") by the late Carel S Scholten, but sadly for Anglophone readers it's in Dutch.

This is a translation into English, done using ChatGPT 4o by Gavin Scott. I found it readable and fun, although I have no way to judge how accurate it is.

C.S. Scholten

In the summer of 1947, I was on vacation in Almelo. Earlier that year, on the same day as my best friend and inseparable study mate, Bram Jan Loopstra, I had successfully passed the qualifying exams in mathematics and physics. The mandatory brief introduction to the three major laboratories—the Physics Laboratory, the V.d. Waals Laboratory, and the Zeeman Laboratory—was behind us, and we were about to start our doctoral studies in experimental physics. For two years, we would be practically working in one of the aforementioned laboratories.

 

One day, I received a telegram in Almelo with approximately the following content: "Would you like to assist in building an automatic calculating machine?" For assurance, another sentence was added: "Mr. Loopstra has already agreed." The sender was "The Mathematical Center," according to further details, located in Amsterdam. I briefly considered whether my friend had already confirmed my cooperation, but in that case, the telegram seemed unnecessary, so I dismissed that assumption. Both scenarios were equally valid: breaking up our long-standing cooperation (dating back to the beginning of high school) was simply unthinkable. Furthermore, the telegram contained two attractive points: "automatic calculating machine" and "Mathematical Center," both new concepts to me. I couldn’t deduce more than the name suggested. Since the cost of a telegram exceeded my budget, I posted a postcard with my answer and resumed my vacation activities. Those of you who have been involved in recruiting staff will, I assume, be filled with admiration for this unique example of recruitment tactics: no fuss about salary or working hours, not to mention irrelevant details like pension, vacation, and sick leave. For your reassurance, it should be mentioned that I was indeed offered a salary and benefits, which, in our eyes, were quite generous.

 

I wasn't too concerned about how the new job could be combined with the mandatory two-year laboratory work. I believed that a solution had to be found for that. And a solution was found: the laboratory work could be replaced by our work at the Mathematical Center.

 

Upon returning to Amsterdam, I found out the following: the Mathematical Center was founded in 1946, with a goal that could roughly be inferred from its name. One of the departments was the 'Calculation Department,' where diligent young ladies, using hand calculators—colloquially known as 'coffee grinders'—numerically solved, for example, differential equations (in a later stage, so-called 'bookkeeping machines' were added to the machinery). The problems dealt with usually came from external clients. The head of the Calculation Department was Dr. ir. A. van Wijngaarden. Stories about automatic calculating machines had also reached the management of the Mathematical Center, and it was clear from the outset that such a tool—if viable—could be of great importance, especially for the Calculation Department. However, it was not possible to buy this equipment; those who wanted to discuss it had to build it themselves. Consequently, it was decided to establish a separate group under the Calculation Department, with the task of constructing an automatic calculating machine. Given the probable nature of this group’s activities, it was somewhat an oddity within the Mathematical Center, doomed to disappear, if not after completing the first machine, then certainly once this kind of tool became a normal trade object.

 

We were not the only group in the Netherlands involved in constructing calculating machines. As we later discovered, Dr. W.L. v.d. Poel had already started constructing a machine in 1946.

 

Our direct boss was Van Wijngaarden, and our newly formed two-man group was temporarily housed in a room of the Physics Laboratory on Plantage Muidergracht, where Prof. Clay was in charge. Our first significant act was the removal of a high-voltage installation in the room, much to the dismay of Clay, who was fond of the thing but arrived too late to prevent the disaster. Then we thought it might be useful to equip the room with some 220V sockets, so we went to Waterlooplein and returned with a second-hand hammer, pliers, screwdriver, some wire, and a few wooden (it was 1947!) sockets. I remember wondering whether we could reasonably submit the exorbitant bill corresponding to these purchases. Nonetheless, we did.

 

After providing our room with voltage, we felt an unpleasant sensation that something was expected from us, though we had no idea how to start. We decided to consult the sparse literature. This investigation yielded two notable articles: one about the ENIAC, a digital (decimal) computer designed for ballistic problems, and one about a differential analyzer, a device for solving differential equations, where the values of variables were represented by continuously variable physical quantities, in this case, the rotation of shafts. The first article was abominably written and incomprehensible, and as far as we understood it, it was daunting, mentioning, for instance, 18,000 vacuum tubes, a number we were sure our employer could never afford. The second article (by V. Bush), on the other hand, was excellently written and gave us the idea that such a thing indeed seemed buildable.

 

Therefore, it had to be a differential analyzer, and a mechanical one at that. As we now know, we were betting on the wrong horse, but first, we didn’t know that, and second, it didn’t really matter. Initially, we were not up to either task simply because we lacked any electronic training. We were supposed to master electricity and atomic physics, but how a vacuum tube looked inside was known only to radio amateurs among us, and we certainly were not. Our own (preliminary) practicum contained, to my knowledge, no experiment in which a vacuum tube was the object of study, and the physics practicum for medical students (the so-called 'medical practicum'), where we had supervised for a year as student assistants, contained exactly one such experiment. It involved a rectifier, dated by colleagues with some training in archaeology to about the end of the First World War. The accompanying manual prescribed turning on the 'plate voltage' only tens of seconds after the filament voltage, and the students had to answer why this instruction was given. The answers were sometimes very amusing. One such answer I won’t withhold from you: 'That is to give the current a chance to go around once.'

 

Our first own experiment with a vacuum tube would not have been out of place in a slapstick movie. It involved a triode, in whose anode circuit we included a megohm resistor for safety. Safely ensconced behind a tipped-over table, we turned on the 'experiment.' Unlike in a slapstick movie, nothing significant happened in our case.

 

With the help of some textbooks, and not to forget the 'tube manuals' of some manufacturers of these useful objects, we somewhat brushed up on our electronic knowledge and managed to get a couple of components, which were supposed to play a role in the differential analyzer, to a state where their function could at least be guessed. They were a moment amplifier and a curve follower. How we should perfect these devices so that they would work reliably and could be produced in some numbers remained a mystery to us. The solution to this mystery was never found. Certainly not by me, as around this time (January 1948), I was summoned to military service, which couldn’t do without me. During the two years and eight months of my absence (I returned to civilian life in September 1950), a drastic change took place, which I could follow thanks to frequent contacts with Loopstra.

 

First, the Mathematical Center, including our group, moved to the current building at 2nd Boerhaavestraat 49. The building looked somewhat different back then. The entire building had consisted of two symmetrically built schools. During the war, the building was requisitioned by the Germans and used as a garage. In this context, the outer wall of one of the gymnasiums was demolished. Now, one half was again in use as a school, and the other half, as well as the attic above both halves, was assigned to the Mathematical Center. The Germans had installed a munitions lift in the building. The lift was gone, but the associated lift shaft was not. Fortunately, few among us had suicidal tendencies. The frosted glass in the toilet doors (an old school!) had long since disappeared; for the sake of decorum, curtains were hung in front of them.

 

Van Wijngaarden could operate for a long time over a hole in the floor next to his desk, corresponding with a hole in the ceiling of the room below (unoccupied). Despite his impressive cigar consumption at that time, I didn’t notice that this gigantic ashtray ever filled up.

 

The number of employees in our group had meanwhile expanded somewhat; all in all, perhaps around five.

 

The most significant change in the situation concerned our further plans. The idea of a differential analyzer was abandoned as it had become clear that the future belonged to digital computers. Upon my return, a substantial part of such a computer, the 'ARRA' (Automatische Relais Rekenmachine Amsterdam), had already been realized. The main components were relays (for various logical functions) and tubes (for the flip-flops that composed the registers). The relays were Siemens high-speed relays (switching times in the order of a few milliseconds), personally retrieved by Loopstra and Van Wijngaarden from an English war surplus. They contained a single changeover contact (break-before-make), with make and break contacts rigidly set, although adjustable. Logically appealing were the two separate coils (with an equal number of windings): both the inclusive and exclusive OR functions were within reach. The relays were mounted on octal bases by us and later enclosed in a plastic bag to prevent contact contamination.

 

They were a constant source of concern: switching times were unreliable (especially when the exclusive OR was applied) and contact degradation occurred nonetheless. Cleaning the contacts ('polishing the pins') and resetting the switching times became a regular pastime, often involving the girls from the Calculation Department. The setting was done on a relay tester, and during this setting, the contacts were under considerable voltage. Although an instrument with a wooden handle was used for setting, the curses occasionally uttered suggested it was not entirely effective.

 

For the flip-flops, double triodes were used, followed by a power tube to drive a sufficient number of relays, and a pilot lamp for visual indication of the flip-flop state. Since the A had three registers, each 30 bits wide, there must have been about 90 power tubes, and we noted with dismay that 90 power tubes oscillated excellently. After some time, we knew exactly which pilot lamp socket needed a 2-meter wire to eliminate the oscillation.

 

At a later stage, a drum (initially, the instructions were read from a plugboard via step switches) functioned as memory; for input and output, a tape reader (paper, as magnetic tape was yet to be invented) and a teleprinter were available. A wooden kitchen table served as the control desk.

 

Relays and tubes might have been the main logical building blocks, but they were certainly not the only ones. Without too much exaggeration, it can be said that the ARRA was a collection of what the electronic industry had to offer, a circumstance greatly contributed to by our frequent trips to Eindhoven, from where we often returned with some 'sample items.' On the train back, we first reminisced about the excellent lunch we had enjoyed and then inventoried to determine if we brought back enough to cover the travel expenses. This examination usually turned out positive.

 

It should be noted that the ARRA was mainly not clocked. Each primitive operation was followed by an 'operation complete' signal, which in turn started the next operation. It is somewhat amusing that nowadays such a system is sometimes proposed again (but hopefully more reliable than what we produced) to prevent glitch problems, a concept we were not familiar with at the time.

 

Needless to say, the ARRA was so unreliable that little productive work could be done with it. However, it was officially put into use. By mid-1952, this was the case. His Excellency F.J. Th. Rutten, then Minister of Education, appeared at our place and officially inaugurated the ARRA with some ceremony. For this purpose, we carefully chose a demonstration program with minimal risk of failure, namely producing random numbers à la Fibonacci. We had rehearsed the demonstration so often that we knew large parts of the output sequence by heart, and we breathed a sigh of relief when we found that the machine produced the correct output. In hindsight, I am surprised that this demonstration did not earn us a reprimand from higher-ups. Imagine: you are the Minister of Education, thoroughly briefed at the Department about the wonders of the upcoming computing machines; you attend the official inauguration, and you are greeted by a group explaining that, to demonstrate these wonders, the machine will soon produce a series of random numbers. When the moment arrives, they tell you with beaming faces that the machine works excellently. I would have assumed that, if not with the truth, at least with me, they were having a bit of fun. His Excellency remained friendly, a remarkable display of self-control.

 

The emotions stirred by this festivity were apparently too much for the ARRA. After the opening, as far as I recall, no reasonable amount of useful work was ever produced. After some time, towards the end of 1952, we decided to give up the ARRA as a hopeless case and do something else. There was another reason for this decision. The year 1952 should be considered an excellent harvest year for the Mathematical Center staff: in March and November of that year, Edsger Dijkstra and Gerrit Blaauw respectively appeared on the scene. Of these two, the latter is of particular importance for today's story and our future narrative. Gerrit had worked on computers at Harvard, under the supervision of Howard Aiken. He had also written a dissertation there and was willing to lend his knowledge and insight to the Mathematical Center. We were not very compliant boys at that time. Let me put it this way: we were aware that we did not have a monopoly on wisdom, but we found it highly unlikely that anyone else would know better. Therefore, the 'newcomer' was viewed with some suspicion. Gerrit’s achievement was all the greater when he convinced us in a lecture of the validity of what he proposed. And that was quite something: a clocked machine, uniform building blocks consisting of various types of AND/OR gates and corresponding amplifiers, pluggable (and thus interchangeable) units, a neat design method based on the use of two alternating, separate series of clock pulses, and proper documentation.

 

We were sold on the plan and got to work. A small difficulty had to be overcome: what we intended to do was obviously nothing more or less than building a new machine, and this fact encountered some political difficulties. The solution to this problem was simple: formally, it would be a 'revision' of the ARRA. The new machine was thus also called ARRA II (we shall henceforth speak of A II), but the double bottom was perfectly clear to any visitor: the frames of the two machines were distinctly separated, with no connecting wire between them.

 

For the AND/OR gates, we decided to use selenium diodes. These usually arrived in the form of selenium rectifiers, a sort of firecrackers of varying sizes, which we dismantled to extract the individual rectifier plates, about half the diameter of a modern-day dime. The assembly—the selenium plates couldn't tolerate high temperatures, so soldering was out of the question—was as follows: holes were drilled in a thick piece of pertinax. One end of the hole was sealed with a metal plug; into the resulting pot hole went a spring and a selenium plate, and finally, the other end of the hole was also sealed with a metal plug. For connecting the plugs, we thought the use of silver paint was appropriate, and soon we were busy painting our first own circuits. Some time later, we had plenty of reasons to curse this decision. The reliability of these connections was poor, to put it mildly, and around this time, the 'high-frequency hammer' must have been invented: we took a small hammer with a rubber head and rattled it along the handles of the units, like a child running its hand along the railings of a fence. It proved an effective means to turn intermittent interruptions into permanent ones. I won't hazard a guess as to how many interruptions we introduced in this way. At a later stage, the selenium diodes were replaced by germanium diodes, which were simply soldered.

 

The AND/OR gates were followed by a triode amplifier and a cathode follower. ARRA II also got a drum and a tape reader. For output, an electric typewriter was installed, with 16 keys operable by placing magnets underneath them. The decoding tree for these magnets provided us with the means to build an echo-check, and Dijkstra fabricated a routine where, simultaneously with printing a number, the same number (if all went well) was reconstructed. I assume we thus had one of the first fully controlled print routines. Characteristic of ARRA II’s speed was the time for an addition: 20 ms (the time of a drum rotation).

 

ARRA II came into operation in December 1953, this time without ministerial assistance, but it performed significantly more useful work than its predecessor, despite the technical difficulties outlined above.

 

The design phase of ARRA II marks for me the point where computer design began to become a profession. This was greatly aided by the introduction of uniform building blocks, describable in a multidimensional binary state space, making the use of tools like Boolean algebra meaningful. We figured out how to provide ARRA II with signed multiplicative addition for integers (i.e., an operation of the form (A,S) := (M) * (±S') + (A), for all sign combinations of (A), (S), and (M) before and of the result), despite the fact that ARRA II had only a counter as wide as a register. As far as I can recall, this was the first time I devoted a document to proving that the proposed solution was correct. Undoubtedly, the proof was in a form I would not be satisfied with today, but still... It worked as intended, and you can imagine my amusement when, years later, I learned from a French book on computers that this problem was considered unsolvable.

 

In May 1954, work began on a (slightly modified) copy of ARRA II, the FERTA (Fokker's First Calculating Machine Type A), intended for Fokker. The FERTA was handed over to Fokker in April 1955. This entire affair was mainly handled by Blaauw and Dijkstra. Shortly thereafter, Blaauw left the service of the Mathematical Center.

 

In June 1956, the ARMAC (Automatic Calculating Machine Mathematical Center), successor to ARRA II, was put into operation, several dozen times faster than its predecessor. Design and construction took about 1½ years. Worth mentioning is that the ARMAC first used cores, albeit on a modest scale (in total 64 words of 34 bits each, I believe). For generating the horizontal and vertical selection currents for these cores, we used large cores. To drive these large cores, however, they had to be equipped with a coil with a reasonable number of windings. Extensive embroidery work didn’t seem appealing to us, so the following solution was devised: a (fairly deep) rim was turned from transparent plastic. Thus, we now had two rings: the rim and the core. The rim was sawed at one place, and the flexibility of the material made it possible to interlock the two rings. Then, the coil was applied to the rim by rotating it from the outside using a rubber wheel. The result was a neatly wound coil. The whole thing was then encased in Araldite. The unintended surprising effect was that, since the refractive indices of the plastic and Araldite apparently differed little, the plastic rim became completely invisible. The observer saw a core in the Araldite with a beautifully regularly wound coil around it. We left many a visitor in the dark for quite some time about how we produced these things!

 

The time of amateurism was coming to an end. Computers began to appear on the market, and the fact that our group, which had now grown to several dozen employees, did not really belong in the Mathematical Center started to become painfully clear to us. Gradual dissolution of the group was, of course, an option, but that meant destroying a good piece of know-how. A solution was found when the Nillmij, which had been automating its administration for some time using Bull punch card equipment, declared its willingness to take over our group as the core of a new Dutch computer industry. Thus it happened. The new company, N.V. Elektrologica, was formally established in 1956, and gradually our group’s employees were transferred to Elektrologica, a process that was completed with my own transfer on January 1, 1959. As the first commercial machine, we designed a fully transistorized computer, the X1, whose prototype performed its first calculations at the end of 1957. The speed was about ten times that of the ARMAC.

 

With this, I consider the period I had to cover as concluded. When I confront my memories with the title of this lecture, it must be said that 'designing computers' as such hardly existed: the activities that could be labeled as such were absorbed in the total of concerns that demanded our attention. Those who engaged in constructing calculating machines at that time usually worked in very small teams and performed all the necessary tasks. We decided on the construction of racks, doors, and closures, the placement of fans (the ARMAC consumed 10 kW!), we mounted power distribution cabinets and associated wiring, we knew the available fuses and cross-sections of electrical cables by heart, we soldered, we peered at oscillographs, we climbed into the machine armed with a vacuum cleaner to clean it, and, indeed, sometimes we were also involved in design.

 

We should not idealize. As you may have gathered from the above, we were occasionally brought to the brink of despair by technical problems. Inadequate components plagued us, as did a lack of knowledge and insight. This lack existed not only in our group but globally the field was not yet mastered.

 

However, it was also a fascinating time, marked by a constant sense of 'never before seen,' although that may not always have been literally true. It was a time when organizing overtime, sometimes lasting all night, posed no problem. It was a time when we knew a large portion of the participants in international computer conferences at least by sight!


liam_on_linux: (Default)
2024-03-24 11:45 am

Another day, another paean of praise for the Amiga's 1980s pre-emptive multitasking GUI

Yes, the Amiga offered a GUI with pre-emptive multitasking, as early as 1985 or so. And it was affordable: you didn't even need a hard disk.

The thing is, that's only part of the story.

There's a generation of techies who are about 40 now who don't remember this stuff well, and some of the older ones have forgotten with time but don't realise. I had some greybeard angrily telling me that floppy drives were IDE recently. Senile idiot.

Anyway.

Preemptive multitasking is only part of the story. Lots of systems had it. Windows 2.0 could do preemptive multitasking -- but only of DOS apps, and only in the base 640kB of RAM, so it was pretty useless.

It sounds good but it's not. Because the other key ingredient is memory protection. You need both, together, to have a compelling deal. Amiga and Windows 2.x/3.x only had the preemption part, they had no hardware memory management or protection to go with it. (Windows 3.x when running on a 386 and also when given >2MB RAM could do some, for DOS apps, but not much.)

Having multiple pre-emptive tasks is relatively easy if they are all in the same memory space, but it's horribly horribly unstable.

Also see: microkernels. In size terms, AmigaOS was a microkernel, but a microkernel without memory protection is not such a big deal, because the hard part of a microkernel is the interprocess communication, and if they can just do that by reading and writing each other's RAM it's trivially easy but also trivially insecure and trivially unstable.

RISC OS had pre-emptive multitasking too... but only of text-only command-line windows, and there were few CLI RISC OS apps so it was mostly useless. At least on 16-bit Windows there were lots of DOS apps so it was vaguely useful, if they'd fit into memory. Which only trivial ones would. Windows 3 came along very late in the DOS era, and by then, most DOS apps didn't fit into memory on their own one at a time. I made good money optimising DOS memory around 1990-1992 because I was very good at it and without it most DOS apps didn't fit into 500-550kB any more. So two of them in 640kB? Forget it.

Preemption is clever. It lets apps that weren't designed to multitask do it.

But it's also slow. Which is why RISC OS didn't do it. Co-op is much quicker, which is also why OSes like RISC OS and 16-bit Windows chose it for their GUI apps: because GUI apps strained the resources of late-1980s/very-early-1990s computers. So you had two choices:

• The Mac and GEM way: don't multitask at all.

• The 16-bit Windows and RISC OS way: multitask cooperatively, and hope nothing goes wrong.

Later, notably, MacOS 7-8-9 and Falcon MultiTOS/MiNT/MagiC etc added coop multitasking to single-tasking GUI OSes. I used MacOS 8.x and 9.x a lot and I really liked them. They were extraordinarily usable to an extent Mac OS X has never and will never catch up with.

But the good thing about owning a Mac in the 1990s was that at least one thing in your life was guaranteed to go down on you every single day.               

(Repurposed from a HN comment.)