liam_on_linux: (Default)
I have just recently discovered that my previous post about Commodore BASIC went modestly viral, not only featuring on Hacker News but getting its own story on Hackaday.

Gosh.

This in itself has provoked some discussion. It's also resulted in a lot of people telling me that NO COMMODORE WAS TEH AWESOME DONT YOU EVEN and so on, as one might expect. Some hold that the C64's lousy PET BASIC was a good thing because it forced them to learn machine code.

People on every 8-bit home micro who wanted to do games and things learned machine code, and arguably, there is good utility in that. 8-bitters just didn't have the grunt to execute any interpreter that fast, and most of the cut-price home machines didn't have the storage to do justice to compilers.

But for those of us who never aspired to do games, who were just interested in playing around with algorithms, graphics, graphs and fractals and primitive 3D and so on, then there was an ocean of difference between a good-enough BASIC, like Sinclair BASIC, and the stone-age 1970s ones that Commodore shipped, designed for machines that didn't have graphics and sound. I learned BASIC on a PET 4032, but I never wanted a PET of my own -- too big, too expensive, and kinda boring. Well what use is an all-singing all-dancing colour computer with the best music chip on the market if it has PET BASIC with all the sound and pictures and motion of which a PET was capable? (I.e. none.)

I used my Spectrum, got a better Spectrum with more RAM, then got a PCW and learned a bit of CP/M, and then I got an Archimedes and a superb BASIC that was as quick as Z80 assembly on a Spectrum.

But what occurred to me recently was that, as I discovered from ClassicCmp, a lot of Americans barely know that there were other computer markets than the American one. They don't know that there were cheaper machines with comparable capabilities to the C64, but better BASICs (or much better, world-class BASICs.) They don't know that other countries' early-1980s 8-bit BASICs were capable of being rich, powerful tools, for learning advanced stuff like recursion and drawing full-colour high-res fractals using said recursion, entirely in BASIC.

For many people, Atari and Apple were mid-price-range and Commodore were cheap, and MS BASIC was basically all there was.

In the last 30 years, America has largely guided the world of software development. The world runs 2 software ecosystems: the DOS/Windows line (both American, derived from DEC OSes which were also American), and various forms of UNIX (also American).

All the other OSes and languages are mostly dead.

• Ada, the *fast* type-safe compiled language (French)? Largely dead in the market.

• The Pascal/Modula-2/Oberon family, a fast garbage-collected compiled family suitable for OS kernels (Swiss), or the pioneering family of TUI/GUI OSes that inspired Plan 9, Acme, & Go? Largely dead.

• Psion/EPOC/Symbian (British), long-battery-life elegant multitasking keyboard-driven PDAs, & later their super-fast realtime-capable C++ smartphone OS that could run the GSM comms stack on the same CPU as the user OS? Totally dead.

• Nokia's elegant, long-life, feature-rich devices, the company who popularised first the cellphone and then the smartphone? Now rebadges Chinese/American kit.

• Acorn RISC OS (British), the original ARM OS, limited but tiny and blindingly fast and elegant? Largely dead.

• DR-DOS, GEM, X/GEM, FlexOS -- mostly the work of DR's UK R&D office? Dead & the American company that inherited the remains didn't properly open-source them.

• Possibly the best and richest 8-bit word processor ever, LocoScript; the pioneering GUI language BASIC+; and the first integrated internet suite for Windows, Turnpike -- all from British Locomotive Software? Dead.

In my early years in this business, in the 1980s and 1990s, there were as many important European hardware and software products as there were American, including European CPUs and European computer makers, and European software on American hardware.

Often, the most elegant products -- the ones that were the most powerful (e.g. the Archimedes), the most efficient (e.g. Psion), or had the longest battery life (e.g. Nokia) -- are all dead and gone, and nearly forgotten.

30y ago I had a personal RISC workstation for under $1000 that effortlessly outperformed IBM's fastest desktop computers costing 10x more. British.

25y ago I had excellent multiband mobile phones with predictive text and an IRDA link to my PDA. The phone lasted a week on a charge, and the PDA a month or 2 on 2 AA batteries. British and Finnish.
15y ago I had a smartphone that lasted a few days on a charge, made by the company who made the phone above running software from the PDA company. Finnish.

Now, I have sluggish desktops and sluggish laptops, coupled with phones that barely last a day...

And I think a big reason is that Europe was poorer, so product development was all about efficiency, cost-reduction, high performance and sparing use of resources. The result was very fast, efficient products.

But that's not the American way, which is to generalise. Use the most minimal, close-to-the-metal language that will work. Don't build new OSes -- reuse old ones and old, tried-and-tested tools and methods. Use the same OS on desktop and laptop and server and phone. Moore's Law will catch up and fix the performance.

It's resulted in amazing products of power and bling... but they need teams of tens of thousands to fix the bugs caused by poor languages and 1970s designs, and a gigabyte of updates a month to keep them functional. It's also created industries worth hundreds of millions: criminals exploiting the security holes, and developing-world callcentre businesses providing the first-line support these overcomplex products need.

And no, I am not blaming all that on Commodore or the C64! 😃 But I think some of the blame can be pointed that way. Millions of easily-led kids being shown proof that BASIC is crap and you've got to get close to the metal to make it work well -- all because one dumb company cut a $20 corner too much.
liam_on_linux: (Default)
In answer to this Ask HN question.

• I had a client with a Novell IntraNetware 4.1 network. I did a bargain-basement system upgrade for them. With a local system builder, we took a whole storage closet full of decade-old 386 and 486 desktops and turned them into Cyrix 6x86 166+ clients. The motherboards had integrated graphics and NICs (rare back then), 32MB RAM and a smallish local EIDE hard disk, say 1.2GB. No CD drives, original 14-15" SVGA CRTs.

A 2nd Novell server would have been too expensive, so I put in an old Pentium 133 workstation as a fileserver running Caldera OpenLinux with its built-in MARSNWE Netware server emulation. It held CD images of NT 4 Workstation, the latest Service Pack, the latest IE, MS Office 97 and a few other things like printer drivers. Many gigs of stuff, which would have required a new hard disk in the main server, which with Netware would have meant a mandatory RAM upgrade -- Netware 3 & 4 kept disks' FATs in RAM, so the bigger the disk, the more RAM the server needed.

On each client, I booted from floppy and installed DOS 6.22. Then I installed the Netware client and copied the NT 4 installation files from the new server. Ran WINNT.EXE and half an hour later it was an NT workstation. Install Office etc. straight off the server. (An advantage of this was that client machines could auto-install any extra bits they needed straight off the server.)

For the cost of one fancy Dell server & a NOS licence, I upgraded an entire office to a fleet of fast new PCs. As a bonus, they had no local optical drives for users to install naughty local apps.

• Several 486s with PCI USB cards, driving "Manta Ray" USB ADSL modems -- yes, modems -- running Smoothwall, a dedicated Linux firewall distro.

http://www.computinghistory.org.uk/det/36102/Alcatel-Stingra...

https://www.smoothwall.org/

This was at the end of the 1990s, when 486s were long obsolete, but integrated router/firewalls were still very expensive.

Smoothwall also ran a caching Squid proxy server, which really sped up access for corporate users regularly accessing the same stuff. For instance, if all the client machines ran the same version of Windows, say, Windows 2000 Pro, then after the first ran Windows Update, all successive boxes downloaded the updates from the Smoothwall box in seconds. Both far easier and much cheaper than MS Systems Management Server. (And bear in mind, at the turn of the century, fast broadband was 1Mb/s. Most of my clients had 512kb/s.)

There was one really hostile, aggressive guy in the Smoothwall team, who single-handedly drove away a lot of people, including me. The last such box I put in ran IPCop instead. http://www.ipcop.org/ After that, though, routers became affordable and a lot easier.
liam_on_linux: (Default)

I was a huge Archimedes fan and still have an A310, an A5000, a RiscPC and a RasPi running RISC OS.

But no, I have to disagree. RISC OS was a hastily-done rescue effort after Acorn PARC failed to make ARX work well enough. I helped to arrange this talk by the project lead a few years ago.

RISC OS is a lovely little OS and a joy to use, but it's not very stable. It has no worthwhile memory protection, no virtual memory, no multi-processor support, and true preemptive multitasking is a sort of bolted-on extra (the Task Window). When someone tried to add pre-emption, it broke a lot of existing apps.

It was not some industry-changing work of excellence that would have disrupted everything. It was just barely good enough. Even after 33 years, it doesn't have wifi or bluetooth support, for instance, and although efforts are going on to add multi-processor support, it's a huge amount of work for little gain. There are a whole bunch of memory size limits in RISC OS as it is -- apps using more than 512MB of RAM are very difficult, and need hackery to work at all.

IMHO what Acorn should have done is refocus on laptops for a while -- they could have made world-beating thin, light, long-life, passively-cooled laptops in the late 1990s -- and meanwhile worked with Be on BeOS for a multiprocessor Risc PC 2. I elaborated on that here on this blog.

But RISC OS was already a limitation by 1996 when NT4 came out.

I've learned from Reddit that David Braben (author of Elite and the Archimedes' stunning "Lander" demo and Zarch game) offered to add enhancements to BBC BASIC to make it easier to write games. Acorn declined. Apparently, Sony was also interested in licensing the ARM and RISC OS for a games console -- probably the PS1 -- but Acorn declined. I had no idea. I thought the only 3rd party uses of RISC OS were NCs and STBs. Acorn's platform was, at the time, almost uniquely suitable for this -- a useful Internet client on a diskless machine.

The interesting question, perhaps, is the balance between pragmatic minimalism as opposed to wilful small-mindedness.

I really recommend the Chaos Computer Congress Ultimate Archimedes talk on this subject.

There's a bunch of stuff in the original ARM2/IOC/VIDC/MEMC design (e.g. no DMA, e.g. the 26-bit Program Counter register) that looks odd but reflects pragmatic decisions about simplicity and cost above all else... but a bit like the Amiga design, one year's inspired design decision may turn out, a few years later, to be a horrible millstone around the team's neck. Even the cacheless design, which was carefully tuned to the access speeds of mid-1980s fast-page-mode DRAM.

They achieved greatness by leaving a lot out -- but not just from some sense of conceptual purity. Acorn's Steve Furber said it best: "Acorn gave us two things that nobody else had. No people and no money."

Acorn implemented their new computer on four small, super-simple chips and a minimalist design, not because they wanted to, but because they were a design team of about a dozen people with almost no budget. They found elegant work-arounds and came up with a clever design because that's all they could do.

I think it may not be a coincidence that a design that was based on COTS parts and components, assembled into an expensive, limited whole eventually evolved into the backbone of the entire computer industry. It was poorly integrated but that meant that parts could be removed and replaced without breaking the whole: the CPU, the display, the storage subsystems, the memory subsystem, in the end the entire motherboard logic and expansion bus.

I refer, of course, to the IBM PC design. It was poor then, but now it's the state of the art. All the better-integrated designs with better CPUs are gone, all the tiny OSes with amazing performance and abilities in a tiny space are gone.

When someone added proper pre-emptive multitasking to RISC OS, it could no longer run most existing apps. If CBM had added 68030 memory management to AmigaOS, it would have broken inter-app communication.

Actually, the much-maligned Atari ST's TOS got further, with each module re-implemented by different teams in order to give it better display support, multitasking etc. while remaining compatible. TOS became MINT -- Mint Is Not TOS -- and then MINT became TOS 4. It also became the proprietary MaGiC OS-in-a-VM for Mac and PC, and later, volunteers integrated 3rd party modules to create a fully GPL edition, AFROS.

But it doesn't take full advantage of later CPUs and so on -- partly because Atari didn't.

Apple famously tried to improve MacOS into something with proper multitasking, nearly went bankrupt doing so, bought their co-founder's company NeXT and ended up totally dumping their own OS, frameworks, APIs and tooling -- and most of the developers -- and switching to a UNIX.

Sony could doubtless have done wonderful stuff with RISC OS on a games console -- but note that the Playstation 4 runs Orbis, which is based on FreeBSD 9, but none of Sony's improvements have made it back to FreeBSD.

Apple macOS is also in part based on FreeBSD, and none of its improvements have made it back upstream. macOS has a better init system, launchd, and a networked metadata directory, netinfo, and a fantastic PDF-based display server, Quartz, as well as some radical filesystem tech.
You won't find any of that in FreeBSD. It may have some driver stuff but the PC version is the same ugly old UNIX OS.

If Acorn had made its BASIC into a games engine, that would have reduced its legitimacy in the sciences market. Gamers don't buy expensive kit; universities and laboratories do. Games consoles sell at a loss, like inkjet printers -- the makers earn a profit on the games or ink cartridges. It's called the Gillette razors model.

As a keen user, it greatly saddened me when Acorn closed down its workstations division, but the OS was by then a huge handicap, and there simply wasn't an available replacement by then. As I noted in that blog post I linked to, they could have done attractive laptops, but it wouldn't have helped workstation sales, not back then.

The Phoebe, the cancelled RISC PC 2, had PCI and dual-processor support. Acorn could have sold SMP PCs way cheaper than any x86 vendor, for most of whom the CPU was the single most expensive component. But it wasn't an option, because RISC OS couldn't use 2 CPUs and still can't. If they'd licensed BeOS, and maybe saved Be, who knows -- a decade as the world's leading vendor of inexpensive multiprocessor workstations doesn't sound so bad -- well, the resultant machines would have been very nice, but they wouldn't be RISC PCs because they wouldn't run Archimedes apps, and in 1998 the overheads of running RISC OS in a VM would have been prohibitive. Apple made it work, but some 5 years later, when it was normal for a desktop Mac to come with 128MB or 256MB of RAM and a few gigs of disk, and it was doable to load a 32-64MB VM with another few hundred megs of legacy OS in it. That was rather less true in 1997 or 1998, when a high-end PC had 32 or 64MB of RAM, a gig of disk, and could only take a single CPU running at a couple of hundred megahertz.

I reckon Acorn and Be could have done it -- BeOS was tiny and fast, RISC OS was positively minute and blisteringly fast -- but whether they could have done it in time to save them both is much more doubtful.
I'd love to have seen it. I think there was a niche there. I'm a huge admirer of Neal Stephenson and his seminal essay In The Beginning Was The Command Line is essential reading. It dissects some of the reasons Unix is the way it is and accurately depicts Linux as the marvel it was around the turn of the century. He lauds BeOS, and rightly so. Few ever saw it but it was breathtaking at the time.

Amiga fans loved their machine, not only for its graphics and sound, but multitasking too. This rather cheesy 1987 video does show why...


Just a couple of years later, the Archimedes did pretty much all that and more and it did it with raw CPU grunt, not fancy chips. There are reasons its OS is still alive and still in use. Now, it runs on a mass-market £25 computer. AmigaOS is still around, but all the old apps only run under emulation and it runs on niche kit that costs 5-10x more than a PC of comparable spec.

A decade later, PCs had taken over and were stale and boring. Sluggish and unresponsive despite their immense power. Acorn computers weren't, but x86 PCs were by then significantly more powerful, had true preemptive multitasking, built-in networking and WWW capabilities and so on. But no pizazz. They chugged. They were boring office kit, and they felt like it.

But take a vanilla PC and put BeOS on it, and suddenly, it booted in seconds, ran dozens of apps with ease without flicker or hesitation, played back multiple video streams while rendering them onto OpenGL 3D solids. And, like the Archimedes did a decade before, all in software, without hardware acceleration. All the Amiga's "wow factor" long after we'd given up ever seeing it again.

This was at a time when Linux hadn't even got a free desktop GUI yet, required hand-tuning thousands of lines of config files like OS/2 at its worst, and had no productivity apps.

But would this have been enough to keep A&B going until mass-market multi-core x86 chips came along and stomped them? Honestly, I really doubt it. If Apple had bought Be, it would have got a lovely next-gen OS, but it wouldn't have got Steve Jobs, and it wouldn't have been able to tempt classic MacOS devs to the new OS with amazing next-gen dev tools. I reckon it would have died not long after.

If Acorn and Be had done a deal, or merged or whatever, would the cheapest dual-processor RISC workstation in the industry, with amazing media abilities, have had enough appeal? (Presumably followed, soon after, by quad-CPU and even 6- or 8-CPU boxes.)

I hate to admit it, but I really doubt it.
liam_on_linux: (Default)
(Repurposing a couple of Reddit comments I made to a chap considering switching to Linux because of design and look-and-feel considerations.)

I would say that you need to bear in mind that Linux is not a single piece of software by a single company. Someone once made the comparison something like this: "FreeBSD is a single operating system. Linux is not. Linux is 3,000 OS components flying in close formation."

The point is that every different piece was made by a different person, group of people, organisation or company, working to their own agenda, with their own separate plans and designs. All these components don't look the same or work the same because they're all separately designed and written.

If you install, say, a GTK-based desktop and GTK-based components, then there's a good chance there will be a single theme and they'll all look similar, but they might not work similarly. If you then install a KDE app it will suck in a whole ton of KDE libraries and they might look similar but they might also look totally different -- it depends on how much effort the distro designers put in.

If you want a nice polished look and feel, then your best bet is to pick a mainstream distro and its default desktop, because the big distro vendors have teams of people trying to make it look nice.

That means Ubuntu or Fedora with GNOME, or openSUSE with KDE.

(Disclaimer: I work for SUSE. I run openSUSE for work. I do not use KDE, or GNOME, as I do not personally like either.)

If you pick an OS that is a side-project of a small hardware vendor, then you are probably not going to get the same level of fit and finish, simply because the big distros are assembled by teams of tens to hundreds of people as their day job, whereas the smaller distros are a handful of volunteers, or people working on a side-job, and the niche distros are mostly one person in their spare time, maybe with a friend helping out sometimes.

Windows is far more consistent in this regard, and macOS is more consistent than Windows. None of them are as consistent as either Windows or Classic MacOS were before the WWW blew the entire concept of unified design and functionality out of the water and vapourised it into its component atoms, never to be reassembled.

Don't judge a book by its cover -- everyone knows that. Well, don't judge a distro by a couple of screenshots.

As for my expertise -- well, "expertise" is very subjective! :-D You would easily find people who disagree with me -- there are an awful lot of strong biases and preconceptions in the Linux world.

For one thing, it is so very customisable that people have their own workflows that they love and they won't even consider anything else.

For another, there is 51 years of UNIX™ cultural baggage. For example in the simple matter of text editors. There are two big old text editors in the UNIX world, both dating from the 1970s. Both are incredibly powerful and capable, but both date from an era before PCs, before screens could display colours or formatting or move blocks of characters around "live" in real time, before keyboards had cursor keys or keys for insert, delete, home, end, and so on.

So both are horrible. They are abominations from ancient times, with their own weird names for everyday stuff like "files" and "windows" -- because they are so old they predate words like "files" and "windows"! They don't use the normal keyboard keys and they have their own weird names for keyboard keys, names from entire companies that went broke and disappeared 30 or 40 years ago.

But people still use these horrible old lumps of legacy cruft. People who were not yet born when these things were already obsolete will fight over them and argue that they are the best editors ever written.

Both GNOME and KDE are very customisable. Unfortunately, you have to customise them in the ways that their authors thought of and permitted.

KDE has a million options to twiddle, but I happen to like to work in ways that the KDE people never thought of, so I don't get on with it. (For example, on a widescreen monitor, I put my taskbar vertically on the left side. This does not work well with KDE, or with MATE, or with Cinnamon, or most other desktops, because they never thought of it or tried it, even though it's been a standard feature of Windows since 1995.)

GNOME has almost no options, and its developers are constantly looking for things they don't use and removing them. (Unfortunately, some of these are things I use a dozen times a day. Sucks to be me, I guess.) If you want to customise GNOME, you have to write your own add-on extensions in JavaScript. JavaScript is very trendy and popular, which is a pity, as it is probably the worst programming language in the world. After PHP, anyway.

So if you want to customise GNOME, you'd better hope that someone somewhere has programmed the customisation you want, and that their extension still works, because there's a new version of GNOME every 6 months and it usually breaks everything. If you have a broken extension, your entire desktop might crash and not let you log in, or log out, or do anything. This is considered perfectly normal in GNOME-land.

Despite this, these two desktops are the most popular ones around. Go figure.

There was one that was a ripoff of Mac OS X, and I really liked it. It was discontinued a few years ago. Go figure.

Rather than ripping off other desktops, the trend these days is to remove most of the functions, and a lot of people like super-minimal setups with what are called "tiling window managers". These basically try to turn your fancy true-colour hardware-3D-accelerated high-definition flat-panel monitor into a really big glass text terminal from 1972. Go figure.

There used to be ripoffs of other OSes, including from dead companies who definitely won't sue. There were pretty good ripoffs of AmigaOS, classic MacOS, Windows XP, Acorn RISC OS, SGI Irix, NeXTstep, Sun OpenLook, The Open Group's CDE and others. Most are either long dead, or almost completely ignored.

Instead today, 7 out of the 8 leading Linux desktops are just ripoffs of Windows 95, of varying quality. Go figure.

liam_on_linux: (Default)
[livejournal.com profile] lapswood was kind enough to record an impromptu, unscripted talk I did at Dysprosium, the 2018 UK Eastercon. I thought I was chairing a panel discussion, but there was no-one else, just me! So, I had to wing it, and try to recreate my March talk from memory.

Judge for yourself if it works...    

http://bit.ly/1OZhKEi (Dropbox link to 58 minute MP3)
liam_on_linux: (Default)
In the context of the `apt` command, `update` means "refresh the database containing the current index of what versions are in the configured repositories". It does not install, remove, upgrade or change any installed software.

I wonder if this is because of people lacking historical context?

The important things to know are 3 concepts: dependencies, recursion, and resolution.

The first Linux distributions, like SLS and Yggdrasil and so on, were built from source. You want a new program? Get the source and compile it.

Then package managers were invented. Someone else got the source, compiled it, bundled it up in a compressed archive with any config files it needed and instructions for where to put its contents on your computer.

As programs got more complex, they were built using other programs. So the concept of "dependencies" appeared. Let's say text editor "Superedit" can import RFT (Revisable-Form Text) files, and save RTF (Rich Text Format) files. It does not read these formats itself: it uses another tool, rich_impex, and rich_impex needs rft_import and rtf_export.

(Note: RTF and RFT are real formats and they are totally different and unrelated. I picked them intentionally as their names are so similar.)

If you need a new version of Superedit, then you first need a new version of rich_impex. But rich_impex needs rft_import and rtf_export.

So in the early days of Linux with package managers, e.g. Red Hat Linux 4, if you tried to install superedit-2.rpm, it would fail, saying it needed rich_impex-1.1.rpm. This is called a dependency.

And if you tried to install rich_impex-1.1.rpm, it said you needed rft_import 1.5 and rtf_export 1.7.

So to install Superedit 2, you had to try, fail, note down the error, then go try to install rich_impex, which would fail, then note down the error, then go install rft_import 1.5, and rtf_export 1.7.

THEN you could install rich_impex 1.1.

THEN you would find that it was now possible to install superedit-2.rpm.

It was a lot of work. Installing something big, like KDE 1, would be almost impossible as you had to go find hundreds of these dependencies, by trial and error. It could take days.

Debian was the first to fix this. To its package manager, dpkg, it added another tool on top: apt.

Apt did automatic dependency resolution. So when you tried to install superedit 2, it would check and find that superedit-2 needed rich_impex-1.1 and install that for you.

This is no use if it does 1 level and stops. It would fail when it couldn't install rich_impex because that in turn had its own dependencies.

So what is needed is a tool that goes, installs your dependencies, and their dependencies, and their dependencies, all the way down, starting with the ends of each chain. This requires a  programming technique called recursion:
https://dev.to/rapidnerd/comment/62km
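
As a rough sketch of the idea -- using the made-up packages from this post, with placeholder functions standing in for the real package database and installer -- the logic looks something like this:

```
#!/bin/bash
# Toy recursive dependency resolver. list_deps and install_one are
# placeholders for the real package index and installer; the package
# names are the made-up ones from this post.

list_deps() {
  case "$1" in
    superedit)  echo "rich_impex" ;;
    rich_impex) echo "rft_import rtf_export" ;;
    *)          echo "" ;;
  esac
}

install_one() {
  echo "installing $1"
}

install_with_deps() {
  local dep
  for dep in $(list_deps "$1"); do
    install_with_deps "$dep"   # recurse: handle the deps' deps first
  done
  install_one "$1"
}

install_with_deps superedit
# Prints: rft_import, rtf_export, rich_impex, then superedit --
# the ends of each chain first, which is exactly what apt has to do.
```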

Now, let's go back to superedit 2, which depends on rich_impex, which in turn depends on rft_import and rtf_export.

But sadly, the maintainer of rft_import got run over by a bus and died. So, no new versions of rft_import. That means no new version of rich_impex which means no new version of superedit.

So someone comes along, reads the source code of rft_import, thinks they could do it better, and writes their own routine. They call it import_rft because they don't want to have to fix any bugs in rft_import.

The writer of rich_impex does a new version, rich_impex 2. They switch the import filter, so rich_impex 2 uses import_rft 1.0 and rtf_export 1.8.

Superedit 3 also comes out and it uses rich_impex 2. So if you want to upgrade from superedit 2 to superedit 3, you need to upgrade rich_impex from 1.1 to 2. To get rich_impex 2, you need to remove rft_import and install a new dependency, import_rft.

When you start doing recursive solution to a problem, you don't know where it's going to go. You find out on the way.

So apt has 2 choices:

[1] recurse, install newer versions of anything needed, until you can upgrade the target package (which could be "all packages"), but don't add anything that isn't there

OR

[2] recurse, install all newer versions of anything needed INCLUDING ADDING NEW PACKAGES, until the entire distribution has been upgraded

#1 is meant for 1 program at a time, but you can tell it to do all programs. But it won't add new packages.

So if you use `apt-get upgrade` you will not get superedit 3, because to install superedit 3, it will have to install rich_impex 2, and that means it would need to remove rft_import and install import_rft instead. `upgrade` won't do that -- it only installs newer versions. So your copy of superedit will be stuck at v2.

#2 is meant for upgrading the whole installed system to the latest version of all packages, including adding any new requirements it needs on the way.

If you do it, it will replace superedit 2 with superedit 3, because `dist-upgrade` has the authority to remove the rft_import module and install a different one, import_rft, in its place.

Neither of them will rewrite the sources listed in /etc/apt/sources.list. Neither of them will ever upgrade the entire distro to a new release. Neither of them will ever move from one major release of Ubuntu or Debian or Crunchbang or Mint or Bodhi or whatever to a new release.

All they do is update that version of the distribution to the newest version of that release.

"Ubuntu 20.04" is not a distribution. "Ubuntu" is the distribution. "20.04" is a release of the distribution. It's the 32nd so far. (W, H, B, D then through the alphabet from E to Z, then back to A. Now we're at F again.)

So `dist-upgrade` does not upgrade the release. It upgrades your whole DISTRO but only to the latest version of that release.

If you want a new release then you need `do-release-upgrade`.

Do not use `apt upgrade` for upgrading the whole distro; `apt dist-upgrade` does a more thorough job. `apt upgrade` will not install superedit 3 because it won't add new packages or remove obsolete ones.

In the old days, you should have used `apt-get dist-upgrade`, because it would replace or remove obsoleted dependencies.

Now, you should use `apt full-upgrade` which does the same thing.

Relax. Rest assured, neither will ever, under any circumstances, upgrade to a new release.
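
For quick reference, then, the whole family in one place -- nothing exotic, just the stock commands on any current Debian or Ubuntu system:

```
sudo apt update            # refresh the package index only; changes nothing installed
sudo apt upgrade           # install newer versions, but never add or remove packages
sudo apt full-upgrade      # like upgrade, but may add or remove packages to resolve deps
sudo apt-get dist-upgrade  # the older spelling of full-upgrade
sudo do-release-upgrade    # Ubuntu only: the one command that moves to a new release
```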
liam_on_linux: (Default)
Commodore's Jack Tramiel got a very sweet deal from Microsoft for MS BASIC, as used in CBM's PET, one of the first integrated microcomputers. The company didn't even pay royalties. The result is that CBM used pretty much the same BASIC in the PET, VIC-20 and C64. It got trivial adjustments for the hardware, but bear in mind: the PET had no graphics, no colour, and only a beep; the VIC-20 had (poor) graphics and sound, and the C64 had quite decent graphics and sound.

So the BASIC was poor for the VIC-20 and positively lousy on the C64. There were no commands to set colours, draw or load or save graphics, play music, assemble sound effects, nothing.

I.e. in effect the same BASIC interpreter got worse and worse with each successive generation of machines, ending up positively terrible on the C64. You had to use PEEKs and POKEs to use any of the machine's facilities.

AIUI, CBM didn't want to pay MS for a newer, improved BASIC interpreter. It thought, with some justice, that the VIC-20 and C64 would mainly be used as games machines, running 3rd-party games written in assembly language for speed, and so the BASIC was a reasonable saving: a corner it could afford to cut.

The C64 also had a very expensive floppy disk drive (with its own onboard 6502 derivative, ROM & RAM) but a serial interface to the computer, so it was both dog-slow and very pricey.

This opened up opportunities for competition, at least outside the US home market. It led to machines like (to pick 2 extremes):
• the Sinclair ZX Spectrum, which was cheaper & had a crappy keyboard, no joystick ports, etc., but whose BASIC included graphics and sound commands.
• the Acorn BBC Micro, which was expensive (like the C64 at launch), but included a superb BASIC (named procedures with local variables, allowing recursion; if/then/else, while...wend, repeat/until etc., and inline assembly code), multiple interfaces (printer, floppy drive, analogue joysticks, a 2nd CPU, programmable parallel expansion bus, etc.)

All because CBM cheaped out and used a late-1970s MS BASIC in an early-1980s machine with, for the time, quite high-end graphics and sound.

The C64 sold some 17 million units, so a lot of '80s kids knew nothing else and thought the crappy BASIC was normal. Although it was one of the worst BASICs of its day, it's even been reimplemented as FOSS now! The worst BASIC ever lives on, while far finer versions such as Beta BASIC or QL SuperBASIC languish in obscurity.

It is also largely responsible, all on its own, for a lot of the bad reputation that BASIC has to this day, which in turn was in part responsible for the industry's move away from minis programmed in BASIC (DEC, Alpha Micro, etc.) and towards *nix programmed in C, and *nix rivals such as OS/2 and Windows, also programmed in C.

Which is what has now landed us with an industry centred around huge, unmaintainable, insecure OSes composed of tens of millions of lines of unsafe C (& C derivatives), daily and weekly mandatory updates in the order of hundreds of megabytes, and a thriving industry centred around keeping obsolete versions of these vast monolithic OSes (which nobody fully understands any more) maintained and patched for 5, 10, even 15 or so years after release.

Which is the business I work in.

Yay.

It sounds ridiculous but I seriously propose that much of this is because the #1 home computer vendor in the Western world kept using a cheap and nasty BASIC for nearly a decade after its sell-by date.

CBM had no real idea what it was doing. It sold lots of PETs, then lots more VIC-20s, then literally millions of C64s, without ever improving the onboard software to match the hardware.

So what did it do next? A very expensive portable version, for all the businesspeople who needed a luggable home gaming computer.

Then it tried to sell incompatible successor machines, which failed -- the Commodore 16 and Plus 4.

A better BASIC and bundled ROM business apps (why?!) -- but they were not superior replacements for its best-selling line. Both flopped horribly.

This showed that CBM apparently still had no real clue why the C64 was a massive hit, or who was buying it, or why.

Later it offered the C128, which had multiple operating modes, including a much better BASIC and an 80-column display, but also an entire incompatible 2nd processor -- a Z80 so it could run CP/M. This being the successor model to the early-'80s home computer used by millions of children to play video games. They really did not want, need or care about CP/M of all things.

This sold a decent 5 million units, showing how desperate C64 owners were for a compatible successor.

(Commodore people often call this the last new 8-bit home computer -- e.g. its lead designer Bil Herd -- which of course it wasn't. The Apple ][GS was in some ways more radical -- its 16-bit enhanced 6502, the 65C816, was more use than the C128's 2 incompatible 8-bit chips, for a start -- and came out the following year. Arguably a 16-bit machine, though, even if it was designed to run 8-bit software.

But then there was the UK SAM Coupé, a much-enhanced ZX Spectrum clone with a Z80, released 4 years later in 1989. Amstrad's PcW 16, again a Z80 machine with an SSD and a GUI OS, came out in 1995.)

There was nearly another, incompatible of course, successor model later still, the C65.

That would have been a worthy successor, but by then, CBM had bought the Amiga and wasn't interested any more -- and wisely, I think, didn't want to compete with itself.

To be fair, it's not entirely obvious what CBM should have done to enhance the C64 without encroaching too much into the Amiga's market. A better CPU, such as the SuperCPU, a small graphics upgrade as in the C128, and an optional 3.5" disk drive would have been enough, really. The GEOS OS was available and well-liked.

GEOS was later ported to the x86, as used in the HP OmniGo 100 -- I have one somewhere -- and later became GeoWorks Ensemble, which tried to compete with MS Windows. PC GEOS is still alive and is now, remarkably, FOSS. I hope it gets a bit of a renaissance -- I am planning to try it on my test Open DR-DOS and IBM PC-DOS 7.1 systems. I might even get round to building a live USB image for people to try out. 
liam_on_linux: (Default)
I have several RasPis lying around the place. I sold my π2 when I got a π3, but then that languished largely unused for several years, after the fun interlude of getting it running RiscOS in an old ZX Spectrum case.

Then I bought myself a π3+ in a passive-cooling heatsink/case for Yule 2018, which did get used for some testing at work, and since then, has also been gathering dust. I am sure this is the fate of many a π.

The sad thing about the RasPi is that it's a bit underpowered. Not unreasonable for a £30 computer. The π1 was a single, rather gutless ARMv6 core. The π2 at least had 4 cores, but still weedy ones. The π3 had faster cores and wifi, but all still only have 1GB of non-upgradable RAM. They're not really up to running a full Linux desktop. What's worse, the Ethernet and wifi are USB devices, sharing the single USB2 bus with any external storage – badly throttling the bandwidth for server stuff. The π3+ is a bit less gutless but all the other limitations apply – and it needs more power and some form of cooling.

But then a chap on FesseBouc offered an official π touchscreen, used and a bit cheaper than new. That gave me an idea. I listen to a lot of BBC 6music – I am right now, in fact – but it needs a computer. Czech radio seems to mainly play a lot of bland pop which isn't my thing, and of course I can't understand a useful amount of Czech yet. It's at about the level of my Swedish in 1993 or so: if I listen intently and concentrate very hard, I may be able to work out the subject being discussed, but not follow the discussion.

But I don't want to leave a laptop on 24×7 and I definitely don't want a big computer with a separate screen, keyboard and mouse doing it. What I want is something the size of a radio but which can connect to wifi and stream music to simple old-fashioned wired speakers, without listening to me. I most definitely do not want a spy basestation for a dot-com listening to my home, thank you.
So I bought the touchscreen, connected it to my old π3, powered them both off a couple of old phone chargers, bunged in a spare µSD card, and started playing with software. I know where I am with software.

First I tried OSMC. It worked, detected and used the touchscreen, and could connect to my wifi... but it doesn't directly support streaming audio, as far as I can tell, and I could not work out how to install add-ins, nor how to update the underlying Linux.

I had a look at LibreElec but it looked very similar. While I don't really want the bloat of an entire general-purpose Linux distro, I just want this to work, and I had 8GB to play with, which is plenty.

So next I tried XBian. This is a cut-down Debian, running on Btrfs, which boots straight to Kodi. Kodi used to be called XBox Media Centre, and that's where I first met it – I softmodded an old original black XBox that my friend Dop gave me and put XBMC on it. It streamed movies off my server and played DVDs through my TV set, which is all I needed.

XBian felt a lot more familiar. It has a settings page through which I could update the underlying OS. It worked with the touchscreen out of the box. It has a UI for connecting to wifi. It too didn't include streaming Internet radio support, but it had a working add-ons browser, in which I found both BBC iPlayer and Internet Radio extensions.

Soon I was in business. It connected to wifi, it was operable with the touchscreen, connected to some old Altec Lansing speakers I had lying around. So I bought a case from Mironet, my friendly local electronics store. (There is a veritable Aladdin's Cave even closer to my office, GM electronic – but I'm afraid they're not very friendly. Sort of the opposite, in fact.)

I assembled the touchscreen and π3 into my case, and hit a problem. Only one available opening for a µUSB lead, but the screen needs its own. Some Googling later, it emerges that you can power the touchscreen from the π's GPIO pins, but I don't have the cables.

So off to GME it was, and some tricky negotiations later, I bought a strip of a dozen jumper cables. Three of them got me in business, but since it was £1 for all of them, I can't really complain about the wastage.

So now, there's a little compact unit in my bedroom which plays the radio whenever I want, on the power usage of a lightbulb. No fans, no extra cooling, nothing. I've had to use my single Official Raspberry Pi PSU brick, as all my phone chargers gave me the lightning-bolt icon undervoltage warning.

This emboldened me for Project 2.

Some years ago, Morgan's had a cheap offer on 2TB hard disks. I bought all their remaining stock, 5 mismatched drives. One went into an external case for my Mac mini and later died. The other four were in a box, pending installation into my old HP Microserver G1, which currently has 4×300GB drives in it, in a Linux software RAID controlled by Ubuntu. (Thanks to [livejournal.com profile] hobnobs!) However, this only has 2GB of RAM, and I figured that wasn't enough for a 5TB RAID. I may have accidentally killed it trying to fit more RAM, and the job of troubleshooting and fixing it has been waiting for, um, a couple of years now.

Meanwhile, the iMac's 1TB Fusion Drive was at 97.5% full and I don't have any drives big enough to back up everything on it.

I slowly and reluctantly conceded to myself that it might be quicker and easier to build a new server than fix and upgrade the old one.

The Raspberry Pi 4 is quite a different beast. Apart from a beefier 64-bit ARMv8 quad-core, it has 2GB and 4GB RAM options, and it has much faster I/O. Its wifi and Ethernet are directly attached to the CPU, not on the USB bus, and it has two separate USB buses: the old USB2 bus (480Mb/s) and a new 5Gb/s USB3 bus. This is useful power. It can also drive dual monitors via twin µHDMI ports.

But the π4 runs quite hot. The Flirc case my π3+ is in is only meant for home theatre stuff. A laden π4 needs something beefier, and sadly, my local mail-order electronics place, Alza, doesn't offer anything that appealed. I found the Maouii case on Amazon Germany and that fit the bill. (It also gave me a good excuse to buy the entire Luna trilogy by Ian McDonald in order to qualify for free shipping.)

So, from Alza I ordered a 4GB π4 and 4 USB3 desktop drive cases. From Mall CZ I ordered a USB3 hub with a fairly healthy 2.5A power output, thinking this would be enough to power a headless π4. USB C cables and µSD cards I have, and I figured all the USB 3 cables would come with the enclosures, which they did. In these quarantine lockdown times, the companies deliver to electronically-controlled mailboxes in shopping malls and so on, where you enter a code and pick up your package without ever interacting with a potentially-infectious human being.

It was all with me within days.

Now, I place some trust in those techies that I know who are more skilled and experienced than I, especially if they are jaded, cynical ones. File systems are one of the few significant differentiating factors between modern Linux server distros. Unfortunately, a few years ago, the kernel maintainers refused to integrate EVMS and picked the far simpler LVM instead. This has left something of a gap, with enterprise UNIXes still having more sophisticated storage tech than Linux. On the upside, though, this is driving differentiation.

SUSE favours Btrfs, although there's less enthusiasm outside the company. It is stable, but even now, you're recommended not to try to repair a Btrfs filesystem, and it can't give a reliable answer to the 'df' command – in other words, the basic question "how much free space have I got left?"
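
As an aside, Btrfs ships its own space-reporting subcommands precisely because plain `df` can mislead on it. Assuming a Btrfs filesystem mounted at /, something like:

```
sudo btrfs filesystem df /       # per-type allocation: data, metadata, system
sudo btrfs filesystem usage /    # fuller breakdown, including unallocated space
```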

I love SUSE's YaST admin tool, and for other server stuff, especially on x86, I would probably recommend it, but it's not ideal for what I wanted in this role. Its support for the π4 is a bit preliminary so far, too.

Red Hat has officially deprecated Btrfs, but that left it with the problem that LVM with filesystems placed on top is a complex solution which still leaves something lacking, so with its typical galloping NIH syndrome, it is in the process of inventing an entirely new disk management layer, Stratis. Stratis integrates SGI's tried-and-tested, now-FOSS XFS filesystem with LVM into a unified disk management system.

Yeah, no thanks. Not just yet. I am not fond of Fedora, anyway. No stable or LTS versions (because that's RHEL's raison d'etre). CentOS is a different beast, and also not really my thing. And Fedora is also a bit more bleeding-edge than I like. I do not consider Fedora a server OS; it's more of a rolling tech testbed for RHEL.

Despite some dissenters, the prevailing opinion seems to be that Sun's ZFS is the current state of the art. Ubuntu has decided to go with ZFS, although its licence is incompatible with the Linux kernel's GPL. Ubuntu is, to be honest, my preferred distro for desktop stuff, and I've run it on πs before. It works well – better than Fedora, which like Debian eschews non-free drivers completely. It doesn't have Raspbian's hardware acceleration, but then everyone uses Raspbian on the π so it's an obvious target.

So, Ubuntu Server. Modern versions include ZFS built-in.

I tested this in a VM. Ubuntu Server 18.04 on its own ext4 boot drive... then add a bunch of 20GB drives to the VM... then tell it to create a RAIDZ. One very short time later, it has not only partitioned my drives, created an array, and formatted it, it's also created a mount point and mounted the new array on it. In seconds.

This is quite impressive and far more automatic than the many manual steps involved in doing this with the old Linux built-in 'mdraid' subsystem, as used in my old home server.

Conveniently – it was totally unplanned – by the time all my π4 bits were here, a new Ubuntu LTS was out, 20.04.

I installed all my drives into their new enclosures, plugged them one-by-one into one of my iMac's USB3 ports, and checked that they registered as 2TB drives. They did. Result. Oh, and yes, the cables were in the boxes. USB3 cables are entertainingly fat with shielding, but 5Gb/s is not to be sniffed at.

So, I put my new π4 in its case, put the latest Ubuntu Server on a µSD card – and hit a problem. I can't connect a display. I only have one HDMI monitor and nothing that will connect to a π4's micro-HDMI ports. And I don't really want to try to set this all up headless.

So off to Alza's actual physical shop I trogged to buy a µHDMI to HDMI convertor. Purchasing under quarantine is tricky, so it took a while, but I got it.

Fired up the π4 and it ran fine. No undervoltage warning running off the hub. So I hooked up all the drives, and sure enough, all were visible to the 'lsusb' command.

I referred to various howtos. Hmm. Apparently, you need to put partition records on them. Odd; I thought ZFS subsumed partitioning. Oh, well. I put an empty GUID disklabel on each drive. Then I added them to a RAIDZ, ZFS' equivalent of a RAID5 array.

Well, it wasn't as quick as in a VM, but only a minute or so of heavy disk activity later, the array is created, formatted, its mountpoint created and it's online. This is quite impressive stuff.
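
For the record, the whole storage setup boils down to a handful of commands. This is just a sketch: the device names and the pool name "tank" are examples, so check `lsblk` before copying anything.

```
sudo apt install zfsutils-linux              # ZFS support on Ubuntu 20.04

for d in sda sdb sdc sdd; do                 # example device names
  sudo parted -s /dev/$d mklabel gpt         # empty GUID disklabel on each drive
done

sudo zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd
sudo zpool status tank                       # array created and online...
df -h /tank                                  # ...and already mounted at /tank
```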



Then came the usual joys of Linux' fairly poor subsystem integration: Samba is a separate, different program, Samba user accounts are not Linux user accounts so passwords are different. Mounted filesystems inherit the permissions of their mountpoint. Macs still favour the old Apple protocol, so you need Netatalk as well. It, of course, doesn't integrate with Samba. NFS has two alternatives, and neither, of course, integrate with either Samba or Netatalk. There are good reasons NT caught on, which Apple successfully imitated and even exceeded in Mac OS X – and the Linux world remains as blindly indifferent to them as it has for a quarter of a century.

But some hours of swearing later, it all works. I can connect from Windows, Linux or Mac. It's all passively-cooled so it runs almost completely silently. It does need five power sockets, which is a snag, and there's a bit of cable spaghetti, but for an outlay of about £150 I have a running server which can sustain write speeds of about a gigabyte per second to the array.
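
For anyone trying the same, the Samba side boils down to roughly this -- the share name, path and username are just examples, and note that the Samba password is set separately from the Linux one:

```
sudo apt install samba

# append a minimal share definition to the stock config
sudo tee -a /etc/samba/smb.conf >/dev/null <<'EOF'
[tank]
   path = /tank
   read only = no
   valid users = liam
EOF

sudo smbpasswd -a liam           # Samba keeps its own password database;
                                 # "liam" must already exist as a Linux user
sudo systemctl restart smbd
```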

I've put my old friend Webmin on it for a friendly web GUI.


So there you are.

While the π3 is a little bit underpowered, for a touchscreen Internet radio, it's great, and I'm very pleased with the result.

But the π4 is very different. It's a thoroughly capable little machine, perfectly usable as a general-purpose desktop PC, or as a server with quite decent bandwidth.

No, the setup has not been a beginner-friendly process. Apparently OpenMediaVault has versions for some single-board computers, including the π3, but not for the π4 yet. I am sure wider support will come.

But overall I'm impressed with how easy this was, without vast expert knowledge, and I'm delighted with the result. I will keep you posted on how it works longer-term.
liam_on_linux: (Default)
Hmmm. For the first time, ever, really, I hit the limits of modern vs. decade-old wifi and networking.

My home broadband is 500Mb/s. Just now, what with quarantine and so on, I have had to set up a home office in our main bedroom. My "spare" Mac, the Mac mini, has been relegated to the guest room and my work laptop is set up on a desk in the bedroom. This means I can work in there while Jana looks after Ada in the front room, without disturbing me too much.

(Aside: I'm awfully glad I bought a flat big enough to allow this, even though my Czech friends and colleagues, and realtor, all thought I was mad to want one so big.)

The problem was that I was only getting 3/5 bars of wifi signal on the work Dell Latitude, and some intermittent connectivity problems – transient outages and slowdowns. Probably this is when someone uses their microwave oven nearby or something.

It took me some hours of grovelling around on my hands and knees – which is rather painful if one knee has metal bits in -- but I managed to suss out the previous owners' wiring scheme. I'd worked out that there was a cable to the middle room, and connected it, but I couldn't find the other end of the cable to the master bedroom.

So, I dug out an old ADSL router that one of my London ISPs never asked for back: a Netgear DGN-1000. According to various pages Google found, this has a mode where it can be used as a wireless repeater.

Well, not on mine. The hidden webpage is there, but the bridge option isn't. Dammit. I should have checked before I updated its firmware, shouldn't I?

Ah well. There's another old spare router lying around, an EE BrightBox, and this one can take an Ethernet WAN – it's the one that firewalled my FTTC connection. It does ADSL as well but I don't need that here. I had tried and failed to sell this one on Facebook, which meant looking it up and discovering that it can run OpenWRT.

So I tried it. It's quite a process -- you have to enable a hidden tiny webserver in the bootloader, use that to unlock the bootloader, then use the unlocked bootloader to load a new ROM. I did quite a lot of reading and discovered that there are driver issues with OpenWrt. It works, but apparently ADSL doesn't (don't care, don't need it), and its wifi chip is not fully supported: with the FOSS driver it maxes out at 54Mb/s.

Sounds like quite a lot, but it isn't when your broadband is half-gigabit.

So I decided to see what could be done with the standard firmware, with its closed-source Broadcom wifi driver.

(Broadcom may employ one of my Great Heroines of Computing, the remarkable Sophie Wilson, developer of the ARM processor, but their record on open-sourcing drivers is not good.)

So I found a creative combination of settings to turn the thing into a simple access point as it was, without hacking it. Upstream WAN on Ethernet... OK. Disable login... OK. Disable routing, enable bridging... OK.

Swaths of the web interface are disappearing as I go. Groups of fields and even whole tabs vanish each time I click OK. Disable firewall... OK. Disable NAT... OK. Disable DHCP... OK.

Right, now it just bridges whatever on LAN4 onto LAN1-3 and wifi. Fine.

Connect it up to the live router and try...

And it works! I have a new access point, and 2 WLANs, which isn't ideal -- but the second WLAN works, and I can connect and get an Internet connection. Great!

So, I try through the wall. Not so good.

More crawling around and I find a second network cable in the living room that I'd missed. Plug it in, and the cable in the main bedroom comes alive! Cool!

So, move the access point in there. Connect to it, test... 65-70 Mb/s. Hmm. Not that great. Try a cable to it. 85 Mb/sec. Uninspiring.

Test the wifi connection direct to the main router...

Just over 300 Mb/s.

Ah.

Oh bugger!

In other words, after some three hours' work and a fair bit of swearing, my "improved", signal-boosted connection is at best one-fifth as fast as the original one.

I guess the lessons are that, firstly, my connection speed really wasn't as bad as I thought, and secondly, I was hoping with some ingenuity to improve it for free with kit I had lying around.

The former invalidates the latter: it's probably not worth spending money on improving something that is not in fact bad in the first place.

I don't recall when I got my fibre connection in Mitcham, but I had it for at least a couple of years, maybe even 3, so I guess around 2011-2012. It was blisteringly quick when I got it, but the speeds fell and fell as more people signed up and the contention on my line rose. Especially at peak times in the evenings. The Lodger often complained, but then, he does that anyway.

But my best fibre speeds in London were 75-80Mb/s just under a decade ago. My cable TV connection (i.e. IP over MPEG (!)) here in Prague is five times faster.

So the kit that was an adequate router/firewall then, which even supports a USB2 disk as a basic NAS, is now pitifully unequal to the task. It works fine, but its maximum performance will actually reduce the speed of my home wifi, let alone its Fast Ethernet hub, now that I need gigabit just for my broadband.

I find myself reeling a little from this.

It reminds me of my friend Noel helping me to cable up the house in Mitcham when I bought it in about 2002. Noel, conveniently, was a BT engineer.

We used Thin Ethernet. Yes, Cheapernet, yes, BNC connections etc. Possibly the last new deployment of 10base-2 in the world!

Why? Well, I had tons of it. Cables, T-pieces, terminators, BNC network cards in ISA or PCI flavours, etc. I had a Mac with BNC. I had some old Sun boxes with only BNC. It doesn't need switches or hubs or power supplies. One cable is the backbone for the whole building -- so fewer holes in the wall. Noel drilled a hole from the small bedroom into the garage, and one from the garage into the living room, and that was it. Strategic bit of gaffer tape and the job's a good 'un.

In 2002, 10 Mb/s was plenty.

At first it was just for a home LAN. Then I got 512kb/s ADSL via one of those green "manta ray" USB modems. Yes, modem, not router. Routers were too expensive. Only Windows could talk to them at first, so I built a Windows 2000 server to share the connection, with automatic fallback to 56k dialup to AOL (because I didn't pay call charges).

So the 10Mb/s network shared the broadband Internet, using 5% of its theoretical capacity.

Then I got 1Mb/s... Then 2Mb/s... I think I got an old router off someone for that at first. The Win 2K Server was a Pentium MMX/200MHz and was starting to struggle.

Then 8Mb/s, via Bulldog, who were great: fast and cheap, and they not only did ADSL but the landline too, so I could tell BT to take a running jump. (Thereby hangs a tale, too.)

With the normal CSMA/CD Ethernet congestion, already at 8Mb/s, the home 10base-2 network was not much quicker than wifi -- but it was still worth it upstairs, where the wifi signal was weaker.

Then I got a 16Mb/s connection and now the Cheapernet became an actual bottleneck. It failed – the great weakness of 10base-2 is that a cable break anywhere brings down the entire LAN – and I never bothered to trace it. I just kept a small segment to link my Fast Ethernet switch to the old 10Mb/s hub for my testbed PC and Mac. By this point, I'd rented out my small bedroom too, so my main PC and server were in the dining room. That meant a small 100base-T star LAN under the dining table was all I needed.

So, yes, I've had the experience of networking kit being obsoleted by advances in other areas before – but only very gradually, and I was starting with 1980s equipment. It's a tribute to great design that early-'80s cabling remained entirely usable for 25 years or more.

But to find that the router from my state-of-the-art, high-speed broadband from just six years ago, when I emigrated, is now hopelessly obsolete and a significant performance bottleneck: that was unexpected and disconcerting.

Still, it's been educational. In several ways.

The thing that prompted the Terry Pratchett reference in my title is this:
https://www.extremetech.com/computing/95913-koomeys-law-replacing-moores-focus-on-power-with-efficiency
https://www.infoworld.com/article/2620185/koomey-s-law--computing-efficiency-keeps-pace-with-moore-s-law.html

A lot of people are still in deep denial about this, but x86 chips stopped getting very much quicker in about 2007 or so. That was the end of the Pentium 4 era, when Intel realised that it was never going to hit the 5 GHz clock that Netburst was aimed at, and went back to an updated Pentium Pro architecture, trading raw clock speed for instructions-per-clock – as AMD had already done with the Sledgehammer core, the origin of AMD64.

Until then, since the 1960s, CPU power roughly doubled every 18 months. For 40 years.
8088: 4.77MHz.
8086: 8MHz.
80286: 6, 8, 12, 16 MHz.
80386: 16, 20, 25, 33 MHz.
80486: 25, 33; 40, 50; 66; 75, 100 MHz.
Pentium: 60, 66, 75, 90, 100; 120, 133; 166, 200, 233 MHz.
Pentium II: 233, 266, 300, 333, 350, 400, 450 MHz.
Pentium III: 450 MHz up to 1 GHz and a little beyond.
Pentium 4: topped out at about 3.5 GHz.
Core i7 is still around the same, with brief bursts of more, but it can't sustain it.

The reason was that adding more transistors kept getting cheaper, so processors went from 4-bit to 8-bit, to 16-bit, to 32-bit with a memory management unit onboard, to superscalar 32-bit with floating-point and Level 1 cache on-die, then with added SIMD multimedia extensions, then to 32-bit with out-of-order execution, to 32-bit with Level 2 cache on-die, to 64-bit...

And then they basically ran out of go-faster stuff to do with more transistors. There's no way to "spend" that transistor budget and make the processor execute code faster. So, instead, we got dual cores. Then quadruple cores.

More than that doesn't help most people. Server CPUs can have 24-32 or more cores now – twice that or more on some RISC chips – but it's no use in a general-purpose PC, so instead the effort now goes into reducing power consumption.

Single-core execution speed, the most important benchmark for how fast stuff runs, now gets 10-15% faster every 18 months to 2 years, and has done for about a dozen years. Memory is getting bigger and a bit quicker, spinning HDs now reach vast capacities most standalone PCs will never need, so they're getting replaced by SSDs which themselves are reaching the point where they offer more than most people will ever want.
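
To put the slowdown in perspective, here's a trivial back-of-envelope comparison -- my own illustrative arithmetic, using the growth rates quoted above, not measured data:

    # Doubling every 18 months vs. roughly 12% per 18 months, over one decade.
    periods_per_decade = 10 * 12 / 18          # about 6.7 eighteen-month periods
    old_curve = 2 ** periods_per_decade        # the Moore-era growth rate
    new_curve = 1.12 ** periods_per_decade     # the post-2007 single-core rate
    print(f"Old curve: ~{old_curve:.0f}x faster per decade")
    print(f"New curve: ~{new_curve:.1f}x faster per decade")

Roughly a hundredfold per decade then, versus barely a doubling now.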

So my main Mac is 5 years old, and still nicely quick. My spare is 9 years old and perfectly usable. My personal laptops are all 5-10 years old and I don't need anything more.

The improvements are incremental, and frankly, I will take a €150 2.5 GHz laptop over a €1500 2.7 GHz laptop any day, thanks.

But the speeds continue to rise in less-visible places, and now, my free home router/firewall is nearly 10x faster than my 2012 free home router/firewall.

And I had not noticed at all until the last week.
liam_on_linux: (Default)
I ran the testing labs for PC Pro magazine from 1995 to 1996, and acted as the magazine's de facto technical editor. (I didn't have enough journalistic experience yet to get the title Technical Editor.)

The first PC we saw at PC Pro magazine with USB ports was an IBM desktop 486 or Pentium -- in late 1995, I think. Not a PS/2 but one of their more boring industry-standard models, an Aptiva I think.
We didn't know what they were, and IBM were none too sure either, although they told us what the weird little tricorn logo represented: Universal Serial Bus.

"It's some new Intel thing," they said. So I phoned Intel UK -- 1995, very little inter-company email yet -- and asked, and learned all about it.
But how could we test it, with Windows 95A or NT 3.51? We couldn't.
I think we still had the machine when Windows 95B came out... but the problem was, Windows 95B, AKA "OSR2", was an OEM release. No upgrades. You couldn't officially upgrade 95A to 95B, but I didn't want to lose the drivers or the benchmarks...

I found a way. It involved deleting WIN.COM from C:\WINDOWS which was the file that SETUP.EXE looked for to see if there was an existing copy of Windows.

Reinstalling over the top was permitted, though. (In case it crashed badly, I suppose.) So I reinstalled 95B over the top, it picked up the registry and all the settings... and found the new ports.
But then we didn't have anything to attach to them to try them. :-) The iMac wouldn't come out for another 2.5 years yet.
Other fun things I did in that role:
• Discovered Tulip (RIP) selling a Pentium with an SiS chipset that they claimed supported EDO RAM (when only the Intel Triton chipset did). Under threat of a lawsuit, I showed just how far it "supported" it: the board recognised the RAM, printed a little "EDO RAM detected" message and worked... but it couldn't actually exploit it, and benchmarked at exactly the same speed as with cheaper fast-page-mode RAM.
I think that led to Tulip suing SIS instead of Dennis Publishing. :-)
• Evesham Micros (RIP) sneaking the first engineering sample Pentium MMX in the UK -- before the MMX name had even been settled -- into a grouptest of Pentium 166 PCs. It won handily, by about 15%, which should have been impossible if it was a standard Pentium 1 CPU. But it wasn't -- it was a Pentium MMX with twice as much L1 cache onboard.
Intel was very, very unhappy with naughty Evesham.
• Netscape Communications (RIP) refused to let us put Communicator or Navigator on our cover CD. They didn't know that Europeans paid for local phone calls, so a big download (30 or 40 MB!) cost real money. They wouldn't believe us, and in the end flew 2 executives to Britain to explain to us that it was a free download and that they wanted to be able to trace who downloaded it.
As acting technical editor, I had to explain to them. Repeatedly.

When they finally got it, it resulted in a panicked trans-Atlantic phone call to Silicon Valley, getting someone senior out of bed, as they finally realised why their download and adoption figures were so poor in Europe.

We got Netscape on the cover CD, the first magazine in Europe to do so. :-) Both Communicator and Navigator, IIRC.
• Fujitsu supplied the first PC OpenGL accelerator we'd ever seen. It cost considerably more than the PC. We had no way to test it -- OpenGL benchmarks for Windows hadn't been invented yet. (It wasn't very good in Quake, though.)
I originally censored the company names, but I checked, and the naughty or silly ones no longer exist, so what the hell...
Tulip were merely deceived and didn't verify. Whoever picked SIS was inept anyway -- they made terrible chipsets which were slow as hell.

(Years later, they upped their game, and by the 21st century there really isn't much difference, unless you're a fanatical gamer and overclocker.)
Lemme think... other fun anecdotes...
PartitionMagic caused me some fun. When I joined (at Issue 8) we had a copy of v1 in the cupboard. Its native OS was OS/2 and nobody cared, I'm afraid. I read what it claimed and didn't believe it so I didn't try it.
Then v2 arrived. It ran on DOS. Repartitioning a hard disk when it was full of data? Preposterous! Impossible!
So I tried it. It worked. I wrote a rave review.
It prompted a reader letter.
"I think I've spotted your April Fool's piece. A DOS program that looks exactly like a Windows 95 app? Which can repartition a hard disk full of data? Written by someone whose name is an anagram of 'APRIL VENOM'? Do I win anything?"
He won a phonecall from me, but he did teach me an anagram of my name I never knew.
It led me to run a tip in the mag.

At the time, a 1.2 GB hard disk was the most common size (and a Quantum Fireball the fastest model for the money). Format that as a single FAT16 drive and you got super-inefficient 32 kB clusters. (And in 1995 or early 1996, FAT16 was all you got.)
With PartitionMagic, you could take 200 MB off the end, make it into a 2nd partition, and still fit more onto the C: drive because of its far more efficient 16 kB clusters. If you didn't have PQMagic you could partition the disk that way before installing. The only key thing was that C: was less than 1 GB. 0.99 GB was fine.
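
To put rough numbers on the cluster-size point, here is a tiny, purely illustrative Python sketch -- the file count and sizes are invented for the example, not figures from the time:

    # Illustrative only: estimate FAT slack (space lost in each file's
    # part-filled final cluster) for the same imaginary set of files.
    import random

    def slack(file_sizes, cluster_bytes):
        """Bytes wasted in partially-used final clusters."""
        waste = 0
        for size in file_sizes:
            remainder = size % cluster_bytes
            if remainder:
                waste += cluster_bytes - remainder
        return waste

    random.seed(1)
    # Pretend the drive holds 10,000 files of up to 40 kB each.
    files = [random.randint(1, 40 * 1024) for _ in range(10_000)]

    for kb in (32, 16):
        mb_wasted = slack(files, kb * 1024) / 2**20
        print(f"{kb:>2} kB clusters: ~{mb_wasted:.0f} MB lost to slack")

Halve the cluster size and you roughly halve the slack, which is where the extra free space on C: came from.
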
I suggested putting the swap file on D: -- you saved space and reduced fragmentation.
One of our favourite suppliers, Panrix, questioned this. They reckoned that putting the swap file at the far end of the drive, on the slower inner tracks, made it slower, due to worse access times and transfer speeds. They were adamant.
So I got them to bring in a new, virgin PC with Windows 95A, I benchmarked it with a single big, inefficient C: partition, then I repartitioned it, put the swapfile on the new D: drive, and benchmarked it again. It was the same to 2 decimal places, and the C drive had about 250MB more free space.
Panrix apologised and I gained another geek cred point. :-)
liam_on_linux: (Default)
[Repurposed from a reply in a Hackernews thread]

Apple looked at buying in an OS after Copland failed. But all the stuff about Carbon, Blue Box, Yellow Box, etc. -- all those were NeXT ideas after the merger. None of it was pre-planned.

So, they bought NeXT, and with it NeXTstep: a very weird UNIX with a proprietary, PostScript-based GUI and a rich programming environment with tons of foundation classes, all written in Objective C.

A totally different API, utterly unlike and unrelated to Classic MacOS.

Then they had to decide how to bring these things together.

NeXT already offered its OPENSTEP GUI on top of other Unixes. OPENSTEP ran on Sun Solaris and IBM AIX, and I think maybe others I've forgotten. Neither were commercial successes.

NeXT had a plan to create a compatibility environment for running NeXT apps on other OSes. The idea was to port the base ObjC classes to the native OS, and use native controls, windows, widgets etc. but to be able to develop your apps in ObjC on NeXTstep using Interface Builder.

In the end, only one such OS looked commercially viable: Windows NT. So the plan was to offer a NeXT environment on top of NT.

This is what was temporarily Yellow Box and later became Cocoa.

Blue Box was a VM running a whole copy of Classic MacOS under NeXTstep, or rather, Rhapsody. In Mac OS X 10.0, Blue Box was renamed the Classic environment, and it gained the ability to mix Classic apps' windows in with native OS X windows.

But there still needed to be a way to port apps from Classic MacOS to Mac OS X.

So what Apple did was go through the Classic MacOS API and cut it down, removing all the calls and functions that would not be safe in a pre-emptively multitasking, memory-managed environment.

The result was a safe subset of the Classic MacOS API called Carbon, which could be implemented both on Classic MacOS and on the new NeXTstep-based OS.

Now there was a transition plan:

• your old native apps will still work in a VM

• apps written to Carbon can be recompiled for OS X

• for the full experience, rewrite or write new apps using the NeXT native API, now renamed Cocoa.

• incidentally there was also a rich API for Java apps, too

Now there was a plan.

Here's how they executed it.

1. Copland was killed. A team looked at whether anything could be salvaged.

2. They got to work porting NeXTstep to PowerPC

3. 2 main elements from Copland were extracted:

• The Appearance Manager, a theming engine allowing skins for Classic MacOS: https://en.wikipedia.org/wiki/Appearance_Manager

• A new improved Finder

The new PowerPC-native Finder had some very nice features, many never replicated in OS X... like dockable "drawers" -- drag a folder to a screen edge and it vanished, leaving just a tab which opened a pop-out drawer. Multithreading: start a copy or move and then carry on doing other things.

The Appearance Manager was grafted onto NeXTstep, leading to Rhapsody, which became Mac OS X Server: basically NeXTstep on PowerPC with a Classic MacOS skin, so a single menu bar at the top, desktop icons, Apple fonts and things -- but still using the NeXT "Miller columns" Workspace file manager and so on.

Apple next released MacOS 8, with the new Appearance control panel and a single skin, called Platinum: a marginally-updated classic look and feel. There were never any official others, but some leaked, and a 3rd-party tool called Kaleidoscope offered many more.

http://basalgangster.macgui.com/RetroMacComputing/The_Long_View/Entries/2011/2/26_Copland.html

So some improvements, enough to make it a compelling upgrade...

And also to kill off the MacOS licensing programme, which only covered MacOS 7. (Because originally 7 had been planned to be replaced with Copland, the real MacOS 8.)

MacOS 8 was also the original OS of the first iMac.

Then came MacOS 8.1, which also got HFS+, a new, more efficient filesystem for larger multi-gigabyte hard disks. It couldn't boot off it, though (IIRC).

MacOS 8.1 was the last release for 680x0 hardware and needed a 68040 Mac.

Then came the first PowerPC-only version, MacOS 8.5, which brought in booting from HFS+. Then MacOS 8.6, a bugfix release, mainly.

Then MacOS 9, with better-integrated WWW access and some other quite nice features... but all really stalling for time while they worked on what would become Mac OS X.

The paid releases were 8.0, 8.5 and 9. 8.1, 8.6, 9.1 and 9.2 were all free updates.

In a way they were just trickling out new features, while working on adapting NeXTstep:

1. Rhapsody (Developer Release 1997, DR2 1998)

2. Mac OS X Server (1.0 1999, 1.2 2000)

3. Mac OS X Public Beta (2000)

But all of these releases supported Carbon and could run Carbon apps, and PowerPC-native Carbon apps would run natively under OS X without the need for the Classic environment.

Finally in 2001, Mac OS X 10.0 "Cheetah".
liam_on_linux: (Default)
[Repurposed mailing list reply]

I mentioned that I still don't use GNOME even though there are extensions to fix a lot of the things I don't like. (My latest attempt ended in failure just yesterday.) Someone asked what functionality was still missing. It's a reasonable question, so I tried to answer.

It is not (only) a case of missing functionality, it is a case of badly-implemented or non-working functionality.

I can go into a lot of depth on this, if you like, but it is not very relevant to this list and it is probably not a good place.

A better place, if you have an OpenID of some form, might be over on my blog.

This post lays out some of my objections:

"Why I don't use GNOME Shell"

& is followed up here:

"On GNOME 3 and design simplicity"

Here's what I found using the extensions was like:

"A quick re-assessment of Ubuntu GNOME now it's got its 2nd release"

For me, Ubuntu Unity worked very well as a Mac OS X-like desktop, with actual improvements over Mac OS X (which I use daily.) I used it from the version when it was first released -- 11.04 I think? -- and still do. In fact I just installed it on 19.04 this weekend after my latest efforts to tame GNOME 3 failed.

I don't particularly like Win95-style desktops -- I'm old, I predate them -- but I'm perfectly comfortable using them. I have some tests I apply to see if they are good enough imitations of the real thing to satisfy me. Notable elements of these tests: does it handle a vertical taskbar? Is it broadly keystroke-compatible with Win9x?

Windows-like desktops which pass to some degree, in order of success: Xfce; LXDE; LXQt
Windows-like desktops which fail: MATE; Cinnamon; KDE 5

If I was pressed to summarise, I guess I'd say that some key factors are:
• Do the elements integrate together?
• Does it make efficient use of screen space, or does it gratuitously waste it?
(Failed badly by GNOME Shell and Elementary)
• Does it offer anything unique or is it something readily achieved by reconfiguring an existing desktop?
(Failed badly by Budgie & arguably Elementary)
• Do standard keystrokes work by default?
(Failed badly by KDE)
• Can it be customised in fairly standard, discoverable ways?
• Is the result robust?
E.g. will it survive an OS upgrade (e.g. Unity), or degrade gracefully so you can fix it (Unity with Nemo desktop/file manager), or will it break badly enough to prevent login (GNOME 3 + multiple extensions)?

If, say, you find that Arc Menu gives GNOME 3 a menu and that's all you want, or if you are happy with something as minimal as Fluxbox, then my objections to many existing desktops are probably things that have never even occurred to you, and will probably seem trivial, frivolous, and totally unimportant. It may be very hard to discuss them, unless you're willing to accept, as an opening position, that stuff you don't even notice is critically, crucially important to other people.

Elementary is quite a good example, because it seems to me that the team trying to copy the look and feel of Mac OS X in Elementary OS do not actually understand how Mac OS X works.

Elementary presents a cosmetic imitation of Mac OS X, but it is skin-deep: they have implemented things that look quite Mac-like, but don't work. Not "don't work in a Mac-like way". I mean, don't work at all.

It is what I call "cargo cult" software: you see something, think it looks good, so you make something that looks like it -- and then you take it very seriously, go through the motions of using it, and say it's great.



Actually, your aeroplane is made of grass and rope. It doesn't roll let alone fly. Your radio is a wooden fruit box. Your headphones are woven from reeds. They don't do anything. They're a hat.

You're wearing a hat but you think you're a radio operator.

As an example: Mac OS X is based on a design that predates Windows 3. Programs do not have a menu bar in their windows. Menus are elsewhere on the screen. On the Mac, they're always in a bar at the top. On NeXTstep, which is what Mac OS X is based on, they're vertically stacked at the top left of the screen.

If you don't know that, and you hear that these OSes were very simple to use, and you look at screenshots, then you might think "look at those apps! They have no menu bars! No menus at all! Wow, what a simple, clean  design! Right, I will write apps with no menus!"

That is a laudable goal in its way -- but it can mean that the result is a rather broken, braindead app, with no advanced options, no customisation, no real power. Or you have to stick a hamburger menu in the title bar with a dozen unrelated options that you couldn't fit anywhere else.

What's worse is that you didn't realise that that's the purpose of that panel across the top of the desktop in all the screenshots. You don't know that that's where the menus go. All you see is that it has a clock in it.

You don't know your history, so you think that it's there for the clock.  You don't know that 5 or 6 years after the OS was launched with that bar for the menus, someone wrote an add-on that put a clock on the end, and the vendor went "that's a good idea" and built it in.

But you don't care about history, you never knew and you don't want to... So you put in a big panel that doesn't do anything, with a clock in it, and waste a ton of valuable space...

Cargo cult desktops.

Big dock thing, because the Mac has a Dock -- but they don't know that the Dock has about five different roles (app launcher, app switcher, holder of minimised windows, shortcut to useful folders, and home for status monitors), so their docks can't do all this.

Menu bar with no menus, because the Mac has a menu bar, and it looks nice, and people like Macs, so we'll copy it -- but we didn't know about the menus, because we listened to Windows users who tried Macs and didn't like the menu bar.
Copying without understanding is a waste. A waste of programmer time and effort, a waste of user time and effort, a waste of screen space, and a waste of code.

You must understand first and only then copy.

If you do not have time or desire to understand, then do not try to copy. Do something else while you learn.
liam_on_linux: (Default)
(30th June 2007 on The Inquirer)

THERE'S ANOTHER NEW social networking site around, from the guy behind Digg. It's called Pownce, it's still invitation-only and if they're offering anything genuinely new and different they aren't shouting about it. In particular, nobody's talking about the feature I want to see.

Get connected

There are myriads of social networking-type sites these days; Wikipedia lists more than ninety. Some of the big ones are MySpace, Bebo, Facebook and Orkut. Then there are "microblogging" sites like Twitter and Jaiku. Then of course there are all the tired old pure-play blogging sites like LiveJournal and Blogger. I have accounts on a handful of them - in some cases, just so I can comment, because OpenID isn't as well-supported as it deserves to be.

They all do much the same sort of thing. You get an account for free, you put up a profile, maybe upload some photos, tunes, video clips or a blog, then you can look up your mates and "add" them as "friends". Mainly, this allows you to get a summary list of what your mates are up to; secondarily, you can restrict who can see what that you're putting up.

Doesn't sound like much, but these are some of the biggest and most popular websites on the Internet. That means money: News Corporation paid $580 million for MySpace and its founders are asking for $12.5 million a year each to stay on for another couple of years.

The purely social sites, like Myspace, sometimes serve as training wheels for Internet newbies. You don't need to understand email and all that sort of thing - you can talk to your mates entirely within the friendly confines of one big website. After all, there's no phonebook for the Internet - it's hard for friends to find one another, especially if they're not all that Net-literate.

A lot of the sites try to keep you in their confines. MySpace offers its own, closed instant-messaging service, for example - so long as you use Windows. Another way is that when someone sends you a message or comment on MySpace or Facebook, the site informs you by email - but the email doesn't tell you what the actual message was. You have to go to the site and sign in to read it.

Buzzword alert

Some sites aren't so closed - for example, the email notifications from Livejournal tell you what was said and let you respond from within your email client, and its profiles offer basic integration of external IM services. On the other hand, Facebook offers trendy Web 2.0 features, like "applications" that can run within your profile and can be rearranged by simple drag&drop, whereas LJ or MySpace owners who want unique customisations must fiddle with CSS and HTML or use a third-party application.

As well as aggregating your mates' blogs, many social networking sites let you syndicate "web feeds" from other sites. A "feed" - there are several standards to choose from, including Atom and various versions of RSS - supplies a constantly-updated stream of new stories or posts from one site into another. For instance, as I write, fifteen people on LiveJournal read The Inquirer through its LJ feed.

(If you fancy this aggregation idea but don't want to join a networking site, you can also do this using a "feed reader" on your own computer. There are a growing number of these: as well as standalone applications such as FeedReader or NetNewsWire, many modern browsers and email clients can handle RSS feeds - for example, IE7, Firefox, Outlook and Safari.)
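
As a crude illustration of how little is involved in consuming a feed, here is a minimal Python sketch using the third-party feedparser library; the feed URL is a made-up placeholder:

    # pip install feedparser -- it handles RSS 0.9x/1.0/2.0 and Atom alike.
    import feedparser

    feed = feedparser.parse("https://example.org/feed.rss")  # placeholder URL
    print(feed.feed.get("title", "(untitled feed)"))
    for entry in feed.entries[:5]:
        # Each entry carries the story itself -- but not the comments on it.
        print("-", entry.title, "->", entry.link)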

But even with feeds, the social networking sites are still a walled garden. If you read a story or a post syndicated from another site, you'll probably get a space to enter comments - but you won't see the comments from users on the original site and they won't see yours. The same goes for users anywhere else reading a syndicated feed - only the stories themselves get passed through, not the comments.

A lot of the point of sites like Digg and Del.icio.us is the recently popular concept of "wisdom of crowds". If lots of people "tag" something as being interesting and the site presents a list of the most-tagged pages, then the reader is presented with an instantaneous "what's hot" list - say, what the majority of the users of the site are currently viewing.

There are sites doing lots of clever stuff with feeds, such as Yahoo Pipes, which lets you visually put together "programs" to combine the information from multiple feeds - what the trendy Web 2.0 types call a "mashup". What you don't get through a feed, though, is what people are saying.

Similarly, the social networking sites are, in a way, parasitic on email: you get more messages than before, but for the most part they have almost no informational content, and in order to communicate with other users, they encourage you to use the sites' own internal mechanisms rather than email or IM. Outside a site like Facebook, you can't see anything much - you must join to participate. Indeed, inside the site, the mechanisms are often rather primitive - for instance, Facebook and Twitter have no useful threading. All you get is a flat list of comments; people resort to heading messages "@alice" or "@bob" to indicate to whom they're talking. Meanwhile, the sites' notifications to the outside world are a read-only 1-bit channel, just signals that something's happened. You might as well just have an icon flashing on your screen.

In other words, it's all very basic. Feeds allow for clever stuff, but the actual mechanics of letting people communicate tend to be rather primitive, and often it's the older sites that do a better job. The social sites are in some ways just a mass of private web fora, with all their limitations of poor or nonexistent threading and inconsistent user interfaces. Which seems a bit back-asswards to me. Threaded discussions are 1980s technology, after all.

Going back in time

Websites have limits. Email may be old-fashioned, but it's still a useful tool, especially with good client software. Google's Gmail does some snazzy AJAX magic to make webmail into a viable alternative to a proper email client - its searching and threading are both excellent. An increasing number of friends and clients of mine are giving up on standalone email clients and just switching to Gmail. The snag with a website, though, is that if you're not connected - or the site is down - you're a bit stuck. When either end is offline, the whole shebang is useless.

Whereas if you download your email into a client on your own computer, you can use it even when not connected - if it's in a portable device, underground or on a plane or in the middle of Antarctica with no wireless Internet coverage. You can read existing emails, sort and organize, compose replies, whatever - and when you get back online, the device automatically does the sending and receiving for you. What's more, when you store and handle your own email, you have a major extra freedom - you can change your service provider. If you use Gmail or Hotmail, you're tied to the generosity of those noted non-profit philanthropic organizations Google and Microsoft.

The biggest reason email works so well is that it's open: it's all based on free, open standards. Anyone with Internet email can send messages to anyone with an Internet email address. Even someone on one proprietary system, say Outlook and Exchange, can send mail to a user on another, say Lotus Notes. Both systems talk the common protocols: primarily, SMTP, the Simple Mail Transfer Protocol. Outside the proprietary world, most email clients use POP3 or IMAP to receive messages from servers - and again, SMTP to send.
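
To illustrate how open those protocols are, here is a minimal Python sketch using the standard-library poplib and smtplib modules; the server names and credentials are placeholders, and a real server will usually want TLS and authentication for sending too:

    import poplib, smtplib
    from email.message import EmailMessage

    # Receive: any POP3 client can collect mail from any POP3 server.
    mbox = poplib.POP3_SSL("pop.example.org")      # placeholder server
    mbox.user("alice")
    mbox.pass_("secret")
    count = len(mbox.list()[1])                    # number of waiting messages
    if count:
        resp, lines, octets = mbox.retr(count)     # fetch the newest one
        print(b"\r\n".join(lines).decode("utf-8", "replace")[:200])
    mbox.quit()

    # Send: any SMTP client can hand a message to any SMTP server.
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = "alice@example.org", "bob@example.net", "Hello"
    msg.set_content("Delivered over plain, open SMTP.")
    with smtplib.SMTP("smtp.example.org") as relay:  # placeholder relay
        relay.send_message(msg)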

Now here's a thought. Wouldn't it be handy if there was an open standard for moving messages between online fora? (It's the correct plural of "forum", not "forums".) So that if you were reading a friend's blog through a feed into your preferred social networking site, you could read all the comments, too, and participate in the discussion? If it worked both ways, on a peer-to-peer basis, the people discussing a story on Facebook could also discuss it with the users on Livejournal. If it was syndicated in from Slashdot, they could talk to all the Slashdot users, too.

Now there is a killer feature for a new, up and coming social networking site. Syndication of group discussions, not just stories. It would be a good basis for competitive features, too - like good threading, management of conversations and so on.

The sting in the tail

The kicker is, there already is such a protocol. It's called NNTP: the Network News Transfer Protocol.

The worldwide system for handling threaded public discussions has been around for 26 years now. It's called Usenet and since a decade before the Web was invented it's been carrying some 20,000 active discussion groups, called "newsgroups", all around the world. It's a bit passé these days - spam originated on Usenet long before it came to email, and although Usenet still sees a massive amount of traffic, 99% of it is encoded binaries - many people now only use it for file sharing.

You may never have heard of it, but there's a good chance that your email system supports Usenet. Microsofties can read newsgroups in Outlook Express, Windows Mail and Entourage, or in Outlook via various addons; open sourcerers can use Mozilla's Thunderbird on Windows, Mac OS X or Linux. Google offers GoogleGroups, which has the largest and oldest Usenet archive in the world. There are also lots of dedicated newsreaders - on Windows, Forté's Agent is one of the most popular.

Usenet is a decentralised network: users download messages from news servers, but the servers pass them around amongst themselves - there's no top-down hierarchy. Companies can run private newsgroups if they wish and block these from being distributed. All the problems of working out unique message identifiers and so on were sorted out a quarter of a century ago. Messages can be sent to multiple newsgroups at once, and like discussion forum posts, they always have a subject line. Traditionally, they are in plain text, but you can use HTML as well - though the old-timers hate it.
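
For a flavour of how simple the protocol is to drive, here is a minimal Python sketch using the standard nntplib module (in the standard library up to Python 3.12); the server name and newsgroup are placeholders, and a real server may require authentication:

    import nntplib

    news = nntplib.NNTP("news.example.org")        # placeholder server
    resp, count, first, last, name = news.group("comp.os.linux.misc")
    print(f"{name}: {count} articles")

    # Pull the overview (subject, author, date...) of the ten newest posts.
    resp, overviews = news.over((max(first, last - 9), last))
    for number, fields in overviews:
        print(number, nntplib.decode_header(fields["subject"]))
    news.quit()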

There are things Usenet doesn't do well. There's no way to look up posters' profiles, for example - but that's exactly the sort of thing that social networking sites are good at. Every message shows its sender's email address - but then, the social networking sites all give you your own personal ID anyway.

Big jobs, little jobs

It would be a massive task to convert the software driving all the different online discussion sites to speaking NNTP, though. It isn't even remotely what they were intended for.

But there's another way. A similar problem already exists if you use a webmail service like Hotmail but want to download your messages into your own email client. Hotmail used to offer POP3 downloads as a free service, but it became a paid-for extra years ago. Yahoo and Gmail offer it for free, but lots of webmail providers don't.

Happily, though, there's an answer.

If you use Thunderbird, there's an extension called Webmail which can download from Hotmail as well as Yahoo, Gmail and other sites. Like all Mozilla extensions, it runs on any platform that Thunderbird supports.

But better still, there's a standalone program. It's called MrPostman and because it's written in Java it runs on almost anything - I've used it on Windows, Mac OS X and Linux. It's modular, using small scripts to support about a dozen webmail providers, including Microsoft Exchange's Outlook Web Access; it can even read RSS feeds. Its developers cautiously say that "Adding a new webmail provider might be as simple as writing a script of 50 lines."

And it's GPL open source, so it won't cost you anything. It's a fairly small program, too - it will just about fit on a floppy disk.

MrPostman shows that it's possible to convert a web-based email service into standard POP3 - and for this to be done by a third party with no access to the source code of the server. Surely it can be done for a forum, too? And if it's done right, for lots of fora? It doesn't need the help or cooperation of the source sites, though that would surely help. More to the point, if it were done online, the servers offering the NNTP feeds could be separate from those hosting the sites.

What's more, there's a precedent. For users of the British conferencing service CIX, there's a little Perl program called Clink, which takes CoSy conferences and topics and presents them as an NNTP feed, so that you can read - and post to - CIX through your newsreader.

It sounds to me like the sort of task that would be ideal for the Perl and Python wizards who design Web 2.0 sites, and it would be a killer feature for any site that acts as a feed aggregator.

Rather than reading contentless emails and going off to multiple different sites to read the comments and post replies, navigating dozens of different user interfaces and coping with crappy non-threaded web fora, you could do it all in one place - as the idea spread, whichever site you preferred.

And, of course, the same applies to aggregator software as well. When you download this stuff to your own machine, you can read it at your leisure, without paying extortionate bills for mobile connectivity. Download the bulk of the new messages on a fast free connection, then just post replies on the move when you're paying for every kilobyte over a slow mobile link.

What's more, in my experience of many different email systems, it's the offline ones that are the fastest and offer the best threading and message management. It could bring a whole new life to discussions on the Web.

All this, and all I ask for the idea is a commission of 1 penny per message to anyone who implements it. It's a bargain.
liam_on_linux: (Default)

There have been multiple generations of Macs. Apple has not really divided them up.

1. Original 680x0 Macs with 24-bit ROMs (ending with the SE/30, Mac II, IIx & IIcx)
2. 32-bit-clean-ROM 680x0 Macs (starting with the Mac IIci)
3. NuBus-based PowerMacs (6100, 7100, 8100)
4. OldWorld-ROM PCI-based PowerMacs (all the Beige PowerMacs including the Beige G3 & black PowerBooks) ← note, many but not all of these can run Mac OS X
5. NewWorld-ROM PCI-based PowerMacs (iMac, iBook & later)
6. OS-X-only PowerMacs (starting with the Mirrored Drive Doors 1GHz G4 with Firewire 800)
7. 32-bit Intel Macs (iMac, Mac mini and MacBook Core Solo and Core Duo models)
8. 64-bit Intel Macs with 32-bit EFI (Core 2 Duo models from 2006)
9. 64-bit Intel Macs with 64-bit EFI (anything from 2008 onwards)

Classic MacOS was written for 68000 processors. Later it got some extensions for 68020 and 68030.

When the PowerMacs came out, Apple wrote a tiny, very fast emulator that translated 680x0 instructions on the fly into PowerPC instructions. However, unlike modern retrocomputer emulators, this one allowed apps to call PowerPC code, and the OS was adapted to run on the emulator. It was not like running an Amiga emulator on a PC or something, where the OS in the emulator doesn't "know" it's in an emulator. MacOS did know, and was tailored for it.

They ran Classic MacOS on this emulator, and profiled it.

They identified which parts were the most performance-critical and were running slowly through the emulator, and where they could, they rewrote the slowest of them in PowerPC code.

Bear in mind, this was something of an emergency, transitional project. Apple did not intend to rewrite the whole OS in PowerPC code. Why? Because:
1. It did not have the manpower or money
2. Classic MacOS was already rather old-fashioned and Apple intended to replace it
3. If it did, 68000 apps (i.e. all of them) wouldn't work any more

So it only did the most performance-critical sections. Most of MacOS remained 68K code and always did for the rest of MacOS' life.

However, all the projects to replace MacOS failed. Copland failed, Pink failed, Taligent failed, IBM Workplace OS failed.

So Apple was stuck with Classic MacOS. So, about the MacOS 7.5 timeframe, Apple got serious about Classic.
A lot of MacOS 7.6 was rewritten from assembly code and Pascal into C. This made it easier to rewrite chunks for PowerPC. However it also made 7.6 larger and slower. This upset a lot of users, but it meant new facilities: e.g. the previously-optional "MultiFinder" was now always on, & there was a new network stack, OpenTransport.

This is also the time that Apple licensed MacOS to other vendors.

Soon afterwards, Apple realised it could not build a new replacement OS itself, and would have to buy one. It considered former Apple exec Jean Louis Gassée's Be for BeOS, and Apple co-founder Steve Jobs' NeXT Computer for the Unix-based NeXTstep.

It bought NeXT, and got Jobs back into the bargain. He regained control, fired Gil Amelio and killed off the licensing program. He also killed Copland, the experimental multitasking MacOS replacement, and got his coders to salvage as much as they could from it and bolt it onto Classic, calling the result MacOS 8.

MacOS 8 got a multithreaded Finder, desktop "drawers", new gaming and web APIs and more. Shipping it was also what ended the licensing programme in practice, since the licences only covered MacOS 7.

MacOS 8.1 got a new filesystem, HFS+. This still works today and was the default up to High Sierra.

8.1 is the last release for 680x0 Macs and needs a 68040, although a few 68030 Macs work via Born Again.

The "monolithic" / "nanokernel" distinction applies to CPU protection rings.

These days this principally applies to OSes written entirely in compiled code, usually C, where some core OS code runs in Ring 0, with no restrictions on its behaviour, and some in Ring 3, where it cannot directly access the hardware. IBM OS/2 2 and later uniquely also used Ring 2. I've blogged about this before.

OS/2's use of Ring 2 is why VirtualBox exists.

Decades ago, more complex OSes like Multics had many more rings and used all of them.

If a Unix-like OS is rewritten and split up so that a minimal part of the OS runs in Ring 0 and manages the rest of the OS as little separate parts that run in Ring 3, that's called a "microkernel". Ignore the marketing: Mac OS X isn't one and neither is Windows NT. There are only 2 mass-market microkernel OSes and they are both obscure: QNX, now owned by Blackberry, and Minix 3, which runs on the management circuitry built into every modern Intel x86-64 CPU.

Classic MacOS is not a C-based OS, nor is it an Intel x86 OS. It does not have a distinction between kernel space and user space. It does not use CPU rings, at all. Everything is in Ring 0, all the time. Kernel, drivers, apps, INITs, CDEVs, screensavers, the lot.

MacOS 8.5 went PowerPC-only, and in the process of dropping support for 680x0 Macs, Apple made some provision for future improved PowerMacs.

The 68K emulator got a big upgrade and was renamed the "nanokernel". It is not an OS in its own right: it boots and runs another OS on top of it.

It is not a HAL: a HAL is native code, deep within an OS kernel, that allows the same OS to run with little modification on widely-different underlying hardware, with different memory maps, I/O spaces, APICs etc., without adapting the kernel to all the different platforms. MacOS 8.5+ only runs on Macs, so the hardware could be adapted to the OS and the OS to the hardware. No need for a HAL.

It is not a hypervisor. A hypervisor partitions a machine up into multiple virtual machines -- it allows 1 PC to emulate multiple separate PCs and each virtual emulated PC runs a separate OS. Classic MacOS can't do that and only runs 1 OS at a time.

The MacOS nanokernel is a very small bit of code that boots first and then executes most of the rest of the code, and manages calls from apps from a 68K OS back to code written for the underlying PowerPC CPU.

It's a shame that this bit of code is secret and little-known, but some details have leaked out over the years.

liam_on_linux: (Default)

Or "What I Did On My Holidays by Liam Proven aged 52¼."

The first mainline talk I got to on Saturday was the one before mine: “The Hidden Early History of Unix” by Warner Losh of the FreeBSD project. [https://fosdem.org/2020/schedule/event/early_unix/]

This was a good deep-dive into the very early versions, including the pre-C-language ones, and how little remains of them. Accidental finds of some parts, plus a lot of OCR work and manual re-keying, have got one PDP-7 version running in an emulator; for most of the others, nothing is left but at best the kernel and init and maybe a shell. In other words, not enough to run or to study.

What’s quite notable is that it was tied very closely to the machine -- they can even ID the serial numbers of the units that it ran on, and only those individual machines’ precise hardware configurations were supported.

There was an extensive list of the early ports, who did them, what they ran on and some of the differences, and what if any code made it back into the mainline -- but it’s gleaned from letters, handwritten notebooks, and a few academic papers. Printed publications have survived; machine-readable tapes or disks and actual code: almost nothing.

It’s within my lifetime but it’s all lost. This is quite sobering. Digital records are ephemeral and die quicker than their authors; paper can last millennia.

Then I did my talk, [https://fosdem.org/2020/schedule/event/generation_gaps/]. There’s a brief interview with me here: [https://fosdem.org/2020/interviews/liam-proven/]

It seems to have been well-received. A LinkedIn message said:

«Hello Liam,
Just wanted to let you know that your talk was one of the best so far on
FOSDEM. Thank you for all the context on OS/HW history, as well as for
putting Intel Optane on my map. I did not understand the potential of
that technology, now I think I do.»

That was very gratifying to hear. There has also been some very positive feedback on Twitter, e.g.
https://twitter.com/wstephenson/status/1223607640607141888
https://twitter.com/jhaand/status/1223918106839519232
https://twitter.com/pchapuis/status/1223622107361431552
https://twitter.com/untitledfactory/status/1223609733325651968

Then I went to “Designing Hardware, Journey from Novice to Not Bad” [https://fosdem.org/2020/schedule/event/openelectronicslab/]

A small team built an open source EKG machine for use in the developing world where unreliable power supply destroys expensive Western machinery.

They taught themselves SMT soldering by hand, and they did demos and test runs that included moving the mouse pointer by mind control! It’s not just an EKG machine. Kit they used included an OpenHardware ExG shield and OpenSCAD. They noted that the Arduino build environment is great even for total beginners, and that you can learn SMT soldering from Youtube. (Top tip: solder paste really helps.) Don’t necessarily use a soldering iron: use a heat gun, or do it by hand but under a dissection microscope.

Don’t be afraid to make mistakes. A chip’s legs can be mis-soldered: just cut them, lift a leg, and attach a wire. Wrong way round? Desolder the component, turn it, reattach it. It doesn’t need to look good, it just needs to work.

But you do need to understand legal risk, as described in ISO 14971. Some risk is acceptable, some can be mitigated… add audible alarms for things going wrong. Remove functions you don’t need: for example, remove internet access if you don’t need it -- it makes your device much harder to hack. Similarly, if power reliability is a problem, run the device off a battery, not the mains. That also reduces the chance of shocks to the patient; but also isolate the sensors from the control logic -- isolators are a cheap off-the-shelf part.

Then I humoured myself with a fun ZX Spectrum item: “An arcade game port to the ZX Spectrum: a reverse engineering exercise” [https://fosdem.org/2020/schedule/event/retro_arcade_game_port_zx/]

We were warned that this will be tough on those who didn’t do assembly language. I never did.

Doing reverse engineering for fun is an educational challenge, and there are now competitions for this. You must pick your origin and target carefully. It is not like developing a new game. Also, you can throw away all you know about programming modern hardware. Vintage hardware limits can be very weird, such as not being able to clear the screen because it takes too long, or because it’s simply not a supported function.

You need to know your game amazingly closely. You need to play it obsessively -- it took months of effort to map everything. You need to know how it feels, which means you must watch others play, and also play with others -- multiplayer teaches a lot.

To find the essence of a game is surprisingly hard. E.g. Pacman… Sure you recognise it, but how *well* do you know it? Do you know the character names, or the ghosts’ different search patterns? Or Tetris. Do you know it completely? Is next-part selection random? Are you sure?

Rui picked a game similar to “Bubble Bobble”. Everyone knows that, but in the coloured bubbles, are there patterns? If so, do they change? Are they different for 1 or 2 players?

Or “R Type”. Do you know how to beat all the bosses? His point being that you often can’t exactly reproduce a game, especially on lower-spec hardware, so you have to reproduce how it feels to play, even if it’s not identical.

Rui picked “Magical Drop 2”, to re-implement on a ZX Spectrum. This is a Neo Geo MVS game -- the professional, arcade NeoGeo. Its specifications are much higher than a Spectrum -- such as using a 12MHz 68000 CPU!
Even its sound chip is a full Z80 that is faster than the Spectrum’s.

To work out what he could do, he methodically worked out the bandwidth required.

So, a full Spectrum screen (256*192 pixels, plus 32*24 colour attributes) needs 6912 bytes per frame. At 50 Hz. The Spectrum’s CPU has just 70,000 T-states per frame. (Those are clock ticks, not instructions: the fastest Z80 instruction takes 4 T-states, and pop/push take 10-11.)
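
To make those numbers concrete, here is a tiny back-of-envelope calculation -- Python used purely as a calculator, nothing here is from Rui’s actual code:

    # 48K Spectrum frame budget, from the figures above.
    bitmap = 256 * 192 // 8          # 6144 bytes of pixel data
    attrs  = 32 * 24                 # 768 attribute (colour) bytes
    screen = bitmap + attrs          # 6912 bytes in total
    t_states = 3_500_000 // 50       # ~70,000 CPU ticks per 50 Hz frame

    print(screen, "bytes per screen,", t_states, "T-states per frame")
    # Even PUSH, the cheapest way to write 2 bytes (11 T-states), would eat
    # over half the frame just blasting the whole screen, before any game logic:
    print("T-states to PUSH a full screen:", (screen // 2) * 11)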

If you draw a frame, that cuts the size of screen updates a lot, and it looks better. If you only update small bits, it’s quicker too. Rui came up with a clever hack: pre-draw the bubbles, then just change the colours. Black-on-black is invisible. Set the colour and it appears. But there are only 8, in 2 levels of brightness, and you need to reserve some for animations, leaving at most 5 or 6.

His demo of the effects was amazing: a Spectrum normally can’t draw stuff that fast.

Reverse engineering is not the same as a port. If you do a port, that implies you have source code access. RE means you have none. There are other things to note. The in-game player instructions are very basic. Why? Because it’s a coin-op! They want you to spend money learning to play!

Using colours not pixels is 8x faster, which leaves time for the game logic. He uses an array for ball colours and a mark/sweep algorithm to look for 3+ matching balls. But even this needs special care: edge checking is very instruction-intensive, so rather than check for bounds, which is too slow, he puts a fence around the playing field -- an area that doesn’t count.
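
Here is a minimal sketch of that fence trick, in Python rather than Z80 -- the names, sizes and flood-fill details are my own illustration, not Rui’s actual algorithm. The playfield is wrapped in a border of sentinel cells, so the search for three or more matching balls never needs an explicit bounds check:

    FENCE, EMPTY = -1, 0
    W, H = 8, 10                     # playfield size (illustrative)

    def make_field(rows):
        """Wrap the playfield in a one-cell border of FENCE values."""
        field = [[FENCE] * (W + 2)]
        for row in rows:
            field.append([FENCE] + row + [FENCE])
        field.append([FENCE] * (W + 2))
        return field

    def find_matches(field, x, y):
        """Flood-fill from (x, y); FENCE cells stop the search by themselves."""
        colour = field[y][x]
        if colour in (FENCE, EMPTY):
            return set()
        seen, todo = set(), [(x, y)]
        while todo:
            cx, cy = todo.pop()
            if (cx, cy) in seen or field[cy][cx] != colour:
                continue                         # wrong colour, fence, or done
            seen.add((cx, cy))
            todo += [(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)]
        return seen if len(seen) >= 3 else set()

    rows = [[1, 1, 1, 2, 0, 0, 0, 0]] + [[EMPTY] * W for _ in range(H - 1)]
    print(sorted(find_matches(make_field(rows), 1, 1)))   # the three 1s match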

He then listed a lot of optimisations one could use, from tail call optimisation for functions, moving constants out of loops, unrolling loops, and more to the point unrolling them in binary multiples that are efficient for a slow CPU. He even used self-modifying code (surrounded by skulls in the listings!) But it all got too technical for me.

After 6 months, he is still not finished. Single- and dual-player works, but not against the computer.

I was aghast at the amount of work and effort.

-----

On Sunday, I went to a talk by SUSE’s own Richard Brown, “Introducing libeconf” [https://fosdem.org/2020/schedule/event/ilbsclte/]

However, it was a bit impenetrable unless you work on MicroOS code that talks a lot to systemd.

Then I went to a talk on the new “NOVA Microhypervisor on ARMv8-A” [https://fosdem.org/2020/schedule/event/uk_nova/]

But this was very deep stuff. Also, the speaker constantly referred back to an earlier talk, so it was opaque unless you were at that. I sneaked out and went instead to:

“Regaining control of your smartphone with postmarketOS and Maemo Leste” [https://fosdem.org/2020/schedule/event/smartphones/]

This was a much more accessible overview of FOSS Linuxes for smartphones, including why you should use one. There were 2 speakers,  and one, Bart, spent so much time trying to be fair to other, rival distros that he left little reason to use his (postmarketOS). It’s a valiant effort to replace outdated Android with a far more standard mainline Linux OS, to keep old basic hardware working after vendors stop updating it.

The other speaker, Merlijn, was more persuasive about Maemo. This was Nokia’s Linux for the N900. It’s now abandoned by Nokia, and unfortunately was not all OSS, so some parts can’t be updated and must be thrown away and replaced. But all the work since Nokia is FOSS. He talked a lot about its history, its broad app support, etc. The modernised version is built on Debian or Devuan. They have updated the FOSS bits, and replaced the proprietary bits. A single repo adds all the phone components to a standard ARM install. It is only Alpha quality for now. It runs on the original N900, the Motorola Droid 4 (one of the last smartphones with a physical QWERTY keyboard) & the new PinePhone.

The closing main item was “FOSSH - 2000 to 2020 and beyond!” by Jon “maddog” Hall. [https://fosdem.org/2020/schedule/event/fossh/]

maddog makes the point that he’s an old man now, at 69. He’s had 3 heart attacks, and as he puts it, is running on ½ a heart; the rest is dead. He’s been 50+ years in the industry.

He has a lot to teach. He started with how software used to be bundled with computer hardware, as a mix of source & binaries, until a historic Amdahl v IBM legal case made bundling illegal for system vendors -- the point being that Amdahl’s plug-compatible mainframes could then run IBM software, which enabled Amdahl to sell them. After that, software started to be sold as a product in its own right.

He was using hypervisors in 1968, and name-checked IBM VM (née CP-67) and, on top of that, CMS (née the Cambridge Monitor System, later renamed the Conversational Monitor System).

He also pointed out that `chroot` has worked since 1979 - containers aren’t that new.

It’s often underestimated how the sales of video games in the ‘80s propelled software patents & copyright. Rip-off vendors could just clone the hardware and copy the ROM.

rms among others objected to this. While maddog “disagrees with rms about a few things”, he credits him with the establishment of the community -- but points out that it’s a massive shame he didn’t call it The Freedom Software Foundation. That one extra syllable could have saved years of arguments.

And for all that rms hates copyright, and fought it with a different kind of licence agreement -- the GPL of course -- maddog points out that licenses don’t work without copyright…

Maddog had many years of non-free software experience before Linux -- CP/M, MS-DOS, Apple and more. But then came BSD… and we owe BSD a lot, because it's much more than just a Unix. Many of the tools used on many OSes, including Linux, come from BSD.

The commercial relevance is also important. Many “small” companies have come out of FOSS, including:


  • Ingres / Postgres

  • Cygnus

  • PrimeTime S/W

  • Walnut Creek


The invention of the CD-ROM was a big deal. Not just for size or speed, but for simple cost. A DEC TK50 tape was $100 new. But CD-ROMs were very nearly very bad for Unix. The ISO-9660 standard only used 8.3 names… because it was set by MS and DEC. It was enough for DOS and VMS. As it happened, at the time, maddog worked at DEC, so he traced the person who was the official standards setter and author, who worked a few cubicles away, and there not being much time, simply blackmailed him into including the Rock Ridge extensions for Unix-like filenames into the standard. This got a round of applause.

The original BSD Unix distro -- because distributions are not a new, Linux thing -- led to BSDi, which in turn led to the famous AT&T lawsuit.

But that also led to Unix System V. This caught on against a lot of opposition and led to the rise of Unix. For example, the very finely-tuned SunOS 4 was replaced with the still research-oriented System V. The SunOS developers were horrified, but management forced them to adopt it. This is why it was called “Slowlaris” -- all the optimisations were lost. But it did lead to a more standardised Unix industry, and a lot more cross-compatibility, so it was good for everyone.

Keith Bostic led the effort to create BSD Lite and deserves a lot more credit for it than he got. He and his team purged all the code that even looked like AT&T code from BSD. This left just 17 questionable files, which they simply dropped. The result was criticised because it wasn’t a complete OS, but it was not so hard to replace those files, and the result, BSD Lite, led to FreeBSD, NetBSD, OpenBSD etc. It was very much worth it.

It nearly didn’t come in time. By ‘92 all the Unix vendors had ceded the desktop to M$ & Apple. (M$ is the term he used.) NT started to win everywhere… but then the Unix world realised MS wanted everything, the server too. A warning bell was when even O’Reilly started publishing NT books.

But then, just as it looked dark, came...


  • GNU (everything but a kernel)

  • Then the Linux kernel in 1991

  • Then Linux v1.0 in 1994.


Linux distros started, and maddog tried


  • SLS

  • Yggdrasil

  • Debian

  • RH

  • Slackware

  • And others.


There even came a time when he called Linus Torvalds in his office at Transmeta, and he answered the phone with “Lie-nus here”. He had gone so native, he even pronounced his own name the American way!

“Mind you, Linus said ‘I don’t care what you call the OS so long as you use it.’ So here’s your chance, BSD people! Call it BSD!”

But there were no apps. Instead, Linux was used in…


  • ISPs (to replace SPARC & Solaris)

  • shells

  • DNS

  • LAMP (thanks, timbl)

  • As a way to reuse old boxes

  • Firewall

  • file & print server (thanks, Samba)


Again underestimated, Beowulf clusters (1995) were important. All the old supercomputer vendors were going under. They would spend $millions on developing a new supercomputer, then sell 5. One “to a government agency we can’t name, but you all know who I mean”, and 4 to universities who couldn’t afford them. So, credit to Thomas Sterling & Don Becker. Beowulf changed this. There was no commercial supercomp software any more. Although apparently, Red Hat did a boxed supercomputer distro & sold it as a joke. But thousands of people bought it, so they could show it off on the shelf - never opened.

Then came a long run-through of the early stages of the commercial Linux industry.

From 1997-1999: Slashdot, Sourceforge and Thinkgeek; Linux International, the Linux Mark Institute and the LSB; Linux professional certification from Sair and the LPI. These bodies supported early Linux marketing -- trade shows like CeBIT and LinuxWorld, user groups, and so on.

A good sign was when commercial databases announced they were supporting Linux. The first was Informix, on October 2nd 1998. Hearing it was coming, Oracle announced its own support 2 days earlier, but it didn’t ship until 9 months later. (Maddog is very evidently not a fan of Oracle.) Then Sun buys MySQL, then Oracle buys Sun.

The term “Open Source” -- it was not his fault. He was at the meeting, but he went to the bathroom, and when he came back, it was too late. They’d named it.

The dot-com boom/bust was bad, but not as bad as people think. There were the RH and V.A. Linux IPOs. IBM invested $1Bn in Linux, & got it back many times over. The OSDL (2000) was important, too. It helped CA, HP, and IBM with hardware. It even hired Torvalds, but went broke.

Although the following talk was meant to be about the history since 2000, maddog gave his favourites. His interesting events in or since 2000 were:


  • 2000

  • - Linux in Embedded systems

  • - Knoppix

  • - FreeBSD jails

  • 2001

  • - Steve Ballmer’s famous “cancer” quote

  • 2003 onwards

  • - SCO lawsuit -- and how it was the evil, post-Caldera SCO, not the original Michels family SCO, who were good guys. They even gave Linus an award. Doug Michels asked Linus “what can SCO do to help Linux?” Torvalds later told maddog of his embarrassment -- he could not think of a single thing to say.

  • 2004

  • - Ubuntu, of course. For better or worse, a huge step in popularising Linux.

  • 2008

  • - Android

  • - VMs: KVM, Xen, VBox, Bochs, UML

  • - The cloud. Yes, he calls it “the fog”.

  • 2011

  • - Raspberry Pi

  • - Containers

maddog’s favourite illustration of Linux’s progress over time is four quotes from a leading industry analyst, Jonathan Eunice of Illuminata. They were:


  1. “Linux is a toy.”

  2. “Linux is getting interesting”

  3. “I recommend Linux for non-critical apps.”

  4. “I recommend to at least look at Linux for *all* enterprise apps.”


On maddog’s last day at DEC, in 1999… his boss bought him a coffee and asked “when will this thing be ready? When can I get rid of all my Digital Unix engineers?” That’s when he knew it had won.

Why is free(dom) software important? As he put it, “I have a drawer full of old calculators & computers I can’t use, because their software wasn’t free.” Avoiding obsolescence is a significant issue that gave Linux an in-road into the mainstream, and it remains just as important.

The thing about freedom software is that both nobody & everybody owns it. Even companies making closed products with freedom software are still helping.

Today’s challenges & opportunities? Well, security & privacy -- it’s worse than you think. Ease of use is still not good enough: “it’s gotta be easy enough for mom & pop.”

He doesn’t like the term AI -- it should be “inorganic intelligence”. It will be the same as our meat intelligence, in another substrate: maddog agrees with Alan Turing: at heart, all we need to do is duplicate the brain in silicon and we’re there. And he feels we can do that.

He feels that freedom software needs a lot more advertising. It needs to be on TV. It needs to be a household word, a brand.

Winding up, he says it’s all about love. Love is Love. Ballmer now *says* he loves FOSS, companies say it, but end-users should say that they love freedom software. Or, as he put it, “world domination through cooperation”.


The final main item was “FOSDEM@20 - A Celebration” [https://fosdem.org/2020/schedule/event/fosdem_at_20/] by Steve Goodwin, @marquisdegeek

He felt it was very apt that FOSDEM happens in Brussels, as the Mundaneum in 1934 was the first attempt at an indexing system of everything.

Goodwin doesn’t take credit -- he says that the first, OSDEM, was organised by Raphaël Bauduin. He just claims to have inspired it, by sending out an email, which I noted had a misspelled subject line: “programers meeting” [sic]

FOSDEM 1 was in 2002. It even had its own dance, which Bauduin demonstrated. He also showed that he was wearing the same T-shirt as in the photo of him opening the first event, nearly 20 years earlier.

FOSDEM was meant to be fun, because RB didn’t feel comfy at commercial FOSS conferences.

When it started, the IT world was very different. In 2001, there was no Facebook, no Twitter, no Stack Overflow (to a chorus of boos), no Uber. Google was 3, Amazon was 7 and sold only books. Billie Eilish was born… in December, and Goodwin didn’t believe there’d be a single middle-aged geek who would have heard of her.

Mac OS X and XP were both new.

There are some photos on http://fosdem.3ti.be/ showing its intentional lack of professionalism or seriousness -- for instance, the Infodesk was subtitled “a bunch of wacko looneys at your service”. But they got a lot of big names. Miguel de Icaza was an early speaker, demoing Mono, GNOME and Xamarin. A heckler bizarrely shouted “this is Coca-Cola!”, i.e. objecting that demoing Mono controlling the proprietary Unity was wrong. Then there was a video speech from Eben Moglen introducing the Freedombox: https://freedombox.org/

And that’s a run-down of my FOSDEM. This is just my notes expanded to sentence length. Forgive the pretentious quote from Blaise Pascal: “Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte.” (If I had more time, I would have written a shorter letter.)

liam_on_linux: (Default)
On Saturday 1st Feb, I did another FOSDEM talk in the History stream. (I was, in fact, the end of history.)

Here's the presentation that I made, with speakers' notes. It's a LibreOffice Impress file. I'll add a video link when I get it.

If you prefer plain text, well, here's the script... LibreOffice Writer or MS Word format.

UPDATE: in theory, there should be video here. It seems not to be available just yet, though.

https://video.fosdem.org/2020/Janson/generation_gaps.mp4
https://video.fosdem.org/2020/Janson/generation_gaps.webm

Here is my previous FOSDEM talk from 2018, if you're interested.
liam_on_linux: (Default)
About 15 years ago, I agreed to review Douglas Adams' TV documentary Hyperland for the newsletter of ZZ9, the Hitch-hikers' Guide fanclub. I still haven't, but I finally got around to watching it 6 months ago, and it features Ted Nelson.

My non-$DAYJOB research into computer science has led to me reading and following Ted Nelson, inventor of hypertext and arguably the man who inspired the creation of the WorldWideWeb: Nelson's Xanadu was never finished, so Tim Berners-Lee did a quick-and-dirty lightweight version of some of its core ideas instead.

And here I am blogging on it.

When I finally got round to watching Hyperland, who is interviewed but... Ted Nelson.

My recent research has led me to Niklaus Wirth's remarkable Oberon language and OS.

But in $DAYJOB I'm documenting SANs and the like, which led me to ATA-over-Ethernet among other things. It's a tech I've long admired for its technical elegance.

I found the author, and his blog... and he talks about his prior big invention, the Cisco PIX. That was my last big project in my last hands-on techie day-job. It emerges he also invented NAT. And reading about that:
http://coraid.com/b190403-the-pix.html

... What does he talk about but Oberon.
liam_on_linux: (Default)

I keep getting asked about this in various places, so I thought it was high time I described how I do it. I will avoid using any 3rd party proprietary tools; everything you need is built-in.

Notes for dual-booters:

This is a bit harder with Windows 10 than it was with any previous versions. There are some extra steps you need to do. Miss these and you will encounter problems, such as Linux refusing to boot, or hanging on boot, or refusing to mount your Windows drive.

It is worth keeping Windows around. It's useful for things like updating your motherboard firmware, which is a necessary maintenance task -- it's not a one-off. Disk space is cheap these days.

Also, most modern PCs have a new type of firmware called UEFI. It can be tricky to get Linux to boot off an empty disk with UEFI, and sometimes, it's much easier to dual-boot with Windows. Some of the necessary files are supplied by Windows and that saves you hard work. I have personally seen this with a Dell Precision 5810, for instance.

Finally, it's very useful for hardware troubleshooting. Not sure if that new device works? Maybe it's a Linux problem. Try it in Windows then you'll know. Maybe it needs initialising by Windows before it will work. Maybe you need Windows to wipe out config information. I have personally seen this with a Dell Precision laptop and a USB-C docking station, for example: you could only configure triple-head in Windows, but once done, it worked fine in Linux too. But if you don't configure it in Windows, Linux can't do it alone.

Why would you want to do this? Well, there are various reasons.


  1. You regularly, often or only run Windows and want to keep it performing well.

  2. You run Windows in a VM under another OS and want to minimize the disk space and RAM it uses.

  3. You dual-boot Windows with another OS, and want to keep it happy in less disk space than it might normally enjoy to itself.

  4. You're preparing your machine for installing Linux or another OS and want to shrink the Windows partition right down to make as much free space as possible.

  5. You've got a slightly troublesome Windows installation and want to clean things up as a troubleshooting step.

Note, this stuff also applies to a brand-new copy of Windows, not just an old, well-used installation.

I'll divide the process into 2 stages. One assuming you're not preparing to dual-boot, and a second stage if you are.

So: how to clean up a Windows drive.

The basic steps are: update; clean up; check for errors.

If you're never planning to use Windows again, you can skip the updating part -- but you shouldn't. Why not? Well, as I advised above, you should keep your Windows installation around unless you are absolutely desperate for disk space and so poor that you can't afford to buy more. It's useful in emergencies. And in emergencies, you don't want to spend hours installing updates. So do it first.

Additionally, some Windows updates require earlier ones to be installed. A really old copy might be tricky to update.


  1. Updating. This is easy but not quite as easy as it looks at first glance. Connect your machine to the Internet, open Windows Update, click "Check for updates". But wait! There's more! Because Microsoft has a vested interest in making things look smooth and easy and untroubled, Windows lies to you. Sometimes, when you click "check for updates", it says there are none. Click again and magically some more will appear. There's also a concealed option to update other Microsoft products and it is, unhelpfully, off by default. You should turn that on.

  2. Once Windows Update has installed everything, reboot. Sometimes updates make you do this, but even if they don't, do it manually anyway.

  3. Then run Windows Update and check again. Sometimes, more will appear. If they do, install them and go back to step 1. Repeat this process until no new updates appear when you check.

  4. Next, we're going to clean up the disk. This is a 2-stage process.

  5. First, run Disk Cleanup. It's deeply buried in the menus so just open the Start menu and type CLEAN. It should appear. Run it.

  6. Tick all the boxes -- don't worry, it won't delete stuff you manually downloaded -- and run the cleanup. Normally, this is fast. A few minutes is enough.

  7. Once it's finished, run disk cleanup again. Yes, a second time. This is important.

  8. Second time, click the "clean up system files" button.

  9. Again, tick all the boxes, then click the button to run the cleanup.

  10. This time, it will take a long time. This is the real clean up and it's the step I suspect many people miss. Be prepared for your PC to be working away for hours, and don't try to do anything else while it works, or it will bypass files that are in use.

  11. When it's finished, reboot.

  12. After your PC reboots, right-click on the Start button and open an administrative command prompt. Click yes to give it permission to run. When it appears, type: CHKDSK C: /F (the command-line parts of this cleanup are summarised in a short sketch after this list).

  13. Type "y" and hit "Enter" to give it permission.

  14. Reboot your PC to make it happen.

  15. This can take a while, too. This can fix all sorts of Windows errors. Give it time, let it do what needs to be done.

  16. Afterwards, the PC will reboot itself. Log in, and if you want an extra-thorough job, run Disk Cleanup a third time and clean up the system files. This will get rid of any files created by the CHKDSK process.

  17. Now you should have got rid of most of the cruft on your C drive. The next step requires 2 things: firstly, that you have a Linux boot medium, so if you don't have it ready, go download and make one now. Secondly, you need to have some technical skill or experience, and familiarity with the Windows folder tree and navigating it. If you don't have that, don't even try. One slip and you will destroy Windows.

  18. If you do have that experience, then what you do is reboot your PC from the Linux medium -- don't shut down and then turn it back on, pick "restart" so that Windows does a full shutdown and reboot -- and manually delete any remaining clutter. The places to look are in C:\WINDOWS\TEMP and C:\USERS\$username\AppData\Local\Temp. "$username" is a placeholder here -- look in the home directory of your Windows login account, whatever that's called, and any others you see here, such as "Default", "Default User", "Public" and so on. Only delete files in folders called TEMP and nowhere else. If you can't find a TEMP folder, don't delete anything else. Do not delete the TEMP folders themselves, they are necessary. Anything inside them is fair game. You can also delete the files PAGEFILE.SYS, SWAPFILE.SYS and HIBERFIL.SYS in the root directory -- Windows will just re-create them next boot anyway.
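
For reference, here is a minimal sketch of the command-line parts of the cleanup above, run from an administrative command prompt. It does the same job as the GUI steps; the Disk Cleanup profile number (1) is arbitrary, and CHKDSK will ask to schedule itself for the next reboot because C: is in use.

    rem Disk Cleanup: /sageset lets you tick the boxes once (including system
    rem files); /sagerun replays that saved selection.
    cleanmgr /sageset:1
    cleanmgr /sagerun:1

    rem Schedule a filesystem check of C: for the next boot, then restart.
    chkdsk C: /F
    shutdown /r /t 0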

That's about it. After you've done this, you've eliminated all the junk and cruft that you reasonably can from your Windows system. The further stages are optional and some depend on your system configuration.

Optional stages

Defragmenting the drive

Do you have Windows installed on a spinning magnetic hard disk, or on an SSD?

If it's a hard disk, then you may wish to run a defrag. NEVER defrag an SSD -- it's pointless and it wears out the disk.

But if you have an old-fashioned HDD, then by all means, after your cleanup, defrag it. Here's how.

I have not tested this on Win10, but on older versions, I found that defrag does a more thorough job, faster, if you run it in Safe Mode. Here's how to get into Safe Mode in Windows 10.
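
If you are not sure which type of drive you have, recent versions of Windows can usually tell you from PowerShell via Get-PhysicalDisk (the MediaType column shows HDD or SSD). For an HDD, the built-in defrag tool can also be run from an administrative command prompt; a rough sketch:

    rem /A only analyses the volume and reports whether it needs defragmenting.
    defrag C: /A

    rem /U shows progress, /V gives a verbose report at the end.
    defrag C: /U /V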

Turning off Fast Boot

Fast Boot is a feature that shuts down only part of Windows and then hibernates the rest. Why? Because when you turn your PC on, it's quicker to wake Windows and then load a new session than it is to boot it from scratch, with all the initialisation that involves. Shutdown and startup both become a bit quicker.

If you only run Windows and have no intention of dual-booting, then ignore this if you wish. Leave it on.

But if you do dual-boot, it's definitely worth doing. Why? Because when Fast Boot is on, Windows doesn't totally stop when you shut down, only when you restart. This means that the C drive is marked as being still mounted, that is, still in use. And if it's in use, then Linux won't mount it and you can't access your Windows drive from Linux.

Worse still, if like me you mount the Windows drive automatically during bootup, then Linux won't finish booting. It waits for the C drive to become available, and since Windows isn't running, it never becomes available so the PC never boots. This is a new problem introduced by the Linux systemd tool -- older init systems just skipped the C drive and moved on, but systemd tries to be clever and as a result it hangs.

So, if you dual boot, always disable Fast Boot. It gives you more flexibility. I will list a few how-tos since Microsoft doesn't seem to officially document this.

Turning off Hibernation

If you have a desktop PC, once you have disabled Fast Boot, also disable Hibernation.

If you have a notebook, you might want to leave it on. It's useful if you find yourself in the middle of something but running out of power, or about to get off a train or plane. But for a desktop, there's less reason, IMHO.

There are a few reasons to disable it:

  1. It eliminates the risk of some Windows update turning Fast Boot back on. If Hibernation is disabled, it can't.

  2. It means when you boot Linux your Windows drive will always be available. Starting another OS when Windows is alive but hibernating risks drive corruption.

  3. It frees up a big chunk of disk space -- equal to your physical RAM -- that you can take off your Windows partition and give to Linux.

Here's how to disable it. In brief: open an Admin Mode command prompt, and type powercfg /h off.
That's it. Done.
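
If you want to confirm it worked, powercfg can also list which sleep states are still available; after turning hibernation off, Hibernate (and with it Fast Startup) should be reported as unavailable:

    rem Lists the sleep states the system currently supports.
    powercfg /a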

Once it's done, if it's still there, in Linux you can delete C:\HIBERFIL.SYS.

Final steps -- preparing for installing a 2nd operating system

If you've got this far and you're not about to set up your PC for dual-boot, then stop, you're done.

But if you do want to dual-boot, then the final step is shrinking your Windows drive.

There are 2 ways to do this. You might want one or the other, or both.

The safe way is to follow a dual-booter's handy rule:

Always use an OS-native tool to manipulate that OS.

What this means is this: if you're doing stuff to, or for, Windows, then use a Windows tool if you can. If you're doing it to or for Linux, use a Linux tool. If you're doing it to or for macOS, use a macOS tool.

  • Fixing a Windows disk? Use a Windows boot disk and CHKDSK. Formatting a drive for Windows? Use a Windows install medium. Writing a Windows USB key? Use a Windows tool, such as Rufus.

  • Writing a Linux USB? Use Linux. Formatting a drive for Linux? Use Linux.

  • Adjusting the size of a Mac partition? Use macOS. Writing a bootable macOS USB? Use macOS.

So, to shrink a Windows drive to make space for Linux, then use Windows to do it.

Here's the official Microsoft way.

Check how much space Windows is using, and how much is free. (Find the drive in Explorer, right-click it and pick Properties.)

The free space is how much you can give to Linux.

Note, once Windows is shut down, you can delete the pagefile and swapfile to get a bit more space.

However, if you want to be able to boot Windows, then it needs some free working space. Don't shrink it down until it's full and there's no free space. Try to leave it about 50% empty, and at least 25% empty -- below that and Windows will hit problems when it boots, and if you're in an emergency situation, the last thing you need is further problems.

As a rule of thumb, a clean install of Win10 with no additional apps will just about run in a 16 GB partition. A 32 GB partition gives it room to breathe but not much -- you might not be able to install a new release of Windows, for example. A 64 GB partition is enough space to use for light duties and install new releases. A 128 GB partition is enough for actual work in Windows if your apps aren't very big.

Run Disk Manager, select the partition, right-click and pick "shrink". Pick the smallest possible size -- Windows shouldn't shrink the disk so much you have no free space, but note my guidelines above.
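
If you prefer the command line, the built-in diskpart tool can do the same shrink from an administrative command prompt. This is only a sketch: the volume number (2) and the amount to shrink (in megabytes) are placeholders, and "list volume" shows you the real figures.

    diskpart

    rem Then, inside diskpart:
    list volume
    select volume 2
    rem Report the maximum amount the volume can be shrunk by...
    shrink querymax
    rem ...then shrink it by the chosen amount, in megabytes.
    shrink desired=50000
    exit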

Let it work. When it's done, look at how much unpartitioned space you have. Is there enough room for what you want? Yes? Great, you're done. Reboot off your Linux medium and get going.

No? Then you might need to shrink it further.

Sometimes Disk Manager will not offer to shrink the Windows drive as much as you might reasonably expect. For example, even if you only have 10-20 GB in use, it might refuse to shrink the drive below several hundred GB.

If so, here is how to proceed.

  1. Shrink the drive as far as Windows' Disk Manager will allow.

  2. Reboot Windows

  3. Run "CHKDSK /F" and reboot again.

  4. Check you've disabled Fast Boot and Hibernation as described above.

  5. Try to shrink it again.

No joy? Then you might have to try some extra persuasion.

Boot off a Linux medium, and as described above, delete C:\PAGEFILE.SYS, C:\SWAPFILE.SYS and C:\HIBERFIL.SYS.

Reboot into Windows and try again. The files will be automatically re-created, but in new positions. This may allow you to shrink the drive further.
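
For reference, this is roughly what that deletion looks like from a Linux live session. It is only a sketch: /dev/sda3 stands in for whatever your Windows partition really is, and the remove_hiberfile mount option is only needed if a leftover hibernation image is forcing the volume read-only.

    # Find the Windows (NTFS) partition first -- /dev/sda3 is a placeholder.
    lsblk -f

    # Mount it read-write; remove_hiberfile discards any stale hibernation image.
    sudo mkdir -p /mnt/windows
    sudo mount -t ntfs-3g -o remove_hiberfile /dev/sda3 /mnt/windows

    # Delete the files; Windows recreates them on its next boot.
    sudo rm -f /mnt/windows/pagefile.sys /mnt/windows/swapfile.sys /mnt/windows/hiberfil.sys

    sudo umount /mnt/windows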

If that still doesn't work, all is not lost. A couple more things to try:

  • If you have 8 GB or more of RAM, you can tell Windows not to use virtual memory. This frees up more space. Here's how.

  • Disable System Protection. This can free up quite a bit of space on a well-used Windows install. Here's a how-to.

Try that, reboot, and try shrinking again.

If none of this works, then you can shrink the partition using Linux tools. So long as you have a clean disk, fully shut down (Fast Boot off, not hibernated, etc.) then this should be fine.

All you need to do is boot off your Linux medium, remove the pagefile, swapfile and any remaining hibernation file, then run GPARTED. Again, bear in mind that you should leave 25-50% of free space if you want Windows to be able to run afterwards.
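
Before letting GParted touch the partition, it is worth checking that the NTFS volume is in a clean state. A small sketch using ntfsfix from the ntfs-3g tools (again, /dev/sda3 is a placeholder); if it reports problems, boot back into Windows and run CHKDSK rather than forcing the resize from Linux:

    # Check the NTFS volume without changing anything (-n = no action).
    sudo ntfsfix -n /dev/sda3

    # Then launch GParted and do the actual shrink from its GUI.
    sudo gparted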

Once you've shrunk the partition, try it. Reboot into Windows and check it still works. If not, you might need to make the C partition a little bigger again.

Once you have a small but working Windows drive, you're good to go ahead with Linux.
liam_on_linux: (Default)
Choose 68K. Choose a proprietary platform. Choose an OS. Choose games. Choose a fucking CRT television, choose joysticks, floppies, ROM cartridges, and proprietary memory. Choose no pre-emption, crap programming languages and sprite graphics. Choose a safe early-80s sound chip. Choose a second floppy drive. Choose your side. Choose badges, stickers and T-shirts to proclaim your loyalty. Choose one of the two best-selling glorified games consoles with the same range of fucking games. Choose trying to learn to write video games and dreaming you'll be a millionaire from your parents' spare bedroom. Choose reading games magazines and pretending that one day you'll do one like that in AMOS or STOS, while buying another sideways-scrolling shooter or a platformer and thinking it's original or new or worth the thirty notes you paid for it. Choose rotting away at the end of it all, running the same old crap games in miserable emulators, totally forgotten by the generic x86 business boxes with liquid cooling that your fucked-up brats think are exciting, individual and fun... Choose your future. Choose 68K... But why would I want to do a thing like that?

I chose not to do that. I chose something different.

liam_on_linux: (Default)
Stumbled across this in the Internet Archive...

Red Hat Linux 6.2 Deluxe

Red Hat used to be a big fish in a small pond, but version 6.2 must prove itself seaworthy.

Red Hat is one of the longest-established Linux distributions and the first to be split into packages - archived bundles containing all the programs and supplementary files forming an application, allowing the user to add, remove or upgrade individual subsystems in a single operation. This modularity and upgradability made it the first Linux for non-experts and proved highly successful, to the extent that it remains the most widely used distribution in America and, in some ways, the de facto 'standard' Linux.

In the past few years, though, rival distributions have surpassed it in some areas and the company’s rigorous stance against including commercial components has imposed some restrictions.

Now Red Hat is playing catch-up. Version 6.0 moved to the 2.2 kernel and version 6.1 aped Caldera and added a graphical installation program, Anaconda. This latest version, 6.2 (codenamed Zoot), smoothes out some wrinkles caused by these changes, adds an interactive startup sequence allowing troublesome components to be deactivated and claims better hardware detection. KDE is offered as an alternative GUI, although GNOME - now on its second release - is the recommended default.

Installation is quite easy. A boot floppy is provided, but the CD is bootable and after a prompt launches straight into graphics mode. Like Corel LinuxOS, there’s an option to install Linux into a FAT filesystem if you want to keep Windows and don’t want to repartition your drive - although this reduces performance. The installer’s partitioning tool is pretty basic, though, and only the FIPS utility is supplied for non-destructive repartitioning; we recommend buying Partition Magic for this.

There is a selection of pre-configured installations, including server, GNOME and KDE workstations and a custom option which allows packages to be individually selected. The installer can update an existing Red Hat installation from version 2.0 upwards, which is a neat touch. We tried this on 6.0 and 6.1 installations and it worked well.

There were some niggles, though. On a recent notebook PC, all the hardware, including graphics, sound, PC Card slots, USB and power management was correctly detected and configured, but on an older Cyrix machine, vanilla NE2000 and SoundBlaster 16 cards were missed - although the 'Getting Started’ manual contained simple instructions on how to add them later.

[screenshot]
Red Hat 6.2 offers a choice of GUIs as well as a vast array of skins for that personal touch

Unless you choose a custom install, there’s no option as to where to install the LILO boot manager and it silently overwrote PowerQuest’s BootMagic.

You can choose whether to boot into text or graphics mode, but misconfiguration of the X server on the Cyrix desktop meant that graphics mode failed and had to be configured manually from the command line.

Once installed, the GNOME desktop is pretty good. There isn’t the same range of integrated accessories and utilities as with KDE, but a range of helpful non-GNOME tools is included and the GNOME tools include an excellent help system, file manager and a full spreadsheet, Gnumeric.

The choice of window managers and graphical 'skins', wallpapers and screensavers is stunning: GNOME looks more attractive than KDE and is vastly more customisable. The desktop also holds links to helpful websites and local documentation and icons for CD and floppy drives. If you choose to install KDE instead, or even alongside, you get only the default KDE desktop.

The basic version of Red Hat can be downloaded as a CD image from the company’s website or installed over the Internet. The Deluxe boxed edition adds 90 days of telephone support, novice-level printed manuals and several additional CDs: documentation and source code as well as free 'PowerTools' and commercial workstation applications. The Professional edition doubles the period of support, which also covers Apache configuration and includes more server-based tools.

Red Hat remains a solid distribution, but it no longer has the technological edge. SuSE is easier to install and includes vastly more software, Caldera is better integrated and has more corporate features and Corel, although immature, is the most user-friendly and Windows-like Linux around.

LIAM PROVEN


DETAILS

★★★

PRICE £64 (£54.47 ex VAT)

CONTACT Red Hat 01483 300169

http://europe.redhat.com/

SYSTEM REQUIREMENTS x86 processor with 16MB of RAM and 500MB of disk space
PROS Easier than ever; widely supported
CONS Poorer integration, features and user-friendliness than the competition
OVERALL Red Hat is the Linux baseline: if you’re already familiar with it, it’s still a sound choice, but other variants offer more
