liam_on_linux: (Default)

Is there anyone more knowledgeable than me (not hard, honestly) who's up to the challenge of building, running and then writing a comparison of these?


All are of interest to me, but I strongly suspect they are beyond my skills. I failed to get an instance of Alpine fully running, and that's relatively mainstream.


Sta.li: statically-linked Linux from the suckless.org folks:



KISS Linux: one of the most minimalist full-function distros I've seen.



And something I stumbled across looking for the above two — Monolinux:




liam_on_linux: (Default)

[Nicked from the FB Vintage Computer Club]


Someone was claiming that the big innovation of OS/2 2 was that it used CPU protection rings, and that made it better than any version of Windows ever. XKCD 386 got me.


Protection rings date back to the 1960s or even the 1950s — the Multics OS that "inspired" Unix was known for its extensive use of them.


All OSes that use pre-emptive multitasking and hardware memory management use rings. The only significant question is how many — most x86 OSes only use ring 0 and ring 3 and nothing in between, which is simpler but throws away a massively useful protective feature.


OS/2 2 was unusual because it also used Ring 1, I believe. This made it very hard to virtualise, which is what led to the development of VirtualBox.


Windows 2 286 & 386 both made very basic use of rings — even MS-DOS 386 memory managers such as QEMM do.


NT makes extensive use of them and dates all the way back to 1993. NT originated as OS/2 v3, of course, before the IBM/MS divorce.


Win 9x does make use of them, but because of its inspired hack of a design, it basically runs in Ring 0 almost all the time. But don't knock it. What the Win95 team did was amazing: a protected-mode 386 OS that could run and use MS-DOS drivers. It was stunning work, and it is what made 32-bit Windows succeed and sell.


liam_on_linux: (Default)

(Another recycled Quora answer)


It was a pivotal release of the NT family of OSes.


It is forgotten now, but Microsoft has had multiple tries at creating operating systems. In the very early 1980s, it had its own UNIX, called Xenix, which it offered for multiple computer platforms, including the Apple Lisa.


Xenix failed.


Then there was MS-DOS 4, an attempt to create a multitasking DOS. This failed, and was replaced with IBM’s PC DOS 4, which was a very simple enhancement of MS-DOS 3.3, supporting larger hard disk partitions and adding a simple graphical program launcher called DOSShell.


Then there was OS/2, co-developed with IBM, designed from the ground up to be a multitasking, networked OS. Unfortunately, IBM crippled it by insisting that the new OS could run on 80286 computers, because IBM had sold thousands of 80286-based PS/2 computers and promised its customers that they would one day be able to run OS/2.


The customers didn’t care — they ran PC DOS on the machines and were happy. OS/2 should have targeted the new 80386 processor, which would have made it able to multitask DOS applications. But it didn’t, so it failed.


A desperate Microsoft adopted an unofficial back-room skunkworks project to improve the commercial failure that was Windows 2. This was called Windows 3 and it was a huge success, but it ran on top of the very limited MS-DOS. It was a technical triumph that Windows 3 worked as well as it did.


liam_on_linux: (Default)

Note, there were 2 products:
• Netscape Communicator: full suite
• Netscape Navigator: just a browser

Mozilla was always the codename for the product while it was in development.

Netscape started out as just a browser. Then it gained email -- see Zawinski's Law: https://en.wikiquote.org/wiki/Jamie_Zawinski

Then it gained web editing. Then it gained calendaring when Netscape Corp bought Collabra.

Netscape was driven to bankruptcy by Microsoft, which gave away IE for free in order to, quote, "knife Netscape in the back" (S. Ballmer).

Sun bought the server software. AOL bought the client software, and the dying Netscape Corp made future versions open source.

Note: the then-current version (Netscape 4.x) was not made FOSS, and never has been. Only the unfinished future Netscape 5.x version was.

AOL owned the name "Netscape" so the new FOSS project couldn't be called that. So it went back to its old codename: Mozilla, the Godzilla of Mosaics. (Mosaic was the original GUI web browser.)

It was not finished and most of the employees had been laid off, while the new owners, AOL, actually used and bundled IE as the browser with their client software. (If they did not, Microsoft said it would not bundle AOL with Windows 95 & 98. More illegal restraint of trade; never prosecuted.)

It took a long time to finish Mozilla 5.

Occasionally, as it got usable, AOL took a snapshot, packaged it as a branded free product, and badged it Netscape.

Netscape 6 was Mozilla 0.6 and so on:
https://en.wikipedia.org/wiki/Netscape_6

This was Communicator, not Navigator. The whole suite, with email, address book, web editor, etc.

Navigator was never open-sourced.

The Mozilla Application Suite became the default web browser on most Linux distros, but never became a hit on Windows or Mac. Part of the reason was that most OSes came with email & chat clients anyway, and few turn-of-the-century web users wanted or needed the web page editor.

A few years later, wanting to regain some of that old success, a team within Mozilla produced a new, cut-down browser-only program. It was called Mozilla Phoenix: the program that rose from the ashes of Netscape.

Snag is, there are other software products called Phoenix. Someone sued.

So it was renamed Mozilla Firebird. The phoenix is the fire-bird.

But there's another FOSS program called Firebird (a database).


liam_on_linux: (Default)

MS Office was something new when it was launched.


Before MS Office, the market-leading apps in each sector tended to be from different vendors. E.g. in the late MS-DOS era, the leading apps were:



  • Spreadsheet: Lotus 1–2–3

  • Word processor: WordPerfect

  • Database: Ashton-Tate dBase IV

  • Presentation program: Harvard Graphics


The shift to Windows allowed MS to get the upper hand and field competitive apps in all these sectors. To understand this, you must understand that the plan was for MS-DOS to be replaced by OS/2, co-written by Microsoft and IBM. Big vendors such as Lotus and WordPerfect put a lot of effort into new OS/2 versions.


But OS/2 flopped, because at IBM’s insistence, OS/2 1.x ran on the 16-bit 80286 CPU. The 286 could not effectively multitask DOS apps, and as a result, neither could OS/2 1.x — or even offer very good DOS compatibility. This needed the 32-bit 80386 CPU — the origin of the name “x86”.


When OS/2 1.x flopped, Microsoft made a last-ditch effort to revive its failed Windows product on top of DOS. Windows 3 was a surprise hit. MS was not expecting it — when Windows 3 came out, the only major app Microsoft offered for its own GUI was Excel, which had been ported from the Mac to Windows 2. 


But MS pivoted quickly, hastily wrote a word processor for Windows — Word for Windows 1 — and ported its Mac presentation program PowerPoint to Windows.


liam_on_linux: (Default)
Linux is big business now. Mostly on servers.

The only significant user-facing Linuxes are Android and ChromeOS, which are both dramatically constrained systems — which is part of why they've been successful. It is taking desktop distro vendors way too long to catch up with what they are doing, but distros like Endless, and to a lesser extent Fedora Silverblue, are showing the way:

  • all apps containerised; no inter-app dependencies at all

  • OS image shipped as a complete, tested image

  • most of the filesystem is read-only

  • no package manager, no end-user ability to install/remove/update packages. You get a whole new OS image periodically, like on a phone.

  • OS updates are transactional: it deploys the whole thing, and if it doesn't work, it rolls back the entire OS to the last known good snapshot. 2+ OS snapshots are maintained at any time, so there should always be a good one (a toy sketch of this follows below).
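
Here's a toy sketch in Python of that transactional A/B update scheme: two snapshot slots, the new image staged into the spare slot, and an automatic rollback if the new image fails a health check. All the names are invented for illustration; real implementations (OSTree, for instance) work on filesystem trees, not strings.

```python
# Toy model of A/B transactional OS updates with rollback.

class ImageStore:
    def __init__(self, current):
        self.slots = {"A": current, "B": None}   # two whole-OS snapshots
        self.active = "A"

    def apply_update(self, new_image, health_check):
        spare = "B" if self.active == "A" else "A"
        self.slots[spare] = new_image                # stage the complete image
        previous, self.active = self.active, spare   # "reboot" into it
        if health_check(self.slots[self.active]):
            return f"running {new_image}"
        self.active = previous                       # transactional rollback
        return f"rolled back to {self.slots[previous]}"

store = ImageStore("os-image-41")
print(store.apply_update("os-image-42", lambda img: True))    # good update
print(store.apply_update("os-image-43", lambda img: False))   # bad one
```

The point is that the running system is never half-updated: either the whole new image passes and becomes active, or the old known-good snapshot comes straight back.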


This is a good thing. Software bloat is vast now. OSes are too complex for most people to understand, maintain, or fix. So you don't. Even the ability is removed.

This is in parallel with server deployments:

  • everything is virtualised: OSes only run in VMs with standardised hardware, the network connections are virtualised, the disks are virtualised.

  • VMs are built from templates and deployed automatically as needed, and destroyed again as needed.

  • there is as little local state as possible in any VM. It gets its state info automatically from a database over the network. The database is in a VM too, of course.

  • as few local config files as possible; config is kept in a database too and pushed out to local database instances


I could go on.

Unix is a late-1960s OS designed for late-1960s minicomputers:

  • big standalone non-networked servers with lots of small disks, shared by multiple interactive users on dumb text terminals

  • users built their own software from source

  • everything is a text file. Editors and piping are key tools.


With some 1970s tech on top that the industry spent 25 years getting working stably:

  • framebuffers and hi-res graphic displays are possible but very expensive

  • so, design for graphical terminals, or micros that are dedicated display servers

  • programs run over the network, executing on 1 machine, displaying on another

  • Ethernet networking has been bolted on. TCP/IP is the main protocol.

  • because GUIs and networking are add-ons, they break the "everything is a file" model. This is ignored. Editors etc. do not allow for it, let alone use it.

  • machines treat one another as hostile. There is no federation, no process migration, etc.


Then in the 1980s this moribund minicomputer OS got a 2nd lease of life and started selling well because microcomputers got powerful enough to run it, growing up into expensive high-power workstations:

  • some effort at network integration: tools were bolted on top for distributing text-only config files automatically, machines could query each other to find resources

  • encryption was added for moving stuff over untrusted networks

  • a lot of focus on powerful programming tools and things like maths tools, 3D modelling tools

  • very little focus on user-friendliness or ease of use, as that sector was dominated by Macs, Amigas etc.

  • much of this stuff is proprietary because of the nature of the business model.

  • server support is half-hearted as there are dedicated server OSes for that


In the 1990s things changed again:

  • plain cheap PCs became powerful enough to run Unix usefully

  • the existing vendors flailed around trying to sell it but mostly failed as they kept their very expensive pricing models from the workstation era

  • FOSS re-implementations replace it, piggybacking on tech developed for Windows

  • After about 1½ decades of work, the leading FOSS *nix becomes a usable desktop OS. Linux wins. FreeBSD trails, but has some good work -- much of this goes into Mac OS X


Early 21st century:

  • high-speed Internet access can be assumed

  • non-technical end-users become a primary "market"

  • now it runs on local 64-bit multi-CPU micros with essentially infinite disk

  • it has a local 3D accelerator for a display


Results...

  • traditional troubleshooting/fault finding is obsolete. No need for keeping admin tools separate from user tools, no need for /bin and /sbin, /usr/bin and /usr/sbin, etc. Boot off a DVD or a USB, recover user data if any, nuke the OS and reload.

  • GUIs favour 3D chrome. When harmony is achieved & everyone standardises on GNOME 2, Microsoft attacks it and destroys it, resulting in vast duplication of desktop functionality and a huge amount of wasted effort.

  • Because of poor app portability between distros, just like in the days of proprietary Unix, only a few big-name apps exist for all distros.

  • Linux is mainly usable only for Web/email/chat/simple office stuff, and traditional coder work. Windows and Mac hoover up all of the rich-local-apps market, including games. Linux vendors do not even notice.

  • Linux on conventional desktops/laptops is weak, but that market is shrinking fast. But...

  • Not-really-Linux-any-more phone/tablet OSes are thriving

  • Consumer Internet use is huge, for content consumption, social networking, and retail


This drives a need for vast server farms, with the lowest possible unit software cost.

  • tools for automation -- for deployment, management, scaling -- are big money

  • because the job market is huge, skill levels are relatively low, so automated distribution of workloads is key:

  • - tools for deploying & re-deploying VM images automatically in case of failure of the contained app

  • - tools for large teams to interwork on incremental, iterative software development

  • - bolting together existing components, automated building and testing and packaging and deployment

  • as the only significant successful end-user apps are web browsers, all tools move onto the web platform:

  • - web mail, web chat, web media, web file storage, web config management

  • Result: tooling written in Web tools -- JavaScript -- displaying over Web UIs (browser rendering engines)

  • On the server end, inefficiency can be solved by deploying more servers. They're cheap, the software is free.

  • On the client end, most focus is on fast browsers and using games acceleration hardware to deliver fast web browsing, media playback, and hardware accelerated UI


So the only possible method of fighting back and trying to deliver improved end-user tooling for power users is to use a mixture of web tools and games hardware.

Result: OSes that need 3D OpenGL compositing, with desktops and apps written in JavaScript, and packaging and deployment methods taken from those designed for huge server farms.

  • GNOME 3 and Cinnamon, and a distant 3rd, KDE. (The only others are principally defined by refusal to conform.)

  • Flatpak, Snappy and a distant 3rd, Appimage

  • systemd and an increasing move away from text files, including for config and logging -- server farm tools use database connections, because in the 1980s & 1990s, nobody saw any reason to try to copy Microsoft's LAN Manager, domains, Novell NDS, Banyan VINES' StreetTalk, or any other more sophisticated LAN management tools.


Gosh. That turned into quite a rant.

Anyway. The Linux desktop is going to continue to move away from familiar *nix ways because they are historical now. Because the Linux desktop is only a tiny parasite on the flank of the vast Linux server market, it gets tooling designed for that.

If you want a more traditional Unix experience, try FreeBSD. It's thriving on the back of the move to systemd and so on.
liam_on_linux: (Default)
Another Quora answer. Someone is wrong on the Internet!

Your history and your memories are both incorrect.

MS Windows 1 was released in 1985: Windows 1.0 - Wikipedia

It did not resemble GEM. MS worked closely with Apple and had designed Windows as a tiling window interface, with no desktop, no drive icons and no other features to resemble MacOS, which had been released the year before.

If you look at it you will see next to no resemblance: GUIdebook > Screenshots > Windows 1.01

Furthermore, GEM is not an Atari product. GEM was written by Digital Research and released on the PC before it was ported to the ST: Graphics Environment Manager - Wikipedia

Additionally, the Atari ST was not only a games computer; perhaps its primary long-term market success was as a music sequencer, due to built-in MIDI ports. STs were still used for this well into this century. Here are some accounts: Red Bull Music Academy Daily

The band Atari Teenage Riot were named after the machine for this reason. The musician Alec Empire still uses one. I have seen both, and I still own an ST. Have you? Do you?

GEM did closely resemble MacOS, Apple sued and won, and PC GEM was crippled so it did not look so Mac-like. Compare here:

GEM 2.0

No overlapping windows — tiled instead. No desktop drive icons.

The lawsuit did not affect the Atari version.

Atari TOS 1.0

GEM is now FOSS and the Mac-like features have been restored: Screenshots of FreeGEM

It does not “look like X-windows”. There are 2 primary reasons.


  1. There is no such thing as “X-Windows”. It is The X Window System, so called because it followed the W Window System.
    W Window System - Wikipedia
    It was called W because it ran on top of, i.e. came after, V:
    V (operating system) - Wikipedia
    There is not and never has been a product called “X-Windows”. The current version of X is version 11, so it is usually called X.11. The reference implementation for x86 PCs is maintained by the X.Org Foundation, hosted at freedesktop.org; its website is X.Org, so it is often called X.org.
    Decades ago they spent a lot of money on trying to teach people not to call it “X-Windows”. That was never the name.

  2. X imposes no look and feel. It is just a system for drawing windows on the screen and putting contents in them. Every X.11 environment looks different. Look at the early version with twm in the Wikipedia article and you will see it’s nothing like MS Windows. Or compare to SunOS:
    SunView - SunOS 3.5
    The later Motif toolkit looks a little like Windows, with similar controls, because it was licensed from Microsoft, so that it would be familiar to use.
    GUIdebook > Screenshots > CDE 1.5 in Solaris 9

I deployed Windows for Workgroups in production in 1992. It did come on floppies.

The next year, I replaced some of the nodes on the networks with early Pentium computers running Windows NT 3.1. It was shipped on CD. You can download CD images here if you wish: Windows NT 3.x 3.1

It looked like this:

Again, I have one. And 95, 95B, 98, 98SE, ME, NT 3.51, NT 4, and Windows 2000. Do you?

There were editions available, at extra cost, on floppies, yes, but as even NT 3.1 in 1993 took over 30 floppies, it was not a popular option.

NT 3.51 Workstation was 150 MB. You can look at the downloads for yourself here:

Windows NT 3.x 3.51

Since a high-density 3½” floppy diskette stores 1.4 MB, that means about 100 floppy disks. Nobody used this if they had a choice. You remember incorrectly if you think it came on 11 disks; it took 3 just to boot a text-mode installer!

Windows 95 shipped on CD by default. It looked like this:

Windows 95B, which added USB support, also came on CD:

Again, yes, floppies were available, or you could make your own, but it took a lot and was very cumbersome indeed.

As you can see from the label, even if you bought a PC with it pre-installed, you got the CD. You did not normally get floppies because there were so many of them it was too expensive to duplicate and ship them all.

Note that both NT 3 and Windows 9x came with boot floppies, because add-on CD-ROM drives on PCs were not usually bootable at this time.

So you booted the PC off floppies, loaded the CD-ROM device drivers into MS-DOS (for Win9x), and then accessed the CD and ran SETUP. This may be what you are thinking of.

NT had 3 boot floppies, to load the kernel, then some essential drivers, then the Setup program.

Win9x had just one, and indeed the OS contained an image of a bootable floppy and could write it to disk for you. You can download that here:

Bootdisk.Com

You seem to be working from some very vague and patchy memories. Perhaps you were very young at the time.

I was not. I was a year into my first job in IT when Windows 3.0 was released. I correctly predicted that it would be a huge hit. The company did not believe me and refused to stock up.

Suffice to say that within a few years the company no longer existed.

I worked with this stuff as an adult professional. It was my stock-in-trade. I kept copies of stand-out highlight products.

I know whereof I speak.


liam_on_linux: (Default)

(Another recycled Quora answer.)

Multiple reasons. In no particular order:


  • Cross-platform support.
    WordPerfect was a highly-optimised, cross-platform text-mode app. It ran on everything: Macs, DOS, Xenix, Atari ST, Amiga, VAX/VMS, Data General — all the mid-to-late 1980s OSes.
    As a company, WordPerfect Corp missed the fact that Windows would soon be the dominant platform, and did not give it enough priority.
    Compare with Lotus, which devoted its effort to 1–2–3 for OS/2 and missed the market shift to Windows 3.
    This resulted in a poor Windows version: slow, buggy, with a poor UI. This got fixed in time.

  • Printer Drivers.
    Pre-GUI OSes did not have a single central driver mechanism or printing subsystem. Every app had to provide its own. WP had the biggest and best. It could drive every printer on the market, natively, and get the best from it.
    Additionally, graphical OSes managed fonts, and screen fonts became printer fonts too.
    On Windows and Mac this was irrelevant. The OS drove the printer, not the app, and text was rendered and printed in graphics mode. WP’s vast driver database and sophisticated font support became completely irrelevant and indeed a maintenance problem for the company.

  • User interface.
    As a very cross-platform app, WP largely ignored the underlying OS’s UI and imposed its own, weird, tricky but very powerful UI. All leading DOS apps did this: it was a mark of pride to memorise multiple ones.
    Windows and MacOS swept this away with a new, standardised UI and editing model, at odds with WP’s.
    See: CUA — IBM Common User Access - Wikipedia
    WP tried to maintain both, side-by-side. This sort of worked but the emphasis on the old system alienated GUI users.

  • Cost.
    WP was an expensive, standalone app. It became its maker’s sole product: the DataPerfect database, the WordPerfect Editor plain-text editor, the LetterPerfect cut-down word processor, and the WordPerfect Library menuing system & DOS utilities all fell by the wayside. Satellite Software even renamed itself WordPerfect Corporation.
    Word for Windows was good enough for most people, but the cheap way to buy WinWord was as part of the MS Office bundle.
    WordPerfect Corp had no such bundle. It only did wordprocessors. MS Office was far cheaper than buying a market-leading word processor (e.g. WordPerfect) plus a market-leading spreadsheet (e.g. Lotus 1–2–3) plus a market-leading database (e.g. dBase IV), etc.
    In the end, Novell bought WordPerfect and bundled it with other purchases, such as Borland’s Quattro Pro spreadsheet and Paradox database. It was not enough, and the apps did not integrate any better than any other random Windows apps. So Novell sold the suite off to Corel, which has made a modest success of selling the bundle.
    Corel did a deal with Microsoft to integrate MS Visual BASIC for Applications as the suite’s macro language, and adopt the MS Office look and feel — not realising that MS changed the look and feel of Office with every new version, to keep it looking fresh. A term of this deal was killing the native Linux WordPerfect (a superb app and probably the best Linux word-processor ever written), and the forthcoming port of the entire WordPerfect Office suite to Linux.
    This was the end of cross-platform WordPerfect, the Mac version already being dead — a superb classic MacOS app, it was never updated for Mac OS X.

liam_on_linux: (Default)
So, once upon a time, there was a software PC emulator for the Mac. That's old PowerMacs running classic MacOS.

It was called SoftPC, by a British company called Insignia. SoftPC was a PC emulator for non-x86 computers: Unix workstations with RISC chips, basically. Some of the early RISC workstations were so much faster than PCs that you could run a usable emulation of a DOS PC and so run a few DOS apps.

It grew up to be a package called SoftWindows -- you can download it for free these days.

A bit later, the Acorn Archimedes came out -- a home computer fast enough to do the same thing. Acorn wrote their own, called, appropriately, "PC Emulator". Here's the manual [PDF], a compatibility list, and a contemporary write-up. (The latter is mainly about a follow-on product, but my original Archie was too low-spec to run that.)

I used it to take work home with me from my first ever job. The emulation gave me a slow PC but with very fast graphics and disk. It was certainly usable.

Later, Insignia ported SoftPC to the Mac, when PowerMacs became as powerful as the early UNIX machines (but 10× cheaper). SoftWindows was SoftPC enhanced with emulated (native Mac binary) device drivers to make Windows (and only Windows) run quicker. But since Windows is mainly what people needed, it did OK.

Fun fact: RISC versions of Windows NT (for MIPS, Alpha and PowerPC) ran 16-bit DOS apps and Win16 binaries via a licensed, embedded version of the Insignia SoftPC technology.

SoftWindows did so well that pioneering Mac software vendor Connectix wrote their own version, Virtual PC. They'd already done other emulators, so a PC one didn't seem so hard.

SoftWindows and Virtual PC were the two main rival products for Mac users who wanted occasional access to PC programs.

When VMware released their eponymous product, Connectix paid close attention.

VMware worked by trapping Ring 0 code (kernel code, stuff that directly manipulated the hardware) and running it through a CPU emulator -- on the native PC. This enabled x86 PCs to run virtualised x86 PCs. Before then, this needed special hardware (dedicated CPU instructions for virtualisation) that SPARC and POWER had but the x86 didn't. Indeed, the pundits had said it was impossible on x86.
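
A toy Python model of that trick: user-level instructions run natively at full speed, while anything that would touch the hardware is trapped and run against the virtual machine's state instead. The "instructions" here are just strings; real binary translation rewrites actual x86 opcodes, so this is only the shape of the idea.

```python
PRIVILEGED = {"cli", "sti", "out", "mov_cr3"}    # ops that touch hardware

def emulate(insn, vm):
    """Apply a privileged instruction to the *virtual* CPU, not the real one."""
    if insn == "cli":
        vm["interrupts"] = False
    elif insn == "sti":
        vm["interrupts"] = True
    # ...every other privileged op gets its own handler

def run_guest(instructions, vm):
    for insn in instructions:
        if insn.split()[0] in PRIVILEGED:
            emulate(insn, vm)        # trap: the guest never touches hardware
        else:
            pass                     # stands in for "execute natively"

vm_state = {"interrupts": True}
run_guest(["add r1 r2", "cli", "mul r3 r4", "sti"], vm_state)
print(vm_state)                      # {'interrupts': True}
```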

Connectix thought "huh, we have a PC emulator already. We can do that." So they ported VirtualPC to the real PC. It was cheaper and easier to use than VMware.

Source: me. I interviewed the founder of Connectix, Jon Garber. He flew to the UK to meet me personally. Fun times.

As virtualisation took off, Intel added hardware virtualisation instructions to its chips. AMD did the same.

So the software emulators weren't needed any more -- it was much simpler to write a hypervisor using the hardware facilities. That's exactly what KVM on Linux is.

But you need something to create the VM, manage virtual disks etc.

KVM uses the existing QEMU emulator for this.
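
So today, spinning up a hardware-accelerated VM is a one-liner: QEMU supplies the device models and disk handling, KVM supplies the CPU virtualisation. A minimal sketch, assuming qemu-system-x86_64 is installed and a disk.img already exists; the flags shown are standard QEMU options.

```python
import subprocess

subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",                         # use the kernel's KVM module
    "-m", "2048",                          # 2 GB of guest RAM
    "-drive", "file=disk.img,format=raw",  # the guest's virtual disk
])
```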

Microsoft decided it wanted a hypervisor, so it bought Connectix and used those bits of VirtualPC. The rest was made a free download -- it's what runs XP Mode for Windows 7.

Microsoft Hyper-V is VirtualPC, integrated into Windows and minus the emulation engine that's no longer needed.

So, at different times and in different versions of the same product, Microsoft licensed and incorporated both SoftPC and VirtualPC.

liam_on_linux: (Default)

For a couple of weeks, since the 50th anniversary of Apollo 11 taking off, I've been riveted by "Curious Marc" Verdiell's YouTube channel. This isn't the first time -- his vlog of restoring a Xerox Alto was fascinating. But this project is even more historically significant: to get an original Apollo Guidance Computer running for the first time in about 45 years.

The AGC was all kinds of "first": the first computer made from integrated circuits; the first portable computer; the first computer to fly; the first computer on which humans landed on the moon.

Nonetheless, I'm surprised to see the vlog even made The Wall Street Journal.

If you don't know anything about the AGC, here's a fantastic, very dense one-hour talk about how it works.



Here's the YouTube playlist of the whole restoration process.

And here's a link to the story of Margaret Hamilton, the team lead on the project of programming the AGC. You might recognise the rather famous photo of her standing next to a printout of the software, which is slightly taller than she is:

A fun detail of the software development process: not only was the machine extremely resource-constrained, and human lives depended on it -- so, no pressure then (!) -- but you must also consider the storage medium: core rope.

Core rope memory is not the same as core store. Core store uses tiny ferrite rings arranged on the intersections of very fine wires. By putting a current through both wires, the magnetic alignment of the core at their crossing-point can be read -- but read destructively: the act of reading it erases it. Conversely, if the computer is off, the cores hold their data indefinitely. People restoring 50- and 60-year-old computers today can read what was in their core store the last time they were turned off!

But core rope is different. It still uses cores, but big ones. Long wires thread in and out of the cores, and the position of the wires encodes the data. So it's non-volatile: it's a kind of early ROM. You can never change the data. Ever. What was woven in when it was made is there forever. The phenomenally labour-intensive act of making it encodes the software, so weaving it was an extremely skilled task, given to experts... factories full of little old ladies, basically. This is software that is hand-knitted into the hardware. After it's made, you can't change a single bit: the entire, multi-thousand-component, hand-made rope must be re-woven.
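
A toy model of the scheme in Python, to make the point concrete: each address has a wire, and each bit of the stored word is a 1 or a 0 according to whether that wire was threaded through the corresponding sense core or routed around it. The weave is the data; the contents here are invented.

```python
# The "weave": for each address, the set of bit positions whose cores
# the address wire threads through.
ROPE = {
    0o0: {15, 3, 1},    # word at address 0 has bits 15, 3 and 1 set
    0o1: {0},
    0o2: set(),         # the wire bypasses every core: the word is zero
}

def read_word(address, word_length=16):
    """Pulse the address wire; only the cores it threads through induce a 1."""
    return sum(1 << bit for bit in ROPE[address] if bit < word_length)

print(oct(read_word(0o0)))    # 0o100012
```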

This is CuriousMarc's playlist of the Xerox Alto restoration. The Alto was also a hugely significant computer: the first GUI personal workstation, the machine on which the modern GUI was invented, the machine on which the pioneering object-oriented Smalltalk language was developed, and the first machine with Ethernet, which more or less invented the idea of the Local Area Network. Some of the original team came to admire the restoration process and help out -- and several of them have since died.

The Alto is the machine that Steve Jobs & his team saw that led them to build the Lisa and then the Mac. They saw 3 things -- object-oriented programming, local-area networking & the GUI. Jobs himself said he fixated on the GUI and missed the (arguably, long-term) more important bits.

Source: the man himself.


This really is the last possible time to restore some of this stuff — while at least some of the creators are still alive.
liam_on_linux: (Default)

Another Quora answer.

I can’t say. My family was not rich enough to afford such high-end computers, which cost £thousands. Only Americans could.

In early-1980s Britain, we had Sinclair, Commodore and Oric computers (e.g. the ZX Spectrum or C64). The better-off had Acorn machines. (There were many other, more obscure brands.)

Common problems?

Well, mass storage was too expensive for children & home users. No floppy disks. Programs were stored on cassette tapes and loaded at 1200 baud or less. Loading a game could take 5 or 10 minutes.

It was common for computer magazines to print listings for you to type in yourself. This is how I learned programming. A big program could take days to type in, so an ever-present danger was the computer overheating and crashing, or someone accidentally unplugging it, and you losing all that work.

You saved to tape periodically. This again could take 5–10 minutes. The computers used ordinary audio cassette players. That meant no automated control. No seek function. No directory listings. One program per side, and lots of hand-labelled tapes.

Audio tape is not a reliable medium. You could save hours of work and have it refuse to load the next day.

Even professionally-duplicated tapes suffered this, especially if you played the game a lot so the tape got worn. “Tape loading errors” were a common nightmare.

Some manufacturers offered optional disk controllers for more serious users, e.g. adults with more money. However, every make and model had its own disk format: a Commodore 64 could not read disks from a BBC Micro, and neither could read disks from a PC. Commodore disk drives used a serial interface and so were excruciatingly slow.

Sinclair aimed at the budget end of the market and invented its own medium, the Sinclair Microdrive: ZX Microdrive - Wikipedia

This was a form of stringy floppy: Exatron Stringy Floppy - Wikipedia

Also derived from an audio medium, as the mass market made the tech cheaper. In this case, 8-track cassettes: 8-track tape - Wikipedia

I had these before I saved up for a disk interface and a single 5¼” drive as a university student. Each microdrive cartridge stored under 100 kB. Access took tens of seconds, but was still an order of magnitude or more faster than cassettes, which took tens of minutes.

They were slow, small, unreliable, and failure-prone, but better than anything else for the price.

Because these machines were very slow, and lacked enough storage to usefully run compilers, programmers worked in machine code to get enough performance for games. Magazines published these listings too. This might mean typing in 4, 5 or 6 pages of numbers:

So instead of typing in this, which was at least meaningful and could be followed:

You had to type in pages of this:

Your Computer (David Horne’s ZX-81 1K Chess, February 1983.)

This is a notably short program: only 3 pages or so. It plays chess in 1000 bytes of total space, a remarkable and famous achievement: 1K ZX Chess - Wikipedia

Try to imagine typing in 30–40 thousand characters of code, where a single mistaken character renders the entire thing useless. When buying a new game might cost £10 or £15, an amount of money that could take 6 months to save up, a week of evenings after school spent typing was worth doing.

This, note, on terrible keyboards that resembled a cheap pocket calculator:

No space bar. No cursor keys or delete key. Each key performed 5–6 different functions depending on which other keys were held down.

This is the machine I learned to code on; I spent years typing on this exact keyboard.

No hard disk. No floppy disks. No directly-accessible storage. Everything in RAM, so one second of power fluctuation and hours of work irretrievably lost.

This machine, with 48 kB of RAM, cost as much as a cheap Chromebook does today. No monitor: you used a TV set, so the picture was fuzzy and unstable. The cassette player cost extra.

And you know what? We all absolutely loved it, and we miss it still today. :-)

liam_on_linux: (Default)
The evolution of DOS is interesting, and few remember the bigger picture now.

MS struck a great deal when supplying DOS to IBM: it retained the rights to sell it to other manufacturers itself.

So in the early days, there were other MS-DOS machines that weren't IBM compatible, such as the Apricot, Victor and Sirius.

But soon it became apparent that IBM compatibility was key. Compaq reverse-engineered the IBM BIOS and built the first clones, and the PC industry started from there.

PC DOS only came with IBM kit. MS-DOS came with everything else, but only with the computer. You couldn't buy it directly.

Excluding bugfixes, it went like this:

DOS 1: floppy-only machines.
DOS 2: added hard disk support (a single one) and subdirectories.
DOS 3: added support for 2 hard disks and networking. Then, in a point release, support for 2 partitions per disk. Then, in another point release, multiple "logical drives" in a single extended partition, so you could use all the space on a big drive... but still a max of 32 MB per drive letter.
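
That extended-partition trick outlived DOS: it is still how logical drives are stored in the MBR disk format today. Here is a minimal Python sketch of walking the on-disk structures -- the four primary slots at offset 0x1BE, then the chain of Extended Boot Records inside the extended partition -- assuming a raw image file called disk.img.

```python
import struct

SECTOR = 512
EXTENDED = {0x05, 0x0F}                       # extended-partition type bytes

def table(sector_bytes):
    """Yield (type, start_lba, sectors) for the 4 slots at offset 0x1BE."""
    for i in range(4):
        e = sector_bytes[0x1BE + i * 16:0x1BE + (i + 1) * 16]
        if e[4]:                              # type byte 0 = empty slot
            yield (e[4],) + struct.unpack_from("<II", e, 8)

with open("disk.img", "rb") as disk:
    for ptype, start, size in table(disk.read(SECTOR)):
        print(f"primary: type {ptype:#04x}, LBA {start}, {size} sectors")
        if ptype in EXTENDED:                 # walk the EBR chain
            rel = 0
            while True:
                disk.seek((start + rel) * SECTOR)
                entries = list(table(disk.read(SECTOR)))
                if not entries:
                    break
                lt, ls, lsz = entries[0]      # this EBR's logical drive
                print(f"  logical: type {lt:#04x}, LBA {start + rel + ls}")
                links = [e for e in entries[1:] if e[0] in EXTENDED]
                if not links:
                    break
                rel = links[0][1]             # next EBR, relative to start
```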

Other companies started tweaking their versions of MS-DOS 3.3 to allow drives bigger than 32 MB. The method used in Compaq DOS 3.31 is the one IBM and MS picked, and it was used in DOS 4.

MS had a project to do a multitasking DOS 4, so it didn't work on DOS 3.3 for ages. IBM did its own thing, and added big disk support, code page switching for international character sets, and a slightly clunky graphical launcher called DOSShell.

MS reluctantly released this as MS-DOS 4. It's the first release that required a bugfix fairly quickly. The multitasking version got abandoned: big disk support was needed more urgently. But DOS 4 had other gotchas -- such as using a lot more RAM, so some apps couldn't run. (Everything in DOS had to fit into the first 640 kB.)

DR noticed this. Its CP/M-86 was late and expensive, and so lost out to MS, even though it was the inspiration for SCP’s QDOS, the basis of DOS 1.0. DR had its own line of multitasking CP/M derivatives, for minicomputer-like x86 machines with terminals: Concurrent CP/M, and later, with DOS app compatibility, Concurrent DOS. It also had its own standalone single-user DOS, DOS Plus, which could run 3 background tasks on a single PC (if they all fitted into what was left of 640 kB after the OS loaded!)

So DR reworked DOS Plus, removed anything that broke compatibility, like the multitasking and CP/M app support, updated its MS-DOS compatibility with code from Concurrent DOS, and released it as DR-DOS. It bumped the version number from the last small, memory-efficient MS-DOS, MS-DOS 3.3, but included compatible large-disk support. So… DR-DOS 3.41.

DR only offered it through OEMs at first. You couldn’t buy it at retail. But it proved moderately popular, a sort of cult hit. People heard about it. (This is all in the 1980s, so pre-WWW.) People asked to buy it as an upgrade.


So DR had a great idea. There were already 3rd-party memory managers for DOS on 386 computers, which let you map RAM into bits of the space between 640 kB and 1024 kB. You couldn’t run bigger apps using this space, because it wasn’t contiguous with base memory, but you could load bits of DOS into it: keyboard drivers, CD drivers, mouse drivers, disk caches. Now, instead of having only 500-550 kB of the 640 kB free for your apps after loading all your drivers, you got more room: up to 580-590 kB.
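
A toy model of that "load high" logic in Python: a handful of free upper-memory gaps, and a first-fit loader that parks each driver in a gap if it fits, or eats base memory if not. All the sizes are invented for illustration.

```python
UMBS = [96, 32, 16]        # free upper-memory blocks, in kB (invented)
base_free = 640 - 60       # base memory left after DOS itself loads

def load_driver(name, size_kb):
    """First fit: prefer an upper block, fall back to base memory."""
    global base_free
    for i, gap in enumerate(UMBS):
        if size_kb <= gap:
            UMBS[i] -= size_kb
            return f"{name}: loaded high ({size_kb} kB)"
    base_free -= size_kb
    return f"{name}: loaded low ({size_kb} kB)"

for drv, size in [("mouse", 14), ("cdrom", 28), ("cache", 48)]:
    print(load_driver(drv, size))
print(f"base memory free for apps: {base_free} kB")
```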

PC/MS-DOS 4 made this even more necessary as it used more memory than DOS 3.3.

DR wrote their own and bundled it into DR-DOS, and leapfrogged MS-DOS 4 by calling it DR-DOS 5. You could even move DOS itself out of the base memory, and have 620-630 kB free, without 3rd party tools. It was amazing. It also added a full-screen text editor, which incredibly MS-DOS still didn’t have.

And in a masterstroke, they made it available at retail. You could buy it in a shop and upgrade your PC or MS-DOS computer.

It sold extremely well and that made MS angry. It had never realised there was a potential retail market in after-market DOS upgrades or additional DOS features; it had been distracted by the success of Windows 3.

So MS copied the features of DR-DOS 5 and, playing catchup, made MS-DOS 5. All the features of MS-DOS 4, more free memory than ever with a memory manager, and a full-screen editor (actually part of QBASIC, which was the GW-BASIC interpreter with the IDE from the QuickBASIC compiler).

And sold it as a retail upgrade.

It did way better than DR-DOS 5 because it had Microsoft’s marketing muscle.

Novell bought DR around this time, intending to go up against MS with a multi-pronged strategy: a better DOS, plus some best-of-breed apps - it also bought WordPerfect, by then failing against Windows apps, notably Word for Windows and the Windows port of the Mac’s Excel spreadsheet. To rival Excel, it bought Quattro Pro from Borland, a graphical spreadsheet for DOS.

Against Windows itself, Novell planned a Linux-based desktop, codenamed “Corsair”, which eventually became Caldera OpenLinux.

Novell bundled SuperStor disk compression, and re-implemented DOS Plus’ multitasking with TASKMAX.

Result, DR-DOS 6, AKA Novell DOS 6.

Microsoft responded with MS-DOS 6, still playing catchup. It added built-in antivirus and built-in backup, licensed in from other companies who never made the promised monies from selling enhanced versions. It also added disk compression. MS looked at licensing in disk compression from the #1 3rd party vendor, STAC, authors of Stacker. It got to see the code. In the end it didn’t go with Stacker but licensed Vertisoft DoubleDisk instead — presumably because it was cheaper. But it used some Stacker code in DoubleSpace.

STAC sued, won, and spent the money on moving out of the drive-compression market, knowing that drive sizes would grow and make its product irrelevant. It bought the ReachOut remote-control tool, and a server backup tool, and tried to rebrand as a server maintenance tools vendor, foreseeing the rise of internet-based remote admin — but too soon.

First came MS-DOS 6.2, a free bugfix release which added the improved SCANDISK disk-repair tool.

Then, once STAC won, came MS-DOS 6.21, with the disk compression removed entirely while MS rewrote it to excise the stolen code.

Then MS-DOS 6.22, with the rewritten DriveSpace replacing DoubleSpace.

Needless to say, Vertisoft made no money from add-on DriveSpace tools, and Central Point made no money from updates to DOS Antivirus or the bundled PC Backup. Both went under.

Novell responded with DR-DOS 7, with bundled peer-to-peer networking. MS didn’t bother as Windows for Workgroups already included that.

Then MS moved the goalposts with Windows 95, which actually bundled MS-DOS into Windows.

Novell did get Win95 running on top of DR-DOS, but there was no point and it wisely decided not to sell it. Once you had Win95, what DOS did underneath became rather irrelevant, memory management and all.

Novell gave up on the DOS line.

However, the Linux it sponsored did quite well. Caldera was the first desktop Linux I used as my main OS for a while. It had a great setup tool, LISA. It had the first graphical installer. It was the first distro to bundle the new KDE graphical desktop.

It was streets ahead of Red Hat or Debian at the time, let alone Slackware.

Meanwhile, Novell had bought the Unix business off AT&T, and later sold it on to SCO, the leading PC UNIX vendor; Caldera eventually bought that business from SCO and tried to integrate these disparate products into a whole and a market.

It didn’t work, but that’s a whole other story. What’s relevant to DOS is that Caldera spun off its DOS division as Lineo (who offered me a job once, as a leading DOS expert! But I didn’t want to move to Utah, partly because I like beer, partly because I’m an atheist and thought it wouldn’t be too comfortable to live in the Mormon state.)

Lineo tried to make a business out of DR-DOS as a thin-client OS. It didn’t work. But Lineo inherited what was left of Digital Research. The Concurrent DOS business had been sold off to 2 of its leading resellers, and amazingly, that’s just barely still around. The realtime OS FlexOS and the multitasking X/GEM desktop had been sold off too, and were sold by IBM until recently, and now by Toshiba.

But the other DR properties — CP/M and the GEM desktop for DOS — Lineo made open source, and both are still around today.

Meanwhile, MS lost interest in DOS as it pursued Windows 95 OSR2, Windows 98 and Windows ME. Indeed the embedded DOS in NT has never moved beyond version 5.5. But IBM co-owns DOS, and it did not lose interest. It continued to develop it for years, including the new features from the embedded MS-DOS within Win9x. The result was IBM PC DOS 7, then PC DOS 2000 (briefly bundled with VirtualPC!) and finally IBM PC DOS 7.1. IBM eschewed MS's editor and BASIC, replacing them with a version of its own OS/2 and mainframe editor E, and replacing QBASIC with REXX. It's an interesting OS.

That is the last ever member of the mighty DOS dynasty. I've blogged about it before. It was never released on its own, but IBM's ServerGuide Scripting Toolkit is a free download and includes the kernel and utilities of PC DOS 7.1. You can combine this with the rest of PC DOS 2000 -- reminder, it was in VirtualPC, and VirtualPC was a free download, too -- and build your own complete working copy. I have it booting "on the metal" on a Thinkpad X200 and it's a pleasure to use -- and very, very fast. Free DOS apps such as Microsoft Word 5.5, the AsEasyAs spreadsheet, the WordPerfect Editor and so on all run fine and amazingly fast.
liam_on_linux: (Default)

From a Quora answer, because I like to keep my words outside of their walled garden.

The World Wide Web was originally developed on NeXT computers and the NeXTstep operating system and released around 1991.

This is the same year that the Linux project started, so the early 3rd-party web browsers did not run on Linux; it barely existed yet. They ran on Windows and classic MacOS. (Mac OS X did not exist yet either.)

The most successful 2 were Netscape Navigator and Microsoft Internet Explorer.

Both are proprietary code, available free of charge, but Netscape required a paid version for commercial use. Both are based on Mosaic from the NCSA.

Due in large part to illegal anti-competitive measures by Microsoft, of which it was found guilty in the US courts, Netscape went out of business. It was split up. Part (the browser) went to AOL, part (the web server) to Sun.

The unfinished next version of the browser suite, Communicator 5, was open-sourced as the Mozilla Project. However, it took years to finish. The complete, working but proprietary Netscape 3 and 4 were not open-sourced.

Back to Linux.

The first complete FOSS Linux desktop environment was KDE. KDE had to re-implement a lot of technology that existed as proprietary code on Windows and classic MacOS. There was no Linux office suite yet — what would become OpenOffice and later LibreOffice was still a commercial, proprietary product, StarOffice.

KDE implemented a file manager (Konqueror), an office suite (KOffice), text editor (Kate), media players and much more.

KDE also implemented a web browser. This was integrated into the file manager, Konqueror. The KDE project wrote its own web-page rendering engine for this, called KHTML. This was the most complete FOSS browser engine after Mozilla’s Gecko engine.

When Apple bought NeXT and made the NeXTstep OS into the basis of the future Mac OS X, it needed a web browser. Mac OS X is built on a lot of FOSS code, much of it from the FreeBSD project. It did not include a web browser — FreeBSD uses Firefox (and others).

At first, Apple included Microsoft’s Internet Explorer. (This had prior history: older versions of classic MacOS bundled IE, Netscape, and Apple’s own, discontinued CyberDog browser.)

Apple decided that it needed its own, independent browser for Mac OS X. But writing a browser is a huge, complex task. So it took the FOSS KHTML engine from KDE, and made that the basis of Safari, the new Apple web browser.

As KHTML is FOSS code, Apple needed to release its changes. It did this in the form of WebKit.

WebKit started out as KHTML, separated out from KDE and rewritten for OS X. Like KHTML before it, it was the leading FOSS browser engine. Apple did so much work on it that KDE re-adopted WebKit as the basis for its own browser, effectively replacing KHTML.

FOSS is cyclical like this: code is adopted from one project into another, improved, and sometimes goes back to its original creators in new form.

When Google also decided to do its own browser, Chrome, it used WebKit as the basis. Like WebKit and KHTML, Chrome is developed as an open-source project, called Chromium. This is mainly done by Google staff and is mainly paid for by Google.

So there are 2 branches of the Google browser: Chromium, which is FOSS, and Chrome, which is proprietary. Both are freeware.

Unfortunately, in English, the word for “at no price” is the same as the word for “at liberty”. The meanings are different, but both use the word “free”. In French, they are two different words: gratuit means no price, libre means at liberty.

Chromium is libre. Chrome is not. Both are gratuit. Software that is gratuit is often called “freeware”. “Freeware” is not the same as FOSS.

Google later forked the WebKit project. This means it took it in its own direction and stopped giving the changes back upstream to Apple. Google’s fork of WebKit is called Blink. Blink is the engine that powers the current versions of both Chromium and Chrome.

Microsoft has adopted Blink as the engine for future versions of Edge. This replaces its own in-house MSHTML engine, codenamed Trident.

So future Microsoft browsers will use the same engine as Chromium/Chrome, which is closely related to the WebKit engine used by Safari and KDE’s Konqueror.

This does not mean Microsoft is using Chrome, which is proprietary, or Chromium. Apple is not using Chromium/Chrome either, but the Blink engine is based on Apple’s WebKit.

liam_on_linux: (Default)
A friend on a vintage-computing mailing list mentioned his Fossil. Maybe the first working smartwatch, it was a tiny Palm PDA on your wrist -- but with no wireless comms.

It got me reminiscing about Psions.

Phone dialling was a built-in feature of the Psion range of PDAs. The address book app could dial any number in the address book, merely by holding it up to the phone mouthpiece.

It blew people's minds at the time (very early 1990s).
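
The trick was pure audio: touch-tone exchanges don't care what generates the tone pair, so the Psion simply played the standard DTMF tones for each digit through its little speaker into the mouthpiece. Here's a minimal Python sketch of DTMF synthesis, using the standard frequency table and writing the result to a WAV file:

```python
import math, struct, wave

DTMF = {                      # digit -> (low-group Hz, high-group Hz)
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}
RATE = 8000                   # samples per second

def tone(digit, seconds=0.2):
    lo, hi = DTMF[digit]
    return [0.5 * (math.sin(2 * math.pi * lo * t / RATE) +
                   math.sin(2 * math.pi * hi * t / RATE))
            for t in range(int(RATE * seconds))]

samples = []
for digit in "0123456789":                         # the number to "dial"
    samples += tone(digit) + [0.0] * (RATE // 20)  # digit plus a short gap

with wave.open("dial.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)         # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 26000))
                           for s in samples))
```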

This wasn't a phone-dialling device or anything. It was a tiny pocket computer, but unlike something like an HP 95LX, it was a GUI machine with a diary, address book, word-processor, spreadsheet and so on.

https://en.wikipedia.org/wiki/Psion_Series_3

It wasn't the first "digital diary" of course, but it was the best. Ultimately a later, ARM version of the OS became the basis of Symbian.

But the fact that your pocket address book could dial the phone for you -- not by being a keypad or anything, just by picking it up, looking for Bob and pressing DIAL and then holding it near the phone -- was impressive for its time.

One of my favourite things to do with its successor model (the Series 5) was pull up an address entry, and when someone pulled out a Palm Pilot and started trying to scribble Graffiti into it, to stop them and transmit the contact to them by IrDA. Most Palm owners had no idea that their devices spoke infra-red, and for them to get a whole contact instantly by wireless was deeply impressive.

I never had a Fossil. I was slightly tempted when they were being sold off cheap at the end of production, but I resisted. I was never a big fan of PalmOS, TBH. Too limited for me as a former Psion user, and the Palm devices were always very tied to a PC -- they were meant to be a way to take your Outlook (or whatever) address book and diary with you in your pocket. I didn't use Outlook or a desktop PC PIM at all. I used my Psions for that stuff. A Psion multitasked with anything, had a better, richer calendar app than any PC product ever written, was more reliable than any general-purpose desktop PC ever, and fit in my pocket and ran for a month on 2 AA cells.

I suspect that one of the things that contributed to Psion's downfall is that, AFAIK, they never really cracked the US market, which was dominated by weird, expensive little gadgets that tried to be a hopelessly-compromised generic PC in a tiny form-factor -- things like, well, the OQO handheld WinXP PCs, but also the Poqet, the DIP Portfolio, the HP LX and OmniGo range, etc.

In the 1990s and indeed the first decade of the 2000s, it was, on the face of it, clear plain and obvious that you couldn't fit a generic PC clone that you'd actually want to use into your pocket, and if you compromised it so you could, it would be horrid: either it would have a battery life roughly as long as a hummingbird orgasm, or it would be a PC with the capabilities of a desktop from a decade or 2 earlier.

So, an early 1980s PC class machine in the 1990s -- HP LX etc. -- or a 1990s laptop in the noughties.

The result was, to my European eyes, a succession of overpriced, underspecified, clever but undesirable gadgets. And the response to that was the Palm range, which were just an accessory to a business PC.

I didn't want either.

The European solution was different. It said: "OK then, we can't fit the hardware to run a desktop OS into a pocket and deliver a good experience, so what we'll do is this: we'll fit the best hardware we can on a budget and with decent power consumption so it doesn't run out inconveniently fast, and we'll write bespoke software to run on it to deliver the functionality customers actually need."

The result was, first, the Psions.

A little later, in the Nordic countries, the Nokia mobile phones.

Psion's first try was the MC range of laptops.

Neat hardware, clever OS, but decent PC laptops were coming. So they shrank it into the Psion Series 3 range.

I suspect many American readers have never seen or held one of these so these links might be worth a read.

http://www.computinghistory.org.uk/det/4020/Psion-Series-3/

https://stevelitchfield.com/historyofpsion.htm

The Series 3 had a small screen but an elegant multitasking GUI OS on an 8086. Optimised for keyboard operation, no touchscreen. Very rich PIM apps -- seriously, unsurpassed on any other platform. Rock-solid OS. Only connected to PCs for backing up.

The range gradually got bigger screens and more RAM over the next few years.

Then they realised they'd reached the end of the line for the hardware, rewrote the OS in C++ for ARM, and did the Psion 5 range. For comparison, see this Australian assessment versus the American machines.

When Psion saw that the writing was on the wall for PDAs without wireless comms, they formed Symbian, rewrote the OS to have a comms stack, and moved successfully into smartphones.

There were some missteps though. The OS was written in C++ before the language was really ready, and so it went its own, non-standard way.

(The same problem arguably afflicted Be and BeOS.)

There was no standard GUI for Symbian: they let each licensee do their own, with no source-code compatibility. That was a big mistake. As a result, there were several:

  • UIQ on Sony Ericsson devices

  • Nokia Series 60 -- for candybar phones with a numeric keypad

  • Nokia Series 90 -- a recreated Psion UI for the ill-fated 7700 series. That's what I bought.

  • Nokia Series 80 -- for the QWERTY-equipped Communicators, somewhat inspired by Geos and the HP OmniGo

  • MOAP by NTT DoCoMo -- Japanese market only

Then later, realising this was a mess, they tried to reconcile them, flailing around with a Qt abstraction layer, buying Trolltech to do it, and other efforts, but it was too little, too late.

Symbian had some unique attributes. E.g. it was the *only* smartphone OS to offer good enough realtime for single-CPU phones, running the comms stack on the same CPU as the user-facing GUI. *EVERY* other vendor had to run a separate CPU for the networking and comms.

But in the end, the American version won out. The iPhone had a radically simpler UI, in a single stroke obliterating Symbian and after a few years Blackberry too.

The only survivor was Android.

Designed by Android Inc as an OS for digital cameras, it was acquired by Google and repurposed as a Blackberry clone.

And then they saw the iPhone, pivoted again and did a very successful iPhone knock-off, just as Windows was a successful Mac System knock-off... after the first few versions.

Result of the eventual convergence on the American model:

We have amazingly sophisticated, high-spec smartphones and tablets, but they have a battery life of a single day, replacing European phones that lasted a week and PDAs that lasted a month.

Why, no, I am not happy about that.

The European PDAs had excellent keyboards you could type on. My Psion 5MX paid for itself in the first weekend of ownership: on a long-distance coach with a fold-down table the size of an iPad, I wrote 2 articles, both of which I sold and which paid for the device.

My Nokia phones had physical keyboards and very smart software for fast text input.

Now? No keyboards at all.

No, I am not happy about that, either.

I could read the screens of my Psion and Nokia in bright sunshine. American-designed ones are slowly edging back towards that, but it's still difficult. Daylight-readable screens have disappeared from the market.

I'm not happy about that, either.

My Psions and Nokias had bulletproof OSes that lasted for years without a single update, and yes, they were Internet-connected by the last few generations. They ran in a few tens of megabytes of nonvolatile storage.

Now, my tablet and iPhone and Android phones need at least 3 or 4 apps updating every day. If I don't use one for a few weeks, it's just like Windows -- I have to do half an hour of updates before I can use it. The OS needs to be replaced every month or two to fix all the flaws in it, and that's a gigabyte or so of storage.

I am furious about this.

"The JesusPhone, I swear it is smiling at me: Come to me. come to me and be saved. The luscious curves, the polished glissade of the icons in the multi-touch interface - whoever designed that thing is an intuitive illusionist, I realise fuzzily as my fingertip closes in on the screen: That's at least a class five glamour."

(Charles Stross, The Fuller Memorandum)

They're very shiny. They do a lot.

But I had a better phone and a better PDA 20 years ago. The whole is much less than the sum of its parts.
liam_on_linux: (Default)
Because I miss classic MacOS.

Unix is Unix. It's boring. It's everywhere. I run it on my desktop, my laptops, my phone, my tablet, my work PC. All different versions.

I dislike it less than the other options. I grew up on CP/M and VAX/VMS, then DOS, then Windows. At home, RISC OS.

I am not a programmer. Much of the stuff in Unix is irrelevant to me. I find its shell actively hostile, its filesystem an arcane mess that hasn't made sense since about 1973, its profusion of weird little config files, all in different formats, all to be carefully amended with various disgusting 1970s abominations of editors, to be a massive pain.

That's why my computer is a Mac. It needs less maintenance than almost anything else. My laptops run Ubuntu (and Haiku and A2/Bluebottle and IBM PC DOS 7.1). I rarely open a terminal if I can avoid it.

Classic MacOS is something else entirely. It's a thing of its time, yes, but it is a thing of great beauty. *The* single cleanest, most elegant GUI of any OS ever, from any time. No trace of a command line anywhere, not a single config file on the entire OS, and yet over 15 years it grew from something that ran in 128 kB of RAM on a 400 kB floppy to a multitasking Internet-capable OS on which I surfed the web, did my email, chatted to my friends on half a dozen systems, did my invoicing, sent data to and from multiple server OSes _and_ laid out magazines.

Its kernel wasn't elegant but there's more to life than kernels. It was the best-integrated general-purpose mass-market GUI OS the world has ever seen, and nothing ever even came close to its versatility. It was smoother and cleaner than ST GEM. It had a bigger better app selection than Amiga OS. It had a simpler yet more capable GUI than RISC OS. It made Windows or OS/2 look like sick jokes for a straight decade. And all this without any nasty dirty stinky mess of config files gluing it all together in the background.

I own a number of vintage Macs, and they're lovely, but there's no point in using PowerPC Mac OS X these days, because an Intel box does the same job better -- though it's not _really_ a Mac. Mine have gorgeous mechanical keyboards from the 1980s on them, naturally, because I'm a vintage computer fan, and that's why I am here.

But the last _real_ Mac was the Beige G3 for me. It looked like a Mac, it talked ADB and AAUI and SCSI to the outside world, and it was visibly the same family as a Mac Plus from 1985.

(I gave my Blue & Whites away, which I slightly regret.)

Even my G4s aren't that Mac-like any more. They use PC stuff like USB and Firewire and PCI, which makes them cheap to run, but somehow a bit soulless.

The Mac was not just a computer, it was a culture, and it's one I worked with at the time but couldn't afford to use myself.

And whereas the ST and Amiga were cultures too, which I respect, I wasn't part of them then. I was an Acorn user then, but Acorns were barely usable on the Internet or on LANs. That stuff came after their decline and fall, for all that I have a Raspberry Pi with RISC OS on it.

Classic MacOS came from that era, but it survived and prospered and thrived into the Internet era of the Web and USB and multimedia.

That deserves anyone's respect.
liam_on_linux: (Default)
PowerQuest PartitionMagic was one of my favourite pieces of software ever written.

It offered a lot of functionality for disk and partition management on the PC that had previously been considered impossible, or the sole domain of enterprise storage management systems, such as resizing drive partitions on the fly -- i.e. with all their contents intact.

Later, it gained additional functionality, such as the ability to merge 2 (or more) disk partitions into one larger one.

If, for example, you merged drives C, D and E, you ended up with a big drive C which contained subfolders called "\D\" with the full contents of D: and "\E\" with the full contents of E:

It was then up to you to move stuff around to sort it.

However, the thing is this:

When you move a file from one drive to another, including between separate partitions, the OS must copy the data from source to destination and then, once it's copied, remove the original file... and repeat this for every file. This is unavoidably slow. It applies even on the same physical drive, if there are multiple partitions.

But if you move a file from one folder to another folder in the same partition, on any modern filesystem, the OS can just rename the file from

/data/my/old/file

... to...

/data/my/new/file

The actual contents of "file" don't move. So it's very, very fast.
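
To make that concrete, here is a minimal C sketch, using the standard rename() call and the hypothetical paths above. Within one filesystem, the whole move is a single metadata operation; across filesystems, rename() refuses, and the only option is copy-and-delete:

    #include <stdio.h>

    int main(void)
    {
        /* A same-filesystem move: one metadata operation, however
           big the file is. The data blocks never move. */
        if (rename("/data/my/old/file", "/data/my/new/file") != 0) {
            /* On POSIX systems, rename() fails with EXDEV if source
               and destination are on different filesystems -- then
               the OS (or the user) must fall back to copying and
               deleting, file by file. */
            perror("rename");
            return 1;
        }
        return 0;
    }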

So cleaning up the folders left by a PQMagic partition merge was quite quick. It was the merge that took hours. It copied as much data as would fit, shrank D: as much as possible by moving the start, enlarged C: and then copied some more... and repeat. This could be a *very* lengthy process.
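
Schematically, that merge loop looked something like this. This is not PowerQuest's actual code -- just a sketch in C of the iterative process described above, with made-up sizes and printf() standing in for the real copy and resize operations:

    #include <stdio.h>

    int main(void)
    {
        long c_free = 500;   /* hypothetical free space on C:, in MB */
        long d_used = 1200;  /* hypothetical data on D:, in MB */

        while (d_used > 0) {
            /* Copy as much of D: as fits in C:'s free space. */
            long chunk = (d_used < c_free) ? d_used : c_free;
            printf("copy %ld MB from D: into C:\\D\\ ...\n", chunk);
            d_used -= chunk;

            /* Rewrite the partition table: shrink D:, grow C:. */
            printf("shrink D: by %ld MB (moving its start)\n", chunk);
            printf("enlarge C: by %ld MB\n", chunk);
            c_free = chunk;  /* the space just added to C: is free again */

            /* Every pass rewrites partition boundaries and physically
               copies data, which is why a big merge could take hours. */
        }
        return 0;
    }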

This kind of thing is the reason that logical volume management systems exist:

https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)

LVM is complicated and hard to understand. If the above article makes little sense, don't blame yourself. For standalone workstations, I recommend avoiding it.

So: at the bottom there's LVM, and on top of the LVM space you have logical volumes, which play the role of partitions. Those are formatted with a filesystem, such as ext4, or older enterprise filesystems from the old commercial Unixes, such as JFS (from IBM's AIX and OS/2) or XFS (from SGI's IRIX).

https://en.wikipedia.org/wiki/XFS

https://en.wikipedia.org/wiki/JFS_(file_system)

Fedora enables LVM by default, which is just one reason I avoid Fedora.

Then to make matters worse, there are filesystems which support "subvolumes" inside a partition, e.g. Btrfs.

https://en.wikipedia.org/wiki/Btrfs

Btrfs is the default FS of SUSE Linux.

Then you have subvolumes inside partitions on top of LVM volumes on top of disks, and personally it all makes my head spin.

*Because* LVM is hard, and its functionality overlaps with partitioning, there are projects that try to merge them.

For Linux, there was EVMS:

http://evms.sourceforge.net/

Unfortunately, it did not catch on, so we have LVM instead.

https://lwn.net/Articles/14816/

https://unix.stackexchange.com/questions/22885/is-there-a-more-modern-or-more-popular-version-of-evms2

RH does not support Btrfs. However, because it wants some of the features of Btrfs, RH is now building its own new combined logical volume manager / partitioner / filesystem, Stratis:

https://stratis-storage.github.io/

Stratis combines an LVM layer with the XFS filesystem.

I have heard comments that Stratis is in effect re-creating a subset of the functionality of EVMS.

This is a very typical Linux development path.

The richest filesystem/volume manager from commercial Unix is ZFS, from Sun (now Oracle) Solaris.

https://en.wikipedia.org/wiki/ZFS

Like JFS and XFS, ZFS is now open source -- however, under a licence that is incompatible with the Linux kernel's GPL.

So you _can_ compile a Linux kernel with built-in ZFS, but doing so violates the licence.

However, Ubuntu has found a way around this, with ZFS being a loadable module (AIUI) that isn't part of the kernel itself.

(AIUI. IANAL. Clarification welcome.)

Ubuntu Server offers ZFS instead, in place of Btrfs in SUSE or Stratis in Fedora (or XFS in all of them).

ZFS can replace the LVM _and_ also ext4/XFS/JFS, and therefore Stratis too, but neither SUSE nor RH will bundle ZFS because of licence concerns.

Apple _was_ going to bundle ZFS, but it too decided the licensing was too tricky, so it developed its own system, APFS, instead. But then, Apple is no longer trying to compete in the server market.

https://en.wikipedia.org/wiki/Apple_File_System

Yes, it is confusing. Yes, it is a mess. Yes, there are too many standards.

https://xkcd.com/927/
liam_on_linux: (Default)
Apparently I'm renowned for my enthusiasm for ancient software. I find it fun to play with kit that I couldn't dream of affording when it was current, in my youth. And it's very instructive to compare it with new stuff. But I also use some of it for actual paying work.

I run MS Word 97 under WINE to do my own writing. Not because I need any features in that version -- I don't. I'd be perfectly happy with Word 6, which is functionally near-identical, smaller and faster. The only reason I don't run Word 6 is that it doesn't support the scroll wheel or proportional thumbs -- the widget in the scrollbar is always a square so you get no visual feedback on how long your document is and what relative size chunk you're looking at. That's very useful info.

But I hunted, for years, for Word 6 for NT, the 32-bit version -- Word 6 being otherwise a Windows 3 app. Why? Because the NT version supports long filenames and they're jolly handy. But not proportional thumbs or the scroll wheel. There's also the small convenience that Word 97 uses the same native file format that was used until Word 2003, and everything can import it: LibreOffice, Pages, AbiWord, you name it. The previous versions' file format -- also called .DOC -- is far less widely supported, but then, it was phased out 22 years ago.

I've tried Word 6 for DOS, WinWord 6 (for Win3), Word 6 NT, Word 95 (v7), Word 97 (v8), Word 2000, Word XP, and Word 2003. Hunted 'em all down, tried on 32-bit and 64-bit Windows and WINE.

I have really looked deeply into this, mainly out of curiosity but also because I made my living using this tool.

One WINE compatibility test for me is: can I install the service releases under WINE? 97 yes, 2000 no, XP no, 2003 yes.

That falls down on Word 95. Why? Because there were no service releases.

This is an app I know inside-out and use hard and heavily, mainly because of the outlining feature that no other modern editor offers. MS Word has gained no new features I need or use since 1993.

That was the first version that ran on both 16-bit and 32-bit Windows. Then came Office 95, 32-bit only, with zero functional changes but a new look and feel: square toolbar buttons, font formatting in Windows title bars, stuff like that. Trivial.

But Word 97 had no new functionality either, just a new .DOC file format, and yet Office 97 needed 3 service releases. The first release didn't even write a Word 6 .DOC file when you asked it to: it wrote an RTF file but gave it a .DOC extension. How come nobody noticed before release? Because the older versions were smart enough to detect the format automatically and import it.

The point of this tedious little aside ;-) being that the app has grown massively since then and it's now about 10× the size, probably more -- I haven't checked recent versions (since 2007) because I detest the ribbon interface.

I think a full install of Word 97 is 14 MB. I thought Office 97 was bloated when it came out. 14 MB may have been bloated in 1997 but today it seems svelte, almost minimalist.

I pick this one app because the additional functionality is amazingly trivial -- Word 97 has a virtual yellow highlighter pen and AFAIK that is the only functional addition. Word 2000 allows tables to be nested inside table cells, and AFAIK that is it.

My point being, it's a very mature codebase, and yet it's still bloated hugely in the ~20y since it last gained a major new feature, and it's also picked up major bugs in that time, so you really do want the service releases.

The greater point being that tech support is now even worse than when I left it, because the products are massively bigger and have a lot more functionality -- though not as much more as you might expect, in many places. Things originally introduced to reduce size and complexity, such as shared libraries, have in fact done the reverse: they have hugely increased size and complexity, so that we now need major new OS features, such as containers, simply so that multiple apps, each needing specific versions of their shared libraries, can coexist on the same OS without horrible clashes.

Plan 9, I recently learned, doesn't have shared libraries. So when it was new, the Unix people of the time decried it, as it was missing something they thought important.

Plan 9's designers were right, but nobody now recognises that, because it's so long ago that the people inventing containers to fix shared-library versioning clashes don't know that there was a major successor product to Unix that identified shared libraries as a problem and eliminated it.

It was a generation ago. My colleagues sitting behind me, diligently testing an enterprise Linux distro, probably have never even heard of Plan 9.

A quondam GNU spokesman on Twitter spent a while last month lambasting me as a know-nothing idiot, and got a gang of his mates in to harangue and pillory me, because I told him that Linux is a Unix.

He thinks that's obviously wrong. He has probably never seen another Unix in his life, so obviously the old fart disagreeing with him is an idiot.

But to get to my larger point...

Tradeoffs were made in the design of Unix in the 1960s, to keep it very simple, because it was designed for a very low-end system: a PDP-7 with 4 kilowords (9 kB) of RAM.

Then in the 1970s it was rewritten in C, and further different tradeoffs were made in C to keep it very simple, to make it cross-platform and yet small and easy to compile.

Then in the 1980s, yet more tradeoffs were made in Minix to make it small and simple enough for students to understand, and yet still cross-platform on very low-end machines with no MMU etc. Shades of the original 1960s design, but nobody much noticed that.

Then in the 1990s, tradeoffs in the design of Linux were made, to make it simple to implement and to get it working in a reasonable timeframe, because commercial Unix was too expensive and Minix was too limited and Dr Andy Tanenbaum ("AST") wouldn't accept complex patches to make it 386-aware, give it proper virtual memory, etc.

And Torvalds and his followers vigorously defended those compromises, because they were pragmatic and meant it was doable and understandable and could be made to work, whereas the HURD couldn't.

Well now, we're further from the introduction of Linux (1991, 28y ago) than Linux was from Thompson & Ritchie's original PDP-7 Unix (1969, 22y before Linux). 50 years. Two human generations. [EDIT: corrected arithmetic error -- thank you Stefan!]

The sensible pragmatic compromises to get a simple monolithic kernel actually working and reasonably performant on a 25 MHz 80386 now mean a vast and vastly complex monolithic OS for gigahertz-class hardware with dozens of cores, with hundreds of thousands of full-time programmers around the world working flat out just to keep the thing working, patched and broadly generally mostly kinda safe-ish.

IMHO, AST was right.

Linux was Unix reimplemented as FOSS, from scratch, exactly as it had been done 2 decades earlier, partly because politics stopped the GNU project adopting the BSD kernel in the late 1980s. They looked at it and discarded it. If they'd adopted it, the GNU Project would have had a complete, working, FOSS Unix by about 1989 or 1990 and Linux would never have happened:
RMS was a very strong believer -- wrongly, I think -- in a very greedy-algorithm approach to code reuse issues. My first choice was to take the BSD 4.4-Lite release and make a kernel. I knew the code, I knew how to do it. It is now perfectly obvious to me that this would have succeeded splendidly and the world would be a very different place today.

RMS wanted to work together with people from Berkeley on such an effort. Some of them were interested, but some seem to have been deliberately dragging their feet: and the reason now seems to be that they had the goal of spinning off BSDI. A GNU based on 4.4-Lite would undercut BSDI.

So RMS said to himself, "Mach is a working kernel, 4.4-Lite is only partial, we will go with Mach." It was a decision which I strongly opposed. But ultimately it was not my decision to make, and I made the best go I could at working with Mach and doing something new from that standpoint.

This was all way before Linux; we're talking 1991 or so.

-- Friar Thomas Bushnell; also see The H.

They learned nothing from proprietary Unix. They improved nothing. They just redid it, the same, but as FOSS.

It was the last good chance to make some structural improvements, and we blew it.

So now, we have a maintenance nightmare. A global catastrophe of software project management, but it's enabled an industry worth hundreds of billions of dollars, which pays to keep the entire cancerous mess working.

And not a single person in the entire industry has the guts to point out that it's a catastrophic mess, because by wasting all the finest programming minds of an entire human generation (bar the small fraction employed by Microsoft to work on Windows, which is just as bad, because it didn't even learn from Unix), it still works.

As the human race busily causes catastrophic global ecosystem collapse, which has happened in the same 25y period, we have failed to materially improve on the computing state of the art from the beginning of the 1970s.

Two generations of progress lost, because if you throw enough money at the problem, even a mistake can be made to work. As TWA's Paul E. Richter Jr. said, "Give me enough power and I can fly a barn door." A monolithic 1970s OS can power the entire world, if the programming effort of the entire world is devoted to keeping it working.

But it's not efficient. C enthusiasts think it's efficient and low-level. It isn't. There are tens of millions of lines of error-checking, test frameworks and all sorts, and much of that is because of limitations in C, because of the monolithic kernel, because of those shared libraries and the very complex recursive package managers needed to suck in the millions of dependencies and sensibly provide what's needed.

This sounds absurd, but here are some citations.

A Google project containing some 2000 files and 4.2MB of source sucked over 8 gigabytes of code through the C compiler due to nested #includes.

Firefox depends on 122 packages including libTIFF, but it can't render TIFFs.

Maybe you will protest: "But that's just how software is! It needs to be that way! Modern software has to cope with so many things!"

"Hello world" in C++ involves 18,000+ lines of code. Getting a modern C++ compiler to build itself on modern hardware can take an hour or so.

Compare with Oberon, where a modern, TCP/IP-networked, just about web-capable OS written in Oberon and also called Oberon was implemented in about 12,500 lines of code.

The Oberon compiler can build itself in 3 seconds on a 25 MHz CPU.

Pick a different but still low-level language and an entire OS, far more functional than Linux was back when it became self-hosting, takes less code than "hello world".

This is not just a minor difference of degree. This is very badly wrong. This is a trillion-dollar issue. It is not an exaggeration to say this kind of issue threatens the continuing existence of our civilisation.

"But microkernels are so inefficient! All those context switches!"

I am absolutely sure that microkernels would have been less efficient, a human generation ago, when the 80386SX was the most powerful processor most people could afford.

But now, if we'd done Free UNIX properly, with modular codebases, isolated services instead of monolithic kernels, integrated networking and clustering -- all known, established, standardised stuff that was working and in production before Linux 0.01 -- I am very confident that now, we'd have something which required a very great deal less maintenance.

It's an alternate history. I can't put numbers on it because it didn't happen.

But let's look at the tech changes in that time span.

Between the invention of Unix and the invention of Linux, there were multiple generation shifts:

  • the invention of the microprocessor (mid 1970s)

  • the spread of simple, standalone, 8-bit business microcomputers with mass storage on floppies -- the CP/M and S100-bus generation (late 1970s)

  • their replacement with simpler, standalone, 8-bit home computers, mostly without mass storage, and simultaneously, 8/16-bit IBM PC-compatible business computers (early 1980s)

All the previous generations' software was replaced.

  • 8-bit home computers' replacement with GUI-based 16-bit home (and wealthier business) computers (mid 1980s)

All the previous generations' software was replaced again.

  • Their replacement with GUI-based 32-bit-capable PCs (running a 16-bit OS, Windows 3)

All the previous generations' software was replaced yet again.

Then came Linux.

Since Linux came along, we've gone through about half as many significant technological shifts, I'd say.

  • 32-bit GUI OSes become universal on the PC (mid-1990s)

  • the switch to NT/OS X, along with ubiquitous high-speed Internet connections (turn of the century)

  • the appearance of 64-bit x86 (mid noughties -- introduced 2003)

  • Multi-core PCs became common (late noughties)


Basically, the PC industry moved to 32-bit Windows 25 years ago and hasn't substantially changed since. NT took over in the form of Windows XP in 2002-2003, around the same time as Mac OS X 10.2, the version that made Mac owners start to want to switch.

This is also around the time that Web commerce starts to drive adoption of Linux server farms, aided by VMware.

By which point, the entire computer industry was running 32-bit C-based multiuser OSes: Unix (including Mac OS X), Linux and Win NT. But their application ecosystem was shared with the previous generation of OSes from a decade before, and the underlying designs of those OSes are from the 1970s: Unix and DEC's VMS.

The industry attempted to move to new, home-grown OSes, such as OS/2 and Apple's Copland. It failed. So instead, OS design actually took a step backwards when 32-bit micros became usefully able to run 1970s minicomputer OSes, which emulated the previous generation of 16/32-bit microcomputer OSes well enough that users could keep their apps.

Since then, I would argue there has been no big shift. Just incremental performance and capacity improvements.

We had total computer generation shifts every decade or so from the invention of the reprogrammable digital computer just after WW2, for 2 human generations -- about 50 years. Everyone in the industry was used to it. It was normal.

It happened to me personally several times from when I got interested as a schoolchild in the early 1980s: from a 48K Spectrum and cassettes, then Microdrive, to a Spectrum 128 with an MGT DISCiPLE and floppy drives, to an Amstrad PCW, to an Acorn Archimedes, to a 386 laptop running OS/2 2 (because I couldn't get Slackware to install), and then to a 486 running Windows 95.

Every time, I had to totally relearn the entire OS, learn new languages, and replace (or rewrite) every single program I used.

But since the mid-1990s, we've just been iterating the same design.

Now, we have a generation of programmers who have never in their lives, since childhood, ever seen a shift such as the ones I endured multiple times. The thought of throwing out all their software and starting again is unimaginable, inconceivable to them.

But before the mid-1990s, the entire computer industry did just that, at least every other hardware generation from the 1950s to the 1990s.

Then we stopped. Not because we'd achieved perfection -- we very definitely hadn't -- but we had stuff that was good enough, which with effort worked and kept working.

So we stopped developing new stuff, and just kept polishing the same things, more and more. Now they're very shiny indeed. As Douglas Adams said:

"It is very easy to be blinded to the essential uselessness of [their products] by the sense of achievement you get from getting them to work at all. In other words—and this is the rock solid principle on which the whole of the Corporation's Galaxy-wide success is founded—their fundamental design flaws are completely hidden by their superficial design flaws."

The result is that we've accumulated an almost inconceivably vast amount of cruft, but everyone thinks that that is perfectly normal. Even old hands like me. We use tools that are at least a quarter of a century old on OSes whose design is twice as old as that and we like it!

That is not normal, even for this industry.

There's been no substantial, revolutionary innovation since the GUI was invented in the late 1970s, and all we did with that was bolt that onto 1970s minicomputer OSes.

This has to end.

About 2012, pursuing a retrocomputing story that hadn't already been done to death, I came across Lisp Machines, and a war that has been successfully covered up: in the 1970s and 1980s, on the new class of personal workstations, there was a battle between two entirely different ways to write software and operating systems.

One side used radical high-level languages and designed special, elaborate processors to run them natively. The entire OSes were single, dynamic entities, whose code could be inspected and modified as it ran. No compilers, no linking, no static binaries. A whole new world, a wonderful programmer's playground.

The other side adapted a minimalistic decade-old minicomputer OS, written in the most minimalistic language around, bolted a clumsy GUI on top, and ran it on stripped-down, minimalist processors. It wasn't fancy or clever, but it was cheap and fast.

Cheap and fast won, of course.

Now, you never even hear that there was another way.

The rich dynamic languages hang on as programming tools on the not-even-slightly minimalistic descendants of those fast-and-cheap boxes, after they merged with IBM-compatible PCs. Even the sleek fast chips went away.

Only a few old folks are nostalgic for the good old times. 30, 40 and 50somethings, including my cohort, rhapsodize instead about 1980s home computer OSes: Amiga OS, ST GEM, RISC OS. They were small, fast and simple, and stomped all over PC OSes of the time. But the PC OSes won, and whenever those '80s OSes survive -- such as AROS or modern RISC OS -- they're hopelessly crippled and compromised compared to their modern rivals.

I wondered if there was anything that had the strengths but not the weaknesses. That was small, simple and fast, but also clean and elegant, which ran on modern hardware, on different architectures, and could exploit multiple CPUs, 64-bit machines with many gigabytes of RAM and so on.

So I found it very interesting that, after years of reading about this field, I basically stumbled across Oberon and found that it fits what I was looking for remarkably well.

It's very small and simple (cf. Sinclair QDOS and kin).

It's FOSS (cf. QDOS descendants Minerva and SMSQ/E, or AROS, or AFROS, or RISC OS).

It has a current version and it does actually run properly on modern hardware, not just in an emulator. (Unlike all those, except RISC OS.)

It's a clean, simple design, done in 1 language. (Unlike all of them, really.)

It was actually used, by real people in the real world, for years. It has 3rd party apps, for instance. (Like all the above, albeit less so.)

It's very obscure but among those that know about it, it's widely admired.

It's renowned for its elegance.

It was very influential in its day. (Mostly, unlike all of them.)

It has survived in some places by becoming just another language-stroke-development environment for more mainstream OSes (like Smalltalk and Lisp before it, but unlike the others).

Unlike, say, Lisp, it's not arcane: its relatives Turbo Pascal & Delphi were once some of the world's favourite development tools, used worldwide by tens to hundreds of thousands of people, and so admired within Microsoft that it went to great lengths to poach their lead programmer.

Its most modern offshoot, A2/Bluebottle, is SMP-aware and can be used on the Web, a personal acid test of mine. It's only about as usable in this role as RISC OS, i.e. not very, but that it does it at all is impressive.

Haiku (the FOSS BeOS) ticks a lot of these boxes too, but it's x86 only, is only partially compatible with its predecessor, while being bigger and slower, and as a big, rich, modern 1990s OS that aspired to go mainstream, it's not some nice simple student-friendly thing.

For the avoidance of doubt, I am not saying that we should all just abandon Unix and switch to A2, or Lisp Machines, or Smalltalk boxes, or anything like them.

My thinking is more like this:

  • Like Yellowstone blowing its top or the San Andreas Fault slipping, we're overdue for a big shift.

  • This is really going to hurt when it comes.

  • Probably much of the current stuff will be wiped away. Eventually that will be seen as a good thing. (Unless it doesn't happen, in which case humanity's last biggest opportunity since the moon landings will be wasted.)

So what will it be? Multicore didn't do it -- that just killed off DOS/W9x, at last.

64-bit didn't do it. Barely a blip in the end, remarkably.

Java didn't do it.

SSDs didn't do it. Smartphones and tablets didn't do it. They run modified existing OSes, concealed behind simplified UIs and a lot of sandboxing.

What's left? Persistent memory, AKA nonvolatile RAM.

On servers, its impact could be quite minor. But setting servers aside, on client devices, it could be very radical indeed. If you have a terabyte or so of cheap fast main memory that's nonvolatile, why would you need a disk drive? Turn the computer off and when you turn it back on, it picks up where it left off. No suspending, no resuming. No shuffling data from RAM onto flash disks and back. No flash. No disks.

But if you have no disks, whereas you certainly can emulate a hard disk by partitioning your NVRAM and formatting a bit of it, why would you? There's no need. There's no need for a resident filesystem at all. The computer never normally boots, unless it's updated; it just stops and starts.

Can you imagine Unix, the OS where "everything is a file", if there is no filesystem and no files?

Can you imagine Windows without a C: drive? With no drives at all?

It doesn't really work. Their central metaphor is the notion of files on disk, which either contain binaries or data. Binaries are "loaded" into RAM. They read data from other files -- config, working matter -- then they put it back when they're done.

Eliminate that and there isn't much left.

So, if that happens, what will the resulting systems look like?

I went looking for previous systems that didn't have the filesystem-centric metaphor. I found just four good solid examples.

  • Multics, but I've barely been able to find anything much about it -- what there is is vast and impenetrable.

  • IBM OS/400 -- ditto.

These are both so old, they seem to assume almost no RAM. So although they're single-level-store systems, that one level is disk storage. RAM is no more important than the cache on a modern CPU: you rarely consider it when looking at or building software. So I don't think they have a lot that's relevant to teach us.

This leaves just two:

  • Lisp Machines: everything is lists, inside one giant interpreter with special CPUs to make it quick enough to be just about usable. One language all the way down, with no clear divide between "OS" and "apps" and "data".

  • Smalltalk boxes: everything is objects, inside one giant interpreter. This runs on top of a fairly conventional OS on a fairly conventional CPU, not that you see it, and the first examples were written in something vaguely Algol-like. One language for all the stuff you can see and work with.

In both cases, "booting" means loading the static part of the core OS from disk but not initialising it -- instead, the system's previously-saved state is restored from disk. You work for a while, write the state back to disk, and then turn it off.

The central working abstractions for both types of machine are in-memory structures, specific to their programming languages, rather than the filesystems which are the bedrock concept in Windows and Unix.

This seems to me to be a fairly good fit for a machine with a single level store of nonvolatile memory.

Yes, of course they will still need to support filesystems. We're going to want to back these things up sometimes, install new versions and additional functionality, and we'll want to support removable media, and connections to existing remote servers, etc.

The key point here is that although filesystems are present, that the filesystem won't be the central, defining abstraction.

Before we got caught in the software tar pits of the mid-1990s, it was normal for old technology to gradually atrophy away. Those mid-1990s PCs evolved from early-'80s home computers with a ROM BASIC which loaded and saved to an audio cassette drive via a dedicated port, but that functionality's long gone from absolutely all PCs today -- and nobody misses it.

Another comparison:

Early mainframes had vast, complex front panels for their operators, festooned with the famous "blinkenlights". In later ones, they just gradually went away, almost unnoticed.

Minicomputers and the first mid-1970s microcomputers went through phases of toggle switches for entering code bit by bit, and a row of lights for output. This was the lowest common denominator of programming. Then the switches went away, replaced by teletypes and paper tape. Then the teletypes went away, replaced by glass terminals, and paper tape was replaced by floppy drives.

Although VMS, Windows and Unix all have roots in the era of front panel switches, paper tape and teletypes, they either support them very poorly or not at all today. It's just not relevant. But they didn't just disappear overnight. They gradually went from central and inseparable to less important to barely-maintained code nobody uses any more to a novelty feature that gets made into amusing Youtube videos.

Now, if you're rich, you have a terabyte of nonvolatile storage in your pocket, driven by multiple 64-bit RISC cores, with which you interact via a huge 24-bit graphical framebuffer and speech. This would be unimaginable to someone in the late 1960s or early 1970s. That's not a computer, that's SF. Clarke's 3rd law, and so on.

I think they might be even more incredulous if you told them that, deep inside, the OS would still be usable via a TTY and paper tape, if you were extraordinarily patient.

It should be incredible. It's wrong. We had richer interaction models 40 or 50 years ago, from Sketchpad to the Canon Cat to Oberon's TUI.

And it all went nowhere. A mere sideline. It seems to me that it went something like:

  1. Xerox PARC invented something amazing.

  2. Apple loved it, although it didn't understand it, and delivered something far simpler but still radical.

  3. This was far too expensive so they made a cheaper, much dumber version.

  4. Microsoft found a way to graft that onto its existing million-selling product.

  5. After a decade of iterations, MS made version 3 of this fairly usable and it became a hit.

  6. After nearly another decade, in 1995 MS made it fairly good, and followed it with a better-architected successor.

  7. Apple is forced into copying MS's plan -- graft the UI onto more conventional underpinnings -- but does so onto a better base, minus a lot of MS's toxic obsession with backwards-compatibility and marketing.

  8. The FOSS Unix world copies them, but nobody much notices until it makes it onto phones.

Result, something "just barely good enough" that doesn't actually materially develop the benefits of what inspired Apple in the first place.

Well, I think we're coming to a tipping point where a cheap technological development will become so commonplace that keeping the old disk-centric model will become a millstone around our collective neck.

The problem is that everyone is so used to the status quo that it's going to be very hard to persuade them to let it go.

So it will have to be something that offers radical improvements to tempt people to have a go*.

Like, for instance, banishing entire categories of bugs and errors, while giving improvements in compilation time of 3-4 orders of magnitude, delivered by an environment that is much richer than a command line but simpler than a WIMP desktop.

Something as shocking as a Mac or Amiga was in 1984.

Inventing that from scratch* would be almost unimaginably hard.

So I'm saying, let's try to re-create some of those 1970s systems on modern kit, in FOSS, ready for the NVRAM computers when they arrive.

Re-implementing stuff that has been done before, and improving on it, is something the FOSS world excels at.

Let's get on with it, so we're ready.


* Puns intended
liam_on_linux: (Default)
 My octogenarian mum is on her second iPad now, a 2012 iPad 3, the first Retina model of iPad. It’s a decent device, quite high-spec, fast and reliable. It has a lovely sharp 1536×2048 display, a gig of RAM, excellent battery life even for a second-hand device, Siri, the works. It runs iOS 9, version 9.3.5 to be precise.

This is the same version as its predecessor, a much slower non-Retina (768×1024) 2011 iPad 2 with 512 MB of RAM. The iPad 3 feels much quicker although both have dual-core 1GHz ARM CPUs.

But Microsoft, in its finite wisdom, is rewriting Skype -- rumour has it as a Javascript applet -- and emasculating the desktop versions so they match the mobile versions more closely. The old versions are being discontinued and their servers turned off, and the newer versions' proprietary protocols are incompatible with the old. Recent versions of Windows 10 stealthily replace the standalone desktop Skype app with a "modern" Skype app from the Windows Store, although they do leave the desktop app installed, and you can remove the modern version. One way to tell the difference: the classic version shows different status icons depending on whether you're connected, logged in and so on; the modern one shows just a blue Skype logo.

Old versions of Skype (from before version 8) can’t connect any more… and the new versions only run on iOS 10 or above.

Tablet sales are slackening off. Perhaps such moves are intentional as a way to drive sales of newer models, when the old devices are still perfectly functional.

But there is an odd little wrinkle. The iPhone version of Skype 8 works on iOS 9, but the tablet version doesn't. And the iPad is really just a big iPhone, so it can run iPhone apps. In the early days of the iPad, when there were few iPad-native apps, iPad owners routinely ran iPhone apps, which appeared huge, with big chunky controls. But they worked.

If you coax the iPhone version of Skype onto your out-of-support iPad, you will still be able to connect, and both make and receive calls and messages.

I couldn’t find any instructions online. There are a couple of wordless, agonizingly slow Youtube videos showing how to do it – if you read French.

So I thought I’d describe how I did it.

The basic procedure is that we will use a specific old version of iTunes on Windows to add Skype for iPhone to the Apple account used on the iPad, and then use the App Store on the iPad itself to install this from our web account.

What you will need:

  • an iPad that can’t run anything newer than iOS 9

  • a working Apple ID

  • a Windows PC on which you can install an old version of iTunes

    • Ideally one which didn’t have iTunes on it already


  • a cable to connect them


Just to make this harder, in version 12.7 Apple removed iTunes' ability to install and manage apps on iOS devices. This method won't work with any current version of iTunes, so you'll need to install a special, older version -- iTunes 12.6.3, the last one with the App Store functionality. If you already have a current version of iTunes installed, you'll need to remove it first. Old versions of iTunes can't open the libraries of newer versions, which means you'll lose access to your iTunes library.

So make sure you have a backup, export your music/photos/videos and any other content to somewhere else and make sure you have a safe copy.

Then download iTunes from here: http://osxdaily.com/2017/10/09/get-itunes-12-6-3-with-app-store/

The other option is to make a special new Windows user account, just for this process. You’ll still need to downgrade iTunes, at least temporarily, but if you work in a dedicated one-shot user account, the new account won’t have access to your library, so you won’t lose it.

If you don’t have anything in your iTunes library, or you don’t normally use iTunes at all – like my mother, or indeed me, as I sync my iPhone to my iMac – then the easiest way to proceed is to erase your entire iTunes library and config files.

The procedure is as follows:

  1. Install the last version of iTunes with the App Store.

  2. Log in to the same Apple ID as the as used on the iPad.

  3. In iTunes, find Skype for iPhone.

  4. Ask to install it on your device. It’s free, so no payment method is required.

  5. Now, Skype for iPhone is on the inventory of your Apple ID.

    (At this point, you can connect the iPad and try to sync it. It won’t install Skype for you, but you can try.)

  6. Now, eject your iPad using the button next to its icon in iTunes. After that, you’re done with the PC.

  7. Now switch over to the iPad and open the App Store.

  8. Go to the “Purchased Apps” tab.

  9. Note: you might need to switch views. There is a choice of “iPad apps” and “iPhone apps”. Since we’re looking for Skype for iPhone, it should appear under iPhone apps, not under iPad apps.

  10. If you can’t see it, you can also search for “Skype for iPhone” – capitals don’t matter, but the exact phrase will help.

  11. When you find it, you should see a little cloud logo next to it. That means it’s on your account, but not on this iPad.

  12. Tap the “install” button.

  13. The App Store should tell you that the latest version will not run on your device – it needs iOS 10 or newer, which is why we are here. Crucially, though, it should offer to install the latest version which will work on your device. Say yes to this.


That is about it. It should install the iPhone version of Skype onto your iPad 2 or 3. You will see a small circle in the bottom right corner of the screen. This lets you change the magnification: normally the app is doubled to fill most of the screen, and the circle says "1×". Tap it to turn off the scaling and the app will shrink down to phone size; the circle will then say "2×", and tapping it again returns the app to double size.

For me, it worked and I could make and receive calls. However, I could not send video, only receive it.

Sadly, if you use large text -- my mum's eyesight is failing -- the phone app is almost unusable due to the text size, so we have sold the iPad on and bought a newer, fifth-generation model. Now she is struggling with iOS 12 instead, which is a major step up in complexity from iOS 9. If you are attempting this for a technophobe or an elderly relative, you might consider switching to FaceTime, Google Hangouts, or something else, as newer iPads are significantly less easy to use.
liam_on_linux: (Default)
A poorly-worded question on Quora links to a rather interesting (if patchily-translated) Chinese discussion of the Fuchsia OS project.

It suckered me into answering.

But so as to keep my answer outside of Quora...

Fuchsia is an incomplete project. It is not yet clear what Google intends for it. It is probably intended as a replacement for Android.

Android is a set of custom layers on top of an old version of the Linux kernel. Android apps run on a derivative of the Java virtual machine.

This means that Android apps are not strictly native Linux applications.

Linux is a Unix-like OS, written in C. C is a simple programming language. It has many design defects, among which is that it does not have strong typing, meaning that it is not type-safe. You can declare a variable as being a long floating-point number, and then access one byte of it as if it were a string and replace what looks like the letter "q" with the letter "r". But actually it wasn't a "q", it was the value 113, and now you've put 114 in there. What was the number 42.37428043 is now 42.37428143, all because you accidentally treated a floating-point number as a string.

[Disclaimer: this is a very poorly-described hypothetical instance and I am aware it wouldn't really work like that. Consider it figurative rather than literal.]

Better-designed programming languages prevent this. C just lets you, without an error.

It also does little to no checking on memory accesses. E.g. if you declare an array of 30 numbers, C will happily let you read, or worse still write, the 31st entry, or the 32nd, or the 42nd, or the 375324564th.
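
Here is a compilable illustration of both problems -- a deliberately bad C program, written to show what the language permits. A modern compiler may emit a warning for the array overrun, but it will still compile and run it; the byte-level overwrite of the double draws no complaint at all:

    #include <stdio.h>

    int main(void)
    {
        double price = 42.37428043;

        /* Treat the double's bytes as characters and overwrite one.
           Perfectly legal C -- and it silently corrupts the number. */
        char *bytes = (char *)&price;
        bytes[2] = 'r';
        printf("price is now %f\n", price);

        /* An array of 30 numbers... and a write to the 32nd entry.
           This is undefined behaviour: no runtime check stops it. */
        int nums[30] = {0};
        nums[31] = 99;

        return 0;
    }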

The result is that C programs are unsafe because of the language design. It is essentially impossible to write safe programs in C.

However, all Unix-like OSes are written in C. The entire kernel is in C, and all of the tools, from the “ls” command to the text editors to the programs that read and write configuration files and set up the computer, all in C. All in a language that has no way to tell if it’s reading or writing text or integer numbers or floating point numbers or hexadecimal or a binary-encoded image file. A language which won’t tell you if you slip up and accidentally do the wrong thing.

A few geniuses can handle this. A very, very few. People like Dennis Ritchie and Ken Thompson, who wrote Unix.

Ordinary humans can’t.

But unfortunately, Unix caught on, and now most of the world runs on it.

Later derivatives of the Unix operating system gradually fixed this. First Plan 9, which imposed much stricter limits on how C worked, and then tried to replace it with a language called Alef. Then Plan 9 led to Inferno, which largely replaced C with a safer language called Limbo.

But they didn’t catch on.

One of the leading architects of those operating systems was a programmer called Rob Pike.

He now works for Google, and one of his big projects is a new programming language called Go. Go draws on the lessons of Plan 9, Alef and Limbo.

Fuchsia is written in Go instead of C.

Thus, although it has many other changes as discussed in the article you link to, it should in theory be fundamentally safer than Unix, being immune to whole categories of programming errors that are inherent to Unix and all Unix-like OSes.
liam_on_linux: (Default)


I had to point out a couple of issues...

* The OS that came with it... The original 'Strads came with _two_. Digital Research's DOS Plus:
https://en.wikipedia.org/wiki/DOS_Plus
... _and_ MS-DOS. DOS Plus was very obscure -- the only other machine I know to come with it was the Acorn BBC Master 512 -- but it was a forerunner of DR-DOS, which was a huge success and much later became open source.

* That isn't WordStar you show. Well, it sort of is, but it's not _the_ WordStar that you correctly describe as the leading DOS wordprocessor until WordPerfect came along. Amstrad bundled a special custom wordprocessor called WordStar 1512. This is a rebadged version of WordStar Express, which, although it came from MicroPro Corp, is in fact totally unrelated to the actual WordStar program. The rumour was that WordStar Express was a student project, written in Modula-2. It is totally incompatible with actual WordStar, using different keystrokes, different file formats, everything. But it did allegedly get the student a job! It didn't sell, so Amstrad got it very cheap.
https://www.wordstar.org/index.php/wordstar-history

* WordStar was originally written for CP/M and ported to MS-DOS, meaning that it didn't support MS-DOS's more advanced features, such as subdirectories, very well. MicroPro flailed around a bit, including developing WordStar 2000, another unrelated program that looked similar but used a totally different and incompatible user interface, thus alienating all the existing users.

(And WordStar users are almost fanatically loyal. George R R Martin is one -- all of "a Game of Thrones" was written in WordStar!)

After annoying its users for so long that various companies cloned the original program, MicroPro eventually did something marginally sensible. It bought the leading clone, which was called NewWord, and rebadged it as "WordStar 4," even though it wasn't derived from WordStar 3 at all.

So what Doris had there is a shoddy alternative app from MicroPro, and a better 3rd party alternative that in fact _became_ the real product.

* Locomotive BASIC 2 -- this was sort of a sop, a bone thrown to Locomotive Software who did almost all the original Amstrad CPC and PCW 8-bit business apps. BASIC 2 is pretty much totally unrelated to, and incompatible with, the ROM BASIC in the CPC range, or Locomotive's Mallard BASIC for the PCW, but it was written by the same company. It was the only high-level language built for PC GEM, I believe. It was sold on nothing other than the Amstrads and so disappeared into obscurity.

Rather than BASIC 2 and the fairly awful WordStar 1512, Amstrad ought to have offered LocoScript PC, the DOS version of the Amstrad PCW's bundled wordprocessor. This was a very good app in its day, one of the most powerful DOS wordprocessors in its time, with advanced font handling and very limited WYSIWYG support.

* No RAM expansion in the 1640: that's a plain mistake -- there's no expansion possible. The 8086 has 20 address lines, so it can only address 2^20 bytes = 1 MB of RAM, and the upper 384 kB of that space is filled with ROM and I/O space in the PC design. 640 kB is all an 8086 PC can take, so there *is* no possible expansion. Thus, no point in fitting slots for it.

Apart from these cavils, a good video that I enjoyed! 
