liam_on_linux: (Default)
Prompted by this:
http://www.zdnet.com/the-windows-ecosystem-in-2013-is-more-diverse-than-you-think-7000021181/?s_cid=e539&ttag=e539

Linux is doing great at the moment - but for the wrong reasons. Not because it's caught up with or surpassed the apps or integration of Windows; it hasn't - it's a decade or more behind.
liam_on_linux: (Default)
Back in early September, Ubuntu announced that the nightly builds of Ubuntu now supported their new Mir display server, the planned replacement for X.org.

So I tried it. My impressions from that time were:

<<
As of last night - very late last night - I have Ubuntu "Saucy Salamander" up & running with the Mir display server. I am really quite excited about this. Glitches everywhere, but it works!

It's what would have been beta 1 if they still did things like alpha & beta releases, which they don't, because the SABDFL knows all and we must trust in his wisdom. Or something.

Also, I was very amused to read its kernel startup messages.

The kernel version is "Linux 3.11 for Workgroups", & it says:

MODSIGN: Loaded cert 'Magrathea: Glacier signing key XXXXX'

... during boot. :-)
>>

Soon after, I was horrified to discover the vconsole bug.

But it's getting there.

A fortnight later, ish, I've had another look at the alpha.

It's improved, significantly. They've fixed the security issue - stuff typed in a vconsole no longer appears in the foreground XMir app.

There is still a fair bit of screen flickering, but substantially less. From a quick play, I'd go so far as to say it's usable now; it's gone from horrible to merely distracting. It's the sort of thing I probably lived with in the era of OS/2 2 and Windows 3. :¬)

It's not there yet, not ready for prime-time, but then, they have about a month to go. I'd say it was approaching beta-ready.
liam_on_linux: (Default)
There are some remarkable claims about Lisp out there. Some are
substantiated, and there are so many that, it seems to me, they cannot
all be wrong.

http://www.paulgraham.com/avg.html

http://www.lambdassociates.org/blog/bipolar.htm

& in general

https://en.wikiquote.org/wiki/Lisp_programming_language

Disclaimer: the following is all my best-guess approximations to what's
going on, after a LOT of effort, reading, discussion, more reading, and
in a few cases, going to very smart people and feeding them and
interviewing them until they can find sufficiently brain-dead
oversimplifications of things that my stupid non-programmer mind can
comprehend. I may have this all to cock.
liam_on_linux: (Default)
Which offers better performance, an Atom server or a Pentium 4 one?

It's an interesting comparison, and I don't know the direct answer.

The core problem with CPU performance comparison is that people tend to latch on to clock rate as an easily-understood measure. This doesn't work, because clock rate is only very loosely correlated with performance. It's like saying that a 125cc motorbike is 4x more powerful than a 16-litre truck engine because the motorbike engine does 16,000 rpm and the truck engine maxes out at 4,000 rpm. The relative speeds are true; the conclusion drawn from them is utterly wrong. Yes, one revs faster, but it is in no way more powerful. It is less powerful.
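To put rough numbers on the analogy (the torque figures here are purely illustrative, not real engine specs): power scales with torque times rpm, not rpm alone, just as CPU throughput scales with work-per-cycle times clock rate, not clock rate alone.

```shell
# Illustrative figures only: power ~ torque x rpm, not rpm alone.
bike_rpm=16000;  bike_torque=10      # small 125cc engine: high revs, little torque
truck_rpm=4000;  truck_torque=2500   # big truck engine: low revs, huge torque
echo "bike power (arbitrary units):  $(( bike_rpm * bike_torque ))"
echo "truck power (arbitrary units): $(( truck_rpm * truck_torque ))"
```

The truck comes out vastly "stronger" despite revving at a quarter of the speed. Clock rate is the rpm; how much each core gets done per cycle is the torque.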
liam_on_linux: (Default)
A couple of months ago, I tried to update my 2007 Toshiba Satellite Pro P300-1AY laptop from Ubuntu 12.04 to 13.04. It failed, badly -- my AMD RV620 GPU is no longer supported by fglrx, the proprietary AMD/ATI graphics driver. But Ubuntu used it anyway, resulting in a broken GUI.
liam_on_linux: (Default)
[A chap on a mailing list I'm on talked about being unable to find the "Shutdown" option on Windows 8, and how while he and a friend couldn't work out how to "use Twitter" in over half an hour, his mother worked it out in five minutes.]

I've fallen victim to the "trying to be too clever" PEBCAK error myself, a good few times.
(E.g. I spent ages trying to work out the command to tell my first Apple Newton to shut down. Eventually I consulted the manual. Press the on/off button, it said. I think I actually blushed.)
I tried to learn from it. I don't always win.
Shutdown options are like a "sleep" option on a notebook. You don't need one. Just close the lid.
liam_on_linux: (Default)
An online acquaintance commented that few Win8 users were using Modern apps, and that the Modern interface was useless for desktops because it's too tablet-centric, and it doesn't offer good multitasking.

Well, yes, it is designed for tablets. Why? Because it certainly looks like the personal computer of the near future - not the office workstation, maybe, but individuals' own machines - will all be tablets, except for a few nerdy hobbyists.
And true, the current Modern interface is poor at multitasking. But it does it, and better than Android or iOS do: you can see two apps at once.

liam_on_linux: (Default)
Symbian was OK. EPOC, its progenitor, was in some ways better. (I write as a Psion owner, user and -- TBH -- fan.)

AIUI, and I do not have good solid references on this, EPOC was a very early adopter of C++ as opposed to plain old C, and as a result it did many things in extremely nonstandard ways compared to later C++ practice. Its string handling, error handling and all sorts of other things were very weird and proprietary compared to the way the greater C++ community ended up doing them.

liam_on_linux: (Default)
A reader in Another Place asked something incisive:

> Does hibernation work with swapspace but without zRam?

I was compelled to answer:

That is a really good question. I don't know. I will try it and report back.

OK, after a little more experimentation, the quick answer is "no".
liam_on_linux: (Default)
So, just for the experiment, I tried configuring a 1GB RAM VM with both zRam (compressed swap in RAM) and swapspace (on-demand swapfiles in /var so you don't need a swap partition).

It seemed to work fine. I loaded Firefox with a ton of image tabs, plus LibreOffice, the GIMP, VLC, Evince, System Monitor and watched the swap gradually climb until zRam's half a gig of "virtual" virtual memory (IYSWIM) was exhausted, at which point it started creating swapfiles - one of 216MB followed by one of 270MB.

System performance gradually degraded, as you might expect. Eventually System Monitor froze up and then Firefox, but I suspect that if I had given them long enough, they'd have recovered as they were swapped back in.

The only snag: hibernation itself seemed to go through happily, but when the VM rebooted, I got a cold boot rather than a resume from hibernation.

But if you don't want hibernation - and I don't, not on desktops - then the combination seems to work well for slightly low-memory machines.

I'd say that if you don't want or need hibernation support, there doesn't seem to be much need for a dedicated swap partition any more.

[Techie details: Mint 14, 32-bit, both Cinnamon and Maté desktops, fully up-to-date. 1GB RAM, 8GB VHD, a single ext2 partition for / and no swap partition. Running under the latest VirtualBox under the latest Ubuntu 64-bit, 3D graphics enabled (for Cinnamon's benefit).]
liam_on_linux: (Default)
I had problems with this a few years back, but the fix has changed now. Merely installing the Virtualbox Additions does not seem to be enough to get hardware OpenGL working. Also, the instructions I've found only mention Fedora.

To see whether you are running with hardware or software rendering, use the command "glxinfo". You'll need to install the "mesa-utils" package; the info you're after is on the first page of output from glxinfo, so pipe it through "less" like so:

glxinfo | less
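Alternatively, grep pulls out just the lines that matter. (The exact strings vary with the Mesa version; "llvmpipe" and "Software Rasterizer" are the usual software-rendering giveaways, so treat these as typical values rather than guarantees.)

```shell
# Show only the vendor/renderer lines from glxinfo's output:
glxinfo | grep -E "OpenGL (vendor|renderer) string"
# Software rendering typically reports "llvmpipe" or "Software Rasterizer"
# as the renderer; hardware rendering names the actual GPU
# (or "Chromium" in a VirtualBox guest with 3D acceleration working).
```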

liam_on_linux: (Default)
(I've bolted this together from a few comments on a previous post, as it was buried some 125 comments down, and promoted it to a post in its own right. Commentary much appreciated, as ever!)

I used to program. I enjoyed it. I wasn't very good at it, but at a recreational level, I found it fun. When I first tried doing it for work purposes, it put me right off. I mentioned this on Twitter recently. (Not sure if that link will open the conversation in its entirety.)

My vague and not-at-all earth-shattering insight is something like this.

For a start, there are at least 4 or 5 levels of programmer. This is counting only those who know that they are programmers - not, for instance, people constructing huge elaborate models in Excel without realising that what they are doing is in fact programming.

1. There are the very smart people, who have in many cases studied the discipline, read Knuth etc., know multiple languages including very arcane ones, and see and discuss the abstract patterns and behaviours. These people may well have some familiarity with Lisp, Scheme, etc. and concepts like metaprogramming.
liam_on_linux: (Default)
(Another repurposed mailing-list post. Please feel free to rip this apart.)

Maybe Ubuntu are not desperate. Do not forget Shuttleworth's wealth. He
can fund this for a long time.

Note also that a dot-com millionaire - not a billionaire, a "mere"
millionaire - is able to fund his own private freaking *space
programme* these days, with *multiple launch sites* mark you, out of
his own pocket without outside investment. While also running the most
famous pure-play electric-car company on the side.

liam_on_linux: (Default)
I am slightly surprised at the number of interesting new (to me) things I've stumbled across recently in a field I am increasingly disenchanted with.



The Raspberry Pi is old news, but I just discovered that there is a port of Plan 9 for it:
http://www.raspberrypi.org/phpBB3/viewtopic.php?f=80&t=24480

liam_on_linux: (Default)
I love my Android phone in some ways - what it can do is wonderful. The formfactor of my Nokia E90 was better in every single way, though. Give the Nokia a modern CPU, replace its silly headphone socket, MiniUSB port & Nokia charging port with a standard jack & a MicroUSB, make the internal screen a touchscreen, and I would take your arm off in my haste to acquire it.
liam_on_linux: (Default)
I have long had a (very) idle dream about learning enough eLisp to convert Emacs, which I gather is quite phenomenally powerful and all that -- Neal Stephenson says so and he is as a god to me -- into an actual usable modern editor. I.e. something that looks and works like Notepad or Gedit or MS-DOS Editor: a basic CUA interface, because those are the keybindings I have been using since the end of the 1980s and they are now indelibly burned into my muscle memory.

But someone has gone and done it already.

http://ergoemacs.org/

I've been experimenting a little. It's Emacs, but it actually works with the same keystrokes as every other editor for the last 20+ years. It's amazing.

And unlike the lovely Aquamacs, it doesn't need Mac OS X.

I am not sure I have the flexibility to adapt any more, but I am liking what I am seeing. There's a monster lurking behind this friendly face, but it gets me over the initial hump that none of the editing keystrokes I use daily on Windows, Macs and Linux alike work any more on Emacs.

Aside: don't suggest Vi. I can use it for the very basics, but then again I could probably type with my nose if I had to. I choose not to because it's slow and unpleasant. It is not even as modern as MS-DOS Edit - it's a vestige from the 1970s. I remember using text editors on VAX/VMS nearly 30 years ago and having to flip from edit mode to command mode and back all the time. It was crap then and it's intolerable now; I don't care how many other features it has, the basic operating mode is a POS.
liam_on_linux: (Default)
The other day, I linked to an amusing  Miggy-versus-Jackintosh page I'd found.

This led to a fairly well-mannered reignition of the old argument. (Ta for the repost, Peter!)

I thought my comment might be worth a post, since I don't post here as often as I'd like...

I think the Amiga was by far the better machine, yes, in hardware and in software. In raw CPU speed the ST had an edge and in a way I admire the simplicity of the ST's design: the Amiga was expensive and stuffed with custom chips and a custom OS unlike anything else, albeit based in small part on TRIPOS. (And the OS, like the Archimedes', was a last-minute stand-in for a failed project anyway.)

The ST was a Sinclair ZX Spectrum for the 16-bit era:
* the same COTS CPU as everyone non-PC-compatible used
* a bog-standard Yamaha sound chip
* bog-standard graphics derived from inexpensive chips from the x86 side of the fence - it was somewhere between EGA and VGA, basically, at CGA scan rates to work with TV sets.
* an OS kernel derived from CP/M-68K with some of the later semi-MS-DOS-compatible bits
* a GUI that was a straight port of DR-GEM from the PC, but not the version crippled by Apple's lawsuit.
* the PC/MS-DOS floppy disk format, basically
* standard joystick ports, serial/parallel IIRC, and MIDI, which was a stroke of genius, in hindsight.

The Lorraine, later the Amiga, later the Commodore Amiga - not a CBM product at all, originally - was a design tour-de-force from a bunch of ex-Atari people.

The QL was Sinclair's too-crippled take on a cheap 68K machine.

The Mac was a dramatically-cut-down but also simplified and less-weird Lisa, and it was still vastly expensive.

And off to one side, the Archimedes: proprietary from top to toe, although the result was stunning. No acceleration anywhere, very RISC, very stripped-down-and-simple, and as a result, as fast as feck - and quite expensive at first, albeit awesome in bang-for-buck.

Atari, having lost its chip gurus, said screw that, we can do a 68K box and we can do it faster and cheaper. It designed very little, almost nothing in-house: it was a COTS GUI on the tweaked kernel of a COTS OS running on a COTS CPU with a COTS chipset.

And the result was a very good machine indeed for the money. No, not as fast as an Archie, but much cheaper. As fast as a Mac but about a sixth or an eighth of the price. Not as whizzy and cool as an Amiga, but cheaper and actually a very cool toy. Way more usable with a single floppy, too!

So don't diss the ST. I think it hit a sweet spot: not as constrained as the QL, not as elaborate & expensive as the Amiga, nowhere near as clever as the Archie, but simple, quick, cheap, solid, and stunning compared to the 8-bits that people were coming from.

The ST showed, for example, how past-it all the 8-bits were. I had a SAM Coupé, one of the latest and greatest 8-bit micros ever - stomped on MSX2 for spec - but the ST was a far better computer all round.

The ST may have paled next to the Miggy, but it made the Mac look very silly indeed.

And of course its media abilities stomped all over the PCs of the time, at a quarter of the price of a tricked-out PC.

As for their survival:

Well, there's no new Amiga H/W, but there is a current OS. 2 or 3 in fact.

The Acorn kit is dead but the chip and arguably elements of the chipset live on, are massively successful, and the OS - another stopgap - is still alive too.

The ST OS has been completely re-implemented as FOSS and it's alive too, just mostly on emulators.

The QL - well, that really is dead, but 2 forks of its OS are out there, one GPL, one with free source but not Free.

But the weird one in the corner, the Archimedes, that is the one that spawned an entire industry, even though the parent company withered and died.

Odd, that.

Probably the greatest British industry success story in many decades and almost nobody in Britain knows about it.
liam_on_linux: (Default)
Following on from my earlier tweet...

It's not the clearest piece but I think his insight is keen.

His point is that all this messing around with KDE 4 and GNOME 3 and so on is a distraction - that instead of imitating 1990s interfaces, we need ones for the kids of the 2nd decade of the 21st century, who are used to iOS and Android.
I think this is what Win8 might get right. Something that combines the simplicity of the iPad with actual real multitasking OS power with a local filesystem and all that. Dead simple full-screen apps, no window management or task switchers or anything, gesturally controlled, either via a touchscreen or a trackpad-plus-keyboard or perhaps a reborn Fingerworks-style combined trackpad-which-is-a-keyboard.

I think the killer feature might be a smart tiling window manager, so that on bigger screens, you can actually see 2 or 3 or more things at once - but without needing to ever learn how to move, split, resize and rearrange windows, which was the 1980s way to do things. 

Win8 sort of has this - there is one fixed ~70:30 split, as I understand it. But the Linux tiling WMs have gone long beyond this years ago. The idea of tiling WMs is that they automatically rearrange your windows for you in an optimal, space-filling arrangement, so nothing is ever hidden behind anything else and all the available screen space is used. Goodbye, desktop wallpaper -- if you can see it, that space is being wasted. 

I suspect that notions such as menus and app switchers (taskbar, dock, whatever) are antiquated holdovers. Users should not have to care about stuff like that -- if you don't need it, take it away.

It's just a guess -- I could be totally wrong, of course. But I suspect that the tablet/touchscreen-smartphone transition is going to look like the CLI-to-GUI one. All the techies and the geeks will howl in protest, then be quietly won over, move across and love it and only a few hold-outs will stay with the old way.

The obstacles in the way? 

#1 -- a good smart window manager so you can still watch multiple apps at one time without doing your own window management. iOS totally fails at this, it's being bolted clumsily and badly onto Android, Win8 makes a very half-hearted stab, but someone is going to do it properly at some point.

#2 -- a rich OS behind it, with the ability to move files/docs/data/whatever between apps, in the way that again iOS fails at and Android isn't great at, but which desktop Windows and OS X excel at but make far too complex for non-techies.

#3 -- some kind of desktop input device that makes this pleasant, accessible and convenient for desk-bound computer users who aren't on tablets and don't want a little screen in their lap. Something that works well and fluidly with big monitors, including multiple ones, and with proper hardware keyboards.
liam_on_linux: (Default)
64-bit operating systems are coming in and are rapidly replacing 32-bit ones as the platforms of choice. The 32-bit edition of Windows 7 is a bit of a minority choice and the 32-bit version of Windows 8 will be more so.

One of the main reasons is that if you have a 32-bit OS, you can only access rather less than 4GB of RAM. But given that a 32-bit CPU can access a full 4 gig, why is this?
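The 4GB figure itself is just the arithmetic of a 32-bit address: a 32-bit pointer can name 2^32 distinct byte addresses, no more.

```shell
# A 32-bit address space is 2^32 bytes:
echo $(( 1 << 32 ))                           # 4294967296 bytes...
echo $(( (1 << 32) / (1024 * 1024 * 1024) ))  # ...which is 4 GiB
```

(Mildly ironic: doing this sum in shell arithmetic itself needs 64-bit integers, which any modern shell on a 64-bit system provides.)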

Well, the biggest culprits are graphics cards. Most modern 3D cards come with a large amount of on-board video RAM - 256MB is stingy, 512MB is small, 1 to 2GB is common and there are bigger cards out there.

The problem is that some or all of this RAM on the graphics card is also visible in the CPU's address space: the graphics card's onboard RAM is mapped into the CPU's memory space. (Not always all of it - sometimes just a large chunk, such as 256MB of it.)

This means that you lose access to that part of your machine's RAM (assuming that your PC has 4GB fitted.)

This doesn't just apply to low-end CPU-integrated or chipset-integrated graphics; they have a different, related problem but the upshot is the same.

With integrated graphics, there is no separate video memory - the graphics chip just uses a bit of the main system RAM for what is called the framebuffer. This works fine but it's slower because of contention - basically, only 1 chip/chipset can access the RAM at once, either the CPU or the GPU, so they have to take turns. Result, the RAM effectively runs at half the speed.

So dedicated graphics memory is preferable.

I am carefully using the somewhat vague term "graphics memory" and not "video RAM". Video RAM - VRAM for short - is a different kind of memory chip, which has 2 ports per byte, so that, say, the CPU can write values into it at the same time as the GPU reads them out in order to display the contents. This costs more but it runs much quicker as there's no contention.

Dedicated graphics cards have their own, entirely-separate onboard RAM - but the CPU has to communicate with the device somehow, so the graphics card's VRAM is mapped into the CPU's memory space. A 32-bit CPU can only access a flat linear address space of 4GB, therefore, some of that space must be taken up by the video card's memory in order for the OS to be able to write stuff into it and display it.

(All right, in theory, you could use one byte of RAM and just write all the stuff consecutively to it really fast but in actual practice that would be horribly slow and inefficient. It's not so many years ago that 4GB was an inconceivably vast amount of memory, so why not just nick a few hundred meg of space up at the top of the memory map, where you'll never get any real RAM because it would cost tens of thousands of quid, and map the graphic's card's VRAM in there? Simple, efficient, quick.)

So, basically, yes, your fancy graphics card's RAM is mapped into the CPU's address space, right up in the top GB between 3GB and 4GB, which was fine until PCs had so much RAM that they had actual RAM there.

Other devices are mapped in there as well. In fact pretty much all your hardware is mapped in there somewhere, but most of it only needs a few bytes or at most a few KB here or there, so the space is insignificant - but this is why you can't use all your 4GB of RAM in a PC with a 32-bit OS. Some of the space is reserved for I/O devices and their ROMs and so on, and these days, the bulk of that will be the graphics card's VRAM.
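On Linux you can inspect this carve-up directly: /proc/iomem lists the physical address map, showing which ranges are real RAM and which are claimed by devices. (You may need root to see the actual addresses; on recent kernels an unprivileged read shows the entries with the addresses zeroed out.)

```shell
# Which physical address ranges are actual RAM:
grep "System RAM" /proc/iomem

# The full map, including the PCI windows and the graphics
# card's memory aperture, with real addresses (needs root):
sudo cat /proc/iomem
```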

There's another caveat.

If you are old enough to remember MS-DOS and the 640KB memory limit, this is exactly the same problem as the 640KB limit writ large.

The original PC's hybrid 8/16-bit CPU could only access 1MB of RAM. Some of that area of the memory map had to be reserved for I/O and ROM. IBM put it at the top, between 640KB and 1MB. That meant that a few years later, when you could actually afford 1MB of RAM, you couldn't use it all in DOS - the top 384KB of the address space was already in use for I/O and ROM. Result: 640KB of available space and the "DOS conventional memory" limit.


Once the 386 came in, which can rearrange RAM in hardware ("memory mapping", a form of virtual memory), you could fill in the unused gaps in that 384KB with bits of actual RAM. It wouldn't be contiguous with main RAM, but in DOS with its various little resident programs to give you a CD drive or a mouse or a sound card or a UK keyboard, you could use all those new 8 or 16 or 32KB fragments of RAM in the top 384KB - "upper memory blocks" they were called - to stash your "terminate and stay resident" (TSR) programs in. This was a bit of a black art, but one of which I was a past master, if I say so myself. I could find every tiny unused block, map it and fill it with some tiny fragment of DOS and keep all 640KB of conventional memory free for apps.


A mate and one-time colleague of mine, Carlton, was fascinated by this and got me to teach him. He came into my office a few weeks later with a printout of his home PC's memory map, to show me how he'd optimised it to hell. He was really proud.

About three months later, some salesdroid in a Mondeo hit Carlton on his bike on the M5 and killed him.

Anyway.

If you bought Quarterdeck's QEMM386 memory manager, it could do this automatically. I could usually beat it. MS-DOS 6 included a somewhat half-hearted tool called MEMMAKER to do this that wasn't very good, but by hand, a skilled techie [*ahem*] could do with EMM386 what Quarterdeck's QEMM and OPTIMISE could do.

32-bit CPUs have 4GB of space and again the top part was reserved for ROM and I/O space, and sure enough, a few years later, this became a problem as people got so much RAM that they needed all of their address space.

But to be fair, it had to go somewhere. The thing is that in the DOS days, some computer manufacturers, such as Sirius and Apricot, put the I/O area somewhere else - e.g. at the start of the 1MB of address space. So those of them still making computers when 1MB of RAM became doable could offer more - 800 or 900KB of RAM - because they didn't reserve the whole top 384KB of address space. These machines ran MS-DOS but weren't PC compatible -- in part because of their weird memory maps -- and they all died out before the advent of the 386.

The bit of this that is actually relevant is this:

Back in the DOS days, there was a kludgey way to access more than 640KB of RAM from DOS, called Expanded memory. You could have megabytes of the stuff, accessed by paging little bits of it in and out of a 64KB window in the upper memory area, called the "page frame". The spec was set by Lotus, Intel and Microsoft and it was sometimes called LIM-spec expanded memory. Just what you wanted for ginormous Lotus 1-2-3 spreadsheets of up to several megabytes! (Seriously, that was its main reason for existence.)

(Expanded RAM is not to be confused with Extended memory, which was just anything above the 1MB mark in a 286 or 386. That's what Windows 3 and later wanted but it was no use to DOS.)

In 8086/8088/80286 PCs, you needed physical hardware to have Expanded RAM.

But the 386 could do it in software - that was the original primary purpose of QEMM.

The LIM spec version 4 allowed an arbitrarily large page frame, anywhere in RAM. This meant that a DOS multitasker (such as Quarterdeck's DESQview) could swap DOS programs in and out of conventional memory on demand, allowing you to multitask multiple DOS apps at once.

Anyway, the point of all the history is that the Pentium Pro and subsequent chips also support something very like LIM 4 expanded memory. If you have more than 4GB of RAM, these CPUs let you page parts of the memory above 4GB - invisible to 32-bit OSes - into the "base" 4GB. This means that even if you have 8GB of RAM, you can't run a 7GB application, but you can run three 2GB applications side-by-side.

It's just like LIM4 Expanded RAM paging memory above 1MB into the base 640KB; this time, it's paging memory above 4GB into the base sub-4GB area.

It's called PAE - Physical Address Extension.
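Whether your CPU offers it is easy to check on Linux; the kernel exposes the CPU's feature flags, and the PAE flag has been there since the Pentium Pro, so on anything vaguely modern it should show up.

```shell
# Does the CPU advertise Physical Address Extension?
if grep -qw pae /proc/cpuinfo; then
    echo "PAE supported"
else
    echo "no PAE"
fi
```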

The snag is, Microsoft won't let you use it. The feature is only enabled in server versions of Windows - you can't use it with XP, or with 32-bit Vista or Win7 workstation, "pro", "ultimate" editions and so on.

Not a problem with Linux - 32-bit Linux can access loads of RAM, way more than 4GB. The limit is imposed on 32-bit Windows by Microsoft licencing, not by a technical restriction. Blame them for the fact that you can't access more than 3-and-a-bit gig. You could if only they would let you.
liam_on_linux: (Default)
From a reply I made to this Reddit post.

I used to support MultiMate in the late 1980s. I didn't like it much myself, nor use it from choice, but it had one compelling advantage in the timeframe of my first job (1988-1990 or so): there was a network version. This ran off a server share and understood networking.

Not a biggie today, but it was then. Not only could it happily load and save files from shared drives, it didn't collapse in a heap when a user tried to open a file someone else was using. It didn't need a local installation on each individual DOS workstation (IIRC) and it happily printed to network printer queues.

I think it was also significantly cheaper to buy a network 5-user copy than 5 standalone copies! :¬)

Pros: well, not a lot apart from that. Fairly awkward UI even for the 1980s. I believe MultiMate started off trying to vaguely resemble some proprietary minicomputer WP. [*Goes and checks*] Ah, yes, Wang. It kinda sorta emulated a Wang hardware wordprocessor, in the same way that IBM DisplayWrite kinda sorta emulated an IBM DisplayWriter, IBM's dedicated WP.

Its UI was mostly menu driven, but big clunky full-screen menus for file management, plus the weird control keys in editing mode that were absolutely normal for DOS apps. Every app had its own control keys and none of them matched up - even 2 apps from the same software company would often have totally different control keys. MultiMate was no worse than anything else.

Its printer support was legendarily poor, its import/export functions were poor, it didn't support graphics at all, but nothing exceptionally bad for the mid-1980s. It had good mailmerge support built-in, that was quite special for the time; this was often an optional extra. And it could mail-merge stuff across a network, which was snazzy then. I think it even supported a few external (non-MultiMate) file formats for the data source - Ashton-Tate bought it, the dBase people, so I think it could mailmerge in from a networked dBase III database. Very big deal in the mid-80s, something like that. Apps were closed boxes then - you did not routinely move data from one to another. In the early DOS era, dBase III was the database.

MultiMate was already clunky by '88 but it did the job.

MultiMate's great failing is that it came from the era of daisywheel printers and the like. This means /fixed-space fonts/. It worked OK on early dot-matrix and laser printers, but only in monospaced mode with monospaced fonts. It could not cope well at all with proportional fonts and it didn't really understand the notion of different sizes of fonts. There were tons of printer drivers for it - not as many as for Wordperfect, no, but /nothing/ had as many as Wordperfect.

So by around '89-'90, when everyone was going over to proportional fonts on their LaserJet IIs and so on, it started to be a bit embarrassing - you could tell at a glance that MM documents came off something that didn't handle proportional spacing. If you turned it on, MM lost its ability to centre, right-justify or full-justify (AFAICR).

So it fairly quickly faded away.

WordStar was a previous generation of app, really. It came from the CP/M era and it didn't really understand fancy new-fangled concepts like "fonts" or "subdirectories". :¬)

The mid-1980s DOS-era rivals - at least in the UK - were MultiMate, DisplayWrite, and then second-stringers such as Samna Executive. WordStar was still hanging on as a whole family - WordStar 3, then NewWord, then NewWord merging with and becoming WordStar 4, plus WordStar 2000 and WordStar Express. All had totally different UIs and different file formats and so on. Insane, just completely insane.

Then, once they got a bit more mature, two early-DOS-days new contestants rose up: Wordperfect and Microsoft Word. The first usable version of WP was 4.2. It was horrible - everything was combinations of function keys: Shift-F3, Ctrl-F8, Alt-Shift-F5, F4; I kid you not, that might be a typical command sequence - but it was very powerful.

MS Word was weird, for the DOS days. It worked a bit like a Windows WP does - rather than doing [bold on]type[bold off], you typed, then selected the word, then applied "bold" to it. This was arcane in the DOS era, but I liked it. The actual UI was a 2-level menu at the bottom of the screen, with commands like Esc-T-L to open a file: Esc to activate the menu bar, then the Transfer submenu, then Load.

Then the Mac & later Windows 2 started to catch on a bit and everyone wanted pull-down menu bars like the trendy new GUIs. So Word 5 ditched its whole UI and went to pull-downs: File, Edit, View, Format, Tools, Help. Stuff actually vaguely familiar today. Bit of a wrench but it worked.

WP just added drop-downs as well as all the function keys. The WP users hated it - Guy Kewney, a noted UK IT journo and later my friend, before his untimely death, put it like this:

"WordPerfect 4.2 was a bicycle. A great bicycle. Everyone agreed it was a great bicycle, just about the best. So what WordPerfect did was, they put together a committee, looked at the market, and said: 'what we'll do is, we'll put 11 more wheels on it.'"

(Or something like that.)

But despite this, WP 5 really caught on. The bugfixed version, 5.1, was huge and became the classic version. It obliterated Word in the market.

But basically, WordPerfect (and to a much lesser extent MS Word) outcompeted MultiMate completely because they were newer enough to natively understand proportional fonts and even resizable fonts. They could do all the fancy stuff (for the '80s) - left justify, right justify, full justify, centre, in a mixture of fonts and font sizes, mixing monospace and proportional, reading the printer's capabilities from the driver and showing you what fonts you had on offer and letting you choose freely and arbitrarily enlarge stuff and so on.

MultiMate couldn't hack that sort of stuff. You set a font at the start of your document and that was it, pretty much, unless you got clever. So by the late '80s - when everyone had a printer that could do at least small, standard and large text, super- and subscript, and a serif and a sans-serif typeface, even if nothing more than that - Word and WordPerfect could handle it just fine... but if you used MultiMate or WordStar or DisplayWrite or anything similar, you were in for a world of pain.

So they died out quite quickly.

Then came Windows 3 and it all changed.

Wikipedia's article on MultiMate is not too bad.

There's a Spanish version on VetusWare if you want to play and habla un poco de español. Given that Ashton-Tate is long dead, I think it counts as very abandonware!

If you want to try a DOS WP then Microsoft made Word 5.5 freely-downloadable as a Y2K fix: it's on www.microsoft.com somewhere. Google will find it more easily than Bing.
