liam_on_linux: (Default)
A HN poster questioned the existence of 80286 computers with VGA displays.

In fact, the historical link between the 286 and VGA is significant, and represents one of the most important events in the history of x86 computers.

The VGA standard, along with PS/2 keyboard and mouse ports, 1.44MB 3.5" floppies, and even 72-pin SIMMs, was introduced with IBM's PS/2 range of computers in 1987.

The original PS/2 range included:

• Model 50 -- desktop 286.

• Model 60 -- tower 286.

• Model 70 -- desktop 386DX.

• Model 80 -- tower 386DX. (I still have one. One of the best-built PCs ever made.)

All had the Microchannel (MCA) expansion bus, and VGA as standard.

Note, I am not including the Model 30, as it wasn't a true PS/2: no MCA, and no VGA, just MCGA.

IBM promised buyers that they would be able to run the new OS/2 operating system it was working on with Microsoft at the time.

This is the reason why IBM insisted OS/2 must run on the 286: to provide it to the many tens of thousands of customers it had sold 286 PS/2 machines to.

Microsoft wanted to make OS/2 specific to the newer 32-bit 386 chip. This had hardware-assisted multitasking of 8086 VMs, meaning the new OS would be able to multitask DOS apps with excellent compatibility.

But IBM had promised customers OS/2 and IBM is the sort of company that takes such promises seriously.

So, OS/2 1.x was a 286 OS, not a 386 OS. That meant it could only run a single DOS session and compatibility wasn't great.

This is why OS/2 flopped. That in turn is why MS developed Windows 3, which could multitask DOS apps, and was a big hit. That is why MS had the money to headhunt the MICA team from DEC, headed by Dave Cutler, and give them Portable OS/2 to finish. That became OS/2 NT (because it was developed on Intel's i860 RISC chip, codenamed N-Ten.) That became Windows NT.

That is why Windows ended up dominating the PC industry, not OS/2 (or DESQview/X or any of the other would-be DOS enhancements or replacements).

Arguably, although I admit this is reaching a bit, that's what led to the 386SX, and later to VESA local bus computers, and Win95 and a market of VGA-equipped PCI machines: the fertile ground in which Linux took root and flourished.

PCs got multitasking combined with a GUI because of Windows 3 and its successors. (It's important to note that there were lots of text-only multitasking OSes for PCs: DR's Concurrent DOS, SCO Xenix, QNX, Coherent, TSX-32, PC-MOS, etc.) The killer feature was combining DOS, a GUI, and multitasking of DOS apps. That needed a 386SX or DX.

These things only happened because OS/2 failed, and OS/2 failed because there were lots of 286-based PS/2 machines and IBM promised OS/2 on them.

The 286 and VGA went closely together, and indeed, IBM later made the ISA-bus "PS/2" Model 30-286 in response to the relative failure of MCA.

It was a pivotal range of computers, and it shaped the future of the PC industry long after PS/2s themselves had largely disappeared. They introduced the standards that dominated the PC world throughout the 1990s and into the 2000s: PS/2 ports, VGA sockets, 72-pin RAM, 1.44MB floppies etc. Only the expansion bus and the planned native OS failed. All the external ports, connectors, media and so on became the new industry standards.

liam_on_linux: (Default)
I really hate it whenever I see someone calling Apple fans fanboys or attacking Apple products as useless junk that only sells because it's fashionable.

Every hater is 100% as ignorant and wrong as any fanatically-loyal fanboy who won't consider anything else.

Let me try to explain why it's toxic.

If someone, or some group, is not willing to make the effort to see why a very successful product, family or brand is successful, that prevents them from learning any lessons from that success. That means the outgroup is unlikely ever to challenge that success.

In life it is always good to ask why. If this thing is so big, why? If people love it so much, why?

I use a cheap Chinese Android phone. It's my 3rd. I also have a cheap Chinese Android tablet that I almost never use. But last time I bought a phone, I had a Planet Computers Gemini on order, and I didn't want two new ChiPhones, so I bought a used iPhone. This was a calculated decision: the new model iPhones were out and dropped features I wanted. This meant the previous model was now quite cheap.

I still have that iPhone. It's a 6S+. It's the last model with the features I want: it has a headphone socket and a physical home button. I like those. It's still updated, and last week I put the latest iOS on it.

It allowed me to judge the 2020s iOS ecosystem. It's good. Most of the things I disliked about iOS 6 (the previous iPhone model I had) have been fixed now. Most of the apps can be replaced or customised. It's much more open than it was. The performance is good, the form factor is good, way better than my iPhone 4 was.

I don't use iPhones because I value things like expansion slots, multiple SIMs, standard ports and standard charging cables, and a customisable OS. I don't really use tablets at all.

But my main home desktop computer is an iMac. I am an expert Windows user and maintainer with 35 years' experience of the platform. I am also a fairly expert Linux user and maintainer with 27 years' experience. I am a full-time Linux professional and have been for nearly a decade... because I am a long-term Windows expert, and that is why I choose not to use it any more.

My iMac (2015 Retina 27") is the most gorgeous computer I've ever owned. It looks good, it's a joy to use, it is near silent and trouble-free to a degree that any Windows computer can only aspire to be. I don't need expansion slots and so on: I want the vendor to make a good choice, integrate it well and for it to just work and keep just working, and it does.

It is slim, unobtrusive for a large machine, silent, and the picture (and sound) quality is astounding.

I chose it because I have extensive knowledge of building, specifying, benchmarking, reviewing, fixing, supporting, networking, deploying, and recycling old PCs. It is over three decades of expert knowledge of PCs and Windows that led me to spend my own money on a Mac.

So every time someone calls Mac owners fanboys, I know they know less than me and therefore I feel entirely entitled to dump on their ignorance from a great height.

I do not use iDevices. I also do not use Apple laptops. I don't like their keyboards, I don't like their pointing devices, I don't like their hard-to-repair designs. I use old Thinkpads, like most experienced geeks.

But I know why people love them, and if you wish to pronounce edicts about Apple kit, you had better bloody well know your stuff.

I do not recommend them for everyone. Each person has their own needs and should learn and judge appropriately. But I also do not condemn them out of hand.

I have put in an awful lot of Windows boxes over the years. I have lost large potential jobs when I recommended Windows solutions to Mac houses, because it was the best tool for the job. I have also refused large jobs from people who wanted, say, Windows Server or Exchange Server when it *wasn't* the right tool for the job.

It was my job to assess this stuff.

Which equips me well to know that every single time someone decries Apple stuff, it means they haven't done the work I have. They don't know, and they can't be bothered to learn.
liam_on_linux: (Default)
A short extract of Neal Stephenson's seminal essay has been doing the rounds on HackerNews.


OK, fine, so let's go with it.

Since my impression is that HN people are [a] xNix fans [b] often quite young therefore [c] have little exposure to other OSes, let me try to unpack what Stephenson was getting at, in context.

The Hole Hawg is a dangerous and overpowered tool for most non-professionals. It is big and heavy. It can take on big tough jobs with ease, but its size and its brute power mean that it is not suitable for precision work. It has relatively few safety features, so that if used inexpertly, it will hurt its operator.

DIY stores are full of smaller, much less powerful tools. This is for good reasons:

  • because for non-professional users, those smaller, less-powerful tools are much safer. A company that sells untrained users a tool which tends to maim or kill them will go out of business.

  • because smaller, less-powerful tools are better for the smaller jobs that a non-professional might undertake, such as hanging a picture or putting up some shelves.

  • professionals know to use the right tool for the job. Surgeons do not operate with chainsaws (even though they were invented for surgery). Carpenters do not use axes.


The Hole Hawg, as described, is a clumsy tool that needs things attached to it in order to be used, and even then, you need to know the right way or it will hurt you.

Compare with a domestic drill with a pistol grip that is ready to use out of its case. Modern ones are cordless, increasing their convenience.

One is a tool for someone building a house; the other is a better tool for someone living in that house.

That's the drill part.

Now, let's discuss the OSes talked about in the rest of the 1999 piece from which that's a clipping [PDF].

There are:

  • Linux, before KDE, with no free complete desktop environments yet;

  • Windows, meaning Windows 98SE or NT 4;

  • Classic MacOS – version 9;

  • BeOS.

Stephenson points out that Linux is as powerful as any of them, cheaper, but slower, ugly and unfriendly.

He points out that MacOS 9 is as pretty, friendly, and comprehensible as OSes get, but it doesn't multitask well, it is not very stable, and when a program crashes, your entire computer probably goes with it.

He points out that Windows is overpriced, performs poorly, and is not the best option for anyone – but that everyone runs it and most people just conform with what the mainstream does.

He praises BeOS very highly, which was 100% justified at the time: it was faster than anything else, by a large margin. It had superb multimedia support and integration, better than anything else at the time. It was standards-compliant but not held back by that. For its time, it was a supermodern OS, eliminating tonnes of legacy cruft.

But it didn't have many apps so it was mainly for people in narrow niches, such as music production or maybe video editing.

It was manifestly the future, though. But we're living in the future and it wasn't. This was 23 years ago, nearly a quarter of a century, before KDE and GNOME, before Windows XP, before Mac OS X. You need to know that.

What Unix people interpret as praise here is in fact criticism.

That Unix is very unfriendly and can easily hurt its user. (Think `rm -rf /` here.)

That Unix has a great deal of raw power but maybe more than most people need.

That Unix is, frankly, kinda ugly, and only someone who doesn't care about appearances would choose it.

That something of this brute power is not suitable for fine precision work. (Which it still mostly isn't -- Mac OS X is Unix, tuned and polished, and that's what the creative pros use now.)

Here's a response from 17 years ago.
liam_on_linux: (Default)
[Another recycled mailing list post]

I was asked what options there were for blind people who wish to use Linux.

The answer is simple but fairly depressing: basically every blind person I know personally or via friends of friends who is a computer user, uses Windows or Mac. There is a significant move from Windows to Mac.

Younger computer users -- by which I mean people who started using computers since the 1990s and the spread of internet access, i.e. most of them -- tend to expect graphical user interfaces, menus and so on, and not to be happy with command-line-driven programs.

This applies every bit as much to blind users.

Linux can work very well for blind users if they use the terminal. The Linux shell is the richest and most powerful command-line environment there is or ever has been, and one can accomplish almost anything one wants to do using it.

But it's still a command line, and a notably unfriendly and unhelpful one at that.

In my experience, for a lot of GUI users, that is just too much.

For instance, a decade or so back, the Register ran some articles I wrote on switching to Linux. They were, completely intentionally, what is sometimes today called "opinionated" -- that is, I did not try to present balance or a spread of options. Instead I presented what was, IMHO, the best choices.


Multiple readers complained that I included a handful of commands to type in. "This is why Linux is not usable! This is why it is not ready for the real world! Ordinary people can't do this weird arcane stuff!" And so on.

Probably some of these remarks are still there in the comments pages.

In vain did some others try to reason with them.

But it was 10x quicker to copy-and-paste these commands!
-> No, it's too hard.

He could give GUI steps but it would take pages.
-> Then that's what he should have done, because we don't do this weird terminal nonsense.

But then the article would have been 10x longer and you wouldn't read it.
-> Well then the OS is not ready, it's not suitable for normal people.

If you just copy-and-paste, it's like 3 mouse clicks and you can't make a typing error.
-> But it's still weird and scary and I DON'T LIKE IT.

You can't win.

This is why Linux Mint succeeded -- partly because when Ubuntu introduced its non-Windows-like desktop after Microsoft threatened to sue, Mint hoovered up those users who wanted it Windows-like.

But also because Mint didn't make you install the optional extras. It bundled them, and so what if that makes it illegal to distribute in some countries? It Just Worked out of the box, and it looked familiar, and that won them millions of fans.

Mac OS X has done extremely well partly because users never, ever need to go near a command line, for anything, ever. You can if you want, but you never, ever need to.

If that means you can't move your swap file to another drive, so be it. If that means that a tonne of the classic Unix configuration files are gone, replaced by a networked configuration database, so be it.

Apple is not afraid to break things in order to make something better.

The result has been the first trillion-dollar computer company, and hundreds of millions of happy customers.

Linux gives you choices, lets you pick what you want, work the way you want... and despite offering the results for free, the result has been about 1% of the desktop market and basically zero of the tablet and smartphone markets.

Ubuntu made a valiant effort to make a desktop of Mac-like simplicity, and it successfully went from a new entrant in a busy marketplace in 2004 to being the #1 desktop Linux within a decade. It has made virtually no dent on the non-Linux world, though.

After 20 years of this, Google (after *bitter* internal argument) introduced ChromeOS, a Linux which takes away all your choices. It only runs on Google hardware, has no apps, no desktop, no package management, no choices at all. It gives you a dead cheap, virus-proof computer that gets you on the Web.

In less time than Ubuntu took to win about 1% of the Windows market over to Linux, Chromebooks persuaded about one third of the world laptop-buying market to switch to Linux. More Chromebooks sell every year -- tens of millions -- than the total number of Ubuntu users since it launched.

What effect has this had on desktop Linux? Zero. None at all. If that is the price of success, they are not willing to pay it. What Google has done is so unspeakably foul, so wrong, so blasphemous, that they don't even talk about it.

What effect has it had on Microsoft? A lot. Cheaper Windows laptops than ever, new low-end editions of Windows, serious efforts to reduce the disk and memory usage...

And little success. The cheap editions lose what makes Windows desirable, and ultra-cheap Windows laptops make poorer slower Chromebooks than actual Chromebooks.

Apple isn't playing. It makes its money in the high-end.

Unfortunately a lot of people are very technologically conservative. Once they find something they like, they will stay with it at all costs.

This attitude is what has kept Microsoft immensely profitable.

A similar one is what has kept Linux as the most successful server OS in the world. It is just a modernised version of a quick and dirty hack of an OS from the 1960s, but it's capable and it's free. "Good enough" is the enemy of better.

There are hundreds of other operating systems out there. I listed 25 non-Linux FOSS OSes in this piece, and yes, FreeDOS was included.

There are dozens that are better in various ways than Unix and Linux.

  • Minix 3 is a better FOSS Unix than Linux: a true microkernel which can cope with parts of itself failing without crashing the computer.

  • Plan 9 is a better UNIX than Unix. Everything really is a file and the network is the computer.

  • Inferno is a better Plan 9 than Plan 9: the network is your computer, with full processor and OS-independence.

  • Plan 9's UI is based on Oberon: an entire mouse-driven OS in 10,000 lines of rigorous, type-safe code, including the compiler and IDE.

  • A2 is the modern descendant of Oberon: real-time capable, a full GUI, multiprocessor-aware, internet- and Web-capable.

(And before anyone snarks at me: they are all niche projects, direly lacking polish and not ready for the mass market. So was Linux until the 21st century. So was Windows until version 3. So was the Mac until at the very least the Mac Plus with a hard disk. None of this in any way invalidates their potential.)

But almost everyone is too invested in the way they know and like to be willing to start over.

So we are trapped, the monkey with its hand stuck in a coconut shell full of rice, even though it can see the grinning hunter coming to kill and eat it.

We are facing catastrophic climate change that will kill most of humanity and most species of life on Earth, this century. To find any solutions, we need better computers that can help us to think better and work out better ways to live, better cleaner technologies, better systems of employment and housing and everything else.

But we can't let go of the single lousy handful of rice that we are clutching. We can't let go of our broken political and economic and military-industrial systems. We can't even let go of our broken 1960s and 1970s computer operating systems.

And every day, the hunter gets closer and his smile gets bigger.
liam_on_linux: (Default)
EDIT: this post has attracted discussion and comments on various places, and some people are disputing its accuracy. So, I've decided to make some edits to try to clarify things.

When Windows 2 was launched, there were two editions: Windows, and Windows/386.

The ordinary "base" edition of Windows 2.0x ran on an XT-class computer: that is, an Intel 8088 or 8086 CPU. These chips can only directly access a total of 1MB of memory, of which the highest 384kB was reserved for ROM and I/O: so, a maximum 640kB of RAM. That was not a lot for Windows, even then. But both DOS and Windows 2.x did support expanded memory (Lotus-Intel-Microsoft-specification EMS). I ran Windows 2 on 286s and 386s at work, and on 386 machines I used Quarterdeck's QEMM386 to turn the extended memory that Windows 2 couldn't see or use into expanded memory that it could.

The Intel 80286 could access up to 16MB of memory. But all except the first 640kB was basically invisible to DOS and DOS apps. Only native 16-bit programs could access it, and there were barely any -- Lotus 1-2-3 r3 was one of the few, for instance.

There was one exception to this: due to a bug the first 64kB of memory above 1MB (less 16 bytes) could be accessed in DOS's Real Mode. This was called the High Memory Area (HMA). 64kB wasn't much even then, but still, it added 10% to the amount of usable memory on a 286. DOS 3 couldn't do anything with this – but Windows 2 could.

Windows 2.0 and 2.01 were not successful, but some companies did release applications for them -- notably, Aldus' PageMaker desktop publishing (DTP) program. So, Microsoft put out some bug-fix releases: I've found traces of 2.01, 2.03, 2.11 and finally 2.12.


When Windows 2.1x was released, MICROS~1 did a little re-branding. The "base" edition of Windows 2.1 was renamed Windows/286. In some places, Microsoft itself claims that this was a special 286 edition of Windows 2 that ran in native 80286 mode and could access all 16MB of memory.

But some extra digging by people including Mal Smith has uncovered evidence that Windows/286 wasn't all it was cracked up to be. For one thing, without the HIMEM.SYS driver, it runs perfectly well on 8088/8086 PCs – it just can't access the 64kB HMA. Microsoft long ago erased the comments to Raymond Chen's blog post, but they are on the Wayback Machine.

So the truth seems to be that Windows/286 didn't really have what would later be called Standard Mode and didn't really run in the 286's protected mode. It just used the HMA for a little extra storage space, giving more room in conventional memory for the Windows real-mode kernel and apps.

So, what about Windows/386?


The new 80386 chip had an additional mode on top of the 8/16-bit (8088/8086-compatible) and fully-16-bit (80286-compatible) modes: a new 32-bit mode -- now called x86-32 -- which could access a vast 4GB of memory. (In 1985 or so, that much RAM would have cost hundreds of thousands of dollars, maybe even millions.)

However, this was useless to DOS and DOS apps, which could still only access 640kB (plus EMS, of course).

But Intel learned from the mistake of the 286 design. The 286 needed new OSes to access all of its memory, and even they couldn't give DOS apps access to that RAM.

The 386 "fixed" this. It could emulate, in hardware, multiple 8086 chips at once and even multitask them. Each got its own 640kB of RAM. So if you had 4MB of RAM, you could run 6 separate full-sized DOS sessions and still have 0.4MB left over for a multitasking OS to manage them. DOS alone couldn't do this!

There were several replacement OSes to allow this. At least one of them is now FOSS -- it's called PC-MOS/386.

Most of these 386 DOS-compatible OSes were multiuser OSes — the idea was you could plug some dumb terminals into the RS-232 ports on the back of a 386 PC and users could run text-only DOS apps on the terminals.

But some were aimed at power users, who had a powerful 386 PC to themselves and wanted multitasking while keeping their existing DOS apps.

My personal favourite was Quarterdeck DESQview. It worked with the QEMM386 memory manager and let you multitask multiple DOS apps, side by side, either full-screen or in resizable windows. It ran on top of ordinary MS-DOS.

Microsoft knew that other companies were making money off this fairly small market for multitasking extensions to DOS. So, it made a third special edition of Windows 2, called Windows/386, which supported 80386 chips in 32-bit mode and could pre-emptively multitask DOS apps side-by-side with Windows apps.

Windows programs, including the Windows kernel itself, still ran in 8086-compatible Real Mode and couldn't use all this extra memory, even on Windows/386. All Windows/386 did was provide a loader that converted all the extra memory above 1MB in your high-end 386 PC – that is, extended (XMS) memory – into expanded (EMS) memory that both Windows and DOS programs could use.

The proof of this is that it's possible to launch Windows/386 on an 8086 computer, if you bypass the special loader. Later on, this loader became the basis of the EMM386 driver in MS-DOS 4, which allowed DOS to use the extra memory in a 386 as EMS.


TBH, Windows/386 wasn't very popular or very widely-used. If you wanted the power of a 386 with DOS apps, then you probably were fine with or even preferred text-mode stuff and didn't want a GUI. Bear in mind this is long before graphics accelerators had been invented. Sure you could tile several DOS apps side-by-side, but then you could only see a little bit of each one -- VGA cards and monitors only supported 640×480 pixels. Windows 2 wasn't really established enough to have special hi-res superVGA cards available for it yet.*

Windows/386 could also multitask DOS apps full-screen, and if you used graphical DOS apps, you had to run them full-screen. Windows/386 couldn't run graphical DOS apps inside windows.

But if you used full-screen multitasking, with hotkeys instead of a mouse, then why not use something like DESQview anyway? It used way less disk and memory than Windows, and it was quicker and had no driver issues, because it didn't support any additional drivers.

The big mistake MS and IBM made when they wrote OS/2 was targeting the 286 chip instead of the 386.

Microsoft knew this – it even had a prototype OS/2 1 for 386, codenamed "Sizzle" and "Football" – but IBM refused because when it sold thousands of 286 PS/2 machines it had promised the customers OS/2 for them. The customers didn't care, they didn't want OS/2, and this mistake cost IBM the entire PC industry.

If OS/2 1 had been a 386 OS it could have multitasked DOS apps, and PC power users would have been all over it. But it wasn't, it was a 286 OS, and it could only run 1 DOS app at a time. For that, the expensive upgrade and extra RAM you needed wasn't worth it.

So OS/2 bombed. Windows 2 bombed too. But MS was so disheartened by IBM's intransigence, it went back to the dead Windows 2 product, gave it a facelift with the look-and-feel stolen from OS/2 1.2, and they used some very clever hacks to combine the separate Windows (i.e. 8086), Windows/286 and Windows/386 programs all into a single binary product. The WIN.COM loader looked at your system spec and decided whether to start the 8086 kernel (KERNEL.EXE), 286 kernel (DOSX.EXE) or the 386 kernel (WIN386.EXE).

If you ran Windows 3 on an 8086 or a machine with only 640kB (i.e. no XMS), you got a Real Mode 8086-only GUI on top of DOS.

If you ran Win3 on a 286 with 1MB-1¾MB of RAM then it launched in Standard Mode and magically became a 16-bit DOS extender, giving you access to up to 16MB of RAM (if you were rich and eccentric).*

If you ran W3 on a 386 with 2MB of RAM or more, it launched in 386 Enhanced Mode and became a 32-bit multitasking DOS extender and could multitask DOS apps, give you virtual memory and a memory space of up to 4GB.

All in a single product on one set of disks.
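
Purely as an illustration -- this is a toy sketch in Python, not Microsoft's actual code -- the decision WIN.COM made at startup boils down to the rules just described:

def pick_kernel(cpu: str, ram_mb: float) -> str:
    # Toy model of WIN.COM's startup logic, using only the rules in the
    # text above; the real loader probed the CPU type and XMS memory itself.
    if cpu == "386" and ram_mb >= 2:
        return "WIN386.EXE"  # 386 Enhanced Mode: 32-bit, multitasks DOS apps
    if ram_mb >= 1 and cpu in ("286", "386"):
        return "DOSX.EXE"    # Standard Mode: 16-bit DOS extender, up to 16MB
    return "KERNEL.EXE"      # Real Mode: an 8086 GUI running on top of DOS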

This was revolutionary, and it was a huge hit...

And that about wrapped it up for OS/2.

Windows 3.0 was very unreliable and unstable. It often threw what it called an Unrecoverable Application Error (UAE) – which even led to a joke T-shirt that said "I thought UAE was a country in Arabia until I discovered Windows 3!"... but when it worked, what it did was amazing for 1990.

Microsoft eliminated UAEs in Windows 3.1, partly by a clever trick: it renamed the error to "General Protection Fault" (GPF) instead.

Me, personally, always the contrarian, I bought OS/2 2.0 with my own money and I loved it. It was much more stable than Windows 3, multitasked better, and could do way more... but Win3 had the key stuff people wanted.

Windows 3.1 bundled the separate Multimedia Extensions for Windows and made it a bit more stable. Then Windows for Workgroups bundled all that with networking, too!

Note — in the DOS era, all apps needed their own drivers. Every separate app needed its own printer drivers, graphics drivers (if it could display graphics in anything other than the standard CGA, EGA, VGA or Hercules modes), sound drivers, and so on.

One of WordPerfect's big selling points was that it had the biggest and best set of printer drivers in the business. If you had a fancy printer, WordPerfect could handle it and use all its special fonts and so on. Quite possibly other mainstream offerings couldn't, so if you ran WordStar or MultiMate or something, you only got monospaced Courier in bold, italic, underline and combinations thereof.

This included networking. Every network vendor had their own network stack with their own network card drivers.

And network stacks were big and each major vendor used their own protocol. MS used NetBEUI, Novell used IPX/SPX, Apple used AppleTalk, Digital Equipment Corporation's PATHWORKS used DECnet, etc. etc. Only weird, super-expensive Unix boxes that nobody could afford used TCP/IP.

You couldn't attach to a Microsoft server with a Novell network stack, or to an Apple server with a Microsoft stack. Every type of server needed its own unique special client.

This basically meant that a PC couldn't be on more than one type of network at once. The chance of getting two complete sets of drivers working together was next to nil, and if you did manage it, there'd be no RAM left to run any apps anyway.

Windows changed a lot of things, but shared drivers were a big one. You installed one printer driver and suddenly all your apps could print. One sound driver and all your apps could make noises, or play music (or if you had a fancy sound card, both!) and so on. For printing, Windows just sent your printer a bitmap — so any printer that could print graphics could suddenly print any font that came with Windows. If you had a crappy old 24-pin dot-matrix printer that only had one font, this was a big deal. It was slow and it was noisy but suddenly you could have fancy scalable fonts, outline and shadow effects!

But when Microsoft threw networking into this too, it was transformative. Windows for Workgroups broke up the monolithic network stacks. Windows drove the card, then Windows protocols spoke to the Windows driver for the card, then Windows clients spoke to the protocol.

So now, if your Netware server was configured for AppleTalk, say — OK, unlikely, but it could happen, because Macs only spoke AppleTalk — then Windows could happily access it over AppleTalk with no need for IPX.

The first big network I built with Windows for Workgroups, I built dual-stack: IPX/SPX and DECnet. The Netware server was invisible to the VAXen, and vice versa, but WfWg spoke to both at once. This was serious black magic stuff.

This is part of why, over the next few years, TCP/IP took off. Most DOS stuff never really used TCP/IP much — pre-WWW, very few of us were on the Internet. So, chaos reigned. WfWg ended that. It spoke to everything through one stack, and it was easy to configure: just point-and-click. Original WfWg 3.1 didn't even include TCP/IP as standard: it was an optional extra on the disk which you had to install separately. WfWg 3.11 included 16-bit TCP/IP but later Microsoft released a 32-bit TCP/IP stack, because by 1994 or so, people were rolling out PC LANs with pure IP.



* Disclaimer: this is a slight over-simplification for clarity, one of several in this post. A tiny handful of SVGA cards existed, most of which needed special drivers, and many of which only worked with a tiny handful of apps, such as one particular CAD program, or the GEM GUI, or something obscure. Some did work with Windows 2, but if they did, they were all-but unusable because Windows 2's core all had to run in the base 640kB of RAM and it very easily ran out of memory. Windows 3 was not much better, but Windows 3.1 finally fixed this a bit.

So if you had an SVGA card and Windows/286 or Windows/386 or even Windows 3.0, you could possibly set some super-hires mode like 1024×768 in 16 colours... and admire it for whole seconds, then launch a few apps and watch Windows crash and die. If you were in something insane like 24-bit colour, you might not even get as far as launching a second app before it died.

Clarification for the obsessive: when I said 1¾MB, that was also a simplification. The deal was this:

If you had a 286 & at least 1MB RAM, then all you got was Standard Mode, i.e. 286 mode. More RAM made things a little faster – not much, because Windows didn't have its own disk cache, relying on DOS to do that. If you had 2MB or 4 or 8 or 16 (not that anyone sane would put 16MB in a 286, as it would cost $10,000 or something) it made no odds: Standard Mode was all a 286 could do.

If you had a 386 and 2MB or more RAM, you got 386 Enhanced Mode. This really flew if you had 4MB or more, but very few machines came with that much except some intended to be servers, running Unix of one brand or another. Ironically, the only budget 386 PC with 4MB was the Amstrad 2386, a machine now almost forgotten by history. Amstrad created the budget PC market in Europe with the PC1512 and PC1640, both 8086 machines with 5.25" disk drives.

It followed this with the futuristic 2000 series. The 2086 was an unusual PC – an ISA 8086 with VGA. The 2286 was a high-end 286 for 1988: 1MB RAM & a fast 12.5MHz CPU.

But the 2386 had 4MB as standard, which was an industry-best and amazing for 1988. When Windows 3.0 came out a couple of years later, this was the only PC already on the market that could do 386 Enhanced Mode justice, and easily multitask several DOS apps and big high-end Windows apps such as PageMaker and Omnis. Microsoft barely offered Windows apps yet – early, sketchy versions of Word and Excel, nothing else. I can't find a single page devoted to this remarkable machine – only its keyboard.

The Amstrad 2000 series bombed. They were premature: the market wasn't ready and few apps used DOS extenders yet. Only power users ran OS/2 or DOS multitaskers, and power users didn't buy Amstrads. Nor did people who wanted a server for multiuser OSes such as Digital Research's Concurrent DOS/386.

Its other bold design move was that Amstrad gambled on 5.25" floppies going away, replaced by 3.5" diskettes. They were right, of course – and so the 2000 series had no 5.25" bays, allowing for a sleek, almost aerodynamic-looking case. But Amstrad couldn't foresee that soon CD-ROM drives would be everywhere, then DVDs and CD burners, and the 5.25" bay would stick around for another few decades.
liam_on_linux: (Default)

I keep getting asked about this in various places, so I thought it was high time I described how I do it. I will avoid using any 3rd party proprietary tools; everything you need is built-in.

Notes for dual-booters:

This is a bit harder with Windows 10 than it was with any previous versions. There are some extra steps you need to do. Miss these and you will encounter problems, such as Linux refusing to boot, or hanging on boot, or refusing to mount your Windows drive.

It is worth keeping Windows around. It's useful for things like updating your motherboard firmware, which is a necessary maintenance task -- it's not a one-off. Disk space is cheap these days.

Also, most modern PCs have a new type of firmware called UEFI. It can be tricky to get Linux to boot off an empty disk with UEFI, and sometimes, it's much easier to dual-boot with Windows. Some of the necessary files are supplied by Windows and that saves you hard work. I have personally seen this with a Dell Precision 5810, for instance.

Finally, it's very useful for hardware troubleshooting. Not sure if that new device works? Maybe it's a Linux problem. Try it in Windows then you'll know. Maybe it needs initialising by Windows before it will work. Maybe you need Windows to wipe out config information. I have personally seen this with a Dell Precision laptop and a USB-C docking station, for example: you could only configure triple-head in Windows, but once done, it worked fine in Linux too. But if you don't configure it in Windows, Linux can't do it alone.

Why would you want to do this? Well, there are various reasons.


  1. You regularly, often or only run Windows and want to keep it performing well.

  2. You run Windows in a VM under another OS and want to minimize the disk space and RAM it uses.

  3. You dual-boot Windows with another OS, and want to keep it happy in less disk space than it might normally enjoy to itself.

  4. You're preparing your machine for installing Linux or another OS and want to shrink the Windows partition right down to make as much free space as possible.

  5. You've got a slightly troublesome Windows installation and want to clean things up as a troubleshooting step.

Note, this stuff also applies to a brand-new copy of Windows, not just an old, well-used installation.

I'll divide the process into 2 stages. One assuming you're not preparing to dual-boot, and a second stage if you are.

So: how to clean up a Windows drive.

The basic steps are: update; clean up; check for errors.

If you're never planning to use Windows again, you can skip the updating part -- but you shouldn't. Why not? Well, as I advised above, you should keep your Windows installation around unless you are absolutely desperate for disk space and so poor that you can't afford to buy more. It's useful in emergencies. And in emergencies, you don't want to spend hours installing updates. So do it first.

Additionally, some Windows updates require earlier ones to be installed. A really old copy might be tricky to update.


  1. Updating. This is easy but not quite as easy as it looks at first glance. Connect your machine to the Internet, open Windows Update, click "Check for updates". But wait! There's more! Because Microsoft has a vested interest in making things look smooth and easy and untroubled, Windows lies to you. Sometimes, when you click "check for updates", it says there are none. Click again and magically some more will appear. There's also a concealed option to update other Microsoft products and it is, unhelpfully, off by default. You should turn that on.

  2. Once Windows Update has installed everything, reboot. Sometimes updates make you do this, but even if they don't, do it manually anyway.

  3. Then run Windows Update and check again. Sometimes, more will appear. If they do, install them and go back to step 1. Repeat this process until no new updates appear when you check.

  4. Next, we're going to clean up the disk. This is a 2-stage process.

  5. First, run Disk Cleanup. It's deeply buried in the menus so just open the Start menu and type CLEAN. It should appear. Run it.

  6. Tick all the boxes -- don't worry, it won't delete stuff you manually downloaded -- and run the cleanup. Normally, this is fast. A few minutes is enough.

  7. Once it's finished, run disk cleanup again. Yes, a second time. This is important.

  8. Second time, click the "clean up system files" button.

  9. Again, tick all the boxes, then click the button to run the cleanup.

  10. This time, it will take a long time. This is the real clean up and it's the step I suspect many people miss. Be prepared for your PC to be working away for hours, and don't try to do anything else while it works, or it will bypass files that are in use.

  11. When it's finished, reboot.

  12. After your PC reboots, right-click on the Start button and open an administrative command prompt. Click yes to give it permission to run. When it appears, type: CHKDSK C: /F

  13. Type "y" and hit "Enter" to give it permission.

  14. Reboot your PC to make it happen.

  15. This can take a while, too. This can fix all sorts of Windows errors. Give it time, let it do what needs to be done.

  16. Afterwards, the PC will reboot itself. Log in, and if you want an extra-thorough job, run Disk Cleanup a third time and clean up the system files. This will get rid of any files created by the CHKDSK process.

  17. Now you should have got rid of most of the cruft on your C drive. The next step requires 2 things: firstly, that you have a Linux boot medium, so if you don't have it ready, go download and make one now. Secondly, you need to have some technical skill or experience, and familiarity with the Windows folder tree and navigating it. If you don't have that, don't even try. One slip and you will destroy Windows.

  18. If you do have that experience, then what you do is reboot your PC from the Linux medium -- don't shut down and then turn it back on, pick "restart" so that Windows does a full shutdown and reboot -- and manually delete any remaining clutter. The places to look are in C:\WINDOWS\TEMP and C:\USERS\$username\AppData\Local\Temp. "$username" is a placeholder here -- look in the home directory of your Windows login account, whatever that's called, and any others you see here, such as "Default", "Default User", "Public" and so on. Only delete files in folders called TEMP and nowhere else. If you can't find a TEMP folder, don't delete anything. Do not delete the TEMP folders themselves, they are necessary. Anything inside them is fair game. You can also delete the files PAGEFILE.SYS, SWAPFILE.SYS and HIBERFIL.SYS in the root directory -- Windows will just re-create them next boot anyway.

That's about it. After you've done this, you've eliminated all the junk and cruft that you reasonably can from your Windows system. The further stages are optional and some depend on your system configuration.

Optional stages

Defragmenting the drive

Do you have Windows installed on a spinning magnetic hard disk, or on an SSD?

If it's a hard disk, then you may wish to run a defrag. NEVER defrag an SSD -- it's pointless and it wears out the disk.

But if you have an old-fashioned HDD, then by all means, after your cleanup, defrag it. Here's how.

I have not tested this on Win10, but on older versions, I found that defrag does a more thorough job, faster, if you run it in Safe Mode. Here's how to get into Safe Mode in Windows 10.

Turning off Fast Boot

Fast Boot is a feature that only shuts down part of Windows and then hibernates. Why? Because when you turn your PC on, it's quicker to wake Windows and then load a new session than it is to boot it from scratch, with all the initialisation that involves. Shutdown and startup both become a bit quicker.

If you only run Windows and have no intention of dual-booting, then ignore this if you wish. Leave it on.

But if you do dual-boot, it's definitely worth doing. Why? Because when Fast Boot is on, Windows doesn't totally stop when you shut down, only when you restart. This means that the C drive is marked as being still mounted, that is, still in use. And if it's in use, then Linux won't mount it and you can't access your Windows drive from Linux.

Worse still, if like me you mount the Windows drive automatically during bootup, then Linux won't finish booting. It waits for the C drive to become available, and since Windows isn't running, it never becomes available so the PC never boots. This is a new problem introduced by the Linux systemd tool -- older init systems just skipped the C drive and moved on, but systemd tries to be clever and as a result it hangs.

So, if you dual boot, always disable Fast Boot. It gives you more flexibility. I will list a few how-tos, since Microsoft doesn't seem to document this officially.

Turning off Hibernation

If you have a desktop PC, once you have disabled Fast Boot, also disable Hibernation.

If you have a notebook, you might want to leave it on. It's useful if you find yourself in the middle of something but running out of power, or about to get off a train or plane. But for a desktop, there's less reason, IMHO.

There are a few reasons to disable it:

  1. It eliminates the risk of some Windows update turning Fast Boot back on. If Hibernation is disabled, it can't.

  2. It means when you boot Linux your Windows drive will always be available. Starting another OS when Windows is alive but hibernating risks drive corruption.

  3. It frees up a big chunk of disk space -- equal to your physical RAM -- that you can take off your Windows partition and give to Linux.

Here's how to disable it. In brief: open an Admin Mode command prompt, and type powercfg /h off.
That's it. Done.

Once it's done, if it's still there, in Linux you can delete C:\HIBERFIL.SYS.

Final steps -- preparing for installing a 2nd operating system

If you've got this far and you're not about to set up your PC for dual-boot, then stop, you're done.

But if you do want to dual-boot, then the final step is shrinking your Windows drive.

There are 2 ways to do this. You might want one or the other, or both.

The safe way is to follow a dual-booter's handy rule:

Always use an OS-native tool to manipulate that OS.

What this means is this: if you're doing stuff to, or for, Windows, then use a Windows tool if you can. If you're doing it to or for Linux, use a Linux tool. If you're doing it to or for macOS, use a macOS tool.

  • Fixing a Windows disk? Use a Windows boot disk and CHKDSK. Formatting a drive for Windows? Use a Windows install medium. Writing a Windows USB key? Use a Windows tool, such as Rufus.

  • Writing a Linux USB? Use Linux. Formatting a drive for Linux? Use Linux.

  • Adjusting the size of a Mac partition? Use macOS. Writing a bootable macOS USB? Use macOS.

So, to shrink a Windows drive to make space for Linux, use Windows to do it.

Here's the official Microsoft way.

Check how much space Windows is using, and how much is free. (Find the drive in Explorer, right-click it and pick Properties.)

The free space is how much you can give to Linux.

Note, once Windows is shut down, you can delete the pagefile and swapfile to get a bit more space.

However, if you want to be able to boot Windows, then it needs some free working space. Don't shrink it down until it's full, with no free space left. Try to leave it about 50% empty, and at least 25% empty -- below that, Windows will hit problems when it boots, and if you're in an emergency situation, the last thing you need is further problems.

As a rule of thumb, a clean install of Win10 with no additional apps will just about run in a 16 GB partition. A 32 GB partition gives it room to breathe but not much -- you might not be able to install a new release of Windows, for example. A 64 GB partition is enough space to use for light duties and install new releases. A 128 GB partition is enough for actual work in Windows if your apps aren't very big.
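
To make the arithmetic concrete, here's a trivial Python sketch (the helper is mine, not any standard tool) applying that leave-25-to-50%-free rule to the space Windows is actually using:

def shrink_targets(used_gb: float) -> tuple[float, float]:
    # Rule of thumb from above: keep the Windows partition 25-50% empty.
    minimum = used_gb / 0.75      # 25% free -- below this, expect trouble
    comfortable = used_gb / 0.50  # 50% free -- room to breathe
    return minimum, comfortable

print(shrink_targets(24.0))  # 24 GB in use -> (32.0, 48.0)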

Run Disk Manager, select the partition, right-click and pick "shrink". Pick the smallest possible size -- Windows shouldn't shrink the disk so much you have no free space, but note my guidelines above.

Let it work. When it's done, look at how much unpartitioned space you have. Is there enough room for what you want? Yes? Great, you're done. Reboot off your Linux medium and get going.

No? Then you might need to shrink it further.

Sometimes Disk Manager will not offer to shrink the Windows drive as much as you might reasonably expect. For example, even if you only have 10-20 GB in use, it might refuse to shrink the drive below several hundred GB.

If so, here is how to proceed.

  1. Shrink the drive as far as Windows' Disk Manager will allow.

  2. Reboot Windows

  3. Run "CHKDSK /F" and reboot again.

  4. Check you've disabled Fast Boot and Hibernation as described above.

  5. Try to shrink it again.

No joy? Then you might have to try some extra persuasion.

Boot off a Linux medium, and as described above, delete C:\PAGEFILE.SYS, C:\SWAPFILE.SYS and C:\HIBERFIL.SYS.

Reboot into Windows and try again. The files will be automatically re-created, but in new positions. This may allow you to shrink the drive further.

If that still doesn't work, all is not lost. A couple more things to try:

  • If you have 8 GB or more of RAM, you can tell Windows not to use virtual memory. This frees up more space. Here's how.

  • Disable System Protection. This can free up quite a bit of space on a well-used Windows install. Here's a how-to.

Try that, reboot, and try shrinking again.

If none of this works, then you can shrink the partition using Linux tools. So long as you have a clean disk, fully shut down (Fast Boot off, not hibernated, etc.) then this should be fine.

All you need to do is boot off your Linux medium, remove the pagefile, swapfile and any remaining hibernation file, then run GParted. Again, bear in mind that you should leave 25-50% of free space if you want Windows to be able to run afterwards.

Once you've shrunk the partition, try it. Reboot into Windows and check it still works. If not, you might need to make the C partition a little bigger again.

Once you have a small but working Windows drive, you're good to go ahead with Linux.
liam_on_linux: (Default)
Someone at $JOB said that they really wished that rsync could give a fairly close estimate of how long a given operation would take to complete. I had to jump in...

Be careful what you wish for.

Especially that "close" in there, which is a disastrous request!

AIUI...

It can't do that, because the way it works is comparing files on source and destination block-by-block to work out if they need to be synched or not.

To give an estimate, it would have to do that twice, and thus, its use would be pointless. Rsync is not a clever copy program. Rsync exists to synch 2 files/groups of files without transmitting all the data they contain over a slow link; to do the estimate you ask would obviate its raison d'être.
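
To see why, here is a minimal Python sketch of the underlying idea -- much simplified: fixed block size, no rolling-checksum optimisation, and the names are mine, not rsync's:

import hashlib

BLOCK = 4096  # fixed for this sketch; real rsync sizes blocks adaptively

def weak(blk: bytes) -> int:
    # Adler-style weak checksum: cheap to compute; real rsync "rolls" it
    # forward a byte at a time instead of recomputing it per position.
    a = sum(blk) % 65521
    b = sum((len(blk) - i) * c for i, c in enumerate(blk)) % 65521
    return (b << 16) | a

def signatures(old: bytes) -> dict:
    # Receiver's side: checksum every block of the copy it already has.
    return {weak(old[o:o + BLOCK]): (o, hashlib.md5(old[o:o + BLOCK]).digest())
            for o in range(0, len(old), BLOCK)}

def delta(new: bytes, sigs: dict):
    # Sender's side: walk the new file, yielding ("copy", offset) where the
    # receiver already holds a matching block and ("data", byte) where it
    # doesn't. Only when this walk has FINISHED do you know how much data
    # must actually cross the wire -- hence no cheap, close estimate.
    i = 0
    while i < len(new):
        blk = new[i:i + BLOCK]
        hit = sigs.get(weak(blk))
        if hit and hashlib.md5(blk).digest() == hit[1]:
            yield ("copy", hit[0])
            i += BLOCK
        else:
            yield ("data", new[i:i + 1])
            i += 1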

If it just looked at file sizes, the estimate would be wildly pessimistic, making the tool look far less attractive -- and that would have led to it not being used, and never becoming a success.

Secondly, by comparison: people clearly asked for this from the Windows developers, and commercial s/w being what it is, they got it.

That's how on Win10 you get a progress bar for all file operations. Which means deleting a 0-byte file takes as long as deleting a 1-gigabyte file: it has to simulate the action first, in order to show the progress, so everything now has a built-in multi-second-long delay (far longer than the actual operation) so it can display a fancy animated progress bar and draw a little graph, and nothing happens instantly, not even the tiniest operations.

Thus a harmless-sounding UI request completely obviated the hard work that went into optimising NTFS, which for instance stores tiny files inside the file system indices so they take no disk sectors at all, meaning less head movement too.

All wasted because of a UI change.

Better to have no estimate than a wildly inaccurate estimate or an estimate that doubles the length of the task.

Yes, some other tools do give a min/max time estimate.

There are indeed far more technically-complex solutions, like...

(I started to do this in pseudocode but I quickly ran out of width, which tells you something)

* start doing the operation, but also time it
* if the time is more than (given interval)
* display a bogus progress indicator, while you work out an estimate
* then start displaying the real progress indicator
* while continuing the operation, which means your estimate is now inaccurate
* adjust the estimate to improve its accuracy
* until the operation is complete
* show the progress bar hitting the end
* which means you've now added a delay at the end

So you get a progress meter throughout which only shows for longer operations, but it delays the whole job.
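
In runnable form, the shape of that scheme is something like this Python sketch (parameters and names are made up for illustration):

import time

def run_with_estimate(items, process, quiet_period=2.0):
    # Do the work and time it. Only once the job has already been running
    # for quiet_period seconds do we start printing an estimate, which is
    # extrapolated from throughput so far -- so it starts out wrong and is
    # continually revised, exactly as the pseudocode above complains.
    start = time.monotonic()
    total = len(items)
    for done, item in enumerate(items, 1):
        process(item)
        elapsed = time.monotonic() - start
        if elapsed < quiet_period:
            continue  # short jobs finish before any estimate ever appears
        rate = done / elapsed              # items per second so far
        remaining = (total - done) / rate  # naive: assumes uniform items
        print(f"\r{done}/{total} done, about {remaining:.0f}s left ", end="")
    print(f"\rfinished in {time.monotonic() - start:.1f}s" + " " * 20)

# e.g.: run_with_estimate(list(range(200)), lambda n: time.sleep(0.02))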

This is what Windows Vista did, and it was a pain.

And as we all know, for any such truism, there is an XKCD for it.
https://xkcd.com/612/

That was annoying. So in Win10 someone said "fix it". Result, it now takes a long time to do anything at all, but there's a nice progress bar to look at.

So, yeah, no. If you want a tool that does its job efficiently and as quickly as possible, no, don't try to put a time estimate in it.

Non-time-based, non-proportional progress indicators are fine.

E.g. "processed file XXX" which increments, or "processed XXX $units_of_storage"

But they don't tell you how long it will take, and that annoys people. They ask "if you can tell me how much you've done, can't you tell me what fraction of the whole that is?" Well, no, not without doing a potentially big operation before beginning work which makes the whole job bigger.

And the point of rsync is that it speeds up work over slow links.

Summary:

Estimates are hard. Close estimates are very hard. Making the estimate makes the job take much longer (generally, at a MINIMUM twice as long). Poor estimates are very annoying.

So, don't ask for them.

TL;DR Executive summary (which nobody at Microsoft was brave enough to do):

"No."

This was one of those things that for a long time I just assumed everyone knew... then it has become apparent in the last ~dozen years (since Vista) that apparently lots of people didn't know, and indeed, that this lack of knowledge was percolating up the chain.

The time it hit me personally was upgrading a customer's installation of MS Office XP to SR1. This was so big, for the time -- several hundred megabytes, zipped, in 2002 and thus before many people had broadband -- that optionally you could request it on CD.

He did.

The CD contained a self-extracting Zip that extracted into the current directory. So you couldn't run it directly from the CD. It was necessary to copy it to the hard disk, temporarily wasting ¼ GB or so, then run it.

The uncompressed files would have fitted on the CD. That was a warning sign: several people had failed at attention to detail and checking.

(Think this doesn't matter? The tutorial for Docker instructs you to install a compiler, then build a copy of MongoDB (IIRC) from source. It leaves the compiler and the sources in the resulting container. This is the exact same sort of lack of attention to detail. Deploying that container would waste a gigabyte or so per instance, and thus waste space, energy, machine time, and cause over-spend on cloud resources.

All because some people just didn't think. They didn't do their job well enough.)

So, I copied the self-extractor, I ran it, and I started the installation.

A progress bar slowly crept up to 100%. It took about 5-10 minutes. The client and I watched.

When it got to 100%... it went straight back to zero and started again.

This is my point: progress bars are actually quite difficult.

It did this seven times.

The installation of a service release took about 45 minutes, three-quarters of an hour, plus the 10 minutes wasted because an idiot put a completely unnecessary download-only self-extracting archive onto optical media.

The client paid his bill, but unhappily, because he'd watched me wasting a lot of expensive time because Microsoft was incompetent at:

[1] Packaging a service pack properly.
[2] Putting it onto read-only media properly.
[3] Displaying a progress bar properly.

Of course it would have been much easier and simpler to just distribute a fresh copy of Office, but that would have made piracy easier. This product is proprietary software and one of Microsoft's main revenue-earners, so it's understandable that they didn't want to do that.

But if the installer had just said:

Installation stage x/7:
Progress: [XXXXXXXXXX..........]

That would have been fine. But it didn't. It went from 0 to 100%, seven times over, probably because first the Word team's patch was installed, then the Excel team's patch, then the Powerpoint team's patch, then the Outlook team's patch, then the Access team's patch, then the file import/export filters team's patch, etc. etc.
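
Rendering that kind of display is close to trivial -- a Python sketch:

def staged_bar(stage: int, stages: int, done: int, total: int, width: int = 20) -> str:
    # An overall stage counter plus a per-stage bar, so the bar resetting
    # at each stage never looks like a malfunction.
    filled = width * done // total
    return (f"Installation stage {stage}/{stages}:\n"
            f"Progress: [{'X' * filled}{'.' * (width - filled)}]")

print(staged_bar(3, 7, 5, 10))  # stage 3 of 7, halfway through this stage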

Poor management. Poor attention to detail. Lack of thought. Lack of planning. Major lack of integration and overview.

But this was just a service release. Those are unplanned; if the apps had been developed and tested better, in a language immune to buffer overflows and which didn't permit pointer arithmetic and so on, it would never have been necessary.

But the Windows Vista copy dialog box, as parodied in XKCD -- that's taking orders from poorly-trained management who don't understand the issues, because someone didn't think it through or explain it, or because someone got promoted to a level they were incompetent for.

https://en.wikipedia.org/wiki/Peter_principle

These are systemic problems. Good high-level management can prevent them. Open communications, where someone junior can point out issues to someone senior without fear of being disciplined or dismissed, can help.

But many companies lack this. I don't know yet if $DAYJOB has sorted these issues. I can confirm from bitter personal experience that my previous FOSS-centric employer suffered badly from them.

Of course, some kind of approximate estimate, or incremental progress indicator for each step, is better than nothing.

Another answer is to concede that the problem is hard, and display a "throbber" instead: show an animated widget that shows something is happening, but not how far along it is. That's what the Microsoft apps team often does now.

Personally, I hate it. It's better than nothing but it conveys no useful information.

Doing an accurate estimator based on integral speed tests is also significantly tricky and can slow down the whole operation. Me personally, I'd prefer an indicator that says "stage 6 of 15, copying file 475 of 13,615."

I may not know which files are big or small, which stages will be quick or slow... but I can see what it's doing, I can make an approximate estimate in my head, and if it's inaccurate, well, I can blame myself and not the developer.

And nobody has to try to work out what percentage of an n-stage process with o files of p different sizes they're at. That's genuinely hard to compute, and if any component reports a wrong file count or size, you get progress bars that go to 87% and then suddenly end, or that go to 106%, or that go to 42%, sit there for an hour, and then do the rest in 2 seconds.

I'm sure we've all seen all of those. I certainly have.
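If you want to see that failure mode in miniature, here's a toy calculation with invented file sizes. One big file at the end makes a count-based percentage race ahead and then stall: exactly the stuck-at-87% effect.

# Progress by file count versus progress by bytes, with made-up sizes:
# nine small files, then one huge one.
sizes_mb = [1, 1, 1, 1, 1, 1, 1, 1, 1, 991]

total = sum(sizes_mb)
done = 0
for i, size in enumerate(sizes_mb, start=1):
    done += size
    print(f"file {i:2}/{len(sizes_mb)}: "
          f"{100 * i / len(sizes_mb):5.1f}% by count, "
          f"{100 * done / total:5.1f}% by bytes")

By count, it hits 90% almost instantly; by bytes, it has barely started.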
liam_on_linux: (Default)
From a G+ thread that just won't die.

A British digital artist called William Latham -- he has a site, but it won't load for me -- once co-developed a wonderful screensaver for early 32-bit Windows, called Organic Art.

There was even an MS-sponsored free demo version.

Sadly this won't install on 64-bit Windows, as the installer has 16-bit components. However, you can get it working. I did it, after a bit of fiddling, on Windows 7.
Here's how, in brief:

* Install XP Mode.
* Boot it, and let it update etc.
* In Win7, meanwhile, download the demo from Nemeton.
* Once XP Mode is all updated, install the OA MS edition demo from the host drive.
* Check it works.

Then:

* Copy the whole Program Files/Computer Artworks tree into your W7 Downloads folder.
* Also retrieve the screensaver (.SCR) file from C:\WINDOWS\SYSTEM32 -- and, as mentioned above, D3DRM.DLL.
* In W7, copy these into the same locations on the Win7/64 system.
* Use the documented hack to re-enable screensavers (JFGI).

At this point it ran, but couldn't find any profiles. So:

* In XP Mode, export the entire Computer Artworks hive from the Registry to a file in the W7 Downloads folder.
* In W7, import this file.

Now the 'saver runs. It's worth disabling mode switching and forcing it to use hardware acceleration. Not all of the saver modules work, but most do -- and very quickly and smoothly, too.
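If you wanted to repeat the copy-across on another machine, it could be scripted. A rough Python sketch of those steps, run on the Win7/64 side, with the files already exported from XP Mode into the Downloads folder. The .SCR and .REG file names below are placeholders, because I don't recall the exact ones; check what the XP install actually created:

# Sketch of the copy-across steps. Run on the Win7/64 side.
# The .SCR and .REG names are placeholders; substitute the real ones.
import shutil
import subprocess
from pathlib import Path

downloads = Path.home() / "Downloads"

# 1. The application tree, copied out of XP Mode's Program Files.
shutil.copytree(downloads / "Computer Artworks",
                Path(r"C:\Program Files\Computer Artworks"))

# 2. The screensaver module and D3DRM.DLL, into the same place as on XP.
system32 = Path(r"C:\Windows\System32")
for name in ("OrganicArt.scr", "D3DRM.DLL"):   # placeholder .SCR name
    shutil.copy2(downloads / name, system32 / name)

# 3. The exported Computer Artworks registry hive; reg.exe does the import.
subprocess.run(["reg", "import", str(downloads / "ComputerArtworks.reg")],
               check=True)

Run it from an elevated prompt, and sanity-check the paths first; the steps above are the authoritative sequence, and this is just the same thing with the clicking taken out.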

This won't work as-is on Windows 8 or newer. There are hacks but I only got the VirtualPC component of XP Mode running on Win8. Nothing newer worked.

But you can run XP Mode in VirtualBox, and I've published an article on how to do that. The other steps are much the same.

Try it. It's really quite beautiful.
liam_on_linux: (Default)
My previous post was an improvised and unplanned comment. I could have structured it better, and it caused some confusion on https://lobste.rs/

Dave Cutler did not write OS/2. AFAIK he never worked on OS/2 at all in the days of the MS-IBM pact -- he was still at DEC then.

Many sources focus on only one side of the story -- the DEC side. This is important, but it's only half the tale.

IBM and MS got very rich working together on x86 PCs and MS-DOS. They carefully planned its successor: OS/2. IBM placed restrictions on this which crippled it, but it wasn't apparent at the time just how bad this would turn out to be.

In the early-to-mid 1980s, it seemed apparent to everyone that the most important next step in microcomputers would be multitasking.

Even small players like Sinclair thought so -- the QL was designed as the first cheap 68000-based home computer. No GUI, but multitasking.

I discussed this a bit in a blog post a while ago: http://liam-on-linux.livejournal.com/46833.html

Apple's Lisa was a sideline: too expensive. Nobody picked up on its true significance.

Then, 2 weeks after the QL, came the Mac. Everything clever but expensive in the Lisa was stripped out: no multitasking, little RAM, no hard disk, no slots or expansion. All that was left was the GUI. But that was the most important bit, as Steve Jobs saw and hardly anyone else did.

So, a year later, the ST had a DOS-like OS with a bolted-on GUI. No shell, just a GUI. A fast-for-the-time CPU, no fancy chips, and it did great. It had the original, uncrippled version of DR GEM. Apple's lawsuit meant that PC GEM was crippled: no overlapping windows, no desktop drive icons or trashcan, etc.

liam_on_linux: (Default)
Although the launch of GNOME 3 was a bumpy ride and it got a lot of criticism, it's coming back. It's the default desktop of multiple distros again now. Allegedly even Linus Torvalds himself uses it. People tell me that it gets out of the way.

I find this curious, because to me it's a little clunky and obstructive. It looks great, but for me, it doesn’t work all that well. It’s OK — far better than it was 2-3 years ago. But while some say it gets out of the way and lets them work undistracted, it gets in my way, because I have to adapt to its weird little quirks. It will not adapt to mine. It is dogmatic: it says, you must work this way, because we are the experts and we have decided that this is the best way.

So, on OS X or Ubuntu, I have my dock/launcher thing on the left, because that keeps it out of the way of the scrollbars. On Windows or XFCE, I put the task bar there. For all 4 of these environments, on a big screen, it’s not too much space and gives useful info about minimised windows, handy access to disk drives, stuff like that. On a small screen, it autohides.

But not on GNOME, no. No, the gods of GNOME have decreed that I don’t need it, so it’s always hidden. I can’t reveal it by just putting my mouse over there. No, I have to click a strange word in the menu bar. “Activities”. What activities? These aren’t my activities. They’re my apps, folders, files, windows. Don’t tell me what to call them. Don’t direct me to click in a certain place to get them; I want them just there if there’s room, and if there isn’t, on a quick flick of the wrist to a whole screen edge, not a particular place followed by a click. It wastes a bit of precious menu-bar real-estate with a word that’s conceptually irrelevant to me. It’s something I have to remember to do.

That’s not saving me time or effort, it’s making me learn a new trick and do extra work.

The menu bar. Time-honoured UI structure. Shared by all post-Mac GUIs. Sometimes it contains a menu, efficiently spread out over a nice big easily-mousable spatial range. Sometimes that’s in the window; whatever. The whole width of the screen in Mac and Unity. A range of commands spread out.

On Windows, the centre of the title bar is important info — what program this window belongs to.

On the Mac, that’s the first word of the title bar. I read from left to right, because I use a Latinate alphabet. So that’s a good place too.

On GNOME 3, there’s some random word I don’t associate with anything in particular as the first word, then a deformed fragment of an icon that’s hard to recognise, then a word, then a big waste of space, then the blasted clock! Why the clock? Are they that obsessive, such clock-watchers? Mac and Windows and Unity all banish the clock to a corner. Not GNOME, no. No, it’s front and centre, one of the most important things in one of the most important places.

Why?

I don’t know, but I’m not allowed to move it.

Apple put its all-important logo there in early versions of Mac OS X. They were quickly told not to be so egomaniacal. GNOME 3, though, enforces it.

On Mac, Unity, and Windows, in one corner, there’s a little bunch of notification icons. Different corners unless I put the task bar at the top, but whatever, I can adapt.

On GNOME 3, no, those are rationed. There are things hidden under sub options. In the pursuit of cleanliness and tidiness, things like my network status are hidden away.

That’s my choice, surely? I want them in view. I add extra ones. I like to see some status info. I find it handy.

GNOME says no, you don’t need this, so we’ve hidden it. You don’t need to see a whole menu. What are you gonna do, read it?

It reminds me of the classic Bill Hicks joke:

"You know I've noticed a certain anti-intellectualism going around this country ever since around 1980, coincidentally enough. I was in Nashville, Tennessee last weekend and after the show I went to a waffle house and I'm sitting there and I'm eating and reading a book. I don't know anybody, I'm alone, I'm eating and I'm reading a book. This waitress comes over to me (mocks chewing gum) 'what you readin' for?'...wow, I've never been asked that; not 'What am I reading', 'What am I reading for?’ Well, goddamnit, you stumped me... I guess I read for a lot of reasons — the main one is so I don't end up being a f**kin' waffle waitress. Yeah, that would be pretty high on the list. Then this trucker in the booth next to me gets up, stands over me and says [mocks Southern drawl] 'Well, looks like we got ourselves a readah'... aahh, what the fuck's goin' on? It's like I walked into a Klan rally in a Boy George costume or something. Am I stepping out of some intellectual closet here? I read, there I said it. I feel better."

Yeah, I read. I like reading. It’s useful. A bar of words is something I can scan in a fraction of a second. Then I can click on one and get… more words! Like some member of the damned intellectual elite. Sue me. I read.

But Microsoft says no, thou shalt have ribbons instead. Thou shalt click through tabs of little pictures and try and guess what they mean, and we don’t care if you’ve spent 20 years learning where all the options were — because we’ve taken them away! Haw!

And GNOME Shell says, nope, you don’t need that, so I’m gonna collapse it all down to one menu with a few buried options. That leaves us more room for the all-holy clock. Then you can easily see how much time you’ve wasted looking for menu options we’ve removed.

You don’t need all those confusing toolbar buttons neither, nossir, we gonna take most of them away too. We’ll leave you the most important ones. It’s cleaner. It’s smarter. It’s more elegant.

Well, yes it is, it’s true, but you know what, I want my software to rank usefulness and usability above cleanliness and elegance. I ride a bike with gears, because gears help. Yes, I could have a fixie with none, it’s simpler, lighter, cleaner. I could even get rid of brakes in that case. Fewer of those annoying levers on the handlebars.

But those brake and gear levers are useful. They help me. So I want them, because they make it easier to go up hills and easier to go fast on the flat, and if it looks less elegant, well I don’t really give a damn, because utility is more important. Function over form. Ideally, a balance of both, but if offered the choice, favour utility over aesthetics.

Now, to be fair, yes, I know, I can install all kinds of GNOME Shell extensions — from Firefox, which freaks me out a bit. I don’t want my browser to be able to control my desktop, because that’s a possible vector for malware. A webpage that can add and remove elements to my desktop horrifies me at a deep level.

But at least I can do it, and that makes GNOME Shell a lot more usable for me. I can customise it a bit. I can add elements, and I could make my favourites bar permanent, but honestly, for me, this is core functionality and I don’t think it should be an add-on. The favourites bar still won’t easily let me see how many instances of an app are running, as the Unity one does. Nor does it hold minimised windows and easy shortcuts like the Mac one. It’s less flexible than either.

There are things I like. I love the virtual-desktop switcher. It’s the best on any OS. I wish GNOME Shell were more modular, because I want that virtual-desktop switcher on Unity and XFCE, please. It’s superb, a triumph.

But it’s not modular, so I can’t. And it’s only customisable to a narrow, limited degree. And that means not to the extent that I want.

I accept that some of this is because I’m old and somewhat stuck in my ways and I don’t want to change things that work for me. That’s why I use Linux, because it’s customisable, because I can bend it to my will.

I also use Mac OS X — I haven’t upgraded to Sierra yet, so I won’t call it macOS — and anyway, I still own computers that run MacOS, as in MacOS 6, 7, 8, 9 — so I continue to call it Mac OS X. What this tells you is that I’ve been using Macs for a long time — since the late 1980s — and whereas they’re not so customisable, I am deeply familiar and comfortable with how they work.

And Macs inspired the Windows desktop and Windows inspired the Linux desktops, so there is continuity. Unity works in ways I’ve been using for nearly 30 years.

GNOME 3 doesn’t. GNOME 3 changes things. Some in good ways, some in bad. But they’re not my ways, and they do not seem to offer me any improvement over the ways I’m used to. OS X and Unity and Windows Vista/7/8/10 all give me app searching as a primary launch mechanism; it’s not a selling point of GNOME 3. The favourites bar thing isn’t an improvement on the OS X Dock or Unity Launcher or Windows Taskbar — it only delivers a small fraction of the functionality of those. The menu bar is if anything less customisable than the Mac or Unity ones, and even then, I have to use extensions to do it. If I move to someone else’s computer, all that stuff will be gone.

So whereas I do appreciate what it does and how and why it does so, I don’t feel like it’s for me. It wants me to change to work its way. The other OSes I use — OS X daily, Ubuntu Unity daily, Windows occasionally when someone pays me — don’t.

So I don’t use it.

Does that make sense?
liam_on_linux: (Default)
I have ruffled many feathers with my position that the touch-driven computing sector is growing so fast that it's going to subsume the old WIMP model completely. I don't mean that iPads will replace Windows PCs, but that the descendants of the PC will look and act more like tablets than today's desktops and laptops.

But where is it leading, beyond that point? I have absolutely no concrete idea. The end point, though? I've read one brilliant model.

It's in one of the later Foundation books by Isaac Asimov, IIRC. (Not a series I'm that enamoured of, actually.)

A guy gets (steals?) a space yacht: a small, 1-man starship. (Set aside the plausibility of this.)

He searches the ship's crew quarters. In its few luxury rooms, there is no cockpit. No controls, no instruments, nothing. He is bemused.

He returns to the comfiest room, the main stateroom, i.e. cabin/bedroom. In it there is a large, bare dressing table with a comfy seat in front of it. He sits.

Two handprints appear, projected on the surface of the desk, shaped in light.

He studies them. They're just hand-shaped spots of light. He puts his hands on them.

And suddenly, he is much smarter. He knows the ship's position and speed in space. He knows where all the nearby planetary bodies are, their gravity wells, the speeds needed to reach them and enter orbit.

Thinking of the greater galaxy, he knows where all the nearby stars are, their masses, their luminosities, their planetary systems. Merely thinking of a planet, he knows its cities, ports, where to orbit it, etc.

All this knowledge is there in his mind if he wants it; if he allows his attention to move elsewhere, it's gone.

He sits back, shocked. His hands lift from the prints on the desk, and it all disappears.

That is the ultimate UI. One you don't know is there.

Any UI where there are metaphors and abstractions and controls you must operate is inferior; direct interaction is better. We've moved from text views of marked-up files with arcane names in folder hierarchies to today: hi-res, full-colour, moving images of fully-formatted documents and images. That's great.

Some people are happily directly manipulating these — drawing and stroking screens with all their fingers, interacting naturally. Push up to see the bottom of a document, tap on items of interest. It's so natural pre-toddlers can do it.

But many old hands still like their pointing hardware and little icons on screen that they can twiddle with their special pointing devices, and they shout angrily that it's more precise and it's tried and tested and it works.

Show them something better, no, it's a toy. OK for idly surfing the web, or reading, or watching movies, but no substitute for the "real thing".

It's a toy and the mere idea that these early versions could in time grow into something that could replace their 4-box Real Computer of System Unit, Monitor, Mouse and Keyboard is a nonsensical piece of idiocy.

Which is exactly what their former bosses and their tutors said about the Mac's UI 30y ago. It's doubtless what they said about the tinker-toy CP/M boxes a decade before that, and so on.

I'm guilty too. I am using a 25y old keyboard on my tiny silent near-unexpandable 2011 Mac mini, attached via a convertor that cost more than the keyboard and about a third as much as the Mac itself. I don't have a tablet; I don't personally like them much. I like my phablet, though. I gave away my Magic Trackpad - I didn't like it.

(And boy did my friends in the FOSS community curse me out for buying a Mac. I'm a traitor and a coward, apparently.)

But although I personally don't want this stuff, nonetheless, I think it's where we're going.

If adding more layers of abstraction to the system means we can remove layers of abstraction from the human-computer interface, then I'm all for it. The more we can remove, the simpler and easier and clearer the computers we can make, the better. And if we can make them really small and cheap and thus give one to every child in the poorer countries of the world — I'd be delighted.

If the price was putting Microsoft and Apple out of business, destroying the career of everyone working with Windows, and replacing it all with that nasty cancerous GPL and Big-Brother-like services like Google — it would still be worth it.

liam_on_linux: (Default)
They're a bit better in some ways. It's somewhat marginal now.

OK. Position statement up front.

Anyone who works in computers and only knows one platform is clueless. You need cross-platform knowledge and experience to actually be able to assess strengths, weaknesses, etc.

Most people in IT this century only know Windows and have only known Windows. This means that the majority of the IT trade are, by definition, clueless.

There is little real cross-platform experience any more, because so few platforms are left. Today, it's Windows NT or Unix, running on x86 or ARM. 2 families of OS, 2 families of processor. That is not diversity.

So, only olde phartes, yeah like me, who remember the 1970s and 1980s when diversity in computing meant something, have any really useful insight. But the snag with asking olde phartes is we're jaded & curmudgeonly & hate everything.

So, this being so...

The Mac's OS design is better and cleaner, but that's only to the extent of saying New York City's design is better and cleaner than London's. Neither is good, but one is marginally more logical and systematic than the other.

The desktop is much simpler and cleaner and prettier.

App installation and removal is easier and doesn't involve running untrusted binaries from 3rd parties, which is such a hallmark of Windows that Windows-only types think it is normal and natural and do not see it for the howling screaming horror abomination that it actually is. Indeed, put Windows types in front of Linux and they try to download and run binaries, and whinge when it doesn't work. See comment about cluelessness above.

(One of the few places where Linux is genuinely ahead -- far ahead -- today is software installation and removal.)

Mac apps are fewer in number but higher in quality.

The Mac tradition of relative simplicity has been merged with the Unix philosophy of "no news is good news". Macs don't tell you when things work. They only warn you when things don't work. This is a huge conceptual difference from the VMS/Windows philosophy, and so, typically, this goes totally unnoticed by Windows types.

Go from a Mac to Windows and what you see is that Windows is constantly nagging you. Update this. Update that. Ooh you've plugged a device in. Ooh, you removed it. Hey it's back but on a different port, I need a new driver. Oh the network's gone. No hang on it's back. Hey, where's the printer? You have a printer! Did you know you have an HP printer? Would you like to buy HP ink?

Macs don't do this. Occasionally the machine coughs discreetly and asks if you know that something bad happened.

PC users are used to it and filter it out.

Also, PC OSes and apps are all licensed and copy-protected. Everything has to be verified and approved. Macs just trust you, mostly.

Both are reliable, mostly. Both just work now, mostly. Both rarely fail, try to recover fairly gracefully and don't throw cryptic blue-screens at you. That difference is gone.

But because of Windows' terrible design and the mistakes that the marketing lizards made the engineers put in, it's howlingly insecure, and vastly prone to malware. This is because it was implemented badly.

Windows apologists -- see cluelessness -- think it's fine and it's just because it dominates the market. This is because they are clueless and don't know how things should be done. Ignore them. They are loud; some will whine about this. They are wrong but not bright enough to know it. Ignore them.

You need antimalware on Windows. You don't on anything else. Antimalware makes computers slower. So, Windows is slower. Take a Windows PC, nuke it, put Linux on it and it feels a bit quicker.

Only a bit, 'cos Linux too is a vile mess of 1970s crap. If it still worked, you could put BeOS on it and discover, holy shit wow lookit that, this thing is really fsckin' fast and powerful -- but no modern OS lets you feel it. It's buried under 5GB of layered legacy crap.

(Another good example was RISC OS. Today, millions of people are playing with Raspberry Pis, a really crappy underpowered £25 tiny computer that runs Linux very poorly. Raspberry Pis have ARM processors. The ARM processor's original native OS, RISC OS, still exists. Put RISC OS on a Raspberry Pi and suddenly it's a very fast, powerful, responsive computer. Swap the memory card for Linux and it crawls like a one-legged dog again. This is the difference between an efficient OS and an inefficient one. The snag is that RISC OS is horribly obsolete now so it's not much use, but it does demonstrate the efficiency of 1980s OSes compared to 1960s/1970s ones with a few decades of crap layered on top.)

Windows can be sort of all right, if you don't expect much, are savvy, careful and smart, and really need some proprietary apps.

If you just want the Interwebs and a bit of fun, it's a waste of time and effort, but Windows people think that there's nothing else (see clueless) and so it survives.

Meanwhile, people are buying smartphones and Chromebooks, which are good enough if you haven't drunk the Kool-Aid.

But really, they're all a bit shit, it's just that Windows is a bit shittier but 99% of computers run it and 99% of computer fettlers don't know anything else.

Once, before Windows NT, but after Unix killed the Real Computers, Unix was the only real game in town for serious workstation users.

Back then, a smart man wrote:

“I liken starting one’s computing career with Unix, say as an undergraduate, to being born in East Africa. It is intolerably hot, your body is covered with lice and flies, you are malnourished and you suffer from numerous curable diseases. But, as far as young East Africans can tell, this is simply the natural condition and they live within it. By the time they find out differently, it is too late. They already think that the writing of shell scripts is a natural act.” — Ken Pier, Xerox PARC

That was 30y ago. Now, Windows is like that. Unix is the same, but you have air-conditioning and some shots and all the Big Macs you can eat.

It's a horrid vile shitty mess, but basically there's no choice any more. You just get to choose the flavour of shit you will roll in. Some stink slightly less.
liam_on_linux: (Default)
Facebook readers may have noted my post yesterday, when I mentioned that I was trying to resurrect an old notebook with a dead screen by using a screenreader. I commented:

"Just spent an hour trying to update a fresh install of Windows XP SP3 on a PC with no screen, using speech alone. Haven't felt so lost since 1988. It's currently on 100 of 125, though, which is a sort of success..."

Well, I've spent a little more time on it today.

According to http://update.microsoft.com I now have all essential updates installed. I'm not feeling brave enough to tackle the optional updates just yet - I'm still terrible at navigating web pages.

I've also managed to install MS Security Essentials, and currently, Ninite claims to be installing Opera, OpenOffice and a FOSS PDF reader.

It's a very chastening experience. I am a dab hand at driving Windows without a mouse - I learned on Windows 2.0 in the days when my employers didn't own a PC mouse. But much of the UI of XP and its apps is either inaccessible from the keyboard, unreadable, or just unlabelled.

For instance, stepping through the icons in the notification area, I get "icon... icon... NVDA... icon... Automatic updates... clock." Selecting each icon and opening it is the only way to find out what it's the icon for. One gives the wireless network connection info, for instance, but some lazy-ass Microsoft programmer forgot to give it a text label.

The entire UI of MS Security Essentials, as the screenreader sees it, consists of the following: "home... update... options... scan... exit." That's it. No legible text at all. I can open Task Manager and move between the tabs, but there's no way to sort the list of tasks to find what is hogging the system. That needs a mouse-click.

Progress bars are unreadable, but NVDA makes a series of rising beeps to tell you that something's happening. It's hard to tell how far you've got, though. The mandatory Windows Genuine Advantage installer stops at about 80%, every time, even after 3 reboots. I gave up and used a third-party WGA killer app to nuke it into oblivion.

And I've compared notes with ednun on this. Ubuntu seems to be about the best Linux for accessibility, with an integrated screenreader, Orca - but it can read considerably less than NVDA can. Windows does seem to be the best option.

It's quite scary. Certainly I'm nowhere near being able to post status updates from a screenless PC.

(Weird font changes courtesy of the LJ rich-text edit control. Sorry about that.)
liam_on_linux: (Default)
I have spent a lot of time and effort this year on learning my way around the current generation of Windows Server OSs, and the end result is that I've learned that I really profoundly dislike them.

Personally, I found the server admin tools in NT 3 and 4 to be quite good: fairly clean, simple and logical - partly because NT's networking was built on LAN Manager, which was IBM-designed, with a lot of experience behind it.

Since Windows 2000 Server, the new basis, Active Directory, is very similar to that of Exchange. Much of the admin revolves around things like Group Policies and a ton of proprietary extensions on top of DNS. The result is a myriad of separate management consoles, all a bit different, most of them quite limited, none really following the Windows GUI guidelines, because they're not true Windows apps: they're snap-ins to the limited MS Management Console. Just like Exchange Server, there are tons and tons of dialog boxes with 20 or 30 or more tabs each, and both the parent console and many of the dialogs contain trees with a dozen-plus layers of hierarchy.

It's an insanely complicated mess.

The main upshot of Microsoft's attempts to make Windows Server into something that can run a large, geographically-dispersed multi-site network is that the company has successfully brought the complexity of managing an unknown Unix server to Windows.

On Unix you have an unknown but large number of text files in an unknown but large number of directories, which use a wide variety of different syntaxes, and which have a wide variety of different permissions on them. These control an unknown but large number of daemons from multiple authors and vendors which provide your servers' various services.

Your mission is to memorise all the possible daemons, their config files' names, locations and syntaxes, and use low-level editing tools from the 1960s and 1970s to manage them. The boon is that you can bring your own editors, that it's all easily remotely manageable over multiple terminal sessions, and that components can in many cases be substituted one for another in a somewhat plug-and-play fashion. And if you're lucky enough to be on a FOSS Unix, there are no licensing issues.

These days, the Modern way to do this is to slap another layer of tools over the top, and use a management daemon to manage all those daemons for you, and quite possibly a monitoring daemon to check that the management daemon is doing its job, and a deployment daemon to build the boxes and install the service, management and monitoring daemons.

On Windows, it's all behind a GUI and now Windows by default has pretty good support for nestable remote GUIs. Instead of a myriad of different daemons and config files, you have little or no access to config files. You have to use an awkward and slightly broken GUI to access config settings hidden away in multiple Registry-like objects or databases or XML files, mostly you know or care not where. Instead of editing text files in your preferred editor, you must use a set of slightly-broken irritatingly-nonstandard and all-subtly-different GUIs to manipulate vast hierarchical trees of settings, many of which overlap - so settings deep in one tree will affect or override or be overridden by settings deep in another tree. Or, deep in one tree there will be a whole group of objects which you must manipulate individually, which will affect something else depending on the settings of another different group of objects elsewhere.

Occasionally, at some anonymous coder's whim, you might have to write some scripts in a proprietary language.

When you upgrade the system, the entire overall tree of trees and set of sets will change unpredictably, requiring years of testing to eliminate as many as possible of the interactions.

But at least in most installs it will all be MS tools running on MS OSs - the result of MS' monopoly over some two decades being a virtual software monoculture.

But of course often you will have downversion apps running on newer servers, or a mix of app and server OS versions, so some machines are running 2000, some 2003, some 2008 and some 2008R2, and apps could span a decade or more's worth of generations.

And these days, it's anyone's guess if the machine you're controlling is real or a VM - and depending on which hypervisor, you'll be managing the VMs with totally different proprietary toolsets.

If you do have third-party tools on the servers, they will either snap into the MS management tools, adding a whole ton of new trees and sets to memorise your way around, or they will completely ignore it and offer a totally different GUI - typically one simplified to idiot level, such as an enterprise-level backup solution I supported in the spring, which has wizards to schedule anything from backups to verifies to restores, but which contains no option anywhere to eject a tape. It appears to assume that you're using a robot library which handles that automatically.

Without a library, ejecting a tape from an actual drive attached to the server required a server reboot.

But this being Windows, almost any random change to a setting anywhere might require a reboot. So, for instance, Windows Terminal Services runs on the same baseline Windows edition, with its automatic security patch installation - so all users get prompted to reboot the server, even though they shouldn't have the privileges to actually do so, and the poor old sysadmins, probably in a building miles away or on a different continent, can't find a single time to reboot it when it won't inconvenience someone.

This, I believe, is progress. Yay.

After a decade of this, MS has now decided, of course, that it was wrong all along and that actually a shell and a command line is better. The snag is that it's not learned the concomitant lessons of terseness (like Unix) or of flexible abbreviation (like VMS DCL), or of cross-command standardisation and homogeneity (although to be fair, Unix never learned that, either. "Those who do not know VMS are doomed to reinvent it, poorly," perhaps.) But then, long-term MS users expect the rug to be pulled from under them every time a new generation ships, so they will probably learn that in time.

The sad thing about the proliferation of complexity in server systems, for me, is that it's all happened before, a generation or two ago, but the 20-something-year-olds building and using this stuff don't know their history. Santayana applies.

The last time around, it was Netware 4.

Netware 3 was relatively simple, clean and efficient. It couldn't do everything Netware 2 could do, but it was relatively streamlined, blisteringly fast and did what it did terribly well.

So Novell threw away all that with Netware 4, which was bigger, slower, and added a non-negotiable ton of extra complexity aimed at big corporations running dozens of servers across dozens of sites - in the form of NDS, the Netware Directory Services. Just the ticket if you were running a network the size of Enron or Lehman Brothers, but a world of pain for the poor self-taught saps running the single servers of millions of small businesses. They all hated it, and consequently deserted Netware in droves. Most went to NT4; Linux wasn't really there yet in 1996.

Now, MS has done exactly the same to them.

When Windows 2000 came around, Linux was ready - but the tiny handful of actual grown-up integrated server distros (such as eSmith, later SME Server) have never really caught on. Instead, there are self-assembly kits and each sysadmin builds their own. It's how it's always been done, why change?

I had hoped that Mac OS X Server might counteract this. It looked like The Right Thing To Do: a selection of the best FOSS server apps, on a regrettably-proprietary but solid base, with some excellent simple admin tools on top, and all the config moved into nice standard network-distributable XML files.

But Apple has dropped the server ball somewhere along the line. Possibly it's not Apple's fault but the deep instinctual conservatism of network and server admins, who would tend to regard such sweeping changes with fear and loathing.

Who knows.

But the current generation of both Unix and Windows server products both look profoundly broken to me. You either need to be a demigod with the patience and deep understanding of an immortal to manage them properly, or just accept the Microsoft way: run with the defaults wherever possible and continually run around patching the worst-broken bits.

The combination of these things is one of the major drivers behind the adoption of cloud services and outsourcing. You move all the nightmare complexity out of your company and your utter dependence on a couple of highly-paid god-geeks, and parcel it off to big specialists with redundant arrays of highly-paid god-geeks. You lose control and real understanding of what's occurring and replace it with SLAs and trust.

Unless or until someone comes along and fixes the FOSS servers, this isn't going to change - it's just going to continue.

Which is why I don't really want to be a techie any more. I'm tired of watching it just spiral downwards into greater and greater complexity.

(Aside: of course, nothing is new under the sun. It was, I believe, my late friend Guy Kewney who made a very plangent comment about this same process when WordPerfect 5 came out. "With WordPerfect 4.2, we've made a good bicycle. Everyone knows it, everyone likes it, everyone says it's a good bicycle. So what we'll do is, we'll put seven more wheels on it."

In time, of course, everyone looked back at WordPerfect 5.1 with great fondness, compared to the Windows version. In time, I'm sure, people will look back at the relative homogeneity of Windows 2003 Server or something with fondness, too. It seems inevitable. I mean, a direct Win32 admin app running on the same machine as the processes it's managing is bound to be smaller, simpler and faster than a decade-newer Win64 app running on a remote host...)
liam_on_linux: (Default)
So today's small victory was beating my HP Proliant ML110 - the original model from 2004, with a 3GHz P4 in it - into submission.

It used to run Windows Server 2003 Small Business Server. Not really by my choice...

I got it with an HP UltraSCSI adaptor in it, but no disks, & I don't have enough to populate it.

So, I swapped it for the PCI-X Dell CERC ATA-100 controller out of my Dell PowerEdge 600SC, acquired around the same time. The Dell has 3 IDE controllers on the motherboard anyway, so with the SCSI adaptor in it, it can boot from a pair of more-than-big-enough-for-Ubuntu-Server 10GB UltraSCSI disks and still have enough ports to drive 4 UltraIDE disks and an IDE CD-RW drive to boot and install from.

This left me struggling to fit six UltraIDE drives into the Proliant - it's only really meant to take four, but a pair of cheapo 3½"-to-5¼" mounting kits from eBay let me use the two spare removable bays, too. I have six spare 80GB IDE drives and a seventh in case of failure, so this gave me a total of 372GB of RAID5 storage. OK, not a lot by modern standards, but actually, that is a lot of space. I don't game, I don't download a lot of movies or TV, and I don't want more space than I can back up onto one relatively inexpensive external disk.

Snag is, last year, no Linux distro could access a RAID5 on the CERC controller. It's really an AMI MegaRAID i4, bought to drive a mirror pair with Linux, but kernel 2.6.twenty-whatever-was-current-in-2009 couldn't see RAID5s on this controller. Windows could. So, finding an eval copy of W2K3 SBS in the garage, I went with that, and it worked fine.

But as mentioned in an earlier post, unfortunately, it timed out at the start of Jan and my server stopped working for more than 1h at a time. Microsoft's elegant method for enforcing the evaluation period is to have the server automatically throw a bluey - a BSOD - after an hour of use. Nice.

So I decided to try this copy of Windows 2008 I have. Put an external screen on it, boot Ubuntu 9.10, move all of W2K3SBS into a "previous system" folder (that's what Mac OS X does when you "archive and install", so it seemed appropriate), boot Windows 2008 and install it. It all went swimmingly. It took an hour or 2 to get it running, but it even detected and installed the inbuilt Broadcom NIC for me, which is more than 2K3 could do. (According to a review I found, these damned boxes shipped with W2K3, so it ought to have worked - but it didn't. I had to faff around downloading the driver on another machine and transferring it across with a USB key. Of course, the HP setup disks I got with the machine don't contain anything as mundane as a NIC driver, oh no. Management tools agogo, but no actual, you know, drivers.)
And then the niggling issues started...
liam_on_linux: (Default)
A final caveat to my previous post: there is one thing you probably shouldn't try doing under XP-inna-VM: play games. The VM does sport optional 2D graphics acceleration, although I've spotted a few display glitches, but the copy of Windows in the VM can't get at your shiny whizzy fanheater of a 3D card & any modern 3D game is going to run like crap. For that, I'm afraid, you need to dual-boot into real native Windows.

TinyXP will do that just fine, but remember, you're going to have to find the latest drivers for every bit of kit in your machine. My advice:
- install TinyXP first, in a primary partition on the 1st hard disk.
- leave plenty of space for Linux; put all its partitions in logical drives in an extended partition
- next, after TinyXP is working but before it's got its drivers, install Ubuntu
- now, in Ubuntu, you can carefully peruse the output of

dmesg | less

... and work out what motherboard chipset you have, what graphics, sound, network card(s) &c. your machine is sporting. The best way to identify a motherboard, though, is just to look at it. Use a torch. You'll probably find the makers' name and the model number printed between the expansion slots.

- Using Linux, go download all the relevant Windows drivers from the manufacturers' websites.
- Go to Places | Computer and open your Windows partition. Copy the downloaded drivers into

C:\Documents and Settings\All Users\Desktop

- Then reboot into Windows again and they're all there, ready to install.

This method saves an awful lot of hassle trying to get Windows working if you have no driver disks.
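In fact, if you do this a lot, the copying step can itself be scripted from the Linux side. A rough sketch, assuming the Windows partition is mounted at /media/windows (yours will differ) and the downloaded drivers are .exe files sitting in ~/Downloads:

# Copy downloaded Windows driver installers onto the mounted Windows
# partition, so they appear on the All Users desktop at next boot.
# The mount point and the *.exe pattern are assumptions; adjust to taste.
import shutil
from pathlib import Path

win = Path("/media/windows")
dest = win / "Documents and Settings" / "All Users" / "Desktop"
for f in (Path.home() / "Downloads").glob("*.exe"):
    shutil.copy2(f, dest)
    print("copied", f.name)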

If you install Ubuntu after Windows, it's smart enough to set up dual-boot for you. Install Windows after Ubuntu, though, and it will overwrite your boot sector, and you won't be able to boot Ubuntu any more. Also, Windows likes being in a primary partition, preferably the first, whereas Linux doesn't care.

Oh, and don't waste your time on anything other than Ubuntu. If you are at the level of expertise to have got any useful info from this piece, you probably don't need advice on choosing a distro... but just in case:

- OpenSUSE is huge and its package-management system is frankly a bit past it.
- Fedora is a sort of rolling beta. It never stabilises, it's not supported and there are no official media addons, which are free with Ubuntu.
- Kubuntu is OK if you're a KDE freak but if you don't know the difference between KDE & GNOME, just go for vanilla Ubuntu, which involves a lot less fiddling.
- Mandriva is OK but again its package-management system, like that in SUSE and Fedora, is a decade or so less advanced than the one in Ubuntu.
- Debian is too much like hard work unless you actively enjoy fiddling.
- Gentoo is for boy-racers, the sort of person who drives a 6Y old Vauxhall Nova with a full bodykit and a 150dB sound system. Just don't.
- All the rest are for Linux hackers. You don't want to go there.
liam_on_linux: (Default)
I'd not had a PC quick enough to really use PC-on-PC virtualisation in anger, until ednun gave me the carcase of his old one: AMD Athlon64 X2 4800+, 2G RAM, no drives or graphics.

I've upped it to 4G, a couple of old 120GB EIDE hard disks, a DVD burner, a replacement graphics card (freebie from a friend) & a new Arctic Cooling Freezer7 Pro heatsink/fan from eBay to replace the old, clogged-up AMD OEM one. Total budget, just under £20; result, quick dual-core 64-bit machine with 64-bit Linux running very nicely.

For some work stuff, I've been using Linux-under-Linux in VirtualBox, which works rather well - but that's a kinda specialised need. There are still a few things that either don't work all that well in Linux, or which I can't readily do. Spotify runs under WINE, but crackles & pops, then stops playing after 2-3 minutes & never emits another cheep. My CIX reader, Ameol, also runs OK under WINE, but its windows don't scroll correctly. I don't think there's any Linux software to sync my mobile phone or update its firmware, although I'm not sure I'd want to try the latter from within a VM anyway, just in case...

So I decided to try running Windows in a VM under Linux just for occasional access to a handful of Windows apps, without rebooting into my Windows 2000 & Windows 7RC partitions. (Makes mental note: better replace that Win7 one before the RC expires.)

I've always had reservations about running a "full-sized" copy of Windows this way. It seems very wasteful of resources to me. That is, running one full-fat full-function OS under another full-fat OS, just for access to a couple of apps. (Also, you need a licence, if the guest is a modern, commercial product, not some ancient piece of abandonware.)

So I thought I'd try some "legacy" versions of Windows to see how well they worked. I have a fairly good archive here, from Windows 3.1 up to Win7.
