2022-03-22 12:19 pm
On why Apple-haters are every bit as misguided as any fanboy

I really hate it whenever I see someone calling Apple fans fanboys or attacking Apple products as useless junk that only sells because it's fashionable.

Every hater is 100% as ignorant and wrong as any fanatically-loyal fanboy who won't consider anything else.

Let me try to explain why it's toxic.

If someone, or some group, is not willing to make the effort to see why a very successful product/family/brand is successful, then that prevents them from learning any lessons from that success. That means that the outgroup is unlikely ever to challenge that success.

In life it is always good to ask why. If this thing is so big, why? If people love it so much, why?

I use a cheap Chinese Android phone. It's my 3rd. I also have a cheap Chinese Android tablet that I almost never use. But last time I bought a phone, I had a Planet Computers Gemini on order, and I didn't want two new ChiPhones, so I bought a used iPhone. This was a calculated decision: the new model iPhones were out and dropped features I wanted. This meant the previous model was now quite cheap.

I still have that iPhone. It's a 6S+. It's the last model with the features I want: a headphone socket and a physical home button. I like those. It's still updated, and last week I put the latest iOS on it.

It allowed me to judge the 2020s iOS ecosystem. It's good. Most of the things I disliked about iOS 6 (the version on the previous iPhone I had) have been fixed now. Most of the apps can be replaced or customised. It's much more open than it was. The performance is good, the form factor is good: way better than my iPhone 4 was.

I don't use an iPhone as my main phone, because I value things like expansion slots, multiple SIMs, standard ports and standard charging cables, and a customisable OS. I don't really use tablets at all.

But my main home desktop computer is an iMac. I am an expert Windows user and maintainer with 35 years' experience of the platform. I am also a fairly expert Linux user and maintainer with 27 years' experience. I am a full-time Linux professional and have been for nearly a decade... and it is because I am a long-term Windows expert that I choose not to use Windows any more.

My iMac (2015 Retina 27") is the most gorgeous computer I've ever owned. It looks good, it's a joy to use, it is near silent and trouble-free to a degree that any Windows computer can only aspire to be. I don't need expansion slots and so on: I want the vendor to make a good choice, integrate it well and for it to just work and keep just working, and it does.

It is slim, unobtrusive for a large machine, silent, and the picture (and sound) quality is astounding.

I chose it because I have extensive knowledge of building, specifying, benchmarking, reviewing, fixing, supporting, networking, deploying, and recycling old PCs. It is those three-plus decades of expert knowledge of PCs and Windows that led me to spend my own money on a Mac.

So every time someone calls Mac owners fanboys, I know they know less than me and therefore I feel entirely entitled to dump on their ignorance from a great height.

I do not use iDevices. I also do not use Apple laptops. I don't like their keyboards, I don't like their pointing devices, I don't like their hard-to-repair designs. I use old Thinkpads, like most experienced geeks.

But I know why people love them, and if you wish to pronounce edicts about Apple kit, you had better bloody well know your stuff.

I do not recommend them for everyone. Each person has their own needs and should learn and judge appropriately. But I also do not condemn them out of hand.

I have put in an awful lot of Windows boxes over the years. I have lost large potential jobs when I recommended Windows solutions to Mac houses, because it was the best tool for the job. I have also refused large jobs from people who wanted, say, Windows Server or Exchange Server when it *wasn't* the right tool for the job.

It was my job to assess this stuff.

Which equips me well to know that every single time someone decries Apple stuff out of hand, they haven't done the work I have. They don't know, and they can't be bothered to learn.
2021-09-24 06:13 pm

Just because a vendor sells laptops with a Linux on, caveat emptor still applies

Some companies sell laptops with Linux pre-installed. However, in some cases I have read about, there are significant caveats.

Some examples:

  • Dell pre-installed their own drivers for Ubuntu on their laptops, and if you format the machine and reinstall, or reinstall a different distro, you can't get the source of the drivers and build your own.

  • In other instances I've heard of, the machines work fine, but some features are not supported on Linux. Or perhaps a feature only works on the vendor's supported distro and not on other distros. Or perhaps on Linux but not on -- say -- FreeBSD.

  • Or all features work, but you require Windows to update the firmware, or to update peripherals' firmware, such as docking stations.

  • Or the Linux models have slightly different specs, such as a specific WLAN card, and the generic Windows version of the same model is not 100% compatible.


The fact that someone offers one or two specific models with one particular Linux distro as an option is good, sure, but it doesn't automatically mean that that particular machine will be a good choice if you run a different distro, or don't want their pre-installed OS, or you didn't buy it with Linux and put it on later.

Long, long ago, in the mid-1990s, I ran the testing labs for a major UK computer magazine called PC Pro. In about 1996, I proposed, ran, and edited a feature which the editors were very dubious about, but it proved to be a big hit.

The idea was very simple: at that time, all PCs shipped with Windows 95. As 95 was DOS-based at heart and had no concept of user space vs kernel space, drivers were quite easy. You could at a push use DOS drivers, or press into service drivers from Windows for Workgroups, which did terrible hacky direct-hardware-access stuff.

So my feature was: we want machines designed, built and supplied with Windows NT. At the time, that meant NT 4.

NT 4 was not at all like Win95; it just looked superficially like it. It needed its own, new, specially-written drivers for everything. It had built-in drivers for some things, for example EIDE (i.e. PATA) hard disks, but these did not use DMA, only programmed IO. (Not slow, but caused very high CPU usage; no problem on Win9x, but a performance-killer on NT.)

The PC vendors loved and hated us for it.

Some vendors...

  • promised machines then withdrew at the last minute;

  • promised machines, then changed the spec or price;

  • delivered machines with features not working;

  • delivered machines with expensive replacement hardware for built-in parts that didn't work with NT.


And so on. There was a huge delta in performance (while all Win9x machines performed pretty much alike: we could look at the parts list and predict the benchmark scores with an accuracy of about 5%.)

Many vendors didn't know about DMA hard disk drivers.

Some did but didn't know how to fix it. Some fitted SCSI hard disks as a way round this, not knowing that with the motherboard came a floppy disk with a free driver that would enable DMA on EIDE.

Some shipped CD burners that couldn't burn because the burner software didn't work on NT. Some shipped DVD drives which couldn't play movies on NT because the graphics adaptor's video playback acceleration didn't work on NT.

And so on.

Readers *loved* that feature because it separated the wheat from the chaff: it showed the cheap vendors whose PCs mostly worked but they didn't know how to tune them, from the solid vendors who knew what they were doing and how to make stuff work, from the solid vendors who could build a great PC for the task but it doubled the price.

I got a lot of praise for that article, and it was well worth the work.

Some vendors thanked me because it was so educational for them!

Well, Linux on laptops is still a bit like that today. There is a whole pile of stuff that's easy and a given on Windows that is difficult or problematic on Linux and just plain impossible on any other FOSS OS.

  • Switchable GPUs are a problem

  • Proprietary binary graphics drivers are sometimes a problem

  • Displays on docking stations can be tricky


Interactions between these things are even worse; e.g. multiple displays on USB docking stations can be extra-tricky.

For example, with openSUSE Leap I found that with Intel graphics, two screens on a USB-C docking station was easy, but with nVidia Optimus, almost impossible.

With my own Latitude E7270, under KDE I can only drive 1 external screen; if I add 2 as well as the built-in one, then window borders disappear on the laptop screen and so windows can't be moved or resized. But under the lighter-weight Xfce, this is fine & all 3 screens can be used. And that's with an Intel GPU and a proper, PCIe-bus-attached dock.

But every time I un-dock or re-dock, it forgets the screen arrangement and the Display preferences have to be redone every single time.

Most apps can't remember what screen they were on and reopen on a random monitor every time. Possibly entirely offscreen if I have a different screen arrangement.

Even the same screens attached directly to the machine and via the dock confuse it. And I have both a full-size and mini dock. All the ports appear different.
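One tool worth a look for this screen-arrangement amnesia is autorandr, which snapshots an xrandr layout for each docking state and reapplies whichever one matches. A minimal sketch, assuming an X11 session and Ubuntu-family packaging; the profile names are whatever you choose:

sudo apt install -y autorandr   # saves and restores xrandr layouts
autorandr --save docked         # run this while the 3-screen layout is set up
autorandr --save mobile         # run this again when undocked
autorandr --change              # reapply whichever saved profile matches now

I have not tested this on the E7270 setup above, so treat it as a pointer rather than a fix.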

Linux on laptops is still complicated.

Just because things work for 1 person doesn't mean they'll work for everyone. Just because a vendor ships a model with Linux doesn't mean all models work. Just because a vendor ships 1 distro doesn't mean all distros work.

And when the machine is new, you can be fairly sure that there will be serious firmware issues with Linux, because the firmware was only tested against Windows, and sketchily even then. This is the era of Agile and minimum viable products, after all.

So do not take it as read that because Dell ship 2 or 3 models with Ubuntu, all models will Just Work™ with any distro.

I absolutely categorically promise you they don't and they won't.
2021-08-01 04:21 pm

Re-evaluating "In the Beginning... Was the Command Line" 23 years later

A short extract of Neal Stephenson's seminal essay has been doing the rounds on HackerNews.


OK, fine, so let's go with it.

Since my impression is that HN people are [a] xNix fans [b] often quite young therefore [c] have little exposure to other OSes, let me try to unpack what Stephenson was getting at, in context.

The Hole Hawg is a dangerous and overpowered tool for most non-professionals. It is big and heavy. It can take on big tough jobs with ease, but its size and its brute power mean that it is not suitable for precision work. It has relatively few safety features, so that if used inexpertly, it will hurt its operator.

DIY stores are full of smaller, much less powerful tools. This is for good reasons:

  • because for non-professional users, those smaller, less-powerful tools are much safer. A company which sells a tool to untrained users which tends to maim or kill them will go out of business.

  • because smaller, less-powerful tools are better for smaller jobs, that a non-professional might undertake, such as hanging a picture, or putting up some shelves.

  • professionals know to use the right tool for the job. Surgeons do not operate with chainsaws (even though they were invented for surgery). Carpenters do not use axes.


The Hole Hawg, as described, is a clumsy tool that needs things attached to it in order to be used, and even then, you need to know the right way or it will hurt you.

Compare with a domestic drill with a pistol grip that is ready to use out of its case. Modern ones are cordless, increasing their convenience.

One is a tool for someone building a house; the other is a better tool for someone living in that house.

That's the drill part.

Now, let's discuss the OSes talked about in the rest of the 1999 piece from which that's a clipping [PDF].

There are:

  • Linux, before KDE, with no free complete desktop environments yet;

  • Windows, meaning Windows 98SE or NT 4;

  • Classic MacOS – version 9;

  • BeOS.

Stephenson points out that Linux is as powerful as any of them, cheaper, but slower, ugly and unfriendly.

He points out that MacOS 9 is as pretty, friendly, and comprehensible as OSes get, but it doesn't multitask well, it is not very stable, and when a program crashes, your entire computer probably goes with it.

He points out that Windows is overpriced, performs poorly, and is not the best option for anyone – but that everyone runs it and most people just conform with what the mainstream does.

He praises BeOS very highly, which was 100% justified at the time: it was faster than anything else, by a large margin. It had superb multimedia support and integration, better than anything else at the time. It was standards-compliant but not held back by that. For its time, it was a supermodern OS, eliminating tonnes of legacy cruft.

But it didn't have many apps so it was mainly for people in narrow niches, such as music production or maybe video editing.

It was manifestly the future, though. But we're living in the future and it wasn't. This was 23 years ago, nearly a quarter of a century, before KDE and GNOME, before Windows XP, before Mac OS X. You need to know that.

What Unix people interpret as praise here is in fact criticism.

That Unix is very unfriendly and can easily hurt its user. (Think `rm -rf /` here.)

That Unix has a great deal of raw power but maybe more than most people need.

That Unix is, frankly, kinda ugly, and only someone who doesn't care about appearances would choose it.

That something of this brute power is not suitable for fine precision work. (Which it still mostly isn't -- Mac OS X is Unix, tuned and polished, and that's what the creative pros use now.)

Here's a response from 17 years ago.
2021-06-25 01:29 pm

Mankind is a monkey with its hand in a trap, & legacy operating systems are among the bait

[Another recycled mailing list post]

I was asked what options there were for blind people who wish to use Linux.

The answer is simple but fairly depressing: basically every blind person I know personally or via friends of friends who is a computer user, uses Windows or Mac. There is a significant move from Windows to Mac.

Younger computer users -- by which I mean people who started using computers since the 1990s and widespread internet usage, i.e. most of them -- tend to expect graphical user interfaces, menus and so on, and not to be happy with command-line-driven programs.

This applies every bit as much to blind users.

Linux can work very well for blind users if they use the terminal. The Linux shell is the richest and most powerful command-line environment there is or ever has been, and one can accomplish almost anything one wants to do using it.

But it's still a command line, and a notably unfriendly and unhelpful one at that.

In my experience, for a lot of GUI users, that is just too much.

For instance, a decade or so back, the Register ran some articles I wrote on switching to Linux. They were, completely intentionally, what is sometimes today called "opinionated" -- that is, I did not try to present balance or a spread of options. Instead I presented what was, IMHO, the best choices.


Multiple readers complained that I included a handful of commands to type in. "This is why Linux is not usable! This is why it is not ready for the real world! Ordinary people can't do this weird arcane stuff!" And so on.

Probably some of these remarks are still there in the comments pages.

In vain did some others try to reason with them.

But it was 10x quicker to copy-and-paste these commands!
-> No, it's too hard.

He could give GUI steps but it would take pages.
-> Then that's what he should have done, because we don't do this weird terminal nonsense.

But then the article would have been 10x longer and you wouldn't read it.
-> Well then the OS is not ready, it's not suitable for normal people.

If you just copy-and-paste, it's like 3 mouse clicks and you can't make a typing error.
-> But it's still weird and scary and I DON'T LIKE IT.

You can't win.

This is why Linux Mint succeeded -- partly because when Ubuntu introduced its non-Windows-like desktop after Microsoft threatened to sue, Mint hoovered up those users who wanted it Windows-like.

But also because Mint didn't make you install the optional extras. It bundled them, and so what if that makes it illegal to distribute in some countries? It Just Worked out of the box, and it looked familiar, and that won them millions of fans.

Mac OS X has done extremely well partly because users never, ever need to go near a command line, for anything, ever. You can if you want, but you never, ever need to.

If that means you can't move your swap file to another drive, so be it. If that means that a tonne of the classic Unix configuration files are gone, replaced by a networked configuration database, so be it.

Apple is not afraid to break things in order to make something better.

The result has been that Apple became the first trillion-dollar computer company, with hundreds of millions of happy customers.

Linux gives you choices, lets you pick what you want, work the way you want... and despite giving all this away for free, it has won about 1% of the desktop market and basically zero of the tablet and smartphone markets.

Ubuntu made a valiant effort to make a desktop of Mac-like simplicity, and it successfully went from a new entrant in a busy marketplace in 2004 to being the #1 desktop Linux within a decade. It has made virtually no dent on the non-Linux world, though.

After 20 years of this, Google (after *bitter* internal argument) introduced ChromeOS, a Linux which takes away all your choices. It only runs on Google-approved hardware, has no apps, no desktop, no package management, no choices at all. It gives you a dead cheap, virus-proof computer that gets you on the Web.

In less time than Ubuntu took to win about 1% of the Windows market over to Linux, Chromebooks persuaded about one third of the world's laptop-buying market to switch to Linux. More Chromebooks sell every year -- tens of millions -- than Ubuntu has gained users in total since it launched.

What effect has this had on desktop Linux? Zero. None at all. If that is the price of success, they are not willing to pay it. What Google has done is so unspeakably foul, so wrong, so blasphemous, that they don't even talk about it.

What effect has it had on Microsoft? A lot. Cheaper Windows laptops than ever, new low-end editions of Windows, serious efforts to reduce the disk and memory usage...

And little success. The cheap editions lose what makes Windows desirable, and ultra-cheap Windows laptops make poorer, slower Chromebooks than actual Chromebooks.

Apple isn't playing. It makes its money at the high end.

Unfortunately a lot of people are very technologically conservative. Once they find something they like, they will stay with it at all costs.

This attitude is what has kept Microsoft immensely profitable.

A similar one is what has kept Linux as the most successful server OS in the world. It is just a modernised version of a quick and dirty hack of an OS from the 1960s, but it's capable and it's free. "Good enough" is the enemy of better.

There are hundreds of other operating systems out there. I listed 25 non-Linux FOSS OSes in this piece, and yes, FreeDOS was included.

There are dozens that are better in various ways than Unix and Linux.

  • Minix 3 is a better FOSS Unix than Linux: a true microkernel which can cope with parts of itself failing without crashing the computer.

  • Plan 9 is a better UNIX than Unix. Everything really is a file and the network is the computer.

  • Inferno is a better Plan 9 than Plan 9: the network is your computer, with full processor and OS-independence.

  • Plan 9's UI is based on Oberon: an entire mouse-driven OS in 10,000 lines of rigorous, type-safe code, including the compiler and IDE.

  • A2 is the modern descendant of Oberon: real-time capable, a full GUI, multiprocessor-aware, internet- and Web-capable.

(And before anyone snarks at me: they are all niche projects, direly lacking polish and not ready for the mass market. So was Linux until the 21st century. So was Windows until version 3. So was the Mac until at the very least the Mac Plus with a hard disk. None of this in any way invalidates their potential.)

But almost everyone is too invested in the way they know and like to be willing to start over.

So we are trapped, the monkey with its hand stuck in a coconut shell full of rice, even though it can see the grinning hunter coming to kill and eat it.

We are facing catastrophic climate change that will kill most of humanity and most species of life on Earth, this century. To find any solutions, we need better computers that can help us to think better and work out better ways to live, better cleaner technologies, better systems of employment and housing and everything else.

But we can't let go of the single lousy handful of rice that we are clutching. We can't let go of our broken political and economic and military-industrial systems. We can't even let go of our broken 1960s and 1970s computer operating systems.

And every day, the hunter gets closer and his smile gets bigger.
2021-06-15 01:20 am

Did you know that you can 100% legally get & run WordPerfect for free?

In fact, there are two free versions: one for Classic MacOS, made freeware when WordPerfect discontinued Mac support, and a native Linux version, for which Corel offered a free, fully-working, demo version.

But there is a catch – of course: they're both very old and hard to run on a modern computer. I'm here to tell you how to get them and how to install and run them.

WordPerfect came to totally dominate the DOS wordprocessor market, crushing pretty much all competition before it, and even today, some people consider it to be the ultimate word-processor ever created.

Indeed the author of that piece maintains a fan site that will tell you how to download and run WordPerfect for DOS on various modern computers, if you have a legal copy of it. And, of course, if you run Windows, then the program is still very much alive and well and you can buy it from Corel Corp.

Sadly, the DOS version has never been made freeware. It still works – I have it running under PC-DOS 7.1 on an old Core 2 Duo Thinkpad, and it's blindingly fast. It also works fine on dosemu. It is still winning new fans today. Even the cut-down LetterPerfect still cost money. The closest thing to a free version is the plain-text-only WordPerfect Editor.

Edit: I do not know if Corel operates a policy like Microsoft's, where owning a new version allows you to run any older version. It may be worth asking.

But WordPerfect was not, originally, a DOS or a PC program. It was originally developed for a Data General minicomputer, and only later ported to the PC. In its heyday, it also ran on classic MacOS, the Amiga, the Atari ST and more. I recall installing a text-only native Unix version on SCO Xenix 386 for a customer. In theory, this could run on Linux using iBCS2 compatibility.

When Mac OS X loomed on the horizon, WordPerfect Corporation discontinued the Mac version – but when they did so, they made the last ever release, 3.5e, freeware.

[Screenshot: WordPerfect 3.5e for the Mac. (Image source.)]

Of course, this is not a great deal of use unless you have a Mac that can still run Classic – which today means a PowerPC Mac with Mac OS X 10.4 or earlier. However, hope springs eternal: there is a free emulator called SheepShaver that can emulate classic MacOS on Intel-based Macs, and the WPDOS site has a downloadable, ready-to-use instance of the emulator all set up with MacOS 9 and WordPerfect for Mac.

To be legal, of course, you will need to own a copy of MacOS 9 – that, sadly, isn't free. Efforts are afoot to get it to run natively on some of the later PowerMac G4 machines on which Apple disabled booting the classic OS. I must try this on my Mac mini G4 and iBook G4.

The non-Windows version of WordPerfect that lived the longest, though, was the Linux edition. Corel was very keen on Linux. It had its own Linux distro, Corel LinuxOS, which had a very smooth modified KDE and was the first distro to offer graphical screen-resolution setting. Corel made its own ARM-based Linux desktop, the NetWinder, as reviewed in LinuxJournal.

And of course it made WordPerfect available for Linux.

Edit: Sadly, though, Microsoft intervened, as it is wont to do. The programs in WordPerfect Office originally came from different vendors. Some reviews suggested that the slightly different looks and feels of the different apps would be a problem, compared to the more uniform look and feel of MS Office. (The Microsoft apps in Office 4 were very different from one another. Office 95 and Office 97 had a lot of effort put in to make them more alike, and not much new functionality.)

Corel was persuaded to license the MS Office look-and-feel – the button bars and designs – and the macro language (Visual BASIC for Applications) and incorporate them into WordPerfect Office.

But the deal had a cost above the considerable financial one: Corel had to discontinue all its Linux efforts. So it sold off Corel LinuxOS, which became Xandros. It sold its NetWinder hardware, which became independent. It killed off the native Linux WordPerfect, and ended development of WordPerfect Office for Linux, which was a port of the then-current Windows version using Winelib. In fact, Corel contributed quite a lot of code to the WINE Project at this time in order to bring WINE up to a level where it could completely and stably support all of WordPerfect Office.


I'm not sure if the text-only WordPerfect for Unix ever had a native Linux version – I didn't see it if it did – but a full graphical version of WordPerfect 8 was included with Corel LinuxOS and also sold at retail. Corel offered both a free edition with fewer bundled fonts, as well as a paid version.

This is still out there – although most of its mirrors are long gone, the Linux Documentation Project has it. It's not trivial to install a 20-year-old program on a modern distro, but luckily, help is at hand. The XWP8Users site has offered some guidance for many years, but I confess I never got it to work except by installing a very old version of Linux in a VM. For instance, it's easy enough to get it running on Ubuntu 8.04 or 8.10 – Corel LinuxOS was a Debian-derivative, and so is Ubuntu.

The problem is that, even in these days of containers for everything, Ubuntu 8 is older than anything now supports. Linux containers came along rather later than 2008. In fact, in 2011 I predicted that containers were going to be the Next Big Thing. (I was right, too.)

So I've not been able to find any easy way to create an Ubuntu 8.04 container on modern Ubuntu. If anyone knows, or is up for the challenge, do please get in touch!

But the "Ex WP8 Users" site folk have not been idle, and a few months ago, they released a big update to their installation instructions. Now, there's a script, and all you need to do is download the script, grab the WordPerfect 8.0 Downloadable Personal Edition (DPE), put them in a folder together and run the script, and voilá. I tried it on Ubuntu 20.04 and it works a treat so long as I run it as root. I have not seen any reports from anyone else about this, so it might be just my installation.

Read about it and get the script here.

Edit:

For more info, read the WordPerfect for Linux FAQ. This includes instructions on adding new fonts, fixing the MS Word import filter and some other useful info.

From the discussion on Hackernews and the FAQ, I should note that there are terms and conditions attached to the free WP 8.0 DPE. It is only free for personal, non-commercial use, and some people interpret Corel's licence as meaning that although it was a free download, it is not redistributable. This means that if you did not obtain it from Corel's own Linux site (taken down in 2003) or from an authorised re-distributor (such as bundled with SUSE Linux up to 6.1 and early versions of Mandrake Linux, and the "WordPerfect for Linux Bible" hardcopy book, and a few resellers) then it is not properly licensed.

I dispute this: as multiple vendors did re-distribute it and Corel took no action, I consider it fair play. I also very much doubt that anyone will use this in a commercial setting in 2021.

If you are interested in the more complete WordPerfect 8.1, I note that it was included in Corel LinuxOS Deluxe Edition and that this is readily downloaded today, for example from the Internet Archive or from ArchiveOS. However, unless you bought a licence for it, it is not freeware and does not come with a licence for use today.



[Screenshot from r/linux: native WordPerfect 8 for Linux running on Fedora 13. It still works! (Image source.)]

Postscript

If you really want a free full-function word-processor for DOS, which runs very well under DOSemu on Linux, I suggest Microsoft Word 5.5. MS made this freeware at the turn of the century as a free Y2K update for all previous versions of Word for DOS.

How to get it:
Microsoft Word for DOS — it’s FREE

Sadly, MS didn't make the last ever version of Word for DOS free. It only got one more major release, Word 6 for DOS. This has the same menu layout and the same file format as Word 6 for Windows and Word 6 for Mac, and also Word 95 in Office 95 (for Win95 and NT4). It's a little more pleasant to use, but it's not freeware — although if you own a later version of Word, the licence covers previous versions too.

Here is a comparison of the two:
Microsoft Word 5.5 And 6.0 In-depth DOS Review With Pics
2021-04-07 09:01 pm

Installing Linux on an old 2008 MacBook needs some workarounds & fixes

I just finished doing up an old white MacBook from 2008 (note: not MacBook Pro) for Jana's best friend, back in Brno.

I hit quite a few glitches along the way. Partly for my own memory, partly in case anyone else hits them, here are the work-arounds I needed...

BTW, I have left the links visible and in the text so you can see where you're going. This is intentional.

Picking a distribution and desktop

As the machine is maxed out with 4GB of RAM, and only has a fairly feeble Intel GMA X3100 GPU, I went for Xfce as a lightweight desktop that's very configurable and doesn't need hardware OpenGL. (I just wish Xfce had the GNOME 2/Maté facility to lock controls and panels into place.)

Xubuntu (18.10, later upgraded to 19.04) had two peculiar and annoying errors.

  1. On boot, NumLock is always on. This is a serious snag, because a MacBook has no NumLock key, nor a NumLock indicator to tell you, and thus no easy way to turn it off. (Fn+F6 twice worked on Xubuntu 18/19, but not on 20.04.) I found a workaround: https://help.ubuntu.com/community/AppleKeyboard#Numlock_on_Apple_Wireless_Keyboard (and see the generic fallback sketched after this list).

  2. Xubuntu sometimes could not bring the wifi connection up. Rebooting into Mac OS X and then warm-booting into Xubuntu fixed this.
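For the NumLock problem in point 1, the generic fallback is the little numlockx utility, run at login from an autostart entry. A sketch, assuming an X11 session:

sudo apt install -y numlockx   # tiny X11 tool for flipping NumLock
numlockx off                   # turn NumLock off for the current session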

For this and the webcam issue below, I really strongly recommend keeping a bootable Mac OS X partition available and dual-booting between Mac OS X and Linux. OS X Lion (10.7) is the latest this machine can run. Some Macs – e.g. MacBook Pro and iMac models – from around this era can run El Cap (10.11) which is probably still somewhat useful. My girlfriend's MacBook Pro is a 2009 model, just one year younger, and it can run High Sierra (10.13) which still supports the latest Firefox, Chrome, Skype, LibreOffice etc without any problem.

By the way: there are "hacks" to install newer versions of macOS onto older Macs which no longer support them. Colin "dosdude1" Mistr has a good list, here: http://dosdude1.com/software.html

However quite a few of these have serious drawbacks on a machine this old. For instance, my 2008 MB might be able to run Mountain Lion (10.8) but probably nothing newer, and if it did, I would have no graphics acceleration, making the machine slow and maybe unstable. Similarly, my 2011 Mac Mini maxes out at High Sierra. Mojave (10.14) and Catalina (10.15) apparently work well, but Big Sur (11) again has no graphics acceleration and is thus well-nigh unusable. But if you have a newer machine and the reports are that it works well as a hack, this may make it useful again.

I had to reinstall Lion. Due to this, I found that the MacBook will not boot Lion off USB; I had to burn a DVD-R. This worked perfectly first time. There are some instructions here:
https://www.lifewire.com/install-os-x-lion-using-bootable-dvd-2260333

Beware, retail Mac OS X DVDs are dual-layer. If the image is more than about 4.7GB, it will not fit on an ordinary single-layer DVD-R.

If I remember correctly, Lion was the last version of Mac OS X that was not a free download. However, that was 10 years and 8 versions ago, so I hope Apple will forgive me helping you to pirate it. A Bittorrent can be found here.

Incidentally, a vaguely-current browser for Lion is ParrotGeeks Firefox Legacy. I found this made the machine much more useful with Lion, able to access Facebook, Gmail etc. absolutely fine, which the bundled version of Safari cannot do. If you disable all sharing options in OS X and only use Firefox, the machine should be reasonably secure even today. OS X is immune to all Windows malware. Download Firefox Legacy from here:
https://parrotgeek.com/fxlegacy.html

However, saying all that, Linux Mint does not suffer from either of these Xubuntu issues, so I recommend Linux Mint Xfce. I found Mint 20 worked well and the upgrade to Mint 20.1 was quick and seamless.

Installation

If you make a 2nd partition in Disk Utility while you're (re-)installing Mac OS X, you can just reformat that as ext4 in the Linux setup program. This saves messing around with Linux disk partitioning on a UEFI MacBook, which I am warning you is not like doing it on a PC. (I accidentally corrupted the MacBook's hard disk trying to copy a Linux partition onto it with gparted, then remove it using fdisk. That's why I had to reinstall. Again, I strongly recommend doing any partitioning with Mac OS X's Disk Utility, and not with Linux.) All Intel Macs have UEFI, not a BIOS, and so they all use only GPT partitioning, not MBR.

I set aside 48GB for Lion and all the rest for Mint. (Mint defaults to using a swapfile in the root partition, just like Ubuntu. This means that 2 partitions are enough. I was trying to keep things as simple as possible.)

If you use Linux fdisk, or Gparted, to look at the disk from Linux, remember to leave the original Apple EFI System Partition ("ESP") alone and intact. You need that even if you single-boot Linux and nothing else.

Wifi doesn't work out of the box on Mint. You need to connect to the Internet via Ethernet, then open the Software and Drivers settings program and install the Broadcom drivers. That was enough for me; more info is here:
https://askubuntu.com/questions/55868/installing-broadcom-wireless-drivers
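If you would rather do the same from the terminal, over the Ethernet connection, the usual package for the common Broadcom STA chips is the one below; note that which chip the machine has is an assumption, and some need the b43 firmware instead, so check the linked answer first:

sudo apt update
sudo apt install -y bcmwl-kernel-source   # Broadcom STA wifi driver, built via DKMS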

While connected with a cable, I also did a full update:

sudo -s                     # work as root for the rest
apt update                  # refresh the package lists
apt full-upgrade -y         # apply all available updates
apt autoremove --purge -y   # remove orphaned packages, config files and all
apt clean                   # empty the downloaded-package cache


Glitches and gotchas

Startup or shutdown can take ages, or freeze the machine entirely, hanging during shutdown. The fan may spin up during this. The fix is a simple edit to add an extra kernel parameter to GRUB, described here:
https://forums.linuxmint.com/viewtopic.php?t=284960
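For reference, the general mechanics of adding a kernel parameter go like this; the specific parameter to add is in the linked thread, and I am deliberately not reproducing it from memory here:

sudoedit /etc/default/grub   # add the parameter inside the GRUB_CMDLINE_LINUX_DEFAULT="..." quotes
sudo update-grub             # regenerate grub.cfg so the change takes effect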

(Aside: hoping to work around this, I installed kexec-tools for faster reboots. It didn't work. I don't know why not. Perhaps it's something to do with the machine using UEFI, not a BIOS. I also installed the Ubuntu Hardware Enablement stack with its newer kernel, in case that helped, but it didn't. It didn't seem to cause any problems, though, so I left it.)

GRUB shows an error about not being able to find a Mok file, then continues because SecureBoot is disabled. This is non-fatal but there is a fix here:
https://askubuntu.com/questions/1279602/ubuntu-20-04-failed-to-set-moklistrt-invalid-parameter

While troubleshooting the Mok error above, I found that the previous owner of this machine had Fedora on it at some point, and even though I removed and completely reinstalled OS X Lion in a new partition, the UEFI boot entry for Fedora was still there and was still the default. I removed it using the instructions here:
https://www.linuxbabe.com/command-line/how-to-use-linux-efibootmgr-examples
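The short version, in case the link dies (the entry number 0003 is purely illustrative; read the real one off the listing first):

sudo efibootmgr -v           # list all UEFI boot entries, verbosely
sudo efibootmgr -b 0003 -B   # delete entry Boot0003, the stale Fedora one in my case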

NOTE: I suggest you don't set a boot sequence. Just set the ubuntu entry as the default and leave it at that. The Apple firmware very briefly displays a no-bootable-volume icon (a folder with a question mark on it) as it boots. I think this is why, when I used efibootmgr to set Mint as the default then OS X, it never loaded GRUB but went straight into OS X.

(Mint have not renamed their UEFI bootloader; it's still called "ubuntu" from the upstream distro. I believe this means that you cannot dual-boot a UEFI machine with both Ubuntu and Mint, or multiple versions of either. This reflects my general impression that UEFI is a pain in the neck.)

The Apple built-in iSight Webcam requires a firmware file to work under Linux, which you must extract from Mac OS X:
https://help.ubuntu.com/community/MactelSupportTeam/AppleiSight
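The extraction tooling is packaged in Debian and Ubuntu, so the Linux half of the job is roughly the two commands below; the path to AppleUSBVideoSupport depends on where you copied it from the OS X partition, as per the linked page:

sudo apt install -y isight-firmware-tools           # packaged extractor for the iSight firmware
sudo ift-extract -a /path/to/AppleUSBVideoSupport   # pull the firmware out of Apple's driver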

Both Xubuntu and Mint automatically install entries in the GRUB boot menu for Mac OS X. For Lion, there are 2: one for the 32-bit kernel, one for the 64-bit kernel. These will not work. To boot into macOS, hold down the Opt key as the machine powers on; this will display the firmware's graphical boot-device selection screen. The Linux partition is described as "EFI Boot". Click on "macOS" or whatever you called your Mac HD partition. If you want to boot into Linux, just power-cycle it and then leave it alone – the screen goes grey, then black with a flashing cursor, then the GRUB menu appears and you can pick Linux. The Linux partition is not visible from macOS and you can't pick it in the Startup Disk system preference-pane.

Post-install fine-tuning

I also added the ubuntu-restricted-extras package to get some nicer web fonts, a few handy codecs, and so on. Remember when installing this that you must use the cursor keys and Enter/Return to say "yes" to the Microsoft fonts licence agreement. The mouse won't work – use your keyboard. I also added Apple HFS support, so that Linux can easily manipulate the Mac OS X partition.

I installed Google Chrome and Skype, direct from their vendors' download pages. Both of these add their own repositories to the system, so they will automatically update when the OS does. I also installed Zoom, which does not have a repo and so won't get updated. This is an annoyance; we'll have to look at that later if it becomes problematic. I also added VLC because the machine has a DVD drive and this is an easy way to play CDs and DVDs.

As this machine and the old Thinkpad I am sending along with it are intended for kids to use, I installed the educational packages from UbuntuEd. I added those that are recommended for pre-school, primary and secondary schoolchildren, as listed here:
https://discourse.ubuntu.com/t/ubuntu-education-ubuntued/17063

I enabled unattended-upgrades (and set the machine to install updates at shutdown) as described here:
https://www.cyberciti.biz/faq/set-up-automatic-unattended-updates-for-ubuntu-20-04/
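The short version of that guide:

sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # answer Yes to enable the automatic runs

The finer details, such as installing updates at shutdown, live in the files under /etc/apt/apt.conf.d/.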

While testing the webcam, I discovered that Mint doesn't include Cheese, so I installed that, too:
sudo apt install -y ubuntu-restricted-extras hfsprogs vlc cheese
2020-05-01 02:59 am
Not one but 𝘁𝘄𝗼 complete, working, & 𝙪𝙨𝙚𝙛𝙪𝙡 Raspberry Pi projects!

I have several RasPis lying around the place. I sold my π2 when I got a π3, but then that languished largely unused for several years, after the fun interlude of getting it running RISC OS in an old ZX Spectrum case.

Then I bought myself a π3+ in a passive-cooling heatsink/case for Yule 2018, which did get used for some testing at work, and since then, has also been gathering dust. I am sure this is the fate of many a π.

The sad thing about the RasPi is that it's a bit underpowered. Not unreasonable for a £30 computer. The π1 was a single, rather gutless ARMv6 core. The π2 at least had 4 cores, but still weedy ones. The π3 had faster cores and wifi, but all still only have 1GB of non-upgradable RAM. They're not really up to running a full Linux desktop. What's worse, the Ethernet and wifi are USB devices, sharing the single USB2 bus with any external storage – badly throttling the bandwidth for server stuff. The π3+ is a bit less gutless, but all the other limitations apply – and it needs more power and some form of cooling.

But then a chap on FesseBouc offered an official π touchscreen, used and a bit cheaper than new. That gave me an idea. I listen to a lot of BBC 6music – I am right now, in fact – but it needs a computer. Czech radio seems to mainly play a lot of bland pop which isn't my thing, and of course I can't understand a useful amount of Czech yet. It's at about the level of my Swedish in 1993 or so: if I listen intently and concentrate very hard, I may be able to work out the subject being discussed, but not follow the discussion.

But I don't want to leave a laptop on 24×7 and I definitely don't want a big computer with a separate screen, keyboard and mouse doing it. What I want is something the size of a radio but which can connect to wifi and stream music to simple old-fashioned wired speakers, without listening to me. I most definitely do not want a spy basestation for a dot-com listening to my home, thank you.
So I bought the touchscreen, connected it to my old π3, powered them both off a couple of old phone chargers, bunged in a spare µSD card, and started playing with software. I know where I am with software.

First I tried OSMC. It worked, detected and used the touchscreen, and could connect to my wifi... but it doesn't directly support streaming audio, as far as I can tell, and I could not work out how to install add-ins, nor how to update the underlying Linux.

I had a look at LibreElec but it looked very similar. While I don't really want the bloat of an entire general-purpose Linux distro, I just want this to work, and I had 8GB to play with, which is plenty.

So next I tried XBian. This is a cut-down Debian, running on Btrfs, which boots straight to Kodi. Kodi used to be called XBox Media Centre, and that's where I first met it – I softmodded an old original black XBox that my friend Dop gave me and put XBMC on it. It streamed movies off my server and played DVDs through my TV set, which is all I needed.

XBian felt a lot more familiar. It has a settings page through which I could update the underlying OS. It worked with the touchscreen out of the box. It has a UI for connecting to wifi. It too didn't include streaming Internet radio support, but it had a working add-ons browser, in which I found both BBC iPlayer and Internet Radio extensions.

Soon I was in business. It connected to wifi, it was operable with the touchscreen, connected to some old Altec Lansing speakers I had lying around. So I bought a case from Mironet, my friendly local electronics store. (There is a veritable Aladdin's Cave even closer to my office, GM electronic – but I'm afraid they're not very friendly. Sort of the opposite, in fact.)

I assembled the touchscreen and π3 into my case, and hit a problem. There is only one available opening for a µUSB lead, but the screen needs its own. Some Googling later, it emerges that you can power the touchscreen from the π's GPIO pins, but I didn't have the cables.

So off to GME it was, and some tricky negotiations later, I bought a strip of a dozen jumper cables. Three of them got me in business, but since it was £1 for all of them, I can't really complain about the wastage.

So now, there's a little compact unit in my bedroom which plays the radio whenever I want, on the power usage of a lightbulb. No fans, no extra cooling, nothing. I've had to use my single Official Raspberry Pi PSU brick, as all my phone chargers gave me the lightning-bolt icon undervoltage warning.

This emboldened me for Project 2.

Some years ago, Morgan's had a cheap offer on 2TB hard disks. I bought all their remaining stock, 5 mismatched drives. One went into an external case for my Mac mini and later died. The other four were in a box, pending installation into my old HP Microserver G1, which currently has 4×300GB drives in it, in a Linux software RAID controlled by Ubuntu. (Thanks to hobnobs!) However, this only has 2GB of RAM, and I figured that wasn't enough for a 5TB RAID. I may have accidentally killed it trying to fit more RAM, and the job of troubleshooting and fixing it has been waiting for, um, a couple of years now.

Meanwhile, the iMac's 1TB Fusion Drive was at 97.5% full and I don't have any drives big enough to back up everything on it.

I slowly and reluctantly conceded to myself that it might be quicker and easier to build a new server than fix and upgrade the old one.

The Raspberry Pi 4 is quite a different beast. Apart from a beefier quad-core of 64-bit Cortex-A72s, it has 2GB and 4GB RAM options, and it has much faster I/O. Its wifi and Ethernet are directly attached to the CPU, not on the USB bus, and there are two of those buses: the old USB2 one (480Mb/s) and a new, separate 5Gb/s USB3 one. This is useful power. It can also drive dual monitors via twin µHDMI ports.

But the π4 runs quite hot. The Flirc case my π3+ is in is only meant for home theatre stuff. A laden π4 needs something beefier, and sadly, my local mail-order electronics place, Alza, doesn't offer anything that appealed. I found the Maouii case on Amazon Germany and that fit the bill. (It also gave me a good excuse to buy the entire Luna trilogy by Ian McDonald in order to qualify for free shipping.)

So, from Alza I ordered a 4GB π4 and 4 USB3 desktop drive cases. From Mall CZ I ordered a USB3 hub with a fairly healthy 2.5A power output, thinking this would be enough to power a headless π4. USB C cables and µSD cards I have, and I figured all the USB 3 cables would come with the enclosures, which they did. In these quarantine lockdown times, the companies deliver to electronically-controlled mailboxes in shopping malls and so on, where you enter a code and pick up your package without ever interacting with a potentially-infectious human being.

It was all with me within days.

Now, I place some trust in those techies that I know who are more skilled and experienced than I, especially if they are jaded, cynical ones. File systems are one of the few significant differentiating factors between modern Linux server distros. Unfortunately, many years ago, the kernel maintainers refused to integrate EVMS and picked the far simpler LVM instead. This has left something of a gap, with enterprise UNIXes still having more sophisticated storage tech than Linux. On the upside, though, this is driving differentiation.

SUSE favours Btrfs, although there's less enthusiasm outside the company. It is stable, but even now, you're recommended not to try to repair a Btrfs filesystem, and it can't give a reliable answer to the 'df' command – in other words, the basic question "how much free space have I got left?"

I love SUSE's YaST admin tool, and for other server stuff, especially on x86, I would probably recommend it, but it's not ideal for what I wanted in this role. Its support for the π4 is a bit preliminary so far, too.

Red Hat has officially deprecated Btrfs, but that left it with the problem that LVM with filesystems placed on top is a complex solution which still leaves something lacking, so with its typical galloping NIH syndrome, it is in the process of inventing an entirely new disk management layer, Stratis. Stratis integrates SGI's tried-and-tested, now-FOSS XFS filesystem with LVM into a unified disk management system.

Yeah, no thanks. Not just yet. I am not fond of Fedora, anyway. No stable or LTS versions (because that's RHEL's raison d'être). CentOS is a different beast, and also not really my thing. And Fedora is also a bit more bleeding-edge than I like. I do not consider Fedora a server OS; it's more of a rolling tech testbed for RHEL.

Despite some dissenting opinions, the prevailing opinion seems to be that Sun's ZFS is the current state of the art. Ubuntu has decided to go with ZFS, although its licence is incompatible with the Linux kernel's GPL. Ubuntu is, to be honest, my preferred distro for desktop stuff, and I've run it on πs before. It works well – better than Fedora, which like Debian eschews non-free drivers completely. It doesn't have Raspbian's hardware acceleration, but then everyone uses Raspbian on the π, so that's the obvious target.

So, Ubuntu Server. Modern versions include ZFS built-in.

I tested this in a VM. Ubuntu Server 18.04 on its own ext4 boot drive... then add a bunch of 20GB drives to the VM... then tell it to create a RAIDZ. One very short time later, it has not only partitioned my drives, created an array, and formatted it, it's also created a mount point and mounted the new array on it. In seconds.

This is quite impressive and far more automatic than the many manual steps involved in doing this with the old Linux built-in 'mdraid' subsystem, as used in my old home server.
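For the record, the whole dance boiled down to something like this; the pool name and device names are illustrative, and zpool will cheerfully eat the wrong disks, so double-check them:

sudo apt install -y zfsutils-linux   # ZFS userland tools on Ubuntu
sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool status tank                    # pool layout and per-device health
df -h /tank                          # already formatted and already mounted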

Conveniently – it was totally unplanned – by the time all my π4 bits were here, a new Ubuntu LTS was out, 20.04.

I installed all my drives into their new enclosures, plugged them one by one into one of the iMac's USB3 ports, and checked that they registered as 2TB drives. They did. Result. Oh, and yes, the cables were in the boxes. USB3 cables are entertainingly fat with shielding, but 5Gb/s is not to be sniffed at.

So, I put my new π4 in its case, put the latest Ubuntu Server on a µSD card – and hit a problem. I can't connect a display. I only have one HDMI monitor and nothing that will connect to a π4's micro-HDMI ports. And I don't really want to try to set this all up headless.

So off to Alza's actual physical shop I trogged to buy a µHDMI to HDMI convertor. Purchasing under quarantine is tricky, so it took a while, but I got it.

Fired up the π4 and it ran fine. No undervoltage warning running off the hub. So I hooked up all the drives, and sure enough, all were visible to the 'lsusb' command.

I referred to various howtos. Hmm. Apparently, you need to put partition records on them. Odd; I thought ZFS subsumed partitioning. Oh, well. I put an empty GUID disklabel on each drive. Then I added them to a RAIDZ, ZFS' equivalent of a RAID5 array.
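Putting the blank labels on is a one-liner per drive; device names are illustrative again, and this destroys whatever is on the disk:

for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
  sudo parted -s "$d" mklabel gpt   # write an empty GUID partition table
done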

Well, it wasn't as quick as in a VM, but only a minute or so of heavy disk activity later, the array was created and formatted, its mountpoint created, and it was online. This is quite impressive stuff.



Then came the usual joys of Linux' fairly poor subsystem integration: Samba is a separate, different program, Samba user accounts are not Linux user accounts so passwords are different. Mounted filesystems inherit the permissions of their mountpoint. Macs still favour the old Apple protocol, so you need Netatalk as well. It, of course, doesn't integrate with Samba. NFS has two alternatives, and neither, of course, integrate with either Samba or Netatalk. There are good reasons NT caught on, which Apple successfully imitated and even exceeded in Mac OS X – and the Linux world remains as blindly indifferent to them as it has for a quarter of a century.
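To give a flavour of it: the Samba password database is separate from the system one, so every user has to be added twice, and the share itself is a hand-edited stanza on top. A sketch, with the user and share names being illustrative:

sudo apt install -y samba
sudo smbpasswd -a liam        # the account must already exist as a Linux user
sudo tee -a /etc/samba/smb.conf <<'EOF'
[tank]
   path = /tank
   read only = no
   valid users = liam
EOF
sudo systemctl restart smbd   # pick up the new share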

But some hours of swearing later, it all works. I can connect from Windows, Linux or Mac. It's all passively cooled, so it runs almost completely silently. It does need five power sockets, which is a snag, and there's a bit of cable spaghetti, but for an outlay of about £150 I have a running server that can sustain write speeds of about a gigabit per second to the array.

I've put my old friend Webmin on it for a friendly web GUI.


So there you are.

While the π3 is a little bit underpowered, for a touchscreen Internet radio, it's great, and I'm very pleased with the result.

But the π4 is very different. It's a thoroughly capable little machine, perfectly usable as a general-purpose desktop PC, or as a server with quite decent bandwidth.

No, the setup has not been a beginner-friendly process. Apparently OpenMediaVault has versions for some single-board computers, including the π3, but not for the π4 yet. I am sure wider support will come.

But overall I'm impressed with how easy this was, without vast expert knowledge, and I'm delighted with the result. I will keep you posted on how it works longer-term.
2019-02-10 11:48 am

Did Ubuntu switch to GNOME prematurely?

A response to a Reddit question.

I can only agree with you. I have blogged and commented enough about this that I fear I am rather unpopular with the GNOME developer team these days. :-(

The direct reason for the sale is that in founder Mark Shuttleworth's view, Ubuntu's bug #1 has been closed. Windows is no longer the dominant OS. There are many more Linux server instances, and while macOS dominates the high-end laptop segment, in terms of user-facing OSes, Android is now dominant, and it is based on the Linux kernel.

His job is done. He has helped to make Linux far more popular and mainstream than it was. Due to Ubuntu being (fairly inarguably, I'd say) the best desktop distro for quite a few years, all the other Linux vendors [disclaimer: including my employer] switched away from desktop distros and over to server distros, which is where the money is. The leading desktop is arguably now Mint, then the various Ubuntu flavours. Linux is now mainstream and high-quality desktop Linuxes are far more popular than ever and they're all freeware.

Shuttleworth used an all-FOSS stack to build Thawte. When he sold it to Verisign in 1999, he made enough that he'd never need to work again. Ubuntu was a way for Shuttleworth to do something for the Linux and FOSS world in return.

It's done.

Thus, Shuttleworth is preparing Ubuntu for an IPO and flotation on the public stock market. As part of this, the company asked the biggest techie community what they'd like to see happen: https://news.ycombinator.com/item?id=14002821

The results were resounding: drop all the Ubuntu-only projects and switch back to upstream ones. Sadly, this mostly means Red Hat-backed projects, as Red Hat is the upstream developer of systemd, PulseAudio, GNOME 3, Flatpak and much more.

Personally I am interested in non-Windows-like desktops. I think the fragmentation in the Linux desktop market has been immensely harmful, has destroyed the fragile unity (pun intended) that there was in the free Unix world, and the finger of blame can be firmly pointed at Microsoft, which did this intentionally. I wrote about this here: https://www.theregister.co.uk/Print/2013/06/03/thank_microsoft_for_linux_desktop_fail/

The Unity desktop came out of that, and that was a good thing. I never liked GNOME 2 much and I don't use Maté. But Unity was a bit of a lash-up behind the scenes, apparently, based on a series of Compiz plugins. It was not super stable and it was hard to maintain. The unsuccessful Unity-2D fork was killed prematurely (IMHO), whereas Unity 8 (the merged touchscreen/desktop version) was badly late.

There were undeniably problems with the development approach. Ubuntu has always faced problems with Red Hat, the 800lb gorilla of FOSS. The only way to work with a RH-based project is to take it and do as you're told. Shuttleworth has written about this.
https://www.markshuttleworth.com/archives/654
(See the links in that post too.)

Also, some contemporary analysis: https://www.osnews.com/story/24510/shuttleworth-seigo-gnomes-not-collaborating/

I am definitely not claiming that Ubuntu always does everything right! Even with the problems of working with GNOME, I suspect that Mir was a big mistake and that Ubuntu should have gone with Wayland.

Cinnamon seems to be sticking rather closer to the upstream GNOME base for its different desktop. Perhaps Unity should have been more closely based on GNOME 3 tech, in the same way.

But IMHO, Ubuntu was doing terrifically important work with Unity 8, and all that has come to nothing. Now the only real convergence efforts are the rather half-hearted KDE touchscreen work and the ChromeOS-on-tablet work from Google, which isn't all-FOSS anyway TTBOMK.

I am terribly disappointed they surrendered. They were so close.

I entirely agree with you: Unity was _the_ best Linux desktop, bar none. A lot of the hate was from people who never learned to use it properly. I have seen it castigated for lacking features that were in fact basic, built-in functionality which people simply never found out how to use.

In one way, Unity reminded me of OS/2 2.0: "a better DOS than DOS, a better Windows than Windows." And it *was*! Unity was a better Mac OS X desktop than Mac OS X. I'm typing on a Mac now and there are plenty of things it can't do that Unity could. Better mouse actions. *Far* better keyboard controls.

I hope that the FOSS forks do eventually deliver.

Meantime, I reluctantly switched to Xfce. It's fine, it works, it's fast and simple, but it lacks functionality I really want.
liam_on_linux: (Default)
2018-03-03 12:09 am

Containers and the future of Unix

A lot of my speculations concern the future of new, alternative operating systems which could escape from old-fashioned, sometimes ill-conceived models and languages.

But I do spend some time thinking about what is happening with Linux, with FOSS Unix in general, and especially with container technologies, something I deal with in my current and recent day-jobs more and more.

One answer to legacy nastiness for years now has been to virtualise it. Today, that's changing to "containerise it".

There is a ton of cruft in Linux and in the BSDs and so on which nobody is ever going to fix. It's too hard, it would break too much stuff... but most of all, there is no commercial pressure to do it, so it's not going to happen.

I can certainly see potentialities. There are parallels that run quite deep.

For instance, consider a few unrelated technologies:

- FreeBSD jails and Solaris Zones. Start here.

They indirectly evolved into LXC, the container mechanism in the Linux kernel, which gets relatively little attention. (Docker has critical mass, systemd-nspawn is trendier in some niches, CRI-O is gaining a little traction.)

Docker now means Linux containers are a known thing, already widely-used with money being poured into their R&D.

Joyent, a company with some vision, saw a chance here. It took Illumos, the FOSS fork of Solaris, and revived and modernised some long-dormant Sun code: the lx brand, Solaris's Linux-emulation layer. Joyent SmartOS is therefore a tiny Solaris derivative -- it runs entirely from RAM, booted off a USB stick, but can efficiently scale to hundreds of CPU cores and many terabytes of RAM -- and it can natively run Docker Linux containers.

You don't need to run a hypervisor. (It is a hypervisor, if you want that.) You don't need to partition the machine. You don't even need a single copy of Linux on it. You have a rack of x86-64 boxes running SmartOS, and you can throw tens of thousands of Docker containers at them.
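From the client end, such a box is just another Docker endpoint. Roughly like this -- the hostname and port are placeholders, and this is the generic remote-Docker idiom rather than Joyent's specific tooling:

# Point a stock Docker client at a remote engine; to the client, a
# SmartOS host is just another Docker API endpoint
export DOCKER_HOST=tcp://docker.example.com:2376
docker run -d --name web -p 80:80 nginx
docker ps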

It gives capacities and scalability that only IBM mainframes can approach.

Now, if one small company can do this with some long-unmaintained code, then consider what else could be done with it.

 - Want more resilient hosts for long-lived containers? Put some work into Minix 3 until it can efficiently run Linux containers. A proper fully-modular-all-the-way-down microkernel which can detect when its constituent in-memory services fail and restart them. It can in principle even undergo binary version upgrades, piecemeal, on a running system. This is stuff Linux vendors can't even dream of. It would, for a start, make quite a lot of the Spectre and Meltdown vulnerabilities moot, because there's no shared kernel memory space.

Unlike Darwin and xnu, it's a proper microkernel -- no huge in-kernel servers for anything here. (Don't even try to claim WinNT is a microkernel or I will slap you.) Unlike the GNU HURD, it's here, it works, and it's very widely used for real workloads. And it's 100% FOSS.

 - Want a flexible cluster host which can migrate containers around a globe-spanning virtual datacenter?

Put some work into Plan 9's APE, its ANSI/POSIX compatibility environment. Again, make it capable of running Linux containers. To Plan 9 they'd just be processes, and it was built to fling processes around a network efficiently.

I have looked into container-hosting Linux distros for several different dayjobs. I can't give details, but they scare me. One I've tried has a min spec of 8GB of RAM and 40GB of disk per cluster node, and a minimum of 3-4 nodes.

This is not small efficient tech. But it could be; SmartOS shows that.

 - Hell, more down to earth -- many old Linux hands are deserting to FreeBSD in disgust over systemd. FreeBSD already has containers and a quite current Linux runtime, the Linuxulator. It would be relatively easy to put them together and have FreeBSD host Linux containers, but the sort of people who dislike systemd also dislike containers.
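A rough sketch of the ingredients on a stock FreeBSD box -- the module, rc.conf knob and port are the standard ones, but treat this as illustrative rather than a tested recipe:

# Enable the Linuxulator: load the 64-bit Linux ABI module now
# and at every boot
kldload linux64
sysrc linux_enable="YES"
# Install a CentOS-based Linux userland under /compat/linux
pkg install emulators/linux_base-c7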

Not everything would run under containers, sure. But they're suitable for far bigger workloads than is generally assumed. You can migrate a whole complex Linux server into a container -- P2V migration, as was once common when moving to hypervisors. I've talked to people doing it.

Ubuntu LXD is specifically intended for this, because Ubuntu isn't certified for SAP, only SUSE is, so Ubuntu wants to be able to run SLE userlands. Ditto some RHEL-only stuff.
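LXD's workflow reflects that: it deals in system containers -- a whole distro userland with an init, not a single app. Something like this, where the image alias is just an example:

# Launch a full CentOS 7 userland as a system container,
# then get a root shell inside it
lxc launch images:centos/7 legacy-app
lxc exec legacy-app -- bash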

But what if it doesn't work with containers at all?

Well, as parallels...

[1] A lot of Win32 stuff got abandoned with the move to WinXP. People liked the new OS enough that stuff that didn't work got left behind.

[2] Apple formalised this with Carbon after the NeXT acquisition. The classic MacOS APIs were not clean enough to suit a pre-emptive multitasking OS, so Apple stripped them down to a subset and said: "if you use this subset, your app can be ported. If you don't, it can't."

Over the next few years, the old OS was forcibly phased out -- there is a generation of late-era gigahertz-class G4 and G5 PowerMacs that refuses to boot classic MacOS; Apple tweaked the firmware to prevent it. You _had_ to run OS X on them, and although versions up to 10.4 could run classic MacOS in the Classic environment, not everything worked in it.

So the developers had to migrate. And they did, because although it was a lot of work, they wanted to keep selling software.

It worked so well that in the end the migration from PowerPC to Intel was less painful than the one from classic MacOS to OS X.

So maybe Linux workloads that won't work in containers will just go away, replaced by ones that will -- and apps that play nice in a container don't care what distro they're on, and that means that they will run on top of SmartOS and FreeBSD and maybe in time Minix 3 or Plan 9.

And so we'll get that newer, cleaner, reworked Unix after all, but not by any incremental process, by a quite dramatic big-bang approach.

And if there comes a point when it's desirable to run these alternative OSes for some users, because they provide useful features in nice handy easy ways, well, maybe they'll gain traction.

And if that happened, then maybe some people will investigate native ports instead of containerised Linux versions, and gain some edge, and suddenly the Unix world will be blown wide open again.

Might happen. Might not. It's not what I am really interested in, TBH. But it's possible -- existing products, shipping for a few years, show that.
liam_on_linux: (Default)
2015-01-30 06:14 pm
Entry tags:

Are Macs still better than PCs, or isn't there any real difference any more?

They're a bit better in some ways. It's somewhat marginal now.

OK. Position statement up front.

Anyone who works in computers and only knows one platform is clueless. You need cross-platform knowledge and experience to actually be able to assess strengths, weaknesses, etc.

Most people in IT this century only know Windows and have only known Windows. This means that the majority of the IT trade are, by definition, clueless.

There is little real cross-platform experience any more, because so few platforms are left. Today, it's Windows NT or Unix, running on x86 or ARM. 2 families of OS, 2 families of processor. That is not diversity.

So, only olde phartes -- yeah, like me -- who remember the 1970s and 1980s, when diversity in computing meant something, have any really useful insight. But the snag with asking olde phartes is that we're jaded and curmudgeonly and hate everything.

So, this being so...

The Mac's OS design is better and cleaner, but that's only to the extent of saying New York City's design is better and cleaner than London's. Neither is good, but one is marginally more logical and systematic than the other.

The desktop is much simpler and cleaner and prettier.

App installation and removal is easier and doesn't involve running untrusted binaries from 3rd parties, which is such a hallmark of Windows that Windows-only types think it is normal and natural and do not see it for the howling screaming horror abomination that it actually is. Indeed, put Windows types in front of Linux and they try to download and run binaries, then whinge when it doesn't work. See comment about cluelessness above.

(One of the few places where Linux is genuinely ahead -- far ahead -- today is software installation and removal.)
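For instance, on any Debian-family distro, the whole lifecycle is two commands against the distro's signed repositories. The package name here is just an example:

sudo apt-get install vlc   # fetched and verified from the repository
sudo apt-get remove vlc    # cleanly removed again (--purge drops its config too)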

Mac apps are fewer in number but higher in quality.

The Mac tradition of relative simplicity has been merged with the Unix philosophy of "no news is good news". Macs don't tell you when things work. They only warn you when things don't work. This is a huge conceptual difference from the VMS/Windows philosophy, and so, typically, this goes totally unnoticed by Windows types.

Go from a Mac to Windows and what you see is that Windows is constantly nagging you. Update this. Update that. Ooh you've plugged a device in. Ooh, you removed it. Hey it's back but on a different port, I need a new driver. Oh the network's gone. No hang on it's back. Hey, where's the printer? You have a printer! Did you know you have an HP printer? Would you like to buy HP ink?

Macs don't do this. Occasionally one coughs discreetly and asks if you know that something bad happened.

PC users are used to it and filter it out.

Also, PC OSes and apps are all licensed and copy-protected. Everything has to be verified and approved. Macs just trust you, mostly.

Both are reliable, mostly. Both just work now, mostly. Both rarely fail, try to recover fairly gracefully and don't throw cryptic blue-screens at you. That difference is gone.

But because of Windows' terrible design, and the mistakes the marketing lizards made the engineers put in, it's howlingly insecure and vastly prone to malware. This is because it was implemented badly.

Windows apologists -- see cluelessness -- think it's fine and it's just because it dominates the market. This is because they are clueless and don't know how things should be done. Ignore them. They are loud; some will whine about this. They are wrong but not bright enough to know it. Ignore them.

You need antimalware on Windows. You don't on anything else. Antimalware makes computers slower. So, Windows is slower. Take a Windows PC, nuke it, put Linux on it and it feels a bit quicker.

Only a bit, 'cos Linux too is a vile mess of 1970s crap. If it still worked, you could put BeOS on it and discover -- holy shit, wow, lookit that -- this thing is really fsckin' fast and powerful, but no modern OS lets you feel it. It's buried under 5GB of layered legacy crap.

(Another good example was RISC OS. Today, millions of people are playing with Raspberry Pis, a really crappy underpowered £25 tiny computer that runs Linux very poorly. Raspberry Pis have ARM processors. The ARM processor's original native OS, RISC OS, still exists. Put RISC OS on a Raspberry Pi and suddenly it's a very fast, powerful, responsive computer. Swap the memory card for Linux and it crawls like a one-legged dog again. This is the difference between an efficient OS and an inefficient one. The snag is that RISC OS is horribly obsolete now so it's not much use, but it does demonstrate the efficiency of 1980s OSes compared to 1960s/1970s ones with a few decades of crap layered on top.)

Windows can be sort of all right, if you don't expect much, are savvy, careful and smart, and really need some proprietary apps.

If you just want the Interwebs and a bit of fun, it's a waste of time and effort, but Windows people think that there's nothing else (see clueless) and so it survives.

Meanwhile, people are buying smartphones and Chromebooks, which are good enough if you haven't drunk the Kool-Aid.

But really, they're all a bit shit, it's just that Windows is a bit shittier but 99% of computers run it and 99% of computer fettlers don't know anything else.

Once, before Windows NT, but after Unix killed the Real Computers, Unix was the only real game in town for serious workstation users.

Back then, a smart man wrote:

“I liken starting one’s computing career with Unix, say as an undergraduate, to being born in East Africa. It is intolerably hot, your body is covered with lice and flies, you are malnourished and you suffer from numerous curable diseases. But, as far as young East Africans can tell, this is simply the natural condition and they live within it. By the time they find out differently, it is too late. They already think that the writing of shell scripts is a natural act.” — Ken Pier, Xerox PARC
That was 30y ago. Now, Windows is like that. Unix is the same but you have air-conditioning and some shots and all the Big Macs you can eat.

It's a horrid vile shitty mess, but basically there's no choice any more. You just get to choose the flavour of shit you will roll in. Some stink slightly less.
liam_on_linux: (Default)
2014-11-02 03:52 am

I've finally tried going through the Arch way.

I have been meaning to try Arch Linux for years.

As a former RPM user, once I finally made the switch to Ubuntu -- more or less exactly 10y ago -- I became so wedded to APT that I hesitate with non-APT distros.

My spare system on this machine is Crunchbang, which I like a lot, but is a bit too Spartan in its simplicity for me. Crunchbang is based on the stable version of Debian, which gives it one big advantage on my 2007-era built-for-Windows-Vista hardware: it uses a version of X.org so old that the ATI fglrx drivers for my Radeon HD 3470 GPU still work, which they haven't done on Ubuntu for 2 years now.

But there was a spare partition or 2 waiting. I tried Elementary -- very pretty, but the Mac OS X-ness is just skin-deep; it's GNOME 3, very simplified. No ta. Deepin is too slow and doesn't really offer anything I want -- again, it's a modification of GNOME 3, albeit an interesting one. Same goes for Zorin-OS. I've tried Bodhi before -- it's interesting, but not really pretty to my eyes. (Its Enlightenment desktop is all about eye-candy; as a desktop, it's just another Windows Explorer rip-off. If it shipped with a theme that made it look like one of those shiny floaty spinny movie-computer UIs, I might go for it, but it doesn't, it's all lairy glare that only a teenage metalhead could love.) Fedora won't even install; my partitioning is too complex for its installer to understand. SUSE is a bit bloaty for my tastes, and I don't like KDE (or GNOME 3), which also rules out PCLinuxOS and Deepin.

So Arch was the next logical candidate...

I've been a bit sheepish since an Imaginary Internet Friend, Ric Moore, tried it with considerable success a month or two ago. (As I write, he's in hospital having a foot amputated. I've been thinking of him tonight & I hope he's doing well.)

So I have finally done it. Downloaded it, burned it to a CD -- yes, it's that small -- installed it on one of my spare partitions and I am in business.

After a bit of effort and Googling, I found a simple walkthrough, used it, got installed -- and then discovered that Muktware only tells you about KDE, and assumes you'll use that and nothing else. I don't care for KDE in its modern versions, so I went with Xfce.

Getting a DM working was non-trivial but now I have LXDM -- the 3rd I tried -- and it works. I have an XFCE4 desktop with the "goodies" extras, Firefox, a working Internet connection via Ethernet, and not much else.
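For the record, the working combination boiled down to something like this -- package names as I remember them on Arch at the time, so treat it as a sketch rather than a copy-and-paste recipe:

# As root: install Xfce plus the extras and a display manager,
# then enable the DM at boot
pacman -S xfce4 xfce4-goodies lxdm
systemctl enable lxdm.service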

It does feel very quick, though, I must give it that. Very snappy. I guess now begins the process of hunting down all the other apps that I use until I've replicated all my basic toolset.

The install was a bit fiddly, much more manual than anything I've done since the mid-1990s, but it all went on very smoothly, considering that it's a lot of hand-entered commands, most of which do not seem to depend much on your particular config.
liam_on_linux: (Default)
2014-06-27 05:07 pm
Entry tags:

Actual civilised modern text editors for the Linux console [tech blog post, by me]

Long time, no post. This is because since April, I have started a new job where I actually get paid to write technical stuff for a living.

(Hint - I'm going to have to change that usericon...)

Anyway, this subject came up in conversation with my colleague Pavel recently. In my department, there are some Vi[m] advocates, at least one Emacs user in the wild (approach with caution), and when I said I used Gedit from choice, I got pitying looks. :¬)

Which gave me a chance to have my usual rant about the deep and abiding nastiness of both Vi and Emacs, which did at least provide some amusement. It also led Pavel to ask, quite reasonably, what I did want from a console/shell text editor that wasn't provided by, say, Joe, Nano or Pico.

I said CUA -- IBM's Common User Access, the standard DOS/Windows menu layout and keyboard shortcuts -- and then had to explain what CUA was, and pointed at SETedit, which I've linked to before. Sadly, it hasn't been updated in a while. Packages are only for old versions of popular distros.
http://setedit.sourceforge.net/

This led him to look thoughtful and go off and do some digging. He came back with some gems.

Firstly, there's the rather fun Text Editors Wiki, which is not as comprehensive as it might be but has a lot of interesting reading.
http://texteditors.org/cgi-bin/wiki.pl

Next, he pointed me at XWPE. It certainly looks the part, but sadly the project seems to have died. I did get it running on Fedora 20 by installing some extra libraries and symlinking them to names XWPE wanted, but it crashes very readily.
http://www.identicalsoftware.com/xwpe/

After some more hunting, he also found eFTE, enhanced FTE. I rather like this. Not all the shortcuts do what I expect, but it works well nonetheless.
http://sourceforge.net/projects/efte/

Incidentally, eFTE seems to be a fork of a no-longer-maintained older editor, FTE:
http://fte.sourceforge.net/

More recently, I've also discovered Tilde. It is currently maintained and has recent packages available. It looks a bit richer than eFTE, but sadly, the Alt key doesn't work in a window. Clearly this is a known issue as there's a workaround using Esc instead, but it makes it 2 keystrokes rather than one with a modifier.
http://os.ghalkes.nl/tilde/
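If your distro carries it -- Debian and its descendants seem to -- then it's the usual one-liner, assuming the package is simply named tilde:

sudo apt-get install tilde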

I remain surprised that these things are obscure & little-known. I'd have thought that given how many people are moving from other OSes to Linux, a lot more MICROS~1 émigrés would have wanted such tools.
liam_on_linux: (Default)
2013-12-16 02:47 pm

Using a PC via a screenreader for a sighted person - day 2.

Facebook readers may have noted my post yesterday, when I mentioned that I was trying to resurrect an old notebook with a dead screen by using a screenreader. I commented:

"Just spent an hour trying to update a fresh install of Windows XP SP3 on a PC with no screen, using speech alone. Haven't felt so lost since 1988. It's currently on 100 of 125, though, which is a sort of success..."

Well, I've spent a little more time on it today.

According to http://update.microsoft.com I now have all essential updates installed. I'm not feeling brave enough to tackle the optional updates just yet - I'm still terrible at navigating web pages.

I've also managed to install MS Security Essentials, and currently, Ninite claims to be installing Opera, OpenOffice and a FOSS PDF reader.

It's a very chastening experience. I am a dab hand at driving Windows without a mouse -- I learned on Windows 2.0 in the days when my employers didn't own a PC mouse. But much of the XP and Windows apps' UI is either inaccessible by keyboard, unreadable or just unlabelled.

For instance, stepping through the icons in the notification area, I get "icon... icon... NVDA... icon... Automatic updates... clock." Selecting each icon and opening it is the only way to find out what it's the icon for. One gives the wireless network connection info, for instance, but some lazy-ass Microsoft programmer forgot to give it a text label.

The entire UI of MS Security Essentials consists of the following: "home... update... options... scan... exit." That's it. No legible text at all. I can open Task Manager and move between the tabs, but there's no way to sort the list of tasks to find what is hogging the system. That needs a mouse-click.

Progress bars are unreadable, but NVDA makes a series of rising beeps to tell you that something's happening. It's hard to tell how far you've got, though. The mandatory Windows Genuine Advantage installer stops at about 80%, every time, even after 3 reboots. I gave up and used a third-party WGA killer app to nuke it into oblivion.

And I've compared notes with [livejournal.com profile] ednun on this. Ubuntu seems to be about the best Linux for accessibility, with an integrated screenreader, Orca - but it can read considerably less than NVDA can. Windows does seem to be the best option.

It's quite scary. Certainly I'm nowhere near being able to post status updates from a screenless PC.

(Weird font changes courtesy of the LJ rich-text edit control. Sorry about that.)
liam_on_linux: (Default)
2013-05-18 05:56 pm
Entry tags:

zRam and Swapspace

So, just for the experiment, I tried configuring a 1GB RAM VM with both zRam (compressed swap in RAM) and swapspace (on-demand swapfiles in /var so you don't need a swap partition).
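On Mint/Ubuntu of this vintage, the setup is just two packages -- names as I believe they appear in the Ubuntu archive, so treat this as a sketch:

# zram-config sets up compressed in-RAM swap devices;
# swapspace creates and removes swapfiles on demand
sudo apt-get install zram-config swapspace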

It seemed to work fine. I loaded Firefox with a ton of image tabs, plus LibreOffice, the GIMP, VLC, Evince, System Monitor and watched the swap gradually climb until zRam's half a gig of "virtual" virtual memory (IYSWIM) was exhausted, at which point it started creating swapfiles - one of 216MB followed by one of 270MB.

System performance gradually degraded, as you might expect. Eventually System Monitor froze up and then Firefox, but I suspect that if I had given them long enough, they'd have recovered as they were swapped back in.

The only snag: it hibernated happily enough, but when the VM rebooted, I got a cold boot rather than a resume from hibernation.

But if you don't want hibernation - and I don't, not on desktops - then the combination seems to work well for slightly low-memory machines.

I'd say that if you don't want or need hibernation support, there doesn't seem to be much need for a dedicated swap partition any more.

[Techie details: Mint 14, 32-bit, both Cinnamon and Maté desktops, fully up-to-date. 1GB RAM, 8GB VHD, a single ext2 partition for / and no swap partition. Running under the latest VirtualBox under the latest Ubuntu 64-bit, 3D graphics enabled (for Cinnamon's benefit).]
liam_on_linux: (Default)
2011-09-21 06:33 pm

The good old days of server management, and where it all went wrong

I have spent a lot of time and effort this year on learning my way around the current generation of Windows Server OSs, and the end result is that I've learned that I really profoundly dislike them.

Personally, I found the server admin tools in NT 3 and 4 to be quite good, fairly clean and simple and logical - partly because it was built on LAN Manager, which was IBM-designed, with a lot of experience behind it.

Since Windows 2000 Server, the new basis, Active Directory, is very similar to that of Exchange. Much of the admin revolves around things like Group Policies and a ton of proprietary extensions on top of DNS. The result is a myriad of separate management consoles, all a bit different, most of them quite limited, not really following Windows GUI guidelines because they're not true Windows apps but snap-ins to the limited MS Management Console. Just like Exchange Server, there are tons and tons of dialog boxes with 20 or 30 or more tabs each, and both the parent console and many of the dialogs contain trees with a dozen-plus layers of hierarchy.

It's an insanely complicated mess.

The main upshot of Microsoft's attempts to make Windows Server into something that can run a large, geographically-dispersed multi-site network is that the company has successfully brought the complexity of managing an unknown Unix server to Windows.

On Unix you have an unknown but large number of text files in an unknown but large number of directories, which use a wide variety of different syntaxes, and which have a wide variety of different permissions on them. These control an unknown but large number of daemons from multiple authors and vendors which provide your servers' various services.

Your mission is to memorise all the possible daemons, their config files' names, locations and syntaxes, and use low-level editing tools from the 1960s and 1970s to manage them. The boon is that you can bring your own editors, it is all easily remotely manageable over multiple terminal sessions, and components can in many cases be substituted one for another in a somewhat plug-and-play fashion. And if you're lucky enough to be on a FOSS Unix, there are no licensing issues.

These days, the Modern way to do this is to slap another layer of tools over the top, and use a management daemon to manage all those daemons for you, and quite possibly a monitoring daemon to check that the management daemon is doing its job, and a deployment daemon to build the boxes and install the service, management and monitoring daemons.

On Windows, it's all behind a GUI and now Windows by default has pretty good support for nestable remote GUIs. Instead of a myriad of different daemons and config files, you have little or no access to config files. You have to use an awkward and slightly broken GUI to access config settings hidden away in multiple Registry-like objects or databases or XML files, mostly you know or care not where. Instead of editing text files in your preferred editor, you must use a set of slightly-broken irritatingly-nonstandard and all-subtly-different GUIs to manipulate vast hierarchical trees of settings, many of which overlap - so settings deep in one tree will affect or override or be overridden by settings deep in another tree. Or, deep in one tree there will be a whole group of objects which you must manipulate individually, which will affect something else depending on the settings of another different group of objects elsewhere.

Occasionally, at some anonymous coder's whim, you might have to write some scripts in a proprietary language.

When you upgrade the system, the entire overall tree of trees and set of sets will change unpredictably, requiring years of testing to eliminate as many as possible of the interactions.

But at least in most installs it will all be MS tools running on MS OSs - the result of MS' monopoly over some two decades being a virtual software monoculture.

But of course often you will have downversion apps running on newer servers, or a mix of app and server OS versions, so some machines are running 2000, some 2003, some 2008 and some 2008R2, and apps could span a decade or more's worth of generations.

And these days, it's anyone's guess if the machine you're controlling is real or a VM - and depending on which hypervisor, you'll be managing the VMs with totally different proprietary toolsets.

If you do have third-party tools on the servers, they will either snap into the MS management tools, adding a whole ton of new trees and sets to memorise your way around, or they will completely ignore them and offer a totally different GUI -- typically one simplified to idiot level, such as an enterprise-level backup solution I supported in the spring which has wizards to schedule anything from backups to verifies to restores, but which contains no option anywhere to eject a tape. It appears to assume that you're using a robot library which handles that automatically.

Without a library, tape ejection from an actual drive attached to the server required a server reboot.

But this being Windows, almost any random change to a setting anywhere might require a reboot. So, for instance, Windows Terminal Services runs on the same baseline Windows edition, meaning automatic security patch installation - meaning all users get prompted to reboot the server, although they shouldn't have privileges to actually do so, and the poor old sysadmins, probably in a building miles away or on a different continent, can't find a single time to do so when it won't inconvenience someone.

This, I believe, is progress. Yay.

After a decade of this, MS has now decided, of course, that it was wrong all along and that actually a shell and a command line is better. The snag is that it has not learned the concomitant lessons of terseness (like Unix), or of flexible abbreviation (like VMS DCL), or of cross-command standardisation and homogeneity (although to be fair, Unix never learned that either. "Those who do not know VMS are doomed to reinvent it, poorly," perhaps.) But then, long-term MS users expect the rug to be pulled from under them every time a new generation ships, so they will probably learn that in time.

The sad thing about the proliferation of complexity in server systems, for me, is that it's all happened before, a generation or two ago, but the 20-something-year-olds building and using this stuff don't know their history. Santayana applies.

The last time around, it was Netware 4.

Netware 3 was relatively simple, clean and efficient. It couldn't do everything Netware 2 could do, but it was relatively streamlined, blisteringly fast and did what it did terribly well.

So Novell threw all that away with Netware 4, which was bigger, slower, and added a non-negotiable ton of extra complexity aimed at big corporations running dozens of servers across dozens of sites -- in the form of NDS, the Netware Directory Services. Just the ticket if you are running a network the size of Enron's or Lehman Brothers', but a world of pain for the poor self-taught saps running the single servers of millions of small businesses. They all hated it, and consequently deserted Netware in droves. Most went to NT4; Linux wasn't really there yet in 1996.

Now, MS has done exactly the same to them.

When Windows 2000 came around, Linux was ready - but the tiny handful of actual grown-up integrated server distros (such as eSmith, later SME Server) have never really caught on. Instead, there are self-assembly kits and each sysadmin builds their own. It's how it's always been done, why change?

I had hoped that Mac OS X Server might counteract this. It looked like The Right Thing To Do: a selection of the best FOSS server apps, on a regrettably-proprietary but solid base, with some excellent simple admin tools on top, and all the config moved into nice standard network-distributable XML files.

But Apple has dropped the server ball somewhere along the line. Possibly it's not Apple's fault but the deep instinctual conservatism of network and server admins, who would tend to regard such sweeping changes with fear and loathing.

Who knows.

But the current generation of both Unix and Windows server products both look profoundly broken to me. You either need to be a demigod with the patience and deep understanding of an immortal to manage them properly, or just accept the Microsoft way: run with the defaults wherever possible and continually run around patching the worst-broken bits.

The combination of these things is one of the major drivers behind the adoption of cloud services and outsourcing. You move all the nightmare complexity out of your company and your utter dependence on a couple of highly-paid god-geeks, and parcel it off to big specialists with redundant arrays of highly-paid god-geeks. You lose control and real understanding of what's occurring and replace it with SLAs and trust.

Unless or until someone comes along and fixes the FOSS servers, this isn't going to change - it's just going to continue.

Which is why I don't really want to be a techie any more. I'm tired of watching it just spiral downwards into greater and greater complexity.

(Aside: of course, nothing is new under the sun. It was, I believe, my late friend Guy Kewney who made a very plangent comment about this same process when WordPerfect 5 came out. "With WordPerfect 4.2, we've made a good bicycle. Everyone knows it, everyone likes it, everyone says it's a good bicycle. So what we'll do is, we'll put seven more wheels on it."

In time, of course, everyone looked back at WordPerfect 5.1 with great fondness, compared to the Windows version. In time, I'm sure, people will look back at the relative homogeneity of Windows 2003 Server or something with fondness, too. It seems inevitable. I mean, a direct Win32 admin app running on the same machine as the processes it's managing is bound to be smaller, simpler and faster than a decade-newer Win64 app running on a remote host...)
liam_on_linux: (Default)
2011-03-26 05:39 pm
Entry tags:

So I tried Natty Narwhal (Ubuntu 11.04). It was... um...

Since none of my spare or test machines have hardware 3D, I was unable to try it until recently. Then I was testing an MSI Wind Top all-in-one touchscreen Atom PC as part of the Simplicity Computers project. (We've decided against it now.)

(The Wind Top works OK with *buntu, but for one entertaining bug: the axes on the touchscreen are reversed. Move your finger left, the pointer goes right; move your finger up, the pointer goes down. Install the drivers and config to fix this (which depend on HAL, and so don't work right on modern *buntu) and the screen image moves off-centre and goes all blurry; so although the touchscreen now works, you can barely read anything, it's all ugly, and the picture is offset about 5mm vertically and 1cm horizontally from where it should be, and thus from where the pointer is. As it's an all-in-one, there are no screen geometry controls, hardware or software. At which point, we gave up and sent it back.)

Anyway, I got Natty alpha 3 or so working on it.

Compiz crashes more times than Aeroflot in volcano season, taking the "desktop" - not that that word is accurate any more - with it.

The autohiding menu bar is insane, combining the worst of MacOS (menus randomly changing depending which window is active, and having no spatial association with whichever window they control -- if they control any visible window) and the worst of the Amiga (on which menus are hidden unless you whack the mouse up to the top of the screen and then right-click). It's about as discoverable as Minoan Linear A.

The NotADockHonest™ is weird and feels raw and unfinished, not like something that shipped as part of Ubuntu 10.04 and 10.10 Netbook Remix. I don't like it as much as the Mac OS X Dock - and I don't like that much - but I am prepared to give the Unity Dock time. Maybe I'll adapt to it.

I mean, I don't like GNOME panels much, either, after all. They're much more customisable than Windows ones, except not in the ways I want (e.g. vertical orientation (b0rked), e.g. large panels but small icons; (no, you can't have that. And you can't have any pudding, either. Bad user, no biccie.))

(Incidentally again, if you like vertical docks and panels, Docky and GLX-Dock and AWM are all broken, too. If you want a nice, attractive dock that actually works quite well in a vertical orientation, try ADeskBar. It's good. Best I've found for Linux yet. Homepage seems to be down, though.)

Mind you, after a little playing, I like the WindowMaker docks much less than OS X ones. (I mean, no labels or tooltips? You are taking the mickey, right?)

But so far, the new Ubuntu 11.04 layout, from a play with a flaky, unstable implementation, just felt like it wasn't something powerful and capable enough to run a PC with. Not yet.

I have no choice but to stick with GNOME 2 on my laptop. It's seven years old, but rock-solid and nicely fast & responsive with Maverick. Much much better than Windows XP on the same hardware. But its ATI Radeon Mobility - actually a 16MB Rage II or III, roughly - doesn't work with Compiz and to give good performance (and to be able to drive a 1280Ă—1024 external monitor) it has to be dropped to 65K colours.

Which Ubuntu provides no UI at all to do, of course.

So you have to edit /etc/X11/xorg.conf.

Only *buntu >10.x doesn't have an xorg.conf file any more. So you have to write one of your own. (I found a blank one that can be adapted, which is very handy.)

Once you've done that and got the graphics working, then you might, perhaps, want suspend/wake and hibernate/resume to work. That means adding "nomodeset" to the kernel boot parameters.

That means you lose the graphical boot sequence (which has the colours corrupted on this machine, anyway.)

So you might want to add "vga=791" to the kernel boot params too, to get a graphical boot back, in the same resolution as your desktop.
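Concretely, the pair of tweaks amounts to something like this. (The xorg.conf fragment is a minimal illustration -- a real one needs Device and Monitor sections to match your hardware.)

# In /etc/default/grub, add the parameters, then regenerate the config:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset vga=791"
sudo update-grub

# Minimal /etc/X11/xorg.conf fragment to force 16-bit colour:
#   Section "Screen"
#       Identifier   "Default Screen"
#       DefaultDepth 16
#   EndSection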

After doing all this, it works like a dream and is really nice, but forget any hardware 3D, so forget the Netbook interface - or the new Unity one. And also, I think, that means forget GNOME 3, as well.

The obscure and poorly-supported make of this weirdly non-standard machine?

IBM.

Not Lenovo, actual IBM. It's from 2004. A Thinkpad X31.

Saying all that, I still prefer *buntu to the alternatives.

But I think that as of or after Natty, I might be going over to Linux Mint full-time...

Mint, of course, is based on GNOME 2 and has no truck with any of this netbook or unity or GNOME 3 business.

But what is going to happen when GNOME 2 is no longer supported or updated, I wonder?

I mean (*shudder*) I might have to go over to KDE. But the ugly, it burnsssssss... I don't want 23,452,356 options to tweak, I want it to work, and it really helps if it looks vaguely professional and smart while it's at it, not like a red/green colourblind 13-year-old's LSD nightmare.
liam_on_linux: (Default)
2010-01-26 09:36 pm

Ubuntu 9.10 FTW

In unrelated news, I had to bring up my main fileserver to retrieve the OpenSolaris & PC-BSD ISOs. Alas, its evaluation copy of Windows 2003 SBS has expired, but I get 1h to pull files off it each reboot, apparently.

I am considering trying to install Windows 2008 Server over the top. I don't care about saving my settings, I just don't want to have to backup & completely reformat. Alas, I seem to have lost my ISO of that.

Going looking, I found that W2k8 R2 is out and it is Micros~1's first-ever 64-bit-only OS. I'd missed this one. It's the server version of Windows 7, basically.

And I had no idea if the fairly-late-model Pentium 4 in my HP Proliant was a 64-bit capable one or not. I know it has hyperthreading, but not if it sports 64-bit extensions.

It doesn't have a mouse of its own and I couldn't get Ubuntu's rdesktop client to connect so I could run CPUID on it, so I tried logging in - only to be told that my time was up and be spanked with a BSOD. Thanks, Redmond.

An idea occurred. I could try booting my 64-bit Ubuntu CD. If it worked, it's 64-bit capable; if it doesn't, it's not.

Well, it's not, and Ubuntu helpfully printed a little message to tell me that I needed an x86-64 CPU and it could only find an x86-32 one. No worries; I am limited to original W2K8 Server then. I am sure I'll cope.
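In hindsight, there's a quicker test than burn-and-boot: on any running Linux, the "lm" (long mode) flag in /proc/cpuinfo means the CPU is 64-bit capable. One line from a live CD shell:

grep -qw lm /proc/cpuinfo && echo "64-bit capable" || echo "32-bit only"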

On a whim, I tried my copy of 32-bit Ubuntu 9.10, and to my considerable surprise, not only did it boot but it found the RAID controller and happily mounted my NTFS volume. I tried all manner of Linux distros on this last year - Ubuntu 8.04, 9.04, CentOS and SME Server - and none of them could see the RAID5 volume on its Dell-badged ALI MegaRAID card. So at some point late last year, they fixed the driver in the kernel.

Which was nice.

Which leaves me wondering... try to upgrade it to a newer Windows Server, in which I could do with more experience, or stick Ubuntu on it, which will probably be quicker and easier and more use, and won't date-expire on me in 6mths...?
liam_on_linux: (Default)
2010-01-26 09:26 pm
Entry tags:

More VirtualBox experimentation

One commenter to my big post about VirtualBox the other day - an old mate from CIX, [livejournal.com profile] syllopsium - said that he found VBox's support for OSs other than Windows or Linux to be pretty poor.

So, I thought I'd try the only couple of ISOs I have of OSs that don't belong to either of those families: OpenSolaris (0609 build) and PC-BSD 7.1 (a distro of FreeBSD 7). Interestingly, both BSD and Solaris are on VBox's list of supported VM types, so I guess they ought to work. Certainly both booted happily from their ISO files, straight into functioning GUIs. OpenSolaris is a live desktop, so I was even able to get Web access from it.
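Both can also be set up headlessly from the shell, which is handy for repeat experiments. A rough sketch with VBoxManage -- the VM name, ISO path and ostype string are examples; VBoxManage list ostypes shows what your build actually supports:

VBoxManage createvm --name osol --register
VBoxManage modifyvm osol --memory 1024 --ostype OpenSolaris_64
VBoxManage storagectl osol --name IDE --add ide
VBoxManage storageattach osol --storagectl IDE --port 0 --device 0 \
  --type dvddrive --medium ~/isos/opensolaris.iso
VBoxManage startvm osol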

I'm particularly amused by OpenSolaris. It took 2min to boot. On my old PC -- an AthlonXP 2800+ with 2G of RAM, so old but not an antique -- the same copy of OpenSolaris, burned to a CD, took about 20-25min to boot, and when it did, I had no working Ethernet ports so no working Internet access either. It's a great deal faster in a VM on this machine than on bare metal on the old one. OK, so access to a cached ISO file is quicker than a physical optical disc, but not that much faster on the other OSs I have tried. Linux Mint didn't install hugely quicker than on a physical machine -- I doubt it was as little as half the time, more like 2/3 of the time.

I must try both of these on the native hardware soon.

I'm discovering some limitations to the XP support, though. It is as one person in CIX:linux (slightly scornfully) described it: "a transparent-desktop job". XP windows do not intermingle with Linux windows; all XP windows form a single layer on the Linux desktop. Either they're all on top or none of them are. Also, in seamless mode, I can't move XP windows off the primary monitor onto my secondary screen - the seamless window is auto-sized to my primary monitor and that's all you get.

Neither of these is a killer problem. One that is more awkward is that because GNOME sees the XP VM as a single task, although I have a Spotify window on my Linux desktop, I can't alt-tab to it or select it from the GNOME window selector (when that is actually working, which on a vertical panel is fairly seldom). I think that both VMware Fusion and Parallels on the Mac have solved this.

I still think it's pretty damn fantastic, all the same, mind...
liam_on_linux: (Default)
2010-01-23 07:49 pm
Entry tags:

When NOT to use a VM & what Linux to use

A final caveat to my previous post: there is one thing you probably shouldn't try doing under XP-inna-VM: play games. The VM does sport optional 2D graphics acceleration, although I've spotted a few display glitches, but the copy of Windows in the VM can't get at your shiny whizzy fanheater of a 3D card & any modern 3D game is going to run like crap. For that, I'm afraid, you need to dual-boot into real native Windows.

TinyXP will do that just fine, but remember, you're going to have to find the latest drivers for every bit of kit in your machine. My advice:
- install TinyXP first, in a primary partition on the 1st hard disk.
- leave plenty of space for Linux; put all its partitions in logical drives in an extended partition
- next, after TinyXP is working but before it's got its drivers, install Ubuntu
- now, in Ubuntu, you can carefully peruse the output of

dmesg | less

... and work out what motherboard chipset you have, what graphics, sound, network card(s) &c. your machine is sporting. The best way to identify a motherboard, though, is just to look at it. Use a torch. You'll probably find the makers' name and the model number printed between the expansion slots.

- Using Linux, go download all the relevant Windows drivers from the manufacturers' websites.
- Go to Places | Computer and open your Windows partition. Copy the downloaded drivers into

C:\Documents and Settings\All Users\Desktop

- Then reboot into Windows again and they're all there, ready to install.

This method saves an awful lot of hassle trying to get Windows working if you have no driver disks.
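These days I'd shortcut some of the dmesg-reading with lspci: it lists every PCI device by vendor and model, and with -nn it adds the numeric IDs, which are exactly what you paste into a driver search:

lspci -nn | less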

If you install Ubuntu after Windows, it's smart enough to set up dual-boot for you. Install Windows after Ubuntu and it will screw up your boot sector, and you won't be able to boot Ubuntu any more. Also, Windows likes being in a primary partition, preferably the first, whereas Linux doesn't care.

Oh, and don't waste your time on anything other than Ubuntu. If you are at the level of expertise to have got any useful info from this piece, you probably don't need advice on choosing a distro... but just in case:

- OpenSUSE is huge and its package-management system is frankly a bit past it.
- Fedora is a sort of rolling beta. It never stabilises, it's not supported and there are no official media addons, which are free with Ubuntu.
- Kubuntu is OK if you're a KDE freak but if you don't know the difference between KDE & GNOME, just go for vanilla Ubuntu, which involves a lot less fiddling.
- Mandriva is OK but again its package-management system, like that in SUSE and Fedora, is a decade or so less advanced than the one in Ubuntu.
- Debian is too much like hard work unless you actively enjoy fiddling.
- Gentoo is for boy-racers, the sort of person who drives a 6Y old Vauxhall Nova with a full bodykit and a 150dB sound system. Just don't.
- All the rest are for Linux hackers. You don't want to go there.
liam_on_linux: (Default)
2010-01-23 07:34 pm
Entry tags:

Playing with virtualisation

I've not had a PC quick enough to really use PC-on-PC virtualisation in anger before, until [livejournal.com profile] ednun gave me the carcase of his old one. AMD Athlon64 X2 4800+, 2G RAM, no drives or graphics.

I've upped it to 4G, a couple of old 120GB EIDE hard disks, a DVD burner, a replacement graphics card (freebie from a friend) & a new Arctic Cooling Freezer7 Pro heatsink/fan from eBay to replace the old, clogged-up AMD OEM one. Total budget, just under ÂŁ20; result, quick dual-core 64-bit machine with 64-bit Linux running very nicely.

For some work stuff, I've been using Linux-under-Linux in VirtualBox, which works rather well -- but it's a kinda specialised need. There are still a few things that either don't work all that well in Linux or which I can't readily do, though. Spotify runs under WINE but crackles and pops, then stops playing after 2-3 minutes and never emits another cheep. My CIX reader, Ameol, also runs OK under WINE, but windows don't scroll correctly. I don't think there's any Linux software to sync my mobile phone or update its firmware, although I'm not sure I'd want to try the latter from within a VM anyway, just in case...

So I decided to try running Windows in a VM under Linux just for occasional access to a handful of Windows apps, without rebooting into my Windows 2000 & Windows 7RC partitions. (Makes mental note: better replace that Win7 one before the RC expires.)

I've always had reservations about running a "full-sized" copy of Windows this way. It seems very wasteful of resources to me. That is, running one full-fat full-function OS under another full-fat OS, just for access to a couple of apps. (Also, you need a licence, if the guest is a modern, commercial product, not some ancient piece of abandonware.)

So I thought I'd try some "legacy" versions of Windows to see how well they worked. I have a fairly good archive here, from Windows 3.1 up to Win7.