liam_on_linux: (Default)
Stumbled across this in the Internet Archive...

Red Hat Linux 6.2 Deluxe

Red Hat used to be a big fish in a small pond, but version 6.2 must prove itself seaworthy.

Red Hat is one of the longest-established Linux distributions and the first to be split into packages - archived bundles containing all the programs and supplementary files forming an application, allowing the user to add, remove or upgrade individual subsystems in a single operation. This modularity and upgradability made it the first Linux for non-experts and proved highly successful, to the extent that it remains the most widely used distribution in America and, in some ways, the de facto 'standard' Linux.
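
By way of illustration - the package names here are invented - adding, upgrading or removing software is a single command with Red Hat's rpm tool:

  # install a package, then upgrade it in place
  rpm -ivh someapp-1.0-1.i386.rpm
  rpm -Uvh someapp-1.1-1.i386.rpm

  # check what's installed, then remove it again
  rpm -q someapp
  rpm -e someapp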

In the past few years, though, rival distributions have surpassed it in some areas and the company’s rigorous stance against including commercial components has imposed some restrictions.

Now Red Hat is playing catch-up. Version 6.0 moved to the 2.2 kernel and version 6.1 aped Caldera and added a graphical installation program, Anaconda. This latest version, 6.2 (codenamed Zoot), smoothes out some wrinkles caused by these changes, adds an interactive startup sequence allowing troublesome components to be deactivated and claims better hardware detection. KDE is offered as an alternative GUI, although GNOME - now on its second release - is the recommended default.

Installation is quite easy. A boot floppy is provided, but the CD is bootable and after a prompt launches straight into graphics mode. Like Corel LinuxOS, there's an option to install Linux into a FAT filesystem if you want to keep Windows and don't want to repartition your drive - although this reduces performance. The installer's partitioning tool is pretty basic, though, and only the FIPS utility (the First nondestructive Interactive Partition Splitting program) is supplied for non-destructive repartitioning; we recommend buying Partition Magic for this.

There is a selection of pre-configured installations, including server, GNOME and KDE workstations and a custom option which allows packages to be individually selected. The installer can update an existing Red Hat installation from version 2.0 upwards, which is a neat touch. We tried this on 6.0 and 6.1 installations and it worked well.

There were some niggles, though. On a recent notebook PC, all the hardware, including graphics, sound, PC Card slots, USB and power management, was correctly detected and configured, but on an older Cyrix machine, vanilla NE2000 and SoundBlaster 16 cards were missed - although the 'Getting Started' manual contained simple instructions on how to add them later.

[screenshot]
Red Hat 6.2 offers a choice of GUIs as well as a vast array of skins for that personal touch

Unless you choose a custom install, there’s no option as to where to install the LILO boot manager and it silently overwrote PowerQuest’s BootMagic.

You can choose whether to boot into text or graphics mode, but misconfiguration of the X server on the Cyrix desktop meant that graphics mode failed and had to be configured manually from the command line.

Once installed, the GNOME desktop is pretty good. There isn’t the same range of integrated accessories and utilities as with KDE, but a range of helpful non-GNOME tools is included and the GNOME tools include an excellent help system, file manager and a full spreadsheet, Gnumeric.

The choice of window managers and graphical 'skins', wallpapers and screensavers is stunning: GNOME looks more attractive than KDE and is vastly more customisable. The desktop also holds links to helpful websites and local documentation and icons for CD and floppy drives. If you choose to install KDE instead, or even alongside, you get only the default KDE desktop.

The basic version of Red Hat can be downloaded as a CD image from the company's website or installed over the Internet. The Deluxe boxed edition adds 90 days of telephone support, novice-level printed manuals and several additional CDs: documentation and source code as well as free 'PowerTools' and commercial workstation applications. The Professional edition doubles the period of support, which also covers Apache configuration and includes more server-based tools.

Red Hat remains a solid distribution, but it no longer has the technological edge. SuSE is easier to install and includes vastly more software, Caldera is better integrated and has more corporate features and Corel, although immature, is the most user-friendly and Windows-like Linux around.

LIAM PROVEN


DETAILS

★★★

PRICE £64 (£54.47 ex VAT)

CONTACT Red Hat 01483 300169

http://europe.redhat.com/

SYSTEM REQUIREMENTS x86 processor with 16MB of RAM and 500MB of disk space
PROS Easier than ever; widely supported
CONS Poorer integration, features and user-friendliness than the competition
OVERALL Red Hat is the Linux baseline: if you’re already familiar with it, it’s still a sound choice, but other variants offer more
liam_on_linux: (Default)
Not to blow my own horn, but, er, well, OK, yes I am. Tootle-toot.

I'm back writing for the Register again, in case you hadn't noticed. A piece a month for the last 2 months.

You might enjoy these:

Old, not obsolete: IBM takes Linux mainframes back to the future
Your KVMs, give them to me
http://www.theregister.co.uk/2015/11/02/ibm_linux_mainframes/

From Zero to hero: Why mini 'puter Oberon should grab Pi's crown
It's more kid-friendly... No really
http://www.theregister.co.uk/2015/12/02/pi_versus_oberton/
liam_on_linux: (Default)
I was just prodded by someone after I suggested that some friends try Linux: I'd forgotten to mention that you can try it without risking your existing PC setup. It prompted me to write this...

I forget that non-techies don't _know_ stuff like that.

Download a program called VirtualBox. It's free and it lets you run a whole other operating system - e.g. Linux - under Windows as a program. So you can try it out without affecting your real computer.

https://www.virtualbox.org/

If all you know is Windows, I'd suggest Linux Mint: http://www.linuxmint.com/

It has a desktop that looks and works similarly to Windows' classic pre-Win8 look & feel.

Google for the steps, but here are the basic instructions (there's a scripted alternative after the list, if you prefer the command line):

[1] Download and install VirtualBox

[2] Then download the VirtualBox Extension Pack from the same site. Double-click the downloaded file to install it into VBox. (It has to be distributed separately for licensing reasons.)

[3] Download Mint. It comes as an ISO file, an image of a DVD.

[4] Make a new VM in VBox. Give it 2-3 gig of RAM. Enable display 3D acceleration in the settings. (Remember, anything you don't know how to do, Google it.) Leave all the other settings as they are.

[5] Start your new VM. It will ask for an ISO file. Point it at the ISO file of Mint you downloaded.

[6] It will boot and run. Install it onto the virtual hard disk inside Vbox. Just accept all the defaults.

[7] Reboot your new Mint VM.

[8] Install the VBox Guest Additions. On the VBox Devices menu, choose "Insert Guest Additions CD image". Google for instructions on how to install them.

[9] When it’s finished, reboot the VM.

[10] Update your new copy of Linux Mint. (Remember, Google for instructions.)
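
By the way, if you'd rather script steps 2, 4 and 5 than click through the GUI, VirtualBox has a command-line tool, VBoxManage, that can do the whole job. A rough sketch - the VM name, memory size, disk size and file names here are all just examples, so adjust to taste:

  # step 2: install the Extension Pack (the file name varies by version)
  VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-*.vbox-extpack

  # step 4: create and register a 64-bit Linux VM, give it 3 gig of RAM and 3D acceleration
  VBoxManage createvm --name "Mint" --ostype Ubuntu_64 --register
  VBoxManage modifyvm "Mint" --memory 3072 --vram 128 --accelerate3d on

  # give it a 20 gig virtual hard disk on a SATA controller
  VBoxManage createhd --filename Mint.vdi --size 20480
  VBoxManage storagectl "Mint" --name "SATA" --add sata
  VBoxManage storageattach "Mint" --storagectl "SATA" --port 0 --device 0 --type hdd --medium Mint.vdi

  # point the virtual DVD drive at the Mint ISO you downloaded in step 3
  VBoxManage storageattach "Mint" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium linuxmint.iso

  # step 5: boot it
  VBoxManage startvm "Mint"

Everything after that - the Mint installer, the Guest Additions - is the same as the GUI route.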

That’s it. Play with it. See if you can do the stuff you normally do on Windows all right. If you can’t, Google for what program to use and how to install it. It’s not as quick as a real PC but it works.

Don’t assume that because you know how to do something on Windows, it works that way on Linux. E.g. you should never download programs from a website and install them into Linux — it has a better way. Be prepared to learn some stuff.
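
For instance, say you want the VLC media player (just an example): on Mint, the better way is the package manager, either the Software Manager program or one line in a terminal:

  # fetch the latest list of available software, then install VLC from Mint's own repositories
  sudo apt-get update
  sudo apt-get install vlc

The same mechanism updates everything - apps and OS alike - in one go, which is part of why you never need to go hunting on websites.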

If you can work it, then you can install it on your PC alongside Windows. This is called Dual Booting. It’s quite easy really and then you choose whether you want Windows or Linux when you turn it on.

All my PCs do it, but I use Windows about once or twice a year, when I absolutely need it. Which is almost never. I only use Windows if someone is paying me to — it is a massive pain to maintain and keep running properly compared to more grown-up equivalents. (Linux and Mac OS X are based on a late-1960s project; they are very mature and polished. The first version of the current Windows family is from 1993. It’s still got a lot of growing up to do — it’s only half the age.)

It’s genuinely better. No, you don’t get all the Windows programs. There aren’t many games for it, for instance. But it can do anything Windows can do, and it’s faster. It’s immune to all the Windows viruses and nasties, so you don’t need antivirus or a firewall or anything. That makes it faster still — antivirus slows computers down, but on Windows you need it.

All the apps are free. All the updates are free, forever. There are thousands of people on web fora who will help you if you have problems, you just have to ask. It’s educational — you will learn more about computers from learning a different way to use them, but that means you won’t be so helpless. You don’t need to be a white-coated genius scientist, but what it means is you take control back from some faceless corporation. Remember, the world’s richest man got that way by selling people stuff they could have had for free if they just knew how.
liam_on_linux: (Default)
I have ruffled many feathers with my position that the touch-driven computing sector is growing so fast that it's going to subsume the old WIMP model completely. I don't mean that iPads will replace Windows PCs, but that the descendants of the PC will look and act more like tablets than today's desktops and laptops.

But where is it leading, beyond that point? I have absolutely no concrete idea. But the end point? I've read one brilliant model.

It's in one of the later Foundation books by Isaac Asimov, IIRC. (Not a series I'm that enamoured of, actually.)

A guy gets (steals?) a space yacht: a small, 1-man starship. (Set aside the plausibility of this.)

He searches the ship's crew quarters. In its few luxury rooms, there is no cockpit. No controls, no instruments, nothing. He is bemused.

He returns to the comfiest room, the main stateroom, i.e. cabin/bedroom. In it there is a large, bare dressing table with a comfy seat in front of it. He sits.

Two handprints appear, projected on the surface of the desk, shaped in light.

He studies them. They're just hand-shaped spots of light. He puts his hands on them.

And suddenly, he is much smarter. He knows the ship's position and speed in space. He knows where all the nearby planetary bodies are, their gravity wells, the speeds needed to reach them and enter orbit.

Thinking of the greater galaxy, he knows where all the nearby stars are, their masses, their luminosities, their planetary systems. Merely thinking of a planet, he knows its cities, ports, where to orbit it, etc.

All this knowledge is there in his mind if he wants it; if he allows his attention to move elsewhere, it's gone.

He sits back, shocked. His hands lift from the prints on the desk, and it all disappears.

That is the ultimate UI. One you don't know is there.

Any UI where there are metaphors and abstractions and controls you must operate is inferior; direct interaction is better. We've moved from text views of marked-up files with arcane names in folder hierarchies to today: hi-res, full-colour, moving images of fully-formatted documents and images. That's great.

Some people are happily directly manipulating these — drawing and stroking screens with all their fingers, interacting naturally. Push up to see the bottom of a document, tap on items of interest. It's so natural pre-toddlers can do it.

But many old hands still like their pointing hardware and little icons on screen that they can twiddle with their special pointing devices, and they shout angrily that it's more precise and it's tried and tested and it works.

Show them something better and no, it's a toy: OK for idly surfing the web, or reading, or watching movies, but no substitute for the "real thing".

It's a toy and the mere idea that these early versions could in time grow into something that could replace their 4-box Real Computer of System Unit, Monitor, Mouse and Keyboard is a nonsensical piece of idiocy.

Which is exactly what their former bosses and their tutors said about the Mac's UI 30y ago. It's doubtless what they said about the tinker-toy CP/M boxes a decade before that, and so on.

I'm guilty too. I am using a 25y old keyboard on my tiny silent near-unexpandable 2011 Mac mini, attached via a convertor that cost more than the keyboard and about a third as much as the Mac itself. I don't have a tablet; I don't personally like them much. I like my phablet, though. I gave away my Magic Trackpad - I didn't like it.

(And boy did my friends in the FOSS community curse me out for buying a Mac. I'm a traitor and a coward, apparently.)

But although I personally don't want this stuff, nonetheless, I think it's where we're going.

If adding more layers of abstraction to the system means we can remove layers of abstraction from the human-computer interface, then I'm all for it. The more we can remove, the simpler and easier and clearer the computers we can make, the better. And if we can make them really small and cheap and thus give one to every child in the poorer countries of the world — I'd be delighted.

If the price of that was putting Microsoft and Apple out of business and destroying the career of everyone working with Windows, and replacing it all with that nasty cancerous GPL and Big-Brother-like services like Google — it would still be worth it.

liam_on_linux: (Default)
(Repurposed CIX post.)

Don’t get me wrong. I like Apple kit. I am typing right now on an original 1990 Apple Extended II keyboard, attached via an ADB-USB convertor to a Core i5 Mac mini from 2011, running Mac OS X 10.10. It’s a very pleasant computer to work on.

But, to give an example of the issues — I also have an iPhone. It’s my spare smartphone with my old UK SIM in it.

But it’s an iPhone 4. Not a lot of RAM, an underclocked CPU, and of course not upgradable.

So I’ve kept it on iOS 6, because I already find it annoyingly slow and iOS 7 would cause a reported 15-25% or more slowdown. And that’s the latest it will run.

Which means that [a] I can’t use lots of iPhone apps as they no longer support iOS 6.x and [b] it doesn’t do any of the cool integration with my Mac, because my Mac needs a phone running iOS 8 to do clever CTI stuff.

My old 3GS I upgraded from iOS 4 to 5 to 6, and regretted it. It got slower & slower and Apple being Apple, *you can’t go back*.

Apple kit is computers simplified for non-computery people. Stuff you take for granted with COTS PC kit just can’t be done. Not everything — since the G3 era, they take ordinary generic RAM, hard disks, optical drives, etc. Graphics cards etc. can often be made to work; you can, with work, replace CPUs and run OSes too modern to be supported.

But it takes work. If you don’t want that, if you just max out the RAM, put a big disk in and live with it, then it’s fine. I’m old enough that I want a main computer that Just Works and gives me no grief and the Mac is all that and it cost me under £150, used. The OS is of course freeware and so are almost all the apps I run — mostly FOSS.

I like FOSS software. I use Firefox, Adium, Thunderbird, LibreOffice, Calibre, VirtualBox and BOINC. I also have some closed-source freeware like Chrome, Dropbox, TextWrangler and Skype. I don’t use Apple’s browser, email client, chat client, text editor, productivity apps or anything. More or less only iTunes, really.

What this means is that I can use pretty much the same suite of apps on Linux, Mac and Windows, making switching between them seamless and painless. My main phone runs Android, my travelling laptop is a 2nd-hand Thinkpad with the latest Ubuntu LTS on it.

As such, many of the benefits of an all-Apple solution are not available to me — texting and making phone calls from the desktop, seamless handover of file editing from desktop to laptop to tablet, wireless transparent media sync between computers and phone, etc.

I choose not to use any of this stuff because I don’t trust closed file formats and dislike vendor lock-in.

Additionally, I don’t like Apple’s modern keyboards and trackpads, and I like portable devices where I can change the battery or upgrade the storage. So I don’t use Apple laptops and phones and don’t own a tablet. iPads are just big iPhones and I don’t like iPhones much anyway. The apps are too constrained, I hate typing on a touchscreen “keyboard” and I don’t like reading book-length texts from a brightly-glowing screen — I have a large-screen (A4) Kindle for ebooks. (Used off eBay, natch.) TBH I’d quite like a backlight on it but the big-screen model doesn’t offer one.

But I don’t get that lock-in with Ubuntu. I never used UbuntuOne; I don’t buy digital content at all, from anyone; my Apple account is around 20 years old and has no payment method set up on it. I have no lock-in to Apple and Ubuntu doesn’t try to foist it on me.

With Ubuntu, *I* choose the laptop and I can (and did) build my own desktops, or more often, use salvaged freebies. My choice of keyboard and mouse, etc. I mean, sure, the Retina iMac is lovely, but it costs more than I’m willing to spend on a computer.

Android is… all right. It’s flakey but it’s cheap, customisable (I’ve replaced web browser, keyboard, launcher and email app, something Apple does not readily permit without drastic limitations) and it works well enough.

But it’s got bloatware, tons of vendor-specific extensions and it’s not quick.

Ubuntu is sleek as Linuxes go. I like the desktop. I turn off the web ads and choose my own default apps and it’s perfectly happy to let me. I can remove the built-in ones if I want and it doesn’t break anything.

If I could get a phone that ran Ubuntu, I’d be very interested. And it might tempt me into buying a tablet.

I’ve tried all the leading Linuxes (and most of the minor ones) and so long as you’re happy with its desktop, Ubuntu is the best by a country mile. It’s the most polished, best-integrated, it works well out of the box. I more or less trust them, as much as I trust any software vendor.

The Ubuntu touch offerings look good — the UI works well, the apps look promising, and they have a very good case for the same apps working well on phone and tablet, and the tablet becoming a usable desktop if you just plug a mouse in.

Here’s a rather nice little 3min demo:
https://www.youtube.com/watch?v=c3PUYoa1c9M

Wireless mouse turned on: desktop mode, windows, title bars, menus, etc.
Turn it off, mid-session: it’s a tablet, with touch controls. *With all the same apps and docs still open.*
Mouse back on: it’s in desktop mode again.

And there’s integration — e.g. phone apps run full-size in a sidebar on a tablet screen, visible side-by-side with tablet apps.

Microsoft doesn’t have this, Apple doesn’t, Google doesn’t.

It looks promising, it runs on COTS hardware and it’s FOSS. What’s not to like?

I suspect, when the whole plan comes together, that they will have a compelling desktop OS, a compelling phone OS and a compelling tablet OS, all working very well together but without any lock-in. That sounds good to me and far preferable to shelling out thousands on new kit to achieve the same on Apple’s platform. Because C21 Apple is all about selling you hardware — new, and regularly replaced, too — and then selling you digital content to consume on it.

Ubuntu isn’t. Ubuntu’s original mission was to bring Linux up to the levels of ease and polish of commercial OSes.

It’s done that.

Sadly, the world failed to beat a path to its door. It’s the leading Linux and it’s expanded the Linux market a little, but Apple beat it to market with a Unix that is easier, prettier and friendlier than Windows — and if you’re willing to pay for it, Apple makes nicer hardware too.

But now we’re hurtling into the post-desktop era. Apple is leading the way; Steve Jobs finally proved his point that he knew how to make a tablet that people wanted and Bill Gates didn’t. Gates’ company still doesn’t, even when it tries to embrace and extend the iPad type of device: millions of the original Surface tablets are destined for landfill like the Atari ET game and the Apple Lisa. (N.B. *not* the totally different Surface Pro, which people use as a lightweight laptop.)

But Apple isn’t trying to make its touch devices replace desktops and laptops — it wants to sell both.

Ubuntu doesn’t sell hardware at all. So it’s trying to drag proper all-FOSS Linux kicking and screaming into the twenty-twenties: touch-driven *and* driveable by desk-bound keyboard-and-mouse I/O, equally happy on ARM or x86-64, very shiny but still FOSS underneath.

The other big Linux vendors don’t even understand what it’s trying to do. SUSE does Linux servers for Microsoft shops; Red Hat sells millions of support contracts for VMs in expensive private clouds. Both are happy doing what they’re doing.

Whereas Shuttleworth is spending his millions trying to bring FOSS to the masses.

OK, what Elon Musk is doing is much much cooler, but Shuttleworth’s efforts are not trivial.
liam_on_linux: (Default)
E-book readers are full of electronics. These require large, expensive factories, which use a lot of resources. Then the devices are shipped, consuming more resources - such hi-tech manufacture is expensive, so it is done somewhere cheap, which means international shipping. Books are cheap to print.

Then you need a computer with Internet access to get your ebooks - more hi-tech, more distant manufacturing and transport. It downloads books from big websites, meaning big datacentres, meaning lots and lots of manufacturing and power.

Then the devices need regular charging - so more power, more fuels being burned, more power distribution.

Books tend to last. They're cheap, need no power, have no DRM (photocopy 'em or scan 'em if you want - it's laborious but perfectly doable), can be reused many times by many people, can be lent and borrowed (think libraries), etc.

liam_on_linux: (Default)
I love my Android phone in some ways - what it can do is wonderful. The formfactor of my Nokia E90 was better in every single way, though. Give the Nokia a modern CPU, replace its silly headphone socket, MiniUSB port & Nokia charging port with a standard jack & a MicroUSB, make the internal screen a touchscreen, and I would take your arm off in my haste to acquire it.
liam_on_linux: (Default)
I do it myself. I wrote some get-started-with-Linux articles for the Register a while ago and got panned for that in the comments.

There are good reasons, though. It's just that all the n00bs and Windows lusers are scared of text, they want point-and-drool. ;-)

The reasons are these:

* with text, you can copy & paste - you can't do that with descriptions of click-this-click-that

* text is exact & unambiguous. Didn't work? You typed it wrong. 

* text is much much shorter. Full descriptions of GUI ops take pages.

Additionally, it's hard to describe icons unambiguously & most non-techies don't know what the words to describe GUIs mean: they don't know the difference between an icon and a button, or what a pull-down or listbox is. Give them very precise instructions & they can't understand them - which they will adamantly deny. They will lie, cheerfully and repeatedly, and claim that things are invisible, don't exist, don't work, are not there, etc., because they would rather do that than admit that they do not understand the words you are using, because that would make them look stupid. They are not stupid - well, some aren't, anyway - but they will not admit that they use words without knowing what they mean and that they know nothing at all about the computer they spent thousands on.

Seriously. In my extensive experience in ~25yr in support, the majority of users do not actually know which bit the computer is. They may have paid as much as a small car but they don't know where it is. They think it's the screen, or in the keyboard, and never associate it with the footrest on the floor. They don't know they need electricity or cooling. They don't know what an operating system or a program is. But they glibly use terms like "hard disk" without the first notion of what they mean.

But they don't know that they don't know - Dunning-Kruger applies - and they will lie, long and loud and hard, rather than admit any failing on their part.

That's why the IT Crowd has the support guys put on a tape loop with "have you tried turning it off and back on again?" followed by "are you absolutely SURE it's plugged in?"

We all have heard stories like the urban legend about the American tourist melting a car's gearbox by driving hundreds of miles in 1st gear, not realising it is not an automatic.

Well, computer users are much much worse than this. They would drive backwards, or roll the car into a lake and sit on the bottom rowing it with oars, rather than admit that they don't know the difference between Microsoft Windows and Microsoft Word.

And that's why, rather than trying to describe what a volume icon is and what it looks like and where to find it and how to right click it, we say: "press Ctrl-Alt-T and paste in this line".
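
For example - and this is just an illustrative command; the exact mixer control name varies from machine to machine - instead of describing the speaker icon, you say:

  # unmute the main output and set it to half volume
  amixer set Master 50% unmute

One line. Paste it, press Enter, done.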

People don't like it, but it's short and it works.

liam_on_linux: (Default)
I quite like VirtualBox. Yes, VMware has strengths, but VBox works a treat, does the seamless-desktop thing with certain hosts/guests, and basically why pay?

I use VMware Player when I'm doing stuff that requires direct USB access - it's a lot less hassle than VBox for that. You need to run it with admin rights, though, which is a snag.

But when I am reviewing operating systems, I tend not to use virtual machines. I mean, sure, they work, but - for instance - one will not feel or experience the ways in which Ubuntu is a lot better than Windows unless one's running it on the actual hardware. E.g. the fast boot and shutdown times, the improved performance one gets when one doesn't need an antivirus program scanning every sodding disk access and all the crap that runs in the background in Windows.

Raw Ubuntu is quicker and feels quicker, and personally, I prefer the UI to Windows 7's. Win7 is the result of 17 years of work on the Win95 Explorer and yet in some ways it's inferior to the original. I preferred the original taskbar and the original file manager, TBH.

Ubuntu is a breath of fresh air.

And if Ubuntu is nice and quick, then the stripped-down "remixes" of it, such as Lubuntu and Bodhi Linux, can be breathtaking. You don't get a real feel for that in a VM.

Another issue is drivers. There's the delightful way that Linux and Mac OS X just use generic drivers, rather than Windows' endless dicking around with that vendor's particular driver for that rebranded Taiwanese POS and the pointless fucking icon it sticks in your notification area.

There's the joy of no serial numbers, no activation, and an OS that you can just copy onto an external drive or onto an entirely different PC with totally different hardware and which Just Works™ without falling in a heap because the drive controller chipset has changed or because you've changed more bits of hardware than some evil fatcat bastard's minions in Seattle have decided you're allowed to.

You don't get any of that in a VM.

Running an OS in a VM is like trying to understand what it's like to pet a cat, or perhaps cuddle a baby if you like the things, when it's in an isolation chamber and your arms are in giant rubber gloves and you're peering at it through a small window.

Yeah, it's better than nothing, but it's Not The Same. You don't get a real feel for it.
liam_on_linux: (Default)
I have spent a lot of time and effort this year on learning my way around the current generation of Windows Server OSs, and the end result is that I've learned that I really profoundly dislike them.

Personally, I found the server admin tools in NT 3 and 4 to be quite good, fairly clean and simple and logical - partly because it was built on LAN Manager, which was IBM-designed, with a lot of experience behind it.

Since Windows 2000 Server, the new basis is Active Directory, whose administrative model is very similar to that of Exchange. Much of the admin revolves around things like Group Policies and a ton of proprietary extensions on top of DNS. The result is a myriad of separate management consoles, all a bit different, most of them quite limited, not really following Windows GUI guidelines because they're not true Windows apps, they're snap-ins to the limited MS Management Console. Just like Exchange Server, there are tons and tons of dialog boxes with 20 or 30 or more tabs each, and both the parent console and many of the dialogs contain trees with a dozen-plus layers of hierarchy.

It's an insanely complicated mess.

The main upshot of Microsoft's attempts to make Windows Server into something that can run a large, geographically-dispersed multi-site network is that the company has successfully brought the complexity of managing an unknown Unix server to Windows.

On Unix you have an unknown but large number of text files in an unknown but large number of directories, which use a wide variety of different syntaxes, and which have a wide variety of different permissions on them. These control an unknown but large number of daemons from multiple authors and vendors which provide your servers' various services.

Your mission is to memorise all the possible daemons, their config files' names, locations and syntaxes, and use low-level editing tools from the 1960s and 1970s to manage them. The boon is that you can bring your own editors, that it is all easily remotely manageable over multiple terminal sessions, and that components can in many cases be substituted one for another in a somewhat plug-and-play fashion. And if you're lucky enough to be on a FOSS Unix, there are no licensing issues.

These days, the Modern way to do this is to slap another layer of tools over the top, and use a management daemon to manage all those daemons for you, and quite possibly a monitoring daemon to check that the management daemon is doing its job, and a deployment daemon to build the boxes and install the service, management and monitoring daemons.

On Windows, it's all behind a GUI and now Windows by default has pretty good support for nestable remote GUIs. Instead of a myriad of different daemons and config files, you have little or no access to config files. You have to use an awkward and slightly broken GUI to access config settings hidden away in multiple Registry-like objects or databases or XML files, mostly you know or care not where. Instead of editing text files in your preferred editor, you must use a set of slightly-broken irritatingly-nonstandard and all-subtly-different GUIs to manipulate vast hierarchical trees of settings, many of which overlap - so settings deep in one tree will affect or override or be overridden by settings deep in another tree. Or, deep in one tree there will be a whole group of objects which you must manipulate individually, which will affect something else depending on the settings of another different group of objects elsewhere.

Occasionally, at some anonymous coder's whim, you might have to write some scripts in a proprietary language.

When you upgrade the system, the entire overall tree of trees and set of sets will change unpredictably, requiring years of testing to eliminate as many as possible of the interactions.

But at least in most installs it will all be MS tools running on MS OSs - the result of MS' monopoly over some two decades being a virtual software monoculture.

But of course often you will have downversion apps running on newer servers, or a mix of app and server OS versions, so some machines are running 2000, some 2003, some 2008 and some 2008R2, and apps could span a decade or more's worth of generations.

And these days, it's anyone's guess if the machine you're controlling is real or a VM - and depending on which hypervisor, you'll be managing the VMs with totally different proprietary toolsets.

If you do have third-party tools on the servers, they will either snap into the MS management tools, adding a whole ton of new trees and sets to memorise your way around, or they will completely ignore them and offer a totally different GUI - typically one simplified to idiot level, such as an enterprise-level backup solution I supported in the spring, which has wizards to schedule anything from backups to verifies to restores, but which contains no option anywhere to eject a tape. It appears to assume that you're using a robot library which handles that automatically.

Without a library, tape ejection from an actual drive attached to the server required a server reboot.

But this being Windows, almost any random change to a setting anywhere might require a reboot. So, for instance, Windows Terminal Services runs on the same baseline Windows edition, meaning automatic security patch installation - meaning all users get prompted to reboot the server, although they shouldn't have privileges to actually do so, and the poor old sysadmins, probably in a building miles away or on a different continent, can't find a single time to do so when it won't inconvenience someone.

This, I believe, is progress. Yay.

After a decade of this, MS has now decided, of course, that it was wrong all along and that actually a shell and a command line is better. The snag is that it has not learned the concomitant lessons of terseness (like Unix) or of flexible abbreviation (like VMS DCL), or of cross-command standardisation and homogeneity (although to be fair, Unix never learned that, either. "Those who do not know VMS are doomed to reinvent it, poorly," perhaps.) But then, long-term MS users expect the rug to be pulled from under them every time a new generation ships, so they will probably learn that in time.

The sad thing about the proliferation of complexity in server systems, for me, is that it's all happened before, a generation or two ago, but the 20-something-year-olds building and using this stuff don't know their history. Santayana applies.

The last time around, it was Netware 4.

Netware 3 was relatively simple, clean and efficient. It couldn't do everything Netware 2 could do, but it was relatively streamlined, blisteringly fast and did what it did terribly well.

So Novell threw all that away with Netware 4, which was bigger, slower, and added a non-negotiable ton of extra complexity aimed at big corporations running dozens of servers across dozens of sites - in the form of NDS, the Netware Directory Services. Just the ticket if you are running a network the size of Enron's or Lehman Brothers', but a world of pain for the poor self-taught saps running the single servers of millions of small businesses. They all hated it, and consequently deserted Netware in droves. Most went to NT4; Linux wasn't really there yet in 1996.

Now, MS has done exactly the same to them.

When Windows 2000 came around, Linux was ready - but the tiny handful of actual grown-up integrated server distros (such as eSmith, later SME Server) have never really caught on. Instead, there are self-assembly kits and each sysadmin builds their own. It's how it's always been done, why change?

I had hoped that Mac OS X Server might counteract this. It looked like The Right Thing To Do: a selection of the best FOSS server apps, on a regrettably-proprietary but solid base, with some excellent simple admin tools on top, and all the config moved into nice standard network-distributable XML files.

But Apple has dropped the server ball somewhere along the line. Possibly it's not Apple's fault but the deep instinctual conservatism of network and server admins, who would tend to regard such sweeping changes with fear and loathing.

Who knows.

But the current generation of both Unix and Windows server products both look profoundly broken to me. You either need to be a demigod with the patience and deep understanding of an immortal to manage them properly, or just accept the Microsoft way: run with the defaults wherever possible and continually run around patching the worst-broken bits.

The combination of these things is one of the major drivers behind the adoption of cloud services and outsourcing. You move all the nightmare complexity out of your company and your utter dependence on a couple of highly-paid god-geeks, and parcel it off to big specialists with redundant arrays of highly-paid god-geeks. You lose control and real understanding of what's occurring and replace it with SLAs and trust.

Unless or until someone comes along and fixes the FOSS servers, this isn't going to change - it's just going to continue.

Which is why I don't really want to be a techie any more. I'm tired of watching it just spiral downwards into greater and greater complexity.

(Aside: of course, nothing is new under the sun. It was, I believe, my late friend Guy Kewney who made a very plangent comment about this same process when WordPerfect 5 came out. "With WordPerfect 4.2, we've made a good bicycle. Everyone knows it, everyone likes it, everyone says it's a good bicycle. So what we'll do is, we'll put seven more wheels on it."

In time, of course, everyone looked back at WordPerfect 5.1 with great fondness, compared to the Windows version. In time, I'm sure, people will look back at the relative homogeneity of Windows 2003 Server or something with fondness, too. It seems inevitable. I mean, a direct Win32 admin app running on the same machine as the processes it's managing is bound to be smaller, simpler and faster than a decade-newer Win64 app running on a remote host...)
liam_on_linux: (Default)
Prediction. Computers as we know them - boxy devices with a variety of ports and discrete single-function I/O devices such as keyboards and mice - are going to disappear in the next decade or less.

Here's how I see the development as it has gone and will go.

1st generation 1960s-1970s?

Dumb terminals attached to big computers in separate rooms, tended by specialist staff. Control of software is with command-line interfaces and modal control-key interfaces, e.g. vi. Unusable without training and considerable learning; users confined to specialists.

2nd gen. 1980s?

Textual interfaces on micros, driven by keystrokes. Needs learning, but it's easier. Systems come with manuals, keyboard overlays, help systems. Usable by hobbyists and nonspecialist staff with some training. Uptake grows.

Gen 3. Late 1980s to early/mid 1990s?

WIMP GUIs. Mouse as primary interaction. Early systems monochrome; colour used decoratively rather than informatively. Quite discoverable - documentation starts to shrink, become optional. A lot of uniformity in design, leading to lawsuits. Usage becomes widespread; people with the suitable inclinations or requirements teach themselves to use computers and use them for a broad range of activities, including leisure and the arts.

G4. Late 1990s-early noughties?

Sophisticated WIMP GUIS. Programs develop specialist extensions to the WIMP and designs of WIMPs diverge a lot. Specialist WIMPs appear, such as in CAD, 3D design, games design, which are as complex and non-discoverable as anything from the 1970s and earlier. Literacy in GUIs is assumed, which increasingly disenfranchises many people.

G5. Late noughties.

The rise of the Web. Internet access becomes the dominant driver of computer use. Web pages rapidly grow in sophistication and all traces of common UI design go out the window. 3D-accelerated multi-colour GUIs start to appear; colour and shading and transparency are used to enhance the GUI and to convey information. (e.g. red wiggly underline = misspelling; green wiggly underline = grammatical error.) Internet users are now over a billion; many solely use social networks, completely eschewing older communications channels such as email. An entire generation of young adults in the West have grown up with Internet-connected computers as purely leisure devices, toys not tools. Lack of computer literacy severely impedes adoption of modern computers for many, especially older people. Talk of "digital disenfranchisement."

G6. 2007 - 2012/2013 or so.

In hindsight, the last days of the WIMP. Right-clicking and 3D are near-universal; multitouch finger-driven UIs start to appear in specialist devices. In the late noughties, immediate, instant, very simplified interfaces appear everywhere on consumer devices; the primary and often sole interface device is a touchscreen and a finger or fingers. WIMPs are now labouring to keep up, leading to things like "ribbons" and context-sensitive toolbars that appear and disappear dynamically. Broad differences start to appear between different devices, platforms and manufacturers; recently-dominant companies & platforms that fail to adapt struggle and die, e.g. Nokia, Symbian, Palm.

GUIs are now very rich but very complex, with animations, fades, zooms and animated 3D objects and effects. It now takes time and effort to learn to use them effectively; thriving aftermarket in manuals, as these are no longer supplied with software at all.

Slate-style devices start to become a significant sector - inside 4y, the primary computing device for a quarter or more of Internet users. These do not use a WIMP at all. Users of traditional platforms deride finger-driven computers as simplistic, limited, toylike, crippled, and point out the resemblances between their richer platforms and the new simple ones.

Widespread adoption of finger-driven slate devices by people who previously could not use computers at all, or only in extremely limited ways. With no knowledge or training at all and little or no online help even present in the new systems, people can surf, take and exchange pictures, sound and video clips, talk, play games against one another and so on. Wireless internet access is seen as a basic human right; deprivation of it associated with rioting.

Apparent drop of online literacy reveals that people who have not written since school are now writing and communicating online. Increased replacement of text by video; sites such as Youtube and the Khan Academy and ubiquitous podcasting replace simple text-based information sharing with amateurish audio and video clips, labelled and tagged illiterately if at all. Younger users find this more accessible than written information.

G7. Mid-twenty-teens.

New generation of keyboard/mouse-driven computer OSs adopt slate-style interfaces (as per the early signs seen in Mac OS X 10.7, Windows 8, Ubuntu 11.x et seq.). There are no menus or windows, no way to close applications, no direct access to the filesystem. Sometimes the new interfaces are optional and many experienced users dislike them. Novices don't even notice. For users not interested in the technology itself, simple slate-type devices replace conventional "personal computers" for everyone except power users or developers. Workers in organisations issued with corporate boxes do not get to choose, but are now a minority part of the computer-using community. Speech and facial and bodily-gesture recognition starts to be used as a control system in ordinary use, not just in gaming.

G8. Late twenty-teens (2017-2019/2020).

Slate-type, finger- and speech-driven computers of various sizes utterly dominate the mass market; phones, gaming devices, media players, ebook readers all have merged into cheap commodity sealed units, not user serviceable or upgradable and completely unusable without broadband internet. Keyboard/mouse-driven desktops and laptops are confined to specialist niche markets in business. "Desktop" OSs merge with slate/tablet OSs; no distinction is visible and things like "WIMPs", "disks", "files", "directories" are a fading memory. Software distribution is entirely electronic; physical media disappear, largely including for the music and video industries.

2020 and beyond

Devices such as "desktops", "laptops" and "games consoles" are historical, as rare as serial text terminals. No remaining division between "computers" and "screens" - they are the same device - or between "disk" and "memory".

Sharp division apparent between technical specialists and hobbyists using computers with vestiges of windows and command lines and other legacy technologies, used only in development and for building and administering servers. Legacy tech such as magnetic or optical disks only used in servers; most computer users have never seen one, nor a "port" or "cable". Ordinary computers are sealed, solid-state, battery-powered devices, charging and communicating wirelessly; owners would no more increase their devices' memory than they'd rebore the cylinders of their car engine for greater cubic capacity. Small numbers of hobbyists build their own machines from components and exchange "software" and "files" as nostalgic entertainment.

Summary:

The notion of the "PC" has less time left than the time since the turn of the century. Before this current decade is out, various sizes of sealed, disposable computer - just a touchscreen that responds to gestures, facial expressions and voice - will have completely replaced the PC and the Mac in all except specialist niche markets, where they are used by less than a tenth of a percent of "computer users." Existing media printing, recording, publishing and broadcasting companies will have mostly ceased to exist unless they have transformed themselves into a direct-to-subscriber model; there are few TV channels and other mass one-to-many channels remaining, and these are seen as the refuge of the extremely poor or disadvantaged. Production of books and printed media becomes a specialist minority craft activity, like sculpting or painting on canvas.
liam_on_linux: (Default)
In the beginning were the dinosaurs: Erwise, Cello, Mosaic, Lynx and things. Nobody under 40 remembers them and they're all long extinct. Everyone used Mosaic anyway, which was FOSS from the NCSA.

Nobody's heard of the NCSA any more, which is a shame as they also gave the world Apache and without them there wouldn't be a Web. They made something useful out of Tim Berners-Lee's work at CERN, but timbl and CERN are far more famous.

Odd, really, that neither CERN nor the NCSA ostensibly have anything to do with the Internet.

Mosaic begat loads of different browsers. All were also called Mosaic. Many were proprietary, "enhanced" versions, which actually weren't.

Only one was any good. Called - surprise! - Mosaic, it came from a company also called Mosaic. (Are you following all this?) Developed under the codename "Mozilla" - the Godzilla of Mosaics, you see - it was Mosaic with embedded pictures and FTP and cool stuff like that. Hey, it was 1994. People complained about the confusing name so the company renamed itself Netscape and renamed their browser Netscape as well, which isn't confusing at all. It was shareware, vastly successful, created the original 1990s Web and was killed off by Microsoft giving Internet Explorer away for free.

But as Ben Goldacre likes to say so much that he has put it on a T-shirt: "I think you'll find it's a little more complicated than that."

For starters, IE 1 was an optional extra for Windows 95, you had to buy it, and it was utterly crap. IE, incidentally, is also based on Mosaic, via Spyglass. MICROS~1 didn't write IE themselves, they just bought it in. You'd be surprised how many "Microsoft" products were not actually written by Microsoft: PowerPoint, Visual Basic, SQL Server, Defender, FrontPage, Mail and lots of others.

IE2 was free, but still rubbish. So was IE3.

So everyone used Netscape. A few even paid for it and Netscape Inc did tremendously well. This pissed off Microsoft, who don't really like anyone else making big money off their platform. So they worked away on IE until eventually, after about four versions, it was actually just about usable, kinda sorta ish.

And it was freeware.

Netscape wasn't, officially. It went through various stages, including being free only for non-profits and educational institutions, but it ended up proprietary, closed-source shareware. Home and non-commercial or non-profit use was free, businesses were meant to buy licences. Which most didn't.

It went through a whole bunch of versions, all of which were market-leaders in their time.

Netscape 1 was just a browser.

Netscape 2 added an email client and USENET news-reader. Not RSS, what we call a news-reader today, that hadn't been invented yet. Netscape 2 was a fair bit bigger than Netscape 1.

Netscape 3 Gold added web-page editing too. It was bigger still.

Netscape 4 sort of forked, internally: there was Netscape Communicator, a suite including a browser + email + news + address book + web editor + a proprietary shared diary - a huge app for the times.

And separately, there was Netscape Navigator, which was just a browser once again and thus was relatively svelte and quick - so naturally it never got updated past 4.0.x.

In the end, once IE was usable enough, everyone used that instead. Netscape Communicator was big, sluggish, took loads of memory and was inefficient - and it cost money. For instance, every time the window was resized, it re-rendered the entire page, as the rendering engine built a static page display for the current window dimensions. This was at the time when live window resizing was a trendy new feature of Windows - it was an extra in the same Plus! pack for Windows 95 that introduced IE to an indifferent world, and had even been retro-fitted on to MacOS 8.

Netscape complained that IE, a rival for their commercial product, was being given away for free - which counts as illegal restraint of trade. In response, MICROS~1 just bundled it with Windows and blithely claimed it had always been there, even though it wasn't in Windows 95 or Windows NT 3 and they also offered it for Mac and Unix. The US Department of Justice, remarkably, swallowed this, even though it was demonstrably utter bollocks, and let MICROS~1 off.

When Netscape Corp was bought out by AOL and broken up, the company's last act was to make the as-yet-unfinished Communicator 5 open source under its original codename of Mozilla.

After more than two years of work, this eventually became the Mozilla Application Suite, also the basis for AOL's Netscape 6 and 7. Netscape 6 was based on the unfinished Mozilla 0.6 code, and Netscape 7 on the final but unpolished Mozilla 1.0. AOL then outsourced it; Netscape 8 was based on Firefox 1 and Netscape 9 on Firefox 2. All were freeware; Mozilla itself was FOSS.

Mozilla was the Linux browser. It was the best FOSS browser, but that was because it was also pretty much the only FOSS browser. It was also a huge big lumbering thing, like Communicator before it, and it was unpopular on Windows and Mac (although I used it myself, as I am not a big Microsoft fan, as you might have worked out.)

Then Dave Hyatt and some mates, including a chap called Ben Goodger, stripped Mozilla down to just a browser, reinventing Navigator as if it were a new concept. They called it Mozilla Phoenix. Rising from the ashes, you see.

Phoenix, the BIOS people, complained.

They renamed it Firebird.

Firebird, the FOSS database people, complained.

They renamed it Firefox, which is a made-up word and obscure enough that nobody minded. It did brilliantly and still does today. The Mozilla Foundation consequently abandoned the Mozilla Internet Suite. The legendary open-source community took it up, renamed it SeaMonkey and it's still updated. I still use it occasionally myself. It's OK. It hasn't lost any weight, but the relentless advance of computer technology means that it's no biggie any more.

Firefox is now under some threat from Google Chrome (one of whose developers is a certain Ben Goodger). Chrome is based on Apple's WebKit but with a better UI than Safari (a project headed, amongst others, by one Dave Hyatt). WebKit is Apple's cleaned-up, enhanced version of KDE's KHTML rendering library from the Konqueror browser. WebKit is so much better that KDE have given up on KHTML and now use WebKit too.

Now there are basically four main families of browser:
  • Internet Explorer. Windows-only nowadays, but to most people, IE is The Internet. IE6 sucks bigtime, but tons of big companies are wedded to it, so it shambles on, undead. I suppose that makes it a sort of zombie used by dinosaurs, which actually sounds kind of cool. IE 7 and 8 are sort of OK, if you're the sort of person who doesn't mind sharing needles with strangers.

  • Mozilla, AKA Firefox, SeaMonkey, Camino and loads of others.

  • Both, ironically, while being lifelong bitter rivals, are descended from Mosaic.

  • Then there's WebKit, AKA KHTML, AKA Chrome, Safari, Konqueror, the Nokia Symbian browser and others. It was developed from scratch in the late 1990s.

  • And Opera, doing its own idiosyncratic thing for seventeen years. "MultiTorg" coexisted with Mosaic back when giants walked the Earth.
  • liam_on_linux: (Default)
    In case any loyal readers ;¬) missed them, I had a series of pieces on Linux server distros published last week on the Register. They ran them out-of-order; this is the sequence I intended them to go in...
    liam_on_linux: (Default)
    It's been 20 years now since the GNU microkernel Unix, the HURD, was announced. So where is it?

    Bear in mind in the following that when I speak of an OS I am talking about the core OS - the kernel and essential services. Not the shell or the filesystem or user tools like ``ls'' or ``more'' or ``vi'' or anything, and certainly nothing to do with relatively trivial outer layers such as graphical user interfaces.

    The GNU HURD was a very ambitious project: to build a complete, UNIX-compatible OS on a microkernel basis. Microkernels ("µkernel" for short) are very hard to make work, but it has been done - QNX, Chorus, Amoeba and others are all technically microkernels, for instance. Not are really Unices, though, although QNX sort of superficially resembles one enough for developers to feel some familiarity.

    Contrary to popular belief and Apple & NeXT's strongly-promoted message, neither Mac OS X nor NeXTstep before it were technically microkernels.

    The idea of a microkernel is that only a tiny piece of code runs in the processor's (or processors') Ring 0; the rest of the OS is composed of small pieces of userspace code, running in Ring 2 or 3 (using x86-32 rings for reference here). The modules, called servers or daemons, all communicate with each other to work together as a complete OS.

    The theory is that because the OS is very modular, it is easier to work on, more reliable and more robust. If a server dies, it can be restarted and the rest of the kernel will not be affected.

    The first microkernel to get much real-world recognition was Mach, designed at Carnegie-Mellon University in the USA. Mach didn't get to a reasonable level of completion until Mach 3.0 but the earlier versions spawned a host of projects.

    One was OSF/1 by Digital Equipment Corporation, which became Compaq Tru64 and was killed by HP. Another was MkLinux for the original Motorola 680x0-based Apple Macintoshes.

    And one was NeXTstep, which became Mac OS X.

    To make their new OS viable, NeXT wanted to make something Unix-compatible, so they took a huge chunk of the kernel of BSD (not FreeBSD or NetBSD or OpenBSD, this is before them) and built it directly into the Mach kernel as a sort of "Unix compatibility later". This means that the kernel is no longer "micro" at all - it has a honking great monolithic lump of old Unix code bolted onto it.

    The result is called Xnu, and whereas it may not be the most elegant solution, it certainly works. It's now the best-selling Unix ever, estimated to outnumber, in both installed systems and number of users, every other commercial Unix and Unix clone put together.

    GNU took the Mach kernel as well as the basis for HURD, but it tried to do things properly, the pure microkernel way. When Linus Torvalds wrote his kernel, he didn't expect it to compete with HURD - it was meant to be a small quick hack. As he famously said in news:comp.os.minix: "I'm doing a (free) operating system (just a
    hobby, won't be big and professional like gnu) for 386(486) AT clones."

    That was the first announcement of what became and was later named Linux. (It had been called "Freax".)

    The thing is, the GNU developers found that writing a microkernel-based Unix is very very very hard.

    When Linus made his post, Prof. Andy Tanenbaum, a very respected academic in OS research and the author of Minix, the OS that begat Linux, said Linux was obsolete: the direction of the future was clearly microkernels. (And indeed Tanenbaum has produced several µkernel OSs himself.)

    Technically, at least in a theoretical sense, he was right, but Torvalds' practicality in deciding to implement a simple, classical, one-big-monolithic-lump-of-code type of old-fashioned Unix kernel has been proved correct - Linux is now mature, sophisticated and highly usable. Proprietary Unixes such as HP-UX, AIX or Solaris might still do some things better, but Linux is doing very well.

    Meanwhile, 20 years later, the GNU team have not made all that much progress with the HURD.

    About four years ago, it got to a stage where it looked like it could be used for simple tasks. Debian built a version of the Debian GNU distro around the HURD instead of the Linux kernel. (There are also Debians built around the various BSD kernels.)

    The announcement was made on Slashdot; the infant OS was hosting its own website. The site immediately collapsed under the onslaught.

    Soon after this, the team decided that things had moved on since the creation of Mach in the mid-1980s and moved the HURD onto a newer, more sophisticated and mature µkernel: L4. This was a very big step backwards, so arguably the HURD is less complete and ready now than it was then. Things have since moved on from L4 as well - it has successors of its own, and research projects are looking at those as possible HURD bases too.

    Personally, I think that one day Linux is just going to prove too big and too complex to maintain and develop efficiently any more, and some developers might move on to a successor - something more modular. I'd have liked to see kernel 2.6 named 3.0 when it was released: there was no planned 2.8 or work-in-progress 2.7; indeed, the old stable-and-development-kernels-in-tandem model has been discarded.

    But perhaps some day a modular Linux 3.0 might be started.

    There are other directions. For instance, the chaps who originally developed Unix (of which Linux is really just a re-implementation) went on to refine and develop their ideas further in Plan 9, which later evolved into Inferno. These are networked OSs, with resource sharing between nodes as an integral OS concept, rather than something bolted on later as it is with Linux, Unix, Windows and so on.

    I do not understand the technical details, but the structure of Plan 9 apparently makes the whole concept of µkernels irrelevant - it is functionally divided into pieces already, just not along the same lines of privileged tiny kernel + user-space servers as µkernel OSs. I'd like to see development on Plan 9 picked up and the enhancements of modern Unixes, such as Linux, brought to it. That could be something very special.

    Also, given that other µkernel research OSs, such as Minix 3, have made it to a complete, functional condition, perhaps the HURD design is flawed and the team should drop it and move on to a HURD 2 or something. I don't know enough about the minutiæ.

    The HURD isn't finished, and maybe it never will be, but it's been a very interesting project all the same. And while µkernels may not be the right way to go, I think the evidence from Minix 3 and Amoeba and QNX and so on is that they can be made to work, and perhaps that is how things will in fact go one day.
    liam_on_linux: (Default)
    I think it might have done quite well.

    OTOH - and I loved OS/2; I have spent more cash on OS/2 than on all other PC software put together in my entire computing life, possibly more than on anything except Spectrum games, and maybe more even than that - even as a fan, I found it a pig to install, a pig to network, a pig to install drivers on, etc. etc.

    When I tried the Windows 4 beta, I was dazzled. THIS is how it should be. It Just Worked, and setup & tweaking was a dream. Explorer, so elegant! Device Manager - I nearly wept for joy. No 2000-line CONFIG.SYS file! No separate windows for the directory tree and the directory contents!

    The WPS (OS/2's Workplace Shell), elegant & sophisticated? My arse it was. A half-assed Mac ripoff.

    And for all OS/2's alleged reliability, Fractint could easily kill it - the whole machine. Win95 was no better, and as the 32-bit apps & shonky drivers piled up, considerably worse. Then came the horrors of Win98. And SE. And ME.

    But at first, even the beta of Windows 4 was about as good. And DOS drivers worked, at a push. And so did DOS games and things. The long-filenames-on-FAT hack was a hack, but it *worked*. Make a long filename on HPFS and look for it from a DOS window or WinOS2 - gone! Invisible! You can't have it, mate, tough.

    Then they hacked that to give us FAT32, and lo, it worked and was just like the old days. Incremental steps, no big bangs.
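    For the curious, here is roughly what that long-filename hack looks like on disk (FAT32 kept the same scheme) - a sketch based on the published VFAT directory-entry format, not code from any shipping driver:

        /* Sketch of a VFAT long-filename directory entry, per the
         * published on-disk format. Each 32-byte entry holds 13 UTF-16
         * characters of the long name, marked with attribute 0x0F
         * (read-only + hidden + system + volume label). Old DOS treats
         * that combination as a nonsense volume label and skips the
         * entry, so plain DOS still sees the real 8.3 entry stored
         * alongside - which is why the hack degraded gracefully where
         * HPFS long names simply vanished. */
        #include <stdint.h>

        #pragma pack(push, 1)
        struct vfat_lfn_entry {
            uint8_t  seq;        /* sequence no.; bit 6 flags the last piece */
            uint16_t name1[5];   /* long-name characters 1-5, UTF-16         */
            uint8_t  attr;       /* always 0x0F: the "old DOS, skip me" mark */
            uint8_t  type;       /* always 0                                 */
            uint8_t  checksum;   /* checksum of the paired 8.3 short name    */
            uint16_t name2[6];   /* characters 6-11                          */
            uint16_t first_clus; /* always 0, so old disk tools aren't fooled*/
            uint16_t name3[2];   /* characters 12-13                         */
        };
        #pragma pack(pop)

        _Static_assert(sizeof(struct vfat_lfn_entry) == 32,
                       "must match the 32-byte on-disk directory entry");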

    But when the state of the art was the horrors of Windows 3.x on DOS - even DR-DOS, optimised until it bled with QEMM - or the driver-less and app-less incompatible nightmare of NT 3.1 or 3.5 (if you could afford a £2500 PC to run it well) - OS/2 really actually was "a better DOS than DOS, a better Windows than Windows".

    But I still wonder... If OS/2 1 had been a 386 OS, and had swept away Quarterdeck's QEMM and DESQview, killed the infant BSD4.4-Lite on 386, ensured that Windows 3.0 had been aborted... If it had used V86 mode to flawlessly multitask DOS apps, booting DOS and its drivers off a floppy for those troublesome programs for near-perfect compatibility...

    Well... Program Manager and File Manager - which in the 1990s everyone thought were Windows 3.0 innovations, but which actually came from OS/2 1 - weren't so bad. I kinda liked them, actually. I had them tuned into a very efficient, convenient GUI, with loads of custom hotkeys for launching and switching apps, which always damned well worked - unlike Explorer, which, if Windows was narked, would just ignore you, or launch 876 extra copies of your app and then fall to its knees and die.

    It coulda been a contender. In 1987, we knew no better. We might have gone for it.

    But knowing what I do now about OS/2 2 compared to Windows 95... I am not sure that we were not a whole lot better off with what we got than what might have been.

    I remember impotently screaming abuse at a Warp Connect box, just trying to get it onto my LAN and onto the Internet via dial-up at the same time. Either Win95 or NT 3 was vastly better than that.

    OS/2 was, in a horrible way, more DOSsy than DOS. Everything was hand-configured in vast ASCII config files, which you had to hand-massage into perfection with excruciating care - then, if you were particularly masochistic, optimise for performance. I never did get Warp 3 to drive both the graphics card and the sound card of either of my two 486 laptops at the same time. One or the other, but not both. And one of them was a bloody IBM!

    I would, in an odd way, have liked to see OS/2 thrive, but you know... Despite my irrational nostalgia for it, on the whole, when Windows 95 gave us plug-and-pray, I mean, plug-and-play, and power management and suspend/resume and so on, and then NT4 gave us a vaguely modern GUI... Then Windows 2000 brought it all together into a single whole which, if not exactly seamless, did slap enough makeup on Frankenstein's Monster to make it look presentable...

    Sorry to say it, but I think we were better off.

    I know, heresy, praise for Microsoft from one of the "Linux Taleban". Shocking.

    Of course, after that it all went a bit wrong. I know everyone loves XP in hindsight, but with all the bloat, I wasn't and am not so sure. Themes? Really? Do I need that? I know, I can oh-so-intuitively switch to Windows Classic in Display Properties, then run SERVICES.MSC and stop the Themes service and disable it... But I can't uninstall Movie Maker or IE or any of the other cruft, no way José. I can't even move the hibernation file to another drive or partition.

    Then came Vista and we learned to love XP.

    Then came 7, and everyone loves Windows again, except for those of us who found it handy to run a command-line app full-screen occasionally.

    I think I'll stick to Linux, thanks.
    liam_on_linux: (Default)
    Those who don't read Twitter or Facebook may be unaware of the series on Linux I've done for The Register this week (amongst other stuff).

    So here are some links...

    http://www.theregister.co.uk/2010/06/21/reg_linux_guide_1/

    http://www.theregister.co.uk/2010/06/23/reg_linux_guide_2/

    http://www.theregister.co.uk/2010/06/24/reg_linux_guide_3/
    liam_on_linux: (Default)
    I'm amused and a little concerned at the lack of comprehension being shown on the Ubuntu mailing lists concerning Microsoft's moves to support Linux on Hyper-V. Firstly, people are not registering what Hyper-V actually means, and secondly, MS is getting credit for releasing its VM additions as GPL, when actually it was just a way of getting itself out of trouble for using GPL code in a proprietary program.

    So I wrote this...

    Hyper-V is *exactly* the same sort of move as Internet Explorer was.

    Secondly, MS did not choose to give away the source; it had to, because it had been caught violating the GPL.

    For those too young to remember, or with short memories: when Windows 95 came out, it did not include a Web browser. Instead it had a client for MS's proprietary online service, the Microsoft Network (MSN) - which is now totally gone, dismantled, though the name lives on as that of an MS promotional website and a proprietary instant-messaging client.

    Then Netscape came along. It offered a multiplatform web browser and email client which ran on Windows, Mac and Unix. Closed-source, proprietary, but free for home and non-commercial use.

    Netscape did very well. Its browser soon dominated the Web. MS had totally failed to see that the Web was coming, as demonstrated by its very basic v1.0 web browser being relegated to a paid-for optional add-on for Windows called the Plus! pack, which mainly contained extra themes and some games.

    Microsoft responded by aggressively developing Internet Explorer, giving it away for free to all users of Windows 3.1, 95 and NT and bundling it in with Windows 95B and Windows NT4.

    This was anti-competitive behaviour - there are laws against this kind of thing, for good reasons which are today mostly forgotten. The same sorts of laws that used to protect us from bank speculation and so on were dismantled in the 1980s and 1990s, and the result is the current stock-market crash and worldwide recession.

    Microsoft's legal defence against accusations of illegal bundling was that IE - remember, originally an optional extra - was not bundled with Windows but was an integral part of it. Despite demonstrations in court that this "integral part" could be removed, MS escaped without any meaningful penalty.

    Result: in the end, Netscape went broke.

    Now, VMware is making good money off the MS platform, just as Netscape did. So MS bought Connectix for its Virtual PC hypervisor, gave the Windows versions away for free, built the core into Windows Server and called it Hyper-V - which is also free.

    It's specifically designed to stab VMware in the back by undercutting its product with a free equivalent. It is exactly the same illegal action that MS took with IE 14 years ago. It got away with it then and it will get away with it now.

    But for Hyper-V to be accepted, it must support the other OSs people use - which, today, means Linux.

    So, MS produces add-ins for guest OSs running under Virtual PC, Virtual Server and Hyper-V.

    In this case, it used GPL code to produce the add-in and was caught; rather than fighting the case, it chose to release the whole lot as GPL. Doubtless many inside MS are not too happy about this, but it means the company can buy good PR out of what originated as a careless mistake.

    http://www.theregister.co.uk/2009/07/23/microsoft_hyperv_gpl_violation/

    http://www.osnews.com/story/21882/Microsoft_s_Linux_Kernel_Code_Drop_Result_of_GPL_Violation

    And some cogent analysis:
    http://www.theinquirer.net/inquirer/news/1469009/microsoft-donates-code-linux
