Dec. 20th, 2008

It's gradually being realised that TCO is more than H/W + S/W. Virtualisation and thin clients are currently very popular: VDI, Virtual Desktop Infrastructure, is the Next Big Thing - you put graphical terminals everywhere and run lots of virtual XP machines on a few big servers, with the displays exported to those terminals. It's easier than WinFrame or Terminal Server and more flexible - the VMs don't all need to run the same image, people can have conflicting versions of apps and "their own" machine which follows them around the network. They can suspend it remotely and come back to all their windows exactly where they left them, and so on.

But this misses a number of points.

#1 You still need to buy all the Windows licences. Expensive.

#2 You need some seriously BFO servers to run hundreds of VDI instances. Expensive.

#3 If the server goes down, hundreds of users are screwed. Downtime -> expensive.

#4 The "thin clients" don't do much - no local processing to speak of - but they are each a capable computer in their own right, with local RAM and CPU and graphics and sound and some kind of OS, possibly booting from Flash. Not very expensive, but still a significant cost, and of course, it's inefficient to chuck away hundreds or thousands of PCs and replace 'em all.

What's the point of it all, then?

Well, it's not saving upfront costs. It's reducing maintenance costs. Your techies work in a datacentre, looking after a relatively small number of servers running lots of instances of near-identical Windows system images. They don't need to go into the field - terminals are interchangeable and cheap enough that branches can carry a few spares. Cheap, relatively unskilled techs can go out into the field and swap boxes, and once the network's built, that's about it.

This is actually taking off. Lots of big vendors are pushing it hard: MS, VMware, Citrix and Parallels on the S/W front, Wyse etc. on the terminals, and all the big server vendors are dribbling at the prospect.

But it's an awful lot of work just to reduce management costs and it's got snags of its own.

So. Alternatives? Using our favourite FOSS OS?

Linux desktops

Rolling out Linux onto all your desktops is not a big help. Yes, it saves licence costs, but each machine takes an hour or so to provision, there's still loads of local state, and you have to retrain all your techs to install and support hundreds of Linux installations. Nightmare. Very hard sell.

Linux thin clients or terminals or X terminals

Using Linux as the thin client OS - well, fine, but you still have loads of thin clients, with the concomitant server load. Doesn't solve anything, just maybe makes the thin clients a bit cheaper. You could roll up a bootable-CD thin client distro that didn't require any installation at all, but it doesn't help the problem of all those clients and all those big expensive servers.

Turning PCs into diskless workstations, only, er, with disks

Now, for years, I've been interested in stateless OSs for some modern kind of network workstation. Diskless workstations were a good idea once, and I tend to think they could make more sense than "thin clients", which aren't thin at all - they're just graphical terminals. Terminals dump the entire computation (+ RAM + storage) load onto the server. RAM and storage are relatively cheap; it seems to me to make more sense to make the best possible use of local resources as well as reducing management. X11 is no help here - it's just an alternative to MS RDP or Citrix ICA or whatever.

The ideal, ISTM, would be to have an identical, stateless OS image rolled out to all your PCs, something with no local config files at all. PCs would be totally interchangeable; if one got buggered, it could easily be replaced.

Now companies like Progeny worked on this for ages, trying to separate out the local config from the global config and produce modular Linuxes that could keep just their important state on the server. It's a tricky job; AFAIK it never got done properly.

But ISTM that there is another way, using something that's already been done. Live CDs.

Live CDs are effectively stateless. They boot on completely unknown hardware, detect it on the fly on the first pass, and get you to a usable graphical desktop complete with apps and a network connection.

But they're slow, 'cos they run from CD, and CDs are slow.

Now with some LiveCD distros, like Knoppix, you can copy the CD contents to HD and it becomes a "normal" distro. But that's no good - that's local state again.

Other CD distros, like Puppy, can copy a read-only image of the CD to HD, boot off that via some kind of loader, and keep their state in a separate little file. That's more like it.

So what you need, ISTM, is a distro that works like Puppy - boots off an image on the HD, but as it boots, according to MAC address (say), it gets access to a bit of system-specific config data on the server. Screen resolution, stuff like that. This is a small package of data, and the tech to do that from local storage is known from several liveCDs. It just needs extending to work over a LAN. It broadcasts a request for a server and if it can "see" a server that knows that machine, it gets handed its local state info. This can be cached locally, outside the system image, for offline operation.
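Something like this, say - a Python sketch of the client end only; the port number, the "CONFIG?" request string and the JSON reply format are all invented for illustration, not any existing protocol:

    # Boot-time config fetch: broadcast "who knows this MAC?", cache the reply
    # for offline operation. Port, request format and reply format are made up.
    import socket, json, os

    CACHE = "/var/cache/nodeconfig/local.json"   # lives outside the OS image
    PORT = 9999
    TIMEOUT = 3.0

    def local_mac(iface="eth0"):
        # Read the MAC straight out of sysfs; no extra libraries needed.
        with open("/sys/class/net/%s/address" % iface) as f:
            return f.read().strip()

    def fetch_node_config():
        mac = local_mac()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.settimeout(TIMEOUT)
        try:
            # Broadcast a request and wait for one server to answer.
            sock.sendto(("CONFIG? " + mac).encode(), ("255.255.255.255", PORT))
            data, server = sock.recvfrom(65536)
            config = json.loads(data.decode())
            # Cache it locally so the box still boots with no server in sight.
            os.makedirs(os.path.dirname(CACHE), exist_ok=True)
            with open(CACHE, "w") as f:
                json.dump(config, f)
            return config
        except socket.timeout:
            # No server visible: fall back to the last cached copy, if any.
            if os.path.exists(CACHE):
                with open(CACHE) as f:
                    return json.load(f)
            return {}   # first boot, off the network: sane defaults apply
        finally:
            sock.close()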

Then when the user logs in, they get a network home directory, so their desktop, their files, their settings etc. follow them around the network. Again, this can be cached locally. This is much the same as Windows' "roaming profiles", only 'cos Linux is a bit better designed, you don't sync a folder full of temporary files and the browser cache every time.
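The sync itself is nothing exotic - plain rsync with a few excludes would do the job. A rough sketch, with a made-up rsync module name standing in for the real home-directory server:

    # Login-time home sync; "homeserver::homes" is a placeholder. Caches and
    # temp files are excluded outright instead of being shuffled back and
    # forth on every login, Windows-roaming-profile style. Because --delete
    # is used without --delete-excluded, the local cache dirs survive.
    import subprocess

    SERVER = "homeserver::homes"        # hypothetical rsync module
    EXCLUDES = [".cache/", ".mozilla/firefox/*/Cache/", ".thumbnails/", "tmp/"]

    def sync_home(user, direction="down"):
        src = "%s/%s/" % (SERVER, user)
        dst = "/home/%s/" % user
        if direction == "up":           # at logout, push changes back
            src, dst = dst, src
        cmd = ["rsync", "-a", "--delete"]
        for pattern in EXCLUDES:
            cmd += ["--exclude", pattern]
        cmd += [src, dst]
        subprocess.check_call(cmd)

    # e.g. sync_home("alice") at login, sync_home("alice", "up") at logout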

Now if you had a network of hundreds of machines all running off local images of the CD, that's an improvement, 'cos now you have a uniform stateless OS. However, when it comes round to upgrading them all, you're screwed again, because you need to update all those machines. Even if you write some snazzy utility to copy the image down off the network, a few hundred PCs loading a half-gig image is going to cripple your network and it will be disastrous over a slow link to a remote site.

So what you need is a pre-boot environment. Not something techie like PXE, which not all PCs support - and besides, you're going to need some kind of bootloader anyway.

So, you have a Linux system that loads a Linux system.

The first-stage loader boots up, gets an IP, connects to a specific disk image server and checks if the local image is current. If it isn't, it rsyncs down the latest one. (It might even make sense to keep a backup copy, in case of corruption or a lost connection during the sync. Grandfather-father-son. We're probably only talking about half a gig a pop and I don't think you can buy a HD smaller than 80GB these days. Even if you boot from SSD, 2G to 4G is something like £5 worth of Flash.)
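In rough Python, the first-stage loader's sync step might look something like this; the rsync module name is a placeholder, and the real thing would live in a little initramfs script rather than a full Python install:

    # First-stage image check with grandfather-father-son copies, so a
    # corrupted or half-finished sync never leaves the box unbootable.
    # "imageserver::images/desktop.img" is a made-up path.
    import subprocess, os, shutil

    IMAGE_DIR = "/boot-images"
    CURRENT   = os.path.join(IMAGE_DIR, "desktop.img")    # son
    FATHER    = os.path.join(IMAGE_DIR, "desktop.img.1")
    GRANDDAD  = os.path.join(IMAGE_DIR, "desktop.img.2")
    REMOTE    = "imageserver::images/desktop.img"         # hypothetical

    def rotate():
        # Oldest copy drops off the end; the current image becomes father.
        if os.path.exists(FATHER):
            shutil.move(FATHER, GRANDDAD)
        if os.path.exists(CURRENT):
            shutil.copy2(CURRENT, FATHER)

    def update_image():
        rotate()
        # rsync's delta transfer only fetches the changed blocks of the
        # image, so a routine update is tens of meg, not the whole half-gig.
        subprocess.check_call(["rsync", "--inplace", REMOTE, CURRENT])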

Once the local image is current - and the sync shouldn't take long unless you do a whole-disk-image upgrade; we're probably talking tens of meg, no more - the startup kernel kexecs the "live" kernel in the disk image. No reboot, so it's fast.
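The hand-off itself is just standard kexec-tools usage (-l to load the new kernel, -e to jump into it); something along these lines, assuming the image has been loop-mounted and with placeholder paths:

    # Load the kernel and initrd out of the freshly synced image, then jump
    # straight into it - no firmware reboot. Paths and command line are
    # placeholders for whatever the real image layout turns out to be.
    import subprocess

    KERNEL  = "/mnt/image/boot/vmlinuz"
    INITRD  = "/mnt/image/boot/initrd.img"
    CMDLINE = "root=/dev/loop0 ro quiet"     # placeholder kernel command line

    subprocess.check_call(["kexec", "-l", KERNEL,
                           "--initrd=" + INITRD,
                           "--command-line=" + CMDLINE])
    subprocess.check_call(["kexec", "-e"])   # off we go into the new kernel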

Done with fully network-booted workstations this would be inefficient, though it could still work. With a small amount of local storage - a few gig - you have space for:

[a] the startup kernel and a simple FS to keep a couple of disk images in. This sort of job is simple enough that a bootable floppy would almost be enough - the system only needs to be a few meg of code. It needs a kernel, network drivers and rsync and not a lot else. This is the tech of the Swiss product Rembo, now bought by IBM and thus gone from the Free world.
http://www.appdeploy.com/tools/detail.asp?id=39
http://www-01.ibm.com/software/tivoli/products/prov-mgr-os-deploy/

[b] a swap partition, alleviating one of the performance problems of LiveCD operation. You could live without this if it got corrupted, lost or whatever.

[c] a small local cache partition containing a backup copy of any node-specific state and any user-specific state. Again, losing this wouldn't be fatal.

For provisioning machines, making spares etc., just leave a stack of bootable CDs of the live image around. Setting up a machine is a matter of turning it on, inserting the CD, pressing Reset or Ctrl-Alt-Del if necessary, and waiting. The LiveCD boots as normal. In the background, a task kicks off and looks for an empty local drive. If it finds one, it installs the startup system, copies an image of the CD onto the disk, and updates it from the server.
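A sketch of what that background task might look like - device discovery only; the actual partition/format/copy steps are left as comments, because they depend on whatever layout you pick:

    # Look for a disk with no recognisable partition table or filesystem and,
    # if one turns up, hand it to the (elided) partition/format/copy steps.
    import subprocess, os, glob

    def blank_disks():
        disks = []
        for dev in glob.glob("/sys/block/sd?"):
            name = "/dev/" + os.path.basename(dev)
            # blkid -p prints nothing for a device it can't identify at all,
            # which is a reasonable "this disk is empty" test.
            probe = subprocess.run(["blkid", "-p", name],
                                   capture_output=True, text=True)
            if not probe.stdout.strip():
                disks.append(name)
        return disks

    if __name__ == "__main__":
        for disk in blank_disks():
            # 1. partition: a small startup partition, a big one for the OS
            #    image(s), swap, and a little cache partition
            # 2. copy the startup kernel + rsync tooling into the first one
            # 3. copy the CD's system image into the image partition
            # 4. install GRUB and make the startup system the default entry
            # 5. kick off the normal image-update path against the server
            print("empty disk found, would provision:", disk)
            break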

Indeed, you could even set it up so that, like WUBI - the Ubuntu installer that copies itself into an existing Windows partition - it doesn't remove any existing data on the hard disk; it just quietly inserts itself into any free space it finds, installs GRUB and makes itself the default, with an option to boot back into the old Windows system if desired. Non-destructive, reversible provisioning.

Inside the disk image, well, then it's a matter of making a simple generic desktop that looks as much like XP as humanly possible, with Firefox plus addins, an email client, OpenOffice, etc. All the usual stuff. The one extra I can see being really useful would be a bundled Terminal Server client, so that all those expensive-to-maintain PCs become identical, completely interchangeable thin clients that a trained monkey could set up. All the benefits of thin clients, all the remote manageability and so on, because in effect they're all booting off the image on the server - except without sending the whole OS and apps over the wire each time.

Ideally, the Terminal Server client would work in a mode like VMware Fusion's Unity on Mac OS X, where the remote apps' windows mingle with the local windows, so users aren't presented with two desktops. They have one desktop, but some shortcuts or menu entries, unbeknownst to them, point to remote applications. E.g. the local Linux boxes could all run Outlook remotely to connect to the Exchange server. Use something like Citrix or Parallels to host that and it would require /massively/ less server load than an entire virtual desktop image per user - and if you set it up right, you don't need hundreds of Windows licences, just lots of Outlook licences on a small number of copies of Windows.
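As a taste of how a "remote app" shortcut could work, here's a trivial wrapper a menu entry could point at. The server name and Outlook path are placeholders; it uses rdesktop's standard -u and -s switches (log in as this user, run this program instead of a full remote desktop):

    # The user clicks what looks like a local Outlook icon; this opens an RDP
    # session to the terminal server and runs just Outlook in it. Server name
    # and application path are illustrative placeholders.
    import getpass, subprocess

    SERVER = "ts01.example.local"                       # hypothetical TS host
    APP    = r"C:\Program Files\Microsoft Office\OFFICE11\OUTLOOK.EXE"

    subprocess.check_call(["rdesktop",
                           "-u", getpass.getuser(),     # reuse the local login name
                           "-s", APP,                   # start Outlook, not a desktop
                           SERVER])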

What's the point of all this?

Well, it's a way to sell desktop Linux. This offers a set of (I hope) compelling advantages:

- massively reduced licence costs next to Windows, virtual or otherwise. Remember all those CALs you need even for thin clients.
- repurpose existing PCs into thin clients with a thin, light OS. Instead of proprietary thin clients, hardware replacements and upgrades use dirt-cheap generic hardware. Testing them for suitability just means booting them once from CD or USB.
- no conventional locally-installed OS, no local state, so PCs become interchangeable. In emergency, can just run from CD/USB key. Something goes wrong with a PC? Bung in the CD, reimage it, and user can work while it's happening.
- reduces network load compared to thin clients or network booting
- reduces server load compared to thin clients or virtualisation
- field support requires very little staff training or knowledge
- major reductions in management cost
- much less disruptive migration than replacement of existing kit
- greener: reuses existing machines & gets improved performance
- secure: no need for local antivirus etc. (so further cost savings)

ISTM that this could actually give Linux a persuasive advantage over Windows as a business OS.

So go on, rip the idea to shreds. What have I missed?

The main hard part is selling it to Windows houses as an alternative. That's why I'd suggest that part of it be a theme to make it look as much like Windows as possible, either XP or Vista, as desired - e.g.
http://lxp.sourceforge.net/
