liam_on_linux: (Default)
A rock'n'roll Friday evening: setting up a Dell Inspiron Mini 12 for [livejournal.com profile] ignatzz's little sister Sarah. She chose this as her sole PC, as she likes the idea of Linux. It's bigger than your average "netbook" - a really good 12" 1280*800 screen - but very thin and light. It's nearly as svelte as a MacBook Air, but a grand cheaper. The keyboard's not great - punctuation keys like comma and full-stop are half width - and it has a mere 40GB hard disk, rather than either a shockproof SSD or a capacious HD, but it's very slim, very light, almost silent in operation, and feels and looks smart and professional. I was quite taken with it. It fitted in her handbag alongside PSU and dongle, too. Very neat.

Snag is, she doesn't have broadband, so bought a Virgin Mobile 3G USB dongle - actually a Huawei E160, which on Windows also appears as a CD-ROM drive containing the software. No damned use on Linux, this functionality - you can't even mount the volume; it merely complicates the issue.

3G dongles allegedly work like a dream on Ubuntu 8.10, but the Mini 12 comes with 8.04 - the LTS edition, which seems a better choice to me than upgrades every six months. I've done those upgrades and they are far from foolproof.

My gods, though, getting it working was an uphill battle! It took roughly from 7:30pm to 11:30pm. Getting NetworkManager 0.7 installed wasn't too bad. Then I needed to add usb_modeswitch, in order to flip the dongle from virtual-CD mode to modem mode.

The new network manager needed 2 reboots to come to life. First reboot, nothing. I got no network icon in the notification area, so no way to connect to a wireless network at all. Happily, I had a cabled connection available. Then I installed modeswitch and tried again. Now the familiar little network icon appeared and seemed to detect the dongle, and I could add the connection info - but there was no option to go online. Spent hours fiddling with this; Google was no help. It's as if nobody else has encountered the problem, which I find unlikely. Eventually, I tried the Vodafone data-card dialler tool, and it told me no supported modem was connected. I removed and reconnected the dongle, and lo, it appeared and the Vodafone tool saw it.

And so did the Network Manager - even though all sources claim the dongle needs to be plugged in before you boot up - and finally I got a "connect" option. Some wrangling with the right APN - found via the invaluable table on FileSaveAs.com - and it tried but failed to authenticate. (An APN is an Access Point Name, and it varies from one ISP to another. GPRS connections don't need a phone number, username or password - they know who you are from your SIM's IMSI number - but the machine needs to know the APN to get a working Internet connection. "Internet" is common, but Virgin uses "goto.virginmobile.uk", which one is unlikely to find by guesswork.)

Next stumbling block: the dialler kept asking for a password, but password had she none. Disabled authentication on the PPP tab and it finally worked.

As the very grateful Sarah said as she disappeared into the night, "I could never have got that working! I'd have just returned the dongle and the notebook as non-working."

A bit more work is required in this area, methinks. A backport of the new network manager to the current LTS edition, perhaps, and a well-written HOWTO. I scoured dozens of websites and forums to find the info, and naturally, much of it is mutually contradictory. The bulk of reports tell you to install a modem dialler such as wvdial or gnome-ppp - or the Vodafone thing - which is rather counterproductive. Firstly, the new Network Manager includes 3G support - GPRS, and CDMA for the Americans - including dialling and disconnecting (though sadly no signal strength, speed or traffic meters, which with a 3G monthly cap would be very useful things to have). Secondly, if you use an external dialler, GNOME doesn't "know" it's online, so Firefox and so on start up in offline mode all the time.

So, in summary, to use a 3G Dongle on Ubuntu 8.04:

[0] You absolutely need a working network connection to hand to begin. Ideally, a wired one; at various stages, you might break the Wifi client, so if that's all you had, you'd be stuffed. Get the laptop on a fixed connection and ensure it's fully updated. You don't want to download hundreds of meg of patches over your bandwidth-capped 3G connection.

[1] Install Network Manager 0.7 and reboot
[2] Install usb_modeswitch if your dongle requires it; not all do the helpful-for-Windows-users virtual-CD-of-drivers thing
[3] In network connection properties, enter the connection settings including your ISP's APN
[4] If there's no connection option just before the list of wireless networks, remove & reconnect dongle until NetMan recognises it
[5] Try connecting. If it won't log in, see if the ISP requires a username. You may also need to enter a random password such as "password".
[6] If the dummy credentials don't work, try just disabling authentication altogether
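For reference, steps [0] to [2] boil down to something like this. Treat it as a sketch: stock 8.04 doesn't carry NetworkManager 0.7, so I'm assuming a source with the 0.7 backport is already enabled, and the usb-modeswitch package may likewise need fetching from the project's own site.

sudo apt-get update && sudo apt-get upgrade  # step [0]: patch fully over the wired link
sudo apt-get install network-manager network-manager-gnome  # step [1]: then reboot
sudo apt-get install usb-modeswitch  # step [2]: flips the dongle out of virtual-CD mode
lsusb  # handy check: does the dongle currently show up as a modem or a CD drive?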

Someone who had fewer issues than I did has documented it here, which you may find helpful.
liam_on_linux: (Default)
Is it me or are Ubuntu's dependencies fundamentally broken in places?

I don't much like Evolution as an email client -- too like Outlook, of which I'm not a fan -- but I can't simply remove it, because components as deep as the GNOME Panel itself depend on it. Why? I can understand why one or two components might, such as the "About Me" applet -- although I don't like that either -- but why does half of the GNOME desktop need one particular email client?

Yet, on the other hand, I've just been installing an Asus Eee 900 for a friend. I installed Xubuntu, as it's rather lighter-weight than full-on Ubuntu and he will never need the full desktop. Then the special Array.org kernel. Then I added all the restricted components -- and there's another thing, which I'll get back to -- and checked them out. All fine.

Finally I added the Netbook Remix launcher.

But no, it doesn't work. This iconic launcher had no icons and no menu bar. Why? Because it uses GNOME for them. But does it depend on GNOME? No.

So I had to start over, wipe the machine and install "proper" Ubuntu. Three hours' work down the drain.

(Yes, I could have done "apt-get install ubuntu-desktop" and sucked in all the GNOME stuff... But that would probably have filled up the 4GB SSD of the Eee PC and even if it didn't, left me to clean up all the now-redundant Xfce stuff, Abiword, etc. Cleaner to start afresh.)

So what is going on? Why does a shell that needs GNOME and GNOME components such as the go-home panel applet and maximus and so on not depend on GNOME, whereas I can't remove an email client I don't want from my GNOME system because large chunks of GNOME depend on one particular desktop application?

And speaking of components and restricted ones, it is a real pain in the neck to have to manually add in ubuntu-restricted-extras on every single Ubuntu box I install. They are not "optional" if you want to use the Web; Flash, Java and so on are pretty much mandatory. So, come to that, are the w32codecs from the Medibuntu repository.
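(For the record, once the Medibuntu repository is in your sources, the per-box ritual boils down to:)

sudo apt-get install ubuntu-restricted-extras  # Flash, Java, MS fonts, MP3 support etc.
sudo apt-get install w32codecs  # the proprietary media codecs, from Medibuntu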

I am not American; I am not in the USA; I never plan to live in the USA. Yet as far as I know, I have to jump through hoops installing this stuff because United States law forbids Ubuntu from including it - and that law does not apply to me. Other distros, which do include these elements, can be freely downloaded in Europe.

It's not just me. Ubuntu was founded and funded by a South African who lives in Britain, and the company behind it is incorporated in the Isle of Man in the British Isles. I am fortunate enough to have met Mr Shuttleworth and a fair few of the Ubuntu developers. Many of them, too, are European.

None of us are restricted by US law. Nor are Canadians or Mexicans or Uruguayans or Brazilians, all of whom are also American.

But for this one country, to meet its restrictions, Ubuntu is delivered crippled by the removal of all these essential components.

I am not some radical Stallmanite; I don't care about an all-Free-with-a-capital-F distribution. If I did, I'd be using Debian or gNewSense or something. Ubuntu is not a crusading Free distro, it's a no-nonsense get-the-job-done distro. I use it because it is easier and less cluttered than SUSE or Mandriva and is not some rolling alpha-test like Fedora. (I used to use those; I spent five or six years fighting with RPM and got very tired of it indeed.) Oh, and unlike Xandros, it's fairly current, is updated regularly, and is small-f free.

No, I am pragmatic; where the Free alternative is good enough, I'll use it. When it isn't, or if I need something non-Free, I'll use that instead.

Ubuntu needs those codecs, plugins and modules.

Can we not have an uncrippled version of Ubuntu for non-USA-citizens to download and use, without hassle overcoming US restrictions? It's even being sold in the shops now, yet even that version, as far as I know, doesn't include all the proprietary-content players.
liam_on_linux: (Default)
... But a beginner can have a bloody good try.

A year ago, I gave a friend of my mum's over here on the Isle of Man (where I am back for Yule) a laptop, and paid for them to get an ADSL connection. It's an old Vaio that used to belong to a client of mine. I replaced its dying 6G hard disk with a 20G one, but it only has 256MB of RAM and a PII-450. I put Ubuntu 7.10 on it, which actually runs quite well.

This week, back across the water, is the first time I've seen the machine since. My plan was to bung in 512MB of RAM and upgrade it to 8.04 LTS.

But...

Apart from their having dropped something into the keyboard - so that it beeps continuously when turned on and the Delete key only works intermittently - it still runs OK... But boy, had they messed up the system.

The desktop, Documents directory and home directory were full of about 2-3GB of Windows installers for the Google Toolbar, various games and nasty little freebies, Flash player and so on.

~/Documents had been renamed to ~/Documentsgoogle, and there were empty files and folders galore, most called things like "New Filejohn smith", "New Directoryjohn smith" and "Untitled Folderjohn smith (7)" (not their real name - and how did the long username get in there?). Many of these had been compressed into tarballs, too.

I'd simplified Ubuntu's dual panels into a single, Windows-style one. I'd got rid of desktop-switching and the Shut Down button, too.

From the remaining single panel, the main menu was gone, and so were all app quick-start buttons. The wastebin was gone, too. (Which is a shame, as it had no less than 301 Windows installers in it, but with no icon, you can't empty it.) There were 2 clocks, though, and 3 copies of the fast-user-switching control. (Note to self: remove the user-switching control and the "minimise all windows" control next time, too.) So, no way to start or run programs. There was a saved copy of eBay's CSS file and a tarball of it too. (How?!) Most of Gnome's file associations had been reset, and there were multiple entries for things like text files - right-click one and the first 20 options were to open it with the archive manager, then 15 entries to open it with Synaptic. Oddly, you couldn't open text files. ;¬)

Although the 50+ Windows Google Toolbar installers obviously hadn't worked - no, I am not putting WINE on it to make them work - the toolbar (or Google) had got itself installed anyway. So had about 20 other random extensions, including one in Albanian. I don't speak Albanian and neither do they; I had to Google the vaguely familiar word "shqip" to work out WTF it was. Beyond the fact that it was in Albanian, I have no idea what it was. The icon was of a jigsaw piece.

There were numerous bookmarks to 404 pages and other things that hadn't worked. And one or 2 to eBay auctions and stuff.

Somehow, the magic Firefox bookmark folder "Bookmarks Toolbar" had been renamed to "Ubuntu and Free Software links" or words to that effect. I don't know how, as I couldn't rename it back again. The original folder called that was gone.

Ubuntu was sufficiently hosed that the upgrade to 8.04 was impossible.

I am reformatting & reinstalling as I speak. There was not a single actual document file or anything to keep.

(And did you know that you can't use a live/desktop Ubuntu CD to upgrade an existing installation? I didn't. Apparently, only the Alternate CD can do that. Of course, I didn't bring one of them over with me.)

It's really quite impressive.

My mate denies it, but I suspect A Mate Who Knows About Computers (when I have my consultant's hat on, my most feared arch-enemy) has been looking at it, and has been utterly baffled by something that isn't Windows.

But while it still works, it's remarkable just how buggered-up an Ubuntu box can get in the hands of a novice, isn't it? I am irresistibly reminded of Verity Stob's classic State of Decay essay.

I thought Ubuntu was at the stage where it was getting pretty much Granny-proof.

I was wrong.
liam_on_linux: (Default)
It's gradually being realised that TCO is more than H/W + S/W. Virtualisation and thin clients are currently very popular, and VDI - Virtual Desktop Infrastructure - is the Next Big Thing: you run graphical terminals everywhere and lots of virtual XP machines on a few big servers, with the displays used remotely from said terminals. It's easier than WinFrame or Terminal Server and more flexible - the clients don't all need to run the same image, people can have conflicting versions of apps and "their own" machine which can follow them around a network. They can remotely suspend it and come back to all their windows where they were, and so on.

But this misses a number of points.

#1 You still need to buy all the Windows licences. Expensive.

#2 You need some seriously BFO servers to run hundreds of VDI instances. Expensive.

#3 If the server goes down, hundreds of users are screwed. Downtime -> expensive.

#4 The "thin clients" don't do much - no local processing to speak of - but they are each a capable computer in their own right, with local RAM and CPU and graphics and sound and some kind of OS, possibly booting from Flash. Not very expensive, but still a significant cost, and of course, it's inefficient to chuck away hundreds or thousands of PCs and replace 'em all.

What's the point of it all, then?

Well, it's not saving upfront costs. It's reducing maintenance costs. Your techies work in a datacentre, looking after relatively few servers running lots of instances of near-identical Windows system images. They don't need to go into the field - terminals are interchangeable and cheap enough that branches can carry a few spares. Cheap, largely-unskilled techs can go out in the field and swap boxes, and that's largely it once the network's built.

This is actually taking off. Lots of big vendors are pushing it hard: MS, VMware, Citrix and Parallels on the S/W front, Wyse etc. on the terminals, and all the big server vendors are dribbling at the prospect.

But it's an awful lot of work just to reduce management costs and it's got snags of its own.

So. Alternatives? Using our favourite FOSS OS?

Linux desktops

Rolling out Linux onto all your desktops is not a big help. Yes, it saves licence costs, but each machine takes an hour or so to provision, there's still loads of local state, and you have to retrain all your techs to install and support hundreds of Linux installations. Nightmare. Very hard sell.

Linux thin clients or terminals or X terminals

Using Linux as the thin client OS - well, fine, but you still have loads of thin clients, with the concomitant server load. Doesn't solve anything, just maybe makes the thin clients a bit cheaper. You could roll up a bootable-CD thin client distro that didn't require any installation at all, but it doesn't help the problem of all those clients and all those big expensive servers.

Turning PCs into diskless workstations, only, er, with disks

Now, for years, I've been interested in stateless OSs for some modern kind of network workstation. Diskless workstations were a good idea once and I tend to think they could make more sense than "thin clients" which aren't thin, they're just graphical terminals. Terminals dump the entire computation (+ RAM + storage) load onto the server. RAM and storage are relatively cheap; it seems to me to make more sense to make the best possible use of local resources as well as reducing management. X.11 is no help here, it's just an alternative to MS RDP or Citrix ICA or whatever.

The ideal, ISTM, would be to have an identical, stateless OS image rolled out to all your PCs, something with no local config files at all. PCs would be totally interchangeable; if one were buggered, it could be easily replaced.

Now companies like Progeny worked on this for ages, trying to separate out the local config from the global config and produce modular Linuxes that could keep just their important state on the server. It's a tricky job; AFAIK it never got done properly.

But ISTM that there is another way, using something that's already been done. Live CDs.

Live CDs are effectively stateless. They boot on completely unknown hardware, detect it on the fly on the first pass, and get you to a usable graphical desktop complete with apps and a network connection.

But they're slow, 'cos they run from CD, and CDs are slow.

Now with some LiveCD distros, like Knoppix, you can copy the CD contents to HD and it becomes a "normal" distro. But that's no good - that's local state again.

Other CD distros, like Puppy, can copy a read-only image of the CD to HD, boot off that via some kind of loader, and keep their state in a separate little file. That's more like it.

So what you need, ISTM, is a distro that works like Puppy - boots off an image on the HD, but as it boots, according to MAC address (say), it gets access to a bit of system-specific config data on the server. Screen resolution, stuff like that. This is a small package of data, and the tech to do that from local storage is known from several liveCDs. It just needs extending to work over a LAN. It broadcasts a request for a server and if it can "see" a server that knows that machine, it gets handed its local state info. This can be cached locally, outside the system image, for offline operation.

Then when the user logs in, they get a network home directory, so their desktop, their files, their settings etc. follow them around the network. Again, this can be cached locally. This is much the same as Windows' "roaming profiles", only 'cos Linux is a bit better designed, you don't sync a folder full of temporary files and the browser cache every time.

Now if you had a network of hundreds of machines all running off local images of the CD, that's an improvement, 'cos now you have a uniform stateless OS. However, when it comes round to upgrading them all, you're screwed again, because you need to update all those machines. Even if you write some snazzy utility to copy the image down off the network, a few hundred PCs loading a half-gig image is going to cripple your network and it will be disastrous over a slow link to a remote site.

So what you need is a pre-boot environment. Not something techie like PXE which not all PCs support, but you're going to need some kind of bootloader anyway.

So, you have a Linux system that loads a Linux system.

The first-stage loader boots up, gets an IP, connects to a specific disk image server and checks if the local image is current. If it isn't, it rsyncs down the latest one. (It might even make sense to keep a backup copy, in case of corruption or a lost connection during the sync. Grandfather-father-son. We're probably only talking about half a gig a pop and I don't think you can buy a HD smaller than 80GB these days. Even if you boot from SSD, 2G to 4G is something like £5 worth of Flash.)

Once the local image is current - and the sync shouldn't take long unless you do a whole-disk-image upgrade; we're probably talking 10s of meg, no more - then the startup kernel KEXECs the "live" kernel in the disk image. No reboot, so it's fast.
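As a sketch, the first-stage loader needn't amount to much more than this - all the names here are hypothetical, and I'm assuming a BusyBox-ish environment and an rsync server exporting the current image:

#!/bin/sh
# first-stage loader: freshen the local image, then kexec into it
udhcpc -i eth0  # get an address
rsync -a --inplace rsync://imageserver/images/ /boot/images/  # only changed blocks travel
kexec -l /boot/images/vmlinuz --initrd=/boot/images/initrd.img --append="boot=live"
kexec -e  # jump straight into the live kernel - no BIOS reboot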

With network-bootable workstations, this is inefficient, but it could still work. However, with a small amount of local storage - a few gig - you have space for:

[a] the startup kernel and a simple FS to keep a couple of disk images in. This sort of job is simple enough that a bootable floppy would almost be enough - the system only needs to be a few meg of code. It needs a kernel, network drivers and rsync and not a lot else. This is the tech of the Swiss product Rembo, now bought by IBM and thus gone from the Free world.
http://www.appdeploy.com/tools/detail.asp?id=39
http://www-01.ibm.com/software/tivoli/products/prov-mgr-os-deploy/

[b] a swap partition, alleviating one of the performance problems of LiveCD operation. You could live without this if it got corrupted, lost or whatever.

[c] a small local cache partition containing a backup copy of any node-specific state and any user-specific state. Again, losing this wouldn't be fatal.

For provisioning machines, making spares etc., just leave a stack of bootable CDs of the live image around. Setting up a machine is a matter of turning it on, inserting the CD, pressing Reset or Ctl-Alt-Del if necessary, and waiting. The LiveCD boots as normal. In the background, a task kicks off and looks for an empty local drive. If it finds one, it installs the startup system, copies an image of the CD onto the disk, updates it from the server.

Indeed, you could even set it up so that like WUBI, the Ubuntu installer that copies itself into an existing Windows partition, it doesn't remove any existing data or anything on the hard disk, it just quietly inserts itself into any free space it finds, installs GRUB and makes itself the default with an option to boot back into the old Windows system if desired. Non-destructive reversible provisioning.

Inside the disk image, well, then it's a matter of making a simple generic desktop that looks as much like XP as humanly possible, with Firefox plus addins, an email client, OpenOffice, etc. All the usual stuff. The one extra I can see being really useful would be a bundled Terminal Server client, so that all those expensive-to-maintain PCs become identical thin clients that a trained monkey could set up, becoming completely interchangeable and so on. All the benefits of thin clients, all the remote manageability etc., because in effect, they're all booting off the image on the server - except without sending the whole OS and apps over the wire each time.

Ideally, the Terminal Server client would work in a mode like VMware Fusion's on Mac OS X, where the remote apps' windows mingle in with local windows, so the users aren't presented with 2 desktops. They have one desktop, but some shortcuts or menu entries, unbeknownst to them, point to remote applications. E.g. the local Linux boxes could all run Outlook remotely to connect to the Exchange Server. Host that on something like Citrix or Parallels and it would require /massively/ less server load than an entire virtual desktop image, and if you set it up right, you don't need hundreds of Windows licences - just lots of Outlook licences on a small number of copies of Windows.

What's the point of all this?

Well, it's a way to sell desktop Linux. This offers a set of (I hope) compelling advantages:

- massively reduced licence costs next to Windows, virtual or otherwise. Remember all those CALs you need even for thin clients.
- repurpose existing PCs into thin clients with a thin, light OS. Instead of proprietary thin clients, hardware replacements and upgrades use dirt-cheap generic hardware. Testing them for suitability just means booting them once from CD or USB.
- no conventional locally-installed OS, no local state, so PCs become interchangeable. In emergency, can just run from CD/USB key. Something goes wrong with a PC? Bung in the CD, reimage it, and user can work while it's happening.
- reduces network load compared to thin clients or network booting
- reduces server load compared to thin clients or virtualisation
- field support requires very little staff training or knowledge
- major reductions in management cost
- much less disruptive migration than replacement of existing kit
- greener: reuses existing machines & gets improved performance
- secure: no need for local antivirus etc. (so further cost savings)

ISTM that this could actually give Linux a persuasive advantage over Windows as a business OS.

So go on, rip the idea to shreds. What have I missed?

The main hard part is trying to sell it to Windows houses as an alternative. That's why I'd suggest part of it is a theme to make it look as much as possible like Windows, either XP or Vista, as desired. e.g.
http://lxp.sourceforge.net/

GUI Emacs

Jan. 23rd, 2008 07:37 pm
liam_on_linux: (Default)
Is there a Linux version of Emacs with a grown-up 21st-century GUI? Something akin to Aquamacs on OS X:
http://aquamacs.org/

I've tried installing the GNUstep Emacs, but I couldn't get it to start. I've installed the vanilla Ubuntu Emacs 21 for X.11 instead, but it doesn't even seem to know what the scroll wheel on a mouse is. It just beeps at me.
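(One thing I mean to try: Emacs 22 is said to have a proper GTK build, which at least looks native and understands the wheel. On Ubuntu, I believe the package is emacs22-gtk:)

sudo apt-get install emacs22-gtk  # GTK-toolkit Emacs 22: menus, toolbar, wheel support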

Just curious. Considering (once again) learning a Real Man's Editor.
liam_on_linux: (Default)
It's new to me.

Meant to be a more modern supplement to tar, aimed at archiving to multiple removable media, such as Zip disks. It's been suggested to me that this may be a more appropriate choice than tar or even dump. It handles segmented backups no problem, and though it doesn't directly address the problem of putting stuff onto FAT volumes, it should work there. It numbers its own segments and, given the base archive name, will iterate through them automatically. It can be given a segment size as a built-in option and will work to it. It can also compress across segmented archives.

It's in Ubuntu's repositories and it seems to be current and maintained - I've installed it to have a look with a simple "apt-get install dar" - but its CLI is moderately fearsome. To a wuss like me, anyway. Seems to have reasonably helpful docs on the homepage.

http://dar.linux.free.fr/doc/index.html
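From a skim of those docs, a segmented backup to a FAT32 disk looks something like this - the flags are as I read them, so treat it as a sketch:

dar -c /media/usbdrive/raidbackup -R /raid -s 3900M -z
# writes raidbackup.1.dar, raidbackup.2.dar, ... each slice safely under FAT32's 4GB limit
# -R is the tree to back up, -s the slice size, -z enables gzip compression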

Thoughts? Avoid 'cos it's relatively new and weird (dump were good enough for my great-grandad onto paper tape, magtape, eeee we used to DREAM of magtape, paper tape, PAPER TAPE, ha, we used to backup onto mercury delay lines reet across t'Pacific Ocean, aye), or go for it if it makes life easier?
liam_on_linux: (Default)
I am trying to back up a RAID array of some 100GB of stuff onto a FAT32 external hard drive.

I tried

tar -cvfz usbdrive/raid.tgz raid/

but it didn't like it. I also tried with 'j' instead of 'z' to use bzip2 instead of gzip. That created an archive called 'j' in the current directory.

So I dropped the compression switch altogether.

tar -cvf usbdrive/raid.tgz raid/

This worked, but I forgot one detail. FAT32 has a maximum file size of about 4GB. So when the archive got to 4GB, it barfed.

I see there's a parameter to change tapes, but I don't know if that allows me to create multi-segment archives. Can GNU tar do this - break the backup up into lots of 4GB chunks, ideally compressing them on the way? If so, how? I'm getting nowhere Googling for info...
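UPDATE: Two things I've since worked out, which seem worth recording. First, the 'j' mystery: in a bundled cluster of short options, f takes the following character as its argument, so -cvfj names the archive 'j' - the compression letter has to go before the f. Second, GNU tar's own multi-volume mode (-M) can't be combined with compression, so the usual dodge is apparently to pipe through split instead - a sketch, with 3900MB chunks to stay under the FAT32 limit:

tar -czvf - raid/ | split -b 3900M - /media/usbdrive/raid.tgz.part-
# restore later with: cat /media/usbdrive/raid.tgz.part-* | tar -xzvf -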
liam_on_linux: (Default)
Interesting piece from IBM developerWorks - an explanation of the standard Unix FS layout.

One thing puzzles me. The author comments that /var is the place for files that grow over time, such as mailboxes. I can see there would be a need on a "proper" multiuser system for a global mail spool, but shouldn't users' mail be in their home directories? As a Unix user, I'm a wimp* and use Thunderbird for my mail on my primary machine - a Linux box - and also on my Mac and my PCs. As a result, there's nothing in /var/mail and ~/mbox contains just a few system mails from about three years ago when I ran SUSE. (I'm on Ubuntu now and much prefer it.) Am I missing something?

Couple of thoughts arising.

I'm interested to note that the author says that "/etc" is often pronounced "etcee". I always tended to say "et cetera". [livejournal.com profile] uon was right again.

Also, the fact that even Linux distros can't agree on where to put everything, plus the many implicit assumptions in the logic of the tree that any Unix machine is a multiuser box with stuff probably mounted off a remote machine, makes, with the passing of time, a better and better argument for GoboLinux. Must try that sometime.

P.S. No, the book is not dead. It's just... resting. I'm making more money writing for the Inquirer at the moment, and at this stage money is more important than the (perhaps meagre) kudos and CV bragging rights of having written a book. Shame. :¬(



* pun intended**
** Not a weakly-interacting massive particle, but that's also a fair description...
liam_on_linux: (Default)
[livejournal.com profile] dougs feels that Ubuntu's no-root-user model is irksome. I can certainly see his point. He feels that, even if it's a useful move for a workstation, for servers, this is an inappropriate setup. He suggests that we tell people to simply give root a useful password and use that.
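(For reference, doing that on Ubuntu is a one-liner, and just as easily undone later:)

sudo passwd root  # set a real root password, enabling direct root logins
sudo passwd -l root  # lock it again, back to the sudo-only arrangement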

I'm aware of some of the arguments against - for example, if you're good and dutiful and always use the sudo command, rather than doing the naughty "sudo -s" and simply becoming root, then what you do gets logged, I believe. What root does is not, whether you logged in as root or sudo-ed and became root; all you get is an entry telling you that such-and-such became root, and there the trail ends.

Thoughts? I'm going to have to go back and do some rewriting to change this, obviously, but then, I need to do some of that anyway.
liam_on_linux: (Default)
Xandros are looking for beta testers of their new cross-platform management product. IME Xandros stuff is pretty damned good, but I don't use RH myself any more...

# # #
Red Hat Server System Administrators Needed for Beta Testing of Xandros Cross-Platform Management Tools

Xandros today put out a call to Red Hat Enterprise Server system administrators to sign up for the beta testing of Xandros' new cross-platform server management tools. Red Hat server system administrators will play an integral part in assuring that the forthcoming Xandros product releases meet the highest standards for stability and ease of use. Red Hat administrators will test new monitoring tools to determine Xandros' success in allowing administrators to manage multiple Red Hat Enterprise servers on various hardware architectures.

With the Xandros Management Tools for Red Hat, Red Hat server users will enjoy the same ease of setting up and managing their server as those who use the Xandros server. In general, Windows administration expertise is mostly sufficient to operate the Xandros system, as opposed to others requiring full Linux or Red Hat training.

Red Hat server administrators interested in testing the new tools are invited to apply at:
http://www.xandros.com/beta .
# # #

Sorted!

Dec. 13th, 2006 12:23 am
liam_on_linux: (Default)
I did indeed need to reconfig mdadm after building the array. This may be a difference between 6.10 and 6.06, which is what my older main server's running.

For the curious:

Output of dmesg... )

I am slightly concerned about the initialisation errors on the RAID drives. They seem to be working fine. I thought it might be because I was using plain old IDE cables, but I've replaced them with 80-core UltraIDE cables: no difference. Odd. I've only seen this before on failing drives, but these errors are one-offs, only happening at startup. Otherwise, it seems fine.

Still, for the nonce, this is only a test box. It's not doing anything really important. So I am not too concerned...

RAID info

Dec. 13th, 2006 12:13 am
liam_on_linux: (Default)
Partly for my own reference :-) this is a rather useful page I found on RAID5 on Ubuntu:

http://bfish.xaedalus.net/?p=188

But my RAID isn't automounting. This is my config file:

lproven@lilmesh:~$ cat /etc/mdadm/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=83287c83:7bd8c43e:c550bdfc:77fd6b91

(Isn't SSH wonderful? I'm typing on a Linux laptop on my new home WLAN, SSHed into the server beside me so that I can copy&paste. Small things still please me. SSH, wireless, Linux laptops... Why I could almost be mistaken for being up-to-date technologically if it all wasn't such old hardware that I acquired for free!)

Anyway, this is how my RAID looks on bootup:

lproven@lilmesh:~$ cat /proc/mdstat
Personalities :
unused devices:

In other words, zilch.

Now, it looks like, according to a comment on that page I linked, I might have to "dpkg-reconfigure mdadm" to get auto-assembly to work. Pretty sure I didn't have to do that last time. I'm trying that right now. It's taking a loooooong time to rebuild the initrd.
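For anyone else in this hole: the config file above already carries its ARRAY line, so the part that apparently matters is getting the array details into the initrd. The manual version of what dpkg-reconfigure seems to do - a sketch, since I've only tested the reconfigure route - is:

sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'  # (re)generate the ARRAY line if missing
sudo update-initramfs -u  # rebuild the initrd so the array assembles at boot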

P.S. Ethernet sorted with static address. Am leaving fonts alone for now as I have a satisfactory solution! I've re-added my AHA1542 SCSI card. The RAID card was nabbing its IRQ, so I've reserved IRQ10 in the BIOS, and a reboot later, I can add it with MODCONF. So now, the machine has SCSI, too.
liam_on_linux: (Default)
Right. Some considerable poking later, I have:
- the server happily booting
- it's apt-getted into currency
- the screen in 132*60 mode (I've disabled Ubuntu's soft fonts - by marking the console-manipulating utility non-executable)
- I have replaced the duff HD in the array with the working one from Nolly's PC (he has a 40GB one in its place, so he's much better off)
- I have created a 4-disk RAID5 array
- formatted it as ext3
- fscked it to check that it seems healthy
- mounted it under /media/raid
- the remaining space on the 3 slightly larger drives partitioned as swap (roughly 40 + 80 + 80 meg. Plenty for a lightly-loaded 128MB RAM machine, I reckon).

Suggestions for:
- a cleaner way of disabling soft fonts
- making eth0 permanently DHCP
- making the RAID autostart, 'cos it currently must be done by hand

...?
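On the middle one, I believe the Debian way is a stanza in /etc/network/interfaces - the addresses below are placeholders:

auto eth0
iface eth0 inet dhcp

or, for a fixed address:

auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

followed by a "sudo /etc/init.d/networking restart" to pick it up.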
liam_on_linux: (Default)
As beloved by [livejournal.com profile] sbisson :-)

In a way, these are what a low-end Ubuntu fileserver is up against. Charlie Demerjian of L'Inq has some insightful comments:

EDIT: these are all from Thecus, BTW.
http://www.thecus.com/index.php

N4100
http://www.theinquirer.net/default.aspx?article=29281

N5200 pt 1
http://www.theinquirer.net/default.aspx?article=36257

N5200 pt 2
http://theinquirer.net/default.aspx?article=36262

Also see

N2100 review
http://www.theinquirer.net/default.aspx?article=27319

& from IDF
http://www.theinquirer.net/default.aspx?article=25645

At the moment, the Linux answer to this is NASlite:
http://www.serverelements.com/

Only the most basic version is free, though. I'd love to build an Ubuntu NAS version as a side project!

If you're a BSD type, there's FreeNAS:
http://www.freenas.org/
liam_on_linux: (Default)
I've been having a quick look through Froogle.

It looks like there are some reasonably affordable ATA RAID cards now...

http://froogle.google.co.uk/froogle?q=ide+raid+adaptor+card+-workstation+-tower+-barebone&btnG=Search&hl=en&show=dd&scoring=pd

Highpoint seem to do a range. IME - which is fairly limited, TBH - one of the distinguishing features of a "true" RAID controller is RAID5 support. If it only does 0/1/0+1, it's a firmware controller; if it does 5, it's probably the real deal. Agreed?

Anyone got any personal experience of these Highpoint controllers?

Everything <£50 looks to me like a firmware job... But there's at least 1 £75 card which looks to me like the genuine item:

http://www.scan.co.uk/Products/ProductInfo.asp?WebProductID=83203&source=froogle
liam_on_linux: (Default)
Well, it has been, terribly. I thought the Pratchett piece would be a nice little intro to my return.

Sorry about the unscheduled downtime. Real life has intruded, somewhat, in the form of having to Go Out And Earn A Living.

But I am back, there's a tiny little bit of new text for you to chew over, and I am hopeful that there's going to be a lot more very soon. This will partly be due to the good offices of [livejournal.com profile] dougs who is going to be assisting me from now on.

So. I have rewritten the start of chapter 5, which is the one on fileservers. I've pulled out all the stuff about setting up a RAID, which I plan to banish into an appendix. Rewrite appearing here in a mo'.

This is my - and to an extent Doug's - thinking. Please feel free to demolish it.

I do tend to feel that RAID is something which should be absolutely integral to a server, though. These days there's less & less to delimit a server from a workstation; one of the few things is that servers are optimised for reliability rather than performance in desktop apps.

Now, the ideal, to my way of thinking, is a SCSI RAID setup, but frankly, if you're a small business & you can afford that, then you can afford Win2003 Small Business Server and a bunch of CALs. Let's face it, you're probably building your own server 'cos you haven't got loads of spare IT budget, yes? Does this seem reasonable?

So there are 2 alternatives.

IDE RAID or software RAID.

Pertaining to IDE RAID, I would instantly rule out those nasty firmware RAID cards - the sort of £20 things you get in Maplin's. These don't do real RAID; all they do is implement a driver which lies to the OS that a mirror pair or a stripe is actually a single disk. They exist mainly because workstation versions of Windows can do striping, but they can't do mirroring or true RAID at level 5 or higher.

These things are, from what I have read, a swine to get working under Linux; arrays are not very portable between devices, performance is not great, and basically, you might as well use S/W RAID instead.

I have a server here running Linux S/W RAID and it's pretty good.

One question is:

Are there any cheap, good IDE RAID cards out there yet which do real RAID - as in, the host adapter controls the RAID entirely on its own, with no OS involvement? I have a couple of clients with Dell servers with CERC (rebadged AMI MegaRAID) cards in, but those, even at Dell's heavily-subsidised prices, add £150 to the cost of your server. This probably translates to two or three hundred quid in the real world - i.e., /considerably/ more than the cost of the disks.

That may be too expensive for my target readers.

I don't know of any true H/W IDE RAID controllers that cost down around the price of a disk - which means sub-£100, really. Am I out of date on this?

So, if H/W RAID is out of the picture, I'd like to advise people to use S/W RAID as an alternative and monitor it /zealously./
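For what it's worth, the S/W RAID route, zealous monitoring included, is only a couple of commands with mdadm - the device names and mail address here are hypothetical:

sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/hd[aceg]1  # build a 4-disk RAID5
sudo mdadm --monitor --scan --daemonise --mail=admin@example.com  # emails you if an array degrades
cat /proc/mdstat  # the quick manual health check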

Comments?

- - - - -

I also have a germinal intro to the chapter on LAMP servers somewhere, if I can find it. That might also follow soon. Little real content there, though.
liam_on_linux: (Default)
...to the rambling buildings of Unseen University and in particular the apartments of Greyhald Spold, currently the oldest wizard on the Disc and determined to keep it that way.

He has just been extremely surprised and upset.

For the last few hours he has been very busy. He may be deaf and a little hard of thinking, but elderly wizards have very well-trained survival instincts, and they know that when a tall figure in a black robe and the latest in agricultural handtools starts looking thoughtfully at you it is time to act fast. The servants have been dismissed. The doorways have been sealed with a paste made from powdered mayflies, and protective octograms have been drawn on the windows. Rare and rather smelly oils have been poured in complex patterns on the floor, in designs which hurt the eyes and suggest the designer was drunk or from some other dimension or, possibly, both; in the very centre of the room is the eightfold octogram of Witholding, surrounded by red and green candles. And in the centre of that is a box made from wood of the curly-fern pine, which grows to a great age, and it is lined with red silk and yet more protective amulets. Because Greyhald Spold knows that Death is looking for him, and has spent many years designing an impregnable hiding place.

He has just set the complicated clockwork of the lock and shut the lid, lying back in the knowledge that here at last is the perfect defence against the most ultimate of all his enemies, although as yet he has not considered the important part that airholes must play in an enterprise of this kind.

And right beside him, very close to his ear, a voice has just said: DARK IN HERE, ISN'T IT?
liam_on_linux: (Default)
Is this something which is an important enough subject for a book aimed at novices, or is it something a bit arcane for a sidebar or appendix?

I don't use it myself, although I'm familiar with the ideas. Any pointers on good online refs on using it on Linux?
liam_on_linux: (Default)
Quick question.

Does anyone know of a simple text-mode text editor for Linux which resembles MS-DOS Edit or any other menu-driven CUA-style editor? I know about joe and pico, but I'm hoping for something easier...

Me, I use vi, but I don't like it much & only know the basics. Never got my head around EMACS - it's just too unlike anything else.

UPDATE: Sorted. I have found not only the rather pleasing WW, but now, SETEdit, which so far I like very much.

http://setedit.sourceforge.net/

Thanks for various hints & tips over in the other LJ!
