liam_on_linux: (Default)
2019-05-14 06:07 pm

(no subject)

PowerQuest PartitionMagic was one of my favourite pieces of software ever written.

It offered a lot of functionality for disk and partition management on the PC that had previously been considered impossible, or the sole domain of enterprise storage management systems, such as resizing drive partitions on the fly -- i.e. with all their contents intact.

Later, it gained additional functionality, such as the ability to merge 2 (or more) disk partitions into one larger one.

If, for example, you merged drives C:, D: and E:, you ended up with one big C: drive containing a subfolder "\D\" with the full contents of D: and another, "\E\", with the full contents of E:.

It was then up to you to move stuff around to sort it.

However, the thing is this:

When you move a file from one drive to another -- including between separate partitions -- the OS must copy the data from source to destination and, once it's copied, delete the original... then repeat this for every file. This is unavoidably slow, and it applies even between partitions on the same physical drive.

But if you move a file from one folder to another folder in the same partition, on any modern filesystem, the OS can just rename the file from

/data/my/old/file

... to...

/data/my/new/file

The actual contents of "file" don't move. So it's very, very fast.
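You can see this for yourself: on Linux, a file's data is identified by its inode, and a rename within one filesystem leaves the inode (and therefore the data blocks) untouched. A quick demonstration using a throwaway temp directory:

```shell
# Create a file, note its inode number, "move" it to another folder
# on the same filesystem, and confirm the inode never changed.
tmp=$(mktemp -d)
mkdir -p "$tmp/old" "$tmp/new"
echo "hello" > "$tmp/old/file"
ino_before=$(stat -c %i "$tmp/old/file")
mv "$tmp/old/file" "$tmp/new/file"   # same filesystem: a pure rename
ino_after=$(stat -c %i "$tmp/new/file")
echo "before: $ino_before  after: $ino_after"
rm -rf "$tmp"
```

The two inode numbers come out identical: only the directory entry moved, not the file's contents. `mv` across filesystems, by contrast, silently falls back to copy-then-delete.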

So cleaning up the folders left by a PQMagic partition merge was quite quick. It was the merge that took hours. It copied as much data as would fit, shrank D: as much as possible by moving the start, enlarged C: and then copied some more... and repeat. This could be a *very* lengthy process.

This kind of thing is the reason that logical volume management systems exist:

https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)

LVM is complicated and hard to understand. If the above article makes little sense, don't blame yourself. For standalone workstations, I recommend avoiding it.

So, there's LVM, and on top of the LVM space you have logical volumes (the rough equivalent of partitions). Those are formatted with a filesystem, such as ext4, or one of the older enterprise filesystems from commercial Unixes, such as JFS (from IBM's AIX and OS/2) or XFS (from SGI's IRIX).

https://en.wikipedia.org/wiki/XFS

https://en.wikipedia.org/wiki/JFS_(file_system)
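To give a flavour of the layering, here's roughly how you'd build it with the standard LVM command-line tools. The device, group and volume names are all hypothetical, and these commands need root and a real spare disk, so treat this as an illustration, not a recipe:

```shell
# Turn a spare partition into an LVM "physical volume" (hypothetical device)
pvcreate /dev/sdb1
# Group one or more PVs into a "volume group" called vg0
vgcreate vg0 /dev/sdb1
# Carve a 20 GB "logical volume" out of vg0 and format it with ext4
lvcreate --name data --size 20G vg0
mkfs.ext4 /dev/vg0/data
# The payoff: later, grow the volume and its filesystem while mounted
lvextend --size +10G --resizefs /dev/vg0/data
```

That last step is the PartitionMagic trick done live, with no hours-long shuffling of partition boundaries -- the volume manager just hands the filesystem more extents from the pool.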

Fedora enables LVM by default, which is just one reason I avoid Fedora.

Then to make matters worse, there are filesystems which support "subvolumes" inside a partition, e.g. Btrfs.

https://en.wikipedia.org/wiki/Btrfs

Btrfs is the default FS of SUSE Linux.

Then you have subvolumes inside partitions on top of LVM volumes on top of disks, and personally it all makes my head spin.
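For illustration, Btrfs subvolumes are created with the `btrfs` tool inside an already-mounted Btrfs filesystem. The paths and device name here are hypothetical, and this needs root and a real Btrfs volume:

```shell
# Inside a mounted Btrfs filesystem, create two subvolumes
btrfs subvolume create /mnt/data/@home
btrfs subvolume create /mnt/data/@snapshots
# List them -- each behaves like an independently mountable filesystem
btrfs subvolume list /mnt/data
# ...and each can indeed be mounted on its own:
mount -o subvol=@home /dev/sdb1 /home
```

So a single "partition" (which may itself sit on an LVM volume) quietly contains what look like several filesystems -- hence the head-spinning.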

*Because* LVM is hard, and its functionality overlaps with partitioning, there are projects that try to merge them.

For Linux, there was EVMS:

http://evms.sourceforge.net/

Unfortunately, it did not catch on, so we have LVM instead.

https://lwn.net/Articles/14816/

https://unix.stackexchange.com/questions/22885/is-there-a-more-modern-or-more-popular-version-of-evms2

RH does not support Btrfs. However, because it wants some of the features of Btrfs, RH is now building its own new combined logical volume manager / partitioner / filesystem, Stratis:

https://stratis-storage.github.io/

Stratis combines an LVM layer with the XFS filesystem.
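As a sketch of what that looks like in use -- pool and filesystem names are made up, and this needs root, the stratisd daemon, and a spare disk:

```shell
# Create a Stratis pool from a spare block device
stratis pool create mypool /dev/sdb
# Create a filesystem in the pool -- thinly provisioned, XFS underneath
stratis filesystem create mypool myfs
# Mount it like any other filesystem
mount /dev/stratis/mypool/myfs /mnt/data
```

Note there's no separate pvcreate/vgcreate/lvcreate/mkfs dance: the pool-plus-filesystem model hides the volume-management layer, which is exactly the simplification Stratis is after.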

I have heard comments that Stratis is in effect re-creating a subset of the functionality of EVMS.

This is a very typical Linux development path.

The richest filesystem/volume manager from commercial Unix is ZFS, from Sun (now Oracle) Solaris.

https://en.wikipedia.org/wiki/ZFS

Like JFS and XFS, ZFS is now open source. However, under a licence that is incompatible with the Linux kernel's GPL licence.

So you _can_ compile a Linux kernel with built-in ZFS, but it violates the licence.

However, Ubuntu has found a way around this, with ZFS being a loadable module (AIUI) that isn't part of the kernel itself.

(AIUI. IANAL. Clarification welcome.)

Ubuntu Server offers ZFS instead, in place of Btrfs in SUSE or Stratis in Fedora (or XFS in all of them).

ZFS can replace the LVM _and_ also ext4/XFS/JFS, and therefore Stratis too, but neither SUSE nor RH will bundle ZFS because of licence concerns.
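To give a flavour of why: in ZFS, the pool *is* the volume manager, and the filesystems live directly in it. Disk and dataset names below are hypothetical; this needs root and spare disks:

```shell
# Create a mirrored pool from two spare disks -- no partitioning, no LVM
zpool create tank mirror /dev/sdb /dev/sdc
# Create filesystems ("datasets") in the pool; they all share its space
zfs create tank/home
zfs create tank/vms
# Properties are set per dataset, replacing mkfs-time decisions
zfs set compression=lz4 tank/home
```

One toolset covers RAID, volume management and filesystem creation, which is the整whole stack that LVM + mdraid + mkfs (or Stratis) provide between them.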

Apple _was_ going to bundle ZFS, but it too decided the licensing was too tricky, and it has since developed its own system, APFS. But then, Apple is no longer trying to compete in the server market.

https://en.wikipedia.org/wiki/Apple_File_System

Yes, it is confusing. Yes, it is a mess. Yes, there are too many standards.

https://xkcd.com/927/
liam_on_linux: (Default)
2015-09-05 02:20 pm

Containerising Linux desktop apps -- how and why

There are moves afoot to implement desktop apps inside containers on Linux -- e.g.

https://wiki.gnome.org/Projects/SandboxedApps/Sandbox

This is connected with the current uptake of Docker. There seems to be a lot of misunderstanding about Docker, exemplified by a mailing list post I just read which proposes running different apps in different user accounts instead and accessing them via VNC. This is an adaptation of my reply.

Corrections welcomed!

Docker is a kind of standardized container for Linux.

Containers are a sort of virtual machine.

Current VMs are, in effect, PC emulators running on a PC: they virtualise the machine's hardware, so you can run multiple OSes at once on one computer.

This is useful if you want to run, say, 3 different Linux distros, Windows and Solaris on the same machine at once.

If you run lots of copies of the same OS, it is very inefficient, as you duplicate lots of code.

Containers virtualise the OS instead of the computer. 1 OS instance, 1 kernel, but to the apps running on that OS, each app has its own OS. Apps cannot see other apps at all. The virtualisation means that each app thinks it is running standalone on the OS, with nothing else installed.

This means that you can, say, run 200 instances of Apache on 1 instance of Linux, and they are all isolated. If one crashes, the others don't. You can mix versions, have custom modules in one that the others don't have, etc.

All without the overhead of running 200 copies of the OS.
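As a rough illustration with Docker -- container names and port numbers here are made up, and this needs a working Docker install:

```shell
# Start three isolated Apache instances on one kernel, one OS install
docker run -d --name web1 -p 8081:80 httpd:2.4
docker run -d --name web2 -p 8082:80 httpd:2.4
docker run -d --name web3 -p 8083:80 httpd:2.4
# Each container believes it's the only thing on the machine.
# Killing one leaves the others running untouched:
docker rm -f web2
docker ps
```

Each container shares the host's kernel, so the per-instance overhead is closer to that of a process than of a whole virtual machine.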

Containerising apps is a security measure. It means that if, say, you have a compromised version of LibreOffice that contains an exploit allowing an attacker to get root, they get root in the container, and as far as they can see, the copy of LibreOffice is the only thing on the computer. No browser, no email, no stored passwords, nothing.

All within 1 user account, so that this can be done for multiple users, side-by-side, even concurrently on a multiuser host.

It has nothing to do with user accounts; they are irrelevant here.

GoboLinux's approach to bundling apps mainly brings benefits to the user: an easier-to-understand filesystem hierarchy, and apps that are self-contained rather than spread out all over the filesystem. Nice, but not a killer advantage. There's no big technical win, and it breaks lots of things, which is why Gobo needs the gobohide kernel extension and so on. It's also why Gobo has not really caught on.

But now, containers are becoming popular on servers. It's relatively easy to isolate server apps: they have no GUI and often don't interact much with other apps on the server.

Desktop apps are much harder to containerise. However, containerising them brings lots of other advantages -- it could effectively eliminate the differences between Linux distributions, forever ending the APT-vs-RPM wars by making the packaging irrelevant, while delivering much improved security, granularity, simplicity and more.

In theory all Gobo's benefits at the app level (the OS underneath is the same old mess) plus many more.

It looks like it might be something that will happen. It will have some side-effects -- reducing the ease of inter-app communication, for instance. It might break sound mixing, inter-app copy-and-paste, system browser/email/calendar integration, and some other things.

And systems will need a lot more hard disk space.

But possibly worth it overall.

One snag at present is that current efforts look likely to require Btrfs, and Btrfs is neither mature nor popular at the moment. This might mean that we get new filesystems with the features such sandboxing needs -- maybe there'll be a new ext5 FS, or maybe Bcachefs will fit the bill. It's early days, but the promise looks good.