Sandcastles and skyscrapers
The problem with the Unix lowest-common-denominator model is that it pushes complexity out of the stack and into view: it leaves out the stuff other designs _thought_ about and worked to integrate.
It is very important never to forget the technological context of UNIX: a text-only OS for a tiny, desperately resource-constrained, standalone minicomputer that was already obsolete when the OS was written for it, and it shows.
No graphics. No networking. No sound. Dumb text terminals, which is why the obsession with text files, piped into and filtered through tools that only handle text.
Meanwhile, as UNIX evolved, bigger OSes for bigger minicomputers were being designed and built to directly integrate things like networking, clustering, notations for accessing other machines over the network, filesystems mounted remotely over the network, file versioning and so on.
I described how VMS pathnames worked in this comment recently: https://news.ycombinator.com/item?id=32083900
People brought up on Unix look at that and see needless complexity, but it isn't.
VMS' complex pathnames are the visible sign of an OS which natively understands that it's one node on a network, and that a currently-mounted disk can be mounted on more than one network node, even if those nodes are running different OS versions on different CPU architectures. It's an OS that understands that a node name is a flexible concept that can apply to one machine or to a cluster of them, and that every command from (the equivalent of) `ping` to (the equivalent of) `ssh` can be addressed to a cluster: the nearest available machine will respond, and the other end need never know it isn't talking to one particular box.
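To make that concrete, here is a minimal sketch (mine, not from the linked comment) of the general shape of a VMS file specification, NODE::DEVICE:[DIRECTORY]NAME.TYPE;VERSION, with a small Python parser to pull it apart. The node and device names used below (HATHI, DUA0) are made-up examples, and the pattern only covers the common form; real specs also allow access-control strings, logical names, rooted and relative directories, and more.

```python
# Illustrative only: split a VMS-style file specification of the form
#   NODE::DEVICE:[DIR.SUBDIR]NAME.TYPE;VERSION
# into its parts. Node, device, directory, type and version are all optional
# in real usage; this sketch ignores the more exotic syntax.
import re

VMS_SPEC = re.compile(
    r"^(?:(?P<node>[A-Za-z0-9$_]+)::)?"    # cluster or node name, e.g. HATHI::
    r"(?:(?P<device>[A-Za-z0-9$_]+):)?"    # device or logical name, e.g. DUA0:
    r"(?:\[(?P<directory>[^\]]+)\])?"      # directory path, e.g. [PROJECTS.SRC]
    r"(?P<name>[A-Za-z0-9$_]+)"            # file name
    r"(?:\.(?P<type>[A-Za-z0-9$_]*))?"     # file type (extension)
    r"(?:;(?P<version>\d+))?$"             # version number, e.g. ;3
)

def parse(spec: str) -> dict:
    """Return the named components of a VMS-style file spec."""
    m = VMS_SPEC.match(spec)
    if not m:
        raise ValueError(f"not a recognisable VMS file spec: {spec!r}")
    return m.groupdict()

if __name__ == "__main__":
    print(parse("HATHI::DUA0:[PROJECTS.SRC]MAIN.C;3"))
    # {'node': 'HATHI', 'device': 'DUA0', 'directory': 'PROJECTS.SRC',
    #  'name': 'MAIN', 'type': 'C', 'version': '3'}
```

The parser itself is beside the point; what matters is that the node (or cluster), device, directory and version are first-class parts of every file name, so "version 3 of this file, on that cluster" is something the OS itself can express.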
Fifty years later, Unix still can't do stuff like that. It needs tons of extra work with load balancers, multi-homed network adaptors and SANs to simulate what VMS did out of the box in the 1970s, in one megabyte of RAM.
Unix only looks simple because the implementors didn't do the hard stuff. They ripped it out in order to fit the OS into 32 kB of RAM or thereabouts.
The whole point of Unix was to be minimal, small, and simple.
Only it isn't any more, because now we need clustering and network filesystems and virtual machines and all this baroque stuff piled on top.
The result is that an OS which was hand-coded in assembler and was tiny and fast and efficient on non-networked, text-only minicomputers now contains tens of millions of lines of unsafe code in unsafe languages, and no human actually comprehends how the whole thing works.
Which is why we've built a multi-billion-dollar industry constantly trying to patch all the holes and stop the magic haunted sand leaking out and the whole sandcastle collapsing.
It's not a wonderful inspiring achievement. It's a vast, epic, global-scale waste of human intelligence and effort.
Because we build a planetary network out of the software equivalent of wet sand.
When I look at 2022 Linux, I see an adobe and mud-brick construction: https://en.wikipedia.org/wiki/Great_Mosque_of_Djenn%C3%A9#/m...
When we used to have skyscrapers.
You know how big the first skyscraper was? 10 floors. That's all. This is it: https://en.wikipedia.org/wiki/Home_Insurance_Building#/media...
The point is that it was 1885 and the design was able to support buildings 10× as big without fundamental change.
The Chicago Home Insurance building wasn't very impressive, but its design was. Its design scaled.
When I look at classic OSes of the past, like in this post, I see miracles of design which did big, complex, hard tasks, built by tiny teams of a few people, and which still work today.
When I look at massive FOSS OSes, mostly, I see ant-hills. They're impressive, but it's so much work to build anything big with sand that the impressive part is that they work at all... and that to build something so big, you need millions of workers and constant maintenance.
If we stopped using sand, and abandoned our current plans, and started over afresh, we could build software skyscrapers instead of ant hills.
But everyone is so focussed on keeping our sand software working on our sand-hill OSes that they're too busy to learn something else and start over.
Re: Difficult judgement
That's fair.
Also, note vicarage's comment below and, more to the point, my (hasty and therefore too long) answer to it.
The basic Unix model, which is also the Linux model, is good because it is accessible to, and very powerful for, a lot of people, even if that "lot of people" is only 1% of the population.
Plan 9 is smaller and cleaner, with a more complex but more complete and powerful conceptual model. Forget Sun's marketing: in Plan 9, the network really is the computer.
But I've tried it and it breaks my brain completely. I can't use it, at all, even for basic stuff.
Plan 9 did the "right thing" but it did it in a weird complicated way that made it too hard for, I suspect, 99% of that 1% of people who get and like and value the Unix way.
That's fatal. It's too much. That condemned it to a niche forever.
(Aside: Inferno, at the graphical level, is much easier, but it's easier in ways that are not apparent to the 1% of 1% of people who grok Plan 9.)
The Unix CLI model, as I said in my reply to John B, is too hard for most people, but it works for 1% of people and that's enough to mean it was a huge success.
[For clarity: I am pulling these numbers out of the air; they are not real percentages. I am just trying to express the difficulty of some concepts and the fact that only a very small minority will grasp them, but they will love them because they can do amazing stuff with them.]
Nobody considers this kind of stuff at the planning stage. By the time a product is in alpha or beta and might ship or get cancelled, this is long gone. It's something that happens in the minds of people as they find others and start to consider maybe planning something. By the planning stage, it's gone; it's over.
The real point here, perhaps, is to do some measurement and estimation: work out how many people, of what kind, can handle the DOS conceptual framework, the NT one, the Unix one, the VMS one, the IBM mainframe ones, and actually try to... well, to model them, and maybe calculate, explicitly and rationally, what levels of model are accessible to which people.
A small number of people love Plan 9, but it's too hard for most Unix techies.
A small number of people love Lisp, but it's too hard for most programmers.
A small number love Smalltalk, and Forth, and Oberon, and so on, but it's too hard for most.
Whereas millions loved BASIC, and millions love Python. But the Python folk are Unix folk who probably also know and like C, which is too hard for many; they don't know their own skills well enough to realise that, resulting in millions of broken, unsafe C programs.
(And a large and lucrative software industry.)
Python people scorn BASIC: they can't see the hairy bits of Python that put off some BASIC-lovers.
Hardcore C (and curly-bracket languages in general) people scorn Python and its indentation... but millions love curly-bracket languages.
Lisp people scorn all of them.
I am, in general, interested in trying to enumerate and measure this, and to work out the sweet spots: the levels of complexity that include or exclude a lot of people.
All programmers rate themselves highly compared to non-programmers. They may evaluate themselves as bad against other programmers, but inside, they all know that they are the demigods who can make sand think and sing.
But some levels of tech are too hard for most techies. Lisp, Plan 9, Forth, etc.
Some levels of tech are easy enough that they can make non-techies into techies. BASIC, MS-DOS, Python, etc.
Can we measure this?
Can we work out what levels of human mind can handle what levels of complexity effectively?
And by doing that, work out how to reduce or hide or (ideally) eliminate some of the complexity from some tools, and make them accessible to more people and thus bring them success?
Because a lot of the tools that we have now, that are loved by legions, are trash. They are dangerous junk, "unsafe at any speed" to borrow Nader's term.
But they are tools that large numbers can learn to use.
There are better tools but they are too hard.
Only if we can measure the complexity gaps can we map them. And only if we can make maps can we work out where the bridges need to be built.
All we have now are desire lines. https://en.wikipedia.org/wiki/Desire_path
Those only work at a low level; you can't build an efficient large network from them.
But they are all we have. Nobody's even worked out we need maps yet.