Sandcastles and skyscrapers
The problem with the Unix lowest-common-denominator model is that it doesn't eliminate complexity: it pushes it out of the stack and into view, leaving users to handle stuff that other designs _thought_ about and worked to integrate.
It is very important never to forget the technological context of UNIX: a text-only OS for a tiny, desperately resource-constrained, standalone minicomputer. It was written for a machine that was already obsolete, and it shows.
No graphics. No networking. No sound. Dumb text terminals, which is why the obsession with text files, piped into other programs and filtered through tools that only handle text.
Meanwhile, as UNIX evolved, other, bigger OSes for bigger minicomputers were being designed and built to directly integrate things like networking, clustering, notations for accessing other machines over the network, filesystems mounted remotely over the network, file versioning, and so on.
I described how VMS pathnames worked in this comment recently: https://news.ycombinator.com/item?id=32083900
People brought up on Unix look at that and see needless complexity, but it isn't needless.
VMS' complex pathnames are the visible sign of an OS which natively understands that it's one node on a network, and that currently-mounted disks can be mounted on more than one network node, even if those nodes are running different OS versions on different CPU architectures. It's an OS that understands that a node name is a flexible concept which can apply to one machine or to a cluster of them, and that every command from (the equivalent of) `ping` to (the equivalent of) `ssh` can be addressed to a cluster, where the nearest available machine will respond and the other end need never know it's not talking to one particular box.
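To give a feel for the shape, here is a quick sketch of my own in Python. It covers only the core grammar (none of VMS's logical names, access-control strings or search lists), and the node, device and directory names in the example are invented:

```python
import re

# Core shape of a VMS filespec:
#   NODE::DEVICE:[DIRECTORY.SUBDIR]NAME.TYPE;VERSION
# Everything except the file name is optional. This is a toy parser,
# not the real grammar, which also allows logical names, rooted
# directories and access-control strings.
FILESPEC = re.compile(
    r"^(?:(?P<node>[\w$]+)::)?"          # node or cluster alias
    r"(?:(?P<device>[\w$]+):)?"          # device (disk volume)
    r"(?:\[(?P<directory>[\w$.]+)\])?"   # directory path, dot-separated
    r"(?P<name>[\w$]+)"                  # file name
    r"(?:\.(?P<type>[\w$]*))?"           # file type
    r"(?:;(?P<version>\d+))?$"           # version number, counting upward
)

def parse(spec: str) -> dict:
    match = FILESPEC.match(spec)
    if match is None:
        raise ValueError(f"not a recognisable filespec: {spec!r}")
    return match.groupdict()

# Invented example: a file on device SYS$DISK of node HUEY.
print(parse("HUEY::SYS$DISK:[FRED.PROJECTS]NOTES.TXT;17"))
# {'node': 'HUEY', 'device': 'SYS$DISK', 'directory': 'FRED.PROJECTS',
#  'name': 'NOTES', 'type': 'TXT', 'version': '17'}
```

The point is not the syntax itself; it's that node, device, directory and version are all first-class parts of one coherent naming scheme.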
Fifty years later, Unix still can't do stuff like that. It needs tons of extra work, with load balancers and multi-homed network adapters and SANs, to simulate what VMS did out of the box in the 1970s in one megabyte of RAM.
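To make that concrete, here is roughly the kind of logic that Unix applications, proxies and load balancers end up re-implementing in userspace to fake a cluster alias. A minimal sketch of my own; the host names are invented, and a real deployment would add health checks, DNS tricks or a dedicated balancer:

```python
import socket

# A crude userland imitation of a VMS cluster alias: keep a
# hand-maintained list of member nodes and try each in turn until one
# accepts the connection. VMS put this logic in the OS; on Unix it is
# re-implemented over and over, in applications and middleboxes.
CLUSTER_MEMBERS = ["node1.example.com", "node2.example.com", "node3.example.com"]

def connect_to_cluster(port: int, timeout: float = 2.0) -> socket.socket:
    last_error = None
    for host in CLUSTER_MEMBERS:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:
            last_error = exc  # member down or unreachable; try the next
    raise ConnectionError(f"no cluster member reachable: {last_error}")
```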
Unix only looks simple because the implementors didn't do the hard stuff: they ripped it out in order to fit the OS into 32 kB of RAM or something.
The whole point of Unix was to be minimal, small, and simple.
Only it isn't any more, because now we need clustering and network filesystems and virtual machines and all this baroque stuff piled on top.
The result is that an OS which was hand-coded in assembler, and was tiny and fast and efficient on non-networked, text-only minicomputers, now contains tens of millions of lines of unsafe code in unsafe languages, and no human actually comprehends how the whole thing works.
Which is why we've built a multi-billion-dollar industry that constantly tries to patch all the holes and stop the magic haunted sand leaking out and the whole sandcastle collapsing.
It's not a wonderful inspiring achievement. It's a vast, epic, global-scale waste of human intelligence and effort.
Because we've built a planetary network out of the software equivalent of wet sand.
When I look at 2022 Linux, I see an adobe and mud-brick construction: https://en.wikipedia.org/wiki/Great_Mosque_of_Djenn%C3%A9#/m...
When we used to have skyscrapers.
You know how big the first skyscraper was? 10 floors. That's all. This is it: https://en.wikipedia.org/wiki/Home_Insurance_Building#/media...
The point is that it was 1885 and the design was able to support buildings 10× as big without fundamental change.
The Chicago Home Insurance building wasn't very impressive, but its design was. Its design scaled.
When I look at classic OSes of the past, like the ones in this post, I see miracles of design that did big, complex, hard tasks, were built by tiny teams of a few people, and still work today.
When I look at massive FOSS OSes, mostly, I see ant-hills. They're impressive, but it's so much work to build anything big out of sand that the impressive part is that it works at all... and that to build something so big, you need millions of workers and constant maintenance.
If we stopped using sand, abandoned our current plans, and started afresh, we could build software skyscrapers instead of ant hills.
But everyone is so focussed on keeping our sand software working on our sand-hill OSes that they're too busy to learn something else and start over.
no subject
In the same way, IBM mainframe operating systems, which IBM has had the freedom to develop since the mid-1960s, remain arcane and difficult to use. They do some things very well, but many others quite badly. They're steadily losing their share of mainframe runtime to Linux, because Linux is easier to build systems on top of.
People have written all kinds of research operating systems in the past few decades, but none of them have had enough advantages to achieve commercial acceptance.
Difficult judgement
Thanks for the write-up (again).
I think I can follow your general thoughts here.
Saint-Exupéry is said to have come up with the phrase: "Technology always develops from the primitive via the complicated to the simple". I like that statement very much.
Regarding your comments on hiding things versus exposing the full complexity, however, I am quite torn about following your judgement. For me, it is not easy to distinguish between the "primitive", the "sophisticated" and the "simple" here.
Maybe it all depends on the specific set of requirements? Different requirements most likely lead to different tool choices; in this case, different file-system requirements. Most files never need versioning at all, for example. And I think that, most of the time, the host/user should not be an integral part of a file-name abstraction layer.
It's not that black and white, though. I can think of situations where the explicit VMS file path mentioned above clearly has its advantages.
I guess we would all need to work with such a concept for a longer period of time in order to really grasp the ideas and advantages behind it.
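To make the versioning idea concrete, here is a toy sketch of my own: VMS-style `;n` version suffixes bolted onto an ordinary filesystem. Only the naming convention is borrowed from VMS; a real implementation would of course live in the filesystem, not in application code.

```python
from pathlib import Path

def save_versioned(filename: str, data: bytes) -> Path:
    """Write data as a new numbered version: report.txt;1, report.txt;2, ..."""
    base = Path(filename)
    # Find existing versions of this file by their ";n" suffix.
    versions = []
    for existing in base.parent.glob(base.name + ";*"):
        suffix = existing.name.rsplit(";", 1)[1]
        if suffix.isdigit():
            versions.append(int(suffix))
    # Write the next version; never overwrite an old one.
    target = base.with_name(f"{base.name};{max(versions, default=0) + 1}")
    target.write_bytes(data)
    return target

print(save_versioned("report.txt", b"first draft"))   # report.txt;1
print(save_versioned("report.txt", b"second draft"))  # report.txt;2
```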