When was the last time you saw a critic write a play, compose a symphony, carve a statue?
I've seen a couple of attempts. I thought they were dire, myself. I won't name names (or media), as these are friends of friends.
Some concrete examples. I have given dozens on liam-on-linux.livejournal.com, but I wonder if I can summarise.
[1]
Abstractions. Some of our current core conceptual models are poor. Bits, bytes, directly accessing and managing memory.
If the programmer needs to know whether they are on a 32-bit or 64-bit processor, or whether it's big-endian or little-endian, the design is broken.
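To make that leak concrete, here is a toy illustration in Python (the example is mine, not from any particular system; in practice this bites in C structs, network protocols and file formats):

```python
import struct

# The same four bytes of memory mean different things depending on
# the byte order the programmer assumes:
data = bytes([0x01, 0x00, 0x00, 0x00])

(little,) = struct.unpack("<I", data)  # read as little-endian uint32
(big,) = struct.unpack(">I", data)     # read as big-endian uint32

print(little)  # 1
print(big)     # 16777216
```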
Higher-level abstractions have been implemented and sold. This is not a pipe dream.
One that seems to work is atoms and lists. That model has withstood nearly 60 years of competition and it still thrives in its niche. It's underneath Lisp and Scheme, but also several languages far less arcane, and more recently, Urbit with Nock and Hoon. There is room for research here: work out a minimal abstraction set based on list manipulation and tagged memory, and find an efficient way to implement it, perhaps at microcode or firmware level.
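As a minimal sketch of that model (mine, in Python; the class tag here stands in for what a per-cell hardware tag would provide): every value is either an atom or a pair, and everything else is built from those two cases.

```python
from dataclasses import dataclass
from typing import Any, Optional

# Everything is an atom (a plain number or symbol) or a Pair.
# The Python class plays the role of the hardware memory tag.

@dataclass(frozen=True)
class Pair:
    head: Any            # "car" in Lisp terms
    tail: Optional[Any]  # "cdr"; None marks the empty list

def cons(head, tail):
    return Pair(head, tail)

def to_pylist(cell):
    """Walk a chain of Pairs back into a Python list."""
    out = []
    while cell is not None:
        out.append(cell.head)
        cell = cell.tail
    return out

nums = cons(1, cons(2, cons(3, None)))
print(to_pylist(nums))  # [1, 2, 3]
```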
[2]
Vita Nuova's Inferno and Tao Group's Taos/Intent showed, in two quite separate and independent ways, how an OS can deliver processor-independence at the binary level. This is quite separate from point #1, but it's doable. Java and the JVM are a horrid kludge, for all that they work well enough. This should be at kernel level.
Never mind users needing to know if they have a 32-bit or 64-bit OS, which is *disastrous*. They should not need to know if they have an ARM or an Intel x86 or anything more exotic. It's doable, it's been done and shipped in real products, and it doesn't mean a big performance hit. And anyway, we accept performance hits all over the place -- virtualisation, for one, is hardly free.
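A toy sketch of what binary-level portability means (my illustration only; Dis and Tao's VP were vastly more sophisticated, translating to native code at load time):

```python
# The "binary" is a sequence of architecture-neutral instructions.
# Every host -- ARM, x86, anything -- runs the same program through
# the same dispatch loop, or translates it natively on loading.

PUSH, ADD, PRINT = 0, 1, 2

program = [(PUSH, 2), (PUSH, 3), (ADD, None), (PRINT, None)]

def run(program):
    stack = []
    for op, arg in program:
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            stack.append(stack.pop() + stack.pop())
        elif op == PRINT:
            print(stack.pop())

run(program)  # prints 5 on any CPU
```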
However, clearly, there is possible synergy between this and point #1.
[3]
We depend on unsafe programming languages. Our OSes are built in them. Now, on top, we have layered *slightly* safer ones -- usually built in the unsafe ones, of course. Or we have isolated ones running in glorified interpreters with very poor performance.
This is accepted. C is history, the story goes; only kernel programmers use it now. Application programmers have moved on and work in C++, D, Rust, Go or something, or in scripting languages. We have a diversity of choices. It's all good.
Yeah, no, it isn't. If you need different languages for different levels of the problem, and if you have whole-deployed-system issues caused by implementation details of the underlying programming language, then you have a big problem.
This is addressable.
There have been whole rich Internet-capable GUI-driven systems built from the metal up in the Pascal family, in the Smalltalk family, in the Lisp family. It's doable.
But [a] there is a belief that you need to have a language close to the metal for real performance -- this is untrue, easily falsified, historically often refuted, but strongly, fervently believed nonetheless. And [b] the C family is now so very pervasive it's all that most people know.
So I think one key question is:
If a putative "safe" replacement for unsafe low-level languages takes away control from programmers, that will make them unhappy. Can we come up with something that gives visible benefits in exchange, to balance the deal?
Surely it is possible to make something that delivers enough benefits in other areas that people will consider switching away from curly braces, pointers and malloc()/free().
Various languages have delivered powerful benefits.
Lisp is one, but sadly, its power springs from its lack of syntax, and that makes it look unreadable. Initiates find it beautiful; outsiders find it hideous. Lisp is not the answer. There is a type of mind it suits, but that type of mind is rare.
We need to accept that not all programmers are created equal. But does there need to be a distinction between languages for the skilled élite versus ones for the scantily-trained workaday coder who just has a job to do?
There have been efforts to make things with some of Lisp's strengths, but readable by mortals. They merit investigation: Dylan, CGOL, PLOT, etc.
We also need to look at bringing fundamentally different programming models closer together, both at the OS level and at the UI level. Imperative, functional, array-processing, logic/predicate based, graphical, whatever.
I suspect that there are places where functional programming or logic programming can deliver huge benefits. Not everywhere, though. So a way to host such tools in a cooperative environment bears consideration.
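By way of a small illustration (my own, in Python, which already hosts a couple of models tolerably well): the same job written imperatively and then functionally, side by side in one environment.

```python
from functools import reduce

prices = [12.50, 3.99, 8.00]

# Imperative: explicit state, mutated step by step.
total = 0.0
for p in prices:
    total += p

# Functional: no mutation, just a composed pure function.
total_fn = reduce(lambda acc, p: acc + p, prices, 0.0)

assert total == total_fn  # same answer, different models
```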
I think that the possible synergy between this and points #1 & #2 is obvious.
[4]
"You can't get there from here."
OSes are today expected to be huge and all-embracing, able to drive a £5 embedded controller, a massive server, a graphical workstation, whatever.
It didn't use to be so. We should probably break away from that idea. The ideal OS for a server isn't the same one for a phone or for a workstation. The fact that one can do all of it is very impressive, but it's not necessarily a goal.
And now we have pervasive virtualisation, which makes it cheap to run small, specialised OSes alongside the incumbents.
What I am outlining involves new OSes, new languages, new designs. They are not going to spring fully-formed from anyone's brow, able to take on all the duties of the incumbents with decades of investments.
So they need to target little niches. Education is one: kids and undergrads. And there are still 2Bn people not online; aiming at them is another avenue. Forget taking on business -- it's too conservative.
Something that takes massively less admin and training and maintenance and support for schools. Schools don't make money, so they never have enough. There's an option.
But it has to start with current kit and infrastructure. No lab-prototype CPU is going to compete with a box full of 64-bit octocore chips.
It has to start out on what we have today, but it doesn't have to start on the bare metal. Provide something with some unique strengths and target it at Xen or something at first.
There are plenty of stories of the extreme productivity possible with some tools in the past. Let's look into those, subject them to critical study, and try to find whether there is any truth to them.
I think there are enough such stories that there probably is truth there. Reproducing that is a primary goal. Ease of deployment is another -- something which enables a room full of coders to implement business logic faster, with fewer errors and cheaper deployment.
In time, that could sell, sure, yes.
Something that could be one component of a distributed microservices system and just be very good at one thing -- such as a federated key:value store or something -- could be a toe in the door.
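A bare-bones sketch of the shape of such a component (a hypothetical design of mine, not any real product): a store that answers locally when it can and asks its peers when it can't.

```python
class KVStore:
    """Toy federated key:value node: a local dict plus a list of peers."""

    def __init__(self, peers=None):
        self.data = {}
        self.peers = peers or []

    def put(self, key, value):
        self.data[key] = value

    def get(self, key):
        if key in self.data:
            return self.data[key]
        for peer in self.peers:  # a real system needs cycle detection
            value = peer.get(key)
            if value is not None:
                return value
        return None

a = KVStore()
b = KVStore(peers=[a])
a.put("greeting", "hello")
print(b.get("greeting"))  # "hello", answered via the federation
```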
Kaspersky are launching their own OS for routers. That's a niche. Think small. The simplest, fastest DNS server, or something. Room at the bottom.
[5]
Why?
Big changes are coming. Hell, they're already here. Moore's Law is over and CPUs haven't doubled in speed for the last decade. They're only getting 10% faster every 18 months, if that; we've been in the 3GHz range since 2006-2007. Spinning drives are becoming obsolete and optical drives are all but gone. More cores, less power, just memory and no other storage.
What's coming? Non-volatile RAM. Lots of CPU cores, but not very fast ones, using less and less power.
The future is distributed mesh computing, millions of processes, running on unknown numbers of cores. Adaptive software that can scale itself out to more cores. Storage drives will only be on big storage servers. Most machines will have a reasonable lump of non-volatile processor-local RAM, and a couple of layers of cache, and nothing else. The idea of "filesystems" will be as archaic as tape streamers are now. Everything will be in-memory all the time, put there at manufacture and probably never replaced for the lifetime of the machine. They will never boot, never shut down. They'll stop and restart where they were.
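A crude simulation of that "stop and restart where they were" model (my sketch; a pickle file stands in for non-volatile RAM):

```python
import os
import pickle

IMAGE = "state.pickle"  # stand-in for persistent, processor-local RAM

# "Power on": if the image exists, the program's state is simply there.
if os.path.exists(IMAGE):
    with open(IMAGE, "rb") as f:
        state = pickle.load(f)
else:
    state = {"counter": 0}  # the one and only cold start

state["counter"] += 1
print("run number", state["counter"])

# "Power off": nothing is lost; the next run resumes from here.
with open(IMAGE, "wb") as f:
    pickle.dump(state, f)
```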
This stuff *requires* OSes to be fundamentally re-designed, so we might as well get on with it. Embrace change, not fight it. Try to move on to ideas based on the best ones of the 1980s, because currently, we're using upgraded 1960s technology.