I think there are many reasons. Some examples:

* The fastest code is the code you don't run. Smaller = faster, and we all want faster. Moore's Law is over, Dennard scaling isn't affordable any more, and smaller feature sizes are getting absurdly difficult, and therefore expensive, to fab. So if we want our computers to keep getting faster, as we've got used to over the last 40-50 years, then the only way to keep delivering that will be to ruthlessly optimise, shrink, and find more efficient ways to implement what we've got used to. Smaller systems are better for performance.

* The smaller the code, the less there is to go wrong. Smaller doesn't just mean faster; it should mean simpler and cleaner too. Less to go wrong. Easier to debug. Wrappers and VMs and bytecodes and runtimes are bad: they make life easier, but they are less efficient and make issues harder to troubleshoot. Part of the Unix philosophy is to embed the KISS principle. So that's performance and troubleshooting. We aren't done.

* The less you run, the smaller the attack surface. Smaller code and less code mean fewer APIs, fewer interfaces, fewer points of failure. Look at djb's decades-long policy of offering rewards to people who find holes in qmail or djbdns. Look at OpenBSD. We all need better, more secure code. Smaller, simpler systems built from fewer layers mean more security, less attack surface, less to audit.

Higher performance, easier troubleshooting, and better security. That's three reasons. Practical examples...

The Atom editor spawned an entire class of app: Electron apps, Javascript on Node, bundled with Chromium. Slack, Discord, VSCode: these are apps used by tens to hundreds of millions of people now. Look at how vast they are. Balena Etcher is a, what, nearly 100 MB download to write an image to a USB stick? Native apps like Rufus do it in a few megabytes. Smaller ones like USBimager do it in hundreds of kilobytes. A dd command does it in under 100 bytes. (A sketch of the core task appears at the end of this post.)

Now some of the people behind Atom have written Zed. It's 10% of the size and 10x the speed, in part because it's a native Rust app. The COSMIC desktop looks like GNOME and works like GNOME Shell, but it's smaller, faster and more customisable because it's native Rust code; GNOME Shell is Javascript running on an embedded copy of Mozilla's Javascript runtime.

Just as the dotcoms wanted to dis-intermediate business, removing middlemen and distributors for faster sales, we could use some disintermediation in our software. Fewer runtimes, and better, smarter compiled languages, so we can trap more errors and have faster, safer native code. Smaller, simpler, cleaner, fewer layers, fewer abstractions: these are all good things, and they are all desirable.

Dennis Ritchie and Ken Thompson knew this. That's why Research Unix evolved into Plan 9, which puts far more through the filesystem in order to remove whole categories of API. Everything's in a container all the time; the filesystem abstracts the network, the GUI and more. It has under 10% of the syscalls of Linux, the kernel is about 5 MB of source, and yet much of what Kubernetes does is already in there. Then they went further: they replaced C too, made a simpler, safer language, embedded its runtime right into the kernel, made binaries CPU-independent, and turned the entire network-aware OS into a runtime to compete with the JVM, so it could run as a browser plugin as well as a bare-metal OS.
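To make "the filesystem abstracts the network" concrete, here is a from-memory sketch in Plan 9 C of fetching a web page; treat the details as approximate, the shape is the point. There is no socket API: dial() is a small library routine that does the /net/tcp/clone-and-ctl dance for you and hands back an ordinary file descriptor. The host and request are just placeholders.

```c
/* Plan 9 C, not POSIX: fetch a page with no networking API beyond
 * ordinary files under /net. A rough sketch, not production code.
 */
#include <u.h>
#include <libc.h>

void
main(void)
{
    int fd;
    long n;
    char buf[8192];

    fd = dial("tcp!example.com!80", 0, 0, 0);   /* the network is a filesystem under /net */
    if(fd < 0)
        sysfatal("dial: %r");
    fprint(fd, "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");
    while((n = read(fd, buf, sizeof buf)) > 0)
        write(1, buf, n);                       /* fd 1 is standard output, as in Unix */
    exits(nil);
}
```

And because the network really is just files, you can import another machine's /net and dial out through its stack, no special VPN software involved.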
Now we have ubiquitous virtualisation, so lean into it: separate domains. If your user-facing OS only runs in a VM, then it doesn't need a filesystem or hardware drivers, because it won't see hardware, only virtualised facilities, so rip all that stuff out. Your container host doesn't need to have a console or manage disks. This is what we should be doing. This is what we need to do. Hack away at the code complexity. Don't add functionality; remove it. Simplify it. Enforce standards by putting them in the kernel and removing dozens of overlapping implementations. Make codebases that are smaller and readable by humans. Leave the vast bloated stuff to the commercial companies and proprietary software, where nobody gets to read it except LLM bots anyway.
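As a footnote to the Etcher/Rufus/USBimager/dd comparison above, here is the sketch promised earlier: the essential task is a read/write loop copying an image onto a raw device. This is a minimal POSIX C sketch of my own, not any of those tools' code; the device path is whatever your USB stick is, and there is deliberately no progress bar, no verify pass and no GUI.

```c
/* Copy an image file to a raw device, block by block.
 * Run as, e.g.: writeimg image.img /dev/sdX   (names are illustrative)
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static char buf[1 << 20];                 /* 1 MiB per read */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s image device\n", argv[0]);
        return 1;
    }
    int in = open(argv[1], O_RDONLY);
    int out = open(argv[2], O_WRONLY);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0) {
        ssize_t done = 0;
        while (done < n) {                /* write() may be partial */
            ssize_t w = write(out, buf + done, n - done);
            if (w < 0) {
                perror("write");
                return 1;
            }
            done += w;
        }
    }
    fsync(out);                           /* flush to the stick before unplugging */
    close(in);
    close(out);
    return 0;
}
```

The dd one-liner, `dd if=image.img of=/dev/sdX bs=4M`, is the same loop, already written.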
[Adapted from an HN comment.]
Re: Yes ...
Date: 2025-12-15 10:11 pm (UTC)

Quite so.

no subject
Date: 2025-12-15 10:13 pm (UTC)

Yes, unfortunately, you're probably right there.
Personally I reckon we'll be living in Antarctic caves or mud huts in another 50 years at the rate we're going, but if we still have computers at all, I think the creator of Collapse OS is onto something:
https://collapseos.org/
no subject
Date: 2026-01-23 04:17 pm (UTC)

While this APL/Lisp system is not implemented in the aggressively simple way of Unix utilities, it's a foundation on which you can write programs in a few lines of code to do what might otherwise require hundreds. This approach could power a wide range of software and reduce the large amount of code currently written in languages like C and Java, which lack the expressive power of APL. Elegance in programming is therefore a balancing act: by adding complexity in the right place, you can substantially reduce it in many other places.
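A toy illustration of the expressiveness gap being described here (my example, not the commenter's): in APL, the sum of the squares of a vector V is the single expression +/V×V, while in a scalar language such as C the same idea is spelled out element by element. Multiply that gap across string handling, tables and aggregation and you get the few-lines-versus-hundreds effect.

```c
/* Sum of squares, written out element by element.
 * In APL this whole function is the expression  +/V×V
 * (multiply V by itself elementwise, then sum-reduce).
 */
#include <stddef.h>

double sum_of_squares(const double *v, size_t n)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += v[i] * v[i];
    return total;
}
```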
You mentioned "enforce standards by putting them in the kernel", but that will be a tall order under the Unix model. There is an example of this in z/OS, the operating system that runs IBM's mainframes, where text files are implemented as "sequential data sets", with each line having a fixed width. This makes it fast to find text at a given line number (as the sketch below illustrates), and the regularity makes programming convenient, but to be practical it has to be enforced at the OS level. Historically, Unix represented the end of significant developments in operating systems: it froze computer systems at a low, fixed level of abstraction, in a framework that doesn't permit extension of the base system.
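To show why fixed-width records make "find line N" cheap, here is a generic C sketch, not z/OS code: when every record has the same length, the byte offset of a line is plain arithmetic, so one seek replaces scanning the file for newlines. The 80-byte record length and the helper name are made up for illustration.

```c
/* Fixed-length records: record N starts at byte (N - 1) * RECLEN, so a
 * single seek finds it. With newline-delimited Unix text you have to
 * read everything before line N just to learn where it starts.
 */
#include <stdio.h>

enum { RECLEN = 80 };                     /* every record is exactly 80 bytes */

/* Read record `lineno` (counting from 1) into out[RECLEN + 1]. */
int read_record(FILE *f, long lineno, char out[RECLEN + 1])
{
    if (fseek(f, (lineno - 1) * (long)RECLEN, SEEK_SET) != 0)
        return -1;
    if (fread(out, 1, RECLEN, f) != RECLEN)
        return -1;
    out[RECLEN] = '\0';
    return 0;
}
```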
As I described above, improving the elegance of systems is a balancing act in which you shift complexity to different points within the system to relieve the overall load. A model like Unix has parts of its foundation that are difficult to change, and this is what leads to the convoluted systems that have been built on top of Unix-like OSes (Docker, Flatpak, etc.) in an attempt to mitigate their inflexibility. The answer isn't so much to "make everything simpler" following a Unix-like model as it is to re-evaluate how OS design is done at the baseline and to shift the implementation of features to where it is appropriate. There are systems, like IBM's Z and i hardware lines, where virtualization and resource-sharing are simple because they're built in at the base level: you can pretty much just plug computers together, have them act as a unified system, and then divide it into many virtual instances, with less than 1% of the supporting software required on x86 and Linux. This kind of design requires more work up front but leads to huge payoffs in reduced complexity later.
Unlike the commenter above, I don't believe these problems are intractable; rather, at some point soon there will be no choice but to solve them. Software is approaching a crisis of coherence over its complexity, especially as major software vendors cut jobs. Getting out of this mess will require reconsidering priors.