Why FOSS OSes often don't have power management as good as proprietary ones
It may seem odd but it's not.
Haiku is a recreation of a late-1990s OS. News for you: in the 1990s and before then, computers didn't do power management.
The US government had to institute a whole big programme to get companies to add power management.
https://en.wikipedia.org/wiki/Energy_Star
Aggressive power management is only a thing because silicon vendors lie to their customers. Yes, seriously.
From the mid-1970s for about 30 years, adding more transistors meant computers got faster. CPUs went from 4-bit to 8-bit to 16-bit to 32-bit. Then there was a pause while they gained onboard memory management (the Intel 80386/Motorola 68030 generation), then pipelined execution, onboard hardware floating point and onboard L1 cache (the 80486/68040 generation), then superscalar execution (the Pentium), then out-of-order execution and near-board L2 cache (the Pentium II), then onboard L2 (the Pentium III). Then they ran out of ideas to spend CPU transistors on, so the budget went on addressing more RAM instead, meaning we needed 64-bit CPUs to track it.
The Pentium 4 was an attempt to crank clock speed as high as it would go, running as fast as possible and accepting a low IPC (instructions per clock). It was nicknamed the fan heater. So Intel US pivoted to Intel Israel's low-power laptop chip with its aggressive power management. Voilà: the Core and then Core 2 series.
Then, circa 2006-2007, big problem. 64-bit chips had loads of cache on board; they were superscalar, decomposing x86 instructions into micro-ops and resequencing them for optimal execution with branch prediction; they had media and 3D extensions like MMX, SSE and SSE2; they could address lots of RAM. There was nowhere left to spend the growing transistor budget.
Result: multicore. Duplicate everything. Tell the punters it's twice as fast. It isn't. Very few things are parallel.
With an SMP-aware OS, like NT or BeOS or Haiku, 2 cores make things a bit more responsive but no faster.
Then came 3 and 4 cores, and onboard GPUs, and then heterogeneous cores, with "efficiency" and "performance" cores... but none of this makes your software run faster. It's marketing.
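To put rough numbers on why the extra cores don't help, here's a minimal sketch of Amdahl's law (linked at the end of the comments below); the parallel fractions are made up for illustration, not measured from any workload:

```python
# Amdahl's law: the overall speedup from n cores is capped by the
# fraction p of the work that can actually run in parallel.
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup with n cores when a fraction p (0..1) parallelises."""
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative parallel fractions only -- not measurements.
for p in (0.1, 0.5, 0.9):
    print(f"{p:.0%} parallel: 2 cores -> {amdahl_speedup(p, 2):.2f}x, "
          f"16 cores -> {amdahl_speedup(p, 16):.2f}x")
```

If only 10% of the work parallelises, 2 cores buy you about 5% and 16 cores barely 10%: a bit more responsive, but no faster.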
You can't run all the components of a modern CPU at once. It would burn itself out in seconds. Most of the chip is turned off most of the time, and there's an onboard management core running its own OS, invisible to user code, to handle this.
Silicon vendors are selling us stuff we can't use. If you turned it all on at once, instant self-destruction. We spend money on transistors that must spend 99% of the time turned off. It's called "dark silicon" and it's what we pay for.
In real life, chips stopped getting Moore's Law speed increases 20 years ago. That's when we stopped getting twice the performance every 18 months.
All the aggressive power management and sleep modes are to help inadequate cooling systems stop CPUs instantly incinerating themselves. Hibernation is to disguise how slowly multi-gigabyte OSes boot. You can't see the slow boot if it doesn't boot so often.
For 20 years the CPU and GPU vendors have been selling us transistors we can't use. Power management is the excuse.
Update your firmware early and often. Get a nice fast SSD. Shut it down when you're not using it: it reboots fast.
Enjoy a fast responsive OS that doesn't try to play the Win/Lin/Mac game of "write more code to use the fancy accelerators and hope things go faster".
A bit of offtopic, I guess?
Not arguing; I just want to share a bit of history that puts "Moore's Law" in slightly better perspective.
And, for sure, I'm drawing on others' materials:
Thus, in its original form, Moore's observation held relevance up until about the late '00s. And it's not that it died because of physical limitations; the world and the market changed significantly. Also, Moore's observation was never intended to live for 50 years: he was forecasting about 10 years out from 1965, so it did well enough, I'd say.
For my part, this is another simplification of various sources. One could check a better compilation on Wikipedia.
Or watch/listen to the (highly emotional) presentation by Bryan Cantrill.
Re: A bit of offtopic, I guess?
Thanks for that!
Yes, old Gordon Moore called it very well indeed, I think, and with only a couple of modest amendments it applied for the best part of half a century. It's a great achievement.
I wasn't trying to invalidate his observation at all.
What I sought to invalidate was the common belief that it means computers get twice as fast every 18 months or so.
And, secondarily, that more cores make your software run faster.
What really matters most, most of the time, for most software, is single-thread performance. The rate of improvement in single-thread performance fell off a cliff about 20 years ago now, and apart from the modest bump from Apple Silicon, nothing much has changed.
From roughly 1975 to 2005, as CPUs got bigger, each new generation arrived every 18 months to 2 years and was about twice as quick as the old one.
From roughly 2005 to 2025 and continuing, every new generation is about 10-15% quicker than the previous one.
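To see how much that compounding matters, here's a back-of-the-envelope sketch; the 12% midpoint and the two-year cadence are my assumptions, not measured figures:

```python
# Back-of-the-envelope: cumulative single-thread speedup over 20 years
# under the two regimes described above. Rates are rough assumptions.
years = 20

old_regime = 2 ** (years / 1.5)   # doubling every 18 months
new_regime = 1.12 ** (years / 2)  # ~12% per generation, every ~2 years

print(f"2x every 18 months for {years} years: ~{old_regime:,.0f}x")
print(f"12% every 2 years for {years} years:  ~{new_regime:.1f}x")
```

That works out to roughly a 10,000-fold improvement under the old regime against about 3-fold under the new one. That is the cliff.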
Part of the problem is that people increasingly regard computers as magic boxes and don't understand how they work.
When people argue, and they often do, I tell them that they need to read Fred Brooks' classic The Mythical Man-Month. It's 336 pages: not that big.
Better still, buy 2 copies, then you can read it twice as fast!
If they have a clue, this makes the point: adding more people to a team does not make the team work faster, just as owning more copies of a book does not help you read it faster, and so adding more CPUs to a computer does not make it compute faster.
Secondly, there's another problem:
Chips stopped getting dramatically quicker around the Core 2 Duo, but since then RAM has got a lot bigger and a little faster, and hard disks have been largely replaced by SSDs. Both of these things make computers much faster, but not because the processors are quicker.
It's not an illusion. It's real but it's misleading. It hides the real problem.
The point about dark silicon you can't use is inspired by this talk by Sophie Wilson:
https://www.youtube.com/watch?v=6lOnpQgn-9s
Re: A bit of offtopic, I guess?
Oh they are.
But not as much as you might think.
I spent a good chunk of the mid-1990s working out how to measure this stuff.
Things like I/O -- disk bandwidth and response time, amount of cache -- have far more immediate, direct and perceptible effects than you might expect.
I caught Evesham Micros trying to sneak an engineering sample of the as-yet-unnamed and unannounced but rumoured Pentium MMX into a group test of Pentiums, just because doubling the on-chip L1 cache size resulted in a global 15% speedup of all apps, and of Windows itself.
The MMX instruction set itself was a toy and did no good to anyone. Hell, only now, 30 years later, is AVX-512 starting to matter, and Intel screwed that one up as well.
A dual-processor machine is palpably more responsive to use than a single core. Background tasks working on cached data, so they don't hit the disk, can help. If they need to hit the disk, it is -- to an approximation -- all over, and your machine will slow down as badly as a 486 with Win95. (Scaled up.)
By and large, background tasks are not intensive on anything. (Except antivirus.) So give them one core for them all to share and things get quicker.
Server CPUs now have dozens of cores. Desktop ones don't because they do not help.
Give those background tasks 2 or 3 cores, and nothing happens.
There are damned good reasons mainstream desktop CPUs are still 2-core/4-thread or 4-core/8-thread: it takes real effort and skill to create any kind of task that all those extra cores are any faster at! They are genuinely hard to use at all.
This is a tiny bit of the genius of Apple, even now.
Its Arm64 chips have performance and efficiency cores and the OS is smart enough to schedule background stuff on the slow cores.
Result: nothing. You can't tell because, as I am saying, it doesn't matter. But now your battery lasts longer.
To extend my favourite metaphor...
To understand why more cores don't make your computer faster, read Fred Brooks' The Mythical Man-Month.
But it's quite long -- so buy 2 copies so you can read it twice as fast!
In fact, buy 2 more in LARGE PRINT, prop them up further away, and now you can read it in a quarter of the time... ;-)
Re: A bit of offtopic, I guess?
P.S. There is a law for this: it is not a matter of perception but something objective and measurable.
https://en.wikipedia.org/wiki/Amdahl%27s_law