Why FOSS OSes often don't have power management as good as proprietary ones
It may seem odd but it's not.
Haiku is a recreation of a late-1990s OS. News for you: in the 1990s and earlier, computers didn't do power management.
The US government had to institute a whole big programme to get companies to add power management.
https://en.wikipedia.org/wiki/Energy_Star
Aggressive power management is only a thing because silicon vendors lie to their customers. Yes, seriously.
From the mid-1970s, for about 30 years, adding more transistors meant computers got faster. CPUs went from 4-bit to 8-bit to 16-bit to 32-bit. Then there was a pause while they gained onboard memory management (the Intel 80386/Motorola 68030 generation), then onboard hardware floating point and L1 cache (the 80486/68040 generation), then superscalar execution (the Pentium), then L2 cache on the CPU package (the Pentium II), then L2 on the die itself (the Pentium III). Then they ran out of ideas to spend CPU transistors on, so the growth went into RAM instead, meaning we needed 64-bit CPUs to address it.
The Pentium 4 was an attempt to crank this as high as it would go, running as fast as possible and accepting a low IPC (instructions per clock). It was nicknamed "the fan heater". So Intel US pivoted to Intel Israel's low-power laptop chip, the Pentium M, with its aggressive power management. Voilà: the Core and then the Core 2 series.
Then, circa 2006-2007, big problem. 64-bit chips had loads of cache on board; they were superscalar, decomposing x86 instructions into micro-ops and resequencing them for optimal execution with branch prediction; they had media and 3D extensions like MMX, SSE and SSE2; they could address lots of RAM; and there was nowhere left to spend the growing transistor budget.
Result: multicore. Duplicate everything. Tell the punters it's twice as fast. It isn't. Very few things are parallel.
With an SMP-aware OS, like NT or BeOS or Haiku, 2 cores make things a bit more responsive but no faster.
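There's a formula for this: Amdahl's law (linked at the bottom of the comments below). If only a fraction p of the work can run in parallel, n cores give you a speedup of 1 / ((1 − p) + p/n). A minimal C sketch, assuming an illustrative guess of p = 0.5 for an interactive desktop workload, not a measurement:

```c
/* Amdahl's law: the speedup from n cores when only a fraction p of
   the work can run in parallel: S(n) = 1 / ((1 - p) + p / n).
   The p = 0.5 here is an illustrative guess, not a measurement. */
#include <stdio.h>

static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void)
{
    const double p = 0.5;                   /* half the work parallelises */
    const int cores[] = { 1, 2, 4, 8, 64 };
    const size_t ncases = sizeof cores / sizeof cores[0];

    for (size_t i = 0; i < ncases; i++)
        printf("%2d cores: %.2fx speedup\n", cores[i], amdahl(p, cores[i]));
    return 0;
}
```

With half the work serial, 2 cores give you 1.33x, 8 cores give you 1.78x, and even 64 cores never quite reach 2x. That is the whole point.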
Then came 3 and 4 cores, and onboard GPUs, and then heterogeneous cores, with "efficiency" and "performance" cores... but none of this makes your software run faster. It's marketing.
You can't run all the components of a modern CPU at once. It would burn itself out in seconds. Most of the chip is turned off most of the time, and there's an onboard management core running its own OS, invisible to user code, to handle this.
Silicon vendors are selling us stuff we can't use. If you turned it all on at once, instant self-destruction. We spend money on transistors that must spend 99% of the time turned off. It's called "dark silicon" and it's what we pay for.
In real life, chips stopped getting automatic Moore's Law speed increases about 20 years ago, when Dennard scaling (ever-higher clock speeds at the same power density) broke down. That's when we stopped getting twice the performance every 18 months.
All the aggressive power management and sleep modes exist to help inadequate cooling systems stop CPUs from instantly incinerating themselves. Hibernation is there to disguise how slowly multi-gigabyte OSes boot: you can't see the slow boot if it doesn't boot very often.
For 20 years the CPU and GPU vendors have been selling us transistors we can't use. Power management is the excuse.
Update your firmware early and often. Get a nice fast SSD. Shut it down when you're not using it: it reboots fast.
Enjoy a fast responsive OS that doesn't try to play the Win/Lin/Mac game of "write more code to use the fancy accelerators and hope things go faster".
Re: A bit of offtopic, I guess?
Oh they are.
But not as much as you might think.
I spent a good chunk of the mid-1990s working out how to measure this stuff.
Things like I/O -- disk bandwidth and response time, amount of cache -- have far more immediate, direct, and perceptible effects than you might expect.
I caught Evesham Micros trying to sneak an engineering sample of the as-yet-unnamed, unannounced but rumoured Pentium MMX into a group test of Pentiums, just because doubling the on-chip L1 cache size produced a global 15% speedup across all apps and Windows itself.
The MMX instruction set itself was a toy and did no good to anyone. Hell, only now, 30 years later, is AVX-512 starting to matter, and Intel screwed that one up as well.
A dual-processor machine is palpably more responsive in use than a single-core one. Background tasks working on cached data, so they don't hit the disk, can help. If they need to hit the disk, it is -- to an approximation -- all over, and your machine will slow down as badly as a 486 running Win95. (Scaled up.)
By and large, background tasks are not intensive on anything. (Except antivirus.) So give them one core for them all to share, and things get quicker.
Server CPUs now have dozens of cores. Desktop ones don't because they do not help.
Give those background tasks 2 or 3 cores, and nothing happens.
There are damned good reasons mainstream desktop CPUs are still 2-core/4-thread or 4-core/8-thread: it takes real effort and skill to create any kind of task that the extra hardware is any faster at. It is genuinely hard to use at all.
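On Linux, the "one core for all the background stuff" trick can be done by hand with CPU affinity. A minimal Linux-only sketch (sched_setaffinity() is not a portable interface), assuming the background job is the calling process:

```c
/* Confine the calling process to CPU 0, so background work shares a
   single core and stays off the cores doing interactive work.
   Linux-only: sched_setaffinity() is not a portable interface. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);                       /* allow CPU 0 only */

    if (sched_setaffinity(0, sizeof set, &set) == -1) {
        perror("sched_setaffinity");
        return 1;
    }

    /* ... run the background task here ... */
    return 0;
}
```

The same thing from a shell, with no code at all: `taskset -c 0 some-background-job`.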
This is a tiny bit of the genius of Apple, even now.
Its Arm64 chips have performance and efficiency cores and the OS is smart enough to schedule background stuff on the slow cores.
Result: nothing. You can't tell because, as I am saying, it doesn't matter. But now your battery lasts longer.
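For the curious: on Apple platforms the hint is a quality-of-service class rather than explicit core pinning. A minimal Darwin-only sketch using Apple's pthread QoS extension; on Apple silicon, background-QoS threads get steered towards the efficiency cores:

```c
/* Mark the current thread as background work.  On Apple silicon the
   scheduler uses this hint to prefer the efficiency cores.
   Darwin-only: pthread_set_qos_class_self_np() is an Apple extension. */
#include <pthread.h>
#include <pthread/qos.h>
#include <stdio.h>

int main(void)
{
    if (pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0) != 0) {
        fprintf(stderr, "could not set QoS class\n");
        return 1;
    }

    /* ... indexing, backups, sync: work nobody is waiting on ... */
    return 0;
}
```

Which is exactly the division of labour described above: the slow cores get the work nobody is waiting for.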
To extend my favourite metaphor...
To understand why more cores don't make your computer faster, read Fred Brooks' The Mythical Man-Month.
But it's quite long -- so buy 2 copies so you can read it twice as fast!
In fact, buy 2 more in LARGE PRINT and prop them up further away, and now you can read in a quarter of the time... ;-)
Re: A bit of offtopic, I guess?
P.S. There is a law for this; it is not a matter of perception but objective and measurable.
https://en.wikipedia.org/wiki/Amdahl%27s_law