Hmmm. For the first time ever, really, I've hit the limits of modern vs. decade-old wifi and networking.

My home broadband is 500Mb/s. Just now, what with quarantine and so on, I have had to set up a home office in our main bedroom. My "spare" Mac, the Mac mini, has been relegated to the guest room and my work laptop is set up on a desk in the bedroom. This means I can work in there while Jana looks after Ada in the front room, without disturbing me too much.

(Aside: I'm awfully glad I bought a flat big enough to allow this, even though my Czech friends and colleagues, and realtor, all thought I was mad to want one so big.)

The problem was that I was only getting 3/5 bars of wifi signal on the work Dell Latitude, and some intermittent connectivity problems – transient outages and slowdowns. Probably this is when someone uses their microwave oven nearby or something.

It took me some hours of grovelling around on my hands and knees – which is rather painful if one knee has metal bits in it – but I managed to suss out the previous owners' wiring scheme. I'd worked out that there was a cable to the middle room, and connected it, but I couldn't find the other end of the cable to the master bedroom.

So, I dug out an old ADSL router that one of my London ISPs never asked for back: a Netgear DGN-1000. According to various pages Google found, this has a mode where it can be used as a wireless repeater.

Well, not on mine. The hidden webpage is there, but the bridge option isn't. Dammit. I should have checked before I updated its firmware, shouldn't I?

Ah well. There's another old spare router lying around, an EE BrightBox, and this one can take an Ethernet WAN – it's the one that firewalled my FTTC connection. It does ADSL as well but I don't need that here. I had tried and failed to sell this one on Facebook, which meant looking it up and discovering that it can run OpenWrt.

So I tried it. It's quite a process -- you have to enable a tiny hidden webserver in the bootloader, use that to unlock the bootloader, then use the unlocked bootloader to load a new ROM. I did quite a lot of reading and discovered that OpenWrt has driver issues on this box. It runs, but apparently ADSL doesn't work (don't care, don't need it), and its wifi chip isn't fully supported: with the FOSS driver it maxes out at 54Mb/s.

Sounds like quite a lot, but it isn't when your broadband is half-gigabit.

So I decided to see what could be done with the standard firmware, with its closed-source Broadcom wifi driver.

(Broadcom may employ one of my Great Heroines of Computing, the remarkable Sophie Wilson, developer of the ARM processor, but their record on open-sourcing drivers is not good.)

So I found a creative combination of settings to turn the thing into a simple access point as it was, without hacking it. Upstream WAN on Ethernet... OK. Disable login... OK. Disable routing, enable bridging... OK.

Swaths of the web interface are disappearing as I go. Groups of fields and even whole tabs vanish each time I click OK. Disable firewall... OK. Disable NAT... OK. Disable DHCP... OK.

Right, now it just bridges whatever on LAN4 onto LAN1-3 and wifi. Fine.

Connect it up to the live router and try...

And it works! I have a new access point, and 2 WLANs, which isn't ideal -- but the second WLAN works, and I can connect and get an Internet connection. Great!

So, I try through the wall. Not so good.

More crawling around and I find a second network cable in the living room that I'd missed. Plug it in, and the cable in the main bedroom comes alive! Cool!

So, move the access point in there. Connect to it, test... 65-70 Mb/s. Hmm. Not that great. Try a cable to it. 85 Mb/s. Uninspiring.

Test the wifi connection direct to the main router...

Just over 300 Mb/s.

Ah.

Oh bugger!

In other words, after some three hours' work and a fair bit of swearing, my "improved", signal-boosted connection is at best one-fifth as fast as the original one.
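For anyone who wants to reproduce this kind of comparison, the simplest honest test is just to time a big transfer between two machines on the LAN and do the arithmetic. Here's a minimal sketch in Python of the sort of test I mean -- a bare TCP sender and receiver, timed at the receiving end. The port number and transfer size are arbitrary choices of mine, and it measures raw socket throughput, not whatever a speed-test website reports.

    # Minimal two-machine TCP throughput test -- a sketch, not a benchmark suite.
    # Needs Python 3.8+ (for socket.create_server).
    # Run with no arguments on one box (receiver), then with the receiver's IP
    # as the only argument on the other box (sender).
    import socket, sys, time

    PORT = 50007                 # arbitrary
    CHUNK = 64 * 1024            # 64 kB per send
    TOTAL = 256 * 1024 * 1024    # move 256 MB per run

    def receiver():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                received = 0
                start = time.time()
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)
                elapsed = time.time() - start
                print(f"{received * 8 / elapsed / 1e6:.0f} Mb/s over {elapsed:.1f} s")

    def sender(host):
        payload = b"\0" * CHUNK
        sent = 0
        with socket.create_connection((host, PORT)) as conn:
            while sent < TOTAL:
                conn.sendall(payload)
                sent += len(payload)

    if __name__ == "__main__":
        sender(sys.argv[1]) if len(sys.argv) > 1 else receiver()

Run it over the access point, then again over a direct connection to the main router, and the difference is hard to argue with.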

I guess the takeaways are that, firstly, my connection speed really wasn't as bad as I thought, and secondly, I'd hoped that with some ingenuity I could improve it for free with kit I had lying around.

The former invalidates the latter: it's probably not worth spending money on improving something that is not in fact bad in the first place.

I don't recall when I got my fibre connection in Mitcham, but I had it for at least a couple of years, maybe even 3, so I guess around 2011-2012. It was blisteringly quick when I got it, but the speeds fell and fell as more people signed up and the contention on my line rose. Especially at peak times in the evenings. The Lodger often complained, but then, he does that anyway.

But my best fibre speeds in London were 75-80Mb/s just under a decade ago. My cable TV connection (i.e. IP over MPEG (!)) here in Prague is five times faster.

So the kit that was an adequate router/firewall then – it even supports sharing a USB2 disk as a sort of NAS – is now pitifully unequal to the task. It works fine, but its maximum wifi performance would actually reduce the speed of my home wifi, let alone its Fast Ethernet ports, now that I need gigabit just for my broadband.

I find myself reeling a little from this.

It reminds me of my friend Noel helping me to cable up the house in Mitcham when I bought it in about 2002. Noel, conveniently, was a BT engineer.

We used Thin Ethernet. Yes, Cheapernet, yes, BNC connections etc. Possibly the last new deployment of 10base-2 in the world!

Why? Well, I had tons of it. Cables, T-pieces, terminators, BNC network cards in ISA or PCI flavours, etc. I had a Mac with BNC. I had some old Sun boxes with only BNC. It doesn't need switches or hubs or power supplies. One cable is the backbone for the whole building -- so fewer holes in the wall. Noel drilled a hole from the small bedroom into the garage, and one from the garage into the living room, and that was it. Strategic bit of gaffer tape and the job's a good 'un.

In 2002, 10 Mb/s was plenty.

At first it was just for a home LAN. Then I got 512kb/s ADSL via one of those green "manta ray" USB modems. Yes, modem, not router. Routers were too expensive. Only Windows could talk to them at first, so I built a Windows 2000 server to share the connection, with automatic fallback to 56k dialup to AOL (because I didn't pay call charges).

So the 10Mb/s network shared the broadband Internet, using 5% of its theoretical capacity.

Then I got 1Mb/s... Then 2Mb/s... I think I got an old router off someone for that at first. The Win 2K Server was a Pentium MMX/200MHz and was starting to struggle.

Then 8Mb/s, via Bulldog, who were great: fast and cheap, and they not only did ADSL but the landline too, so I could tell BT to take a running jump. (Thereby hangs a tale, too.)

With the normal CSMA/CD Ethernet congestion, already at 8Mb/s, the home 10base-2 network was not much quicker than wifi -- but it was still worth it upstairs, where the wifi signal was weaker.

Then I got a 16Mb/s connection and now the Cheapernet became an actual bottleneck. It failed – the great weakness of 10base-2 is that a cable break anywhere brings down the entire LAN – and I never bothered to trace it. I just kept a small segment to link my Fast Ethernet switch to the old 10Mb/s hub for my testbed PC and Mac. By this point, I'd rented out my small bedroom too, so my main PC and server were in the dining room. That meant a small 100base-T star LAN under the dining table was all I needed.
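As a back-of-the-envelope check on when the coax was doomed, here's the arithmetic as a quick sketch -- nominal link rates only, and the "usable" figure is just my rough allowance for CSMA/CD and protocol overhead, not a measured number.

    # When did the broadband outgrow a shared 10 Mb/s coax segment? A rough sketch.
    LAN_NOMINAL_MBPS = 10.0
    LAN_USABLE_MBPS = LAN_NOMINAL_MBPS * 0.6   # crude allowance for CSMA/CD + overhead

    broadband_steps_mbps = [0.512, 1, 2, 8, 16]   # the ADSL upgrades over the years

    for wan in broadband_steps_mbps:
        share = 100 * wan / LAN_NOMINAL_MBPS
        verdict = "LAN is the bottleneck" if wan > LAN_USABLE_MBPS else "LAN keeps up"
        print(f"{wan:>6} Mb/s broadband = {share:5.1f}% of nominal LAN -- {verdict}")

At 8 Mb/s the shared segment is already marginal; at 16 Mb/s it is flatly in the way, which is exactly how it played out.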

So, yes, I've had the experience of networking kit being obsoleted by advances in other areas before – but only very gradually, and I was starting with 1980s equipment. It's a tribute to great design that early-'80s cabling remained entirely usable for 25 years or more.

But to find that the router from my state-of-the-art, high-speed broadband from just six years ago, when I emigrated, is now hopelessly obsolete and a significant performance bottleneck: that was unexpected and disconcerting.

Still, it's been educational. In several ways.

The thing that prompted the Terry Pratchett reference in my title is this:
https://www.extremetech.com/computing/95913-koomeys-law-replacing-moores-focus-on-power-with-efficiency
https://www.infoworld.com/article/2620185/koomey-s-law--computing-efficiency-keeps-pace-with-moore-s-law.html

A lot of people are still in deep denial about this, but x86 chips stopped getting very much quicker in about 2007 or so. The end of the Pentium 4 era, when Intel realised that they were never going to hit the 5 GHz clock that Netburst was aimed at, and went back to an updated Pentium Pro architecture, trading raw clock speeds for instructions-per-clock – as AMD had already done with the Sledgehammer core, the origin of AMD64.

Until then, since the 1960s, CPU power roughly doubled every 18 months. For 40 years.
8088: 4.77MHz.
8086: 8MHz.
80286: 6, 8, 12, 16 MHz.
80386: 16, 20, 25, 33 MHz.
80486: 25, 33; 40, 50; 66; 75, 100 MHz.
Pentium: 60, 66, 75, 90, 100; 120, 133; 166, 200, 233 MHz.
Pentium II: 233, 266, 300, 333, 350, 400, 450 MHz.
Pentium III: 450 MHz up to 1.4 GHz.
Pentium 4: topped out at about 3.5 GHz.
Core i7 is still around the same, with brief bursts of more, but it can't sustain it.

The reason was that adding more transistors kept getting cheaper, so processors went from 4-bit to 8-bit, to 16-bit, to 32-bit with a memory management unit onboard, to superscalar 32-bit with floating-point and Level 1 cache on-die, then with added SIMD multimedia extensions, then to 32-bit with out-of-order execution, to 32-bit with Level 2 cache on-die, to 64-bit...

And then they basically ran out of go-faster stuff to do with more transistors. There's no way to "spend" that transistor budget and make the processor execute code faster. So, instead, we got dual cores. Then quadruple cores.

More than that doesn't help most people. Server CPUs can have 24-32 or more cores now – twice that or more on some RISC chips – but it's no use in a general-purpose PC, so the effort now goes into reducing power consumption instead.
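One way to see why piling on cores stops helping an ordinary desktop workload is Amdahl's law -- my gloss here, not something from the articles above: if only part of a job can run in parallel, the serial remainder caps the speedup no matter how many cores you throw at it. A quick sketch, with made-up parallel fractions:

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n),
    # where p is the parallelisable fraction of the work and n is the core count.
    def amdahl_speedup(parallel_fraction, cores):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    core_counts = (2, 4, 8, 32)
    for p in (0.50, 0.80, 0.95):    # illustrative values, not measurements
        row = ", ".join(f"{n} cores -> {amdahl_speedup(p, n):.1f}x" for n in core_counts)
        print(f"p = {p:.2f}: {row}")

Even a job that is 95% parallel tops out at about 12x on 32 cores, and very little desktop software is anywhere near 95% parallel.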

Single-core execution speed, the most important benchmark for how fast stuff runs, now gets 10-15% faster every 18 months to 2 years, and has done for about a dozen years. Memory is getting bigger and a bit quicker, spinning HDs now reach vast capacities most standalone PCs will never need, and they're being replaced by SSDs, which themselves are reaching the point where they offer more than most people will ever want.
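Compounding makes the contrast stark. Taking those figures at face value -- a sketch, nothing more:

    # Doubling every 18 months vs. 10-15% every 18-24 months, over a dozen years.
    YEARS = 12

    old_regime = 2 ** (YEARS / 1.5)     # doubling every 18 months
    print(f"Old regime: ~{old_regime:.0f}x faster after {YEARS} years")

    for gain, cadence_years in ((0.10, 2.0), (0.15, 1.5)):
        total = (1 + gain) ** (YEARS / cadence_years)
        print(f"New regime: {gain:.0%} every {cadence_years:g} years "
              f"-> ~{total:.1f}x after {YEARS} years")

That's roughly 250x in a dozen years under the old regime, versus somewhere between roughly 2x and 3x today.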

So my main Mac is 5 years old, and still nicely quick. My spare is 9 years old and perfectly usable. My personal laptops are all 5-10 years old and I don't need anything more.

The improvements are incremental, and frankly, I will take a €150 2.5 GHz laptop over a €1500 2.7 GHz laptop any day, thanks.

But the speeds continue to rise in less-visible places, and now, my free home router/firewall is nearly 10x faster than my 2012 free home router/firewall.

And I had not noticed at all until the last week.
I ran the testing labs for PC Pro magazine from 1995 to 1996, and acted as the magazine's de facto technical editor. (I didn't have enough journalistic experience yet to get the title Technical Editor.)

The first PC we saw at PC Pro magazine with USB ports was an IBM desktop 486 or Pentium -- in late 1995, I think. Not a PS/2 but one of their more boring industry-standard models: an Aptiva, if I remember rightly.
We didn't know what they were, and IBM were none too sure either, although they told us what the weird little tricorn logo represented: Universal Serial Bus.

"It's some new Intel thing," they said. So I phoned Intel UK -- 1995, very little inter-company email yet -- and asked, and learned all about it.
But how could we test it, with Windows 95A or NT 3.51? We couldn't.
I think we still had the machine when Windows 95B came out... but the problem was, Windows 95B, AKA "OSR2", was an OEM release. No upgrades. You couldn't officially upgrade 95A to 95B, but I didn't want to lose the drivers or the benchmarks...

I found a way. It involved deleting WIN.COM from C:\WINDOWS -- that was the file SETUP.EXE looked for to see whether there was an existing copy of Windows.

Reinstalling over the top was permitted, though. (In case it crashed badly, I suppose.) So I reinstalled 95B over the top, it picked up the registry and all the settings... and found the new ports.
But then we didn't have anything to attach to them to try them. :-) The iMac wouldn't come out for another 2.5 years yet.
Other fun things I did in that role:
• Discovered Tulip (RIP) selling a Pentium with an SIS chipset that they claimed supported EDO RAM (when only the Intel Triton chipset did). Under threat of a lawsuit, I showed them that while it did "support" it -- it recognised it, printed a little message saying "EDO RAM detected" and worked -- it couldn't actually use it, and benchmarked at exactly the same speed as with cheaper FP-mode RAM.
I think that led to Tulip suing SIS instead of Dennis Publishing. :-)
• Evesham Micros (RIP) sneaking the first engineering sample Pentium MMX in the UK -- before the MMX name had even been settled -- into a grouptest of Pentium 166 PCs. It won handily, by about 15%, which should have been impossible if it was a standard Pentium 1 CPU. But it wasn't -- it was a Pentium MMX with twice as much L1 cache onboard.
Intel was very, very unhappy with naughty Evesham.
• Netscape Communications (RIP) refused to let us put Communicator or Navigator on our cover CD. They didn't know that Europeans pay for local phone calls, so a big download (30 or 40 MB!) cost real money. They wouldn't believe us, and in the end flew two executives to Britain to explain to us that it was a free download and that they wanted to trace who downloaded it.
As acting technical editor, I had to explain to them. Repeatedly.

When they finally got it, it resulted in a panicked trans-Atlantic phone call to Silicon Valley, getting someone senior out of bed, as they finally realised why their download and adoption figures were so poor in Europe.

We got Netscape on the cover CD, the first magazine in Europe to do so. :-) Both Communicator and Navigator, IIRC.
• Fujitsu supplied the first PC OpenGL accelerator we'd ever seen. It cost considerably more than the PC. We had no way to test it -- OpenGL benchmarks for Windows hadn't been invented yet. (It wasn't very good in Quake, though.)
I originally censored the company names, but I checked, and the naughty or silly ones no longer exist, so what the hell...
Tulip were merely deceived and didn't verify. Whoever picked SIS was inept anyway -- they made terrible chipsets which were slow as hell.

(Years later, they upped their game, and by C21 there really wasn't much difference, unless you're a fanatical gamer or overclocker.)
Lemme think... other fun anecdotes...
PartitionMagic caused me some fun. When I joined (at Issue 8) we had a copy of v1 in the cupboard. Its native OS was OS/2 and nobody cared, I'm afraid. I read what it claimed and didn't believe it so I didn't try it.
Then v2 arrived. It ran on DOS. Repartitioning a hard disk when it was full of data? Preposterous! Impossible!
So I tried it. It worked. I wrote a rave review.
It prompted a reader letter.
"I think I've spotted your April Fool's piece. A DOS program that looks exactly like a Windows 95 app? Which can repartition a hard disk full of data? Written by someone whose name is an anagram of 'APRIL VENOM'? Do I win anything?"
He won a phonecall from me, but he did teach me an anagram of my name I never knew.
It led me to run a tip in the mag.

At the time, a 1.2 GB hard disk was the most common size (and a Quantum Fireball the fastest model for the money). Format that as a single FAT16 drive and you got super-inefficient 32 kB clusters. (And in 1995 or early 1996, FAT16 was all you got.)
With PartitionMagic, you could take 200 MB off the end, make it into a 2nd partition, and still fit more onto the C: drive because of the far more efficient 16 kB clusters. If you didn't have PQMagic you could partition the disk that way before installing. The key thing was that C: had to be less than 1 GB. 0.99 GB was fine.
I suggested putting the swap file on D: -- you saved space and reduced fragmentation.
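To see quite why the cluster size matters, here's a rough sketch of the slack-space arithmetic -- my illustration, with a made-up file count; the cluster sizes are the usual FAT16 defaults. Every file occupies a whole number of clusters, so on average each file wastes about half a cluster.

    # FAT16 slack-space arithmetic -- illustrative only.
    def cluster_size_kb(partition_mb):
        """Approximate default FAT16 cluster size for a partition of this size."""
        if partition_mb <= 128:
            return 2
        if partition_mb <= 256:
            return 4
        if partition_mb <= 512:
            return 8
        if partition_mb <= 1024:
            return 16
        return 32           # anything from 1 GB up to the 2 GB FAT16 limit

    def slack_mb(num_files, cluster_kb):
        """On average each file wastes about half a cluster."""
        return num_files * (cluster_kb / 2) / 1024

    files = 20_000   # a guess at a mid-90s Windows 95 install plus apps and data
    for partition_mb in (1200, 990):
        ck = cluster_size_kb(partition_mb)
        print(f"{partition_mb} MB partition: {ck} kB clusters, "
              f"~{slack_mb(files, ck):.0f} MB lost to slack")

On those assumptions, shrinking C: below 1 GB halves the cluster size and claws back on the order of 150 MB of slack on C: alone -- and the 200 MB you moved to D: isn't lost, it's just holding the swap file.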
One of our favourite suppliers, Panrix, questioned this. They reckoned that having the swap file on the outer, longer tracks of the drive made it slower, due to slower access times and slower transfer speeds. They were adamant.
So I got them to bring in a new, virgin PC with Windows 95A, I benchmarked it with a single big, inefficient C: partition, then I repartitioned it, put the swapfile on the new D: drive, and benchmarked it again. It was the same to 2 decimal places, and the C drive had about 250MB more free space.
Panrix apologised and I gained another geek cred point. :-)
I love my Android phone in some ways - what it can do is wonderful. The form factor of my Nokia E90 was better in every single way, though. Give the Nokia a modern CPU, replace its silly headphone socket, MiniUSB port & Nokia charging port with a standard jack & a MicroUSB, make the internal screen a touchscreen, and I would take your arm off in my haste to acquire it.
I have spent a lot of time and effort this year on learning my way around the current generation of Windows Server OSs, and the end result is that I've learned that I really profoundly dislike them.

Personally, I found the server admin tools in NT 3 and 4 to be quite good, fairly clean and simple and logical - partly because they were built on LAN Manager, which was IBM-designed, with a lot of experience behind it.

Since Windows 2000 Server, the new basis, Active Directory, is very similar to that of Exchange. Much of the admin revolves around things like Group Policies and a ton of proprietary extensions on top of DNS. The result is a myriad of separate management consoles, all a bit different, most of them quite limited, not really following Windows GUI guidelines because they're not true Windows apps; they're snap-ins to the limited MS Management Console. Just like Exchange Server, there are tons and tons of dialog boxes with 20 or 30 or more tabs each, and both the parent console and many of the dialogs contain trees with a dozen-plus layers of hierarchy.

It's an insanely complicated mess.

The main upshot of Microsoft's attempts to make Windows Server into something that can run a large, geographically-dispersed multi-site network is that the company has successfully brought the complexity of managing an unknown Unix server to Windows.

On Unix you have an unknown but large number of text files in an unknown but large number of directories, which use a wide variety of different syntaxes, and which have a wide variety of different permissions on them. These control an unknown but large number of daemons from multiple authors and vendors which provide your servers' various services.

Your mission is to memorise all the possible daemons, their config files' names, locations and syntaxes, and use low-level editing tools from the 1960s and 1970s to manage them. The boon is that you can bring your own editors, that it's all easily remotely manageable over multiple terminal sessions, and that components can in many cases be substituted one for another in a somewhat plug-and-play fashion. And if you're lucky enough to be on a FOSS Unix, there are no licensing issues.

These days, the Modern way to do this is to slap another layer of tools over the top, and use a management daemon to manage all those daemons for you, and quite possibly a monitoring daemon to check that the management daemon is doing its job, and a deployment daemon to build the boxes and install the service, management and monitoring daemons.

On Windows, it's all behind a GUI and now Windows by default has pretty good support for nestable remote GUIs. Instead of a myriad of different daemons and config files, you have little or no access to config files. You have to use an awkward and slightly broken GUI to access config settings hidden away in multiple Registry-like objects or databases or XML files, mostly you know or care not where. Instead of editing text files in your preferred editor, you must use a set of slightly-broken irritatingly-nonstandard and all-subtly-different GUIs to manipulate vast hierarchical trees of settings, many of which overlap - so settings deep in one tree will affect or override or be overridden by settings deep in another tree. Or, deep in one tree there will be a whole group of objects which you must manipulate individually, which will affect something else depending on the settings of another different group of objects elsewhere.

Occasionally, at some anonymous coder's whim, you might have to write some scripts in a proprietary language.

When you upgrade the system, the entire overall tree of trees and set of sets will change unpredictably, requiring years of testing to eliminate as many as possible of the interactions.

But at least in most installs it will all be MS tools running on MS OSs - the result of MS' monopoly over some two decades being a virtual software monoculture.

But of course often you will have downversion apps running on newer servers, or a mix of app and server OS versions, so some machines are running 2000, some 2003, some 2008 and some 2008R2, and apps could span a decade or more's worth of generations.

And these days, it's anyone's guess if the machine you're controlling is real or a VM - and depending on which hypervisor, you'll be managing the VMs with totally different proprietary toolsets.

If you do have third-party tools on the servers, they will either snap into the MS management tools, adding a whole ton of new trees and sets to memorise your way around, or they will completely ignore them and offer a totally different GUI - typically one simplified to idiot level, such as an enterprise-level backup solution I supported in the spring which has wizards to schedule anything from backups to verifies to restores, but which contains no option anywhere to eject a tape. It appears to assume that you're using a robot library which handles that automatically.

Without a library, ejecting a tape from an actual drive attached to the server required a server reboot.

But this being Windows, almost any random change to a setting anywhere might require a reboot. So, for instance, Windows Terminal Services runs on the same baseline Windows edition, with automatic security patch installation - so all users get prompted to reboot the server, even though they shouldn't have the privileges to actually do so, and the poor old sysadmins, probably in a building miles away or on a different continent, can't find a single time to reboot it when it won't inconvenience someone.

This, I believe, is progress. Yay.

After a decade of this, MS has now decided, of course, that it was wrong all along and that actually a shell and a command line is better. The snag is that it's not learned the concomitant lessons of terseness (like Unix) or of flexible abbreviation (like VMS DCL), or of cross-command standardisation and homogeneity (although to be fair, Unix never learned that, either. "Those who do not know VMS are doomed to reinvent it, poorly," perhaps.) But then, long-term MS users expect the rug to be pulled from under them every time a new generation ships, so they will probably learn that in time.

The sad thing about the proliferation of complexity in server systems, for me, is that it's all happened before, a generation or two ago, but the 20-something-year-olds building and using this stuff don't know their history. Santayana applies.

The last time around, it was Netware 4.

Netware 3 was relatively simple, clean and efficient. It couldn't do everything Netware 2 could do, but it was relatively streamlined, blisteringly fast and did what it did terribly well.

So Novell threw away all that with Netware 4, which was bigger, slower, and added a non-negotiable ton of extra complexity aimed at big corporations running dozens of servers across dozens of sites - in the form of NDS, the Netware Directory Services. Just the ticket if you are running a network the size of Enron's or Lehman Brothers', but a world of pain for the poor self-taught saps running the single servers of millions of small businesses. They all hated it, and consequently deserted Netware in droves. Most went to NT4; Linux wasn't really there yet in 1996.

Now, MS has done exactly the same to them.

When Windows 2000 came around, Linux was ready - but the tiny handful of actual grown-up integrated server distros (such as eSmith, later SME Server) have never really caught on. Instead, there are self-assembly kits and each sysadmin builds their own. It's how it's always been done, why change?

I had hoped that Mac OS X Server might counteract this. It looked like The Right Thing To Do: a selection of the best FOSS server apps, on a regrettably-proprietary but solid base, with some excellent simple admin tools on top, and all the config moved into nice standard network-distributable XML files.

But Apple has dropped the server ball somewhere along the line. Possibly it's not Apple's fault but the deep instinctual conservatism of network and server admins, who would tend to regard such sweeping changes with fear and loathing.

Who knows.

But the current generation of both Unix and Windows server products both look profoundly broken to me. You either need to be a demigod with the patience and deep understanding of an immortal to manage them properly, or just accept the Microsoft way: run with the defaults wherever possible and continually run around patching the worst-broken bits.

The combination of these things is one of the major drivers behind the adoption of cloud services and outsourcing. You move all the nightmare complexity out of your company and your utter dependence on a couple of highly-paid god-geeks, and parcel it off to big specialists with redundant arrays of highly-paid god-geeks. You lose control and real understanding of what's occurring and replace it with SLAs and trust.

Unless or until someone comes along and fixes the FOSS servers, this isn't going to change - it's just going to continue.

Which is why I don't really want to be a techie any more. I'm tired of watching it just spiral downwards into greater and greater complexity.

(Aside: of course, nothing is new under the sun. It was, I believe, my late friend Guy Kewney who made a very plangent comment about this same process when WordPerfect 5 came out. "With WordPerfect 4.2, we've made a good bicycle. Everyone knows it, everyone likes it, everyone says it's a good bicycle. So what we'll do is, we'll put seven more wheels on it."

In time, of course, everyone looked back at WordPerfect 5.1 with great fondness, compared to the Windows version. In time, I'm sure, people will look back at the relative homogeneity of Windows 2003 Server or something with fondness, too. It seems inevitable. I mean, a direct Win32 admin app running on the same machine as the processes it's managing is bound to be smaller, simpler and faster than a Win64 app a decade newer running on a remote host...)
