liam_on_linux: (Default)

So, yesterday I presented my first conference talk since the Windows Show 1996 at Olympia, where I talked about choosing a network operating system — that is, a server OS — for PC Pro magazine.

(I probably still have the speaker's notes and presentation for that somewhere too. The intensely curious may ask, and I may be able to share them as well.)

It seemed to go OK: I had a whole bunch of people asking questions afterwards, commenting or thanking me.

[Edit] Video! https://youtu.be/jlERSVSDl7Y

I have to check the video recording and make some editing marks before it can be published, and I am not sure that the hotel wifi connection is fast or capacious enough for me to do that. However, I'll post it as soon as I can.

Meantime, here is some further reading.

I put together a slightly jokey deck of slides and was very pleasantly impressed at how good LibreOffice Impress is, and at how easy it made it to create and to present them. You can download the 9MB ODP file here:

https://www.dropbox.com/s/xmmz5r5zfmnqyzm/The%20circuit%20less%20travelled.odp?dl=0

The notes are a 110 kB MS Word 2003 document. They may not always be terribly coherent -- some were extensively scripted, some are just bullet points. For best results, view in MS Word (or the free MS Word Viewer, which runs fine under WINE) in Outline mode. Other programs will not show the structure of the document, just the text.

https://www.dropbox.com/s/7b2e1xny53ckiei/The%20Circuit%20less%20travelled.doc?dl=0

I had to cut the talk fairly brutally to fit the time and did not get to discuss some of the operating systems I planned to. You can see some additional slides at the end of the presentation for stuff I had to skip.

Here's a particular chunk of the talk that I had to cut. It's called "Digging deeper" and you can see what I was going to say about Taos, Plan 9, Inferno, QNX and Minix 3. This is what the slides on the end of the presentation refer to.

https://www.dropbox.com/s/hstqmjy3wu5h28n/Part%202%20%E2%80%94%20Digging%20deeper.doc?dl=0

Links I mentioned in the talk or slides

The Unix Haters' Handbook [PDF]: https://simson.net/ref/ugh.pdf

Stanislav Datskovskiy's Loper-OS:  http://www.loper-os.org/

Paul Graham's essays: http://www.paulgraham.com/

Notably his Lisp Quotes: http://www.paulgraham.com/quotes.html

Steve Jobs on the two big things he missed when he visited Xerox PARC:
http://www.mac-history.net/computer-history/2012-03-22/apple-and-xerox-parc/2

Alan Kay interview where he calls Lisp "the Maxwell's Equations of software": https://queue.acm.org/detail.cfm?id=1039523

And what that means: http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/

"In the Beginning was the Command Line" by Neal Stephenson: http://cristal.inria.fr/~weis/info/commandline.html

Author's page: http://www.cryptonomicon.com/beginning.html


Symbolics OpenGenera: https://en.wikipedia.org/wiki/Genera_(operating_system)

How to run it on Linux (some of several such pages):
http://www.jachemich.de/vlm/genera.html
https://loomcom.com/genera/genera-install.html

A brief (13min) intro to OpenGenera by Kalman Reti: https://www.youtube.com/watch?v=o4-YnLpLgtk&t=5s
A longer (1h9m) talk about it, also by him: https://www.youtube.com/watch?v=OBfB2MJw3qg

FOSDEM

Dec. 22nd, 2017 12:22 am
liam_on_linux: (Default)

It might interest folk hereabouts that I've had a talk accepted at February's FOSDEM conference in Brussels. The title is "The circuit less travelled", and I will be presenting a boiled-down, summarised version of my ongoing studies into OS, language and app design, on the thesis of where, historically, the industry made arguably poor (if pragmatic) choices, some interesting technologies that weren't pursued, where it'll go next, and how reviving some forgotten ideas could lend a technological advantage to those trying different angles.

In other words, much of what I've been ranting about on here for the last several years.

It will, to say the least, be interesting to see how it goes down.

SUSE is paying for me to attend, but the talk is not on behalf of them -- it's entirely my own idea and submission. A jog from SUSE merely gave me the impetus to submit an abstract and description.

liam_on_linux: (Default)
Once again, recently, I have been told that I simply cannot write about -- for instance -- the comparative virtues of programming languages unless I am a programmer and I can actually program in them. That that is the only way to judge.

This could be the case, yes. I certainly get told it all the time.

But the thing is that I get told it by very smart, very experienced people who also go on to tell me that I am completely wrong about other stuff where I know that I am right, and can produce abundant citations to demonstrate it. All sorts of stuff.

I can also find other people -- just a few -- who know exactly what I am talking about, and agree, and have written much the same, at length. And their experience is the same as mine: years, decades, of very smart highly-experienced people who just do not understand and cannot step outside their preconceptions far enough to get the point.

It is not just me.

Read more... )
liam_on_linux: (Default)
This is a repurposed CIX comment. It goes on a bit. Sorry for the length. I hope it amuses.

So, today, a friend of mine accused me of getting carried away after reading a third-generation Lisp enthusiast's blog. I had to laugh.

The actual history is a bit bigger, a bit deeper.

The germ was this:

https://www.theinquirer.net/inquirer/news/1025786/the-amiga-dead-long-live-amiga

That story did very well, amazing my editor, and he asked for more retro stuff. I went digging. I'm always looking for niches which I can find out about and then write about -- most recently, it has been containers and container tech. But once something goes mainstream and everyone's writing about it, then the chance is gone.

I went looking for other retro tech news stories. I wrote about RISC OS, about FPGA emulation, about OSes such as Oberon and Taos/Elate.

The more I learned, the more I discovered how much the whole spectrum of commercial general-purpose computing is just a tiny and very narrow slice of what's been tried in OS design. There is some amazingly weird and outré stuff out there.

Many of them still have fierce admirers. That's the nature of people. But it also means that there's interesting in-depth analysis of some of this tech.

It's led to pieces like this which were fun to research:

http://www.theregister.co.uk/Print/2013/11/01/25_alternative_pc_operating_systems/

I found 2 things.

One, most of the retro-computers that people rave about -- from mainstream stuff like Amigas or Sinclair Spectrums or whatever -- are actually relatively homogenous compared to the really weird stuff. And most of them died without issue. People are still making clone Spectrums of various forms, but they're not advancing it and it didn't go anywhere.

The BBC Micro begat the Archimedes and the ARM. Its descendants are everywhere. But the software is all but dead, and perhaps justifiably. It was clever but of no great technical merit. Ditto the Amiga, although AROS on low-cost ARM kit has some potential. Haiku, too.

So I went looking for obscure old computers. Ones that people would _not_ read about much. And that people could relate to -- so I focussed on my own biases: I find machines that can run a GUI, or at least do something with graphics, more interesting than the ones that came before.

There are, of course, tons of the things. So I needed to narrow it down a bit.

Like the "Beckypedia" feature on Guy Garvey's radio show, I went looking for stuff of which I could say...

"And why am I telling you this? Because you need to know."

So, I went looking for stuff that was genuinely, deeply, seriously different -- and ideally, stuff that had some pervasive influence.

Read more... )
And who knows, maybe I’ll spark an idea and someone will go off and build something that will render the whole current industry irrelevant. Why not? It’s happened plenty of times before.

And every single time, all of the most knowledgeable experts said it was a pointless, silly, impractical flash-in-the-pan. Only a few nutcases saw any merit to it. And they never got rich.
liam_on_linux: (Default)
I've written a few times about a coming transition in computing -- the disappearance of filesystems, and what effects this will have. I have not had any constructive dialogue with anyone.

So I am trying yet again, by attempting to rephrase this in a historical context:

There have been a number of fundamental transitions in computing over the years.

1st generation

The very early machines didn't have fixed nonvolatile storage: they had short-term temporary storage, such as mercury delay lines or storage CRTs, and read data from offline, non-direct-access, often non-rewritable media, such as punched cards or paper tape.

2nd generation

Hard disks came along, in about 1953, commercially available in 1957: the IBM RAMAC...

https://en.wikipedia.org/wiki/IBM_305_RAMAC

Now, there were 2 distinct types of directly-accessible storage: electronic (including core store for the sake of argument) and magnetic.

A relatively small amount of volatile storage, in which the processor can directly work on data, and a large amount of read-write non-volatile storage, whose contents must be transferred into volatile storage for processing. You can't add 2 values in 2 disk blocks without transferring them into memory first.
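To make that concrete, here is a toy sketch of my own in Python (not from the original post; the block size and file layout are invented for illustration): the two values live in separate "disk blocks" in a file, and the only way to add them is to pull both into RAM, do the arithmetic there, and write the result back.

    # Toy illustration only (mine, not from the post; block size and layout invented):
    # the CPU cannot do arithmetic on data while it sits in two disk blocks.
    # The values must be read into RAM, added there, and written back out.
    import struct

    BLOCK_SIZE = 512  # assumed block size for this sketch

    def add_two_block_values(path, block_a, block_b):
        with open(path, "r+b") as disk:
            # Transfer both blocks from non-volatile storage into volatile RAM.
            disk.seek(block_a * BLOCK_SIZE)
            a = struct.unpack("<q", disk.read(8))[0]
            disk.seek(block_b * BLOCK_SIZE)
            b = struct.unpack("<q", disk.read(8))[0]

            total = a + b  # the addition happens in memory, never "on disk"

            # Transfer the result back out to non-volatile storage.
            disk.seek(block_a * BLOCK_SIZE)
            disk.write(struct.pack("<q", total))
        return total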

This two-level split is just one of the fundamental models of computer architecture. However, it has been _the_ single standard architecture for many decades. We've forgotten there was ever anything else.

[[

Aside:

There was a reversion to machines with no directly-accessible storage in the late 1970s and early-to-mid 1980s, in the form of 8-bit micros with only cassette storage.

The storage was not under computer control, and was unidirectional: you could load a block, or change the tape and save a block, but in normal use for most people except the rather wealthy, the computer operated solely on the contents of its RAM and ROM.

Note: no filesystems.

(Trying to forestall an obvious objection: later machines, such as the ZX Spectrum 128 and Amstrad PCW, had RAMdisks, and therefore very primitive filesystems, but that was mainly a temporary stage, due to processors that couldn't access >64kB of RAM and the inability to modify their ROMs to support widespread bank-switching, because it would have broken backwards-compatibility.)

]]

Once all machines have this 2-level-store model, note that the 2 stores are managed differently.

Volatile store is not structured as a filesystem, because it is dynamically reconstructed on the fly at every boot. It has little to no metadata.

Permanent store needs to have metadata as well as data. The computer is regularly rebooted, and then, it needs to be able to find its way through the non-volatile storage. Thus, increasingly elaborate systems of indexing.

But the important thing is that filesystems were a solution to a technology issue: managing all that non-volatile storage.

Over the decades it has been overloaded with other functionality: data-sharing between apps, security between users, things like that. It's important to remember that these are secondary functions.

The filesystem is near-universal, but that is an artefact of technological limitations: the fast, processor-local storage was volatile, and non-volatile storage was slow, and large enough that it had to be non-local. Non-volatile storage was managed via APIs and discrete hardware controllers, whose main job was transferring blocks of data from volatile to non-volatile storage and back again.

And that distinction is going away.

The technology is rapidly evolving to the point where we have fast, processor-local storage, in memory slots, appearing directly in the CPUs' memory map, which is non-volatile.

Example -- Flash memory DIMMs:

https://www.theregister.co.uk/2015/11/10/micron_brings_out_a_flash_dimm/

Now, the non-volatile electronic storage is increasing rapidly in speed and decreasing in price.

Example -- Intel XPoint:

https://arstechnica.com/information-technology/2017/02/specs-for-first-intel-3d-xpoint-ssd-so-so-transfer-speed-awesome-random-io/

Note the specs:

Reads as fast as Flash.
Writes nearly the same speed as reads.
Half the latency of Flash.
100x the write lifetime of Flash.

And this is the very first shipping product.

Intel is promising "1,000 times faster than NAND flash, 10 times denser than (volatile) DRAM, and with 1,000 times the endurance of NAND".

This is a game-changer.

What we are looking at is a new type of computer.

3rd generation

No distinction between volatile and non-volatile storage. All storage appears directly in the CPUs' memory map. There are no discrete "drives" of any kind as standard. Why would you? You can have 500GB or 1TB of RAM, but if you turn the machine off, then a day later turn it back on, it carries on exactly where it was.

(Yes, there will be some caching, and there will need to be a bit of cleverness involving flushing those caches, or ACID-style writes, or something.)

It ships to the user with an OS in that memory.

You turn it on. It doesn't boot.

What is booting? Transferring OS code from non-volatile storage into volatile storage so it can be run. There's no need. It's in the processor's memory the moment it's turned on.

It doesn't boot. It never boots. It never shuts down, either. You may have to tell it you're turning it off, but it flushes its caches and it's done. Power off.

No hibernation: it doesn't need to. The OS and all state data will be there when you come back. No sleep states: just power off.

What is installing an OS or an app? That means transferring from slow non-volatile storage to fast non-volatile storage. There is no slow or fast non-volatile storage. There's just storage. All of it that the programmer can see is non-volatile.
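You can approximate this programming model today with memory-mapped files -- and with real NVDIMMs on Linux, a DAX-mounted filesystem lets the mapping bypass the page cache entirely. Here is a rough sketch of my own in Python, with an invented file name standing in for the persistent memory, just to show the shape of it: the data is read, modified and written in place, with an explicit flush as the only concession to caching.

    # Rough sketch only (mine, not from the post). An ordinary file, "state.bin"
    # (an invented name), stands in for the persistent memory; with real NVDIMMs
    # and a DAX filesystem the same idea works without the page cache in between.
    import mmap
    import os

    PATH = "state.bin"
    SIZE = 4096

    # Make sure the backing store exists and is big enough.
    if not os.path.exists(PATH) or os.path.getsize(PATH) < SIZE:
        with open(PATH, "wb") as f:
            f.truncate(SIZE)

    with open(PATH, "r+b") as f:
        mem = mmap.mmap(f.fileno(), SIZE)

        # Work on the data in place: no load/save step, no file format,
        # just bytes at an address. A counter lives at offset 0.
        counter = int.from_bytes(mem[0:8], "little")
        counter += 1
        mem[0:8] = counter.to_bytes(8, "little")

        # The "bit of cleverness involving flushing" mentioned above:
        # make sure the change has actually reached the non-volatile medium.
        mem.flush()
        mem.close()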

This is profoundly different to everything since 1957 or so.

It's also profoundly different from those 1980s 8-bits with their BASIC or Forth or monitor in ROM, because it's all writable.

That is the big change.

In current machines, nobody structures RAM as a filesystem. (I welcome corrections on this point!) Filesystems are for drives. It doesn't matter what kind of drive: hard disk, SSD, SD card, CD, DVD, DVD-RW, whatever. Different filesystems, but all of them need data transferred to and from volatile storage in order to function.

That's going away. The writing is on the wall. The early tech is shipping right now.

What I am asking is, how will it change OS design?

All I am getting back is, "don't be stupid, it won't, OS design is great already, why would we change?"

This is the same attitude the DOS and CP/M app vendors had to Windows.

WordStar and dBase survived the transition from CP/M to MS-DOS.

They didn't survive the far bigger one from MS-DOS to Windows.

WordPerfect Corp and Lotus Corp tried. They still failed and died.

A bigger transition is about to happen. Let's talk about it instead of denying it's coming.
liam_on_linux: (Default)

Acorn pulled out of making desktop computers in 1998, when it cancelled the Risc PC 2, the Acorn Phoebe.

The hardware was complete, but the software wasn't. It was later finished and released as RISC OS 4, an upgrade for existing Acorn machines, by RISC OS Ltd.

By that era, ARM had lost the desktop performance battle. If Acorn had switched to laptops by then, I think it could have remained competitive for some years longer -- 486-era PC laptops were pretty dreadful. But the Phoebe shows that what Acorn was actually trying to build was a next-generation powerful desktop workstation.

Tragically, I must concede that they were right to cancel it. If there had been a default version with 2 CPUs, upgradable to 4, and had that been followed by 6- and 8-core models, they might have made it, but RISC OS couldn't do that, and Acorn didn't have the resources to rewrite RISC OS to do it. A dedicated Linux machine in 1998 would have been suicidal -- Linux didn't even have a FOSS desktop in those days. If you wanted a desktop Unix workstation, you still bought a Sun or the like.

(I wish I'd bought one of the ATX cases when they were on the market.)

Read more... )
liam_on_linux: (Default)
I was a keen owner and fan of multiple Psion PDAs (personal digital assistants – today, I have a Psion 3C, a 5MX and a Series 7/netBook) and several Acorn desktop computers running RISC OS (I have an A310 and an A5000).

I was bitterly disappointed when the companies exited those markets. Their legacies survive -- Psion's OS became Symbian, and I had several Symbian devices, including a Sony-Ericsson P800, plus two Nokias -- a 7700 and an E90 Communicator. The OS is now dead, but Psion's handhelds still survive -- I'll get to them.

I have dozens of ARM-powered devices, and I have RISC OS Open running on a Raspberry Pi 3.

But despite my regret, both Psion's and Acorn's moves were excellent, sensible, pragmatic business decisions.

How many people used PDAs?

How many people now use smartphones?

Read more... )
liam_on_linux: (Default)
A summary of where we are and where we might be going next.

Culled from a couple of very lengthy CIX posts.

A "desktop" means a whole rich GUI with an actual desktop -- a background you can put things on, which can hold folders and icons. It also includes an app launcher, a file manager, generally a wastebin or something, accessory apps such as a text editor, calculator, archive manager, etc. It can mount media and show their contents. It can unmount them again. It can burn rewritable media such as CDs and DVDs.

The whole schmole.

Some people don't want this and use something more basic, such as a plain window manager. No file manager, or they use the command line, or they pick their own, along with their own text editor etc., which are not integrated into a single GUI.

This is still a GUI, still a graphical UI, but may include little or nothing more than window management. Many Unix users want a bunch of terminals and nothing else.

A desktop is an integrated suite, with all the extras, like you get with Windows or a Mac, or back in the day with OS/2 or something.

The Unix GUI stack is as follows:
Read more... )
liam_on_linux: (Default)
So in a thread on CIX, someone was saying that the Sinclair computers were irritating and annoying, cut down too far, cheap and slow and unreliable.

That sort of comment still kinda burns after all these decades.

I was a Sinclair owner. I loved my Spectrums, spent a lot of time and money on them, and still have 2 working ones today.

Yes, they had their faults, but for all those who sneered and snarked at their cheapness and perceived nastiness, *that was their selling point*.

They were working, usable, useful home computers that were affordable.

They were transformative machines, transforming people, lives, economies.

I had a Spectrum not because I massively wanted a Spectrum -- I would have rather had a BBC Micro, for instance -- but because I could afford a Spectrum. Well, my parents could, just barely. A used one.

My 2nd, 3rd and 4th ones were used, as well, because I could just about afford them.

If all that had been available were proper, serious, real computers -- Apples, Acorns, even early Commodores -- I might never have got one. My entire career would never have happened.

A BBC Micro was pushing £350. My used 48K Spectrum was £80.

One of those is doable for what parents probably worried was a kid's toy that might never be used for anything productive. The other was the cost of a car.
Read more... )
liam_on_linux: (Default)



Although we almost never saw any of them in Europe, there were later models in the Z80 family.

The first successors -- the Z8000 (1979, 16-bit) and the later Z80000 (1986, 32-bit) -- were not Z80-compatible. They did not do well.

Zilog did learn, though, and the contemporaneous Z800, which was Z80 compatible, was renamed the Z280 and relaunched in 1987. 16-bit, onboard cache, very complex instruction set, could handle 16MB RAM.

Hitachi did the HD64180 (1985), a faster Z80 with an onboard MMU that could handle 512 kB of RAM. This was licensed back to Zilog as the Z64180.

Then Zilog did the Z180, an enhancement of that, which could handle 1MB RAM & up to 33MHz.

That was enhanced into the Z380 (1994) -- 16/32-bit, 20MHz, but neither derived from nor compatible with the Z280.

Then came the EZ80, at up to 50MHz. No MMU but 24-bit registers for 16MB of RAM.

Probably the most logical successor was the ASCII Corp R800 (1990), an extended 16-bit Z800-based design, mostly Z80 compatible but double-clocked on a ~8MHz bus for ~16MHz operation.

So, yes, lots of successor models -- but the problem is, too many, too much confusion, and no clear successors. Zilog, in other words, had the same failure as its licensees: it didn't trade on the advantages of its previous products. It did realise this and re-align itself, and it's still around today, but it did so too late.

The 68000 wasn't powerful enough to emulate previous-generation 8-bit processors. Possibly one reason why Acorn went its own way with the ARM, which was fast enough to do so -- the Acorn ARM machines came equipped with an emulator to run 6502 code. It emulated a 6502 "Tube" processor -- i.e. in an expansion box, with no I/O of its own. If your code was clean enough to run on that, you could run it on RISC OS out of the box.

Atari, Commodore, Sinclair and Acorn all abandoned their 8-bit heritage and did all-new, proprietary machines. Acorn even did its own CPU, giving it way more CPU power than its rivals, allowing emulation of the old machines -- not an option for the others, who bought in their CPUs.

Amstrad threw in the towel and switched to PC compatibles. A wise move, in the long view.

The only line that sort of transitioned was MSX.

MSX 1 machines (1983) were so-so, decent but unremarkable 8-bits.

MSX 2 (1985) were very nice 8-bitters indeed, with bank-switching for up to 4MB RAM, a primitive GPU for good graphics by Z80 standards. Floppy drives and 128 kB RAM were common as standard.

MSX 2+ (1988) were gorgeous. Some could handle ~6MHz, and the GPU had at least 128 kB VRAM, so they had serious video capabilities for 8-bit machines -- e.g. 19K colours.

MSX Turbo R (1990) were remarkable. Effectively a ~30MHz 16-bit CPU, 96 kB ROM, 256 kB RAM (some battery-backed), a GPU with its own 128 kB RAM, and stereo sound via multiple sound chips plus MIDI.

As a former Sinclair fan, I'd love to see what a Spectrum built using MSX Turbo R technology could do.


Postscript

Two 6502 lines did transition, kinda sorta.

Apple did the Apple ][GS (1986), with a WDC 65C816 16-bit processor. Its speed was tragically throttled and the machine was killed off very young so as not to compete with the still-new Macintosh line.

Acorn's Communicator (1985) also had a 65C816, with a ported 16-bit version of Acorn's MOS operating system, BBC BASIC, the View wordprocessor, ViewSheet spreadsheet, Prestel terminal emulator and other components. Also a dead end.

The 65C816 was also available as an add-on for several models in the Commodore 64 family, and there was the GEOS GUI-based desktop to run on it, complete with various apps. Commodore itself never used the chip, though.

liam_on_linux: (Default)
My previous post was an improvised and unplanned comment. I could have structured it better, and it caused some confusion on https://lobste.rs/

Dave Cutler did not write OS/2. AFAIK he never worked on OS/2 at all in the days of the MS-IBM pact -- he was still at DEC then.

Many sources focus on only one side of the story -- the DEC side. This is important, but it's only half the tale.

IBM and MS got very rich working together on x86 PCs and MS-DOS. They carefully planned its successor: OS/2. IBM placed restrictions on this which crippled it, but it wasn't apparent at the time just how bad this would turn out to be.

In the early-to-mid 1980s, it seemed apparent to everyone that the most important next step in microcomputers would be multitasking.

Even small players like Sinclair thought so -- the QL was designed as the first cheap 68000-based home computer. No GUI, but multitasking.

I discussed this a bit in a blog post a while ago: http://liam-on-linux.livejournal.com/46833.html

Apple's Lisa was a sideline: too expensive. Nobody picked up on its true significance.

Then, 2 weeks after the QL, came the Mac. Everything clever but expensive in the Lisa stripped out: no multitasking, little RAM, no hard disk, no slots or expansion. All that was left was the GUI. But that was the most important bit, as Steve Jobs saw and nobody much else did.

So, a year later, the ST had a DOS-like OS but a bolted-on GUI. No shell, just a GUI. Fast-for-the-time CPU, no fancy chips, and it did great. It had the original, uncrippled version of DR GEM. Apple's lawsuit meant that PC GEM was crippled: no overlapping windows, no desktop drive icons or trashcan, etc.

Read more... )
liam_on_linux: (Default)

Windows NT was allegedly partly developed on OS/2. Many MSers loved OS/2 at the time -- they had co-developed it, after all. But there was more to it than that.

Windows NT was partly based on OS/2. There were 3 branches of the OS/2 codebase:

[a] OS/2 1.x – at IBM’s insistence, for the 80286. The mistake that doomed OS/2 and IBM’s presence in the PC industry, the industry it had created.

[b] OS/2 2.x – IBM went it alone with the 80386-specific version.

[c] OS/2 3.x – Portable OS/2, planned to be ported to multiple different CPUs.

After the “divorce”, MS inherited Portable OS/2. It was a skeleton and a plan. Dave Cutler was hired from DEC, which had refused to let him pursue his PRISM project for a modern CPU and a successor to VMS. Cutler was given the Portable OS/2 project to complete. He did, fleshing it out with concepts and plans derived from his experience with VMS and his plans for PRISM.

Read more... )
liam_on_linux: (Default)
When was the last time you saw a critic write a play, compose a symphony, carve a statue?

I've seen a couple of attempts. I thought they were dire, myself. I won't name names (or media), as these are friends of friends.

Some concrete examples. I have given dozens on liam-on-linux.livejournal.com, but I wonder if I can summarise.

[1]

Abstractions. Some of our current core conceptual models are poor. Bits, bytes, directly accessing and managing memory.

If the programmer needs to know whether they are on a 32-bit or 64-bit processor, or whether it's big-endian or little-endian, the design is broken.

Higher-level abstractions have been implemented and sold. This is not a pipedream.

One that seems to work is atoms and lists. That model has withstood nearly 50 years of competition and it still thrives in its niche. It's underneath Lisp and Scheme, but also several languages far less arcane, and more recently, Urbit with Nock and Hoon. There is room for research here: work out a minimal abstraction set based on list manipulation and tagged memory, and find an efficient way to implement it, perhaps at microcode or firmware level.
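As an illustration of what such a minimal abstraction set can look like -- this is my own sketch in Python, not Lisp itself and nothing to do with Nock -- a handful of primitives over pairs and atoms is enough to build lists, and everything else on top of them:

    # Illustrative sketch only: a minimal "atoms and lists" kernel in Python.
    # Three primitives over pairs (cons, head, tail), plus atoms, are enough to
    # represent structured data; everything else can be defined in terms of them.

    def cons(a, b):
        """Build a pair; a chain of pairs ending in None is a list."""
        return (a, b)

    def head(cell):   # 'car', in Lisp terms
        return cell[0]

    def tail(cell):   # 'cdr', in Lisp terms
        return cell[1]

    def make_list(*items):
        """Turn ordinary arguments into a cons-cell list."""
        result = None
        for item in reversed(items):
            result = cons(item, result)
        return result

    def length(cell):
        """Length, map, append and the rest are just recursion over pairs."""
        return 0 if cell is None else 1 + length(tail(cell))

    xs = make_list("an", "atom", "or", "a", "list")
    print(head(xs), length(xs))   # -> an 5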

Read more... )
liam_on_linux: (Default)
Things have been getting better for a while now. For smaller gadgets, micro-USB is now the standard charging connector. Cables are becoming a consumable for me, but they're cheap and easy to find.

But it only goes in one way round, and it's hard to see or tell which way that is. And not all my gadgets want it the same way round, meaning I have to either remember or peer at a tiny socket and try to guess.

So conditions were right for an either-way-round USB connector.


Read more... )
liam_on_linux: (Default)
I had Sinclair Microdrives on my ZX Spectrum. They were better than tape cassette but nothing else -- ~90 kB of slowish, rather unreliable storage.

So I bought an MGT DISCiPLE and an old-fashioned, but new, cheap 5¼" 80-track, DS/DD drive in an external case.

780 kB of storage! On demand! Programs loaded in seconds! Even when I upgraded to an ex-demo 128K Spectrum from Curry's, even 128 kB programs loaded in a (literal) flash!

(MGT's firmware strobed the Spectrum's screen border, in homage to loading from tape, so you could see the data streaming into memory.)

That was the first time I remember being excited by the size and speed of my computer's storage.
Read more... )
liam_on_linux: (Default)
So... when the lack of apps for my beloved Blackberry Passport, and the issues with running sideloaded Android apps, became problematic, I decided to check out a cheap Chinese Android Phablet.

(P.S. The Passport is for sale! Let me know if you're interested.)

The Passport superseded a Samsung Galaxy Note 2, which subsequently got stolen, unfortunately. It was decent, occasionally sluggish, ran an elderly version of Android with no updates in ages, and had a totally useless stylus I never used. It replaced an iPhone 4 which replaced an HTC Desire HD, which replaced a Nokia Communicator E90 -- the best form-factor for a smartphone I've ever had, but nothing like it exists any more.

I wanted a dual-core or quad-core phablet, bigger than 5.5", with dual SIM and a memory card. That was my starting point. I don't have or use a tablet and never have -- I'm a keyboard junkie. I spend a lot of time surfing the web, on social networks, reading books and things on my phone. I wanted one as big as I could get, but still pocketable. My nicked Samsung was 5.5" and I wanted a little larger. I tried a 6" phablet in a shop and wanted still bigger if possible. I also tried a 6.8" Lenovo Phab Pro in a shop and that was a bit too big (but I might be persuaded -- with a tiny bezel, such a device might be usable).
Read more... )
liam_on_linux: (Default)
Although the launch of GNOME 3 was a bumpy ride and it got a lot of criticism, it's coming back. It's the default desktop of multiple distros again now. Allegedly even Linus Torvalds himself uses it. People tell me that it gets out of the way.

I find this curious, because I find it a little clunky and obstructive. It looks great, but for me, it doesn’t work all that well. It’s OK — far better than it was 2-3 years ago. But while some say it gets out of the way and lets them work undistracted, it gets in my way, because I have to adapt to its weird little quirks. It will not adapt to mine. It is dogmatic: it says, you must work this way, because we are the experts and we have decided that this is the best way.

So, on OS X or Ubuntu, I have my dock/launcher thing on the left, because that keeps it out of the way of the scrollbars. On Windows or XFCE, I put the task bar there. For all 4 of these environments, on a big screen, it’s not too much space and gives useful info about minimised windows, handy access to disk drives, stuff like that. On a small screen, it autohides.

But not on GNOME, no. No, the gods of GNOME have decreed that I don’t need it, so it’s always hidden. I can’t reveal it by just putting my mouse over there. No, I have to click a strange word in the menu bar. “Activities”. What activities? These aren’t my activities. They’re my apps, folders, files, windows. Don’t tell me what to call them. Don’t direct me to click in a certain place to get them; I want them just there if there’s room, and if there isn’t, on a quick flick of the wrist to a whole screen edge, not a particular place followed by a click. It wastes a bit of precious menu-bar real-estate with a word that’s conceptually irrelevant to me. It’s something I have to remember to do.

That’s not saving me time or effort, it’s making me learn a new trick and do extra work.

The menu bar. Time-honoured UI structure. Shared by all post-Mac GUIs. Sometimes it contains a menu, efficiently spread out over a nice big easily-mousable spatial range. Sometimes that’s in the window; whatever. The whole width of the screen in Mac and Unity. A range of commands spread out.

On Windows, the centre of the title bar is important info — what program this window belongs to.

On the Mac, that’s the first word of the title bar. I read from left to right, because I use a Latinate alphabet. So that’s a good place too.

On GNOME 3, there’s some random word I don’t associate with anything in particular as the first word, then a deformed fragment of an icon that’s hard to recognise, then a word, then a big waste of space, then the blasted clock! Why the clock? Are they that obsessive, such clock-watchers? Mac and Windows and Unity all banish the clock to a corner. Not GNOME, no. No, it’s front and centre, one of the most important things in one of the most important places.

Why?

I don’t know, but I’m not allowed to move it.

Apple put its all-important logo there in early versions of Mac OS X. They were quickly told not to be so egomaniacal. GNOME 3, though, enforces it.

On Mac, Unity, and Windows, in one corner, there’s a little bunch of notification icons. Different corners unless I put the task bar at the top, but whatever, I can adapt.

On GNOME 3, no, those are rationed. There are things hidden under sub options. In the pursuit of cleanliness and tidiness, things like my network status are hidden away.

That’s my choice, surely? I want them in view. I add extra ones. I like to see some status info. I find it handy.

GNOME says no, you don’t need this, so we’ve hidden it. You don’t need to see a whole menu. What are you gonna do, read it?

It reminds me of the classic Bill Hicks joke:

"You know I've noticed a certain anti-intellectualism going around this country ever since around 1980, coincidentally enough. I was in Nashville, Tennessee last weekend and after the show I went to a waffle house and I'm sitting there and I'm eating and reading a book. I don't know anybody, I'm alone, I'm eating and I'm reading a book. This waitress comes over to me (mocks chewing gum) 'what you readin' for?'...wow, I've never been asked that; not 'What am I reading', 'What am I reading for?’ Well, goddamnit, you stumped me... I guess I read for a lot of reasons — the main one is so I don't end up being a f**kin' waffle waitress. Yeah, that would be pretty high on the list. Then this trucker in the booth next to me gets up, stands over me and says [mocks Southern drawl] 'Well, looks like we got ourselves a readah'... aahh, what the fuck's goin' on? It's like I walked into a Klan rally in a Boy George costume or something. Am I stepping out of some intellectual closet here? I read, there I said it. I feel better."

Yeah, I read. I like reading. It’s useful. A bar of words is something I can scan in a fraction of a second. Then I can click on one and get… more words! Like some member of the damned intellectual elite. Sue me. I read.

But Microsoft says no, thou shalt have ribbons instead. Thou shalt click through tabs of little pictures and try and guess what they mean, and we don’t care if you’ve spent 20 years learning where all the options were — because we’ve taken them away! Haw!

And GNOME Shell says, nope, you don’t need that, so I’m gonna collapse it all down to one menu with a few buried options. That leaves us more room for the all-holy clock. Then you can easily see how much time you’ve wasted looking for menu options we’ve removed.

You don’t need all those confusing toolbar buttons neither, nossir, we gonna take most of them away too. We’ll leave you the most important ones. It’s cleaner. It’s smarter. It’s more elegant.

Well, yes it is, it’s true, but you know what, I want my software to rank usefulness and usability above cleanliness and elegance. I ride a bike with gears, because gears help. Yes, I could have a fixie with none, it’s simpler, lighter, cleaner. I could even get rid of brakes in that case. Fewer of those annoying levers on the handlebars.

But those brake and gear levers are useful. They help me. So I want them, because they make it easier to go up hills and easier to go fast on the flat, and if it looks less elegant, well I don’t really give a damn, because utility is more important. Function over form. Ideally, a balance of both, but if offered the choice, favour utility over aesthetics.

Now, to be fair, yes, I know, I can install all kinds of GNOME Shell extensions — from Firefox, which freaks me out a bit. I don’t want my browser to be able to control my desktop, because that’s a possible vector for malware. A webpage that can add and remove elements to my desktop horrifies me at a deep level.

But at least I can do it, and that makes GNOME Shell a lot more usable for me. I can customise it a bit. I can add elements and I could make my favourites bar be permanent, but honestly, for me, this is core functionality and I don’t think it should be an add-on. The favourites bar still won’t easily let me see how many instances of an app are running like the Unity one. It doesn’t also hold minimised windows and easy shortcuts like the Mac one. It’s less flexible than either.

There are things I like. I love the virtual-desktop switcher. It’s the best on any OS. I wish GNOME Shell were more modular, because I want that virtual-desktop switcher on Unity and XFCE, please. It’s superb, a triumph.

But it’s not modular, so I can’t. And it’s only customisable to a narrow, limited degree. And that means not to the extent that I want.

I accept that some of this is because I’m old and somewhat stuck in my ways and I don’t want to change things that work for me. That’s why I use Linux, because it’s customisable, because I can bend it to my will.

I also use Mac OS X — I haven’t upgraded to Sierra yet, so I won’t call it macOS — and anyway, I still own computers that run MacOS, as in MacOS 6, 7, 8, 9 — so I continue to call it Mac OS X. What this tells you is that I’ve been using Macs for a long time — since the late 1980s — and whereas they’re not so customisable, I am deeply familiar and comfortable with how they work.

And Macs inspired the Windows desktop and Windows inspired the Linux desktops, so there is continuity. Unity works in ways I’ve been using for nearly 30 years.

GNOME 3 doesn’t. GNOME 3 changes things. Some in good ways, some in bad. But they’re not my ways, and they do not seem to offer me any improvement over the ways I’m used to. OS X and Unity and Windows Vista/7/8/10 all give me app searching as a primary launch mechanism; it’s not a selling point of GNOME 3. The favourites bar thing isn’t an improvement on the OS X Dock or Unity Launcher or Windows Taskbar — it only delivers a small fraction of the functionality of those. The menu bar is if anything less customisable than the Mac or Unity ones, and even then, I have to use extensions to do it. If I move to someone else’s computer, all that stuff will be gone.

So whereas I do appreciate what it does and how and why it does so, I don’t feel like it’s for me. It wants me to change to work its way. The other OSes I use — OS X daily, Ubuntu Unity daily, Windows occasionally when someone pays me — don’t.

So I don’t use it.

Does that make sense?
liam_on_linux: (Default)
I'm mainly putting this here to keep it around, as writing it clarified some of my thinking about technological generations.

From https://www.facebook.com/groups/vintagecomputerclub/

You're absolutely right, Jim.

The last big advances were in the 1990s, and since then, things have just stagnated. There are several reasons why -- like all of real life, it's complex.

Firstly, many people believe that computing (and _personal_ computing) began with the 8-bits of the late 1970s: the Commodore PETs, Apple ][s and things. That before them, there were only big boring mainframes and minicomputers, room-sized humming boxes managing bank accounts.

Of course, it didn't. In the late '60s and early '70s, there was an explosion of design creativity, with personal workstations -- Lisp Machines, the Xerox PARC machines: the Alto, Star, Dandelion and so on. There were new cutting-edge designs, with object-oriented languages, graphical user interfaces, networking, email and the internet. All before the 8-bit microprocessors were invented.

Then what happened is a sort of mass extinction event, like the end of the dinosaurs. All the weird clever proprietary operating systems were overtaken by the rise of Unix, and all the complex, very expensive personal workstations were replaced with microcomputers.

But the early micros were rubbish -- so low-powered and limited that all the fancy stuff like multitasking was thrown away. They couldn't handle Unix or anything like it. So decades of progress was lost, discarded. We got rubbish like MS-DOS instead: one program, one task, 640kB of memory, and only with v2 did we get subdirectories and with v3 proper hard disk support.

A decade later, by the mid-to-late 1980s, the micros had grown up enough to support GUIs and sound, but instead of being implemented on elegant grown-up multitasking OSes, we got them re-implemented, badly, on primitive OSes that would fit into 512kB of RAM on a floppy-only computer -- so we got ST GEM, Acorn RISC OS, Windows 2. No networking, no hard disks -- they were too expensive at first.

Then a decade after that, we got some third-generation 32-bit micros and 3rd-gen microcomputer OSes, which brought back networking and multitasking: things like OS/2 2 and Windows NT. But now, the users had got used to fancy graphics and sound and whizzy games, which the first 32-bit 3rd-gen OSes didn't do well, so most people stuck with hybrid 16/32-bit OSes like Windows 9x and MacOS 8 and 9 -- they didn't multitask very well, but they could play games and so on.

Finally, THREE WHOLE DECADES after the invention of the GUI and multitasking workstations and everything connected via TCP/IP networking, we finally got 4th-gen microcomputer OSes: things like Windows XP and Mac OS X. Both the solid multitasking basis with networking and security, AND the fancy 3D graphics, video playback etc.

It's all been re-invented and re-implemented, badly, in a chaotic mixture of unsuitable and unsafe programming languages, but now, everyone's forgotten the original way these things were done -- so now, we have huge, sprawling, messy OSes and everyone thinks it's normal. They are all like that, so that must be the only way it can be done, right? If there was another way, someone would have done it.

But of course, they did do it, but only really old people remember it or saw it, so it's myth and legend. Nobody really believes in it.

Nearly 20 years ago, I ran BeOS for a while: a fast, pre-emptive multitasking, multithreaded, 3D and video capable GUI OS with built-in Internet access and so on. It booted to the desktop in about 5 seconds. But there were few apps, and Microsoft sabotaged the only hardware maker to bundle it.

This stuff _can_ be done better: smaller, faster, simpler, cleaner. But you can't have that and still have compatibility with 25y worth of DOS apps or 40y worth of Unix apps.

So nobody used it and it died. And now all we have is bloatware, but everyone points at how shiny it is and if you give it a few billion kB of RAM and Flash storage, it actually starts fairly quickly and you only need to apply a few hundred security fixes a year. We are left with junk reimplemented on a basis of more junk and because it's all anyone knows they think it's the best it could be.
liam_on_linux: (Default)
Modern OSes are very large and complicated beasts.

This is partly because they do so many different things: the same Linux kernel is behind the OS for my phone, my laptop, my server, and probably my router and the server I'm posting this on.

Much the same is true of Windows and of most Apple products.

So they have to be that complex, because they have to do so many things.

This is the accepted view, but I maintain that this is at least partly cultural and partly historical.

Some of this stuff, like the story that "Windows is only so malware-vulnerable because Windows is so popular; if anything else were as popular, it'd be as vulnerable" is a pointless argument, IMHO, because lacking access to alternate universes, we simply cannot know.

So, look, let us consider, as a poor parallel, the industry’s own history.

Look at Windows in the mid to late 1990s as an instance.

Because MS was busily developing a whole new OS, NT, and it couldn’t do everything yet, it was forced to keep maintaining and extending an old one: DOS+Win9x.

So MS added stuff to Win98 that was different to the stuff it was adding to NT.

Some things made it across, out of sync…

NT 3.1 did FAT16, NTFS and HPFS.

Win95 only did FAT. So MS implemented VFAT: long filenames on FAT.

NT 3.1 couldn’t see them; NT 3.5 added that.
Then Win 95B added FAT32. The NT line couldn't read FAT32 at all; native support only arrived with Windows 2000.

Filesystems are quite fundamental — for the most part, MS did the work to keep the 2 lines able to interwork.

But it didn’t do it with hardware support. Not back then.

Win95: APM, Plug’n’Play, DirectX.
Later, DirectX 2 with Direct3D.
Win95B: USB1.
Win98: USB 1.1, ACPI; GDI+.
Win98SE: basic Firewire camera-only support; Wake-on-LAN; WDM modems/audio.
WinME: USB mass storage & HID; more complete Firewire; S/PDIF.

(OK, NT 4 did include DirectX 2.0 and thus Direct3D. There were rumours that it only did software rendering on NT and true hardware-accelerated 3D wasn’t available until Windows 2000. NT had OpenGL. Nothing much used it.)

A lot of this stuff only came to the NT family with XP in 2001. NT took a long time to catch up.

My point here is that, in the late ‘90s, Windows PCs became very popular for gaming, for home Internet access over dialup, for newly-capable Windows laptops which were becoming attractive for consumers to own. Windows became a mass-market product for entertainment purposes.

And all that stuff was mainly supported on Win9x, _not_ on NT, because NT was at that time being sold to business as a business OS for business desktop computers and servers. It was notably bad as a laptop OS. It didn't have PnP, its PCMCIA/Cardbus support and power management were very poor, it didn't support USB at all, and so on.

Now, imagine this as an alternate universe.

In ours, as we know, MS was planning to merge its OS lines. A sensible plan: the DOS stuff was a legacy burden. But what if it hadn't? Say it had developed Win9x as the media/consumer OS and NT as the business OS?

This is only a silly thought experiment, don’t try to blow it down by pointing out why not to do it. We know that.

They had a unified programming model — Win32. Terrified of the threat of the DoJ splitting them up, they were already working on its successor, the cross-platform .NET.

They could have continued both lines: one supporting gaming and media and laptops, with lots of special driver support for those. The other supporting servers and business desktops, not supporting all the media bells and whistles, but much more solid.

Yes it sounds daft, but this is what actually happened for the best part of 6 years, from 1996 and the releases of NT 4 and Win 95 OSR2 until Windows XP in 2001.

Both could run MS Office. Both could attach to corporate networks and so on. But only one was any good for gaming, and only the other if you wanted to run SQL Server or indeed any kind of server, firewall, whatever.

Both were dramatically smaller than the post-merger version which does both.

The tendency has been to economise, to have one do-everything product, but for years, they couldn’t do that yet, so there were 2 separate OS teams, and both made major progress, both significantly advanced the art. The PITA “legacy” platform went through lots of releases, steadily gaining functionality, as I showed with that list above, but it was all functionality that didn’t go into the enterprise OS, which went through far fewer releases — despite it being the planned future one.

Things could have gone differently. It’s hard to imagine now, but it’s entirely possible.

If IBM had committed to OS/2 being an 80386 OS, then its early versions would have been a lot better, properly able to run and even multitask DOS apps. Windows 3 would never have happened. IBM and MS would have continued their partnership for longer; NT might never have happened at all, or DEC would have kept Dave Cutler and PRISM might have happened.

If Quarterdeck had been a bit quicker with it, DESQview/X might have shipped before Windows 3, and been a far more compelling way of running DOS apps on a multitasking GUI OS. The DOS world might have been pulled in the Unix-like direction of X.11 and TCP/IP, instead of MS’s own in-house GUI and Microsoft and Novell’s network protocols.

If DR had moved faster with DR-DOS and GEM — and Apple hadn’t sued — a 3rd party multitasking DOS with a GUI could have made Windows stillborn. They had the tech — it went into Flex/OS but nobody’s heard of it.

If the later deal between a Novell-owned DR and Apple had happened, MacOS 7 would have made the leap to the PC platform:

https://en.wikipedia.org/wiki/Star_Trek_project

(Yes, it sounds daft, but this was basically equivalent to Windows 95, 3 years earlier. And for all its architectural compromises, look how successful Win95 was: 40 million copies in the first year. 10x what any previous version did.)

Maybe Star Trek would have bridged the gap and instead of NeXT Apple bought Be instead and migrated us to BeOS. I loved BeOS even more than I loved classic MacOS. I miss it badly. Others do too, which is why Haiku is still slowly moving forward, unlike almost any other non-Unix FOSS OS.

If the competing GUI computers of the late 1980s had made it into the WWW era, notably the Web 2.0 era, they might have survived. The WWW and things like Java and JavaScript make real rich cross-platform apps viable. I am not a big fan of Google Docs, but they are actually usable and I do real, serious, paying work with them sometimes.

So even if they couldn’t run PC or Mac apps, a modern Atari ST or Commodore Amiga or Acorn RISC OS system with good rich web browsers could be entirely usable and viable. They died before the tech that could have saved them, but that’s partly due to mismanagement, it’s not some historical inevitability.

If the GNU project had adopted the BSD kernel, as it considered, and not wasted effort on the HURD, Linux would never have happened and we’d have had a viable FOSS Unix several years earlier.

This isn’t entirely idle speculation, IMHO. I think it’s instructive to wonder how and where things might have gone. The way it happened is only one of many possible outcomes.

We now have effectively 3 mass-market OSes, 2 of them Unixes: Windows NT (running on phones, Xboxes and PCs), Linux (including Android), and macOS/iOS. All are thus multipurpose, doing everything from small devices to enterprise servers. (Yes, I know, Apple’s stopped pushing servers, but it did once: the Xserve made it to quad-core Xeons & its own RAID hardware.)

MS, as one company with a near-monopoly, had a strong incentive to only support one OS family, and it’s done it even when it cost it dearly — for instance, moving the phones to the NT kernel was extremely costly and has essentially cost them the phone market. Windows CE actually did fairly well in its time.

Apple, coming back from a weak position, had similar motivations.

What if instead the niches were held by different companies? If every player didn’t try to do everything and most of them killed themselves trying?

What if we’d had, say, in each of the following market sectors, 1-2+ companies with razor sharp focus aggressively pushing their own niches…

* home/media/gaming
* enterprise workstations
* dedicated laptops (as opposed to portable PCs)
* enterprise servers
* pocket PDA-type devices

And there are other possibilities. The network computer idea was actually a really good one IMHO. The dedicated thin client/smart terminal is another possible niche.

There are things that came along in the tech industry just too late to save players that were already moribund. The two big ones I’m thinking of were the Web, especially the much-scorned-by-techies (including me) Web 2, and FOSS. But there are others — commodity hardware.

I realise that now, it sounds rather ludicrous. Several companies, or at least product lines, destroyed themselves trying to copy rivals too closely — for instance, OS/2. Too much effort trying to be “a better DOS than DOS, a better Windows than Windows”, rather than trying to just be a better OS/2.

Apple didn’t try this with Mac OS X. OS X wasn’t a better Classic MacOS, it was an effectively entirely new OS that happened to be able to run Classic MacOS in a VM. (I say effectively entirely new, because OS X did very little to try to appeal to NeXT owners or users. Sure, they were rich, but there weren’t many of them, whereas there were lots of Mac owners.)

What I am getting at here, in my very very long-winded way, is this.

Because we ended up with a small number of players, each of ‘em tried to do everything, and more or less succeeded. The same OS in my phone is running the server I’ll be posting this message to, and if I happened to be using a laptop to write this, it’d be the same OS as on my PC.

If I was on my (dual-booting) Win10 laptop and was posting this to a blog on CodePlex or something, it’d be the same thing, but a different OS. If MS still offered phones with keyboards, I’d not object to a Windows phone — that’s why I switched to a Blackberry — but as it is Windows phones don’t offer anything I can’t get elsewhere.

But if the world had turned out differently, perhaps, unified by FOSS, TCP/IP, HTML, Java and Javascript, my phone would be a Symbian one — because I did prefer it, dammit — and my laptop would be a non-Unix Apple machine and my desktop an OS/2 box and they’d be talking to DEC servers. For gaming I’d fire up my Amiga-based console.

All talking over Dropbox or the like, all running Google Docs instead of LibreOffice and ancient copies of MS Word.

It doesn’t sound so bad to me. Actually, it sounds great.

Look at the failure of Microsoft’s attempt to converge its actually-pretty-good tablet interface with its actually-pretty-good desktop UI. Bombed, may yet kill them.

Look at Ubuntu’s failure to deliver its converged UI yet. As Scott Gilbertson said:

<<
Before I dive into what's new in Ubuntu 16.10, called Yakkety Yak, let's just get this sentence out of the way: Ubuntu 16.10 will not feature Unity 8 or the new Mir display server.

I believe that's the seventh time I've written that since Unity 8 was announced and here we are on the second beta for 16.10.
>>
http://www.theregister.co.uk/2016/09/26/ubuntu_16_10_beta_2_review/

And yet look at how non-techies are perfectly happy moving from Windows computers to Android and iPhones, despite totally different UIs. They have no problems at all. Different tools for different jobs.

From where we are, the idea of totally different OSes on different types of computer sounds ridiculous, but I think that’s a quirk of the market and how things happened to turn out. At different points in the history of the industry _as it actually happened_ things went very differently.

Microsoft is a juggernaut now, but for about 10 years from the mid ‘80s and early ’90s, the world completely ignored Windows and bought millions of Atari STs and Commodore Amigas instead. Rich people bought Macs.

The world still mostly ignores FreeBSD, but NeXT didn’t, and FreeBSD is one of the parents of Mac OS X and iOS, both loved by hundreds of millions of happy customers.

This is not the best of all possible worlds.

But because our PCs are so fast and so capacious, most people seem to think it is, and that is very strange to me.

As it happens, we had a mass extinction event. It wasn’t really organised enough to call it a war. It was more of an emergent phenomenon. Microsoft and Apple didn’t kill Atari and Commodore; Atari and Commodore killed each other in a weird sort of unconscious suicide pact.

But Windows and Unix won, and history is written by the winners, and so now, everyone seems to think that this was always going to be and it was obvious and inevitable and the best thing.

It wasn’t.

And it won’t continue to be.
liam_on_linux: (Default)
If you write code to target a particular OS in a compiled language that is linked against local OS libraries and needs to talk to particular hardware via particular drivers, then that code is going to be quite specific to the OS it was designed upon and will be difficult to port to another OS. Indeed it might end up tied to one specific version of one specific distro.

Notably, the lowest-level of high-level languages, C.

If, OTOH, you write an app that runs in an interpreted language, that never runs outside the sandbox of the interpreter, which makes a request to get its config from a defined service on another machine, stores its working data on another such machine, and emits its results across the network to another machine, all within one set of, say, Ruby modules, then you don't need to care so much.

This is, vastly reduced and simplified, called the microservices "design pattern".

It is how most modern apps are built — either for the public WWW, or VPNs, or "intranets" (not that that word is used any more, as everyone has one now). You split it up into chunks as small as possible, because small "Agile" teams can work on each chunk. You farm the chunks out onto VMs, because then, if you do well and have a load spike, you can start more VMs and scale to much, much larger workloads.
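As a very rough sketch of the shape of one such chunk -- the service names, URLs and JSON fields below are all invented for illustration -- note that the app code never touches local disks, drivers or OS libraries, only its interpreter and the network:

    # Illustrative sketch only; the service names, URLs and JSON fields are
    # invented. A tiny "chunk" that fetches its config from one service, pulls
    # its working data from another, and pushes its results to a third. No local
    # disks, no drivers, no linking against OS libraries -- just the interpreter
    # and the network.
    import json
    import urllib.request

    CONFIG_SERVICE = "http://config.internal:8500/v1/myapp"   # invented
    DATA_SERVICE   = "http://data.internal:9200/records"      # invented
    RESULT_SERVICE = "http://results.internal:8080/ingest"    # invented

    def fetch_json(url):
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read().decode("utf-8"))

    def post_json(url, payload):
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    def run_once():
        config = fetch_json(CONFIG_SERVICE)    # config from one box
        records = fetch_json(DATA_SERVICE)     # working data from another
        threshold = config.get("threshold", 10)
        matches = [r for r in records if r.get("score", 0) >= threshold]
        return post_json(RESULT_SERVICE, {"matches": matches})  # emit to a third

    if __name__ == "__main__":
        run_once()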

From Twitter to Instagram, the modern Web is built like this. Instagram, for instance, was built by a team of 3 people, originally in pure Python. They did not own a single server. They deployed not one OS instance. Nothing. EVERYTHING was "in the cloud" running on temporary rented VMs on Amazon EC2.

After 1 year they had 14 million users.

This service sold for a billion dollars.

http://highscalability.com/blog/2012/4/9/the-instagram-architecture-facebook-bought-for-a-cool-billio.html

These aren't toy technologies. They're not little prototypes or occasional research projects.

So if you break your architecture down into little fungible boxes, and you don't manually create any of the boxes, you just create new instances in a remote datacenter with automated tools, then…

Firstly, some of the infrastructure that hosts those boxes could be exchanged if it delivered major benefits and the users would never even know.

Secondly, if a slight change to the internal makeup of some of the boxes delivered major improvements — e.g. by using massively less memory, or starting quicker — it would be worth changing the box makeup to save money and improve performance. That is what is driving migration to containers. Rather than lots of Linux hosts with lots of VMs holding more Linux instances, you have a single host with a single kernel and lots of containers, and you save 1GB RAM per instance or something. That adds up very quickly.

So, thirdly, if such relatively modest efficiencies drive big cost savings or performance improvements, then perhaps if you could replace those of your VMs with other ones that run a small set of (probably quite big & complex) Python scripts, say, with a different OS that can run those scripts quicker in a tenth of the RAM, it might be worth some re-tooling.

That's speculative but it's far from impossible.

There is much more to life than compiled languages. A lot of production software today — countless millions of lines — never goes near a compiler. It's written in interpreted "scripting" languages, or ones that run in VMs, or it just calls on off-the-shelf code on other systems which is a vanilla app, straight out of a repo, that just has some config applied.

Some of my English students here are training up to be developers. #1 language of choice: JavaScript. #2: Java. C? Historical curiosity. C++ a more recent curiosity.

If one had, for instance, some totally-un-Unix-like OS with a working JVM, or with a working Python interpreter — CPython for compatibility — that would run a lot of off-the-shelf code for a lot of people.

Old time Unix or Windows hands, when they hear "develop a new app", think of reaching for a compiler or an IDE.

The millennials don't. They start looking for modules of existing code that they can use, stitched together with some Python or Ruby.

No, $NewOS will not replace all uses of Windows or Unix. Don't be silly. But lots of companies aren't running any workloads at all on bare metal any more: it's all in VMs, and what those VMs are hosted on is a matter of convenience or personal preference of the sysadmins. Lots of that is now starting to move to containers instead, because they're smaller and faster to start and stop. Apps are structured as micro services across farms of VMs and are moving to farms of containers instead.

A container can be 10x smaller than a VM and start 100x faster. That's enough to drive adoption.

Well if you can host that container on an OS that's 10x smaller and starts 100x faster, that too might be enough to drive adoption.

You don't need to know how to configure and provision it so long as you can script it with a few lines in an Ansible playbook or a small Puppet file. If the OS has almost no local config because it's only able to run inside a standardised VM, how much setup and config does it need anyway?
