liam_on_linux: (Default)
EDIT: this post has attracted discussion and comments on various places, and some people are disputing its accuracy. So, I've decided to make some edits to try to clarify things.

When Windows 2 was launched, there were two editions: Windows, and Windows/386.

The ordinary "base" edition of Windows 2.0x ran on an XT-class computer: that is, an Intel 8088 or 8086 CPU. These chips can only directly access a total of 1MB of memory, of which the highest 384kB was reserved for ROM and I/O: so, a maximum 640kB of RAM. That was not a lot for Windows, even then. But both DOS and Windows 2.x did support expanded memory (Lotus-Intel-Microsoft-specification EMS). I ran Windows 2 on 286s and 386s at work, and on 386 machines I used Quarterdeck's QEMM386 to turn the extended memory that Windows 2 couldn't see or use into expanded memory that it could.

The Intel 80286 could access up to 16MB of memory. But all except the first 640kB was basically invisible to DOS and DOS apps. Only programs written for the 286's protected mode could access it, and there were barely any — Lotus 1-2-3 release 3 was one of the few, for instance.

There was one exception to this: due to a bug the first 64kB of memory above 1MB (less 16 bytes) could be accessed in DOS's Real Mode. This was called the High Memory Area (HMA). 64kB wasn't much even then, but still, it added 10% to the amount of usable memory on a 286. DOS 3 couldn't do anything with this – but Windows 2 could.

Windows 2.0 and 2.01 were not successful, but some companies did release applications for them – notably, Aldus' PageMaker desktop publishing (DTP) program. So, Microsoft put out some bug-fix releases: I've found traces of 2.01, 2.03, 2.11 and finally 2.12.


When Windows 2.1x was released, MICROS~1 did a little re-branding. The "base" edition of Windows 2.1 was renamed Windows/286. In some places, Microsoft itself claims that this was a special 286 edition of Windows 2 that ran in native 80286 mode and could access all 16MB of memory.

But some extra digging by people including Mal Smith has uncovered evidence that Windows/286 wasn't all it was cracked up to be. For one thing, without the HIMEM.SYS driver, it runs perfectly well on 8088/8086 PCs – it just can't access the 64kB HMA. Microsoft long ago erased the comments on Raymond Chen's blog post, but they are on the Wayback Machine.
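(For reference, enabling the HMA was a one-line job in CONFIG.SYS – the path here is hypothetical, wherever Setup put the driver:

DEVICE=C:\WINDOWS\HIMEM.SYS

That single driver, as above, seems to be about all the "286 support" amounted to.)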

So the truth seems to be that Windows/286 didn't really have what would later be called Standard Mode and didn't really run in the 286's protected mode. It just used the HMA for a little extra storage space, giving more room in conventional memory for the Windows real-mode kernel and apps.

So, what about Windows/386?


The new 80386 chip added a third mode on top of its 8/16-bit (8088/8086-compatible) and fully-16-bit (80286-compatible) modes: a new 32-bit mode – now called x86-32 – which could access a vast 4GB of memory. (In 1985 or so, that much RAM would have cost hundreds of thousands of dollars, maybe even millions.)

However, this was useless to DOS and DOS apps, which could still only access 640kB (plus EMS, of course).

But Intel learned from the mistake of the 286 design. The 286 needed new OSes to access all of its memory, and even they couldn't give DOS apps access to that RAM.

The 386 "fixed" this. It could emulate, in hardware, multiple 8086 chips at once and even multitask them. Each got its own 640kB of RAM. So if you had 4MB of RAM, you could run 6 separate full-sized DOS sessions and still have 0.4MB left over for a multitasking OS to manage them. DOS alone couldn't do this!

There were several replacement OSes that allowed this. At least one of them is now FOSS -- it's called PC-MOS/386.

Most of these 386 DOS-compatible OSes were multiuser OSes — the idea was you could plug some dumb terminals into the RS-232 ports on the back of a 386 PC and users could run text-only DOS apps on the terminals.

But some were aimed at power users, who had a powerful 386 PC to themselves and wanted multitasking while keeping their existing DOS apps.

My personal favourite was Quarterdeck DESQview. It worked with the QEMM386 memory manager and let you multitask multiple DOS apps, side by side, either full-screen or in resizable windows. It ran on top of ordinary MS-DOS.

Microsoft knew that other companies were making money off this fairly small market for multitasking extensions to DOS. So, it made a third special edition of Windows 2, called Windows/386, which supported 80386 chips in 32-bit mode and could pre-emptively multitask DOS apps side-by-side with Windows apps.

Windows programs, including the Windows kernel itself, still ran in 8086-compatible Real Mode and couldn't use all this extra memory, even on Windows/386. All Windows/386 did was provide a loader that converted all the extra memory above 1MB in your high-end 386 PC – that is, extended (XMS) memory – into expanded (EMS) memory that both Windows and DOS programs could use.

The proof of this is that it's possible to launch Windows/386 on an 8086 computer, if you bypass the special loader. Later on, this loader became the basis of the EMM386 driver in MS-DOS 4, which allowed DOS to use the extra memory in a 386 as EMS.
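In later MS-DOS versions the equivalent pairing looked like this in CONFIG.SYS (MS-DOS 5/6 syntax; the MS-DOS 4 driver was called EMM386.SYS and its options differed slightly):

DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE RAM

HIMEM.SYS manages extended memory, and EMM386 then uses the 386's hardware to serve some of it up as EMS; the RAM switch also enables upper-memory blocks for loading drivers high.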


TBH, Windows/386 wasn't very popular or very widely used. If you wanted the power of a 386 for DOS apps, then you were probably fine with – or even preferred – text-mode software, and didn't want a GUI. Bear in mind this is long before graphics accelerators had been invented. Sure, you could tile several DOS apps side-by-side, but then you could only see a little bit of each one -- VGA cards and monitors only supported 640×480 pixels. Windows 2 wasn't really established enough to have special hi-res SuperVGA cards available for it yet.*

Windows/386 could also multitask DOS apps full-screen, and if you used graphical DOS apps, you had to run them full-screen. Windows/386 couldn't run graphical DOS apps inside windows.

But if you used full-screen multitasking, with hotkeys instead of a mouse, then why not use something like DESQview anyway? It used way less disk and memory than Windows, and it was quicker and had no driver issues, because it didn't support any additional drivers.

The big mistake MS and IBM made when they wrote OS/2 was targeting the 286 chip instead of the 386.

Microsoft knew this – it even had a prototype OS/2 1 for the 386, codenamed "Sizzle" and "Football" – but IBM refused, because when it sold thousands of 286 PS/2 machines it had promised those customers OS/2 for them. The customers didn't care – they didn't want OS/2 – and this mistake cost IBM the entire PC industry.

If OS/2 1 had been a 386 OS it could have multitasked DOS apps, and PC power users would have been all over it. But it wasn't, it was a 286 OS, and it could only run 1 DOS app at a time. For that, the expensive upgrade and extra RAM you needed wasn't worth it.

So OS/2 bombed. Windows 2 bombed too. But MS was so disheartened by IBM's intransigence that it went back to the dead Windows 2 product, gave it a facelift with the look-and-feel stolen from OS/2 1.2, and used some very clever hacks to combine the separate Windows (i.e. 8086), Windows/286 and Windows/386 programs all into a single binary product. The WIN.COM loader looked at your system spec and decided whether to start the Real Mode kernel (KERNEL.EXE), the Standard Mode DOS extender and kernel (DOSX.EXE plus KRNL286.EXE), or the 386 Enhanced Mode virtual machine manager and kernel (WIN386.EXE plus KRNL386.EXE).

If you ran Windows 3 on an 8086 or a machine with only 640kB (i.e. no XMS), you got a Real Mode 8086-only GUI on top of DOS.

If you ran Win3 on a 286 with 1MB-1¾MB of RAM then it launched in Standard Mode and magically became a 16-bit DOS extender, giving you access to up to 16MB of RAM (if you were rich and eccentric).*

If you ran W3 on a 386 with 2MB of RAM or more, it launched in 386 Enhanced Mode and became a 32-bit DOS extender that could pre-emptively multitask DOS apps, give you virtual memory, and address a memory space of up to 4GB.

All in a single product on one set of disks.

This was revolutionary, and it was a huge hit...

And that about wrapped it up for OS/2.

Windows 3.0 was very unreliable and unstable. It often threw what it called an Unrecoverable Application Error (UAE) – which even led to a joke T-shirt that said "I thought UAE was a country in Arabia until I discovered Windows 3!"... but when it worked, what it did was amazing for 1990.

Microsoft eliminated UAEs in Windows 3.1, partly by a clever trick: it renamed the error to "General Protection Fault" (GPF) instead.

Me, personally, always the contrarian, I bought OS/2 2.0 with my own money and I loved it. It was much more stable than Windows 3, multitasked better, and could do way more... but Win3 had the key stuff people wanted.

Windows 3.1 bundled the separate Multimedia Extensions for Windows and made it a bit more stable. Then Windows for Workgroups bundled all that with networking, too!

Note — in the DOS era, all apps needed their own drivers. Every separate app needed its own printer drivers, graphics drivers (if it could display graphics in anything other than the standard CGA, EGA, VGA or Hercules modes), sound drivers, and so on.

One of WordPerfect's big selling points was that it had the biggest and best set of printer drivers in the business. If you had a fancy printer, WordPerfect could handle it and use all its special fonts and so on. Quite possibly other mainstream offerings couldn't, so if you ran WordStar or MultiMate or something, you only got monospaced Courier in bold, italic, underline and combinations thereof.

This included networking. Every network vendor had their own network stack with their own network card drivers.

And network stacks were big and each major vendor used their own protocol. MS used NetBEUI, Novell used IPX/SPX, Apple used AppleTalk, Digital Equipment Corporation's PATHWORKS used DECnet, etc. etc. Only weird, super-expensive Unix boxes that nobody could afford used TCP/IP.

You couldn't attach to a Microsoft server with a Novell network stack, or to an Apple server with a Microsoft stack. Every type of server needed its own unique special client.

This basically meant that a PC couldn't be on more than one type of network at once. The chance of getting two complete sets of drivers working together was next to nil, and if you did manage it, there'd be no RAM left to run any apps anyway.

Windows changed a lot of things, but shared drivers were a big one. You installed one printer driver and suddenly all your apps could print. One sound driver and all your apps could make noises, or play music (or if you had a fancy sound card, both!) and so on. For printing, Windows just sent your printer a bitmap — so any printer that could print graphics could suddenly print any font that came with Windows. If you had a crappy old 24-pin dot-matrix printer that only had one font, this was a big deal. It was slow and it was noisy but suddenly you could have fancy scalable fonts, outline and shadow effects!

But when Microsoft threw networking into this too, it was transformative. Windows for Workgroups broke up the monolithic network stacks: a Windows driver drove the card, Windows protocol stacks spoke to that driver, and Windows network clients spoke to the protocols.

So now, if your Netware server was configured for AppleTalk, say — OK, unlikely, but it could happen, because Macs only spoke AppleTalk — then Windows could happily access it over AppleTalk with no need for IPX.

The first big network I built with Windows for Workgroups, I built dual-stack: IPX/SPX and DECnet. The Netware server was invisible to the VAXen, and vice versa, but WfWg spoke to both at once. This was serious black magic stuff.

This is part of why, over the next few years, TCP/IP took off. Most DOS software never really used TCP/IP — pre-WWW, very few of us were on the Internet — so every vendor pushed its own protocol and chaos reigned. WfWg ended that. It could speak to everything through one driver layer, and it was easy to configure: just point-and-click. The original WfWg 3.1 didn't even include TCP/IP as standard: it was an optional extra on the disk which you had to install separately. WfWg 3.11 included 16-bit TCP/IP, and later Microsoft released a 32-bit TCP/IP stack, because by 1994 or so people were rolling out PC LANs with pure IP.



* Disclaimer: this is a slight over-simplification for clarity, one of several in this post. A tiny handful of SVGA cards existed, most of which needed special drivers, and many of which only worked with a tiny handful of apps, such as one particular CAD program, or the GEM GUI, or something obscure. Some did work with Windows 2, but if they did, they were all-but unusable because Windows 2's core all had to run in the base 640kB of RAM and it very easily ran out of memory. Windows 3 was not much better, but Windows 3.1 finally fixed this a bit.

So if you had an SVGA card and Windows/286 or Windows/386 or even Windows 3.0, you could possibly set some super-hires mode like 1024×768 in 16 colours... and admire it for whole seconds, then launch a few apps and watch Windows crash and die. If you were in something insane like 24-bit colour, you might not even get as far as launching a second app before it died.

Clarification for the obsessive: when I said 1¾MB, that was also a simplification. The deal was this:

If you had a 286 & at least 1MB RAM, then all you got was Standard Mode, i.e. 286 mode. More RAM made things a little faster – not much, because Windows didn't have its own disk cache, relying on DOS to do that. If you had 2MB or 4 or 8 or 16 (not that anyone sane would put 16MB in a 286, as it would cost $10,000 or something) it made no odds: Standard Mode was all a 286 could do.

If you had a 386 and 2MB or more RAM, you got 386 Enhanced Mode. This really flew if you had 4MB or more, but very few machines came with that much except some intended to be servers, running Unix of one brand or another. Ironically, the only budget 386 PC with 4MB was the Amstrad 2386, a machine now almost forgotten by history. Amstrad created the budget PC market in Europe with the PC1512 and PC1640, both 8086 machines with 5.25" disk drives.

It followed this with the futuristic 2000 series. The 2086 was an unusual PC – an ISA 8086 with VGA. The 2286 was a high-end 286 for 1988: 1MB RAM & a fast 12.5MHz CPU.

But the 2386 had 4MB as standard, which was an industry-best and amazing for 1988. When Windows 3.0 came out a couple of years later, this was the only PC already on the market that could do 386 Enhanced Mode justice, and easily multitask several DOS apps and big high-end Windows apps such as PageMaker and Omnis. Microsoft barely offered Windows apps yet – early, sketchy versions of Word and Excel, nothing else. I can't find a single page devoted to this remarkable machine – only its keyboard.

The Amstrad 2000 series bombed. They were premature: the market wasn't ready and few apps used DOS extenders yet. Only power users ran OS/2 or DOS multitaskers, and power users didn't buy Amstrads. Nor did people who wanted a server for multiuser OSes such as Digital Research's Concurrent DOS/386.

The 2000 series' other bold design move was Amstrad's gamble that 5.25" floppies were going away, to be replaced by 3.5" diskettes. Amstrad was right, of course – and so the 2000 series had no 5.25" bays, allowing for a sleek, almost aerodynamic-looking case. But Amstrad couldn't foresee that soon CD-ROM drives would be everywhere, then DVDs and CD burners, and so the 5.25" bay would stick around for another few decades.
liam_on_linux: (Default)

[Another repurposed comment from the same Lobsters thread I mentioned in my previous post.]

A serious answer deserved a serious response, so I slept on it, and, well, as you can see, it took some time. I don't even have the excuse that "Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte" ("I have only made this one longer because I have not had the time to make it shorter").

If you are curious to do so, you might be amused to look through my older tech-blog posts – for example this or this.

The research project that led to these 3 FOSDEM talks started over a decade ago when I persuaded my editor that retrocomputing articles were popular & I went looking for something obscure that nobody else was writing about.

I looked at various interesting long-gone platforms or technologies – some of the fun ones were Apollo Aegis & DomainOS, SunDew/NeWS, the Three Rivers PERQ etc. – that had or did stuff nothing else did. All were either too obscure, or had little to no lasting impact or influence.

What I found, in time, were Lisp Machines. A little pointy lump in the soil, which as I kept digging turned into the entire Temple of Damanhur. (Anyone who's never heard of that should definitely look it up.) And then as I kept digging, the entire war for the workstation, between whole-dynamic-environment languages (Lisp & Smalltalk, but there are others) and the reverse, the Unix way, the easy-but-somehow-sad environment of code written in an unsafe, hacky language, compiled to binaries, and run on an OS whose raison d'être is to "keep 'em separated": to turn a computer into a pile of little isolated execution contexts, which can only pass info to one another via plain text files. An ugly, lowest-common-denominator sort of OS, but one which succeeded and thrived because it was small, simple, easy to implement and to port, relatively versatile, and didn't require fancy hardware.

That at one time, there were these two schools – that of the maximally capable, powerful language, running on expensive bespoke hardware but delivering astonishing abilities... versus a cheap, simple, hack of a system that everyone could clone, which ran on cheap old minicomputers, then workstations with COTS 68K chips, then on RISC chips.

(The Unix Haters Handbook was particularly instructive. Also recommended to everyone; it's informative, it's free and it's funny.)

For a while, I was a sort of Lisp zealot or evangelist – without ever having mastered it myself, mind. It breaks my brain. "The Little Lisper" is the most impenetrable computer publication I've ever tried, and failed, to read.

A lot of my friends are jaded old Unix pros, like me having gone through multiple proprietary flavours before coming to Linux. Or possibly a BSD. I won serious kudos from my first editor when I knew how to properly shut down a Tadpole SPARCbook with:


sync
sync
sync
halt

"What I tell you three times is true!" he crowed.

Very old Unix hands remember LispMs. They've certainly met lots of Lisp evangelists. They got very tired of me banging on about it. Example – a mate of mine said on Twitter:

«
A few years ago it was lisp is the true path. Before that is was touchscreens will kill the keyboard.
»

The thing is, while going on about it, I kept digging, kept researching. There's more to life than Paul Graham essays. Yes, the old LispM fans were onto something; yes, the world lost something important when they were out-competed into extinction by Unix boxes; yes, in the right hands, it achieves undreamed-of levels of productivity and capability; yes, the famous bipolar Lisp programmer essay.

But there are other systems which people say the same sorts of things about. Not many. APL, but even APL fans recognise it has a niche. Forth, mainly for people who disdain OSes as unnecessary bloat and roll their own. Smalltalk. A handful of others. The "Languages of the Gods".

Another thing I found is people who'd bounced off Lisp. Some tried hard but didn't get it. Some learned it, maybe even implemented their own, but were unmoved by it and drifted off. A lot of people deride it – L.I.S.P. = Lotsa Insignificant Stupid Parentheses, etc. – but some of them do so with reason.

I do not know why this is. It may be a cultural thing; it may be a question of which forms of logic and reasoning feel natural to different people. I had a hard time grasping algebra as a schoolchild. (Your comment about "grade school" stuff is impenetrable to me. I'm not American so I don't know what "grade school" is, I cannot parse your example, and I don't know what level it is aimed at – but I suspect it's above mine. I failed 'O' level maths and had to resit it. The single most depressing moment of my biology degree was when the lecturer for "Intro to Statistics" said he knew we were all scared, but it was fine; for science undergraduates like us, it would just be revision of our maths 'A' level. If I tried, I'd never even have got good enough exam scores to be rejected for a maths 'A' level.)

When I finally understood algebra, I "got" it and it made sense and became a useful tool, but I have only a weak handle on it. I used to know how to solve a quadratic equation but I couldn't do it now.

I never got as far as integration or differentiation. I only grasped them at all when trying to help a member of staff with her comp-studies homework. It's true: the best way to learn something is to teach it.

Edsger Dijkstra was a grumpy git, but when he said:

“It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration”

... and...

“The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence.”

... I kind of know what he meant. I disagree, obviously, and I am not alone, but he did have a core point.

I think that if someone learned Algol-style infix notation when they were young, and it's all they've ever known, then when someone comes along and tells them that it's all wrong, to throw it away and do it like this – or possibly (this(like(do(it)))) – instead, it is perfectly reasonable for them to reject it.

Recently I used the expression A <> B to someone online and they didn't understand. I was taken aback. This is BASIC syntax and was universal when I was under 35. No longer. I rephrased it as A != B and they understood immediately.

Today, C syntax is just obvious and intuitive. As Stephen Diehl said:

«
C syntax is magical programmer catnip. You sprinkle it on anything and it suddenly becomes "practical" and "readable".
»

I submit that there are some people who cannot intuitively grasp the syntaxless list syntax of Lisp. And others who can handle it fine but dislike it, just as many love Python indentation and others despise it. And others who maybe could but with vast effort and it will forever hinder them.

Comparison: I am 53 years old, I emigrated to the Czech Republic 7 years ago and I now have a family here and will probably stay. I like it here. There are good reasons people still talk about the Bohemian lifestyle.

But the language is terrifying: 4 genders, 7 cases, all nouns have 2 plurals (2-4 & >=5), a special set of future tenses for verbs of motion, & two entire sets of tenses – verb "aspects", very broadly one for things that are happening in the past/present/future but are incomplete, and one for things in the past or present that are complete.

After 6 years of study, I am an advanced beginner. I cannot read a headline.

Now, context: I speak German, poorly. I learned it in 3 days of hard work travelling thence on a bus. I speak passable French after a few years of it at school. I can get by in Spanish, Norwegian and Swedish from a few weeks each.

I am not bad at languages, and I'm definitely not intimidated by them. But learning your first Slavic language in your 40s is like climbing Everest with 2 broken legs.

No matter how hard I try, I will never be fluent. I won't live long enough.

Maybe if I started Russian at 7 instead of French, I'd be fine, but I didn't. But 400 million people speak Slavic languages and have no problems with this stuff.

I am determined. I will get to some useful level if it kills me. But I'll never be any good and I doubt I'll ever read a novel in it.

I put it to you that Lisp is the same thing. That depending on aptitude or personality or mindset or background, for some people it will be easy, for some hard, and for some either impossible or simply not worth the bother. I know many Anglophones (and other first-language speakers) who live in Czechia who just gave up on Czech. For a lot of people, it's just too hard as an adult. My first course started with 15 students and ended with 3. This is on the low side of normal; 60% of students quit in the first 3 months, after paying in full.

And when people say that "look, really, f(a,b) is the same thing as (f a,b)" or tell us that we'll just stop seeing the parentheses after a while (see slides 6 & 7 ) IT DOES NOT HELP. In fact, it's profoundly offputting.

I am regarded as a Lisp evangelist among some groups of friends. I completely buy and believe, from my research, that it probably is the most powerful programming language there's ever been.

But the barrier to entry is very, very high, and it would better serve the Lisp world to recognise and acknowledge this than to continue 6 decades of denialism.

Before this talk, I conferred with 2 very smart programmer friends of mine about the infix/prefix notation issue. ISTM that it should be possible to have a smart editor that could convert between the two, or even round-trip convert a subset of them.

This is why I proposed Dylan on top of Lisp, not just Lisp. Because Lisp frightens people and puts them off, and that is not their fault or failing. There was always meant to be an easier, more accessible form for the non-specialists. Some of my favourite attempts were CGOL and Lisp wizard David A. Moon's PLOT. If Moon thinks it's worth doing, we should listen. You might have heard of this editor he wrote? It's called "Emacs". I hear it's quite something.

liam_on_linux: (Default)
I'm getting some great feedback on the FOSDEM "Starting Over" talk. I will re-record it soon and put it on Youtube.

A handful of people on HackerNews got it, although as expected, most didn't, or flicked through the slideshow and started arguing. There was a much more interesting set of responses on Lobsters.

This post grew out of a reply to a comment there.


The commenter said that they enjoyed the talk and had not encountered Oberon before, but they did not understand why I'd picked Dylan as an alternative to Smalltalk... and had I considered building it in JavaScript in the browser instead?

Part of the plan is to make something that is easy and fun. It will be limited at first compared to the insane incomprehensible unfathomable richness of a modern *nix or Windows OS. Very limited. So if it is limited, then I think it has to be fun and accessible and easy and comprehensible to have any hope of winning people over.

Lisp is hard. It may be the ultimate programming language, the only programmable programming language, but the syntax is not merely offputting, it is profoundly inaccessible for a lot of ordinary mortals. Just learning an Algol-like language is not hard. BASIC was fun and accessible. The right languages are toys for children, and that's good.

I have met multiple professional Java programmers who have next to no grasp of the theory, of algorithms, or of any basic comp-sci principles... but they can bolt together existing modules just fine and make useful systems.

Note: I am not saying that this is a good way to build business logic, but it is how a lot of organizations do it.

There is a ton of extra logic that one must internalize to make Lisp comprehensible. I suspect that there is a certain type of mind for whom this stuff is accessible, easily acquired, and then they find it intuitive and obvious and very useful.

But I think that that kind of mind is fairly rare, and I do not think that this kind of language – code composed of lists, containing naked ASTs – will ever be a mass-market proposition.

Dylan, OTOH, did what McCarthy originally intended. It wrapped the bare lists in something accessible, and its creators demonstrated this by building an exceptionally visual, colourful, friendly graphical programming language in it. It was not intended for building enterprise servers; it was built to power an all-graphical pocket digital assistant, with a few meg of RAM and no filesystem.

Friendly and fun, remember. Accessible, easy, simple above all else. Expressly not intended to be "big and professional like GNU."

But underneath Dylan's friendly face is the raw power of Lisp.

So the idea is that it gives you the best of both worlds, in principle. For mortals, there's an easy, colourful, fun toy. But one you can build real useful apps in.

And underneath that, interchangeable and interoperable with it, is the power of Lisp – but you don't need to see it or interact with it if you don't want to.

And beneath that is Oberon, which lets you twiddle bits if you need to in order to write a device driver or a network stack for a new protocol. Or create a VM and launch it, so you can have a window with Firefox in it.

[Re Javascript]

Oh dear gods, no!

There is an old saying in comp sci, attributed to David Wheeler: "We can solve any problem by introducing an extra level of indirection."

It is often attributed to Butler Lampson, one of the people at PARC who designed and built the Xerox Alto, Dolphin and Dorado machines. He is also said to have added a rider:
"...except for the problem of too many layers of indirection."

The idea here is to strip away a dozen layers of indirection and simplify it down to the minimum number of layers that can provide a rich, programmable, high-level environment that does not require users to learn arcane historical concepts such as "disks" or "directories" or "files", or "binaries" and "compilers" and "linkers". All that is ancient history, implementation baggage from 50 years of Unix.

The WWW was a quick'n'dirty, kludgy implementation of hypertext on Unix, put together using NeXTstations. The real idea of hypertext came from Ted Nelson's Xanadu.

The web is half a dozen layers of crap -- a protocol [1] that carries composite documents [2] built from Unix text files [3] and rendered by a now massively complex engine [4] whose operation can be modified by a clunky text-based scripting language [5] which needed to be JITted and accelerated by a runtime environment [6]. It is a mess.

It is more or less exactly what I am trying to get away from. The idea of implementing a new OS in a minimal 2 layers, replacing a dozen layers, and then implementing that elegant little design by embedding it inside a clunky half-dozen layers hosted on top of half a dozen layers of Unix... I recoil in disgust, TBH. It is not merely inefficient, it's profane, a desecration of the concept.

Look, I am not a Christian, but I was vaguely raised as one. There are a few nuggets of wisdom in the Christian bible.
Matthew 7:24-27 applies.

“Therefore, whosoever heareth these sayings of Mine and doeth them, I will liken him unto a wise man, who built his house upon a rock.
And the rain descended and the floods came, and the winds blew and beat upon that house; and it fell not, for it was founded upon a rock.
And every one that heareth these sayings of Mine and doeth them not, shall be likened unto a foolish man, who built his house upon the sand;
and the rain descended, and the floods came, and the winds blew, and beat upon that house; and it fell, and great was the fall of it.”

Unix is the sand here. An ever-shifting, impermanent base. Put more layers of silt and gravel and mud on top, and it's still sand.

I'm saying we take bare x86 or ARM or RISC-V. We put Oberon on that, then Smalltalk or Lisp on Oberon, and done. Two layers, one of them portable. The user doesn't even need to know what they're running on, because they don't have a compiler or anything like that.

You're used to sand. You like sand. I can see that. But more sand is not the answer here. The answer is a high-pressure hose that sweeps away all the sand.
liam_on_linux: (Default)
My talk should be on in about an hour and a half from when I post this.

«

A possible next evolutionary step for computers is persistent memory: large capacity non-volatile main memory. With a few terabytes of nonvolatile RAM, who needs an SSD any more? I will sketch out a proposal for how to build a versatile, general-purpose OS for a computer that doesn't need or use filesystems or files, and how such a thing could be built from existing FOSS code and techniques, using lessons from systems that existed decades ago and which inspired the computers we use today.

Since the era of the mainframe, all computers have used hard disks and at least two levels of storage: main memory (RAM), and secondary or auxiliary storage – disk drives, accessed over some form of disk controller, with a file system to index the contents of secondary storage for retrieval.

Technology such as Intel's 3D Xpoint -- sold under the brand name Optane -- and HP's future memristor storage will render this separation obsolete. When a computer's permanent storage is all right there in the processors' memory map, there is no need for disk controllers or filesystems. It's all just RAM.

It is very hard to imagine how existing filesystem-centric OSes such as Unix could be adapted to take full advantage of this, so fundamental are files and directories and metadata to how they operate. I will present the outline of an idea how to build an OS that natively uses such a computer architecture, based on existing technology and software, that the FOSS community is ideally situated to build and develop.
»


It talks about Lisp, Smalltalk, Oberon and A2, and touches upon Plan 9, Inferno, Psion EPOC, Newton, Dylan, and more.

You can download the slides (in PDF or LO ODP format) from the FOSDEM programme entry for the talk.
It is free to register and to watch.

I will update this post later, after it is finished, with links to the video, slides, speaker's notes, etc.

UPDATE:

In theory you should be able to watch the video on the FOSDEM site after the event, but it seems their servers are still down. I've put a copy of my recording on Dropbox where you should be able to watch it.

NOTE: apparently Dropbox will only show the first 15min in its preview. Download the video and play it locally to see the whole, 49min thing. It is in MP4 encoded with H.264.
Unfortunately, in the recording, the short Steve Jobs video is silent. The original clip is below. Here is a transcript:
I had three or four people who kept bugging me that I ought to get my rear over to Xerox PARC and see what they were doing. And so I finally did. I went over there. And they were very kind and they showed me what they were working on.

And they showed me really three things, but I was so blinded by the first one that I didn’t even really see the other two.

One of the things they showed me was object-oriented programming. They showed me that, but I didn’t even see that.

The other one they showed me was really a networked computer system. They had over a hundred Alto computers, all networked using email,
et cetera, et cetera. I didn’t even see that.

I was so blinded by the first thing they showed me, which was the graphical user interface. I thought it was the best thing I'd ever seen in my life.

Now, remember, it was very flawed. What we saw was incomplete. They’d done a bunch of things wrong, but we didn’t know that at the time. And still, though, they had the germ of the idea was there and they’d done it very well. And within, you know, 10 minutes, it was obvious to me that all computers would work like this someday. It was obvious.


liam_on_linux: (Default)
I am not a huge fan of the Windows-type desktop, but I will use it happily enough.

This is good, because there's a wide choice of them. However, I like my taskbar to be vertical, and most Windows-like desktops can't do that. When I say so, people often respond something like "but $desktop_x does vertical panels just fine!"

I get that a lot – so often that I assembled an Imgur picture album to show what I mean.

The taskbar was an original invention in Windows 95. There is no prior art; I and others have looked. The closest were the "icon bar" in Acorn RISC OS (1987) and the Dock in NeXTstep (1988). Both are simpler.

The way the taskbar works is that whatever its orientation, its contents run left-to-right.

So you have the Start button, then (as of IE4's Active Desktop) an optional "quick launch" toolbar (still there in Win8 and Win10 but off by default), then buttons for all the open windows/apps, then the "system tray" or "notification area" containing status icons and the clock.

Wherever you put the taskbar — bottom, top, vertical on the left, vertical on the right — the icons in the system tray and quicklaunch run left to right, in rows if there isn't enough space.

Buttons have text running left to right. (R to L if you use Arabic, Hebrew etc.)

Buttons are wider than they are tall. (This is harder to see in Win7/8/10 because they don't contain text by default any more).

In the Windows taskbar, as you resize the panel, you get more or fewer rows of icons, and more or fewer buttons may fit. If it is so narrow that there's only room for one icon, then they form a single column.

This is good, because it means that on a widescreen, you get more room if the panel gets wider. You get more icons, more buttons, but they stay the same size — there is a "large icons"/"small icons" setting and it is honoured. I normally adjust mine for 4 columns of status icons, which gives me window buttons about the same size as the old traditional ones on Win9x/2K/XP.

In GNOME 2, MATE, Cinnamon, et al, the contents of a panel are arranged in the direction that the panel is arranged. So if you place the taskbar vertically, the contents run vertically. No rows or columns; just a single column. This is bad, because your status icons take up much of the panel leaving little room for window buttons.

If you resize the panel, some or all of the contents get bigger or smaller. So for example in KDE (3/4/5, doesn't matter, they all do vertical taskbars but badly), you get a HUGE start button because it's not resizable: it fits the width of the panel. You get a HUGE clock as well because there's no size setting. There are a million settings for where the panel is, how it's rendered, and how the file manager can display email, network shares, the entire Web and connect to some network protocol nobody's used in 3 decades, but you can't set the font size of the clock. Why would you want to do that?

GNOME 2 and MATE show some things vertically, and some horizontally. Just vertically is bad, but a mixture is even worse — you get the worst of both worlds, showing few things but some look weird or take a lot of room.

In the original Windows taskbar, if you make it really thick in a horizontal orientation, you get 2 rows of app buttons — and even 3 or 4 if it's big enough.

If the GNOME 2/MATE/Cinnamon ones did that in vertical orientation, it would help, but no, they don't implement that feature.

In other words, what annoys me is that almost every FOSS desktop out there is a copy of Win95. KDE (all versions); GNOME 2 & MATE; Cinnamon; XFCE; LXDE/LXQt; Enlightenment; Lumina.

But most of them are rubbish rip-offs of the Windows desktop, and they can't even do all the things the original could 26 years ago when version 1.0 of it shipped.

Xfce does it fairly well. It's a bit clunky but it works. LXDE and LXQt do it well too, but they're much less customisable.

I know of 3 current FOSS desktops that aren't Win95 ripoffs. GNOME 3 is its own thing (with heavy influence from Ubuntu Unity, which in turn is a rip-off of Mac OS X.) Pantheon (the Elementary OS desktop) is a — very poor — rip off of Mac OS X. Pretty, though. (PearOS was a good rip-off; Apple bought it and shut it down. Pantheon doesn't even have a menu bar, just an empty panel where it used to be. I've blogged about that before.) Budgie doesn't quite know what it is, but you can do the exact same thing with about 5min of customising Xfce with its built-in themes and controls.

(The ROX Desktop wasn't a Win95 ripoff, but it's basically dead, sadly. GNUstep isn't a desktop; they just implemented one by accident. It's a NeXTstep ripoff. Long, long ago there were ripoffs of Classic MacOS ("Sparta") and AmigaOS (amiwm plus some file manager I forget). All disappeared last century.)

OpenCDE isn't a Win95 ripoff either, but despite an epic amount of work by a mate of mine to get it made FOSS, nobody seems to care and it hasn't been modernized or even widely adopted. Sad, really. It wouldn't be a vast amount of work to make it into a decent clone of the OS/2 Warp Workplace Shell, and a lot of people loved that.

It's the Linux way, isn't it? What we will do is, we'll divide into a dozen different groups that hate each other's guts. Three quarters of them will duplicate each other's work, badly, based on a rip-off of someone else's idea. Two of the others will rip-off a different idea instead. And one lunatic will do something totally different and new, half-finish it, get bored and go off and do something else which they also won't finish.

Meanwhile, the hardcore will use something horrible from 1973 but love it to death, proclaim how powerful it is and refuse to use anything more modern.

For bonus points, they'll pick one tool from 1973 and another tool from 1965 and despise each other for using the wrong one.

And repeat.

Meanwhile, most practical people with jobs just go and buy a Macbook.
liam_on_linux: (Default)
Everyone hated the Win8 UI. I used it for a couple of months, until it timed out and wanted to be activated – at which point I went back to Ubuntu. I learned Windows on a machine with no mouse – at the end of the 1980s, my employers didn't own a single PC mouse – so I drive it using the keyboard far more heavily than most sighted people. Launch an app? Win+R, binary name, Enter. Win+R, control, Enter. Win+R, cmd, Enter. Not sure of the binary name? Win key, type a few letters, glance to check, Enter. Same UI as Spotlight on a Mac. I didn't care that there wasn't a Start menu, or that the launcher was full-screen. I barely saw it.


I had a brief play with a couple of Win8 tablets and several phones. It was actually a bloody good touchscreen interface, more powerful and capable than either iOS or Android, and with some good touches taken from Blackberry 10 and the short-lived Palm WebOS.
Resizable gadgets that convey live info without opening the app. Gestures to summon launcher and switcher without wasting any screen space – swipe onto the screen from different edges.

At the time, I too thought tablets were manifestly going to be the future. MS has made good business from betting the farm on the next gen of tech. WinNT was barely usable on the contemporary 1993 kit – it was designed for 1998 tech. Win2000 was designed for 2003-4 kit.

But tablet makers didn't deliver. Apple sat on its hands: iPads got slimmer, faster, higher-res and with more storage, and nothing else. Google failed to commit: Android Honeycomb looked good, but almost all the big-screen UI enhancements were gradually dropped from later Android versions. Why? No answer ever came. Hardware makers didn't produce tablets with lots of ports, multiple storage media, and expandability – all the things laptops had. So everyone bought a tablet & then kept it, because the later models weren't much better. They got replaced when dropped or when the battery failed. Massive early adoption then it flatlined.

The Win8 interface was genuinely very good, if you had a touchscreen. But the hardware didn't follow suit, so it was beached, high and dry, on mouse-and-keyboard desktops and half-assed laptops with touchscreens.

If your hands are on a keyboard and your thumbs on a trackpad, then it becomes better to make the trackpad multitouch and give it rich gestures -- which is what Apple did. I've been using OS X since v10.0. I'm typing on it right now. The way kids with it use multitouch is amazing. No menu bar, no dock, no "desktop" as such, just fast fluid gestures to flip from fullscreen-app to fullscreen-app, or flip to a tiled overview of all of them then zoom back in. It's a mode of usage from someone with a high-end laptop and nothing else (partly because the laptops are so expensive, of course) and it's profoundly different to desktop windows-icons-mouse-pointer usage. You simply can't do this stuff with a mouse.

Apple were right: don't bolt a touchscreen onto a laptop. You get "gorilla arm", constantly moving the hands away from their natural position, etc.

Or, just make it all screen. Segment your market on whether people want expansion etc.

This left MS in a corner, so it had to DIY. The Surface devices are the result. Everyone I know who has one loves it. But it was too little too late to change the course of the whole industry.

Google is experimenting with ChromeOS tablets, but it's crippled because the good kit is coming from China – behind the Great Firewall, with no Google, so ChromeOS can't work. Chuwi could make amazing Chromebooks -- they have cheapo convertible Surface-like tablets, running both Windows and Android, on the same device if you wish. But they're in China so they can't adopt ChromeOS.

The PC market has always been driven by price. Some loyalists will pay thousands for a Surface, just as others will for an iPad Pro or MacBook Pro. I won't. I am a keyboard fetishist (apparently) and I'm also cheap, so I use 2nd hand Thinkpads with good keyboards that still work fine.

I have a Chuwi tablet, a Hi9 Air. It was £250 new, with tax & duty, for the spec of a £1000 Apple or Samsung at the time. It's 3Y old & still fine.

If I could have a convertible ChromeBook with a high spec for that kind of money, I'd try it. But I can't. I can have disposable plastic crap, or I need to pay £1000 or something absurd for a Pixel. Yeah, no. Hard no.


And of course the Linux desktop is woefully neglected and nobody is even seriously trying tablet operations. What do you expect? These folks like stuff like Vi, Emacs and tiling window managers. They took years to adopt anti-aliasing. Only Canonical had the vision to go for a converged desktop/tablet/phone UI, but bloody HackerNews didn't like it, so they ditched it. IMHO the only mistake they made was going for their own display server – Mir was a step too far. Wayland was already clearly the future. If Unity 8 had run on Wayland, they might not have been stretched so thin and they might have got it out the door in reasonable time.
liam_on_linux: (Default)
Thirteen years ago this weekend, Steve Jobs announced the original iPhone, as former Microsofty Steve Sinofsky discusses in a great Twitter thread. The demo was epochal, but very hard to pull off behind the scenes, as the NY Times discussed six years later.

I was already a frustrated smartphone user. I had a Nokia 7710 – the last ever Symbian device with a derivative of the Psion UI, and therefore doubly compromised, as I discussed in what turned out to be a very poorly timed piece on OSnews.

After I saw the demo, I knew that finger-operated touchscreens were clearly the future – but as you can see from the comments on that January 2007 blog post, most of my techie friends were very much not convinced of this.

But it was very clear to me that the iPhone was the future. The comments there make amusing reading now: no, it can't be. It doesn't have the features. It doesn't have 3G. It's not about the UI, people want the features.

I couldn't believe that Apple had ported OS X to the ARM chip, though – and now, it's its native platform and all the new Macs will be ARM-based.
liam_on_linux: (Default)
Interested in running DOS programs on 64-bit Windows (or x86 macOS or Linux)? Would you like to run classic DOS applications such as WordPerfect, natively and without emulation on a modern OS? Would you like to get an MS-DOS prompt back under Windows 10 on AMD64?

I found a copy of the IBM PC DOS 2000 VM from Connectix VirtualPC for Mac, and converted it into a format that VirtualBox can open and run.


This was bundled for free with Connectix VirtualPC. VirtualPC is now owned by Microsoft and is a free download.

Old versions are out there for free download, e.g. the Mac version 4.

Just the PC DOS 2000 disk image, converted to VirtualBox VDI format, compressed in Zip format, is here. It's about 10MB.

Note: this is the complete, unmodified Connectix VirtualPC DOS image. It contains DOS integration tools for VirtualPC which do not work with VirtualBox. Unfortunately, VirtualBox does not offer guest additions for DOS. You will see some minor errors as it boots due to this. How to fix them is below.

If you actually want to try this, here are a few things you will need to know.

This is PC DOS 2000, AKA PC DOS 7.01. It's PC DOS 7 plus bugfixes and Y2K compatibility. It is not FAT32-capable: for that, you need PC DOS 7.1. Here is how to get and install that – it too is a free download. This VHD is the ideal basis for building a PC DOS 7.1 VM and that is why I created it.

PC DOS 7 is from the same code-base as MS-DOS 6.22, but with updates. It has IBM's E editor instead of the Microsoft full-screen editor, and IBM's Rexx programming language instead of QBASIC. It does not support DoubleSpace or DriveSpace disk compression. It does include IBM's licensed-in antivirus and backup tools, but to be honest I have not investigated these. It is installed on a 2GB FAT16 partition which is the single primary active partition on the virtual hard disk, just as Connectix shipped it.

PC DOS 2000 does support power management, but it is not enabled by default. Without it, the VM will take (and waste) 100% of a host CPU core. (Unlike MS-DOS 6.22, PC DOS also has native PCMCIA card support, but that is no use in a VM – however, it may be helpful if you want an OS for a very old laptop.) To enable power management, add a line to CONFIG.SYS that says:

device=c:\dos\power.exe

That should be enough – afterwards, your DOS VM will only take the tiny amount of CPU that it needs.
The VM needs very little memory, too: 32MB is already generous, and DOS will run fine in as little as 1MB. Yes, one megabyte, not one gigabyte.
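If you prefer to set the VM's memory explicitly from the command line, VirtualBox's VBoxManage tool can do it – the VM name below is hypothetical, so use whatever you called yours:

VBoxManage modifyvm "PC DOS 2000" --memory 32

The value is in megabytes, so even 32 is generous for DOS.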

You might also want to remove the AUTOEXEC.BAT line that references a FSHARE program in the CNTX directory, as that won't work under VirtualBox. Type the following:

e autoexec.bat

Look for the line that says:

C:\CNTX\FSHARE.EXE

Insert the word REM at the beginning of the line, so it says:

REM C:\CNTX\FSHARE.EXE

Press F2 to save the file. Press F3 to exit. Reboot the VM with [Host]+[R].

PC DOS 2000 was the bundled demo virtual machine with Connectix's VirtualPC. VirtualPC is, for now, obsolete – it does not work correctly under any version of Windows after Win7. Its last hurrah was as the basis for the XP Mode feature in Win7, which did not work on Windows 8 (although there is an easy fix to run it under Win8 or 8.1) or at all under Windows 10.

(I say "obsolete for now" as the original purpose of VirtualPC was as a way to run x86 DOS and Windows on PowerMacs, which did not have x86 processors and could not natively run x86 binaries. Now that Apple is transitioning to processors with the ARM instruction set, newer Macs can again not natively run x86 binaries. Yes, there is a built-in emulator, but Rosetta 2 will not work well on a hypervisor. So, there is once again an opening in the market
for a PC emulator for Macs, if Microsoft chose to resurrect the application. I personally would like to see that – VirtualPC was a good tool and the easiest, least-complicated way to run guest OSes on top of those it ran on, simpler to use than VMware or VirtualBox.)

Yes, this does mean that there is a legal, activated copy of Windows XP Professional for free download that you can run under Win7/8/8.1. And yes, you can extract it and run it under VirtualBox if you wish. I wrote an article for the Register describing how to do that. The snag is that the activation only works for a VirtualPC VM and it will fail on any other hypervisor. You will need a license key or to crack this ancient, obsolete version of Windows. Obviously I cannot help you with that. None of this is needed for PC DOS: it has no activation, copy protection or anything like it.

Microsoft acquired Connectix in 2003 and VirtualPC provided the basis for Microsoft Hyper-V (just as QEMU provides the basis for KVM on Linux) – file formats, management tools and so on. In theory, VirtualBox can attach a Hyper-V virtual hard disk to a VirtualBox VM and boot from it, but in my testing, this did not work with this ~20-year-old Apple VirtualPC file. I had to use command-line tools to convert it to VMware format, and then from VMware format to native VirtualBox format. Apart from testing, that is all I have done.
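If you want to reproduce the conversion yourself, one plausible route – a sketch of the general approach rather than necessarily the exact commands I ran, with made-up filenames – is qemu-img for the hop to VMware format and VBoxManage for the hop to VDI:

qemu-img convert -f vpc -O vmdk "PC DOS 2000.vhd" pcdos2000.vmdk
VBoxManage clonemedium disk pcdos2000.vmdk pcdos2000.vdi --format VDI

(Older VirtualBox releases spell the second command "clonehd" rather than "clonemedium".)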

For my own use, I have of course slightly tweaked and updated the VM. I have configured memory management, and added a few useful tools from a WinME boot diskette:

  • the MS IDE CD device driver

  • the MS mouse driver

  • the MS full-screen editor

  • the MS SCANDISK disk-checking tool

... and a few more, simply because I'm more familiar with them. I've disabled the Connectix guest additions but I have not replaced them – I run it under Linux, where I can just mount the disk image to get files on or off it. I also have a modernized version with the FAT32-capable PC DOS 7.1.

If you are interested in these changes, please leave a comment on the blog and I will help you reproduce them for yourself. Please also let me know of any errors, corrections, additional info or any help you want with getting this working.

You can log in to LiveJournal to comment with any OpenID, including Facebook, Twitter or Google accounts.


I emphasize that this is an unmodified disk image. I have not in any way altered the contents of the VM image, just converted it from one format to another. These files remain the property of their original copyright holders.
liam_on_linux: (Default)
[Repurposed from a Reddit comment]

Ethernet is not just a kind of cable. It's also the electronic signals carried over that cable, and the format of the data packets that are sent over it.

Token Ring is a totally different kind of network, with totally different signals. You can't just convert one to the other.

If you write a sentence of Chinese in Roman characters, you've still got Chinese. It's not magically become Latin. "你吃了吗?" becomes "Nǐ chīle ma?" but it hasn't been translated -- it's still Chinese and unless you speak Chinese you cannot understand it.

This is not Token Ring converted to Ethernet -- it's Token Ring over unshielded twisted-pair cabling (UTP), the kind that some forms of Ethernet (10base-T, 100base-T, 1000base-T) happen to run over too. UTP comes in different categories – low grades are enough for voice phone calls but not for data. The minimum standard for 10base-T was Category 3, and 100base-T (called Fast Ethernet) needed two grades higher: Cat 5.

The connectors are a separate standard -- 8 position 8 contact (8P8C). However, that doesn't tell you which wire goes to which pin. Those wiring layouts are standardized as Registered Jack standards. The one used for 10base-T was adapted from telephony standard #45 – RJ45.

The defining feature of Ethernet is that the medium is a straight line, A to B, and all computers share it. Everyone tries to talk at once, and if someone else is talking too, you hear it, shut up, wait a random time and then try again. This is called Carrier Sense Multiple Access with Collision Detection: CSMA/CD.

Dead simple to design, therefore easy to implement, therefore cheap. But it doesn't scale. If too many people talk at once, they can't make themselves heard. 20-30 computers work OK; 200-300 and performance falls off a cliff.

Token Ring doesn't work like that. In a TR network, the wiring is electronically a loop: no "ends". (Actually that's really fragile in real life, so the loop has long fingers reaching out from a central box, the Multistation Access Unit – MAU – to each computer and back -- physically it's a star.) There is one magic golden ticket that means "I can use the network!" It goes round and round. If you need to talk, you grab it, send your message, then send it on its way. The next machine that needs to talk waits for the ticket, grabs it, talks, then lets it go. A single Token going round and round the ring.

This is complicated, so it's expensive, but because access to the medium is controlled, it scales really well, so it was worth it for big networks that had to perform even under heavy load.

Token Ring worked at 2 speeds: 4Mb/s and later 16Mb/s.

On top of the electronic signalling system, you can put whatever data you want. Normally, some kind of network protocol -- IBM used SNA and DLC, but also later NetBEUI in DOS and OS/2; Novell used IPX/SPX; Apple used AppleTalk; and those weird super-expensive UNIX computers in universities and research labs used their own weird thing called TCP/IP.

This isn't Token Ring over Ethernet. This is plain pure Token Ring, but using media converters to run it over cheap Cat5 UTP cabling with 8P8C connectors wired up as RJ45.

But hang on, you say, Ethernet cabling isn't one line. It's a star too.

Well, yes, it is now, because UTP is point-to-point. Older Ethernet standards such as 10base-5 (originally called just Ethernet) and the later, thinner, cheaper 10base-2 ("CheaperNet", later renamed Thin Ethernet, with the older stuff retconned as Thick Ethernet) are a single long wire -- up to 500 metres for 10base-5, or a bit under 200 metres (600 feet) for 10base-2 -- with special caps on the ends called terminators. But if it breaks at any point, the whole network fails – and to add a new computer, you have to break the cable. Accidentally introduce a branch or fork and it fails. So, cheap but flawed.

So someone came up with a way to run it over UTP -- adapted phone wiring, really. Each machine has its own cable and they all meet in the middle in a box called a hub. But electronically, it's a line -- the ends aren't connected. You can just plug in new machines while the others keep talking. You need a lot more cables, dozens of them, but it's good, it works, and it's cheap.

You need a lot more cables and you need a big hub -- but remember, Ethernet maxed out at under 50 machines or so per shared segment. (Very hand-wavy number: how far the "everyone shout at once" model stretches depends on what you're doing, obviously.)

You could interconnect hubs, although it was fiddly. But then you've just doubled the size of your shared segment, and it still doesn't scale.

So if you put a mini router in each hub, and it only passes traffic destined for the other hub, it scales better – but only to 4-5 hubs before it gets too complex.

But once you're making cheap mini router chips, you can make a hub that listens to the traffic and only connects machines when they're talking. It's like a little telephone exchange, instead of lots of people on one line all talking at once. It switches circuits. Normally, client computers all talk to a server, not to each other, so it's quite scalable.

Now your hub has smarts. It doesn't act like a dumb everyone-to-everyone hub any more. It's a mini call-centre, routing calls from machine to machine: a switch. Each conversation only ever connects 2 machines, and lots of conversations can happen at once, so it scales really well -- to hundreds and hundreds of machines with no drop in performance. No token required, just smarter, faster electronics. And all at 100Mb/s.
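For the code-minded, here's a minimal sketch of the "learning switch" idea -- not any real switch's firmware, just the principle: watch the source address on each frame to learn which port each machine lives on, then forward frames only where they need to go:

```python
class LearningSwitch:
    """Toy model: learn which port each MAC address lives on, forward accordingly."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                       # MAC address -> port number

    def handle_frame(self, in_port, src, dst):
        self.mac_table[src] = in_port             # remember where the sender lives
        if dst in self.mac_table:
            return [self.mac_table[dst]]          # known destination: forward to one port only
        return sorted(self.ports - {in_port})     # unknown destination: flood, hub-style

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle_frame(1, "aa:aa", "bb:bb"))   # bb:bb unknown yet -> flooded to ports 2, 3, 4
print(sw.handle_frame(2, "bb:bb", "aa:aa"))   # aa:aa already learned -> port 1 only
print(sw.handle_frame(3, "cc:cc", "bb:bb"))   # bb:bb learned now too -> port 2 only
```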

When switches got cheap, that was it for Token Ring.
liam_on_linux: (Default)
This started out as a comment on my previous post, Notes towards a classification of UEFI firmware types.

For some 30 years, the PC platform had the BIOS. Old-fashioned, clunky, designed for 8/16-bit OSes such as DOS and largely irrelevant to 32-bit OSes, which couldn't call it anyway. So either they used it to kick off their bootloader, which set some stuff up, jumped to 32-bit mode and subsequently ignored it (e.g. NT, OS/2, NetWare), or they just stuck an exploit in the Master Boot Record and bypassed it altogether (e.g. Linux, BSD, Solaris, etc.)

It was a limitation, but it was a baseline for PC compatibility, and it kinda worked. There have been multiple workarounds to get round size restrictions for hard disks in x86 PCs: first ST-506 and RLL, and SCSI controllers with their own firmware; then IDE (max 504MB); then EIDE with LBA (max 8GB); then UltraEIDE (max ~128GB); then SATA, banishing the ancient 2-drives-per-channel thing, with a limit of 2TB per drive.

This was a ceiling. You can't keep working around the BIOS's ancient cylinders/heads/sectors-per-track model forever: with LBA, disk addresses are 32-bit sector numbers, and at 512 bytes per sector that tops out at 2TB. And the BIOS dictated the structure of the Master Boot Record, whose partition table uses the same 32-bit sector counts, so this was a hard limit for the PC disk partitioning system too. So it had to go. Something new had to replace it. That replacement was GPT partitioning, and with that came UEFI.
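The arithmetic behind that 2TB wall is worth a couple of lines. The MBR partition table holds start and length as 32-bit sector counts, and the traditional sector is 512 bytes, so:

```python
sector_size = 512          # bytes: the traditional disk sector
lba_field   = 2 ** 32      # the MBR stores start and length as 32-bit sector counts
max_bytes   = sector_size * lba_field
print(max_bytes, "bytes =", max_bytes / 2 ** 40, "TiB")   # 2199023255552 bytes = 2.0 TiB
```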

UEFI is allegedly an open standard. Sure, there is a UEFI consortium and so on, yes.

But it's like a lot of things in the IT industry. We all talk about how there are standards bodies and so on, but really, there are a bunch of tiny marmosets and tamarins, and two grumpy 400kg silverback gorillas who don't like each other but are totally dependent on one another and know it.

UEFI is an extension of EFI, which was originally called IBI: the Intel Boot Initiative. EFI was and is Intel-proprietary, closed-source and paid-for -- basically, Intel's 64-bit firmware for booting Windows on Itanium.

Intel chose to adopt and adapt this for x86-64, but it was not a sure thing. Intel had choices.

Intel could have used Open Firmware. Open Firmware has been standard on SPARC and many POWER and PowerPC boxes, including PCI PowerMacs, for decades. It works fine on x86 -- the OLPC XO-1 used it.

Or it could have used CoreBoot. It works, it's FOSS, it's fast, it supports BIOS emulation and DOS/Windows.

Or it could have used ARC firmware from the ACE initiative, as the SGI Visual Workstations did. (I wish I'd picked up one when they were going cheap. Lovely machines.) You've probably never heard of it, but the ACE consortium's device-naming system (supporting Windows NT, Unix and VMS) is the reason for the weird device naming in the NT bootloader, from 1993 (NT 3.1) through NT 4, Win2K and XP, until Vista's BCD replaced it, incidentally.
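For the curious, an ARC name looks something like multi(0)disk(0)rdisk(0)partition(1)\WINNT, which is what used to live in NT's BOOT.INI. A throwaway Python sketch to pull one apart -- purely illustrative, nothing NT itself ever used:

```python
import re

def parse_arc(path):
    """Split an ARC-style device path into (component, number) pairs plus the directory."""
    devices = re.findall(r"([a-z]+)\((\d+)\)", path)
    _, _, directory = path.partition("\\")
    return [(name, int(num)) for name, num in devices], "\\" + directory

print(parse_arc(r"multi(0)disk(0)rdisk(0)partition(1)\WINNT"))
# ([('multi', 0), ('disk', 0), ('rdisk', 0), ('partition', 1)], '\\WINNT')
```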

Intel had options. But it didn't take them. For Intel's first stab at firmware for its first 64-bit PC processor, Itanium, it went with something closed and proprietary.

Why? Well, let's back up a bit.

It may seem sweeping when I say that Intel and Microsoft rule the industry. If it wasn't apparent, those are the 2 gorillas I was alluding to in my metaphor earlier.

But what about AMD?

Realistically, AMD has significantly influenced the direction of the PC industry just once ever: AMD64.

The story is more complicated than it looks. It didn't happen because of AMD's actions. It happened because of Microsoft's response to Intel's actions.

Intel's plan for the next-gen PC processor, IA-64 (i.e. Itanium), flopped. At peak it was selling a few thousand units a year. Not a typo. The PC industry deals in millions of units and Itanium failed to achieve even 1% of that.

Intel had no fallback plan; it had sold its ARM licence to Marvell, and it had leaned heavily on HP/Compaq to kill Alpha. So Intel was left with an unimpressive, hot, underperforming range of x86-32 chips (the Netburst Pentium 4) and nowhere to go. It was bitten by the sunk-cost fallacy: having spent tens of billions on Itanium, it was not inclined to launch a competitor to its own struggling next-gen architecture when it didn't have a compelling option available.

AMD saw an opportunity and invented AMD64 — a pretty inspired hack, IMHO. Still a bit stingy with registers compared to most RISC ISAs, but good.

AMD64 caught on. Partly because of FOSS and Linux jumping to adopt it.

Intel responded as Intel is wont to do: it invented its own, different 64-bit extension to x86. It showed this to MS and asked for a version of Windows for it.

MS, to Intel's shock, said a firm NO. Paraphrased, the statement I heard described was: "We already support your crappy IA64. We already support AMD64. We are not going to support a third 64-bit PC architecture, especially as we support two of yours already. [x86 and IA-64.] You need us. Without us, without Windows, you're dead and you know it. We will not do it. AMD64 is fine, it's good, it works, the industry likes it. Get compatible with it or go unsupported by MS."

Intel, smartly, caved. It threw away its own IX64 (I'm afraid I've forgotten the official name) and tweaked its 64-bit Netburst cores to support AMD64.

Then, discovering corruption and massive nepotism going on in its new Indian R&D centre, where it was working on its future quad-core Netburst chips, it closed the entire operation and switched focus to Intel Israel's low-power Pentium-M design for laptops. That design became the future, but the new "Core" and "Core Duo" chips had to be rushed out somewhat, so the 64-bit Pentium 4 was replaced by the 32-bit Core series... closely followed by the 64-bit Core 2, rendering all Core 1 machines obsolete at a stroke.

Back to UEFI.

AMD set the agenda on 64-bit x86, and the FOSS world swung the prow of the boat towards it.

My suspicion is that when Intel management realised that Itanium was dead in the water and they'd lost control of x86-64, they set up an allegedly open-source body for UEFI, based on their own EFI, which Intel could totally dominate — thus giving Intel a way to keep its hand on the tiller of x86-64.

Only one other partner had a significant role: MS, who have slightly more knowledge of writing firmware than I do.

The Linux world were, as usual, too busy hating each other: pretending the others didn't exist or, if they did acknowledge them, flinging poop at one another. They had no real say in it.

AMD didn't get much of a say. (AMD supported CoreBoot, incidentally.)

So we got UEFI, which is an Intel standard and was designed to boot Windows and basically nothing else. That's why getting a free Linux to boot on UEFI can be such a pain.

The best I can say for UEFI is that it's better than the situation in the ARM world, where there is no standard firmware at all, some of the most popular devices have none, and Arm Ltd's vain attempt to urge one on AArch64 licensees has failed completely, AFAICS.
liam_on_linux: (Default)
There seem to be, from my own experience so far, 3 or 4 types of UEFI system in practice:

[1] UEFI systems where you can pick an option to work in legacy BIOS emulation, and then all the UEFI stuff disappears and it just looks and acts like a BIOS. Example: my Thinkpad X220 & T420 even on the latest firmware. You can enter the firmware with whatever the manufacturer's normal hot-key combination is, etc.

[2] UEFI systems which will look for legacy BIOS boot structures on the boot medium, boot from them in BIOS mode, and from then on look like a BIOS machine. However, this is dynamic: if booted from a UEFI medium with an EFI System Partition etc., they act like pure UEFI machines. Examples: I have seen some fairly recent (Xeon) desktop Dells from the last 3-5 years like this.

[3] UEFI systems which offer a choice: for instance, when you enter the "select a boot medium" menu and they detect a bootable USB stick, you get 2 boot options for that medium: "legacy boot" or "BIOS boot" (or words to that effect), or "UEFI boot". Depending on which you pick, the *same boot medium* will perceive itself to be starting on a BIOS system or on a UEFI system. If you know the difference, these are easy, but it's also easy to get it wrong and end up in a configuration where you can't install a bootable system, or where your booted system can't see or manipulate the boot structures of an installed system of the other type. (There's a quick way to check which mode you actually booted in -- see the little snippet after this list.)

Examples: I have seen modern 2019-model Dell Precision mobile workstations like this; my girlfriend's Lenovo Core i5 desktop (about a 2017-2018 machine).

Interestingly, I was only able to get my partner's machine to boot Linux Mint 19.2 or 19.3 from its HD by pressing F12 to pick a boot medium; then the GRUB menu appears. Normally, it boots direct to Win10, whatever settings are in the firmware. (This is based on Ubuntu 18.04, as mainstream as Linux gets). Upgrading the firmware made no difference.

Interestingly, this summer, when I upgraded to Mint 20 (based on Ubuntu 20.04), suddenly the EFI GRUB menu started working without F12. Someone somewhere has fixed something and now it works. Who knows who or what? All part of the fun of UEFI.

[4] UEFI systems which are pure UEFI and will only boot from a correctly-configured UEFI medium. Some claim to offer a legacy boot option, but it doesn't work. Example: my current work desktop, a Dell Precision Core i7 minitower. I inherited this from a colleague who quit. He spent days trying to get it to boot; later, I tried to help. When he left, I asked for and got the machine, and I spent a week or so on it. I tried about 6-7 Linux distros, stable and cutting-edge versions, booted from USB or DVD or network. Nothing worked. I could not get a Linux distro to boot from the hard disk. In the end, in desperation, I put the latest Win10 on it, which worked perfectly. With the boot structures created by Win10, Linux will dual-boot perfectly happily and totally reliably.
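(The promised check from type [3] above: on Linux, the kernel only creates /sys/firmware/efi when it was started via UEFI, so something like this tells you which way you actually booted.)

```python
import os

# On Linux, the kernel only creates /sys/firmware/efi when the machine was started via UEFI.
if os.path.isdir("/sys/firmware/efi"):
    print("Booted in UEFI mode")
else:
    print("Booted in legacy BIOS / CSM mode (or /sys isn't mounted)")
```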

I have received a lot of abuse from "experts" who tell me that what I've seen is impossible, not true, etc. Especially from enterprise Linux vendors, who just get their bootloader signed and therefore see no problem with this. This is a theme — I've been in the business 32 years now and I've had that a lot.

The watchword is this: one single exception disproves the claim that something "just works", no matter how many millions of people it works perfectly for.

It doesn't matter how many reviews say "X is great" if one review says "Y doesn't work and Z is missing." It is the negative assessments that matter; this is not a number game, not a vote. One failure outweighs any number of successes.

IMHO, UEFI is inadequately specified, as yet poorly debugged, and not really ready for prime time. It works with single-boot Windows systems, although it is hard to do things like change boot settings, and it's relatively easy to end up with an unbootable system if you, for instance, copy a working HDD install onto an SSD. Fixing this is a lot of work, and the automated tools in Win10 don't work, which indicates to me that Microsoft don't fully understand this either. There is no longer any consistent way to get into the firmware setup program, into Windows' Safe Mode, etc. Many systems intentionally conceal all this to make a smoother customer experience. You need to do things like set options in a Windows control panel to display the startup options when the system is next booted. If you can't load Windows, tough. If you can but it fails to complete booting, tough. If you don't have a Windows system, tough. I regard this as broken, badly broken; the industry apparently does not.

It is important to note, not as paranoia but as a simple statement of fact, that enterprise OS vendors focus on servers, and servers typically do not dual-boot, at all, ever. Most, in fact, run inside dedicated VMs, so they don't even interact with other OSes at all. Therefore all this stuff is poorly-debugged and little-tested.

It is also significant, I think, in paranoid mode, that while Microsoft says it loves Linux and FOSS now, this is marketing guff. No significant parts of Windows have been made FOSS. Windows will not dual-boot with any FOSS OS; in fact, it disables other bootloaders, and making dual-boot work is left entirely to the FOSS OS. Windows can't mount, read, or write Linux filesystems, or even identify them. MS only likes Linux if it's running safely inside a Windows VM.

This, for me, falsifies the claim that MS <3 FOSS. They talk the talk but do not walk the walk. It is their old embrace and extend tactic once again.

As such, the fact that UEFI works so badly with non-MS OSes -- and only really cooperates with the big enterprise server OS vendors -- seems to me quite likely to be intentional. The situation is difficult for small FOSS players and not materially improving.
liam_on_linux: (Default)

The excellent LowEndMac website for users of vintage Apple kit has a thriving FB community, which is full of the sort of people who recommend that, whatever your vintage Mac, you crack its firmware and run the latest OS on it. As an example, they got very excited recently because someone found a leaked beta of Mac OS X 10.6 for PowerPC. 10.6 "Snow Leopard" was the first Intel-x86-only release of OS X. This was apparently a late decision, and an unfinished build of 10.6.0 that ran on the older PowerPC chips did exist.

These guys -- and they are almost all guys -- would rather run an unstable beta of a newer OS than the stable, patched version of 10.5 with its plethora of stable, supported PowerPC apps.

Currently, they're getting very excited about the new ARM Macs, telling one another how fast they are, how Rosetta 2 can magically translate any x86-64 app into a full native ARM app, and how the M1 is a system-on-a-chip and therefore the RAM is built into the processor and so is much faster than boring external RAM. (The RAM is LPDDR4X in a separate die -- just a single die, which is possibly a bad sign bandwidth-wise -- that is in the same physical package as the SoC, like a Pentium II's L2 cache was.)

They're also under the impression that Rosetta 2 will magically allow Windows 10 in a VM, like on current Macs.

I tried to answer...

Apple has used 4 CPU architectures so far:
[1] Motorola 680x0
[2] AIM PowerPC
[3] Intel x86-32 and then x86-64
[4] ARM

In each case, it's provided some kind of emulation for the generation immediately before, which can run some individual programs from the old architecture.

A hypervisor is not just "a program". A hypervisor splits up a CPU into multiple instances, each of which can run a separate OS.

An ARM hypervisor can virtualise multiple ARMs. An x86 hypervisor can virtualise multiple x86.

But you can't just run a hypervisor through a translation tool such as Rosetta. A hypervisor interacts directly with the host CPU, and the guest OSes and the programs inside them run on that host CPU too.

If I give you a copy of a very long book, such as The Lord of the Rings, and a very big, very complete English-to-Chinese dictionary, you could, given enough time, produce something that was kind of like the LotR in something like Chinese, but it will never be as good as a translation by a Chinese native who speaks English.

In the same way, a translated app will never be as good as a native app.

But given that super-complete Chinese dictionary, you look up English words and phrases and it gives you some characters to write down. You can't read them if you don't speak Chinese.

So if a Chinese person came up to you and said "你好! 你吃了吗?" ... then that fancy dictionary isn't going to help you. It's an English-to-Chinese dictionary, and unless you can read Chinese, even a Chinese-to-English dictionary won't help you much.

You can do a not-very-good one-time one-way translation, but you can't translate in both directions. You can't interact in Chinese.

Apps inside Parallels are x86 apps that run on an x86 guest OS, which interacts with the hypervisor, which interacts with the real x86 processor -- everything in that stack is running x86 code on x86 hardware.

TechRadar's article carefully says that they demonstrated Linux. Linux is cross-platform. Linux runs on ARM. They could run ARM Linux in an ARM VM under ARM Parallels on ARM macOS and it will work fine, and fast too -- because it's all-native -- making for a great demo.

If they're cheeky they can do this with ARM Win10 as well and it will look good, but it won't run native MS Office because there's no such product.

If you get the fancy ARM Windows supplied with ARM tablets that can emulate x86-32, you will get MS Office for x86 running on an emulator. That is 1 level of emulation: the OS is native, but the app is translated.

There are ARM versions of Windows these days but they are limited, heavily locked-down things. One version is only available preinstalled on ARM tablets. It's like MS-DOS in the v1-v4 era: it's not a retail product. The only way to get it is to buy a computer running it, and it's hardware-locked to the firmware device it came with, like a phone OS.

Another version of ARM Windows 10 is a special "Internet of Things" edition of Windows 10 – it doesn't have a desktop and so on. This runs on a Raspberry Pi, but it's intended to run on a smart doorbell, not a full-function personal computer. Of course people have found a way to hack it to get a full desktop, but it's not trivial, it's not supported, and there are very few ARM Windows apps out there -- just a few FOSS things that have been recompiled.

So: you can't just buy full desktop Windows for ARM and run it in an ARM VM as you can with x86 Windows. Tablet ARM Win10 does do its own internal x86-32 emulation -- just as the RISC versions of Windows NT on MIPS, PowerPC, Alpha and so on emulated x86-16 in the 1990s. But there's no x86-64 emulation just yet, and anyway, running any performance-critical app in emulation is undesirable. Running an entire VM in one would be nasty.

Remember that when the first x86-only Mac OS X came out, Apple dropped "Classic Mode", for precisely this reason.

Emulation is sluggish but it'll do for stuff that is not performance-critical. OK for MS Word, not desirable for MS Excel if you're working with big complicated databases.

And if you want to run x86 Windows then you can't virtualise that on ARM, because an ARM chip can't run x86 instructions.

So ARM Macs mean saying goodbye to running x86 Windows in a VM. (And x86 Linux, but who cares? You can run ARM Linux and ARM Linux has all the FOSS apps from x86 Linux -- but no x86 games or other proprietary apps.)

So you have to emulate x86.

Rosetta 2 can't help you here. It can't "see" into a VM. It doesn't know what OS is inside the VM and what apps are on that OS, so it can't translate them.

So it has to emulate the whole thing.

It works, but it's slow. You take your fast elegant ARM chip and emulate a huge complex x86 chip with 10x as many transistors, and it will work, but it's not quick.

Anyone old enough to have run Insignia SoftWindows or SoftPC on a PowerMac under MacOS 7/8/9 in the mid-1990s has tried this. It works but it's not elegant and it's not fast.

But I personally have been using ARM computers since 1988 or so, when I bought an Acorn Archimedes. It had an 8MHz ARM2 chip. Native software was blindingly fast: it was about 4-8x faster than the fastest x86 box my employers sold, an IBM with an 80386DX chip running at 25MHz with secondary cache.

Note those numbers. An 8MHz ARM running ARM code was at least 4x faster than a 25MHz 386 running 386 code. That is how much more efficient than x86 ARM can be.

I had a program called !PCEm on my Arc. It was a complete PC emulator and it could run MS-DOS 3.3 and PC apps such as Lotus 1-2-3, MS Word for DOS and QuickBASIC. I used it for some work.

The original IBM PC ran at 4.77 MHz. These days, 4.77 GHz is just about possible. A thousand times faster.

But my Arc emulating an x86 chip ran at about 2MHz.

An ARM chip that was some 4x faster running ARM code was well over 10x slower running x86 code -- more like 15-20x slower.

This stuff is not new. I was running x86 code on ARM before IBM and Motorola invented the first PowerPC chip.

It works. It can be usable. But you lose all the high performance of your ARM chip: it runs flat out, getting hot and burning power, to deliver mediocre x86 performance.

So yes, some x86-64 native macOS apps will run usably on ARM, because the whole OS isn't being emulated -- 1 translated app is being run.

But that doesn't apply to running Windows, or running a hypervisor.

They can probably make it work and run x86 Windows on ARM Macs, but don't expect it to be as fast as running x86 Windows on an x86 Mac, where under BootCamp it runs at full native speed and in Parallels or whatever it runs 10-15% slower.

liam_on_linux: (Default)

[Repurposed HackerNews comment]


OS/2 was a genuinely great OS in its time.


OS/2 2.0 was released in the same month as Windows 3.1. In that era, it was so much better, it was embarrassing.


(Linux 1.0 would not be released for another 2 years yet, and v1.0 of native BSD on x86 — BSD/OS from BSDi, i.e. still commercial — for another whole year. Yes, it was possible to run pre-1.0 versions of both — BSD/OS 0.3 came out in April 1992 as well — but pre-1.0 Linux was very sketchy and very hard work.)


If IBM had let Microsoft make OS/2 1 a 386 OS (x86-32) instead of a 286 OS, the IT world would have turned out very differently. An OS/2 1.x in 1987 that could multitask DOS apps would have been a big hit.


I suspect Windows 3, FreeBSD etc. and Linux would never have happened. Perhaps the GNU Project would have adopted the BSD-Lite kernel, which it did evaluate but foolishly discarded.


But saying that, OS/2 2 was still a 1980s-style OS: a nightmare of vast config files, special drivers that cost money and came on floppy via international post, building custom modified boot floppies so your hard disk or CD drive controller would be recognised -- real, major pain.


The desktop was very powerful but very weird and kinda clunky. It's no coincidence that nobody has ever re-implemented the OS/2 Workplace Shell on Linux. Lots of other 1980s desktops have been kept alive or re-implemented — Acorn's RISC OS, Classic MacOS, AmigaOS, NeXTstep, CDE — yep, all of those exist or existed in some modern form. WPS? Yeah, no thanks.


liam_on_linux: (Default)

Edit an entry, use the "switch to new editor" option and it duplicates it. Thanks, LJ, that is not what I wanted at all. 🙄

liam_on_linux: (Default)

[Repurposed from Stack Exchange, here]
The premise in the question is incorrect. There were such chips. The question also fails to allow for the way that the silicon-chip industry developed.
Moore's Law basically said that every 18 months, it was possible to build chips with twice as many transistors for the same amount of money.
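To put rough numbers on that doubling -- a back-of-the-envelope toy, using the 18-month rule of thumb and the 6502's roughly 3,500 transistors as the 1975 starting point (both figures are assumptions for illustration):

```python
transistors = 3500             # roughly what the 6502 used in 1975 (an assumption for illustration)
year = 1975.0
while year < 1990:
    year += 1.5                # one doubling every 18 months
    transistors *= 2
    print(int(year), f"{transistors:,}")
# By the mid-1980s the same budget is in 68020 territory (roughly 200,000 transistors);
# by 1990 it is in the millions, which is 80486/68040 territory.
```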
The 6502 (1975) is a mid-1970s design. In the '70s it cost a lot to use even thousands of transistors; the 6502 succeeded partly because it was very small and simple and didn't use many, compared to more complex rivals such as the Z80 and 6809.
The 68000 (1979) was also from the same decade. It became affordable in the early 1980s (e.g. Apple Lisa) and slightly more so by 1984 (Apple Macintosh). However, note that Motorola also offered a version with an 8-bit external bus, the 68008, as used in the Sinclair QL. This reduced performance, but it was worth it for cheaper machines because it was so expensive to have a 16-bit chipset and 16-bit memory.
Note that just 4 years separates the 6502 and 68000. That's how much progress was being made then.
The 65C816 was a (partially) 16-bit successor to the 6502. Note that WDC also designed a 32-bit successor, the 65C832. Here is a datasheet: https://downloads.reactivemicro.com/Electronics/CPU/WDC%2065C832%20Datasheet.pdf
However, this was never produced. As a 16-bit extension to an 8-bit design, the 65C816 was compromised and slower than pure 16-bit designs. A 32-bit design would have been even more compromised.

Note, this is also why Acorn succeeded with the ARM processor: its clean 32-bit-only design was more efficient than Motorola's combination 16/32-bit design, which was partly inspired by the DEC PDP-11 minicomputer. Acorn evaluated the 68000, 65C816 (which it used in the rare Acorn Communicator), NatSemi 32016, Intel 80186 and other chips and found them wanting. Part of the brilliance of the Acorn design was that it effectively used slow DRAM and did not need elaborate caching or expensive high-speed RAM, resulting in affordable home computers that were nearly 10x faster than rival 68000 machines.
The 68000 was 16-bit externally but 32-bit internally: that is why the Atari machine that used it was called the ST, short for "sixteen/thirty-two".
The first fully-32 bit 680x0 chip was the 68020 (1984). It was faster but did not offer a lot of new capabilities, and its successor the 68030 was more successful, partly because it integrated a memory management unit. Compare with the Intel 80386DX (1985), which did much the same: 32-bit bus, integral MMU.
The 80386DX struggled in the market because of the expense of making 32-bit motherboards with 32-bit wide RAM, so was succeeded by the 80386SX (1988), the same 32-bit core but with a half-width (16-bit) external bus. This is the same design principle as the 68008.
Motorola's equivalent cost-reduced part was the fairly rare 68EC020 (a 68020 with a narrower, 24-bit address bus).
The reason was that around the end of the 1980s, when these devices came out, 16MB of memory was a huge amount and very expensive. There was no need for mass-market chips to address 4GB of RAM — that would have cost hundreds of thousands of £/$ at the time. Their 32-bit cores were for performance, not capacity.
The 68030 was followed by the 68040 (1990), just as the 80386 was followed by the 80486 (1989). Both also integrated floating-point coprocessors into the main CPU die. The progress of Moore's Law had now made this affordable.
The line ended with the 68060 (1994), still 32-bit — and, like Intel's 80586 family (now called "Pentium", because numbers couldn't be trademarked), it had Level 1 cache on the CPU die.
The reason was that, at this time, fabricating large chips with millions of transistors was still expensive, and these chips could already address more RAM than it was remotely affordable to fit into a personal computer.
So the priority at the time was to find ways to spend a limited transistor budget on making faster chips: 8-bit → 16-bit → 32-bit → integrate MMU → integrate FPU → integrate L1 cache
This line of development somewhat ran out of steam by the mid-1990s. This is why there was no successor to the 68060.
Most of the industry switched to the path Acorn had started a decade earlier: dispensing with backwards compatibility with now-compromised 1970s designs and starting afresh with a stripped-down, simpler, reduced design — Reduced Instruction Set Computing (RISC).
ARM chips supported several OSes: RISC OS, Unix, Psion EPOC (later renamed Symbian), Apple NewtonOS, etc. Motorola's supported more: LisaOS, classic MacOS, Xenix, ST TOS, AmigaDOS, multiple Unixes, etc.
No single one was dominant.
Intel was constrained by the success of Microsoft's MS-DOS/Windows family, which sold far more than all the other x86 OSes put together. So backwards-compatibility was more important for Intel than for Acorn or Motorola.
Intel had tried several other CPU architectures: iAPX-432, i860, i960 and later Itanium. All failed in the general-purpose market.
Thus, Intel was forced to find a way to make x86 quicker. It did this by breaking down x86 instructions into RISC-like "micro operations", re-sequencing them for faster execution, running them on a RISC-like core, and then reassembling the results into x86 afterwards. First on the Pentium Pro, which only did this efficiently for x86-32 instructions, when many people were still running Windows 95/98, an OS composed of a lot of x86-16 code and which ran a lot of x86-16 apps.
Then came the Pentium II, an improved Pentium Pro with onboard L1 (and, soon after, L2) cache and improved x86-16 optimisation — by which time the PC market was moving to fully x86-32 versions of Windows (2000 and then XP).
In other words, even by the turn of the century, the software was still moving to 32-bit and the limits of 32-bit operation (chiefly, 4GB RAM) were still largely theoretical. So, the effort went into making faster chips with the existing transistor budget.
Only by the middle of the first decade of the 21st century did 4GB become a bottleneck, leading to the conditions for AMD to create a 64-bit extension to x86.
The reasons that 64-bit happened did not apply in the 1990s.
From the 1970s to about 2005, 32 bits were more than enough, and CPU makers worked on spending the transistor budgets on integrating more go-faster parts into CPUs. Eventually, this strategy ran out, when CPUs included the integer core, a floating-point core, a memory management unit, a tiny amount of L1 cache and a larger amount of slower L2 cache.
Then, there was only 1 way to go: integrate a second CPU. First as a second, separate CPU chip, then as dual-core dies. Luckily, by this time, NT had replaced Win9x, and NT and Unix could both support symmetric multiprocessing.
So, dual-core chips, then quadruple-core chips. After that, a single user on a desktop or laptop gets little more benefit. There are many CPUs with more cores but they are almost exclusively used in servers.
Secondly, the CPU industry was now reaching limits of how fast silicon chips can run, and how much heat they emit when doing so. The megahertz race ended.
So the emphases changed, to two new ones, as the limiting factors became:

  • the amount of system memory

  • the amount of cooling they required

  • the amount of electricity they used to operate

These last two things are two sides of the same coin, which is why I said two not three.
Koomey's Law has replaced Moore's Law.

liam_on_linux: (Default)
In lieu of anything new right now -- I accidentally sent my last post to the wrong Livejournal. In the unlikely event that anyone is reading this one and not [livejournal.com profile] lproven, it's over here:
Unix is Unix is Unix.
liam_on_linux: (Default)
The first computer I owned was a Sinclair ZX Spectrum, and I retain a lot of fondness for these tiny, cheap, severely-compromised machines. I just backed the ZX Spectrum Next kickstarter, for instance.

But after I left university and got a job, I bought myself my first "proper" computer: an Acorn Archimedes. The Archie remains one of the most beautiful computers [PDF] to use and to program I've ever known. This was the machine for which Acorn developed the original ARM chip. Acorn also had an ambitious project to develop a new, multitasking, better-than-Unix OS for it, written in Modula-2 and called ARX. It never shipped, and instead, some engineers from Acorn's in-house AcornSoft publishing house did an inspired job of updating the BBC Micro OS to run on the new ARM hardware. The result was called Arthur. Version 2 was renamed RISC OS [PDF].

(Incidentally, Dick Pountain's wonderful articles about the Archie are why I bought one and why I'm here today. Some years later, I was lucky enough to work with him on PC Pro magazine and we're still occasionally in touch. A great man and a wonderful writer.)

Seven or eight years ago on a biker mailing list, Ixion, I mentioned RISC OS as something interesting to do with a Raspberry Pi, and a chap replied "a friend of mine wrote that!" Some time later, that passing comment led to me facilitating one of my favourite talks I ever attended at the RISC OS User Group of London. The account is well worth a read for the historical context.

(Commodore had a similar problem: the fancy Commodore Amiga Operating System, CAOS, was never finished, and some engineers hastily assembled a replacement around the TRIPOS research OS. That's what became AmigaOS.)

Today, RISC OS runs on a variety of mostly small and inexpensive ARM single-board computers: the Raspberry Pi, the BeagleBoard, the (rather expensive) Titanium, the PineBook and others. New users are discovering this tiny, fast, elegant little OS and becoming enthusiastic about it.

And that's led to two different but cooperating initiatives that hope to modernise and update this venerable OS. One is backed by a new British company, RISC OS Developments, who have started with a new and improved distribution of the Raspberry Pi version called RISC OS Direct. I have it running on a Rasπ 3B+ and it's really rather nice.

The other is a German project called RISC OS Cloverleaf.

What I am hoping to do here is to try to give a reality check on some of the more ambitious goals for the original native ARM OS, which remains one of my personal favourites to this day.

Even back in 1987, RISC OS was not an ambitious project. At heart, it vaguely resembles Windows 3 on top of MS-DOS: underneath, there is a single-tasking, single-user, text-mode OS built to an early-1980s design, and layered on top of that, a graphical desktop which can cooperatively multitask graphical apps -- although it can also pre-emptively multitask old text-mode programs.

Cooperative multitasking is long gone from mainstream OSes now. What it means is that programs must voluntarily surrender control to the OS, which then runs the next app for a moment, then when that app gives up control of the computer, a third, a fourth and so on. It has one partial advantage: it's a fairly lightweight, simple system. It doesn't need much hardware assistance from the CPU to work well.

But the crucial weakness is in the word "cooperative": it depends on all the programs being good citizens and behaving themselves. If one app grabs control of the computer and doesn't let go, there's nothing the OS can do. Good for games and for media playback -- unless you want to do something else at the same time, in which case, tough luck -- but bad news if an app does something demanding, like rendering a complex model or applying a big filter or something. You can't switch away and get on with anything else; you just have to wait and hope the operation finishes and doesn't run out of memory, or fill up the hard disk, or anything. Because if that one app crashes, then the whole computer crashes, too, and you'll lose all your work in all your apps.
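Here's a toy illustration of what "cooperative" means, with Python generators standing in for apps -- each yield is an app politely handing control back to the OS, and the badly-behaved one simply never does. (This is a sketch of the concept, not of how the RISC OS WIMP actually works.)

```python
def well_behaved(name):
    for _ in range(3):
        print(name, "does a bit of work, then hands control back")
        yield                      # voluntarily give the CPU back to the scheduler

def hog(name):
    print(name, "starts a big render and never yields...")
    while True:
        pass                       # never hands control back: the whole desktop is now hung
    yield                          # unreachable -- that's the point

def cooperative_scheduler(apps):
    running = list(apps)
    while running:
        for app in list(running):
            try:
                next(app)             # let the app run until it chooses to yield
            except StopIteration:
                running.remove(app)   # this app has finished cleanly

cooperative_scheduler([well_behaved("Draw"), well_behaved("Edit")])
# cooperative_scheduler([well_behaved("Draw"), hog("Render")])  # uncomment to hang the whole "machine"
```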

Classic MacOS worked a bit like this, too. There are good reasons why everyone moved over to Windows 95 (or Windows NT if they could afford a really high-end PC) -- because those OSes used the 32-bit Intel chips' hardware memory protection facilities to isolate programs from one another in memory. If one crashed, there was a chance you could close down the offending program and save your work in everything else.

Unlike under MacOS 7, 8 or 9, or under RISC OS. Which is why Acorn and Apple started to go into steep decline after 1995. For most people, reliability and robustness are worth an inferior user experience and a bit of sluggishness. Nobody missed Windows 3.

Apple tried to write something better, but failed, and ended up buying NeXT Computer in 1996 for its Unix-based NeXTstep OS. Microsoft already had an escape plan -- to replace its DOS-based Windows 9x and get everyone using a newer, NT-based OS.

Acorn didn't. It was working on another ill-fated all-singing, all-dancing replacement OS, Galileo, but like ARX, it was too ambitious and was never finished. I've speculated about what might have happened if Acorn did a deal with Be for BeOS on this blog before, but it would never have happened while Acorn was dreaming of Galileo.

So Acorn kept working on RISC OS alongside its next-gen RISC PC, codenamed Phoebe: a machine with PCI slots and the ability to take two CPUs -- not that RISC OS could use more than one. It added support for larger hard disks, built in video encoding and decoding, and added some other nice features, but it was an incremental improvement at best.

Meanwhile, RISC OS had found another, but equally doomed, niche: the ill-fated Network Computer initiative. NCs were an idea before their time: thin, lightweight, simple computers with no hard disk, but always-on internet access. Programs wouldn't -- couldn't -- get installed locally: they'd just load over the Internet. (Something like a ChromeBook with web apps, 20 years later, but with standalone programs.) The Java cross-platform language was ideal for this. For this, Acorn licensed RISC OS to Pace, a UK company that made satellite and cable-TV set-top boxes.

Acorn's NC was one of the most complete and functional, although other companies tried, including DEC, Sun and Corel. The Acorn NC ran NCOS, based on, but incompatible with, RISC OS. Sadly, the NC idea was ahead of its time -- this was before broadband internet was common, and it just wasn't viable on dial-up.

Acorn finally acknowledged reality and shut down its workstation division in 1998, cancelling the Phoebe computer after production of the cases had begun. Its ARM division went on to become huge, and the other bits were sold off and disappeared. The unfinished RISC OS 4 was sold off to a company called RISC OS Ltd. (ROL), who finished it and sold it as an upgrade for existing Acorn owners. Today, it's owned by 3QD, the company behind the commercial Virtual Acorn emulator.

A different company, Castle Technology, continued making and selling some old Acorn models, until 2002 when it surprised the RISC OS world with a completely new machine: the Iyonix. It had proved impossible to make new ARM RISC OS machines, because RISC OS ran in 26-bit mode, and modern ARM chips no longer supported this. Everyone had forgotten the Pace NC effort, but Castle licensed Pace's fork of RISC OS and used it to create a new, 32-bit version for a 600MHz Intel ARM chip. It couldn't directly run old 26-bit apps, but it was quite easy to rewrite them for the new, 32-bit OS.

The RISC OS market began to flourish again in a modest way, selling fast, modern RISC OS machines to old RISC OS enthusiasts. Some companies still used RISC OS as well, and rumour said that a large ongoing order for thousands of units from a secret buyer was what made this worthwhile for Castle.

ROL, meantime, was very unhappy. It thought it had exclusive rights to RISC OS, because everyone had forgotten that Pace had a license too. I attempted to interview its proprietor, Paul Middleton, but he was not interested in cooperating.

Meantime, RISC OS Ltd continued modernising and improving the 26-bit RISC-OS-4-based branch of the family, and selling upgrades to owners of old Acorn machines.

So by early in the 21st century, there were two separate forks of RISC OS:

  • ROL's edition, derived from Acorn's unfinished RISC OS 4, marketed as Select, Adjust and finally "RISC OS SIX", running on 26-bit machines, with a lot of work done on modularising the codebase and adding a Hardware Abstraction Layer to make it easier to move to different hardware. This is what you get with VirtualAcorn.

  • And Castle's edition, marketed as RISC OS 5, for modern 32-bit-only ARM computers, based on Pace's branch as used to create NCOS. This is the basis of RISC OS Open and thus RISC OS Direct.

When Castle was winding down its operations selling ARM hardware, it opened up the source code to RISC OS 5 in the form of RISC OS Open (ROOL). It wasn't open source -- if you made improvements, you had to give them back to Castle Technology. However, this caused RISC OS development to speed up a little, and led to the version that runs on other ARM-based computers, such as the Raspberry Pi and BeagleBoard.

Both are still the same OS, though, with the same cooperative multitasking model. RISC OS does not have the features that make 1990s 32-bit OSes (such as OS/2 2, Windows NT, Apple Mac OS X, or the multiple competing varieties of Unix) more robust and stable: hardware-assisted memory management and memory protection, pre-emptive multitasking, support for multiple CPUs in one machine, and so on.

There are lightweight, simpler OSes that have these features -- the network-centric successor to Unix, called Plan 9, and its processor-independent successor, Inferno; the open-source Unix-like microkernel OS, Minix 3; the commercial microkernel OS, QNX, which was nearly the basis for a next-generation Amiga and was the basis of the next-generation Blackberry smartphones; the open-source successor to BeOS, Haiku; Pascal creator Niklaus Wirth's final project, Oberon, and its multiprocessor-capable successor A2/Bluebottle -- which ironically is pretty much exactly what Acorn ARX set out to be.

In recent years, RISC OS has gained some more minor modern features. It can talk to USB devices. It speaks Internet protocol and can connect to the Web. (But there's no free Wifi stack, so you need to use a cable. It can't talk Bluetooth, either.) It can handle up to 2GB of memory -- four thousand times more than my first Archimedes.

Some particular versions or products have had other niceties. The proprietary Geminus allowed you to use multiple monitors at once. Aemulor allows 32-bit computers to run some 26-bit apps. The Viewfinder add-on adaptor allowed RISC PCs to use ATI AGP graphics cards from PCs, with graphics acceleration. The inexpensive PineBook laptop has Wifi support under RISC OS.

But these are small things. Overcoming the limitations of RISC OS would be a lot more difficult. For instance, Niall Douglas implemented a pre-emptive multitasking system for RISC OS. As the module that implements cooperative multitasking is called the WIMP, he called his Wimp2. It's still out there, but it has drawbacks -- the issues are discussed here.

And the big thing that RISC OS has is legacy. It has some 35 years of history, meaning many thousands of loyal users, and hundreds of applications, including productivity apps, scientific, educational and artistic tools, internet tools, games, and more.

Sibelius, generally regarded as the best tool in the world for scoring and editing sheet music, started out as a RISC OS app.

People have a lot of investment in RISC OS. If you have been using a RISC OS app for three decades to manage your email, or build 3D models, or write or draw or paint or edit photos, or you've been developing your own software in BBC BASIC -- well, that means you're probably quite old by now, and you probably don't want to change.

There are enough such users to keep paying for RISC OS to keep a small market going, offering incremental improvements.

But while adding wifi, or Bluetooth, or multi-monitor graphics acceleration, or hardware-accelerated video encoding or decoding would be relatively easy to do -- if someone can raise the money to pay the programmers -- it still leaves you with a 1980s OS design:

  • No pre-emptive multitasking

  • No memory protection or hardware-assisted memory management

  • No multi-threading or multiple CPU support

  • No virtual memory, although that's less important as a £50 computer now has four times more RAM than RISC OS can support.

Small, fast, pleasant to use -- but with a list of disadvantages to match:

  • Unable to take full advantage of modern hardware.

  • Unreliable -- especially under heavy load.

  • Unable to scale up to more processors or more memory.

The problem is the same one that Commodore and Atari faced in the 1990s. To make a small, fast OS for an inexpensive computer with not much memory, no hard disk, and a single CPU with no fancy features, you have to do a lot of low-level work, close to the metal. You need to write a closely-integrated piece of software, much of it in assembly language, which is tightly coupled to the hardware it was built for.

The result is something way smaller and faster than big lumbering modular PC operating systems, which have to work with a huge variety of hardware from hundreds of different companies -- so the OS is not closely integrated with the hardware. But conversely, that big modular design has advantages, too: because it is adaptable to new devices, as the hardware improves, the OS can improve with it.

So when you ran Windows 3 on a 386 PC with 4MB of RAM -- a big deal in 1990! -- it could use the 386 processor's hardware virtualisation of the 8086 (Virtual 86 mode) to pretend to be 2, 3 or 4 separate DOS PCs at the same time -- so you could keep your DOS apps when you moved to Windows. They didn't look or feel like Windows apps, but you already knew how to use them and you could still access all your data and continue to work with it.

Then when you got a 486 in 1995 (or a Pentium with Windows NT if you were rich) it could pretend to be multiple 386 computers running separate copies of 16-bit Windows as well as still running those DOS apps. And it could dial into the Internet using new 32-bit apps, too. By the turn of the century, it could use broadband -- the apps didn't know any difference, as it was all virtualised. Everything just went faster.

Six or seven years after that, your PC could have multiple cores, and your multiple 32-bit apps could be divided up and run across two or even four cores, each one at full speed, as if it had the computer to itself. Then a few years later, you could get a new 64-bit PC with 64-bit Windows, which could still pretend to be a 32-bit PC for 32-bit apps.

When these things started to appear in the 1990s, the smaller OSes that were more tightly-integrated with their hardware couldn't be adapted so easily when that hardware changed. When more capable 68000-series processors appeared, such as the 68030 with built-in memory management, Atari's TOS, Commodore's AmigaOS and Apple's MacOS couldn't use it. They could only use the new CPU as a faster 68000.

This is the trap that RISC OS is in. Amazingly, by being a small fish in a very small pond -- and thanks to Castle's mysterious one big customer -- it has survived into its fourth decade. The only other end-user OS to survive since then has been NeXTstep, or macOS as it's now called, and it's had a total facelift and does not resemble its 1980s incarnation at all: a 32-bit 68030 OS became a PowerPC OS, which became an Intel 32-bit x86 OS, which became a 64-bit x86 OS and will soon be a 64-bit ARM OS. No 1980s or 1990s NeXTstep software can run on macOS today.

When ARM chips went 32-bit only, RISC OS needed an extensive rewrite, and all the 26-bit apps stopped working. Now, ARM chips are 64-bit, and soon, the high-end models will drop 32-bit support altogether.

As Wimp2 showed, if RISC OS's multitasking module was replaced with a pre-emptive one, a lot of existing apps would stop working.

AmigaOS is now owned by a company called Hyperion, who have ported it to PowerPC -- although there aren't many PowerPC chips around any more.

It's too late for virtual memory, and we don't really need it any more -- but the programming methods that allow virtual memory, letting programs spill over onto disk if the OS runs low on memory, are the same as those that enforce the protection of each program's RAM from all other programs.

Just like Apple did in the late 1990s, Hyperion have discovered that if they rewrite their OS to take advantage of PowerPC chips' hardware memory-protection, then it breaks all the existing apps whose programmers assumed that they could just read and write whatever memory they wanted. That's how Amiga apps communicate with the OS -- it's what made AmigaOS so small and fast. There are no barriers between programs -- so when one program crashes, they all crash.

The same applies to RISC OS -- although it does some clever trickery to hide programs' memory from each other, they can all see the memory that belongs to the OS itself. Change that, and all existing programs stop working.

To make RISC OS able to take advantage of multiple processors, the core OS itself needs an extensive rewrite to allow all its modules to be re-entrant -- that is, for different apps running on different cores to be able to call the same OS modules at the same time and for it to work. The problem is that the design of the RISC OS kernel dates back to about 1981 and a single eight-bit 6502 processor. The assumption that there's only one processor doing one thing at a time is deeply written into it.

That can be changed, certainly -- but it's a lot of work, because the original design never allowed for this. And once again, all existing programs will have to be rewritten to work with the new design.
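To make the re-entrancy point concrete, here is a deliberately silly Python sketch -- nothing to do with the real RISC OS module API -- of the difference between code written on the "there is only ever one caller" assumption and code that tolerates several callers at once:

```python
import threading

# The 1981-style assumption: one global scratch buffer, because there is only ever one caller.
scratch = []

def render_name_unsafe(name):
    scratch.clear()                        # two callers at once would trample each other here
    scratch.extend(name.upper())
    return "".join(scratch)

print(render_name_unsafe("readme"))        # fine -- as long as there is only ever one caller

# The re-entrant alternatives: keep state per call, or guard shared state with a lock.
lock = threading.Lock()
counter = {"calls": 0}

def render_name_reentrant(name):
    local = list(name.upper())             # per-call state lives on this call's own stack
    return "".join(local)

def bump_shared_counter():
    with lock:                             # where shared state is unavoidable, serialise access
        counter["calls"] += 1

threads = [threading.Thread(target=bump_shared_counter) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(render_name_reentrant("readme"), counter)   # README {'calls': 8}
```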

Linux, to pick an example, focuses on source code compatibility. Since it's open source, and all its apps are open source, then if you get a new CPU, you just recompile all your code for the new chip. Linux on a PowerPC computer can't run x86 software, and Linux on an ARM computer can't run PowerPC software. And Linux on a 64-bit x86 computer doesn't natively support 32-bit software, although the support can be added. If you try to run a commercial, proprietary, closed-source Linux program from 15 or 20 years ago on a modern Linux, it won't even install and definitely won't function -- because all the supporting libraries and modules have slowly changed over that time.

Windows does this very well, because Microsoft have spent tens of billions of dollars on tens of thousands of programmers, writing emulation layers to run 16-bit code on 32-bit Windows, and 32-bit code on 64-bit Windows. Windows embeds layers of virtualisation to ensure that as much old code as possible will still work -- only when 64-bit Vista arrived in 2006 did Windows finally drop support for DOS programs from the early 1980s. Today, Windows on ARM computers emulates an x86 chip so that PC programs will still work.

In contrast, every few versions of macOS, Apple removes any superseded code. The first x86 version of what was then called Mac OS X was 10.4, which was also the last version that ran Classic MacOS apps. By version 10.6, OS X no longer ran on PowerPC Macs, and OS X 10.7 no longer ran PowerPC apps. OS X 10.8 only ran on 64-bit Macs, and 10.15 won't run 32-bit apps.

This allows Apple to keep the OS relatively small and manageable, whereas Microsoft is struggling to maintain the vast Windows codebase. When Windows 10 came out, Microsoft announced that 10 would be the last-ever major new version of Windows.

It would be possible to rewrite RISC OS to give it pre-emptive multitasking -- but either all existing apps would need to be rewritten, or it would need to incorporate some kind of emulator, like Aemulor, to run old apps on the new OS.

Pre-emptive multitasking -- which is a little slower -- would make multi-threading a little easier, which in turn would allow multi-core support. But that would need existing apps to be rewritten to use multiple threads, which allows them to use more than one CPU core at once. Old apps might still work, but not get any faster -- you could just run as many as you have CPU cores side-by-side with only a small drop in speed.

Then a rewrite of RISC OS for 64-bit ARM chips would require a 32-bit emulation layer for old apps to run -- and very slowly at that, when ARM chips no longer execute 32-bit code directly. A software emulation of 32-bit ARM would be needed, with perhaps a 10x performance drop.

All this, on a codebase that was never intended to allow such things, and done by a tiny crew of volunteers. It will take many years. Each new version will inevitably lose some older software which will stop working. And each year, some of those old enthusiasts who are willing to spend money on it will die. I got my first RISC OS machine in 1989, when I was 21. I'm 52 now. People who came across from the previous generation of Acorn computers, the BBC Micro, are often in their sixties.

Once the older users retire, who will spend money on this? Why would you, when you can use Linux, which does far more and is free? Yes, it's slower and it needs a lot more memory -- but my main laptop is from 2011, cost me £129 second-hand in 2017, and is fast and reliable in use.

To quote an old joke:
"A traveller stops to ask a farmer the way to a small village. The farmer thinks for a while and then says "If you want to go there I would not start from here."

There are alternative approaches. Linux is one. There's already a RISC OS-like desktop for Linux: it's called ROX Desktop, and it's very small and fast. It needs a bit of an update, but nothing huge.

ROX has its own RISC OS-style single-directory applications, like RISC OS's !Apps, with an associated installation system called 0install -- but this never caught on. However, there are others -- my personal favourite is called AppImage, but there are also Snap apps and Flatpak. Supporting all of them is perfectly doable.

There is also an incomplete tool for running RISC OS apps on Linux, called ROLF... and a project to run RISC OS itself as an app under Linux.

Not all Linux distributions have the complicated Linux directory layout -- one of my favourites is GoboLinux, which has a much simpler, Mac-like layout.

It would be possible to put together a Linux distribution for ARM computers which looked and worked like RISC OS, had a simple directory layout like RISC OS, including applications packaged as single files, and which, with some work, could run existing RISC OS apps.

No, it wouldn't be small and fast like RISC OS -- it would be nearly as big and slow as any other Linux distro, just much more familiar for RISC OS users. This is apparently good enough for all the many customers of Virtual Acorn, who run RISC OS on top of Windows.

But it would be a lot easier to do than the massive rewrite of RISC OS needed to bring it up to par with other 21st century OSes -- and which would result in a bigger, slower, more complex RISC OS anyway.

The other approach would be to ignore Linux and start over with a clean sheet. Adopt an existing open-source operating system, modify it to look and work more like RISC OS, and write some kind of emulator for existing applications.

My personal preference would be A2/Bluebottle, which is the step-child of what Acorn originally wanted as the OS for the Archimedes. It would need a considerable amount of work, but Professor Wirth designed the system to be tiny, simple and easy to understand. It's written in a language that resembles Delphi. It's still used for teaching students at ETH Zürich, and is very highly-regarded [PDF] in academic circles.

It would be a big job -- but not as big a job as rewriting RISC OS...
liam_on_linux: (Default)
In a work chat, this gods-awful article about "What does a DevOps Engineer do?" came up.

I thought I'd comment on a few of the howlers, but it grew a bit...

> As we all know the days of sysadmins who applies specialized skills to tune individual servers is over.

No, not at all.

> Work is all about Automation

All? No. It's a part, no more.

> explaining a daily routine

*The* daily routine

> become a DevOps Engineer.

DevOps Engineers.

> A System Administrator is supposed to build, manage, and troubleshoot servers on a regular basis.

And clients, and networking, and switches/firewalls/routers, etc. You missed out about 75% of the job.

> Become master in deploying virtualization

Become *a* master

> A DevOps Engineer must know the IT network and storage concepts.

A lot more than just knowing the concepts. They need to know the implementation and management, as well.

> Puppet, Chef, Jenkins, Salt, Ansible, Kubernetes, Docker, Prometheus, Cloud Computing and storage platform, and Infrastructure as a Code (Terraform).

This is a bizarre mixture of automation, deployment and testing tools, concepts, ideas and methods. These things do not belong together.

> But nothing to worry about here.

*There is* nothing to worry about.

Only I think that's wrong, too. There's plenty.

> For e.g. Python

"e.g." means "for example"; you don't need "for" as well, and it shouldn't have a capital mid-sentence.

> bridge between the development and operations teams.

What you're trying to describe is someone who _is_ the ops team, or part of it. Not a bridge; the role you are talking about _replaces_ ops. (Be that a good idea or not. Hint: it's not.)

> As we know in today’s time

"As we know today" -- only we do not. I don't think this is true.

> everything is automatized

*Automated

> including the server triggering

No "the".

> These days we hear a new term “DevSecOps” right!

No, we don't.

> A DevOps personnel

You can't have *a* personnel.

And knowledge and awareness of security is part of *everyone's* role now, not just in dev, ops, or devops.

> Now you will say why a DevOps person needs to know the testing?

"Now, you might say why should a DevOps person need to know about testing?"

> Nothing to worry here again You

Missing full stop.

> can always got to your Development and Testing teams to address the errors if any in their application.

This sounds like passing the buck -- if you don't know why your continuous integration (or whatever) tooling isn't working, ask someone else to fix it.

It is very important to know your limits. To know what you know and what you don't know, to be able and unafraid to SAY "I don't know" or "I can't do that"...

This is a core life skill, not just a core DevOps skill. But it also has a necessary coda: when you say you don't know, you need to be able to say "... but I know how to find out." Knowing what you don't know, being able to say that you don't know it, knowing who to ask or where to look -- and being able to say that, too.

This is so important, it needs to go at the top.

Big scary list of platforms, then being confident enough to say "I don't know."

This is how I know if someone is good or not: whether, when encountering something new, they say "I don't know".

Anyone who never ever says "I don't know" is no good at their job and should not have that job.

I do not hear you saying you don't know -- about anything.

> Infra setups (either over Cloud or Bare Metal or VM’s)

Never ever make a plural with an apostrophe, under any circumstances. No exceptions.

> Version Control support (GIT, SVN ..)

An ellipsis is three dots, no more, no less. No exceptions. If it is ending a sentence or a list, no space before it.

> Checking Email

That's everyone with a job. Omit the irrelevant. Your readers' time is precious; don't waste it.

> Checking JIRA / Any ticketing tool for pending/scheduled tasks.

Any ticketing tool? (And why is it capitalised?) Don't you mean _your_ ticketing tool or tools?

Scheduled tasks? I thought you said it was all about automation now? If it can be scheduled, shouldn't it be automated?

> Clear Notifications of alerting system.

You uSe an awFul loT of RanDom Capital letters, do you Know that? Why?

> Ensure if any new server is created and monitoring has been set up on that.

Check for new servers every day? Really? This just happens without you, does it?

Just one? You use the singular, not the plural, here, so you must mean just one of them.

> Verify if all the service running on that server are covered under the monitoring system.

All the service? Don't you mean all the services? Did you proofread this before you posted it?

Top tip to aspiring DevOps people: CHECK YOUR WORK. If you are not sure -- for example, if you are not working in your native language or a language you know very well -- get a native speaker to check before pushing to prod.

> Check and automate if any server is running out of disk. Taking a backup of instance and restoring if required.

If any *servers* ARE. How will you automate adding disk space? Is that possible? Why are you picking out and highlighting this one task?

> Taking a backup of Prod DB and providing that DB to developers on Staging / Testing Environment for testing of any issue.

Any issue? You mean any *issues.* Attention to detail! Like you are not showing here.

Do you test on a copy of the production database? Really? Is that not a bit big?

> Automation setup for daily tasks like (DB/Instance/Logs/Config-Files) backup.

Don't use "like" when you mean "such as". They are not the same. Precision is important, too.

> In case of new project setting up new Jenkins Job. ( Freestyle / Pipeline).

Freestyle, like your punctuation?

> Making config changes on servers using (Ansible /chef/ puppet).

Don't put things in parentheses if you are being specific. Be consistent in capitalisation.

Actually, be consistent in general. You're not.

> Writing playbooks for automating daily tasks.

Playbooks? Is that not specific to one tool? Why not be general? Always be more general if it is applicable. DevOps is all about generalisation instead of specialisation. Is automation of repetitive tasks a daily task? I would think automation is about *eliminating* daily tasks.

> Deploying code on Development and Production servers.

Radically different tasks which you are combining. That sets off alarm bells for me.

> Ensuring that post-deployment sanity of code is done and proper sign-offs are given.

"Sanity" and "sanity checking" are not the same thing, you know.

> Providing assistance during Audits.

Audits? Every day? Remember these are daily tasks.

Being able to clearly distinguish between daily tasks and exceptional ones is a core skill, one you appear to lack.

> Ensuring that access on servers are given to required users only that’s too after proper approval

Clear separation of responsibilities and roles, you mean. Not lumping them together... for example, like you are lumping together three separate sentences here.

> Don’t be scared guys

Sexist language.

> as once you get hold of all these things your life will become easier and you will start loving it.

It is not necessary to love one's job.

> Another fact is this is one of THE MOST HIGHLY PAID SKILLSET

Highly paid *skillsets*. Remember to keep singulars and plurals separate. Separation of tasks, a key DevOps skill.

> out there in market.

The market. Or markets, as there is more than one.

Again, failure to pay attention to apparently small details, like capitaLisaTion, disqualifies anyone from working in admin or development or ops roles. Capitalisation is vital on Unix systems, for instance.

Always remember -- your purpose in life may be to serve as an example to others of what NOT to do.

2/10, would not hire.
liam_on_linux: (Default)
[Repurposed from a Reddit comment here.]

Not everyone hates "jaggies" with a burning passion, or loves anti-aliasing. Well-designed GUIs for relatively low-res displays, even monochrome ones, can and did look great, and arguably, the fact that modern GUIs tend to need antialiasing, truecolour, transparency, scalable vector icons and so on is merely a sign that the former attention to detail in design has been lost.

Some of us miss the way 1990s desktops look.

Different display standards have different needs for good reasons.

Moving images are different to static ones. High colour depth has different needs from intermediate colour depth, which in turn has different needs to low colour depth -- my first computer had 8 colours, total, plus a Bright setting, and a resolution of 256*192 pixels.

The garish colour scheme of the original AmigaOS 1.x desktop -- white, blue and orange -- was chosen to deliver high contrast on an NTSC TV set, so it would be legible, because most owners of the early Amiga computers couldn't afford monitors.

The original Mac had 512*342 pixels, in monochrome, so every pixel in every icon was hand-picked and positioned for clarity. In situations like these, you very definitely do not want anti-aliasing even if it's possible (which of course it is not, in monochrome).

But by MacOS 8 and 9, most Macs could display 16-bit or 24-bit colour, so the desktop used it if you had it -- within the constraints of the same OS design, because the same OS could still run on machines with a mono screen.

The rounded rectangles of the original classic MacOS design were a feature that Steve Jobs insisted upon -- they were a hallmark of its design.

Look through a gallery of OS GUI design over the ages and see how it developed:

Compare and contrast the frankly clunky visual design of Smalltalk, the environment that inspired Apple's Lisa and Mac, with Susan Kare's hand-drawn mono icons. Each one is a miniature masterpiece, and she is justly famed for them.

The elegant pinstripes and bevels of Apple's Platinum theme in MacOS 8 and 9 are widely held to be the acme of traditional GUI design, from an age before flat screens, before graphics accelerators, before universal "true colour" on multi-megapixel hi-res screens, let alone HD or 4K screens.

Personally, I found the clear crisp greenscreen fonts of early IBM MDA screens, and even DEC VT-series terminals, on long-persistence phosphors for less flicker, more restful on my eyes than the glaring white Retina screen of the 27" iMac I'm looking at right now.

I gazed at those for 8+ hours a day without eyestrain.

Colour screens were a step backwards from those for text-editing or coding.

So, no. Not everyone hates jaggies as much as you do. You will find plenty of people desperately trying to get non-anti-aliased fonts for programming under Linux, e.g.

https://www.reddit.com/r/unixporn/comments/49jjky/how_do_i_get_noantialiased_fonts_on_my_terminal/

https://superuser.com/questions/130267/how-can-i-turn-off-font-antialiasing-only-for-gnome-terminal-but-not-for-other

https://geoff.greer.fm/2017/06/14/gnome-terminal-antialiasing-saga/

The person to whom I was replying said that "everyone hates jaggies" and "everyone prefers a smooth, anti-aliased display."

For clarity: while I am disagreeing with this, I am not saying it is only them. I am merely saying that not everyone likes or wants anti-aliasing. Some of us like clear, crisp, sharp graphics or text. I like CRTs. I like monochrome screens, especially monochrome CRTs. I love using classic Macs partly for this reason. Some just want to turn their 2560*1080 TFT into a giant tiled session of Vim and shell instances, and want it as crisp as can be.
liam_on_linux: (Default)
I tried to leave a helpful, constructive answer to this interesting blog post:
https://www.forsure.dev/-/2020/05/19/640-kilobytes-of-ram-and-why-i-bought-an-ibm-5160/

In case it helps, there are a few things that you could fix or improve on this machine. Please feel free to contact me if you would like more explanation.

> No HISTORY. You can repeat the last command by pressing the right-arrow.

This is incorrect. You say that you have IBM PC DOS 5. If so, this includes the DOSKEY command. This will give you a command-line history with editing. Just type `dos\doskey` to load it.

> For a starters, on IBM DOS (version 5.0) there is no $PATH.

There certainly should be! DOS has two configuration files, which live in the root directory of the boot drive (A: or C:). They are called [1] CONFIG.SYS and [2] AUTOEXEC.BAT. In the second, there should be a line:
PATH=C:\DOS;C:\
If you don't have them, email me and I can help you write some. I am easy to find on Google.
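To make that concrete, here is a minimal sketch of the two files for PC DOS 5 installed in C:\DOS -- the exact values are just sensible defaults, not gospel:

CONFIG.SYS:
REM sensible starting values for DOS 5 on an XT
FILES=20
BUFFERS=20

AUTOEXEC.BAT:
@ECHO OFF
PATH=C:\DOS;C:\
PROMPT $P$G
REM load the DOSKEY command-line history/editing TSR mentioned above
C:\DOS\DOSKEY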

> Trying to exit QBASIC. Epic fail

That is *not* QBASIC; QBASIC has a GUI. You were in either BASICA or GWBASIC. The command to quit is `system`, if I remember correctly after 30 years.

> but there is no scrolling

Yes there is. Type `dir /p` for page-by-page. `dir /w` gives a wide listing. You can combine these: `dir /w /p`. You can also do `dir | more`.

> the monitor only is 25 lines.

This depends on the graphics card. If you have an MDA card, no, 25 lines is all you get. Try `mode con: lines=43` (needs at least an EGA card) or `mode con: lines=50` (VGA only), and you will need the ANSI.SYS driver loaded from CONFIG.SYS for this to work.
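Assuming DOS lives in C:\DOS and you have fitted a VGA card, that means one line in CONFIG.SYS:

DEVICE=C:\DOS\ANSI.SYS

then, after a reboot, at the prompt:

MODE CON: LINES=50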

> wppreview, I totally miss the point of this program.

It is not part of DOS. Sounds like a WordPerfect preview program for use with mailmerge.

> I will have to remap my function key in i3, because I am currently using the windows key for this.

It is easy to remap CapsLock to be a “Windows” (Super) key. This is how I use my IBM Model M in Linux. I suggest `xmodmap`.
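As a sketch (exact keysym names can vary with your keyboard layout), a ~/.Xmodmap along these lines does the job:

! stop CapsLock acting as a lock, and make it an extra Super (mod4) key
remove Lock = Caps_Lock
keysym Caps_Lock = Super_L
add mod4 = Super_L

Load it with `xmodmap ~/.Xmodmap` (from .xinitrc or your i3 config, for example) and i3 can then treat CapsLock as its Mod4 modifier.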

> Besides that, I found this great archive with manuals and bootdisks and even PC DOS 5.02.

If you are willing to change the DOS version, I suggest DR DOS 3.41. The reason is this: MS/PC DOS 5, 6 & later are designed for 386 memory management. This is impossible on an 8088 chip, and as a result, you will have very little free memory. Many DOS programs won’t work.

DR-DOS is a better 3rd party clone of DOS, by the company that wrote the original OS (CP/M) that MS-DOS was ripped off from. The first version is 3.41 (before that it had different names) and it is far more memory-efficient.

https://winworldpc.com/product/dr-dos/3x

But if you want to stay with an IBM original DOS, then IBM developed PC DOS all the way to version 7.1, which supports EIDE hard disks over 8GB, FAT32 and some other nice features. It is a free download.

I have described how to get it here:
https://liam-on-linux.livejournal.com/59703.html

PC DOS 7 is a bit strange; IBM removed Microsoft's GUI editor and replaced it with an OS/2-derived one called E, which has a weird UI. IBM also removed Microsoft's BASIC and replaced it with the Rexx scripting language.

Personally, I combine bits of PC-DOS 7.1 with Microsoft’s editor, Microsoft’s diagnostics, Scandisk disk-repair tool and some other bits, but that is more than I can cover in a comment!

There is a lot you can do to upgrade a 5160 if you wish. Here is a crazy example:

https://sites.google.com/site/misterzeropage/

I would not go that far, but with a VGA card, a VGA CRT, a serial mouse and an XTIDE card with a CF card in it, it would be a lot easier to use…
