liam_on_linux: (Default)
I stumbled across an old article of mine earlier, and tweeted it. Sadly, the server seems to have noticed and slapped a paywall onto it. So, on the basis that I wrote the bally thing anyway, here's a copy of the text for posterity, grabbed from Google's cache. Typos left from the original.

FEATURE - Server integration - Window onto Unix

If you want to access a Unix box from a Windows PC, you might feel that the world is against you. Although Windows wasn't designed with Unix integration in mind there is still a range of third-party products that can help. Liam Proven takes you through a selection of the better-known offerings.

10 March 1998

Although Intel PCs running some variant of Microsoft Windows dominate the desktop today, Unix remains strong as a platform for servers and some high-end graphics workstations. While there's something to be said in favour of desktop Unix in cost-of-ownership terms, it's generally far cheaper to equip users with commodity Windows PCs than either Unix workstations or individual licences for the commercial Unix offerings, such as Sun's Solaris or SCO's products, that run on Intel PCs.

The problem is that Windows was not designed with Unix integration as a primary concern. Granted, the latest 32-bit versions are provided with integrated Internet access in the form of TCP/IP stacks and a web browser, but for many businesses, a browser isn't enough.

These power users need more serious forms of connectivity: access to Unix server file systems, text-based applications and graphical Unix programs.

These needs are best met by additional third-party products. Most Unix vendors offer a range of solutions, too many to list here, so what follows is a selection of the better-known offerings.

Open access

In the 'Open Systems' world, there is a single, established standard for sharing files and disks across Lans: Network File System (NFS). This has superseded the cumbersome File Transfer Protocol (FTP) method, which today is mainly limited to remote use, for instance in Internet file transfers.

Although, as with many things Unix, it originated with Sun, NFS is now the de facto standard, used by all Unix vendors. In contrast to FTP, NFS allows a client to mount part of a remote server's filesystem as if it were a local volume, giving transparent access to any program.
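
As a quick illustration -- the server name and export path here are invented, not taken from any of the products below -- mounting an NFS export on a Unix client looks something like this:

# mount the export /export/home from the server 'unixhost' onto /mnt/home
mount -t nfs unixhost:/export/home /mnt/home

# thereafter, ordinary programs see it as a local directory
ls /mnt/home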

It should come as no surprise that no version of Windows has built-in NFS support, either as a client or a server. Indeed, Microsoft promotes its own system as an alternative to NFS under the name of CIFS. Still, Microsoft does include FTP clients with its TCP/IP stacks, and NT Server even includes an FTP server. Additionally, both Windows 95 and NT can print to Unix print queues managed by the standard LPD service.

It is reasonably simple to add NFS client support to a small group of Windows PCs. Probably the best-regarded package is Hummingbird's Maestro (formerly from Beame & Whiteside), a suite of TCP/IP tools for Windows NT and 95. In addition to an NFS client, it also offers a variety of terminal emulations, including IBM 3270 and 5250, Telnet and an assortment of Internet tools. A number of versions are available including ones to run alongside or independently of Microsoft's TCP/IP stack. DOS and Windows 3 are also provided for.

There is also a separate NFS server to allow Unix machines to connect to Windows servers.

If there are a very large number of client machines, though, purchasing multiple licences for an NFS package might prove expensive, and it's more cost-effective to make the server capable of serving files using Windows standards. Effectively, this means the Server Message Block (SMB) protocol, the native 'language' of Microsoft's Lan Manager, as used in everything from Windows for Workgroups to NT Server.

Lan Manager - or, more euphemistically, LanMan - has been ported to run on a range of non-MS operating systems, too. All Microsoft networking is based on LanMan, so as far as any Windows PCs are concerned, any machine running LanMan is a file server: a SCO Unix machine running VisionFS, or a Digital Unix or OpenVMS machine running PathWorks. For Solaris systems, SunLink PC offers similar functionality.

It's completely transparent: without any additional client software, all network-aware versions of Windows (from Windows 3.1 for Workgroups onwards) can connect to the disks and printers on the server. For DOS and Windows 3.1 clients, there's even a free LanMan (Dos-based) client available from Microsoft. This can be downloaded from www.microsoft.com or found on the NT Server CD.

Samba in the server

So far, so good - as long as your Unix vendor offers a version of LanMan for its platform. If not, there is an alternative: Samba. This is a public domain SMB network client and server, available for virtually all Unix flavours. It's tried and tested, but traditionally-minded IT managers may still be biased against public domain software. Even so, Samba is worth a look; it's small and simple and works well. It only runs over TCP/IP, but this comes as standard with 32-bit Windows and is a free add-on for Windows 3. A Unix server with Samba installed appears in "Network Neighborhood" under Windows as another server, so use is completely transparent.
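
For the curious, a minimal share definition in Samba's smb.conf looks roughly like this -- the share name and path are invented for illustration:

[global]
   workgroup = WORKGROUP
   security = user

[data]
   path = /srv/data
   read only = no

Restart the smbd daemon and the share appears to Windows clients alongside the real LanMan servers.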

File and print access is fine if all you need to do is gain access to Unix data from Windows applications, but if you need to run Unix programs on Windows, it's not enough. Remote execution of applications is a built-in feature of the Unix operating system, and works in three basic ways.

The simplest is via the Unix commands rexec and rsh, which allow programs to be started on another machine across the network. However, for interactive use, the usual tools are Telnet, for text-terminal programs, and the X Window System (or X) for GUI applications.
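
For example (the host name is invented):

# run a single command on a remote machine
rsh unixhost uname -a

# or open an interactive text-mode session
telnet unixhost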

Telnet is essentially a terminal emulator that works across a TCP/IP network, allowing text-based programs to be used from anywhere on the network. A basic Telnet program is supplied free with all Windows TCP/IP stacks, but only offers basic PC ANSI emulation. Traditional text-based Unix applications tend to be designed for common text terminals such as the Digital VT220 or Wyse 60, and use screen controls and keyboard layouts specific to these devices, which the Microsoft Telnet program does not support.

A host of vendors supply more flexible terminal emulators with their TCP/IP stacks, including Hummingbird, FTP Software, NetManage and many others. Two specialists in this area are Pericom Software and J River.

Pericom's Teem range of terminal emulators is probably the most comprehensive, covering all major platforms and all major emulations. J River's ICE range is more specific, aiming to connect Windows PCs to Unix servers via TCP/IP or serial lines, providing terminal emulation, printing to Unix printers and easy file transfer.

Unix moved on from its text-only roots many years ago and modern Unix systems have graphical user interfaces much like those of Windows or the MacOS. The essential difference between these and the Unix GUI, though, is that X is split into two parts, client and server. Confusingly, these terms refer to the opposite ends of the network than in normal usage: the X server is the program that runs on the user's computer, displaying the user interface and accepting input, while the X client is the actual program code running on a Unix host computer.

The X factor

This means that all you need to allow PCs to run X applications is an X server for MS Windows - and these are plentiful. While Digital, Sun and other companies offer their own X servers, one of the best-regarded third-party offerings, Exceed, again comes from Hummingbird. With an MS Windows X server, users can log-in to Unix hosts and run any X-based application as if they were using a Unix workstation - including the standard X terminal emulator xterm, making X ideal for mixed graphical and character-based work.
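
In practice, once logged in to the Unix host, the user points applications at the PC's X server via the DISPLAY variable -- something like this, with 'pc17' standing in for the Windows machine running the X server:

# on the Unix host (sh/ksh syntax)
DISPLAY=pc17:0
export DISPLAY
xterm &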

The only drawback of using terminal emulators or MS Windows X servers for Unix host access is the same as that for using NFS: the need for multiple client licences. However, a radical new product from SCO changes all that.

The mating game

Tarantella is an "application broker": it shifts the burden of client emulation from the desktop to the server. In short, Tarantella uses Java to present a remote desktop or "webtop" to any client computer with a Java-capable web browser. From the webtop, the user can start any host-based application to which they have rights, and Tarantella downloads Java code to the client browser to provide the relevant interface - either a terminal emulator for character-based software or a Java X emulator for graphical software.

The host software can be running on the Tarantella server or any other host machine on the network, meaning that it supports most host platforms - including Citrix WinFrame and its variants, which means that Tarantella can supply Windows applications to all clients, too.

Tarantella is remarkably flexible, but it's early days yet - the first version only appeared four months ago. Currently, Tarantella is confined to running on SCO's own UnixWare, but versions are promised for all major Unix variants and Windows NT.

There are plenty of ways to integrate Windows and Unix environments, and it's a safe bet that whoever your Unix supplier is they will have an offering - but no single product will be perfect for everyone, and those described here deserve consideration. Tarantella attempts to be all things to all system administrators, but for now, only if they are running SCO. It's highly likely, though, that it is a pointer to the way things will go in the future.

USING WINDOWS FROM UNIX

There are a host of solutions available for accessing Unix servers from Windows PCs. Rather fewer go the other way, allowing Unix users to use Windows applications or data stored on Windows servers.

For file-sharing, it's easiest to point out that the various solutions outlined in the main article for accessing Unix file systems from Windows will happily work both ways. Once a Windows machine has access to a Unix disk volume, it can place information on to that volume as easily as it can take it off.

For regular transfers, or those under control of the Unix system, NFS or Samba again provide the answer. Samba is both a client and a server, and Windows for Workgroups, Windows 95 and Windows NT all offer server functionality.

Although a Unix machine can't access the hard disk of a Windows box which is only running an NFS client, most NFS vendors also offer separate NFS servers for Windows. It would be unwise, at the very least, to use Windows 3 or Windows 95 as a file server, so this can reasonably be considered to apply mainly to PCs running Windows NT.

Here, the licensing restrictions on NT come into play. NT Workstation is only licensed for 10 simultaneous incoming client connections, so even if the NFS server is not so restricted, allowing more than this violates Microsoft's licence agreement. Different versions of NT Server allow different numbers of clients, and additional licences are readily available from Microsoft, although versions 3.x and 4 of NT Server do not actually limit connections to the licensed number.

There are two routes to running Windows applications on Unix workstations: emulating Windows itself on the workstation, or adding a multi-user version of Windows NT to the Unix network.

Because there are so many applications for DOS and Windows compared to those for all other operating system platforms put together, several companies have developed ways to run Windows, or Windows programs, under Unix. The simplest and most compatible method is to write a Unix program which emulates a complete Intel PC, and then run an actual copy of Windows on the emulator.

This has been done by UK company Insignia, whose SoftWindows was developed with assistance from Microsoft itself. SoftWindows runs on several Unix architectures including Solaris, IRIX, AIX and HP-UX (as well as the Apple Macintosh), and when running on a powerful workstation is very usable.

A different approach was tried by Sun with Wabi. Wabi once stood for "Windows Application Binary Interface", but for legal reasons, this was changed, and now the name doesn't stand for anything. Wabi translates Windows API calls into their Unix equivalents, and emulates an Intel 386 processor for use on RISC systems. This enables certain 16-bit Windows applications, including the major office suites, to run under Unix, without requiring an actual copy of Microsoft Windows. However, it isn't guaranteed to run any Windows application, and partly due to legal pressure from Microsoft, development was halted after the 16-bit edition was released.

It's still on sale, and versions exist for Sun Solaris, SCO Unix and Caldera OpenLinux.

Both these approaches are best suited to a small number of users who don't require high Windows performance. For many users and high-performance, Insignia's NTrigue or Tektronix' WinDD may be better answers. Both are based on Citrix WinFrame, which is a version of Windows NT Server 3.51 licensed from Microsoft and adapted to allow true multi-user access. While WinFrame itself uses the proprietary ICA protocol to communicate with clients, NTrigue and WinDD support standard X Windows, allowing Unix users to log-in to a PC server and remotely run 32-bit Windows software natively on Intel hardware.
liam_on_linux: (Default)
[A friend asked why, if Lisp was so great, it never got a look-in when Ada was designed.]

My impression is that it’s above all else cultural.

There have long been multiple warring factions depending on deeply-felt beliefs about how computing should be done. EBCDIC versus ASCII, RISC vs CISC, C vs Pascal, etc. Now it’s mostly sorted inasmuch as we all use Unix-like OSes — the only important exception, Windows, is becoming more Unix-like — and other languages etc. are layered on top.

But it goes deeper than, e.g., C vs Pascal, or BASIC or Fortran or whatever. There is the imperative vs functional camp. Another is algebraic expressions versus non-algebraic: i.e. prefix or postfix (stack-oriented RPN), or something Other such as APL/I/J/A+; manual memory management versus automatic with GC; strongly versus weakly typed (and arguably sub-battles such as manifest versus inferred/duck typing, static vs dynamic, etc.)

Mostly, the wars settled on: imperative; algebraic (infix) notation; manual memory management for system-level code and for externally-distributed code (commercial or FOSS), and GC Pascal-style languages for a lot of internal corporate s/w development (Delphi, VB, etc.).

FP, non-algebraic notation and things like them were thus sidelined for decades, but are now coming back layered on top of complex OSes written in C-like languages. This is an era of proliferation in dynamic, interpreted or JITted languages used for specific niche tasks, running on top of umpteen layers of GP OS. Examples range across Javascript, Perl 6, Python, Julia, Clojure, Ruby and tons more.

Meanwhile, new safer members of the broader C family of compiled languages, such as Rust and Go, and stretching a point Swift, are getting attention for more performance-critical app programming.

All the camps have strong arguments. There are no single right or wrong answers. However, cultural pressure and uniformity mean that outside of certain niches, we have several large camps or groups. (Of course, individual people can belong to more than one, depending on job, hobby, whatever.)

C and its kin are one, associated with Unix and later Windows.

Pascal and its kin, notably Object Pascal, Delphi/FPC, another. Basic now means VB and that means .NET family languages, another family. Both have historically mainly been part of the MS camp but are now reaching out, against some resistance, into Unix land.

Java forms a camp of its own, but there are sub-camps of non-Java-like languages running on the JVM — Clojure, Scala, etc.

Apple’s flavour of Unix forms another camp, comprising ObjC and Swift, having abandoned outreach efforts.

People working on the development of Unix itself tend to strongly favour C above all else, and like relatively simple, old-fashioned tools — ancient text editors, standalone compilers. This has influenced the FOSS Unix GUIs and their apps.

The commercial desktop app developers are more into IDEs and automation; these days this covers .NET and JVM camps, and spans all OSes, but the Pascal/VB camp is still somewhat linked to Windows.

The people doing niche stuff, for their own needs or their organisations, which might be distributed as source — which covers sysadmins, devops and so on — are more into scripting languages, where there’s terrific diversity.

Increasingly the in-house app devs are just using Java, be they desktop or server apps. Indeed “desktop” apps of this type might now often mean Java server apps generating a remote UI via web protocols and technologies.

Multiple camps and affiliations. Many of them disdain the others.

A summary of how I’m actually addressing your question:

But these are the dominant ones, AFAICS. So when a new “safe” “secure” language was being built, “weird” niche things like Lisp, Forth, or APL never had a chance of a look-in. So it came out looking a bit Pascal- and BASIC-like, as those are the ones on the safe, heavily-type-checked side of the fence.

A more general summary:

I am coming to think that there are cultural forces stronger than technical forces involved in language choice.

Some examples I suspect that have been powerful:

Lisp (and FP) are inherently complex to learn and to use and require exceptionally high intelligence in certain focussed forms. Some people perfectly able to be serviceable, productive coders in simple imperative languages find themselves unable to fathom these styles or methods of programming. Their response is resentment, and to blame the languages, not themselves. (Dunning-Kruger is not a problem confined to those of low intelligence.)

This has resulted in the marginalisation of these technologies as the computing world became vastly more commoditised and widespread. Some people can’t handle them, and some of them end up in positions of influence, so teaching switched away from them and now students are taught in simpler, imperative languages. Result, there is a general perception that some of these niche tools are exotic, not generally applicable or important, just toys for academics. This isn’t actually true but it’s such a widespread belief that it is self-perpetuating.

This also applies to things like Haskell, ML/OCaml, APL, etc.

On the flip side: programming and IT are male-dominated industries, for no very good reason. This results in masculine patterns of behaviour having profound effects and influences.

So, for instance, languages in the Pascal family have safety as a priority and try to protect programmers from errors, possibly by not allowing them to write unsafe code. A typically masculine response to this is to resent the exertion of oppressive control.

Contrastingly, languages in the BCPL/C/C++ family give the programmer extensive control and require considerable discipline and care to write safe code. They allow programmers to make mistakes which safer languages would catch and prevent.

This has a flip side, though: the greater control potentially permits or offers theoretically higher performance.

This aligns with “manly” virtues of using powerful tools — the appeal of chainsaws, fast cars and motorcycles, big powerful engines, even arguably explicitly dangerous things like knives and guns. Cf. Perl, “the Swiss Army chainsaw”.

Thus, the masculine culture around IT has resulted in people favouring these languages. They’re dangerous in unskilled hands. So, get skilled, then you can access the power.

Of course, again, as Dunning-Kruger teaches us, people cannot assess their own skill, and languages which permit bugs that others would trap have been used very widely for 3 decades or more, often on the argument of performance but actually because of toxic culture. All OSes are written in them; now as a result it is a truism that only these languages are suitable for writing OSes.

(Ignoring the rich history of OSes in safer languages — Algol, Lisp, Oberon, perhaps even Mesa, or Pascal in the early Macs.)

If you want fast code, you need a fast language! And Real Men use C, and you want to be a Real Man, don’t you?

Cf. the story of Mel The Real Programmer.

Do it in something low-level, manage your own memory. Programming is a game for the smart, and you must be smart because you’re a programmer, so you can handle it and you won’t drop a pointer or overflow an array.

Result, decades of complex apps tackling arbitrary complex data — e.g. Web browsers, modern office suites — written in C, and decades of software patching and updating trying to catch the legions of bugs. This is now simply perceived as how software works, as normal.

Additionally, in many cases, any possible performance benefits have long been lost due to large amounts of protective code, of error-checking, in libraries and tools, made necessary by the problems and inherent fragility of the languages.

The rebellion against it is only in the form of niche line-of-business app developers doing narrow, specific stuff, who are moving to modern interpreted languages running on top of tens of millions of lines of C written by coders who are only just able to operate at this level of competence and make lots of mistakes.

For people not facing the pressures of commercial releases, there was an era of using safer, more protective compiled languages for in-company apps — Turbo Pascal, Delphi, VB. But that’s fading away now in favour of Java and .NET, “managed” languages running under a VM, with concomitant loss of performance but slight improvement in safety and reliability.

And because this has been widespread for some 2-3 decades, it’s now just _how things are done_. So if someone presents evidence and accounts of vastly better programmer productivity in other tools, decades ago, in things like Lisp or Smalltalk, then these are discounted as irrelevant. Those are not manly languages for manly programmers and so should not be considered. They’re toys.

People in small enough niches continue to use them but have given up evangelising about them. Like Mac users, their comments are dismissed as fanboyism.

So relatively small cultural effects have created immensely strong cultures, dogmas, about what is or isn’t a good choice for certain categories of problem. People outside those categories continue to use some of these languages and tools, while others languish.

This is immensely sad.

For instance, there have been successful hybrid approaches.

OSes written in Pascal derivatives, or in Lisp, or in Smalltalk, now lost to history. As a result, processor design itself has shifted and companies make processors that run C and C-like languages efficiently, and processors that understood richer primitives — lists, or objects — are now historical footnotes.

And languages which attempted to straddle different worlds — such as infix-notation Lisp derivatives, readable and easily learnable by programmers who only know infix-based, imperative languages — e.g. Dylan, PLOT, or CGOL — are again forgotten.

Or languages which developed down different avenues, such as the families of languages based on or derived from Oberon, or APL, or ML. All very niche.

And huge amounts of precious programmer time and effort are expended fighting against limited and limiting tools, not well suited to large complex projects, because programmers simply do not know that there are or were alternatives. These have been crudely airbrushed out, like disappearing Soviet commissars.

“And so successful was this venture that very soon Magrathea itself became the richest planet of all time, and the rest of the galaxy was reduced to abject poverty. And so the system broke down, the empire collapsed, and a long, sullen silence settled over the galaxy, disturbed only by the pen-scratchings of scholars as they laboured into the night over smug little treatises on the value of a planned political economy. In these enlightened days, of course, no one believes a word of it.”

(Douglas Adams)
liam_on_linux: (Default)
Recycled blog comment, in reply to this post and this tweet, itself a comment on Bill Bennet's blog post.

I couldn't really disagree more, I'm afraid.

I regularly switch between Mac OS X, Linux & Windows. Compared to genuinely different OSes -- RISC OS, Plan 9, Bluebottle -- they're almost identical. There's no such thing as "intuitive" computing (yet) -- it's just what you're most familiar with.

IMHO the problem is that Windows has been so dominant for 25Y+ that its ways are the only ones for which most people have "muscle memory".

There is nothing intuitive about hierarchical filing systems. It's not how real life works. People don't have folders full of folders full of folders. They have 1 level, maybe 2. E.g. a drawer or set of drawers containing folders with documents in. No more levels than that.

The deep hierarchies of 1970s to 1990s computers were a techie thing. They're conceptually abstract for normal folk. Tablets and Android phones show that: people have 1 level of folders and that's enough. The success of MS Office 2007 et seq (which I cordially loathe) shows that hunting through 1 level of tabs on a ribbon is easier for non-techies than layers of menus. Me, I like the menus.

You get used to Windows-isms and if they're taken away or altered, suddenly, it's all weird. But it's not harder, it's just different. The Mac way, even today, is somewhat simpler, and once you learn the new grammar, it's less hassle. Windows has the edge in some things, but surprisingly few, and with the accumulation of cruft like ribbons everywhere, it's losing that, too.

You say Apple's spent 27y hiding stuff. No. That's obviously silly. OS X is only 16y old, for a start. But it's spent 27y doing things differently and you didn't keep up, so when you switched, aaaargh, it's all weird!

OS X is Unix! Trademarked, POSIX certified, the lot. You know Unix? Pop open a terminal, all the usual stuff is there. But it's too much for non-techies, so it's simplified for them. Result, a trillion-dollar company and what PC types call "Mac fanbois". There's a reason -- because it really is easier for them. No window management: full-screen apps. No need to remember the meaning of multiple mouse buttons. They're there if you need them, but you can do it with gestures instead.

I learned Macs in 1988 and have used them alongside Windows and Linux for as long as all 3 existed. I use a 29Y old Apple keyboard and a 5-button Dell mouse on my Mac. I use it in a legacy way, with deep folder trees, a few symlinks to find things, and no Apple apps at all. When I borrowed the Mac of a student, set up with everything full-screen on multiple desktops switched between with gestures, all synched with his iPad and iPhone, I was totally lost. He uses it in a totally different way to the way I use mine -- with the same FOSS apps as on my Linux laptops and my dusty unused Windows partitions.

But that flexibility is good. And the fact that they have sold hundreds of millions of iOS devices and Macs indicates that it really is good for people, and they love it. It's not slavish fashion-following: putting a company surviving and thriving for 40 years down to that is arrant foolishness.

Perhaps you're a car driver. Most of them think that car controls are intuitive. They aren't. They're entirely arbitrary. I mostly switched from motorcycles to cars in 2005 at nearly 40 years old. Motorbike controls -- a hand throttle, because it needs great precision, but a foot gearchange because that doesn't -- still feel far more natural to me, a decade later.

But billions drive cars and find car controls natural and easy.

It's just what you're used to.

It's not Apple's fault, I'm afraid. It's yours. Sorry.

I urge you to exercise your brain and learn new muscle memories. It's worth it. The additional flexibility feels great.
liam_on_linux: (Default)
In response to a comment on:

It’s time to ban ‘stupid’ IoT devices. They’re as dangerous as post-Soviet era nuclear weapons.

One of the elements of security is currentness. It is more or less axiomatic that all software contains errors. Over time, these are discovered, and then they can be exploited to gain remote control over the thing running the software.

This is why people talk about "software rot" or "rust". It gets old, goes off, and is not desirable, or safe, to use any more.

Today, embedded devices are becoming so powerful & capable that it's possible to run ordinary desktop/server operating systems on them. This is much, much easier than purpose-writing tiny, very simple, embedded code. The smaller the software, the less there is to go wrong, so the less there is to debug.

Current embedded systems are getting pretty big. The £5 Raspberry Pi Zero can run a full Linux OS, GUI and all. This makes it easy and cheap to use.

For instance, the possibly forthcoming ZX Spectrum Next and Ben Versteeg's ZX HD Spectrum HDMI adaptor both work by just sticking a RasPi Zero in there and having it run software that converts the video signal. Even if the device is 1000x more powerful and capable than the computer it's interfaced to, it doesn't matter if it only costs a fiver.

The problem is that once such a device is out there in lots of Internet-connected hardware, it never gets updated. So even in the vanishingly unlikely event that it was entirely free of known bugs, issues and vulnerabilities when it was shipped, it won't stay that way. They *will* be discovered and then they *will* be exploited and the device *will* become vulnerable to exploitation.

And this is true of everything from smartphone-controlled light switches to doorbells to Internet-aware fridges. To a first approximation, all of them.

You can't have them automatically update themselves, because general-purpose OSes more or less inevitably grow over time. At some point they won't fit and your device bricks itself.

Or you give it lots of storage, increasing its price, but then the OS gets a new major version, which can't be automatically upgraded.

Or the volunteers updating the software stop updating that release, edition, family, or whatever, or it stops supporting the now-elderly chip your device uses...

Whichever way, you're toast. You are inevitably going to end up screwed.

What is making IoT possible is that computer power is cheap enough to embed general-purpose computers running general-purpose OSes into cheap devices, making them "smart". But that makes them inherently vulnerable.

This is a more general case of the argument that I tried (& judging by the comments, failed) to make in one of my relatively recent The Register pieces.

Cheap general-purpose hardware is a great thing and enables non-experts to do amazing and very cool things. However, so long as it's running open, general-purpose software designed for radically different types of computer, we have a big problem, and one that is going to get a whole lot worse.
liam_on_linux: (Default)
So, very rarely for me, a YouTube comment.

I know, I know, "never read the comments". But sheesh...



This is the single most inaccurate, error-ridden piece of computer reporting I have ever seen. Almost every single claim is wrong.

#9 Corel LinuxOS

This wasn't "designed by Debian". It was designed by, as the name says, Corel, but based on Debian, as is Ubuntu, Mint, Elementary & many other distros. For its time it was pretty good. I ran it.

"Struggled to detect drives" is nonsense.

It begat Xandros which continued for some years. Why was it killed? Because Corel did a licensing deal with Microsoft to add Visual Basic for Applications and MS Office toolbars to WordPerfect Office. One of the terms of the deal that MS insisted on was the cancellation of WordPerfect Office for Linux, Corel LinuxOS, and Corel's ARM-based NetWinder line of hardware.

#7 ITS

"Offered absolutely no security". Correct -- by design. Because it came out of what later became the GNU Project, and was meant to encourage sharing.

#6 GNU Hurd

Still isn't complete because it was vastly over-optimistic, but it has inspired L4, Minix 3 and many others. Most of its userland became the basis of Linux, arguably the most successful OS in the history of the world.

#5 Windows ME

There is a service pack, but it's unofficial.

It runs well on less memory than Windows 2000 did, and it was the first (and last) member of the Windows 9x family to properly support FireWire -- important if you had an iPod, for instance.

#4 MS-DOS 4.0

Wasn't written by Microsoft; it was a rebadged version of IBM's PC-DOS 4.0.

The phrase "badly-coded memory addresses" is literally meaningless, it is empty techno-babble.

It ran fine and introduced many valuable additions, such as support for hard disk partitions over 32MB, disk caching as standard, and the graphical DOSShell with its handy program-switching facility.

No, it wasn't a classic release, but it was the beginning of Microsoft being forced into making DOS competitive, alongside PC-DOS 4.0 and DR-DOS 5. It wasn't a result of creeping featuritis -- it was the beginning of it, and not from MS.

#3 Symbian

Symbian was a triumph, powering the very successful Psion Series 5, 5mx, Revo and NetBook as well as multiple mobile phones.

Meanwhile, there was no such device as "the Nokia S60" -- S60 was a user interface, a piece of software, not a phone. It was one of Symbian's UIs, alongside S80, S90 and UIQ in Europe and others elsewhere.

Symbian was the only mobile OS with good enough realtime support to run the GSM stack on the same CPU as the main OS -- all other smartphones used a separate CPU running a separate OS.

Its browser was fine for the time.

Nokia only moved to Windows Phone OS when it hired a former Microsoft manager to run the company. Before then it also had its own Linux, Maemo, and also made Android devices.

#2 Lindows

"The open source distribution of Linux" is more technobabble. A distribution is a variety of Linux -- Lindows was one.

Its UI was Windows-like, like many other Linuxes even today, but Lindows' selling point was that it could run Windows apps via WINE. This wasn't a good idea - the compatibility wasn't there yet although it's quite good today -- but it's not even mentioned.

Like Corel LinuxOS, it was based on Debian, but Debian is a piece of software, not a company. Debian didn't "expect" anything.

Almost every single statement here is wrong.

#1 Vista / Windows 8

Almost every new version of Windows ever has required high-end specs for the time. This wasn't a new failing of Vista.

Windows 8 is not more "multi-functional" than any previous version. Totally wrong.

It didn't "do away with the desktop" -- also totally wrong. It's still there and is the primary UI.



JavaOS and Windows 1.0 are by comparison almost fair and apt, but this is a shameful travesty of a piece. Everyone involved should be ashamed.
liam_on_linux: (Default)
My last job over here in Czechia was a year basically acting as the entire international customer complaints department for a prominent antivirus vendor.

Damned straight, Windows still has severe malware and virus problems! Yes, even Windows 8.x and 10.

The original dynamic content model for Internet Explorer was: download and run native binaries from the Internet. (AKA "ActiveX", basically OLE on web pages.) This is insane if you know anything about safe, secure software design.

It's better now, but the problem is that since IE is integrated into Windows, IE uses Windows core code to render text, images, etc. So any exploit that targets these Windows DLLs can allow a web page to execute code on your machine.

Unix' default model is that only binaries on your own system that have been marked as executable can run. By default it won't even run local stuff that isn't marked as such, let alone anything from a remote host.

(This is a dramatic oversimplification.)
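
A minimal sketch of what that means in practice (the file name is invented):

ls -l script.sh      # shows -rw-r--r--, i.e. no execute bit set
./script.sh          # refused: "Permission denied"
chmod +x script.sh   # the owner explicitly marks it executable
./script.sh          # now it runs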

Microsoft has slowly and painfully learned that the way Unix does things is safer than its own ways, and it's changing, but the damage is done. If MS rewrote Windows and fixed all this stuff, a lot of existing Windows programs wouldn't work any more. And the only reason to choose Windows is the huge base of software that there is for Windows.

Such things can be done. Mac OS X didn't run all classic MacOS apps when it was launched in 2001 or so. Then in 10.5 Apple dropped the ability to run old classic apps at all. Then in 10.6 it dropped the ability to run the OS on machines with the old processors. Then in 10.7 it dropped the ability to run apps compiled for the old processor.

It has carefully stage managed a transition, despite resistance. Microsoft _could_ have done this, but it didn't have the nerve.

It's worth mentioning that, to give it credit, the core code of both Windows 3 and Windows 95 contains some _inspired_ hacks to make stuff work, that Windows NT is a technical tour de force, and that the crap that has gradually worked its way in since Windows XP is due to the marketing people's insistence, not due to the programmers and their managers, who do superb work.

Other teams _do_ have the guts for drastic changes: look at Office 2007 (whole new UI, which I personally hate, but others like), and Windows 8 (whole new UI, which I liked but everyone else hated).

However Windows is the big cash cow and they didn't have the courage when it was needed. Now, it's too late.
liam_on_linux: (Default)
Something I seldom see mentioned, but I use a lot, is Linux systems installed directly onto USB sticks (pendrives).

No, you can't install from these, but they are very useful for system recovery & maintenance.

There are 2 ways to do it.

[1] Use a diskless PC, or disconnect your hard disk.

This is fiddly.

SUSE has some info on how to do this.

[2] Use a VM.

VirtualBox is free and lets you assign a physical disk drive to a VM. It's much harder to do this than it is in VMware -- it requires some shell commands to create, and other ones every time you wish to use it -- but it does work.

Here's how:

http://www.sysprobs.com/access-physical-disk-virtualbox-desktop-virtualization-software

Read the comments!
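
The one-off creation step boils down to something like this -- the device name /dev/sdc is just an example, matching the one below, and sudo may be needed:

VBoxManage internalcommands createrawvmdk -filename usbkey.vmdk -rawdisk /dev/sdc

Then attach the resulting usbkey.vmdk to a new VM as its hard disk.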

Every time you want to run the VM, you must take ownership of the USB device's entry in /dev

E.g.

chown lproven:lproven /dev/sdc

N.B. This may require sudo.

Then the VM works. If you don't do this, the VM won't start and will give an unhelpful error message about nonexistent devices, then quit.

(It's possible that you could work around this by running VirtualBox as root, but that is not advisable.)

The full Unity edition of Ubuntu 16.04 will not install on an 8GB USB key, but Lubuntu will. I suspect that Xubuntu would also be fine, and maybe the Maté edition. I suspect but have not tested that KDE and GNOME editions won't work, as they're bigger. They'd be fine on bigger keys, of course, but see the next paragraph.

Also note that desktops based on GNOME 3 require hardware OpenGL support, and thus run very badly inside VMs. This includes GNOME Shell, Unity & Cinnamon, and in my experience, KDE 4 & 5.

Installation puts GRUB in the MBR of the key, so it boots like any other disk.

Hints:

  • Partition the disk as usual. I suggest no separate /home but it's up to you. A single partition is easiest.

  • Format the root partition as ext2 to extend flash media life (no journalling -> fewer writes) -- see the sample commands after this list

  • Add ``noatime'' to the /etc/fstab entry for the root volume -- faster & again reduces disk writes

  • No swap. Swapping wears out flash media. I install and enable ZRAM just in case it's used on low-RAM machines: http://askubuntu.com/questions/174579/how-do-i-use-zram

  • You can add VirtualBox Guest Additions if you like. The key will run better in a VM and when booted on bare metal they just don't activate.
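
To illustrate the ext2 and noatime hints above (device names are examples only, adjust to suit your key):

mkfs.ext2 /dev/sdc1      # ext2, no journal
# /etc/fstab entry for the root volume, with noatime added:
/dev/sdc1   /   ext2   noatime,errors=remount-ro   0   1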

I then update as normal.

You can update when booted on bare metal, but if it installs a kernel update, then it will run ``update-grub'' and this will add entries for any OSes on that machine's hard disk into the GRUB menu. I don't like this -- it looks messy -- so I try to only update inside a VM.

I usually use a 32-bit edition; the resulting key will boot and run on 64-bit machines too, and modern versions automatically enable PAE and use all available RAM.

Sadly my Mac does not see such devices as bootable volumes, but the keys work on normal PCs fine.

EDIT: It occurs to me that they might not work on UEFI PCs unless you create a UEFI system partition and appropriate boot files. I don't have a UEFI PC to experiment with. I'd welcome comments on this.

Windows can't see them as it does not natively understand ext* format filesystems. If you wish you can partition the drive and have an exFAT (or whatever format you prefer) data partition as well, of course.
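
For example, to give the key a second, exFAT-formatted data partition that Windows can read (device names and sizes are purely illustrative):

parted -s /dev/sdc mkpart primary 8GB 15GB   # add a second partition
mkfs.exfat -n DATA /dev/sdc2                 # format it as exFAT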

I also install some handy tools such as additional filesystem support (exFAT, HFS etc.), GParted, things like that.

I find such keys a handy addition to my portable toolkit and have used them widely.

If you wish and you used a big enough key, you could install multiple distros on a single key this way. But remember, you can't install from them.

I've also found that the BootRepair tool won't install on what it considers to be an installed system. It insists on being installed on a live installer drive.

If you want to carry around lots of ISO files and choose which to install, a device like this is the easiest way:

http://www.zalman.com/contents/products/view.html?no=212
liam_on_linux: (Default)

I am reluctant, but I have to sell this lovely phone.

It's a 32GB, fully-unlocked Blackberry Passport running the latest OS. It's still in support and receiving updates.

http://us.blackberry.com/smartphones/blackberry-passport/overview.html

A PDAir black leather folding case is included in the price -- one of these:

https://www.amazon.co.uk/Pdair-Leather-BlackBerry-Passport-Stitch/dp/B012AU2FVO

It is used but in excellent condition and fully working. I have used both Tesco Mobile CZ and UK EE micro SIM cards and both worked perfectly.

The keyboard is also a trackpad and can be used to scroll and select text. The screen is square and hi-resolution -- the best I have ever used on a smartphone.

It runs the latest Blackberry 10 OS, which has the best email client on any pocket device. It can also run some Android apps and includes the Amazon app store. I side-loaded the Google Play store but not all apps for standard Android work. I am happy to help you load this if you want.

It is 100% usable without a Google, Apple or Microsoft account, if you are concerned about privacy issues.

It supports Blackberry Messenger, obviously, and has native clients for Twitter and other social networks -- I used Skype, Reddit, Foursquare and Untappd, among others. I also ran Android clients for Runkeeper, Last.FM and several other services. Facebook, Google+ and others are usable via their web interfaces.

I will do a full factory reset before handing it over.

It has a microSD slot for additional storage if you need it.

It is about a year old and has been used, so the battery is not as good as new, but it still lasts much longer than the Android phablet that replaced it!

You can see it and try it before purchase if you wish.

Reason for sale: I needed more apps. I do not speak Czech and I need Google Translate and Google Maps almost every day.

Note: no mains adaptor included but it charges over micro-USB, so any charger will work, although it complains about other phone brands' chargers -- but they still work.

IKEA sell a cheap multiport one:
http://www.ikea.com/cz/cs/catalog/products/00291891/



You can see photos of my device here:
Passport

This is the Flickr album, or click on the photo above.

I am hoping for CzK 10000 but I am willing to negotiate.

Contact details on my profile page, or email lproven on Google Mail.
liam_on_linux: (Default)
I found this post interesting:

"Respinning Linux"

It led me to comment as follows...

Have you folks encountered LXLE? It's a special version of Lubuntu, the lightest-weight of the official Ubuntu remixes, optimised for older hardware.

http://www.lxle.net/

Cinnamon is a lot less than ideal, because it uses a desktop compositor. This requires hardware OpenGL. If the graphics driver doesn't do this, it emulates it using a thing called "LLVMpipe". This process is slow & uses a lot of CPU bandwidth. This is true of all desktops based on GNOME 3 -- including Unity, Elementary OS, RHEL/CentOS "Gnome Classic", SolusOS's Consort, and more. All are based on Gtk 3.
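
A quick way to check whether a given machine has fallen back to software rendering -- assuming the glxinfo tool from mesa-utils is installed:

glxinfo | grep -i "opengl renderer"   # if the output mentions llvmpipe, OpenGL is being emulated in software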

In KDE, it is possible to disable the compositor, but it's still very heavyweight.

The mainstream desktops that do not need compositing at all are, in order of size (memory footprint), from largest to smallest:
* Maté
* Xfce
* LXDE

All are based on Gtk 2, which has now been replaced with Gtk 3.

Of these, LXDE is the lightest, but it is currently undergoing a merger with the Razor-Qt desktop to become LXQt. This may be larger & slower when finished -- it's too soon to tell.

However, of the 3, this means it has a brighter-looking future because it will be based on a current toolkit. Neither Maté nor Xfce has announced a firm migration path to Gtk 3 yet.
liam_on_linux: (Default)
I almost never saw 2.8MB floppy drives.

I know they were out there. The later IBM PS/2 machines used them, and so did some Unix workstations, but the 2.8MB format -- quad or extended density -- never really took off.

It did seem to me that if the floppy companies & PC makers had actually adopted them wholesale, the floppy disk as a medium might have survived for considerably longer.

The 2.8MB drives never really took off widely, so the media remained expensive, ISTM -- and thus little software was distributed on the format, because few machines could read it.

By 1990 there was an obscure and short-lived 20MB floptical diskette format:

http://www.cbronline.com/news/insites_20mb_floptical_drive_reads_144mb_disks

Then in 1994 came 100MB Zip disks, which for a while were a significant format -- I had Macs with built-in-as-standard Zip drives.

Then the 3½" super floptical drives, the Imation SuperDisk in 1997, 144MB Caleb UHD144 in early 1998 and then 150MB Sony HiFD in late 1998.

(None of these later drives could read 2.8MB diskettes, AFAIK.)

After that, writable CDs got cheap enough to catch on, and USB Flash media mostly has killed them off now.

If the 2.8 had taken off, and maybe even intermediate ~6MB and ~12MB formats -- was that feasible? -- before the 20MB ones, well, with widespread adoption, there wouldn't have been an opening for the Zip drive, and the floppy drive might have remained a significant and important medium for another decade.

I didn't realise that the Zip drive eventually got a 750MB version, presumably competing with Iomega's own 1GB Jaz drive. If floppy drives had got into that territory, could they have even fended off CDs? Rewritable CDs always were a pain. They were a one-shot medium and thus inconvenient and expensive -- write on one machine, use a few times at best, then throw away.

I liked floppies. I enjoy playing with my ancient Sinclair computers, but loading from tape cassette is just a step too far. I remember the speed and convenience when I got my first Spectrum disk drive, and I miss it. Instant loading from an SD drive just isn't the same. I don't use them on PCs any more -- I don't have a machine with a floppy drive in this country -- but for 8-bits, two drives with a meg or so of storage was plenty. I used them long after most people, if only for updating BIOSes and so on.
liam_on_linux: (Default)
I was surprised to read someone castigating and condemning the Cyrix line of PC CPUs today.

For a while, I recommended 'em and used 'em myself. My own home PC was a Cyrix 6x86 P166+ for a year or two. Lovely machine -- a 133MHz processor that performed about 30-40% better than an Intel Pentium MMX at the same clock speed.

My then-employer, PC Pro magazine, recommended them too.

I only ever hit one problem: I had to turn down reviewing the latest version of Aldus PageMaker because it wouldn't run on a 6x86. I replaced it with a Baby-AT Slot 1 Gigabyte motherboard and a Pentium II 450. (Only the 100MHz front side bus Pentium IIs were worth bothering with IMHO. The 66MHz FSB PIIs could be outperformed by a cheaper SuperSocket 7 machine with a Cyrix chip.) It was very difficult to find a Baby-AT motherboard for a PII -- the market had switched to ATX by then -- but it allowed me to keep a case I particularly liked, and indeed, most of the components in that case, too.

The one single product that killed the Cyrix chips was id Software's Quake.

Quake used very cleverly optimised x86 code that interleaved FPU and integer instructions, as John Carmack had worked out that apart from instruction loading, which used the same registers, FPU and integer operations used different parts of the Pentium core and could effectively be overlapped. This nearly doubled the speed of FPU-intensive parts of the game's code.

The interleaving didn't work on Cyrix cores. It ran fine, but the operations did not overlap, so execution speed halved.

On every other benchmark and performance test we could devise, the 6x86 core was about 30-40% faster than the Intel Pentium core -- or the Pentium MMX, as nothing much used the extra instructions, so really only the additional L1 cache helped. (The Pentium 1 had 16 kB of L1; the Pentium MMX had 32 kB.)

But Quake was extremely popular, and everyone used it in their performance tests -- and thus hammered the Cyrix chips, even though the Cyrix was faster in ordinary use, in business/work/Windows operation, indeed in every other game except Quake.

And ultimately that killed Cyrix off. Shame, because the company had made some real improvements to the x86-32 design. Improving instructions-per-clock is more important than improving the raw clock speed, which was Intel's focus right up until the demise of the Netburst Pentium 4 line.

AMD with the 64-bit Sledgehammer core (Athlon 64 & Opteron) did the same to the P4 as Cyrix's 6x86 did to the Pentium 1. Indeed I have a vague memory some former Cyrix processor designers were involved.

Intel Israel came back with the (Pentium Pro-based) Pentium M line, intended for notebooks, and that led to the Core series, with IPC speeds that ultimately beat even AMD's. Today, nobody can touch Intel's high-end x86 CPUs. AMD is looking increasingly doomed, at least in that space. Sadly, though, Intel has surrendered the low end and is killing the Atom line.

http://www.pcworld.com/article/3063672/windows/the-death-of-intels-atom-casts-a-dark-shadow-over-the-rumored-surface-phone.html

The Atoms were always a bit gutless, but they were cheap, ran cool, and were frugal with power. In recent years they've enabled some interesting cheap low-end Windows 8 and Windows 10 tablets:

http://www.anandtech.com/show/8760/hp-stream-7-review

https://www.amazon.co.uk/Windows10-Tablet-Display-11000mAh-Battery-F-Black-B-Gray/dp/B01DF3UV3Y?ie=UTF8&keywords=hi12&qid=1460578088&ref_=sr_1_2&sr=8-2

Given that there is Android for x86, and have already been Intel-powered Android phones, plus Windows 10 for phones today, this opened up the intriguing possibility of x86 Windows smartphones -- but then Intel slammed the door shut.

Cyrix still exists, but only as a brand for Via, with some very low-end x86 chips. Interestingly, these don't use Cyrix CPU cores -- they use a design taken from a different non-Intel x86 vendor, the IDT WinChip:

https://en.wikipedia.org/wiki/WinChip

I installed a few WinChips as upgrades for low-speed Pentium PCs. The WinChip never was all that fast, but it was a very simple, stripped-down core, so it ran cool, was about as quick as a real Pentium core, but was cheaper and ran at higher clock speeds, so they were mainly sold as an aftermarket upgrade for tired old PCs. The Cyrix chips weren't a good fit for this, as they required different clock speeds, BIOS support, additional cooling and so on. IDT spotted a niche and exploited it, and oddly, that is the non-Intel x86 core that's survived at the low-end, and not the superior 6x86 one.

In the unlikely event that Via does some R&D work, it could potentially move into the space now vacated by the very low-power Atom chips. AMD is already strong in the low-end x86 desktop/notebook space with its Fusion processors which combine a 64-bit x86 core with an ATI-derived GPU, but they are too big, too hot-running and too power-hungry for smartphones or tablets.
liam_on_linux: (Default)

(Repurposed email reply)

Although I was educated & worked with DEC systems, I didn't have much to do with the company itself. Its support was good, the kit ludicrously expensive, and the software offerings expensive, slow and lacking competitive features. However, they also scored in some ways.

My 60,000' view:

Microsoft knew EXACTLY what it was doing with its practices when it built up its monopoly. It got lucky with the technology: its planned future super products flopped, but it turned on a dime & used what worked.

But killing its rivals, any potential rival? Entirely intentional.

The thing is that no other company was poised to effectively counter the MS strategy. Nobody.

MS' almost-entirely-software-only model was almost unique. Its ecosystem of apps and 3rd party support was unique.

In the end, it actually did us good. Gates wanted a computer on every desk. We got that.

The company's strategy called for open compatible generic hardware. We got that.

Only one platform, one OS, was big enough, diverse enough, to compete: Unix.

But commercial, closed, proprietary Unix couldn't. 2 ingredients were needed:

#1 COTS hardware - which MS fostered;
#2 FOSS software.

Your point about companies sharing their source is noble, but I think inadequate. The only thing that could compete with a monolithic software monopolist on open hardware was open software.

MS created the conditions for its own doom.

Apple cleverly leveraged FOSS Unix and COTS X86 hardware to take the Mac brand and platform forward.

Nobody else did, and they all died as a result.

If Commodore, Atari and Acorn had adopted similar strategies (as happened independently of them later, after their death, resulting in AROS, AFROS & RISC OS Open), they might have lived.

I can't see it fitting the DEC model, but I don't know enough. Yes, cheap low-end PDP-11s with FOSS OSes might have kept them going longer, but not saved them.

The deal with Compaq was catastrophic. Compaq was in Microsoft's pocket. I suspect that Intel leant on Microsoft and Microsoft then leant on Compaq to axe Alpha, and Compaq obliged. It also knifed HP OpenMail, possibly the Unix world's only viable rival to Microsoft Exchange.

After that it was all over bar the shouting.

Microsoft could not have made a success of OS/2 3 without Dave Cutler... But DEC couldn't have made a success out of PRISM either, I suspect. Maybe a stronger DEC would have meant Windows NT would never have happened.

liam_on_linux: (Default)
My contention is that a large part of the reason that we have the crappy computers that we do today -- lowest-common-denominator boxes, mostly powered by one of the kludgiest and most inelegant CPU architectures of the last 40 years -- is not technical, nor even primarily commercial or due to business pressures, but rather, it's cultural.

When I was playing with home micros (mainly Sinclair and Amstrad; the American stuff was just too expensive for Brits in the early-to-mid 1980s), the culture was that Real Men programmed in assembler and the main battle was Z80 versus 6502, with a few weirdos saying that 6809 was better than either. BASIC was the language for beginners, and a few weirdos maintained that Forth was better.

At university, I used a VAXcluster and learned to program in Fortran-77. The labs had Acorn BBC Micros in -- solid machines, the best 8-bit BASIC ever, and they could interface both with lab equipment over IEEE-488 and with generic printers and so on over Centronics parallel and its RS-423 interface [EDIT: fixed!], which could talk to RS-232 kit.

As I discovered when I moved into the professional field a few years later (1988), this wasn't that different from the pro stuff. A lot of apps were written in various BASICs, and in the old era of proprietary OSes on proprietary kit, for performance, you used assembler.

But a new wave was coming. MS-DOS was already huge and the Mac was growing strongly. Windows was on v2 and was a toy, but Unix was coming to mainstream kit, or at least affordable kit. You could run Unix on PCs (e.g. SCO Xenix), on Macs (A/UX), and my employers had a demo IBM RT-6150 running AIX 1.

Unix wasn't only the domain (pun intentional) of expensive kit priced in the tens of thousands.

A new belief started to spread: that if you used C, you could get near-assembler performance without the pain, and the code could be ported between machines. DOS and Mac apps started to be written (or rewritten) in C, and some were even ported to Xenix. In my world, nobody used stuff like A/UX or AIX, and Xenix was specialised. I was aware of Coherent as the only "affordable" Unix, but I never saw a copy or saw it running.

So this second culture of C code running on non-Unix OSes appeared. Then the OSes started to scramble to catch up with Unix -- first OS/2, then Windows 3, then, for a decade, the parallel universe of Windows NT, until XP became established and Win9x finally died. Meanwhile, Apple and IBM flailed around, until IBM surrendered and Apple merged with NeXT and switched to NeXTstep.

Now, Windows is evolving to be more and more Unix-like, with GUI-less versions, clean(ish) separation between GUI and console apps, a new rich programmable shell, and so on.

While the Mac is now a Unix box, albeit a weird one.

Commercial Unix continues to wither away. OpenVMS might make a modest comeback. IBM mainframes seem to be thriving; every other kind of big iron is now emulated on x86 kit, as far as I can tell. IBM has successfully killed off several efforts to do this for z Series.

So now, it's Unix except for the single remaining mainstream proprietary system: Windows. Unix today means Linux, while the weirdos use FreeBSD. Everything else seems to be more or less a rounding error.

C always was like carrying water in a sieve, so now, we have multiple C derivatives, trying to patch the holes. C++ has grown up but it's like Ada now: so huge that nobody understands it all, but actually, a fairly usable tool.

There's the kinda-sorta FOSS "safe C++ in a VM", Java. The proprietary kinda-sorta "safe C++ in a VM", C#. There's the not-remotely-safe kinda-sorta C in a web browser, JavaScript.

And dozens of others, of course.

Even the safer ones run on a basis of C -- so the lovely cuddly friendly Python, that everyone loves, has weird C printing semantics to mess up the heads of beginners.
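
A tiny illustration of what I mean (my own throwaway example, nothing canonical): Python's % string-formatting operator is lifted more or less straight from C's printf conversion specifiers, silent truncation and all:

    price = 3.14159
    print("Pi is roughly %d" % price)    # prints "Pi is roughly 3" -- %d quietly truncates
    print("Pi is roughly %.2f" % price)  # prints "Pi is roughly 3.14"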

Perl abandoned its base and planned a move onto a VM; then the VM went wrong; now there's a new VM, and, to general amazement and a general lack of interest, Perl 6 is finally here.

All the others are still implemented in C, mostly on a Unix base, like Ruby, or on a JVM base, like Clojure and Scala.

So they still have C-like holes, and there are frequent patches and updates to try to make them retain some water for a short time, while the "cyber criminals" make hundreds of millions.

Anything else is "uncommercial" or "not viable for real world use".

Borland totally dropped the ball and lost a nice little earner in Delphi, but it continues as Free Pascal and so on.

Apple goes its own way, but has forgotten the truly innovative projects it had pre-NeXT, such as Dylan.

There were real projects that were actually used for real work, like Oberon the OS, written in Oberon the language. Real pioneering work in UIs, such as Jef Raskin's machines, the original Mac and Canon Cat -- forgotten. People rhapsodise over the Amiga and forget that the planned OS, CAOS, to be as radical as the hardware, never made it out of the lab. Same, on a smaller scale, with the Acorn Archimedes.

Despite that, of course, Lisp never went away. People still use it, but they keep their heads down and get on with it.

Much the same applies to Smalltalk. Still there, still in use, still making real money and doing real work, but forgotten all the same.

The Lisp Machines and Smalltalk boxes lost the workstation war. Unix won, and as history is written by the victors, now the alternatives are forgotten or dismissed as weird kooky toys of no serious merit.

The senior Apple people didn't understand the essence of what they saw at PARC: they only saw the chrome. They copied the chrome, not the essence, and now all that any of us have is the chrome. We have GUIs, but on top of the nasty kludgy hacks of C and the like. A late-'60s skunkware project now runs the world, and the real serious research efforts to make something better, both before and after, are forgotten historical footnotes.

Modern computers are a vast disappointment to me. We have no thinking machines. The Fifth Generation, Lisp, all that -- gone.

What did we get instead?

Like dinosaurs, the expensive high-end machines of the '70s and '80s didn't evolve into their successors. They were just replaced. First came little cheapo 8-bits, not real or serious at all, though people did serious stuff with them because they were all they could afford. The early 8-bits ran semi-serious OSes such as CP/M, but when their descendants sold a thousand times more, those descendants weren't running descendants of that OS -- no, it and its creator died.

CP/M evolved into a multiuser multitasking 386 OS that could run multiple MS-DOS apps on terminals, but it died.

No, then the cheapo 8-bits thrived in the form of an 8/16-bit hybrid, the 8086 and 8088, and a cheapo knock-off of CP/M.

This got a redesign into something grown-up: OS/2.

Predictably, that died.

So the hacked-together GUI for DOS got re-invigorated with an injection of OS/2 code, as Windows 3. That took over the world.

The rivals - the Amiga, ST, etc? 680x0 chips, lots of flat memory, whizzy graphics and sound? All dead.

Then Windows got re-invented with some OS/2 3 ideas and code, and some from VMS, and we got Windows NT.

But the marketing men got to it and ruined its security and elegance, to produce the lipstick-and-high-heels Windows XP. That version, insecure and flakey with its terrible bodged-in browser, that, of course, was the one that sold.

Linux got nowhere until it copied the XP model. The days of small programs, everything's a text file, etc. -- all forgotten. Nope: lumbering GUI apps, CORBA and RPC and other weird plumbing, huge complex systems -- but it looks and works kinda like Windows and the Mac now, so people use it.

Android looks kinda like iOS and people use it in their billions. Newton? Forgotten. No, people have Unix in their pocket, only it's a bloated successor of Unix.

The efforts to fix and improve Unix -- Plan 9, Inferno -- forgotten. A proprietary microkernel Unix-like OS for phones -- Blackberry 10, based on QNX -- not Androidy enough, and bombed.

We have less and less choice, made from worse parts on worse foundations -- but it's colourful and shiny and the world loves it.

That makes me despair.

We have poor-quality tools, built on poorly-designed OSes, running on poorly-designed chips. Occasionally, fragments of older better ways, such as functional-programming tools, or Lisp-based development environments, are layered on top of them, but while they're useful in their way, they can't fix the real problems underneath.

Occasionally someone comes along and points this out and shows a better way -- such as Curtis Yarvin's Urbit. Lisp Machines re-imagined for the 21st century, based on top of modern machines. But nobody gets it, and its programmer has some unpleasant and unpalatable ideas, so it's doomed.

And the kids who grew up after C won the battle deride the former glories, the near-forgotten brilliance that we have lost.

And it almost makes me want to cry sometimes.

We should have brilliant machines now, not merely Steve Jobs' "bicycles for the mind", but Gossamer Albatross-style hang-gliders for the mind.

But we don't. We have glorified 8-bits. They multitask semi-reliably, they can handle sound and video and 3D and look pretty. On them, layered over all the rubbish and clutter and bodges and hacks, inspired kids are slowly brute-forcing machines that understand speech, which can see and walk and drive.

But it could have been so much better.

Charles Babbage never finished the Difference Engine. If he had, it would have paid for him to build his Analytical Engine, and that would have given the Victorian British Empire the steam-driven computer, which would have transformed history.

But he got distracted and didn't deliver.

We started to build what a few old-timers remember as brilliant machines, machines that helped their users to think and to code, with brilliant -- if flawed -- software written in the most sophisticated computer languages yet devised, by the popular acclaim of the people who really know this stuff: Lisp and Smalltalk.

But we didn't pursue them. We replaced them with something cheaper -- with Unix machines, an OS only a nerd could love. And then we replaced the Unix machines with something cheaper still -- the IBM PC, a machine so poor that the £125 ZX Spectrum had better graphics and sound.

And now, we all use descendants of that. Generally acknowledged as one of the poorest, most-compromised machines, based on descendants of one of the poorest, most-compromised CPUs.

Yes, over the 40 years since then, most of the rough edges have been polished out. The machines are now small, fast and power-frugal, with tons of memory and storage, and great graphics and sound. But it's taken decades to get here.

And the OSes have developed. Now they're feature-rich, fairly friendly, really very robust considering the stone-age stuff they're built from.

But if we hadn't spent 3 or 4 decades making a pig's ear into a silk purse -- if we'd started with a silk purse instead -- where might we have got to by now?
liam_on_linux: (Default)
More retrocomputing meanderings -- whatever became of the ST, Amiga and Acorn operating systems?

The Atari ST's GEM desktop also ran on MS-DOS, DR's own DOS+ (a forerunner of the later DR-DOS) and today is included with FreeDOS. In fact the first time I installed FreeDOS I was *very* surprised to find my name in the credits. I debugged some batch files used in installing the GEM component.

The ST's GEM was the same environment. ST GEM was derived from GEM 1; PC GEM from GEM 2, crippled after an Apple lawsuit. Then they diverged. FreeGEM attempted to merge them again.

But the ST's branch prospered, before the rise of the PC killed off all the alternative platforms. Actual STs can be quite cheap now, or you can even buy a modern clone:

http://harbaum.org/till/mist/index.shtml

If you don't want to lash out but have a PC, the Aranym environment gives you something of the feel of the later versions. It's not exactly an emulator, more a sort of compatibility environment that enhances the "emulated" machine as much as it can using modern PC hardware.

http://aranym.org/

And the ST GEM OS was so modular that different 3rd parties cloned every component, separately. Some commercially, some as FOSS. The Aranym team basically put together a sort of "distribution" of as many FOSS components as they could, to assemble a nearly-complete OS, then wrote the few remaining bits to glue it together into a functional whole.

So, finally, after the death of the ST and its clones, there was an all-FOSS OS for it. It's pretty good, too. It's called AFROS, Atari Free OS, and it's included as part of Aranym.

I longed to see a merger of FreeGEM and Aranym, but it was never to be.

The history of GEM and TOS is complex.

Official Atari TOS+GEM evolved into TOS 4, which included the FOSS MiNT multitasking layer, and which isn't much like the original ROM version in the first STs.

The underlying TOS OS is not quite like anything else.

AIUI, CP/M-68K was a real, if rarely-seen, OS.

However, it proved inadequate to support GEM, so it was discarded. A new kernel was written using some of the tech from what was later to become DR-DOS on the PC -- something less like CP/M and more like MS-DOS: directories, separated with backslashes; FAT format disks; multiple executable types, 8.3 filenames, all that stuff.

None of the command-line elements of CP/M or any DR DOS-like OS were retained -- the kernel booted the GUI directly and there was no command line, like on the Mac.

This is called GEMDOS and AIUI it inherits from both the CP/M-68K heritage and from DR's x86 DOS-compatible OSes.

The PC version of GEM also ran on Acorn's BBC Master 512 which had an Intel 80186 coprocessor. It was a very clever machine, in a limited way.

Acorn's series of machines are not well-known in the US, AFAICT, and that's a shame. They were technically interesting, more so IMHO than the Apple II and III, TRS-80 series etc.

The original Acorns were 6502-based, but with good graphics and sound, a plethora of ports, a clear separation between OS, BASIC and add-on ROMs such as the various DOSes, etc. The BASIC was, I'd argue strongly, *the* best 8-bit BASIC ever: named procedures, local variables, recursion, inline assembler, etc. Also the fastest BASIC interpreter ever, and quicker than some compiled BASICs.

Acorn built for quality, not price; the machines were aimed at the educational market, which wasn't so price-sensitive, a model that NeXT emulated. Home users were welcome to buy them & there was one (unsuccessful) home model, but they were unashamedly expensive and thus uncompromised.

The only conceptual compromise in the original BBC Micro was that there was provision for ROM bank switching, but not RAM. The 64kB memory map was 50:50 split ROM and RAM. You could switch ROMs, or put RAM in their place, but not have more than 64kB. This meant that the high-end machine had only 32kB RAM, and high-res graphics modes could take 21kB or so, leaving little space for code -- unless it was in ROM, of course.

The later BBC+ and BBC Master series fixed that. They also added a numeric keypad and allowed ROM cartridges, rather than bare chips inserted in sockets on the main board.

Acorn looked at the 16-bit machines in the mid-80s, mostly powered by Motorola 68000s of course, and decided they weren't good enough and that the tiny UK company could do better. So it did.

But in the meantime, it kept the 6502-based, resolutely-8-bit BBC Micro line alive with updates and new models, including ROM-based terminals and machines with a range of built-in coprocessors: faster 6502-family chips for power users, Z80s for CP/M, Intel's 80186 for kinda-sorta PC compatibility, the NatSemi 32016 with PANOS for ill-defined scientific computing, and finally, an ARM copro before the new ARM-based machines were ready.

Acorn designed the ARM RISC chip in-house, then launched its own range of ARM-powered machines, with an OS based on the 6502 range's. Although limited, this OS is still around today and can be run natively on a Raspberry Pi:

https://www.riscosopen.org/content/

It's very idiosyncratic -- the filesystem, the command line and the default editor are all totally unlike anything else. The file-listing command is CAT, the directory separator is a full stop (i.e. a period), and the root directory is called $. The editor is a very odd dual-cursor thing. It's fascinating, and totally unrelated both to the entire DEC/MS-DOS family and to the entire Unix family. There is literally and exactly nothing else even slightly like it.
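
Just to make the flavour concrete, here's a throwaway sketch of my own (not anything that ships with RISC OS), mapping those two conventions onto Unix ones; real RISC OS paths can also carry filing-system prefixes, which this ignores:

    def riscos_to_unix(path):
        # '.' is the directory separator; '$' is the root directory
        parts = path.split(".")
        if parts and parts[0] == "$":
            parts = parts[1:]
        return "/" + "/".join(parts)

    print(riscos_to_unix("$.Documents.Letters.Bank"))   # -> /Documents/Letters/Bank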

It was the first GUI OS to implement features that are now universal across GUIs: anti-aliased font rendering, and full-window dragging and resizing (as opposed to dragging an outline). Significantly, it was also the first graphical desktop to implement a taskbar, before NeXTstep and long before Windows 95.

It supports USB, can access the Internet and WWW. There are free clients for chat, email, FTP, the WWW etc. and a modest range of free productivity tools, although most things are commercial.

But there's no proper inter-process memory protection, GUI multitasking is cooperative, and consequently it's not amazingly stable in use. It does support pre-emptive multitasking, but via the text editor, bizarrely enough, and only of text-mode apps. There was also a pre-emptive multitasking version of the desktop, but it wasn't very compatible, didn't catch on and is not included in current versions.

But saying all that, it's very interesting, influential, shared-source, entirely usable today, and it runs superbly on the £25 Raspberry Pi, so there is little excuse not to try it. There's also a FOSS emulator which can run the modern freeware version:

http://www.marutan.net/rpcemu/

For users of the old hardware, there's a much more polished commercial emulator for Windows and Mac which has its own, proprietary fork of the OS:

http://www.virtualacorn.co.uk/index2.htm

There's an interesting parallel with the Amiga. Both Acorn and Commodore had ambitious plans for a modern multitasking OS, which both companies described as Unix-like. In both cases the project didn't deliver, and the ground-breaking, industry-redefining hardware instead shipped with a much less ambitious OS. Both of those OSes were nonetheless widely loved, and both still survive today, 30 years later, in the form of multiple actively-maintained forks -- even though Unix in fact caught up with and long surpassed these 1980s oddballs.

AmigaOS, based in part on the academic research OS Tripos, has 3 modern forks: the FOSS AROS, on x86, and the proprietary MorphOS and AmigaOS 4 on PowerPC.

Acorn RISC OS, based in part on Acorn MOS for the 8-bit BBC Micro, has 2 contemporary forks. RISC OS 5 is owned by Castle Technology but developed by RISC OS Open -- shared source rather than FOSS -- and runs on the Raspberry Pi, the BeagleBoard and some other ARM boards, plus some old hardware and RPCEmu. RISC OS 4 is now owned by the company behind VirtualAcorn, run by an ARM engineer who apparently made good money selling software ARM emulators for x86 to ARM Holdings.

Commodore and the Amiga are both long dead and gone, but the name periodically changes hands and reappears on various bits of modern hardware.

Acorn is also long dead, but its scion ARM Holdings designs the world's most popular series of CPUs, totally dominates the handheld sector, and outsells Intel, AMD & all other x86 vendors put together something like tenfold.

Funny how things turn out.
liam_on_linux: (Default)
I am told it's lovely to use. Sadly, it only runs on obscure PowerPC-based kit that costs a couple of thousand pounds and can be out-performed by a £300 PC.

AmigaOS's owners -- Hyperion, I believe -- chose the wrong platform.

On a Raspberry Pi or something, it would be great. On obscure expensive PowerPC kit, no.

Also, saying that, I got my first Amiga in the early 2000s. If I'd had one 15y earlier, I'd probably have loved it, but I bought a 2nd hand Archimedes instead (and still think it was the right choice for a non-gamer and dabbler in programming).

A few years ago, with a LOT of work using 3 OSes and 3rd-party disk-management tools, I managed to coax MorphOS onto my Mac mini G4. Dear hypothetical gods, that was a hard install.

It's... well, I mean, it's fairly fast, but... no Wifi? No Bluetooth?

And the desktop. It got hit hard with the ugly stick. I mean, OK, it's not as bad as KDE, but... ick.

Learning AmigaOS when you already know more modern OSes -- OS X, Linux, gods help us, even Windows -- well, the Amiga seems pretty weird, and often for no good reason. E.g. a graphical file manager, but not all files have icons. They're not hidden, they just don't have icons, so if you want to see them, you have to do a second show-all operation. And the dependence on RAMdisks, which are a historical curiosity now. And needing to right-click to show the menu bar when it's on a screen edge.

A lot of pointless arcana, just so Apple didn't sue, AFAICT.

I understand the love if one loved it back then. But now? Yeeeeeeaaaaaah, not so much.

Not that I'm proclaiming RISC OS to be the business now. I like it, but it's weird too. But AmigaOS does seem a bit primitive now. OTOH, if they sorted out multiprocessor support and memory protection and it ran on cheap ARM kit, then yeah, I'd be interested.
liam_on_linux: (Default)
I recently read that a friend of mine claimed that "Both the iPhone and iPod were copied from other manufacturers, to a large extent."

This is a risible claim, AFAICS.

There were pocket MP3 jukeboxes before the iPod. I still own one. They were fairly tragic efforts.

There were smartphones before the iPhone. I still have at least one of them, too. Again, really tragic from a human-computer interaction point of view.


AIUI, the iPhone originated internally as a shrunk-down tablet. The tablet originated from a personal comment from Bill Gates to Steve Jobs: that although tablets were a great idea, people simply didn’t want them, because Microsoft had made them and they didn’t sell.

Jobs’ response was that the Microsoft ones didn’t sell because they were no good, not because people didn’t want tablets. In particular, Jobs stated that using a stylus was a bad idea. (This is also a pointer as to why he cancelled the Newton. And guess what? I've got one of them, too.)

Gates, naturally, contested this, and Jobs started an internal project to prove him wrong: a stylus-free finger-operated slim light tablet. However, when it was getting to prototype form, he allegedly realised, with remarkable prescience, that the market wasn’t ready yet, and that people needed a first step — a smaller, lighter, simpler, pocketable device, based on the finger-operated tablet.

Looking for a role or function for such a device, the company came up with the idea of a smartphone.

Smartphones certainly existed, but they were a geek toy, nothing more.

Apple was bold enough to make a move that would kill its most profitable line — the iPod — with a new product. Few would be so bold.

I can’t think of any other company that would have been bold enough to invent the iPhone. We might have got to devices as capable as modern smartphones and tablets, but I suspect they’d have still been festooned in buttons and a lot clumsier to use.

It’s the GUI story again. Xerox sponsored the invention and original development but didn’t know WTF to do with it. Contrary to the popular history, it did productise it, but as a vastly expensive specialist tool. It took Apple to make it the standard method of HCI, and it took Apple two goes and many years. The Lisa was still too fancy and expensive, and the original Mac too cut-down and too small and compromised.

The many rivals’ efforts were, in hindsight, almost embarrassingly bad. IBM’s TopView was a pioneering GUI and it was rubbish. Windows 1 and 2 were rubbish. OS/2 1.x was rubbish, and to be honest, OS/2 2.x was the pre-iPhone smartphone of GUI OSes: very capable, but horribly complex and fiddly.

Actually, arguably — and demonstrably, from the Atari ST market — DR GEM was a far better GUI than Windows 1 or 2. GEM was a rip-off of the Mac; the PC version got sued and crippled as a result, so blatant was it. It took MS over a decade to learn from the Mac (and GEM) and produce the first version of Windows with a GUI good enough to rival the Mac’s, while being different enough not to get sued: Windows 95.

Now, 2 decades later, everyone’s GUI borrows from Win95. Linux is still struggling to move on from Win95-like desktops, and even Mac OS X, based on a product which inspired Win95, borrows some elements from the Win95 GUI.

Everyone copies MS, and MS copies Apple. Apple takes bleeding-edge tech and turns geek toys into products that the masses actually want to buy.

Microsoft’s success is founded on the IBM PC, and that was IBM’s response to the Apple ][.

Apple has been doing this consistently for about 40 years. It often takes it 2 or 3 goes, but it does.

  • First time: 8-bit home micros (the Apple ][, an improved version of a DIY kit.)

  • Second time: GUIs (first the Lisa, then the Mac).

  • Third time: USB (on the iMac, arguably the first general-purpose PC designed and sold for Internet access as its primary function).

  • Fourth time: digital music players (the iPod wasn’t even the first with a hard disk).

  • Fifth time: desktop Unix (OS X, based on NeXTstep).

  • Sixth time: smartphones (based on what became the iPad, remember).

  • Seventh time: tablets (the iPad, actually progenitor of the iPhone rather than the other way round).

Yes, there are too many Mac fans, and they’re often under-informed. But there are also far too many Microsoft apologists, and too many Linux ones, too.

I use an Apple desktop, partly because with a desktop, I can choose my own keyboard and pointing device. I hate modern Apple ones.

I don’t use Apple laptops or phones. I’ve owned multiple examples of both. I prefer the rivals.

My whole career has been largely propelled by Microsoft products. I still use some, although my laptops run Linux, which I much prefer.

I am not a fanboy of any of them, but sadly, anyone who expresses fondness or admiration for anything Apple will be inevitably branded as one by the Anti-Apple fanboys, whose ardent advocacy is just as strong and just as irrational.

As will this.
liam_on_linux: (Default)
I'm very fond of Spectrums (Spectra?) because they're the first computer I owned. I'd used my uncle's ZX-81, and one belonging to a neighbour, and Commodore PETs at school, but the PET was vastly too expensive and the ZX-81 too limited to be of great interest to me.

I read an article once that praised Apple for bringing home computers to the masses with the Apple ][, the first home computer for under US$ 1000. A thousand bucks? That was fantasy winning-the-football-pools money!

No, for me, the hero of the home computer revolution was Sir Clive Sinclair, for bringing us the first home computer for under GB £100. A hundred quid was achievable. A thousand would have gone on a newer car or a family holiday.
liam_on_linux: (Default)
In lieu of real content, a repurposed FB comment, 'cos I thought it stood alone fairly well. I'm meant to be writing about containers and the FB comment was a displacement activity.



The first single-user computers started to appear in the mid-1970s, such as the MITS Altair. These had no storage at all in their most minimal form -- you entered code directly into their few hundred bytes of memory (not MB, not kB: the base Altair shipped with 256 bytes).

One of the things that was radical about them was that they had a microprocessor: the CPU was a single chip. Before that, processors were constructed from lots of separate components, as in e.g. the KENBAK-1.

A single-user desktop computer with a microprocessor was called a microcomputer.

So, in the mid- to late-1970s, hard disks were *extremely* expensive -- thousands of $/£, more than the computer itself. So nobody fitted them to microcomputers.

Even floppy drives were quite expensive. They'd double the price of the computer. So the first mass-produced "micros" saved to audio tape cassette. No disk drive, no disk controller -- it was left out to save costs.

If the machine was modular enough, you could add a floppy disk controller later, and plug a floppy drive into that.

With only tape to load and save from, working at 1200 bits per second or so, even small programs of a few kB took minutes to load. So the core software was built into a permanent memory chip in the computer, called a ROM. The computer didn't boot: you turned it on, and it started running the code in the ROM. No loading stage necessary, but you couldn't update or change it without swapping chips. Still, it was really tiny, so bugs were not a huge problem.
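
To put rough numbers on "minutes" -- a quick back-of-the-envelope sketch of mine, counting only the raw bit rate, so real loads (with leaders, headers, start/stop bits and retries) took even longer:

    def load_time_minutes(size_kb, bits_per_second=1200):
        # kB -> bytes -> bits, divided by the raw tape data rate, in minutes
        return size_kb * 1024 * 8 / bits_per_second / 60.0

    for size_kb in (4, 16, 48):
        print("%2d kB at 1200 bps: about %.1f minutes"
              % (size_kb, load_time_minutes(size_kb)))
    # 4 kB ~ 0.5 min, 16 kB ~ 1.8 min, 48 kB ~ 5.5 min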

Later, by a few years into the 1980s, floppy drives fell in price so that high-end micros had them as a common accessory, although still not built in as standard for most.

But the core software was still on a ROM chip. They might have a facility to automatically run a program on a floppy, but you had to invoke a command to trigger it -- the computer couldn't tell when you inserted a diskette.

By the 16-bit era, the mid-1980s, 3.5" drives were cheap enough to bundle as standard. Now, the built-in software in the ROM just had to be complex enough to start the floppy drive and load the OS from there. Some machines still kept the whole OS in ROM though, such as the Atari ST and Acorn Archimedes. Others, like the Commodore Amiga, IBM PC & Apple Macintosh, loaded it from diskette.

Putting it on diskette was cheaper, it meant you could update it easily, or even replace it with alternative OSes -- or for games, do without an OS altogether and boot directly into the game.

But hard disks were still seriously expensive, and needed a separate hard disk controller to be fitted to the machine. Inexpensive home machines like the early or basic-model Amigas and STs didn't have one -- again, it was left out for cost-saving reasons.

On bigger machines with expansion slots, you could add a hard disk controller, and it would have a ROM chip on it that added the ability to boot from a hard disk connected to the controller card. But if your machine was a closed box with no internal slots, it was often impossible to add such a controller; and even when a controller and drive could be added later in the machine's life, the ROMs couldn't be updated, so it wasn't possible to boot from the hard disk.

But this was quite rare. An early model of Mac, the Mac Plus, added SCSI; the PC was always modular; and the higher-end models of STs, Amigas and Archimedes had hard disk interfaces.

The phase of machines with HDs but booting from floppy was fairly brief and they weren't common.

If the on-board ROMs could be updated, replaced, or just supplemented with extra ones in the HD controller, you could add the ability to boot from HD. If the machine booted from floppy anyway, this wasn't so hard.



Which reminds me -- I am still looking for an add-on hard disk for an Amstrad PCW, if anyone knows of such a thing!
liam_on_linux: (Default)
Today, Linux is Unix. And Linux is a traditional, old-fashioned, native-binary, honking great monolithic lump of code in a primitive, unsafe, 1970s language.

The sad truth is this:

Unix is not going to evolve any more. It hasn't evolved much in 30 years. It's just being refined: the bugs are gradually getting caught, but no big changes have happened since the 1980s.

Dr Andy Tanenbaum was right in 1991. Linux is obsolete.

Many old projects had a version numbering scheme like, e.g., SunOS:

Release 1.0, r2, r3, r4...

Then a big rewrite: Version 2! Solaris! (AKA SunOS 5)

Then Solaris 2, 3, 4, 5... now we're on 11 and counting.

Windows reset after v3, with NT. Java did the reverse after 1.4: Java 1.5 was "Java 5". Looks more mature, right? Right?

Well, Unix dates from between 1970 and the rewrite in C in 1972. Motto: "Everything's a file."

Unix 2.0 happened way back in the 1980s and was released in 1991: Plan 9 from Bell Labs.

It was Unix, but with even more things turned into files. Integrated networking, distributed processes and more.

The world ignored it.

Plan 9 2.0 was Inferno: it went truly platform-neutral. C was replaced by Limbo, type-safe, compiling code down to binaries that ran on Dis, a universal VM. Sort of like Java, but better and reaching right down into the kernel.

The world ignored that, too.

Then came the idea of microkernels. They've been tried lots of times, but people seized on the idea of early versions that had problems -- Mach 1 and Mach 2 -- and failed projects such as the GNU HURD.

They ignore successful versions:
* Mach 3 as used in Mac OS X and iOS
* DEC OSF/1, later called DEC Tru64 Unix, also based on Mach
* QNX, a proprietary true-microkernel OS used widely around the world since the 1980s, now in Blackberry 10 but also in hundreds of millions of embedded devices.

All are proper solid commercial successes.

Now, there's Minix 3, a FOSS microkernel with the NetBSD userland on top.

But Linux is too established.

Yes, NextBSD is a very interesting project. But basically, it's just fitting Apple userland services onto FreeBSD.

So, yes, interesting, but FreeBSD is a sideline. Linux is the real focus of attention. FreeBSD jails are over a decade old, but look at the fuss the world is making about Docker.

There is now too much legacy around Unix -- and especially Linux -- for any other Unix to get much traction.

We've had Unix 2.0, then Unix 2.1, then a different, less radical, more conservative kind of Unix 2.0 in the form of microkernels. Simpler, cleaner, more modular, more reliable.

And everyone ignored it.

So we're stuck with the old one, and it won't go away until something totally different comes along to replace it altogether.
liam_on_linux: (Default)
Since it looks like my FB comment is about to get censored, I thought I'd repost it...

-----

Gods, you are such a bunch of newbies! Only one comment out of 20 knows the actual answer.

History lesson. Sit down and shaddup, ya dumb punks.

Early microcomputers did not have a single PCB with all the components on it. The components were on separate cards, all connected together via a bus on a board called a backplane, of which there were 2 types: active and passive. The backplane didn't do anything except interconnect the other cards.

Then, with increasing integration, a main board with the main controller logic on it became common, but this had slots on it for other components that were too expensive to include. The pioneer was the Apple II, known affectionately as the Apple ][. The main board had the processor, RAM and glue logic. Cards provided facilities such as printer ports, an 80 column display, a disk controller and so on.

But unlike the older S100 bus and similar machines, these boards did nothing without the main board. So they were called daughter boards, and the one they plugged into was the motherboard.

Then came the Mac. This had no slots so there could be no daughterboards. Nothing plugged into it, not even RAM -- it accepted no expansions at all; therefore it made no sense to call it a motherboard.

It was not the only PCB in the computer, though. The original Mac, remember, had a 9" mono CRT built in. An analogue display, it needed analogue electronics to control it. These were on the Analog Board (because Americans can't spell.)

The board with the digital electronics on it -- the bits that did the computing, in other words the logic -- was the Logic Board.

2 main boards, not one. But neither was primary, neither had other subboards. So, logic board and analog board.

And it's stuck. There are no expansion slots on any modern Mac. They're all logic boards, *not* motherboards because they have no children.

https://www.ifixit.com/Teardown/Macintosh+128K+Teardown/21422
