That is indeed a good article on OS 2200, and I've added the link to Wikipedia's page on that OS.
I'd argue that two of the greatest failures in microcomputer OSes have come about when companies have tried to do large-scale R&D with high budgets:
The first was IBM's development of OS/2 from the basic v1.0 to the full thing, with Presentation Manager, Communications Manager, and so on. They used up a lot of staff in the process - my friend who worked there has never really recovered his mental health - because they didn't seem to be able to get organised. They had more formal methods and radical ideas than anyone could keep track of, and they still failed to produce anything that could compete for usefulness and functionality with Windows 3.x and Classic MacOS.
The second was Taligent/Pink, which made even less progress.
The trick isn't spending lots of money. It's proceeding in achievable steps. Linux has been gradually re-engineered from a fairly basic kernel for a single platform into something highly capable and widely portable, and a lot of that is due to doing the job in a sensibly incremental manner.
The other OSes that are succeeding at present are macOS/iOS and Windows NT. macOS/iOS was done by fitting together existing parts, plus Apple writing the part they're good at, which is the GUI. Windows NT did benefit from a large-scale R&D project, but it wasn't done by Microsoft: it was done at DEC, for the PRISM and MICA projects, and when DEC cancelled them, Microsoft scooped up the ideas and the people from the MICA OS project.
My job is doing porting and platforms for a large and long-lived mathematical modeller. In the past, it ran on Apollo Domain/OS, and Data General AOS, and VMS, and the code wasn't significantly different for any of them. It did not take well to classic MacOS, because it wasn't designed to deliberately yield control at short regular intervals, so we never shipped that, and nobody has ever wanted it on a mainframe. Code is code, once you dig in a bit.