Paul Boddie's Free Software-related blog


Archive for the ‘technology’ Category

Unix, the Minicomputer, and the Workstation

Monday, February 9th, 2026

Previously, I described the rise of Norsk Data in the minicomputer market and the challenges it faced, shared with other minicomputer manufacturers like Digital, but also mainframe companies like ICL and IBM, along with other technology companies like Xerox. Norsk Data may have got its commercial start in shipboard computers, understandable given Norway’s maritime heritage, but its growth was boosted by a lucrative contract with CERN to supply 16-bit minicomputers for use in accelerator control systems. Branching out into other industries and introducing 32-bit processors raised the company’s level of ambition, and soon enough Norsk Data’s management started seeing the company as a credible rival to Digital and other established companies.

Had the minicomputing paradigm remained ascendant, itself disrupting the mainframe paradigm, then all might have gone well, but just as Digital and others had to confront the emergence of personal computing, so did Norsk Data. Various traditional suppliers including Digital were perceived as tackling the threat from personal computers rather ineffectively, but they could not be accused of not having a personal computing strategy of their own. The strategists at Norsk Data largely stuck to their guns, however, insisting that minicomputers and terminals were the best approach to institutional computing.

Adamant that their NOTIS productivity suite and other applications were compelling enough reasons to invest in their systems, they tried to buy their way into new markets, ignoring the dynamics that were leading customers in other directions and towards other suppliers. Even otherwise satisfied customers were becoming impatient with the shortcomings of Norsk Data’s products and its refusal to engage substantially with emerging trends like graphical user interfaces, demonstrated by the Macintosh, a variety of products available for IBM-compatible personal computers, and the personal graphical workstation.

Worse is not Better

With Norsk Data’s products under scrutiny for their suitability for office applications, other deficiencies in their technologies were identified. The company’s proprietary SINTRAN III operating system still only offered a non-hierarchical filesystem as standard, with a conservative limit on the number of files each user could have. By late 1984, the Acorn Electron, my chosen microcomputer from the era, could support a hierarchical filesystem on floppy disks. And yet, here we have a “superminicomputer” belatedly expanding its file allowance in a simple, flat, user storage area from 256 files to an intoxicating 4096 on a comparatively huge and costly storage volume.

As one report noted, “a fundamental need with OA systems is for a hierarchical system”, which Norsk Data had chosen to provide via a separate NOTIS-DS (Data Storage) product, utilising a storage database that was generally opaque to the operating system’s own tools. A genuine hierarchical filesystem was apparently under development for SINTRAN IV, distinct from efforts to provide the hierarchical filesystem demanded by the company’s Unix implementation, NDIX.

SINTRAN’s advocates seem to have had the same quaint ideas about commands and command languages as advocates of other systems from the era, bemoaning “terse” Unix commands and its “powerful but obscure” shell. Some of them evidently wanted to have it both ways, trotting out how the file-listing command, which ordinarily prompted the user in various ways, could be made to operate in a more concise and less interactive fashion if written in an abbreviated form. Commands and filenames could generally be abbreviated and still be located, provided that the abbreviation could be unambiguously expanded to the original name concerned.

Of course, such “magic” was just a form of implicitly applying wildcard characters in the places where hyphens were present in an abbreviated name, and this could presumably have been problematic in certain situations. It comes across as a weak selling point indeed, at least if one is trying to highlight the powerful features of an operating system, rather than any given command shell. Especially when the humble Acorn Electron also supported things like abbreviated commands and abbreviated BASIC keywords, the latter often being a source of complaint by reviewers appalled at the effects on program readability.
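The mechanism, as described, can be sketched in a few lines of Python (purely illustrative: the command names here are invented, and the real SINTRAN matching rules were doubtless more involved). Each hyphen-separated component of an abbreviation acts as a prefix, which amounts to implicitly inserting wildcards at the hyphens:

```python
import fnmatch

# Hypothetical command names, invented for illustration; the real
# SINTRAN command set differs.
COMMANDS = ["LIST-FILES", "LIST-DEVICES", "DELETE-FILE", "RENAME-FILE"]

def expand_abbreviation(abbrev, names=COMMANDS):
    """Expand an abbreviated name, succeeding only when the expansion
    is unambiguous, in the manner described for SINTRAN."""
    # Treat each hyphen-separated component as a prefix, which amounts
    # to implicitly inserting wildcards at the hyphens: "L-F" -> "L*-F*".
    pattern = "-".join(part + "*" for part in abbrev.split("-"))
    matches = [name for name in names if fnmatch.fnmatchcase(name, pattern)]
    if len(matches) == 1:
        return matches[0]
    raise LookupError(f"{abbrev!r} is ambiguous or unknown: matches {matches}")

print(expand_abbreviation("L-F"))    # LIST-FILES
print(expand_abbreviation("DEL-F"))  # DELETE-FILE
```

An abbreviation like “LIST-” would match both LIST-FILES and LIST-DEVICES and be rejected as ambiguous, which is presumably where the problematic situations alluded to above would arise.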

For real-time purposes, SINTRAN III seemingly stacked up well against Digital’s RSX-11M, and offered a degree of consistency between Norsk Data’s 16- and 32-bit environments. Nevertheless, its largely satisfied users saw the benefits in what Unix could provide and hoped for an environment where SINTRAN and Unix could coexist on the same system, with the former supporting real-time applications.

Perhaps taking such remarks into account, Norsk Data commissioned Logica to port 4.2BSD to the ND-500 family. Due to the architecture of such systems, its Unix implementation, NDIX, would run on the ND-500 processor but lean on the ND-100 front-end processor running SINTRAN for peripheral input/output and other support, such as handling interrupts occurring on the ND-500 processor itself. This arrangement introduced the notion of “shadow programs”, “shadow processes”, or “twin processes” running on the ND-100 to support the handling of page faults.

Burdening the 16-bit component of the system with the activity of its more powerful 32-bit companion processor seems to have led to some dissatisfaction with the performance of the resulting system. Claims of such problems, particularly in connection with NDIX, being resolved and delivering a better experience than a VAX-11/785 seem rather fragile given the generally poor adoption of NDIX and the company’s unwillingness to promote it. Indeed, in 1988, amidst turbulent times and the need to adapt to market realities, Unix at Norsk Data was something apparently confined to Intel-based systems running SCO Unix and merely labelled up to look like models in the ND-5000 range that had succeeded the ND-500.

Half-hearted adoption of Unix was hardly confined to Norsk Data. Unix had been developed on Digital hardware and that company had offered a variant for its PDP-11 systems, but it only belatedly brought its VAX-based product, Ultrix, to that system in 1984, seven years after the machine’s introduction. Even then, certain models in various ranges would not support Ultrix, or at least not initially. Mitigating this situation was the early and continuing availability of the Berkeley Software Distribution (BSD) for the platform, this having been the basis for Ultrix itself.

Mainframe incumbents like IBM and ICL were also less than enthusiastic about Unix, at least as far as those mainframes were concerned. ICL developed a Unix environment for its mainframe operating system before going round again on the concept and eventually delivering its VME/X product to a market that, after earlier signals, perhaps needed to be convinced that the company was truly serious about Unix and open systems.

IBM had also come across as uncommitted to Unix, in contrast to its direct competitor in the mainframe market, Amdahl, whose implementation, UTS, had resulted from an early interest in efforts outside the company to port Unix to IBM-compatible mainframe hardware. IBM had contracted Interactive Systems Corporation to do a Unix port for the PC XT in the form of PC/IX, and as a response to UTS, ISC also did a port called IX/370 to IBM’s System/370 mainframes. Thereafter, IBM would partner with Locus Computing Corporation to make AIX PS/2 for its personal computers and, eventually in 1990, AIX/370 to update its mainframe implementation.

IBM’s Unix efforts coincided with its own attempts to enter the Unix workstation market, initially seeing limited success with the RT PC in 1986, and later seeing much greater success with the RS/6000 series in 1990. ICL seemed more inclined to steer customers favouring Unix towards its DRS “departmental” systems, some of which might have qualified as workstations. Both companies increasingly found themselves needing to convince such customers that Unix was not just an afterthought or a cynical opportunity to sell a few systems, but a genuine fixture in their strategies, provided across their entire ranges. Nevertheless, ICL maintained its DRS range and eventually delivered its DRS 6000 series, allegedly renamed from DRS 600 to match IBM’s model numbering, based on the SPARC architecture and System V Unix.

A lack of enthusiasm for Unix amongst minicomputer and mainframe vendors contrasted strongly with the aspirations of some microcomputer manufacturers. Acorn’s founder Chris Curry articulated such enthusiasm early in the 1980s, promising an expansion to augment the BBC Micro and possibly other models that would incorporate the National Semiconductor 32016 processor and run a Unix variant. Acorn’s approach to computer architecture in the BBC Micro was to provide an expandable core that would be relegated to handling more mundane tasks as the machine was upgraded to newer and better processors, somewhat like the way a microcontroller may be retained in various systems to handle input and output, interrupts and so on.

(Meanwhile, another microcomputer, the Dimension 68000, took the concept of combining different processors to an extreme, but unlike Acorn’s more modest approach of augmenting a 6502-based system with potentially more expensive processor cards, the Dimension offered a 68000 as its main processor to be augmented with optional processor cards for a 6502 variant, Z80 and 8086, these providing “emulation” capabilities for Apple, Kaypro and IBM PC systems respectively. Reviewers were impressed by the emulation support but unsure as to the demand for such a product in the market. Such hybrid systems seem to have always been somewhat challenging to sell.)

Acorn reportedly engaged Logica to port Xenix to the 32016, despite other Unix variants already existing, including Genix from National itself along with more regular BSD variants seen on the Whitechapel MG-1. Financial and technical difficulties appear to have curtailed Acorn’s plans, some of the latter involving National’s struggle to deliver working silicon, but another significant reason was Acorn’s architectural approach, the pitfalls of which were explored and demonstrated by another company with connections to Acorn from the early 1980s, Torch Computers.

Tasked with covering the business angle of the BBC Micro, Torch delivered the first Z80 processor expansion for the system. The company then explored Acorn’s architectural approach by releasing second processor expansions that featured the Motorola 68000, intended to run a selection of operating systems including Unix. Although products were brought to market, and some reports even suggested that Torch had become one of the principal Unix suppliers in the UK in selling its Unicorn-branded expansions, it became evident that connecting a more powerful processor to an 8-bit system and relying on that 8-bit system to be responsible for input and output processing rather impeded the performance of the resulting system.

While servicing things like keyboards and printers was presumably within the capabilities of the “host” 8-bit system, storage needed higher bandwidth, particularly if hard drives were to be employed, and especially if increasingly desirable features such as demand paging were to be available in any Unix implementation. Torch realised that to remedy such issues, they would need to design a system from scratch, with the main processor and supporting chips having direct access to storage peripherals, leaving pretty much all of the 8-bit heritage behind. Their Triple-X workstation utilised the 68010 and offered a graphical Unix environment, elements of which would be licensed by NeXT for its own workstations.

Out of Acorn’s abandoned plans for a range of business machines, the company’s Cambridge Workstation, consisting of a BBC Micro with the 32016 expansion plus display, storage and keyboard, ran a proprietary operating system that largely offered the same kind of microcomputing paradigm as the BBC Micro, boosted by high-level language compilers and tools that were far more viable on the 32016 than the 6502. Nevertheless, Unix would remain unsupported, the memory management unit omitted from delivered forms of the Cambridge Workstation and related 32016 expansion. Eventually, Acorn would bring a more viable product to market in the form of its R-series workstations: entirely 32-bit machines based on the ARM architecture, albeit with their own shortcomings.

Norsk Data had also contracted out its Unix porting efforts to Logica, but unlike IBM, who had grasped the nettle with both hands to at least give the impression of taking Unix support seriously, Norsk Data took on the development of NDIX and seemingly proceeded with it begrudgingly, only belatedly supporting newly introduced models. Even in captive markets where opportunities could be engineered, the company fell short. In one initiative, Norsk Data systems were procured by the Norwegian state under a dubious scheme that effectively subsidised the company through sales of systems and services to regional computing centres, and yet the models able to run NDIX were not necessarily the expensive flagship models being sold. With limited support for open systems and an emphasis on the increasingly archaic terminal-based model of providing computing services, that initiative was a costly failure.

Had NDIX been a compelling product, Norsk Data would have had a substantial incentive to promote it. From various documents, it seems that a certain amount of the NDIX development work was carried out at Norsk Data’s UK subsidiary, perhaps suffering in the midst of Wordplex-related intrigues towards the end of the 1980s. But at CERN, where demand might have been more easily generated, NDIX was regarded as “not being entirely satisfactory” after two years of effort trying to deliver it on the ND-500 series. The 16-bit ND-100 front-end processor was responsible for input and output, including to storage media, and the overheads imposed by this slower, more constrained system undermined the performance of Unix on the ND-500.

Norsk Data, in conjunction with its partners and customers, had rediscovered what Torch had presumably identified in maybe 1984 or 1985 before that company swiftly pivoted to a new systems architecture to more properly enter the Unix workstation business at the start of 1986. One could argue that the ND-5000 series and later refinements, arriving from 1987 onwards, would change this situation somewhat for Norsk Data, but time and patience were perhaps running out even in the most receptive of environments to these newer developments.

The Spectre of the Workstation

The workstation looms large in the fate of Norsk Data, both as the instrument of its demise as well as a concept the company’s leadership never really seemed to fathom. Already in the late 1970s, the industry was being influenced by work done at Xerox on systems such as the Alto. Companies like Three Rivers Computer Corporation and Apollo Computer were being founded, and it was understandable that others, even in Norway, might be inspired and want a piece of the action.

To illustrate how the fate of many of these companies is intertwined, ICL had passed up on the opportunity of partnering with Apollo, choosing Three Rivers and adopting their PERQ workstation instead. At the time, towards the end of the 1970s, this seemed like the more prudent strategy, Three Rivers having the more mature product and being more willing to commit to Unix. But Apollo rapidly embraced the Motorola 68000 family, while the PERQ remained a discrete logic design throughout its commercial lifetime, eventually being described as providing “poor performance” as ICL switched to reselling Sun workstations instead.

Within Norsk Data, several engineers formulated a next-generation machine known as Nord-X, later deciding to leave the company and establish a new enterprise, Sim-X, to make computers with graphical displays based on bitslice technology, rather like machines such as the Alto and PERQ. One must wonder whether even at this early stage, a discussion was had about the workstation concept, only for the engineers to be told that this was not an area Norsk Data would prioritise.

Sim-X barely gets a mention in Steine’s account of the corporate history (“Fenomenet Norsk Data”, “The Norsk Data Phenomenon”), but it probably deserves a treatment all by itself. The company apparently developed the S-2000 for graphical applications, initially for newspaper page layout, but also for other kinds of image processing. Later, it formed the basis of an attempt to make a Simula machine, in the same vein as the once-fashionable Lisp machine concept. Although Sim-X failed after a few years, one of its founders pursued the image processing system concept in the US with a company known as Lightspeed. Frustratingly minimal advertising for, and other coverage of, the Lightspeed Qolor can be found in appropriate industry publications of the era.

Since certain industry trends appear to have infiltrated the thinking at Norsk Data and motivated certain strategic decisions, it was not surprising that its early workstation efforts were focused on specific markets. It was not unusual for many computer companies in the early 1980s to become enthusiastic about computer-aided design (CAD) and computer-aided manufacturing (CAM). Even microcomputer companies like Acorn and Commodore aspired to have their own CAD workstations, focused on electronic design automation (EDA) and preferably based on Unix, largely having to defer their realisation until a point in time when they had the technical capabilities to deliver something credible.

Norsk Data had identified a vehicle for such aspirations in the form of Dietz Computer Systems, producer of the Technovision CAD system for mechanical design automation. This acquisition seemed to work well for the combined company, allowing the CAD operation to take advantage of Norsk Data’s hardware and base complete CAD systems on it. Such special-purpose workstations were arguably justifiable in an era where particular display technologies were superior for certain applications and where computing resources needed to be focused on particular tasks. However, more versatile machines, again inspired by the Alto and PERQ, drove technological development and gradually eliminated the need to compromise in the utilisation of various technologies. For instance, screens could be high-resolution, multicolour and support vector and bitmap graphics at acceptable resolutions, even accelerating the vector graphics on a raster display to cater to traditional vector display applications.

In its key scientific and engineering markets, Norsk Data had to respond to industry trends and the activities of its competitors. Hewlett-Packard may have embraced Unix and introduced PA-RISC to its product ranges in 1986, largely to Norsk Data’s apparent disdain, but it had also introduced an AI workstation. Language system workstations had emerged in the early 1980s, initially emphasising Pascal as the PERQ had done. Lisp machines had for a time been all the rage, emphasising Lisp as a language for artificial intelligence, knowledge base development, and for application to numerous other buzzword technologies of the era, empowering individual users with interactive, graphical environments that were meant to confer substantial productivity benefits.

Thus, Norsk Data attempted to jump on the Lisp machine bandwagon with Racal, the company that would later spin off the telecoms giant Vodafone, hoping to leverage the ability to microcode the ND-500 series to produce a faster, more powerful system than the average Lisp machine vendor. Predictably, claims were made about this Knowledge Processing System being “10 to 20 times more powerful than a VAX” for the intended applications. Reportedly, the company delivered some systems, although Steine contradicts this, claiming that the only system that was sold – to the University of Oslo – was never delivered. This is not entirely true, either, judging from an account of “a burned-up backplane” in the KPS-10 delivered to the Institute for Informatics. Intriguingly, hardware from one delivered system has since surfaced on the Internet.

One potentially insightful article describes the memory-mapped display capabilities of the KPS-10, supporting up to 36 bitmapped monochrome screens or bitplanes, with the hardware supposedly supporting communications with graphical terminals at distances of 100 metres. This suggests that, once again, the company’s dogged adherence to the terminal computing paradigm had overridden any customer demand for autonomous networked workstations. It had been noted alongside the ambitious performance claims that increased integration using gate arrays would make a “single-user work station” possible, but such refinements would only arrive later with the ND-5000 series. In the research community, Racal’s brief presence in the AI workstation market, aiming to support Lisp and Prolog on the ND-500 hardware, left users turning to existing products and, undoubtedly, conventional workstations once Racal and Norsk Data pulled out.

Norsk Data had announced a broader collaboration with Matra in France, and the two companies went on to announce a planned “desktop minisupercomputer” emphasising vector processing, another area where competing vendors had introduced products in venues like CERN, threatening the adoption of Norsk Data’s products. Although the “minisupercomputer” aspect of such an effort might have positioned the product alongside other products in the category, the “desktop” label is a curious one coming from Norsk Data. Perhaps the company had been made aware of systems like the Silicon Graphics IRIS 4D and had hoped to capture some of the growing interest in such higher-performance workstations, redirecting that interest towards its own products and advocating for solutions similar to that of the KPS-10. In any case, nothing came of the effort.

The intrusion of categories like “desktop” and “workstation” into Norsk Data’s marketing will have been the result of shifting procurement trends in venues like CERN. Norsk Data had been a core vendor to CERN, contracted to provide hardware under non-competitive arrangements, but conditions in the organisation had started to change, and with efforts ramping up on the Large Electron-Positron collider (LEP), procurement directed towards workstations also started to ramp up. Initially, Apollo featured prominently, gaining from their “first mover” advantage, but they were later joined by Sun Microsystems. Even IBM’s PC RT had appealed to some in the organisation. And despite Digital being perceived as a minicomputer company, it was still managing to sell VAXstations into CERN.

One has to wonder what Norsk Data’s sales representatives in France must have made of it all. Hundreds of workstations from other companies being bought in, millions of Swiss francs being left on the table, and yet the models being brought to market for them to sell were based on the Butterfly workstation concept, featuring Technostation models for CAD and Teamstation models for NOTIS, neither of them likely to appeal to people looking beyond the personal computer and wanting workstation capabilities on their desk. A Butterfly workstation based on the 80286 running NOTIS on a 16-bit minicomputer expansion card must have seemed particularly absurd and feeble.

Increased integration brought the possibilities of smaller deskside systems from Norsk Data, with the ND-5800 and later models perhaps being more amenable to smaller workloads and workstation applications. But it seems that the mindset persisted of such systems being powerful minicomputers to be shared, rather than powerful workstations for individuals. Graphics cards embedding Motorola 68000 family processors augmented Technovision models targeting the CAD market, but as the end of the 1980s beckoned, such augmentations fell rather short of the kind of hardware companies like Sun and Silicon Graphics were offering. Meanwhile, only the company’s lower-end models could sell at around the kind of prices set by the mainstream workstation vendors, but with low-end performance to match.

In a retrospective of the company, chief executive Rolf Skår remarked that the workstation vendors had deliberately cut margins instead of charging what was considered to be the value of a computer for a particular customer, which is perhaps another way of describing a form of pricing where what the market will bear determines how high the price will be set, squeezing the customer as much as possible. The article, featuring an erroneous figure for the price of a typical Norsk Data computer (100,000 Norwegian crowns, broadly equivalent to $10,000), misleads the reader into thinking that the company’s machines were not that expensive after all and even competitive on price with a typical Sun workstation.

Evidently, there was confusion from Skår or the writer about the currencies involved, and such a computer would tend to cost more like 1 million crowns or $100,000: ten times that of the most affordable workstations. Norsk Data’s top-end machines ballooned in price in the mid-1980s, costing up to $500,000 for some single-processor models, and potentially $1.5 million for the very top-end four-processor configurations. Sun’s first SPARC-based model, introduced at around $40,000, could still outperform such “peak minicomputer” VAX and Norsk Data models while selling at a tenth of the price.

Norsk Data’s opportunistic pricing presumably tracked that of its big rival, Digital, and its own pricing of VAX models, the assumption being that if a machine was promoted as faster or better than the VAX, then potential customers would choose the Norsk Data machine, pricing and other characteristics being generally similar. When companies had bought into a system with a big investment in information technology, this might have been a viable strategy, but as such technology became cheaper and available from numerous other providers, it became harder even for Digital to demand such a premium.

One can almost sense a sort of accusation that the workstation manufacturers ruined it for everyone, cheapening an otherwise lucrative market, but workstations were, of course, just another facet of the personal computing phenomenon and the gradual democratisation of computing itself. In the end, customers were always going to choose systems that delivered the computing power and user experience they desired, and it was simply a matter of those companies identifying and addressing such desires and needs, making such systems available at steadily more affordable prices, eventually coming to dominate the market. That Norsk Data’s management failed to appreciate such emerging trends, even when spelled out in black and white in procurement plans involving considerable sums of money, suggests that the blame for the company’s growth bonanza eventually coming to an end lay rather closer to home.

The Performance Game

With the ND-500 series, Norsk Data had been able to deliver floating-point performance that was largely competitive within the price bracket of its machines, and favourable comparisons were made between various ND-500 models and those in the VAX line-up, not entirely unjustifiably. Manufacturers who had not emphasised this kind of computation in their products found some kind of salvation in the form of floating-point processors or accelerators, made available by specialists like Mercury Computer Systems and Floating Point Systems, aided by the emergence of floating-point arithmetic chips from AMD and Weitek.

Presumably to fill a gap in its portfolio, Digital offered the Floating Point Systems accelerators in combination with some models. Meanwhile, the company had improved the performance of its VAX 8000 series to ostensibly eliminate many of Norsk Data’s performance claims. And curiously, Matra, who were supposed to be collaborating with Norsk Data on a vector computer, even offered the FPS products with the Norsk Data systems it had been selling, presumably to remedy a deficiency in its portfolio, but it is difficult to really know with a company like Matra.

Prior to the introduction of the ND-5000 series in 1987, Norsk Data had largely kept pace with their perceived competitors, alongside a range of other companies emphasising floating-point computation, but the company now needed to re-establish a lead over those competitors, particularly Digital. The ND-5000 series, employing CMOS gate array technology, was labelled as a “vaxkiller” in aggressive publicity but initially fell short of Norsk Data’s claims of better performance than the two-processor VAX 8800.

Using the only figures available to us, the top-end single-processor ND-5800 managed only 6.5 MWIPS and thus around 5 times the performance of the original VAX-11/780. In contrast, the dual-processor VAX 8800 was rated at around 9 times faster than its ancestor, with the single-processor VAX 8700 (later renamed to 8810) rated at around 6 times faster. All of these machines cost around half a million dollars. And yet, 1987 had seen some of the more potent first-generation RISC systems arrive in the marketplace from Hewlett-Packard and Silicon Graphics, both effectively betting their companies on the effectiveness of this technological phenomenon.

The performance of Norsk Data minicomputers and their competitors from 1975 to 1987. Although the ND-500 models were occasionally faster than the VAX machines at floating-point arithmetic, according to Whetstone benchmark results, Digital steadily improved its models, with the VAX 8700 of 1986 introducing similar architectural improvements to those introduced in the ND-5000 processors. Note how one of Hewlett-Packard’s first PA-RISC systems makes its mark alongside these established architectures.
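Taking the VAX-11/780 as the 1× baseline, the relative multiples quoted above imply approximate absolute Whetstone ratings. A back-of-envelope sketch in Python (the roughly 1.3 MWIPS baseline is merely inferred from the 6.5 MWIPS and 5× figures given for the ND-5800, not an independently sourced number):

```python
# Back-of-envelope: infer approximate absolute Whetstone ratings from
# the relative performance multiples quoted in the text. The baseline
# is derived from the ND-5800 figures (6.5 MWIPS at 5 times a
# VAX-11/780), not independently sourced.
nd5800_mwips = 6.5
nd5800_multiple = 5
baseline = nd5800_mwips / nd5800_multiple  # ~1.3 MWIPS for the VAX-11/780

relative = {"VAX 8800 (dual)": 9, "VAX 8700/8810": 6, "ND-5800": 5}
for model, multiple in relative.items():
    print(f"{model}: ~{multiple * baseline:.1f} MWIPS")
```

On these assumptions, the dual-processor VAX 8800 lands at around 11.7 MWIPS and the VAX 8700 at around 7.8 MWIPS, which puts the shortfall of the ND-5800’s 6.5 MWIPS against Norsk Data’s claims into perspective.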

Just as Digital’s management had kept RISC efforts firmly parked in the realm of “research”, even with completed RISC systems running Unix and feeding into more advanced research, Norsk Data’s technical leadership publicly dismissed RISC as a fad and their RISC competitors as unimpressive, even though the biggest names in computing – IBM, HP and Motorola – were, at that very moment, formulating, developing and even delivering RISC products that threatened Norsk Data’s own products. RISC processors influenced and transformed the industry, and as the 1990s progressed, further performance gains were enabled by RISC architectural design principles.

To be fair to the designers at both Digital and Norsk Data, they also incorporated techniques often associated with RISC, just as Motorola had done while still downplaying the RISC phenomenon. The VAX 8800 and ND-5800 thus incorporated architectural improvements over their predecessors, such as pipelining, that provided at least a doubling of performance. The ND-5000 ES range, announced in 1988 and apparently available from 1989, was claimed to deliver another doubling in performance, seemingly through an increase in operating frequency. This, however, would merely put such server products alongside relatively low-cost RISC workstations using the somewhat established MIPS R2000 processor.

With Digital’s VAX 9000 project stalling, it was left to the CVAX, Rigel and NVAX processors to progressively follow through with the enhancements made in the VAX 8800, propagating them to more highly integrated processors and producing considerably faster and cheaper machines towards the end of the 1980s and into the early 1990s. But this consolidation, starting with the VAX 6000 series, and the accompanying modest gains in performance led to customer unease about the trailing performance of VAX systems, particularly amongst those customers interested in running Unix.

Thus, Digital introduced workstations and servers based on the MIPS architecture, delivering a performance boost almost immediately to those customers. A VAX 6000 processor could deliver around 7 times the performance of the original VAX-11/780, whereas a MIPS R2000-based DECstation 3100 could deliver around 11 times the performance. Crucially, such systems did not cost hundreds of thousands of dollars, but mere tens of thousands, with the lowest-end workstations costing barely ten thousand dollars.

To keep pace with these threats, it seems that Norsk Data’s final push involved a ramping up of the ND-5830 “Rallar” processor from an operating frequency of 25MHz to 45MHz in 1990 or so, just as the MIPS R3000 was proliferating in products from several vendors at more modest 25MHz and 33MHz frequencies but still nipping at the heels of this final, accelerated ND-5850 product. MIPS would suffer delays and limited availability of faster products like the R6000, with consequences for the timely delivery of its 64-bit R4000 architecture. Nevertheless, products from IBM in its emerging RS/6000 range would arrive in the same year and demonstrate comprehensively superior performance to Norsk Data’s entire range.

Whetstone benchmark figures, particularly for the latter ND-5830 and ND-5850, are impressive. However, there may be caveats to claims of competitive performance. In one case, an ND-5800 system was reported as running a computational model in around 5 minutes with one set of parameters and 70 minutes with another. Meanwhile, the same inputs required respective running times of around 9 minutes and 78 minutes on a VAXstation 3500. The VAXstation 3500 was a 3 MWIPS system, whereas the ND-5800 was rated at around 6.5 MWIPS, and yet for a larger workload, it seems that any floating-point processing advantage of the ND-5800 was largely eliminated.
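To put those reported figures side by side, here is a minimal sketch (with all numbers approximate and taken only from the reports mentioned above) comparing the actual speedups with the one the Whetstone ratings would predict:

```python
# Reported run times in minutes for the two workloads (approximate figures).
nd5800 = {"smaller": 5.0, "larger": 70.0}
vaxstation_3500 = {"smaller": 9.0, "larger": 78.0}

# Rated floating-point performance: roughly 6.5 MWIPS versus 3 MWIPS,
# predicting a speedup of about 2.2x in the ND-5800's favour.
rated_speedup = 6.5 / 3.0

for workload in ("smaller", "larger"):
    actual_speedup = vaxstation_3500[workload] / nd5800[workload]
    print(f"{workload} workload: ND-5800 roughly {actual_speedup:.1f}x faster")
```

For the smaller workload the ND-5800 comes out around 1.8 times faster, already somewhat below what the ratings would suggest, but for the larger workload the advantage shrinks to little more than 1.1 times.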

As for the integer or general performance of the ND-500 and ND-5000 families, Norsk Data appear to have been evasive. The company used “MIPS” to mean Whetstone MIPS, measuring floating-point performance, which was perhaps excusable for “number crunchers” but less applicable to other applications and a narrow measure that gave way to others over time, anyway. Otherwise, “relative performance” measures and numbers of users were often given, leaving us to wonder how the systems actually stacked up. LINPACK benchmark figures are few and far between, which is odd given the emphasis Norsk Data placed on numerical computation when promoting the systems.

Another benchmark employed by Norsk Data concerned the company’s pivot to the “transaction processing” market with their high-end systems. Introducing the tpServer series based on its final ND-5000 range, the company stated a TP1 benchmark score of 10 transactions per second for its single-processor ND-5700 model, with a relative performance factor suggesting around 30 transactions per second for its single-processor ND-5850 model. Interestingly, its Uniline 88 range, introduced as the company also pivoted to more mainstream technology, was based on the Motorola 88000 architecture and also appears to have offered around 30 transactions per second, with similar scalability options involving up to four processors as with its tpServer and other traditional products.

The MC88100 used in the Uniline 88 showed similar general performance to competitors using the MIPS R3000 and SPARC processors, so we might conclude that the ND-5850 may have been comparable to and competitive with the mainstream at the turn of the 1990s. But this would mean that the ND-5000 series no longer offered any hope of differentiating itself from the mainstream in terms of performance. With Unix not being prioritised for the series, either, further development of the platform must have started to look like a futile and costly exercise, appealing to steadily fewer niche customers and gradually losing the revenue necessary for the substantial ongoing investment needed to keep it competitive. Switching to the 88000 family would give comparable performance, established and accepted Unix implementations, and broader industry support.

The ND-5850 arrived at a time when the MIPS R6000 and products from other industry players were experiencing difficulties in what might be considered a detour into emitter-coupled logic (ECL) fabrication, motivated by the purported benefits of the technology, somewhat replaying events from earlier times. Back in 1987, Norsk Data had reportedly resisted the temptation to adopt ECL, sticking with the supposedly “ignored” (but actually ubiquitous) CMOS technology, and advertising this wisdom in the press. MIPS and Control Data Corporation would eventually bring the R6000 to market, but with far less success than hoped.

History would run full circle, however, and with the ND-5000 architecture having been consigned to maintenance status, and with Norsk Data having adopted the Motorola 88000 architecture for future high-end products, the engineering department of Norsk Data, spun out into a separate company, would describe its plans to develop a superscalar variant of the 88000 to be fabricated in ECL and known as ORION. Perhaps unsurprisingly, such plans met insurmountable but unspecified technical obstacles, and the product effectively evaporated, ending all further transformative aspirations in the processor business. Meanwhile, MIPS would, from 1992 onwards, at least have the consolation of delivering the 64-bit R4000 to market, aided by its CMOS fabrication partners.

The performance of Norsk Data minicomputers and their competitors from 1987 onwards. The steady introduction of RISC products from Hewlett-Packard, Apollo Computer, Digital, IBM and others made the competitive landscape difficult for a low-volume manufacturer like Norsk Data. The company was soon having to contend with far cheaper workstation products from its competitors delivering comparable or superior performance (typically measured using SPECmark or SPECfp92 benchmarks here). Digital’s VAX products were also steadily improved and cost-reduced, but were eventually phased out in favour of Digital’s Alpha systems. In 1992, somewhat delayed, the MIPS R4000 appeared in systems like the SGI Indigo, offering a doubling of performance and maintaining the relentless pace of mainstream processor development.

Regardless of whether specific Norsk Data products were better than specific products from its competitors, one is left wondering about that idea of Norsk Data making floating-point accelerators for other systems. After all, the ND-500 and ND-5000 processors were effectively accelerators for Norsk Data’s own 16-bit systems. And with that path having been taken, one might wonder whether the company would be the one having its offerings bundled by the likes of Digital. A focus on such a market might have driven development at a faster tempo, pushing the company into the territory of floating-point specialists and minisupercomputers, and into a lucrative market that would last until general-purpose products, aided by the rise of the RISC architectures, remedied their floating-point deficiencies.

Splitting off the floating-point expertise and coupling it with a variety of architectures in the form of a floating-point unit could have been an option. Indeed, Weitek, who had made such products the focus of their business, apparently fabricated some of Norsk Data’s floating-point hardware for its later machines. Maybe there was good money in VLSI-fabricated Norsk Data accelerators for personal computers and workstations without great numeric processing options.

One might also wonder whether Norsk Data could have coupled its expertise in floating-point processing with an attractive instruction set architecture. Sadly, the company seemed wedded to the ND-500 architecture for compatibility reasons, involving an operating system that was not attractive to most potential customers, along with software that should have been portable and may not have been as desirable as perceived, anyway. Protecting the mirage of competitive advantage locked one form of expertise and potential commercial exploitation into others that were impeding commercial success.

The designers of the ND-5000 may have insisted that they could still match RISC designs even with their “complex instructions”, but Digital’s engineers had already concluded that their own comparable efforts to streamline VAX performance, achieving four- or five-fold gains within a couple of years, would remain inherently disadvantaged by the need to manage architectural complexity:

“So while VAX may “catch up” to current single-instruction-issue RISC performance, RISC designs will push on with earlier adoption of advanced implementation techniques, achieving still higher performance. The VAX architectural disadvantage might thus be viewed as a time lag of some number of years.”

In jettisoning the architectural baggage of the ND-500 architecture, Norsk Data’s intention was presumably to be able to more freely apply its expertise to the 88000 architecture, but all of this happened far too late in the day. To add insult to injury, it also involved the choice of what proved to be another doomed processor architecture.

Common Threads of Minicomputing History

Friday, January 9th, 2026

In the past few years, in my exploration of computing history, the case of the Norwegian computing manufacturer Norsk Data has been a particular fascination. Growing up in 1980s Britain, it is entirely possible that the name will have appeared in newspapers and magazines I may have seen or read, although I cannot remember any particular occurrences. It is also easy to mix it up with Nokia Data: a company that was eventually acquired by the United Kingdom’s own computing behemoth, ICL.

Looking back, however, it turns out that Norsk Data even managed to get systems into institutions not too many miles from where I grew up, and the company did have a firm commercial presence in the UK, finding niches in various industries and forms of endeavour. Having now lived in Norway for a considerable amount of time, it is perhaps more surprising that Norsk Data is almost as forgotten, and leaves almost as few traces, in its home country.

When I arrived in Norway, I gave no thought whatsoever to Norsk Data, even though I had been working at an organisation that had been one of the company’s most prominent customers and the foundation for its explosive growth during the 1970s and 1980s. But my own path through the Norwegian computing sector may well have crossed those of the company’s many previous employees, and in fact, one former employer of mine was part of a larger group that had acquired parts of the disintegrating Norsk Data empire.

It might come as a surprise that a company with over 4000 employees at its peak, many of them presumably in Oslo, and with annual revenues of almost 3 billion Norwegian crowns (around $450 million), would crumble within years and leave so little behind to show for itself. Admittedly, some of the locations of the company’s facilities have been completely redeveloped in recent years. But one might have expected an enduring cultural or social legacy.

In looking back, we might make some observations about a phenomenon that shares certain elements with events in other countries and other companies, along with more general observations about technological aspiration, contrasting the aspirations of that earlier era with today’s “innovation” culture, where companies arguably have much more mundane goals.

Big Claims by Small People

One of my motivations for looking into the history of Norsk Data arose from studying some of the rhetoric about its achievements and its influences on mainstream technology and wider society, these intersecting with CERN and the World Wide Web. There are some that have dared to claim that the Web was practically invented on Norsk Data systems, and with that, imaginations run riot and other bold claims are made. I personally strongly dislike such behaviour.

When Apple devotees, for example, insist that Apple invented a range of technologies, the obligation is then put on others to correct the ignorant statements concerned and to act to prevent the historical record from being corrupted. So, no, Apple did not “invent” overlapping windows. And when corrected, one finds oneself obliged to chase down all the usual caveats and qualifications in response that are so often condensed into “but really they did”. So, no, Apple were not the first to demonstrate systems where the background windows remained “live” and updated, either.

Why can’t people be satisfied with the achievements that were made by their favourite companies? Is it not enough to respect the work actually done, instead of extrapolating and maximising a claim that then extends to a claim of “invention” and thus dominance? Such behaviour is not only disrespectful to the others who also did such work and made such discoveries, potentially at an earlier time, but it is disrespectful to the collaborative environment of the era, many of whose participants would not have seen themselves as adversaries. It is even disrespectful to the idols of the devotees making their exaggerated claims.

And if people revisited history, instead of being so intent on rewriting it, they might learn that such claims were litigated – literally – in decades past. Attempts to exclude other companies from delivering common technologies left Apple with little more than a trashcan. Maybe the company’s lawyers had wished that the perverse gesture of dragging a disk icon to a dustbin icon to eject a floppy disk might, for once, have just erased the company’s opportunistic, wasteful and flimsy lawsuit.

Questions of Heritage

What intrigued me most were some of the claims by Norsk Data itself. The company started out in the late 1960s, introducing the Nord-1, a 16-bit minicomputer, for industrial control applications. Numerous claims of “firsts” are made for that model in the context of minicomputing (virtual memory, built-in floating-point arithmetic support), perhaps contentious and subject to verification themselves, but it was the introduction of its successor where such claims start to tread on more delicate territory.

The Nord-5, introduced in 1972, has occasionally been claimed as the first 32-bit minicomputer. In fact, it could only operate in conjunction with a Nord-1, with the combination potentially being regarded as a minicomputing system. At the time, and for the handful of customers involved, this combination was described as the NORDIC system: a name that was apparently not used much if ever again. In practice, this was one or more 16-bit minicomputers with an attached 32-bit arithmetic processor.

Such clarifications might seem pedantic, but people do have strong opinions on such matters. Whereas Digital Equipment Corporation’s VAX, introduced in 1977, might be regarded as an influential machine in the proliferation of 32-bit minicomputing, occasionally and incorrectly cited as the first system of its kind, it is generally conceded that the Interdata 7/32 and 8/32, introduced in 1973, have a more substantial claim on any such title. Certainly, these may well have been the first 32-bit minicomputers priced at $10,000 or below. Meanwhile, the NORDIC system cost over $600,000 for the Norwegian Meteorological Institute to acquire.

One might argue that NORDIC was not a typical minicomputing system, nor priced accordingly. And it does raise the observation that if one is to attach a component with certain superior characteristics to an existing component, as much as this attached component complements the capabilities of the existing component, the combination is not necessarily equivalent to a coherent system built entirely with such superior characteristics in the first place. We may return to this topic later, not least because certain phenomena have a habit of recurring in the computing industry.

As much as one might say in categorising the Nord-5, it was an interesting machine. Thanks to those who took an interest in archiving Norsk Data’s heritage, we are able to look at descriptions of the machine’s architecture, its instruction set, and so on. For those who have encountered systems from an earlier time and found them constraining and austere, the Nord-5 is surprising in a few ways. Most prominently, it has 64 general-purpose registers of 32 bits in size, pairs of which may be grouped to form 64-bit floating-point registers where required.

The Nord-5 has only a small number of instruction formats, although some of them seem rather haphazardly organised. It turns out that this is where the machine’s implementation, based heavily on discrete logic integrated circuits and the SN74181 arithmetic logic unit in particular, dictates the organisation of the machine. One might have thought that the limitations of the technology would have restrained the designers, making them focus on a limited feature set so as to minimise chip count and system cost, but exotic functionality exists that is difficult to satisfactorily explain or rationalise at first glance.

For instance, indirect addressing, familiar from various processor architectures, tends to involve an instruction accessing a particular memory location (or pair of locations), reading the contents of that location (or those locations), and then treating this value (or those values) as a memory address. Normally, one would then operate on the contents of this final address. However, in the Nord-5 architecture, such indirection can be done over and over again, so that instead of just one value being loaded and interpreted as an address, the value found at this address may be interpreted as an address, and its value may be interpreted as an address. And so on, for a maximum of sixteen levels, all traversed upon executing a single instruction over a number of clock cycles!
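As an illustration only (a sketch with a made-up memory image, not Nord-5 code), the effect of this chained indirection can be modelled as a simple loop that repeatedly loads a word and reinterprets it as an address, up to the stated sixteen-level limit:

```python
MAX_LEVELS = 16  # the Nord-5's stated maximum depth of indirection

def resolve_indirect(memory, address, levels):
    """Follow `levels` rounds of indirection starting from `address`.

    Each round loads the word at the current address and treats that
    word as the next address; with levels=0, the address is used as-is.
    """
    if not 0 <= levels <= MAX_LEVELS:
        raise ValueError("indirection depth out of range")
    for _ in range(levels):
        address = memory[address]
    return address

# A tiny, hypothetical memory image: location 10 points to 20,
# which points to 30, where the final address 42 is stored.
memory = {10: 20, 20: 30, 30: 42}
print(resolve_indirect(memory, 10, 3))  # final effective address: 42
```

On the real machine, of course, the whole traversal happened within a single instruction, spread over a number of clock cycles; here each loop iteration stands in for one memory access.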

I must admit that I am not particularly familiar with mainframe and minicomputer architectures, but certain characteristics do seem similar to other machines. For example, the PDP-10 or DECsystem-10, a 36-bit mainframe from Digital introduced in 1966, has sixteen general-purpose registers and only two instruction formats. It also has floating-point arithmetic support using pairs of registers. Later, Digital would discontinue this line of computers in favour of its increasingly popular and profitable VAX range of computers: a development that would parallel Norsk Data’s own technological strategy in some ways.

The Nord-5 and its successor, the largely similar Nord-50, were regarded as commercially unsuccessful, although one might argue that the former gave the company access to funding at a crucial point in its history. They also delivered respectable floating-point arithmetic performance, bringing about considerations of making them available for minicomputers from other manufacturers. Even if we set aside reporting that described them as “a cheaper smaller scale version of the CDC Cyber 76 or Cray-1” as hype, one can consider how pursuing this floating-point accelerator business might have influenced the eventual fate of the company.

Role Models and Rivals

In potentially lucrative sales environments like CERN, where Norsk Data gained a foothold during the 1970s, the company would have seen a lot of business going the way of companies like Digital, IBM and Hewlett-Packard. Such companies would have been almost like role models, indicating areas in which Norsk Data might operate, and providing recipes for winning business and keeping it.

Indeed, when discussing Norsk Data, it is almost impossible to avoid introducing Digital Equipment Corporation into the discussion, not least because Norsk Data constantly made comparisons of itself with Digital, favourably compared its products to those of Digital, and quite clearly aspired to be like Digital to the point of seemingly trying to emulate the more established company. However, it might be said that this approach rather depended on what Digital’s own strategy was perceived to be, and whether the people at Norsk Data actually understood Digital’s business and products.

Much has been written about Digital’s own fall from grace, being a company with sought-after products that helped define an industry, only to approach the end of the 1980s in a state of near crisis, with its products being outperformed by the insurgent Unix system vendors and with its own customers wanting a more convincing Unix story from their supplier. In certain respects, Norsk Data’s fortunes followed a similar path, and we might then be left wondering if in trying to be like Digital, the company inadvertently copied its larger rival’s flaws and replicated its mistakes.

One apparent perception of Digital was that of a complete provider of technology, and it is certainly apparent that Digital was the kind of supplier who would gladly provide everything from hardware and the operating system, through compilers, tools and productivity applications, all the way to removal services for computing facilities. Certainly, computing investments at the minicomputing and mainframe level were considerable, and having a capable vendor was essential.

It was apparently often remarked that “nobody ever got fired for buying IBM”, but it could also be said that buying IBM meant that a whole category of worries could be placed on the supplier. Indeed, Digital was perceived as only offering potential solutions through their technology, as opposed to the kind of complete, working solutions that IBM would be selling. Nevertheless, opportunities were identified in various areas where the bulk of such solutions were ready to deploy. Digital sought to enter the office automation market with its ALL-IN-1 software, competing with IBM’s established products. Naturally, Norsk Data wanted a piece of this action, too.

The business model was not the only way that Norsk Data seemed obsessed with Digital. Company financial reports highlighted the superior growth figures in comparison to Digital and other computer companies. The introduction of the VAX in 1977 demanded a response, and the company set to work on a genuine 32-bit “superminicomputer” as a result. This effort dragged out, however, only eventually delivering the ND-500 series in 1981.

The ND-500 introduced a new architecture incompatible with that of the Nord-5 and Nord-50, trading the large, general-purpose register set for a smaller set of registers partitioned into specialised groups acting as accumulators, index registers, extension registers, base registers, stack and frame registers, and so on. Although resembling an extended form of Norsk Data’s 16-bit architecture, no effort had been made to introduce instruction set compatibility between the ND-500 and that existing architecture.

The instruction set itself aimed for the orthogonality for which the VAX had become famous, implemented using microcode and supported by a variable-length instruction encoding. Instructions, consisting of instruction code and operand specifier units, could be a single byte or “several thousand bytes” in length. A variety of addressing modes and data types were supported in the large array of instructions and their variants.

And yet each ND-500 “processor” in any given configuration was still coupled with a 16-bit ND-100 “front-end”, this being an updated Nord-10 used for input/output and running much of the operating system, thus perpetuating the architectural relationship between the 16- and 32-bit components previously seen in the NORDIC system from several years earlier. In effect, the ND-500 still favoured computational workloads, and without the front-end unit, it could not be considered a minicomputing system in its own right.

Going Vertical

One distinct difference between the apparent strategy of Norsk Data and that of Digital, perhaps based on misconceptions of Digital’s approach or maybe founded on simple opportunism, was the way that Norsk Data sought to be a complete, “vertically integrated” supplier in various specialised markets, whereas Digital could more accurately be described as a platform company. In one commentary I discovered while browsing, I found these pertinent remarks:

“In the old “Vertical” business model a major supplier would develop everything in house from basic silicon chips right through to … financial applications software packages. This model was clearly absurd. A company may be good at developing or providing several of the technologies and services in the value chain but it is inconceivable that any single company could be the best at doing everything.”

They originate from a representative of ICL, describing that company’s adoption of open standards and Unix as “a strategic platform”. Companies like ICL had their origins in earlier times when computer companies were almost expected to do everything for a customer, in part due to a lack of interoperability between systems, in part due to a traditional coupling of hardware and software, and in part due to a lack of expertise in information systems in the broader economy, making it a requirement for those companies to apply their proprietary technologies to the customer’s situation and to tackle each case as it came along.

Gradually, software became seen as an independent technology and product in its own right, interoperability materialised, and opportunities emerged for autonomous “third parties” in the industry. The “horizontal” model, where customers could choose and combine the technologies that were most appropriate for them, was resisted by various established companies, but in a dynamic market, they were eventually made to confront their own limitations.

In historical reviews of Norsk Data and its business, such as Tor Olav Steine’s “Fenomenet Norsk Data” (“The Norsk Data Phenomenon”), there is surprisingly little use of the word “solution” – a word used a lot in certain areas of the computer business – in the sense of an information technology system taken into practical use. In those areas, like consultancy, the nature of the business may revolve entirely around the provision and augmentation of existing products to provide something a customer can use: what we call a “solution”. Such businesses simply could not exist without software and hardware platforms to deliver such solutions.

Where “solution” is used in such a way in Steine’s account, it is in the context of a company like Norsk Data choosing not to sell “solutions into specific markets” like the banking sector, identifying this as a critical weakness of the company’s strategy. Certainly, a company like Norsk Data had to be adaptable and to accommodate initiatives to supply such sectors, but the mindset exhibited is that the company had to back up the salesforce with a “massive effort” to solve all of the customer’s problems. This was precisely the kind of “vertical” supplier that ICL and IBM had been out of historical necessity, entrusted with such endeavours, but also burdened by society’s continuing expectations of such companies.

Indeed, it says a great deal that IBM was the principal competition in the sector used to illustrate this alleged weakness of Norsk Data. IBM’s own crisis arrived in the early 1990s with a then-record financial loss and waves of reorganisations, somewhat decoupling the product divisions of the company from its growing services and solutions divisions, also gradually causing the company to adopt open systems and technologies. Its British counterpart in such traditional sectors, ICL, dabbled in open systems and Unix, largely keeping them away from its mainframe business, but pivoted strongly at the start of the 1990s, perhaps influenced by Fujitsu – its partner, investor and, eventually, owner – to adopt the SPARC architecture and System V Unix.

Norsk Data, however, stuck with vertical integration to its initial benefit and then its later detriment. The company had done some good business on the back of acquisitions in certain sectors – typesetting systems, computer-aided design/manufacturing – where opportunities were identified to migrate existing products to Norsk Data’s hardware and to ostensibly boost the performance that may have been lacking in the existing offerings, but the company found itself struggling to repeat such successes. In markets like the UK, it encountered indifference from software companies, who apparently perceived the company to be “too small”, and tried to invest in and cultivate smaller companies as vehicles for its technology.

Here, there may have been a possible lack of awareness or acceptance that instead of being “too small”, Norsk Data was perhaps too niche or too non-standard, in an era of emerging standards. After all, such standards increasingly defined software and hardware platforms on which other companies would build. The fixation on vertical market opportunities, having something that competitors did not, and “striking a knockout” in competitive situations, seems rather incompatible with cultivating an ecosystem around one’s products.

Another trait is apparent from discussion of the company: the tendency to sell one set of products to a customer in order to then try to sell the customer another set of products. Thus, a customer buying one of the vertical market products might be coaxed into adopting various other strategic Norsk Data products, like the celebrated NOTIS suite of productivity applications. And with the potential for niche products to create opportunities for further sales and the proliferation of the company’s core technologies, Norsk Data got itself into trouble.

Personal Computing the Hard Way

Despite the increasing prominence of personal computing in the late 1970s and early 1980s, Norsk Data had remained largely dismissive of the trend, as many traditional vendors had also been initially. Minicomputer vendors sold multi-user machines that ran applications for each of the users, communicating output, usually character-based, to simple display terminals whose users would respond with keystrokes that would be communicated back to the “host”, thus providing an interactive computing environment. With shared storage, applications could provide a degree of collaboration absent from the average, standalone microcomputer. What exactly could a standalone microcomputer do that a terminal attached to a minicomputer could not?

Alongside this, applications that would become familiar to microcomputer users had emerged in minicomputer environments. For instance, word processing systems had demonstrated productivity benefits to organisations, providing far more flexibility and efficiency over typewriters and secretarial pools (also known as typing pools, so no, nothing to do with splashing around). Minicomputer environments could provide shared resources for still-expensive devices like printers, particularly high-end ones, and the shared storage permitted a library of materials to be accessed and curated.

From such easy triumphs in computerisation, much was anticipated from the nebulous practice of office automation. But perhaps because of the fragmented needs and demands of organisations, all-conquering big office systems could not hold off the gradual erosion of minicomputing dominance by the intrusion of microcomputers. Introduced at a relatable, personal level, one might argue that a personal computer as a simple product or commodity, along with software that was similarly packaged, may have been more obviously adaptable to some kinds of organisations, particularly small ones without strong expectations of what computer systems should do and how they might behave.

Indeed, where traditional suppliers of computers were perceived by newcomers as unapproachable or intimidating, microcomputers offered a potentially gentler introduction, as amusingly noted in one Norwegian article featuring IBM, Digital, HP and Norsk Data. For example, correspondence, documentation and other written records may have been cumbersome to prepare even with electronic typewriters – these providing only crude editing functions – and depending on the levels of enthusiasm for alternatives and frustration with the current situation, it would have been natural to acquire a personal computer with accessories, and to try out word processing and other applications to see what worked best for any given person, office, department, or organisation.

(A mixture of personal computing systems might have eventually generated interoperability problems, amongst others, but the agility that personal computers afforded organisations would potentially inform larger and more ambitious attempts to introduce technology later on.)

Personal computers began to shape user expectations of what computers of all kinds could do. Indeed, it is revealing that in treatments of the office automation market from the early 1980s, microcomputers keep recurring, and personal workstations – particularly the Xerox Star – set the tone for whatever office automation was meant to be. This was undoubtedly due to the unavoidable focus on the user interface that microcomputing and personal computing demanded. After all, personal computing cannot really be personal without considering the user!

Crucially, however, Xerox appeared to understand that one product could not be right for everyone, thus pitching a range of systems for a variety of users. The Xerox 860 focused largely on traditional word processing applications. The Xerox 8010 (or Star) was a networked workstation for sophisticated users. The company realised, particularly with IBM poised to move into personal computing, that a need existed for a more affordable product, leading to the much cheaper Xerox 820 running the established CP/M operating system. Although the Xerox 820 appears to have been considered a disappointment by commentators, who were perhaps expecting something more revolutionary, it did appear to signal that Xerox took affordable personal computing seriously, and the company was not alone in formulating such a product.

Digital tried a few different approaches to personal computing, two of which involved applications of their minicomputer architectures: the PDP-8-based DECmate, and the PDP-11-based DEC Professional. But it was their third and least proprietary approach, the DEC Rainbow, that perhaps stood the best chance of success, following a similar path to the Xerox 820, but taking the Zilog Z80-based core of such a machine and extending it with a companion Intel 8088 processor for increased versatility.

Such hybrid systems were not uncommon for a brief period at the start of the 1980s: established CP/M users would need Z80 compatibility, whereas new users and new software would have benefited from the 8088 running CP/M-86 or MS-DOS. The Rainbow was not a success, hampered by Digital’s proprietary instincts. Personally, I found it surprising to learn that the machine had a monochrome display as standard. Even with the RGB colour option, it would have rendered its own logo relatively unsatisfactorily!

What was Norsk Data’s response to the personal computing bandwagon? A telling quote can be found in an article from a 1985 newsletter:

“We do not believe in “the universal workstation” that can solve all problems for all user categories. Alternative hardware and software combinations seem to be the right answer. The functionality requirements for the personal workstations are definitely not satisfied by the “traditional PC”. For the majority of users today, the NOTIS terminal is the best alternative for a personal workstation that is integrated with the rest of the organization.”

The first two sentences seem reasonable enough, and the third could certainly have seemed reasonable at the time. But then comes the absurd, self-serving conclusion: a proprietary character terminal is the “best alternative” to something like the Xerox Star or its successor, the Xerox 6085 “Daybreak”, introduced in 1985, or other actual workstation products arriving on the market.

Evidently, decision-makers at the company remained fixated on what they considered their blockbuster products. But the personal computing trend was not about to disappear. The company’s first attempt at a product, initiated in 1983 and released in 1984, involved a rebadged IBM-PC-compatible from Columbia Computers, sold “half-heartedly”, and was perhaps more influential within the company than outside.

Then, in 1986, came the product that only Norsk Data could make: the Butterfly workstation, featuring an Intel 80286 processor and running MS-DOS and contemporary Windows, but also featuring two expansion cards that implemented the ND-110 minicomputer processor to run the proprietary SINTRAN operating system. Naturally, such a workstation, with its built-in minicomputer, was intended to run the cherished NOTIS software, and a variant known as the Teamstation permitted the connection of four terminals to share in such goodness.

One can almost understand the thinking behind such products. There was an increasing clamour for approachable computing, with relatively low starting costs, and with buyers starting out with a single machine and seeing whether they liked the experience. Providing something whose experience for a single user could be expanded to cover another four users might have seemed like a reasonable idea. But to make sense to customers, those extra terminals would need to be inexpensive and offer something that another four personal computers might not, and the software involved would have to be better than the kinds of programs that ran natively on personal computers at the time. Here, the beliefs of those at Norsk Data and those of potential customers could easily have been rather different.

“Norsk Data didn’t buy Wordplex for nothing.” Norsk Data perhaps inadvertently played into 1980s stereotypes when boasting about having loads of money. Wordplex was a struggling word processing systems vendor, and the acquisition did not lead to the happy marriage of convenience – or otherwise – that was promised.

Turning Something into Nothing

Against the industry tide, it seems that the company did what came most naturally, seeking growth for its own applications amongst a captive customer audience. Thus, a refinancing exercise at word processing supplier Wordplex turned into a takeover opportunity for Apricot Computers, only for Norsk Data to barge in with a more valuable offer, leaving Apricot to withdraw from the contest, presumably with some relief. Wordplex, one of the success stories in an earlier phase of office automation, was struggling financially but had an enviable customer base in a market where Norsk Data had wanted a greater presence than it had previously managed to attain.

What the exact plan was for Wordplex is not entirely clear. The company had its own product roadmap, centred on its Zilog Z8000-based Series 8000 systems, initially running Wordplex’s proprietary operating system. Wordplex evidently acknowledged the emergence of Unix and sought to introduce Xenix for its systems, chosen perhaps for its continued support for the aging Z8000 architecture. Norsk Data’s contribution seems to have been to sell their own machines to “stand beside or stack vertically” on the Series 8000 machines, offering what looked suspiciously like the NOTIS suite. One could easily imagine that Wordplex’s product range was unlikely to receive much further development after that.

Commentators associated with Norsk Data seem to regard Wordplex as something of a misadventure. Steine goes as far as to accuse the Wordplex management of subverting the organisation and pursuing their own agenda, as opposed to getting on with their new duties of selling Norsk Data’s systems to those valuable customers. Yet he does seem to accept that when new systems were pushed onto Wordplex’s customers, their computing departments pushed back against the additional complexity these new, proprietary systems would introduce, although Steine seems to attribute such pushback more to an unwillingness to tolerate new vendors in their computer rooms.

Perhaps Wordplex’s management stuck to what they knew because they just weren’t given better tools for the job. Existing customers would want to see some continuity, even if their users would eventually see themselves migrated onto other technology. In the end, Norsk Data’s perceived opportunities never materialised, and Wordplex customers presumably saw the writing on the wall and migrated to other systems. The dedicated word processing business was being disrupted by low-cost personal computers, either dressed up like word processors, as in the case of the Amstrad PCW, or providing more general functionality, maybe even in a networked configuration.

It is telling that in the documentary covering the takeover, a remark is made about how Norsk Data seemed inexperienced at acquisitions and the task of integrating distinct corporate cultures, and yet the company had, in fact, acquired other companies to fuel its rapid growth. But still, it is apparent that entities like Comtec and Technovision remained distinct silos within the larger organisation. I have personal familiarity with one institutional customer of Comtec, although it may have been a faint memory by the time I interacted with its users.

That customer was CERN’s publishing section, responsible for the weekly Bulletin and other output, who had adopted the NORTEXT typesetting system with some success. In 1985, these users were gradually trying to adopt various NOTIS applications, expressing a form of cautious optimism. By 1986, with an audit of CERN’s information systems in progress, these users were facing an upgrade of NORTEXT that required terminals designed for NOTIS, as well as enhancements for NOTIS that, having been developed primarily for the newer, 32-bit ND-500 systems, stressed their older, 16-bit ND-100 series hardware.

More investment was requested to take advantage of newer hardware and increased storage, and to provide more terminals. Indeed, the introduction of ND-500 models would help to rationalise the hardware situation, reduce maintenance costs and demands, and provide better services to those users. But at the same time, amidst a “lively discussion”, the shortcomings of NOTIS were noted: that Norsk Data were “unlikely to satisfy the needs of the administration in terms of fully automated office functions such as agenda, calendars, conference scheduling”, and that better integration was needed with the growing Macintosh community inside CERN.

Indeed, the influence of the graphical user interface, and the success of the Macintosh in delivering a coherent platform for developers and users, put companies wedded to character-based terminal applications on the back foot. Graphical applications were the natural medium of such platforms, whereas companies like Norsk Data struggled to accommodate such applications within their paradigm, suggesting upgrades to more costly graphical terminals. At best, the result added around $1,000 to the cost of the terminal and merely offered an “experimental” and narrow attempt at being “Macintosh like”, in a world where potential users were more likely to opt for the real thing instead.

Despite the mythology around the Mac, the platform was, like many others, still finding its feet and lacking numerous desirable capabilities. The mid-1980s was a fluid era for the graphical personal computer, and although a similar mythology developed around the Amiga, which was more capable than the Mac in several respects, success for a platform demanded a combination of technology, applications, the convenience of making such applications, and a demand for them.

On the dominant IBM-compatible platform, it took a while for a dominant graphical layer to assert itself, leaving observers attempting to track the winner from candidates such as VisiOn, GEM, Windows, OS/2 and NewWave. It is perhaps unsurprising that Norsk Data had no ready answer, and that even as it introduced its own personal computers running DOS and early Windows software, it was merely waiting for an industry consensus to shake out. Other strategies could have been followed, however: vendors in Norsk Data’s situation chose to enter the workstation market, which is a topic to be considered in its own right.

Another company that struggled with personal computing was ICL. It had acquired some interesting products, such as those made by Singer Business Machines – a division of the Singer Corporation, perhaps most famous for its sewing machines – and that division, then operating as part of ICL, made a system that formed the basis of ICL’s Distributed Resource System product family. In the initial DRS 20 range, computers with 8085 processors running CP/M would run applications and access other machines acting as file servers over ICL’s proprietary Macrolan network.

Such solutions were not always well received by the personal computing media. Expectations that ICL would bring its market position to bear on the rapidly developing industry led to disappointment when the company introduced the first DRS models, drawing suggestions that the diskless “workstations” would make rather competitive personal computers, if only ICL were to remove the “nearly £1000” network card and replace it with a disk controller. Later models would upgrade the processor to the 8086 family and run Concurrent DOS. Low-end models did indeed get disk drives, but did not break out into the standalone personal computer market.

Instead, ICL also decided to sell a different set of products, licensed from a company called Rair, as its own Personal Computer series, and these even utilised similar technologies to its initial DRS line-up, such as the 8085, CP/M, and MP/M, but offered eight serial ports for connected terminals instead of network connectivity. Rair’s rise to prominence perhaps occurred through the introduction of the Rair Black Box, to which a terminal had to be attached in order to use the system. A repackaged version formed the basis of the first ICL PC.

ICL appear to have been rather more agile than Norsk Data at introducing upgrades to their PC and DRS families. The PC range evolved to include models like the exotic-sounding Quattro, still trying to cater to office environments wanting to serve applications to terminals in a relatively inexpensive way that was, nevertheless, seen as less than persuasive in an era where personal computing had now established itself. Eventually, ICL reconciled itself to producing IBM-compatible PCs. In the early 1990s, I encountered some of these in a brief school-era “work shadowing” stay at a municipal computing department which predictably operated an ICL mainframe and some serious Xerox printing hardware.

Meanwhile, the DRS range gained a colour graphical workstation running the GEM desktop software on Concurrent DOS. GEM was a viable product adopted by a variety of companies including Atari, Amstrad and Acorn, despite Apple attempting to assert ownership of various aspects of the desktop paradigm. It would have been interesting to see Apple try and shake ICL down over such claims, given all the prior art in ICL’s PERQ that helped sink Apple’s litigation against Microsoft and Hewlett-Packard later in the decade. But it is how part of the DRS range evolved that perhaps illustrates how the likes of Norsk Data might have acted more decisively.

Common Threads of Computer Company History

Tuesday, August 5th, 2025

When descending into the vaults of computing history, as I have found myself doing in recent years, and with the volume of historical material now available for online perusal, it has finally become possible to re-evaluate some of the mythology cultivated by certain technological communities, some of it dating from around the time when such history was still being played out. History, it is said, is written by the winners, and this is true to a large extent. How often have we seen Apple being given the credit for many technological developments that were actually pioneered elsewhere?

But the losers, if they may be considered that, also have their own narratives about the failure of their own favourites. In “A Tall Tale of Denied Glory”, I explored some myths about Commodore’s Amiga Unix workstation and how it was claimed that this supposedly revolutionary product was destined for success, cheered on by the big names in workstation computing, only to be defeated by Commodore’s own management. The story turned out to be far more complicated than that, but it illustrates that in an earlier age, when awareness of the industry’s broader horizons was more limited, everyone could rally round a simplistic tale and vent their frustration at the outcome.

Although different technological communities, typically aligned with certain manufacturers, did interact with each other in earlier eras, even if the interactions mostly focused on advocacy and argument about who had chosen the best system, there was always the chance of learning something from each other. However, few people probably had the opportunity to immerse themselves in the culture and folklore of many such communities at once. Today, we have the luxury of going back and learning about what we might have missed, reading people’s views, and even watching television programmes and videos made about the systems and platforms we just didn’t care for at the time.

It was actually while searching for something else, as most great discoveries seem to happen, that I encountered some more mentions of the Amiga Unix rumours, these being relatively unremarkable in their familiarity, although some of them were qualified by a claim by the person airing these rumours (for the nth time) that they had, in fact, worked for Sun. Of course, they could have been the mailboy for all I know, and my threshold for authority in potential source material for this matter is now set so high that it would probably have to be Scott McNealy for me to go along with these fanciful claims. However, a respondent claimed that a notorious video documenting the final days of Commodore covered the matter.

I will not link to this video for a number of reasons, the most trivial of which is that it just drags on for far too long. And, of course, one thing it does not substantially cover is the matter under discussion. A single screen of text just parrots the claims seen elsewhere about Sun planning to “OEM” the Amiga 3000UX without providing any additional context or verification. Maybe the most interesting thing for me was to see that Commodore were using Apollo workstations running the Mentor Graphics CAD suite, but then so were many other companies at one point in time or other.

In the video, we are confronted with the demise of a company, the accompanying desolation, camaraderie under adversity, and plenty of negative, angry, aggressive emotion coupled with regressive attitudes that cannot simply be explained away or excused, try as some commentators might. I found myself exploring yet another rabbit hole with a few amusing anecdotes and a glimpse into an era for which many people now have considerable nostalgia, but one that yielded few new insights.

Now, many of us may have been in similar workplace situations ourselves: hopeless, perhaps even deluded, management; a failing company shedding its workforce; the closure of the business altogether. Often, those involved may have sustained a belief in the merits of the enterprise and in its products and people, usually out of the necessity to keep going, whether or not the management might have bungled the company’s strategy and led it down a potentially irreversible path towards failure.

Such beliefs in the company may have been forged in earlier, more successful times, as a company grows and its products are favoured over those of the competition. A belief that one is offering something better than the competition can be highly motivating. Uncalibrated against the changing situation, however, it can lead to complacency and the experience of helplessly watching as the competition recover and recapture the market. Trapped in the moment, the sequence of events leading to such eventualities can be hard to unravel, and objectivity is usually left as a matter for future observers.

Thus, the belief often emerges, particularly among the adherents of particular companies, that those companies faced unique challenges, simply because everything was so overwhelming and inexplicable when it all happened, like a perfect storm making an unexpected landfall. But, being aware of what various companies experienced, and in peeking over the fence or around the curtain at what yet another company may have experienced, it turns out that the stories of many of these companies all have some familiar, common themes. This should hardly surprise us: all of these companies operated largely within the same markets and faced common challenges in doing so.

A Tale of Two Companies

The successful microcomputer vendors of the 1980s, which were mostly those that actually survived the decade, all had to transition from one product generation to the next. Acorn, Apple and Commodore all managed to do so, moving up from 8-bit systems to more sophisticated systems using 32-bit architectures. But these transitions only got them so far, both in terms of hardware capabilities and the general sophistication of their systems, and by the early 1990s, another update to their technological platforms was due.

Acorn had created the ARM processor architecture, and this had mostly kept the company competitive in terms of hardware performance in its traditional markets. But it had chosen a compromised software platform, RISC OS, on which to base its Archimedes systems. It had also introduced a couple of Unix workstation products, themselves based on the Archimedes hardware, but these were trailing the pace in a much more competitive market. Acorn needed the newly independent ARM company to make faster, more capable chips, or it would need to embrace other processor architectures. Without such a boost forthcoming, it dropped Unix and sought to expand in “longshot” markets like set-top boxes for video-on-demand and network computing.

Commodore had a somewhat easier time of it, at least as far as processors were concerned, riding on the back of what Motorola had to offer, which had been good enough during much of the 1980s. Like Acorn, Commodore made their own graphics chips and had enjoyed a degree of technical superiority over mainstream products as a result, but as Acorn had experienced, the industry had started to catch up, leading to a scramble to either deliver something better or to go with the mainstream. Unlike Acorn, Commodore did do a certain amount of business actually going with the mainstream and selling IBM-compatible PCs, although the increasing commoditisation of that business led the company to disengage and to focus on its own technologies.

Commodore had its own distractions, too. While Acorn pursued set-top boxes for high-bandwidth video-on-demand and interactive applications on metropolitan area networks, Commodore tried to leverage its own portfolio rather more directly, trading on its strengths in gaming and multimedia, hoping to be the one who might unite these things coherently and lucratively. In the late 1980s and early 1990s, Japanese games console manufacturers had embraced the Compact Disc format, but NEC’s PC Engine CD-ROM² and Sega’s Mega-CD largely bolted CD technology onto existing consoles. Philips and Sony, particularly the former, had avoided direct competition with games consoles, pitching their CD-i technology more at the rather more sedate “edutainment” market.

With CDTV, Commodore attempted to enter the same market as Philips, downplaying the device’s Amiga 500 foundations and fast-tracking the product to market, only belatedly offering the missing CD-ROM drive option for its best-selling Amiga 500 system that would allow existing customers to largely recreate the same configuration themselves. Both CD-i and CDTV were considered failures, but Commodore wouldn’t let go, eventually following up with one of the company’s final products, the CD32, aiming more directly at the console market. Although a relative success against the lacklustre competition, it came too late to save the company, which had entered a steep decline only to be driven to bankruptcy by a patent aggressor.

Whether plucky little Commodore would have made a comeback without financial headwinds and patent industry predators is another matter. Early multimedia consoles had unconvincing video playback capabilities without full-motion video hardware add-ons, but systems like the 3DO Interactive Multiplayer sought to strengthen the core graphical and gaming capabilities of such products, introducing hardware-accelerated 3D graphics and high-quality audio. Within only a year or so of the CD32’s launch, more complete systems such as the Sega Saturn and, crucially, the Sony PlayStation would be available. Commodore’s game may well have been over, anyway.

Back in Cambridge, a few months after Commodore’s demise, Acorn entered into a collaboration with an array of other local technology, infrastructure and media companies to deliver network services offering “interactive television”, video-on-demand, and many of the amenities (shopping, education, collaboration) we take for granted on the Internet today, including access to the Web of that era. Although Acorn’s core technologies were amenable to such applications, they did need strengthening in some respects: like multimedia consoles, video decoding hardware was a prerequisite for Acorn’s set-top boxes, and although Acorn had developed its own competent software-based video decoding technology, the market was coalescing around the MPEG standard. Fortunately for Acorn, MPEG decoder hardware was gradually becoming a commodity.

Despite this interactive services trial being somewhat informative about the application of the technologies involved, the video-on-demand boom fizzled out, perhaps demonstrating to Acorn once again that deploying fancy technologies in a relatively affluent region of the country for motivated, well-served early adopters generally does not translate into broader market adoption, particularly if that adoption depended on entrenched utility providers having to break open their corporate wallets and spend millions, if not billions, on infrastructure investments that would not repay themselves for years or even decades. The experience forced Acorn to refocus its efforts on the emerging network computer trend, taking the company down another path that led mostly nowhere.

Such distractions arguably served both companies poorly, causing them to neglect their core product lines and to either ignore or to downplay the increasing uncompetitiveness of those products. Commodore’s efforts to go upmarket and enter the potentially lucrative Unix market had begun too late and proceeded too slowly, starting with efforts around Motorola 68020-based systems that could have opened a small window of opportunity at the low end of the market if done rather earlier. Unix on the 68000 family was a tried and tested affair, delivered by numerous companies, and supplied by established Unix porting houses. All Commodore needed to do was to bring its legendary differentiation to the table.

Indeed, Acorn’s one-time stablemate, Torch Computers, pioneered low-end graphical Unix computing around the earlier 68010 processor with its Triple X workstation, seeking to upgrade to the 68020 with its Quad X workstation, but it had been hampered by a general lack of financing and an owner increasingly unwilling to continue such financing. Coincidentally, at more or less the same time that the assets of Torch were finally being dispersed, their 68030-based workstation having been under development, Commodore demonstrated the 68030-based Amiga 3000 for its impending release. By the time its Unix variant arrived, Commodore needed to bring far more to the table than what it could reasonably offer.

Acorn themselves also struggled in their own moves upmarket. While the ARM had arrived with a reputation of superior performance against machines costing far more, the march of progress had eroded that lead. The designers of the ARM had made a virtue of a processor being able to make efficient use of its memory bandwidth, as opposed to letting the memory sit around idle as the processor digested each instruction. This facilitated cheaper systems where, in line with the design of Acorn’s 8-bit computers, the processor would take on numerous roles within the system including that of performing data transfers on behalf of hardware peripherals, doing so quite effectively and obviating the need for costly interfacing circuitry that would let hardware peripherals directly access the memory themselves.

But for more powerful systems, the architectural constraints can be rather different. A processor that is supposedly inefficient in its dealings with memory may at least benefit from peripherals directly accessing memory independently, raising the general utilisation of the memory in the system. And even a processor that is highly effective at keeping itself busy and highly efficient at utilising the memory might be better off untroubled by interrupts from hardware devices needing it to do work for them. There is also the matter of how closely coupled the processor and memory should be. When 8-bit processors ran at around the same speed as their memory devices, it made sense to maximise the use of that memory, but as processors increased in speed and memory struggled to keep pace, it made sense to decouple the two.

Other RISC processors such as those from MIPS arrived on the market making deliberate use of faster memory caches to satisfy those processors’ efficient memory utilisation while acknowledging the increasing disparity between processor and memory speeds. When upgrading the ARM, Acorn had to introduce a cache in its ARM3 to try and keep pace, doing so with acclaim amongst its customers as they saw a huge jump in performance. But such a jump was long overdue, coming after Acorn’s first Unix workstation had shipped and been largely overlooked by the wider industry.
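The reuse that a cache exploits can be sketched with a toy simulation. This is a minimal, hypothetical direct-mapped cache model with made-up parameters (64 lines of 4 words each), illustrating the general principle rather than the actual ARM3 or MIPS designs: sequential access fetches each line from slow memory once and then reuses it, whereas accesses strided by a whole line defeat that reuse entirely.

```python
# Toy direct-mapped cache model (hypothetical parameters, for
# illustration only): counts hits and misses for an access pattern.
class DirectMappedCache:
    def __init__(self, num_lines=64, line_words=4):
        self.num_lines = num_lines
        self.line_words = line_words
        self.tags = [None] * num_lines  # one stored tag per cache line
        self.hits = 0
        self.misses = 0

    def access(self, address):
        line_number = address // self.line_words   # which memory block
        index = line_number % self.num_lines       # direct-mapped slot
        tag = line_number // self.num_lines        # identifies the block
        if self.tags[index] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[index] = tag  # fetch the line from main memory

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# Sequential walk over 1024 words: each 4-word line is fetched once,
# then the next three accesses to it are hits.
seq = DirectMappedCache()
for addr in range(1024):
    seq.access(addr)

# Stride of one full line: every access lands on a fresh line,
# so the cache never gets to reuse anything.
strided = DirectMappedCache()
for addr in range(0, 1024 * 4, 4):
    strided.access(addr)

print(seq.hit_rate())      # 0.75: three of every four accesses hit
print(strided.hit_rate())  # 0.0: the stride defeats line reuse
```

Crude as it is, the model captures why cached processors like the ARM3 saw such a jump on typical workloads, which are dominated by sequential instruction fetches and localised data access, while still depending on the slower main memory for the misses.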

Acorn’s second generation of workstations, being two configurations of the same basic model, utilised the ARM3 but lacked a hardware floating-point unit. Commodore could rely on the good old 68881 from Motorola, but Acorn’s FPA10 (floating-point accelerator) arrived so late that Acorn discontinued its Unix workstation effort altogether only days after its announcement, some three years after those ARM3-based systems had been launched and two years later than expected.

It is claimed that Commodore might have skipped the 68030 and gone straight for the 68040 in its Unix workstation, but indications are that the 68040 was probably scarce and expensive at first, and soon only Apple would be left as a major volume customer for the product. All of the other big Motorola 68000 family customers had migrated to other architectures or were still planning to, and this was what Commodore themselves resolved to do, formulating an ambitious new chipset called Hombre based around Hewlett-Packard’s PA-RISC architecture that was never realised.

Performance of Amiga and workstation systems in approximate chronological order of introduction

A chart showing how Unix workstation performance steadily improved, largely through the introduction of steadily faster RISC processors.

Acorn, meanwhile, finally got a chip upgrade from ARM in the form of the rather modest ARM6 series, choosing to develop new systems around the ARM600 and ARM610 variants, along with systems using upgraded sound and video hardware. One additional benefit of the newer ARM chips was an integrated memory management unit more suitable for Unix implementations than the one originally developed for the ARM. For followers of the company, such incoming enhancements provided a measure of hope that the company’s products would remain broadly competitive in hardware terms with mainstream personal computers.

Perhaps most important to most Acorn users at the time, given the modest gains they might see from the ARM600/610, was the prospect of better graphical capabilities, but Acorn chose not to release their intermediate designs along the way to their grand new system. And so, along came the Risc PC: a machine with two processor sockets and logic to allow one of the processors to be an x86-compatible processor that could run PC software. Once again, Acorn gave the whole hardware-based PC accelerator card concept another largely futile outing, failing to learn that while existing users may enjoy dabbling with software from another platform, it hardly ever attracts new customers in any serious numbers. Even Commodore had probably learned that lesson by then.

Nevertheless, Acorn’s Risc PC was a somewhat credible platform for Unix, if only Acorn hadn’t cancelled their own efforts in that realm. Prominent commentators and enthusiastic developers seized the moment, and with Free Software Unix implementations such as NetBSD and FreeBSD emerging from the shadow of litigation cast upon them, a community effort could be credibly pursued. Linux was also ported to ARM, but such work was actually begun on Acorn’s older A5000 model.

Acorn never seized this opportunity properly, however. Despite entering the network computer market in pursuit of some of Larry Ellison’s billions, expectations of the software in network computers had also increased. After all, network computers have many of the responsibilities of those sophisticated minicomputers and workstations. But Acorn was still wedded to RISC OS and, for the most part, to ARM. And while RISC OS might present quite a nice graphical interface, it was actually NetBSD that proved able to provide the versatility and reliability being sought for such endeavours.

And as the 1990s got underway, the mundane personal computer started needing some of those workstation capabilities, too, eventually erasing the distinction between these two product categories. Tooling up for Unix might have seemed like a luxury, but it had been an exercise in technological necessity. Acorn’s RISC OS had its attractions, notably various user interface paradigms that really should have become more commonplace, together with a scalable vector font system that rendered anti-aliased characters on screen years before Apple or Microsoft managed to, one that permitted the accurate reproduction of those fonts on a dot-matrix printer, a laser printer, and everything in-between.

But the foundations of RISC OS were a legacy from Acorn’s 8-bit era, laid down hastily in an arguably cynical fashion to get the Archimedes out of the door and to postpone the consequences. Commodore inevitably had similar problems with its own legacy software technology, ostensibly more modern than Acorn’s when it was introduced in the Amiga, even having some heritage from another Cambridge endeavour. Acorn might have ported its differentiating technologies to Unix, following the path taken by Torch and its close relative, IXI, also using the opportunity to diversify its hardware options.

In all of this consideration given to Acorn and Commodore, it might seem that Apple, mentioned many paragraphs earlier, has been forgotten. In fact, Apple went through many of the same trials and ordeals as its smaller rivals. Indeed, having made so much money from the Macintosh, Apple’s own attempts to modernise itself and its products involved such a catalogue of projects and initiatives that even summarising them would expand this article considerably.

Only Apple would buy a supercomputer to attempt to devise its own processor architecture – Aquarius – only not to follow through, eventually being rescued by the pair of IBM and Motorola after an unanticipated decline in its financial and market circumstances. Or have several operating system projects – Opus, Pink, Star Trek, NuKernel, Copland – that were all started but never really finished. Or get into personal digital assistants with the unfairly maligned Newton, or consider redesigning the office entirely with its Workspace 2000 collaboration. And yet it would end up acquiring NeXT, revamping its technologies along that company’s lines, and still barely make it to the end of the decade.

The Final Chapters

Commodore got almost half-way through the 1990s before bankruptcy beckoned. Motorola’s 68060, informed by the work on the chip manufacturer’s abandoned 88000 RISC architecture, provided a considerable performance boost to its more established architecture, even if it now trailed the pack, perhaps only matching previous generations of SPARC and MIPS processors, and played second fiddle to PowerPC in Motorola’s own line-up.

Acorn’s customers would be slightly luckier. Digital’s StrongARM almost entirely eclipsed ARM’s rather sedate ARM7-based offerings, except in floating-point performance, where one system-on-chip product, the ARM7500FE, retained an advantage. This infusion of new technology was a blessing and a curse for Acorn and its devotees. The Risc PC could not make full use of this performance, and a new machine would be needed to truly make the most of it, also getting a long-overdue update in a range of core industry technologies.

Commodore’s devotees tend to make much of the company’s mismanagement. Deserved or otherwise, one may now be allowed to judge whether the company was truly unique in this regard. As Acorn’s network computer ambitions were curtailed, market conditions became more unfavourable to its increasingly marginalised platform, and the lack of investment in that core platform started to weigh heavily on the company and its customers. A shift in management resulted in a shift in business and yet another endeavour being initiated.

Acorn’s traditional business units were run down, the company’s next generation of personal computer hardware cancelled, and yet a somewhat tangential silicon design business was effectively being incubated elsewhere within the organisation. Meanwhile, Acorn, sitting on a substantial number of shares in ARM, supposedly presented a vulnerability for the latter and its corporate stability. So, a plan was hatched that saw Acorn sold off to a division of an investment bank based in a tax haven, the liberation of its shares in ARM, and the dispersal of Acorn’s assets at rather low prices. That, of course, included the newly incubated silicon design operation, bought by various figures in Acorn’s “senior management”.

Just as Commodore’s demise left customers and distributors seemingly abandoned, so did Acorn’s. While Commodore went through the indignity of rescues and relaunches, Acorn itself disappeared into the realms of anonymous holding companies, surfacing only occasionally in reports of product servicing agreements and other unglamorous matters. Acorn’s product lines were kept going for as long as could be feasible by distributors who had paid for the privilege, but without the decades of institutional experience of an organisation terminated almost overnight, there was never likely to be a glorious resurgence of its computer systems. Its software platform was developed further, primarily for set-top box applications, and survives today more as a curiosity than a contender.

In recent days, efforts have been made by Commodore devotees to secure the rights to trademarks associated with the company, these having apparently been licensed by various holding companies over the years. Various Acorn trademarks were also offloaded to licensors, leading to at least one opportunistic but ill-conceived and largely unwelcome attempt to trade on nostalgia and to cosplay the brand. Whether such attempts might occur in future remains uncertain: Acorn’s legacy intersects with that of the BBC, ARM and other institutions, and there is perhaps more sensitivity about how its trademarks might be used.

In all of this, I don’t want to downplay all of the reasons often given for these companies’ demise, Commodore’s in particular. In reading accounts of people who worked for the company, it is clear that it was not a well-run workplace, with exploitative and abusive behaviour featuring disturbingly regularly. Instead, I wish to highlight the lack of understanding in the communities around these companies and the attribution of success or failure to explanations that do not really hold up.

For instance, the Acorn Electron may have consumed many resources in its development and delivery, but it did not lead to Acorn’s “downfall”, as was claimed by one absurd comment I read recently. Acorn’s rescue by Olivetti was the consequence of several other things, too, including an ill-advised excursion into the US market, an attempt to move upmarket with an inadequate product range, some curious procurement and logistics practices, and a lack of capital from previous stock market flotations. And if there had been such a “downfall”, such people would not now be piping up constantly about ARM being “the chip in everyone’s phone”, as is tiresomely fashionable these days. ARM might well have ended up as just a short footnote in some dry text about processor architectures.

In these companies, some management decisions may have made sense, while others were clearly ill-considered. Similarly, those building the products could only do so much given the technological choices that had already been made. But more intriguing than the actual intrigues of business is to consider what these companies might have learned from each other, what the product developers might have borrowed from each other had they been able to, and what they might have achieved had they been able to collaborate somehow. Instead, both companies went into decline and ultimately fell, divided by the barriers of competition.

Update: It seems that the chart did not have the correct value for the Amiga 4000/040, due to a missing conversion from VAX MIPS to something resembling the original Dhrystone score. Thus, in integer performance as measured by this benchmark, the 68040 at 25MHz was broadly comparable to the R3000 at 25MHz, but was also already slipping behind faster R3000 parts even before the SuperSPARC and R4000 emerged.
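The conversion in question follows from how a “VAX MIPS” rating is defined: a raw Dhrystone result, in Dhrystones per second, divided by the commonly quoted figure of 1757 for the VAX-11/780. A minimal sketch of both directions of the conversion (the example rating is hypothetical, not a value taken from the chart):

```python
# A "VAX MIPS" rating expresses a Dhrystone result relative to the
# VAX-11/780, commonly quoted as achieving 1757 Dhrystones per second.
VAX_11_780_DHRYSTONES_PER_SECOND = 1757

def vax_mips_to_dhrystones(vax_mips):
    """Recover a raw Dhrystone score from a VAX MIPS rating."""
    return vax_mips * VAX_11_780_DHRYSTONES_PER_SECOND

def dhrystones_to_vax_mips(dhrystones_per_second):
    """Normalise a raw Dhrystone score against the VAX-11/780."""
    return dhrystones_per_second / VAX_11_780_DHRYSTONES_PER_SECOND

# A hypothetical rating of 10 VAX MIPS corresponds to 17570 Dhrystones
# per second; omitting this conversion would understate the score.
print(vax_mips_to_dhrystones(10))
```

Comparing scores in one unit or the other is fine; mixing the two, as happened with the chart, silently shrinks one set of results by three orders of magnitude.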

Replaying the Microcomputing Revolution

Monday, January 6th, 2025

Since microcomputing and computing history are particular topics of interest of mine, I was naturally engaged by a recent article about the Raspberry Pi and its educational ambitions. Perhaps obscured by its subsequent success in numerous realms, the aspirations that originally drove the development of the Pi had their roots in the effects of the introduction of microcomputers in British homes and schools during the 1980s, a phenomenon that supposedly precipitated a golden age of hands-on learning, initiating numerous celebrated and otherwise successful careers in computing and technology.

Such mythology has the tendency to heighten expectations and deepen nostalgia, and when society enters a malaise in one area or another, it often leads to efforts to bring back the magic through new initiatives. Enter the Raspberry Pi! But, as always, we owe it to ourselves to step through the sequence of historical events, as opposed to simply accepting the narratives peddled by those with an agenda or those looking for comforting reminders of their own particular perspectives from an earlier time.

The Raspberry Pi and other products associated with relatively recent educational initiatives, such as the BBC Micro Bit, were launched with the intention of restoring the focus of computing education to computing and computation itself. Once upon a time, computers were largely confined to large organisations and particular kinds of endeavour, generally interacting only indirectly with wider society. Thus, for most people, what computers were remained an abstract notion, often coupled with talk of the binary numeral system as the “language” of these mysterious and often uncompromising machines.

However, as microcomputers emerged both in the hobbyist realm – frequently emphasised in microcomputing history coverage – and in commercial environments such as shops and businesses, governments and educators identified a need for “computer literacy”. This entailed practical experience with computers and their applications, informed by suitable educational material, enabling the broader public to understand the limitations and the possibilities of these machines.

Although computers had already been in use for decades, microcomputing diminished the cost of accessible computing systems and thereby dramatically expanded their reach. And when technology is adopted by a much larger group, there is usually a corresponding explosion in applications of that technology as its users make their own discoveries about what the technology might be good for. The limitations of microcomputers relative to their more sophisticated predecessors – mainframes and minicomputers – also meant that existing, well-understood applications were yet to be successfully transferred from those more powerful and capable systems, leaving the door open for nimble, if somewhat less capable, alternatives to be brought to market.

The Capable User

All of these factors pointed towards a strategy where users of computers would not only need to be comfortable interacting with these systems, but where they would also need to have a broad range of skills and expertise, allowing them to go beyond simply using programs that other people had made. Instead, they would need to be empowered to modify existing programs and even write their own. With microcomputers only having a limited amount of memory and often less than convenient storage solutions (cassette tapes being a memorable example), and with few available programs for typically brand new machines, the emphasis of the manufacturer was often on giving the user the tools to write their own software.

Computer literacy efforts sensibly and necessarily went along with such trends, and from the late 1970s and early 1980s, after broader educational programmes seeking to inform the public about microelectronics and computing, these efforts targeted existing models of computer with learning materials like “30 Hour BASIC”. Traditional publishers became involved as the market opportunities grew for producing and selling such materials, and publications like Usborne’s extensive range of computer programming titles were incredibly popular.

Numerous microcomputer manufacturers were founded, some rather more successful and long-lasting than others. An industry was born, around which was a vibrant community – or many vibrant communities – consuming software and hardware for their computers, but crucially also seeking to learn more about their machines and exchanging their knowledge, usually through the specialist print media of the day: magazines, newsletters, bulletins and books. This, then, was that golden age, of computer studies lessons at school, learning BASIC, and of late night coders at home, learning machine code (or, more likely, assembly language) and gradually putting together that game they always wanted to write.

One can certainly question the accuracy of the stereotypical depiction of that era, given that individual perspectives may vary considerably. My own experiences involved limited exposure to educational software at primary school, and anticipated computer studies classes at secondary school that never materialised. What is largely beyond dispute is that after the exciting early years of microcomputing, the educational curriculum changed focus from learning about computers to using them to run whichever applications happened to be popular or attractive to potential employers.

The Vocational Era

Thus, microcomputers became mere tools to do other work, and in that visionless era of Thatcherism, such other work was always likely to be clerical: writing letters and doing calculations in simple spreadsheets, sowing the seeds of dysfunction and setting public expectations of information systems correspondingly low. “Computer studies” became “information technology” in the curriculum, usually involving systems feigning a level of compatibility with the emerging IBM PC “standard”. Naturally, better-off schools will have had nicer equipment, perhaps for audio and video recording and digitising, plus the accompanying multimedia authoring tools, along with a somewhat more engaging curriculum.

At some point, the Internet will have reached schools, bringing e-mail and Web access (with all the complications that entails), and introducing another range of practical topics. Web authoring and Web site development may, if pursued to a significant extent, reveal such things as scripts and services, but one must then wonder what someone encountering the languages involved for the first time might be able to make of them. A generation or two may have grown up seeing computers doing things but with no real exposure to how the magic was done.

And then, there is the matter of how receptive someone who is largely unexposed to programming might be to more involved computing topics: lower-level languages, data structures and algorithms, or the workings of the machine itself. The mythology would have us believe that capable software developers needed the kind of broad exposure provided by the raw, unfiltered microcomputing experience of the 1980s to be truly comfortable and supremely effective at any level of a computing system, having sniffed out every last trick from their favourite microcomputer back in the day.

Those whose careers were built in those early years of microcomputing may now be seeing their retirement approaching, at least if they have not already made their millions and transitioned into some kind of role advising the next generation of similarly minded entrepreneurs. They may lament the scarcity of local companies in the technology sector, look at their formative years, and conclude that the system just doesn’t make them like they used to.

(Never mind that the system never made them like that in the first place: all those game-writing kids who may or may not have gone on to become capable, professional developers were clearly ignoring all the high-minded educational stuff that other people wanted them to study. Chess computers and robot mice immediately spring to mind.)

A Topic for Another Time

What we probably need to establish, then, is whether such views truly incorporate the wealth of experience present in society, or whether they merely reflect a narrow perspective where the obvious explanation may apply to some people’s experience but fails to explain the entire phenomenon. Here, we could examine teaching at a higher educational level than the compulsory school system, particularly because academic institutions were already performing and teaching computing for decades before controversies about the school computing curriculum arose.

We might contrast the casual, self-taught, experimental approach to learning about programming and computers with the structured approach favoured in universities, of starting out with high-level languages, logic, mathematics, and of learning about how the big systems achieved their goals. During my studies, I encountered people who had clearly enjoyed their formative experiences with microcomputers and who became impatient with the course of those studies, presumably wondering what value it provided to them.

Some of them quit after maybe only a year, whereas others gained an ordinary degree as opposed to graduating with honours, but hopefully they all went on to lucrative and successful careers, unconstrained and uncurtailed by their choice. I feel, though, that I might have missed some useful insights and experiences had I done the same. For now, let us go along with the idea that constructive exposure to technology throughout the formative education of the average person enhances their understanding of that technology, leading to a more sophisticated and creative population.

A Complete Experience

Returning to the article that prompted this one, we then encounter one educational ambition that has seemingly remained unaddressed by the Raspberry Pi. In microcomputing’s golden age, the motivated learner was ostensibly confronted with the full power of the machine from the point of switching on. They could supposedly study the lowest levels and interact with them using their own software, comfortable with their newly acquired knowledge of how the hardware works.

Disregarding the weird firmware situation with the Pi, it may be said that most Pi users will not be in quite the same position when running the Linux-based distribution deployed on most units as someone back in the 1980s with their BBC Micro, one of the inspirations for the Pi. This is actually a consequence of how something even cheaper than a microcomputer of an earlier era has gained sophistication to such an extent that it is architecturally one of those “big systems” that stuffy university courses covered.

In one regard, the difference in nature between the microcomputers that supposedly conferred developer prowess on a previous generation and the computers that became widespread subsequently, including single-board computers like the Pi, undermines the convenient narrative that microcomputers gave the earlier generation their perfect start. Systems built on processors like the 6502 and the Z80 did not have different privilege levels or memory management capabilities, leaving their users blissfully unaware of such concepts, even if the curious will have investigated the possibilities of interrupt handling and been exposed to any related processor modes, and even if some machines employed some kind of bank switching or simple memory paging.

Indeed, topics relevant to microcomputers from the second half of the 1980s are surprisingly absent from retrocomputing initiatives promoting themselves as educational aids. While the Commander X16 is mostly aimed at those seeking a modern equivalent of their own microcomputer learning environment, and many of its users may also end up mostly playing games, the Agon Light and related products are more aggressively pitched as being educational in nature. And yet, these projects cling to 8-bit processors, some inviting categorisation as being more like microcontrollers than microprocessors, as if the constraints of those processor architectures conferred simplicity. In fact, moving up from the 6502 to the 68000 or ARM made life easier in many ways for the learner.

When pitching a retrocomputing product at an audience with the intention of educating them about computing, also adding some glamour and period accuracy to the exercise, it would arguably be better to start with something from the mid-1980s like the Atari ST, providing a more scalable processor architecture and sensible instruction set, but also coupling the processor with memory management hardware. The Atari ST and Commodore Amiga didn’t have a memory management unit in their earliest models, only introducing one later to attempt a move upmarket.

Certainly, primary school children might not need to learn the details of all of this power – just learning programming would be sufficient for them – but as they progress into the later stages of their education, it would be handy to give them new challenges and goals, to understand how a system works where each program has its own resources and cannot readily interfere with other programs. Indeed, something with a RISC processor and memory management capabilities would be just as credible.
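The protection being described, where each program has its own memory and cannot readily interfere with others, is straightforward to demonstrate on any modern system. A minimal Python sketch, using ordinary process isolation to stand in for the hardware-enforced separation discussed here; nothing in it is specific to the machines mentioned:

```python
import multiprocessing

value = 42  # each process gets its own copy of this variable

def child():
    global value
    value = 99  # modifies only the child's private copy

if __name__ == "__main__":
    p = multiprocessing.Process(target=child)
    p.start()
    p.join()
    # The parent's copy is untouched: memory is per-process,
    # so one program cannot casually poke at another's state.
    print(value)  # prints 42
```

On a machine without memory management hardware, by contrast, the equivalent of `child` could simply overwrite the other program’s variable, or indeed the operating system itself.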

How “authentic” a product with a RISC processor and “big machine” capabilities would be, in terms of nostalgia and following on from earlier generations of products, might depend on how strict one decides to be about the whole exercise. But there is nothing inauthentic about a product with such a feature set. In fact, one came along as the de facto successor to the BBC Micro, and yet relatively little attention seems to be given to how it addressed some of the issues faced by the likes of the Pi.

Under The Hood

In assessing the extent of the Pi’s educational scope, the aforementioned article has this to say:

“Encouraging naive users to go under the hood is always going to be a bad idea on systems with other jobs to do.”

For most people, the Pi is indeed running many jobs and performing many tasks, just as any Linux system might do. And as with any “big machine”, the user is typically and deliberately forbidden from going “under the hood” and interfering with the normal functioning of the system, even if a Pi is only hosting a single user, unlike the big systems of the past with their obligations to provide a service to many users.

Of course, for most purposes, such a system has traditionally been more than adequate for people to learn about programming. But traditionally, low-level systems programming and going under the hood generally meant downtime, which on expensive systems was largely discouraged, confined to inconvenient times of day, and potentially undertaken at one’s peril. Things have changed somewhat since the old days, however, and we will return to that shortly. But satisfying the expectations of those wanting a responsive but powerful learning environment was a challenge encountered even as the 1980s played out.

With early 1980s microcomputers like the BBC Micro, several traits comprised the desirable package that people now seek to reproduce. The immediacy of such systems allowed users to switch on and interact with the computer in only a few seconds, as opposed to a lengthy boot sequence that possibly also involved inserting disks, never mind the experiences of the batch computing era that earlier computing students encountered. This interactivity lent these systems a degree of transparency, letting the user interact with the system and rapidly see the effects. Interactions were not necessarily constrained to certain facets of the system, allowing users to engage with the mechanisms “under the hood” with both positive and negative effects.

The Machine Operating System (MOS) of the BBC Micro and related machines, such as the Acorn Electron and BBC Master series, provided well-defined interfaces to extend the operating system, to introduce event or interrupt handlers, to deliver utilities in the form of commands, and to deliver languages and applications. Such capabilities allowed users to explore the provided functionality and the framework within which it operated. Users could also ignore the operating system’s facilities and more or less take full control of the machine, slipping out of one set of imposed constraints only to be bound by another, potentially more onerous set of constraints.

Earlier Experiences

Much is made of the educational impact of systems like the BBC Micro by those wishing to recapture some of the magic on more capable systems, but relatively few people seem to be curious about how such matters were tackled by the successor to the BBC Micro and BBC Master ranges: Acorn’s Archimedes series. As a step away from earlier machines, the Archimedes offers an insight into how simplicity and immediacy can still be accommodated on more powerful systems, through native support for familiar technology such as BASIC, compatibility layers for old applications, and system emulators for those who need to exercise some of the new hardware in precisely the way that worked on the older hardware.

When the Archimedes was delivered, the original Arthur operating system largely provided the recognisable BBC Micro experience. Starting up showed a familiar welcome message, and even if it may have dropped the user at a “supervisor” prompt as opposed to BASIC, something which did also happen occasionally on earlier machines, typing “BASIC” got the user the rest of the way to the environment they had come to expect. This conferred the ability to write programs exercising the graphical and audio capabilities of the machine to a substantial degree, including access to assembly language, albeit of a different and rather superior kind to that of the earlier machines. Even writing directly to screen memory worked, albeit at a different location and with a more sensible layout.

Under Arthur, users could write programs largely as before, with differences attributable to the change in capabilities provided by the new machines. Even though errant pokes to exotic memory locations might have been trapped and handled by the system’s enhanced architecture, it was still possible to write software that ran in a privileged mode, installed interrupt handlers, and produced clever results, at the risk of freezing or crashing the system. When Arthur was superseded by RISC OS, the desktop interface became the default experience, hiding the immediacy and the power of the command prompt and BASIC, but such facilities remained only a keypress away and could be configured as the default with perhaps only a single command.

RISC OS exposed the tensions between the need for a more usable and generally accessible interface, potentially doing many things at once, and the desire to be able to get under the hood and poke around. It was possible to write desktop applications in BASIC, but this was not really done in a particularly interactive way, and programs needed to make system calls to interact with the rest of the desktop environment, even though the contents of windows were painted using the classic BASIC graphics primitives otherwise available to programs outside the desktop. Desktop programs were also expected to cooperate properly with each other, potentially hanging the system if not written correctly.

The Maestro music player in RISC OS, written in BASIC. Note that the !RunImage file is a BASIC program, with the somewhat compacted code shown in the text editor.

A safer option, for those wanting the classic experience and to leverage their hard-earned knowledge, was to forget about the desktop and most of the newer capabilities of the Archimedes and to enter the BBC Micro emulator, 65Host, available on one of the supplied application disks, writing software just as before, and then running that software or any other legacy software of choice. Apart from providing file storage to the emulator and bearing all the work of the emulator itself, this did not really exercise the newer machine, but it still provided a largely authentic, traditional experience. One could presumably crash the emulated machine, but this should merely have terminated the emulator.

An intermediate form of legacy application support was also provided. 65Tube, with “Tube” referencing an interfacing paradigm used by the BBC Micro, allowed applications written against documented interfaces to run under emulation but accessing facilities in the native environment. This mostly accommodated things like programming language environments and productivity applications and might have seemed superfluous alongside the provision of a more comprehensive emulator, but it potentially allowed such applications to access capabilities that were not provided on earlier systems, such as display modes with greater resolutions and more colours, or more advanced filesystems of different kinds. Importantly, from an educational perspective, these emulators offered experiences that could be translated to the native environment.

65Tube running in MODE 15, utilising many more colours than normally available on earlier Acorn machines.

Although the Archimedes drifted away from the apparent simplicity of the BBC Micro and related machines, most users did not fully understand the software stack on such earlier systems, anyway. However, despite the apparent sophistication of the BBC Micro’s successors, various aspects of the software architecture were, in fact, preserved. Even the graphical user interface on the Archimedes was built upon many familiar concepts and abstractions. The difficulty for users moving up to the newer system arose upon finding that much of their programming expertise and effort had to be channelled into a software framework that confined the activities of their code, particularly in the desktop environment. One kind of framework for more advanced programs had merely been replaced by others.

Finding Lessons for Today

The way the Archimedes attempted to accommodate the expectations cultivated by earlier machines does not necessarily offer a convenient recipe to follow today. However, the solutions it offered should draw our attention to some other considerations. One is the level of safety in the environment being offered: it should be possible to interact with the system without bringing it down or causing havoc.

In that respect, the Archimedes provided a sandboxed environment in the form of the 65Host emulator, but this was only really viable for running old software, as indeed was the intention. It also did not multitask, although other emulators eventually did. The more integrated 65Tube emulator also did not multitask, although later enhancements to RISC OS such as task windows did allow it to multitask to a degree.


65Tube running in a task window. This relies on the text editing application and unfortunately does not support fancy output.

Otherwise, the native environment offered all the familiar tools and the desired level of power, but along with them plenty of risks for mayhem. Thus, a choice between safety and concurrency was forced upon the user. (Aside from Arthur and RISC OS, there was also Acorn’s own Unix port, RISC iX, which had similar characteristics to the kind of Linux-based operating system typically run on the Pi. You could, in principle, run a BBC Micro emulator under RISC iX, just as people run emulators on the Pi today.)

Today, we could actually settle for the same software stack on some Raspberry Pi models, with all its advantages and disadvantages, by running an updated version of RISC OS on such hardware. The bundled emulator support would be missing, but for those wanting to go under the hood and also take advantage of the hardware, it is unlikely that they would be so interested in replicating the original BBC Micro experience with perfect accuracy, instead merely seeking to replicate the same kind of experience.

Another consideration the Archimedes raises is the extent to which an environment may take advantage of the host system, and it is this consideration that potentially has the most to offer in formulating modern solutions. We may normally be completely happy running a programming tool in our familiar computing environments, where graphical output, for example, may be confined to a window or occasionally shown in full-screen mode. Indeed, something like a Raspberry Pi need not have any rigid notion of what its “native” graphical capabilities are, and the way a framebuffer is transferred to an actual display is normally not of any real interest.

The learning and practice of high-level programming can be adequately performed in such a modern environment, with the user safely confined by the operating system and mostly unable to bring the system down. However, it might not adequately expose the user to those low-level “under the hood” concepts that they seem to be missing out on. For example, we may wish to introduce the framebuffer transfer mechanism as some kind of educational exercise, letting the user appreciate how the text and graphics plotting facilities they use lead to pixels appearing on their screen. On the BBC Micro, this would have involved learning about how the MOS configures the 6845 display controller and the video ULA to produce a usable display.
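The kind of exercise being described might look something like the following sketch, which models the pixel addressing scheme of a bitmapped BBC Micro display mode (MODE 0, the 640×256 two-colour mode). Screen memory in that mode is arranged as rows of 8×8-pixel character cells, so plotting a point means computing a byte offset and bit position rather than indexing a simple linear array. The layout constants match the real MODE 0, but this is an illustrative model written for clarity, not MOS code.

```python
# Illustrative model of BBC Micro MODE 0 screen memory addressing.
# MODE 0: 640x256 pixels, 1 bit per pixel, 20K of screen memory.
# Memory is laid out as rows of 8x8-pixel character cells, each cell
# occupying 8 consecutive bytes (one per scanline within the cell).

WIDTH, HEIGHT = 640, 256
SCREEN_SIZE = WIDTH * HEIGHT // 8  # 20480 bytes

def pixel_address(x, y):
    """Return (byte offset, bit number) for pixel (x, y)."""
    cell_row = y // 8          # which character row
    cell_col = x // 8          # which character cell within that row
    scanline = y % 8           # which scanline within the cell
    offset = cell_row * 640 + cell_col * 8 + scanline
    bit = 7 - (x % 8)          # leftmost pixel is the most significant bit
    return offset, bit

def plot(screen, x, y):
    """Set a single pixel in the modelled screen memory."""
    offset, bit = pixel_address(x, y)
    screen[offset] |= 1 << bit

screen = bytearray(SCREEN_SIZE)
plot(screen, 0, 0)      # top-left pixel: first byte, bit 7
plot(screen, 639, 255)  # bottom-right pixel: last byte, bit 0
```

Working through an exercise like this shows why text output on such machines naturally operated on 8×8 cells, and why arbitrary pixel plotting carried an addressing cost.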

The configuration of such a mechanism typically resides at a fairly low level in the software stack, out of the direct reach of the user, but allowing a user to reconfigure such a mechanism would risk introducing disruption to the normal functioning of the system. Therefore, a way is needed to either expose the mechanism safely or to simulate it. Here, technology’s steady progression does provide some possibilities that were either inconvenient or impossible on an early ARM system like the Archimedes, notably virtualisation support, allowing us to effectively run a simulation of the hardware efficiently on the hardware itself.
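Short of full virtualisation, the transfer mechanism itself can also be simulated at user level. The sketch below models the “scan-out” step that a display controller performs each frame, reading packed framebuffer bytes and expanding them through a palette into coloured pixels, which is broadly what the 6845 and video ULA combination did in hardware. It assumes a simple linear row layout (unlike the character-cell layout of the real machines) and all names are illustrative.

```python
# Simulated "scan-out": what a display controller does each frame,
# reading packed framebuffer bytes and producing coloured pixels.
# A 1 bit-per-pixel framebuffer is expanded through a two-entry
# palette into (r, g, b) tuples, one row at a time.

BLACK, WHITE = (0, 0, 0), (255, 255, 255)

def scan_out(framebuffer, width, palette=(BLACK, WHITE)):
    """Yield rows of (r, g, b) pixels from a packed 1bpp framebuffer."""
    bytes_per_row = width // 8
    for row_start in range(0, len(framebuffer), bytes_per_row):
        row = []
        for byte in framebuffer[row_start:row_start + bytes_per_row]:
            for bit in range(7, -1, -1):  # most significant bit is leftmost
                row.append(palette[(byte >> bit) & 1])
        yield row

# A tiny 16x2 framebuffer: top row all set, bottom row alternating.
fb = bytes([0xFF, 0xFF, 0xAA, 0xAA])
rows = list(scan_out(fb, 16))
```

A learner could safely replace or misconfigure any part of such a simulated pipeline and observe the garbled display that results, without any risk to the host system.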

Thus, we might develop our own framebuffer driver and fire up a virtual machine running our operating system of choice, deploying the driver and assessing the consequences provided by a simulation of that aspect of the hardware. Of course, this would require support in the virtual environment for that emulated element of the hardware. Alternatively, we might allow some kind of restrictive access to that part of the hardware, risking the failure of the graphical interface if misconfiguration occurred, but hopefully providing some kind of fallback control mechanism, like a serial console or remote login, to restore that interface and allow the errant code to be refined.

A less low-level component that might invite experimentation could be a filesystem. The MOS in the BBC Micro and related machines provided filesystem (or filing system) support in the form of service ROMs, and in RISC OS on the Archimedes such support resides in the conceptually similar relocatable modules. Given the ability of normal users to load such modules, it was entirely possible for a skilled user to develop and deploy their own filesystem support, with the associated risks of bringing down the system. Linux does have arguably “glued-on” support for unprivileged filesystem deployment in the form of FUSE, but there might be other components in the system worthy of modification or replacement, and thus the virtual machine might need to come into play again to allow the desired degree of experimentation.

A Framework for Experimentation

One can, however, envisage a configurable software system where a user session might involve a number of components providing the features and services of interest, and where a session might be configured to exclude or include certain typical or useful components, to replace others, and to allow users to deploy their own components in a safe fashion. Alongside such activities, a normal system could be running, providing access to modern conveniences at a keypress or the touch of a button.

We might want the flexibility to offer something resembling 65Host, albeit without the emulation of an older system and its instruction set, for a highly constrained learning environment where many aspects of the system can be changed for better or worse. Or we might want something closer to 65Tube, again without the emulation, acting mostly as a “native” program but permitting experimentation on a few elements of the experience. An entire continuum of possibilities could be supported by a configurable framework, allowing users to progress from a comfortable environment with all of the expected modern conveniences, gradually seeing each element removed and then replaced with their own implementation, until arriving in an environment where they have the responsibility at almost every level of the system.

In principle, a modern system aiming to provide an “under the hood” experience merely needs to simulate that experience. As long as the user experiences the same general effects from their interactions, the environment providing the experience can still isolate a user session from the underlying system and avoid unfortunate consequences from that misbehaving session. Purists might claim that as long as any kind of simulation is involved, the user is not actually touching the hardware and is therefore not engaging in low-level development, even if the code they are writing would be exactly the code that would be deployed on the hardware.

Systems programming can always be done by just writing programs and deploying them on the hardware or in a virtual machine to see if they work, resetting the system and correcting any mistakes, which is probably how most programming of this kind is done even today. However, a suitably configurable system would allow a user to iteratively and progressively deploy a customised system, and to work towards deploying a complete system of their own. With the final pieces in place, the user really would be exercising the hardware directly, finally silencing the purists.

Naturally, given my interest in microkernel-based systems, the above concept would probably rest on the use of a microkernel, with much more of a blank canvas available to define the kind of system we might like, as opposed to more prescriptive systems with monolithic kernels and much more of the basic functionality squirrelled away in privileged kernel code. Perhaps the only difficult elements of a system to open up to user modification, those that cannot also be easily delegated or modelled by unprivileged components, would be those few elements confined to the microkernel and performing fundamental operations such as directly handling interrupts, switching execution contexts (threads), writing memory mappings to the appropriate registers, and handling system calls and interprocess communications.

Even so, many aspects of these low-level activities are exposed to user-level components in microkernel-based operating systems, leaving few mysteries remaining. For those advanced enough to progress to kernel development, traditional systems programming practices would surely be applicable. But long before that point, motivated learners will have had plenty of opportunities to get “under the hood” and to acquire a reasonable understanding of how their systems work.

A Conclusion of Sorts

As for why people are not widely using the Raspberry Pi to explore low-level computing, the challenge of facilitating such exploration when the system has “other jobs to do” certainly seems like a reasonable excuse, especially given the choice of operating system deployed on most Pi devices. One could remove those “other jobs” and run RISC OS, of course, putting the learner in an unfamiliar and more challenging environment, perhaps giving them another computer to use at the same time to look things up on the Internet. Or one could adopt a different software architecture, but that would involve an investment in software that few organisations can be bothered to make.

I don’t know whether the University of Cambridge has seen better-educated applicants in recent years as a result of Pi proliferation, or whether today’s applicants are as perplexed by low-level concepts as those from the pre-Pi era. But then, there might be a lesson to be learned about applying some rigour to technological interventions in society. After all, there were some who justifiably questioned the effectiveness of rolling out microcomputers in schools, particularly when teachers have never really been supported in their work, as more and more is asked of them by their political overlords. Investment in people and their well-being is another thing that few organisations can be bothered to make.

A Tall Tale of Denied Glory

Monday, March 4th, 2024

I seem to be spending too much time looking into obscure tales from computing history, but continuing an earlier tangent from a recent article, noting the performance of different computer systems at the end of the 1980s and the start of the 1990s, I found myself evaluating one of those Internet rumours that probably started to do the rounds about thirty years ago. We will get to that rumour – a tall tale, indeed – in a moment. But first, a chart that I posted in an earlier article:


Performance evolution of the Archimedes and various competitors

As this nice chart indicates, comparing processor performance in computers from Acorn, Apple, Commodore and Compaq, different processor families bestowed a competitive advantage on particular systems at various points in time. For a while, Acorn’s ARM2 processor gave Acorn’s Archimedes range the edge over much more expensive systems using the Intel 80386, showcased in Compaq’s top-of-the-line models, as well as offerings from Apple and Commodore, these relying on Motorola’s 68000 family. One can, in fact, claim that a comparison between ARM-based systems and 80386-based systems would have been unfair to Acorn: more similarly priced systems from PC-compatible vendors would have used the much slower 80286, making the impact of the ARM2 even more remarkable.

Something might be said about the evolution of these processor families, what happened after 1993, and the introduction of later products. Such topics are difficult to address adequately for a number of reasons, principally the absence of appropriate benchmark results and the evolution of benchmarking to more accurately reflect system performance. Acorn never published SPEC benchmark figures, nor did ARM (at the time, at least), and any given benchmark as an approximation to “real-world” computing activities inevitably drifts away from being an accurate approximation as computer system architecture evolves.

However, in another chart I made to cover Acorn’s Unix-based RISC iX workstations, we can consider another range of competitors and quite a different situation. (This chart also shows off the nice labelling support in gnuplot that wasn’t possible with the currently disabled MediaWiki graph extension.)


Performance of the Acorn R-series and various competitors in approximate chronological order of introduction: a chart produced by gnuplot and converted from SVG to PNG for Wikipedia usage.

Now, this chart only takes us from 1989 until 1992, which will not satisfy anyone wondering what happened next in the processor wars. But it shows the limits of Acorn’s ability to enter the lucrative Unix workstation market with a processor that was perceived to be rather fast in the world of personal computers. Acorn’s R140 used the same ARM2 processor introduced in the Archimedes range, but even at launch this workstation proved to be considerably slower than somewhat more expensive workstation models from Digital and Sun employing MIPS and SPARC processors respectively.

Fortunately for Acorn, adding a cache to the ARM2 (plus a few other things) to make the ARM3 unlocked a considerable boost in performance. Although the efficient utilisation of available memory bandwidth had apparently been a virtue for the ARM designers, coupling the processor to memory performance had put a severe limit on overall performance. Meanwhile, the designers of the MIPS and SPARC processor families had started out with a different perspective and had considered cache memory almost essential in the kind of computer architectures that would be using these processors.

Acorn didn’t make another Unix workstation after the R260, released in 1990, for reasons that could be explored in depth another time. One of them, however, was that ARM processor design had been spun out to a separate company, ARM Limited, and appeared to be stalling in terms of delivering performance improvements at the same rate as previously, or indeed at the same rate as other processor families. Acorn did introduce the ARM610 belatedly in 1994 in its Risc PC, which would have been more amenable to running Unix, but by then the company was arguably beginning the process of unravelling for another set of reasons to be explored another time.

So, That Tall Tale

It is against this backdrop of competitive considerations that I now bring you the tall tale to which I referred. Having been reminded of the Atari Transputer Workstation by a video about the Transputer – another fascinating topic and thus another rabbit hole to explore – I found myself investigating Atari’s other workstation product: a Unix workstation based on the Motorola 68030 known as the Atari TT030 or TT/X, augmenting the general Atari TT product with the Unix System V operating system.

On the chart above, a 68030-based system would sit at a similar performance level to Acorn’s R140, so ignoring aspirational sentiments about “high-end” performance and concentrating on a price of around $3000 (with a Unix licence probably adding to that figure), there were some hopes that Atari’s product would reach a broad audience:

As a UNIX platform, the affordable TT030 may leapfrog machines from IBM, Apple, NeXT, and Sun, as the best choice for mass installation of UNIX systems in these environments.

As it turned out, Atari released the TT without Unix in 1990 and only eventually shipped a Unix implementation in around 1992, discontinuing the endeavour not long afterwards. But the tall tale is not about Atari: it is about their rivals at Commodore and some bizarre claims that seem to have drifted around the Internet for thirty years.

Like Atari and Acorn, Commodore also had designs on the Unix workstation market. And like Atari, Commodore had a range of microcomputers, the Amiga series, based on the 68000 processor family. So, the natural progression for Commodore was to design a model of the Amiga to run Unix, eventually giving us the Amiga 3000UX, priced from around $5000, running an implementation of Unix System V Release 4 branded as “Amiga Unix”.

Reactions from the workstation market were initially enthusiastic but later somewhat tepid. Commodore’s product, although delivered in a much more timely fashion than Atari’s, will also have found itself sitting at a similar performance level to Acorn’s R140 but positioned chronologically amongst the group including Acorn’s much faster R260 and the 80486-based models. It goes without saying that Atari’s eventual product would have been surrounded by performance giants by the time customers could run Unix on it, demonstrating the need to bring products to market on time.

So what is this tall tale, then? Well, it revolves around this not entirely coherent remark, entered by some random person twenty-one years ago on the emerging resource known as Wikipedia:

The Amiga A3000UX model even got the attention of Sun Microsystems, but unfortunately Commodore did not jump at the A3000UX.

If you search the Web for this, including the Internet Archive, the most you will learn is that Sun Microsystems were supposedly interested in adopting the Amiga 3000UX as a low-cost workstation. But the basis of every report of this supposed interest always seems to involve “talk” about a “deal” and possibly “interest” from unspecified “people” at Sun Microsystems. And, of course, the lack of any eventual deal is often blamed on Commodore’s management and perennial villain of the Amiga scene…

There were talks of Sun Microsystems selling Amiga Unix machines (the prototype Amiga 3500) as a low-end Unix workstations under their brand, making Commodore their OEM manufacturer. This deal was let down by Commodore’s Mehdi Ali, not once but twice and finally Sun gave up their interest.

Of course, back in 2003, anything went on Wikipedia. People thought “I know this!” or “I heard something about this!”, clicked the edit link, and scrawled away, leaving people to tidy up the mess two decades later. So, I assume that this tall tale is just the usual enthusiast community phenomenon of believing that a favourite product could really have been a contender, that legitimacy could have been bestowed on their platform, and that their favourite company could have regained some of its faded glory. Similar things happened as Acorn went into decline, too.

Picking It All Apart

When such tales appeal to both intuition and even-handed consideration, they tend to retain a veneer of credibility: of being plausible and therefore possibly true. I cannot really say whether the tale is actually true, only that there is no credible evidence of it being true. However, it is still worth evaluating the details within such tales on their merits and determining whether the substance really sounds particularly likely at all.

So, why would Sun Microsystems be interested in a Commodore workstation product? Here, it helps to review Sun’s own product range during the 1980s, to note that Sun had based its original workstation on the Motorola 68000 and had eventually worked up the 68000 family to the 68030 in its Sun-3 products. Indeed, the final Sun-3 products were launched in 1989, not too long before the Amiga 3000UX came to market. But the crucial word in the previous sentence is “final”: Sun had adopted the SPARC processor family and had started introducing SPARC-based models two years previously. Like other workstation vendors, Sun had started to abandon Motorola’s processors, seeking better performance elsewhere.

A June 1989 review in Personal Workstation magazine is informative, featuring the 68030-based Sun 3/80 workstation alongside Sun’s SPARCstation 1. For diskless machines, the Sun 3/80 came in at around $6000 whereas the SPARCstation 1 came in at around $9000. For that extra $3000, the buyer was probably getting around four times the performance, and it was quite an incentive for Sun’s customers and developers to migrate to SPARC on that basis alone. But even for customers holding on to their older machines and wanting to augment their collection with some newer models, Sun was offering something not far off the “low-cost” price of an Amiga 3000UX with hardware that was probably more optimised for the role.

Sun will have supported customers using these Sun-3 models for as long as support for SunOS was available, eventually introducing Solaris which dropped support for the 68000 family architecture entirely. Just like other Unix hardware vendors, once a transition to various RISC architectures had been embarked upon, there was little enthusiasm for going back and retooling to support the Motorola architecture again. And, after years of resisting, even Motorola was embracing RISC with its 88000 architecture, tempting companies like NeXT and Apple to consider trading up from the 68000 family: an adventure that deserves its own treatment, too.

So, under what circumstances would Sun have seriously considered adopting Commodore’s product? On the face of it, the potential compatibility sounds enticing, and Commodore will have undoubtedly asserted that they had experience at producing low-cost machines in volume, appealing to Sun’s estimate, expressed in the Personal Workstation review, that the customer base for a low-cost workstation would double for every $1000 drop in price. And surely Sun would have been eager to close the doors on manufacturing a product line that was going to be phased out sooner or later, so why not let Commodore keep making low-cost models to satisfy existing customers?

First of all, we might well doubt any claims to be able to produce workstations significantly cheaper than those already available. The Amiga 3000UX was, as noted, only $1000 or so cheaper than the Sun 3/80. Admittedly, it had a hard drive as standard, making the comparison slightly unfair, but then the Sun 3/80 was already around in 1989, meaning that to be fair to that product, we would need to see how far its pricing would have fallen by the time the Amiga 3000UX became available. Commodore certainly had experience in shipping large volumes of relatively inexpensive computers like the Amiga 500, but they were not shipping workstation-class machines in large quantities, and the eventual price of the Amiga 3000UX indicates that such arguments about volume do not automatically confer low cost onto more expensive products.

Even if we imagine that the Amiga 3000UX had been successfully cost-reduced and made more competitive, we then need to ask what benefits there would have been for the customer, for developers, and for Sun in selling such a product. It seems plausible to imagine customers with substantial investments in software that only ran on Sun’s older machines, who might have needed newer, compatible hardware to keep that software running. Perhaps, in such cases, the suppliers of such software were not interested or capable of porting the software to the SPARC processor family. Those customers might have kept buying machines to replace old ones or to increase the number of “seats” in their environment.

But then again, we could imagine that such customers, having multiple machines and presumably having them networked together, could have benefited from augmenting their old Motorola machines with new SPARC ones, potentially allowing the SPARC machines to run a suitable desktop environment and to use the old applications over the network. In such a scenario, the faster SPARC machines would have been far preferable as workstations, and with the emergence of the X Window System, a still lower-cost alternative would have been to acquire X terminals instead.

We might also question how many software developers would have been willing to abandon their users on an old architecture when it had been clear for some time that Sun would be transitioning to SPARC. Indeed, by producing versions of the same operating system for both architectures, one can argue that Sun was making it relatively straightforward for software vendors to prepare for future products and the eventual deprecation of their old products. Moreover, given the performance benefits of Sun’s newer hardware, developers might well have been eager to complete their own transition to SPARC and to entice customers to follow rapidly, if such enticement was even necessary.

Consequently, if there were customers stuck on Sun’s older hardware running applications that had been effectively abandoned, one could be left wondering what the scale of the commercial opportunity was in selling those customers more of the same. From a purely cynical perspective, given the idiosyncrasies of Sun’s software platform from time to time, it is quite possible that such customers would have struggled to migrate to another 68000 family Unix platform. And even without such portability issues and with the chance of running binaries on a competing Unix, the departure of many workstation vendors to other architectures may have left relatively few appealing options. The most palatable outcome might have been to migrate to other applications instead and to then look at the hardware situation with fresh eyes.

And we keep needing to return to that matter of performance. A 68030-based machine was arguably unappealing, like 80386-based systems, clearing the bar for workstation computing but not by much. If the cost of such a machine could have been reduced to an absurdly low price point then one could have argued that it might have provided an accessible entry point for users into a vendor’s “ecosystem”. Indeed, I think that companies like Commodore and Acorn should have put Unix-like technology in their low-end products, harmonising them with higher-end products actually running Unix, and having their customers gradually migrate as more powerful computers became cheaper.

But for workstations running what one commentator called “wedding-cake configurations” of the X Window System, graphical user interface toolkits, and applications, processors like the 68030, 80386 and ARM2 were going to provide a disappointing experience whatever the price. Meanwhile, Sun’s existing workstations were a mature product with established peripherals and accessories. Any cost-reduced workstation would have been something distinct from those existing products, impaired in performance terms and yet unable to make use of things like graphics accelerators which might have made the experience tolerable.

That then raises the question of the availability of the 68040. Could Commodore have boosted the Amiga 3000UX with that processor, bringing it up to speed with the likes of the ARM3-based R260 and 80486-based products, along with the venerable MIPS R2000 and early SPARC processors? Here, we can certainly answer in the affirmative, but then we must ask what this would have done to the price. The 68040 was a new product, arriving during 1990, and although competitively priced relative to the SPARC and 80486, it was still quoted at around $800 per unit, featuring in Apple’s Macintosh range in models that initially, in 1991, cost over $5000. Such a cost increase would have made it hard to drive down the system price.

In the chart above, the HP 9000/425t represents possibly the peak of 68040 workstation performance – “a formidable entry-level system” – costing upwards of $9000. But as workstation performance progressed, represented by new generations of DECstations and SPARCstations, the 68040 stalled, unable to be clocked significantly faster or otherwise see its performance scaled up. Prominent users such as Apple jumped ship and adopted PowerPC, with Motorola itself joining that effort! Motorola did return to the 68000 family after abandoning further development of the 88000, delivering the 68060 before finally consigning the architecture to the embedded realm.

In the end, even if a competitively priced and competitively performing workstation had been deliverable by Commodore, would it have been in Sun’s interests to sell it? Compatibility with older software might have demanded the continued development of SunOS and the extension of support for older software technologies. SunOS might have needed porting to Commodore’s hardware, or if Sun were content to allow Commodore to add any necessary provision to its own Unix implementation, then porting of those special Sun technologies would have been required. One can question whether the customer experience would have been satisfactory in either case. And for Sun, the burden of prolonging the lifespan of products that were no longer the focus of the company might have made the exercise rather unattractive.

Companies can always choose for themselves how much support they might extend to their different ranges of products. Hewlett-Packard maintained several lines of workstation products and continued to maintain a line of 68030 and 68040 workstations even after introducing their own PA-RISC processor architecture. After acquiring Apollo Computer, who had also begun to transition to their own RISC architecture from the 68000 family, HP arguably had an obligation to Apollo’s customers and thus renewed their commitment to the Motorola architecture, particularly since Apollo’s own RISC architecture, PRISM, was shelved by HP in favour of PA-RISC.

It is perhaps in the adoption of Sun technology that we might establish the essence of this tale. Amiga Unix was provided with Sun’s OPEN LOOK graphical user interface, and this might have given people reason to believe that there was some kind of deeper alliance. In fact, the alliance was really between Sun and AT&T, attempting to define Unix standards and enlisting the support of Unix suppliers. In seeking to adhere most closely to what could be regarded as traditional Unix – that defined by its originator, AT&T – Commodore may well have been picking technologies that also happened to be developed by Sun.

This tale rests on the assumption that Sun was not able to drive down the prices of its own workstations and that Commodore was needed to lead the way. Yet workstation prices were already being driven down by competition. Already by May 1990, Sun had announced the diskless SPARCstation SLC at the magic $5000 price point, although its lowest-cost colour workstation was reportedly the SPARCstation IPC at a much more substantial $10000. Nevertheless, its competitors were quite able to demonstrate colour workstations at reasonable prices, and eventually Sun followed their lead. Meanwhile, the Amiga 3000UX cost almost $8000 when coupled with a colour monitor.

With such talk of commodity hardware, it must not be forgotten that Sun was not without other options. For example, the company had already delivered SunOS on the Sun386i workstation in 1988. Although rather expensive, costing $10000, and not exactly a generic PC clone, it did support PC architecture standards. This arguably showed the way if the company were to target a genuine commodity hardware platform, and eventually Sun followed this path when making its Solaris operating system available for the Intel x86 architecture. But had Sun had a desperate urge to target commodity hardware back in 1990, partnering with a PC clone manufacturer would have been a more viable option than repurposing an Amiga model. That clone manufacturer could have been Commodore, too, but other choices would have been more convincing.

Conclusions and Reflections

What can we make of all of this? An idle assertion with a veneer of plausibility and a hint of glory denied through the notoriously poor business practices of the usual suspects. Well, we can obviously see that nothing is ever as simple as it might seem, particularly if we indulge every last argument and pursue every last avenue of consideration. And yet, the matter of Commodore making a Unix workstation and Sun Microsystems being “interested in rebadging the A3000UX” might be as simple as imagining a rather short meeting where Commodore representatives present this opportunity and Sun’s representatives firmly but politely respond that the door has been closed on a product range not long for retirement. Thanks but no thanks. The industry has moved on. Did you not get that memo?

Given that there is the essence of a good story in all of this, I consulted what might be the first port of call for Commodore stories: David Pleasance’s book, “Commodore: The Inside Story”. Sadly, I can find no trace of any such interaction, with Unix references relating to a much earlier era and Commodore’s Z8000-based Unix machine, the unreleased Commodore 900. Yet, had such a bungled deal occurred, I am fairly sure that this book would lay out the fiasco in plenty of detail. Even Dave Haynie’s chapter, which covers development of the Amiga 3000 and subsequent projects, fails to mention any such dealings. Perhaps the catalogue of mishaps at Commodore is so extensive that a lucrative agreement with one of the most prominent corporations in 1990s computing does not merit a mention.

Interestingly, the idea of a low-cost but relatively low-performance 68030-based workstation from a major Unix workstation vendor did arrive in 1989 in the form of the Apollo DN2500 from Hewlett-Packard, costing $4000. Later on, Commodore would apparently collaborate with HP on chipset development, with this being curtailed by Commodore’s bankruptcy. Commodore were finally moving off the 68000 family architecture themselves, all rather too late to turn their fortunes around. Did Sun need a competitive 68040-based workstation? Although HP’s 9000/425 range was amongst the top sellers, Sun was doing nicely enough with its SPARC-based products, shipping over twice as many workstations as HP.

While I consider this tall tale to be nothing more than folklore, like the reminiscences of football supporters whose team always had a shot at promotion to the bigger league every season, “not once but twice” has a specificity that either suggests a kernel of truth or is a clever embellishment to sustain a group’s collective belief in something that never was. Should anyone know the real story, please point us to the documentation. Or, if there never was any paper trail but you happened to be there, please write it up and let us all know. But please don’t just go onto Wikipedia and scrawl it in the tradition of “I know this!”

For the record, I did look around to see if anyone recorded such corporate interactions on Sun’s side. That yielded no evidence, but I did find something else that was rather intriguing: hints that Sun may have been advised to try and acquire Acorn or ARM. Nothing came from that, of course, but at least this is documentation of an interaction in the corporate world. Of stories about something that never happened, it might also be a more interesting one than the Commodore workstation that Sun never got to rebadge.

Update: I did find a mention of Sun Microsystems and Unix International featuring the Amiga 3000UX on their exhibition stands at the Uniforum conference in early 1991. As noted above, Sun had an interest in promoting adoption of OPEN LOOK, and Unix International – the Sun/AT&T initiative to define Unix standards – had an interest in promoting System V Release 4 and, to an extent, OPEN LOOK. So, while the model may have “even got the attention of Sun Microsystems”, it was probably just a convenient demonstration of vendor endorsement of Sun’s technology, coming from a vendor that admitted its offering was not “competitive with Sun”.

Another update: My attention was brought to an article by “datagubben” Carl Svensson entitled “The Amiga 3000 UNIX and Sun Microsystems: Deal or no deal?”. It is worth reading for an independent view of the same tall tale, but with some effort made to contact people who might have been familiar with the events concerned. I had to follow up with Carl over one observation he made about the UnixWorld review cited in the Amiga Unix article on Wikipedia:

“The Sparcstation IPC model referenced in the review was launched at $9,995 in 1990, and it seems unlikely to have dropped a full $3,000 – almost a third of its original price – in a mere year.”

Unlikely as it may seem, Sun did indeed make substantial price cuts during 1991. In a highly competitive market, with DEC introducing DECstation models at attractive prices and IBM making an entrance with its RS/6000 range, including low-end models, Sun had to discount existing models and to introduce new ones to keep up. The workstation market was certainly not a place where easy money could be made by just showing up, particularly by a company like Commodore with far fewer resources than the corporations already engaged in such intense competition.

How to deal with Wikipedia’s broken graphs and charts by avoiding Web technology escalation

Thursday, February 15th, 2024

Almost a year ago, a huge number of graphs and charts on Wikipedia became unviewable because a security issue had been identified in the underlying JavaScript libraries employed by the MediaWiki Graph extension, necessitating this extension’s deactivation. Since then, much effort has been expended formulating a strategy to deal with the problem, although it does not appear to have brought about any kind of workaround, let alone a solution.

The Graph extension provided a convenient way of embedding data into a MediaWiki page that would then be presented as, say, a bar chart. Since it is currently disabled on Wikipedia, the documentation fails to show what these charts looked like, but they were fairly basic, clean and not unattractive. Fortunately, the Internet Archive has a record of older Wikipedia articles, such as one relevant to this topic, and it is able to show such charts from the period before the big switch-off:

Performance evolution of the Archimedes and various competitors: a chart produced by the Graph extension

The syntax for describing a chart suffered somewhat from following the style that these kinds of extensions tend to have, but it was largely tolerable. Here is an example:

{{Image frame
 | caption=Performance evolution of the Archimedes and various competitors
 | content = {{Graph:Chart
 | width=400
 | xAxisTitle=Year
 | yAxisTitle=VAX MIPS
 | legend=Product and CPU family
 | type=rect
 | x=1987,1988,1989,1990,1991,1992,1993
 | y1=2.8,2.8,2.8,10.5,13.8,13.8,15.0
 | y2=0.5,1.4,2.8,3.6,3.6,22.2,23.3
 | y3=2.1,3.4,6.6,14.7,19.2,30,40.3
 | y4=1.6,2.1,3.3,6.1,8.3,10.6,13.1
 | y1Title=Archimedes (ARM2, ARM3)
 | y2Title=Amiga (68000, 68020, 68030, 68040)
 | y3Title=Compaq Deskpro (80386, 80486, Pentium)
 | y4Title=Macintosh II, Quadra/Centris (68020, 68030, 68040)
}}
}}

Unfortunately, rendering this data as a collection of bars on two axes relied on a library doing all kinds of potentially amazing but largely superfluous things. And, of course, this introduced the aforementioned security issue that saw the whole facility get switched off.

After a couple of months, I decided that I wasn’t going to see my own contributions diminished by a lack of any kind of remedy, and so I did the sensible thing: use an established tool to generate charts, and upload the charts plus source data and script to Wikimedia Commons, linking the chart from the affected articles. The established tool of choice for this exercise was gnuplot.

Migrating the data was straightforward and simply involved putting the data into a simpler format. Here is an excerpt of the data file needed by gnuplot, with some items updated from the version shown above:

# Performance evolution of the Archimedes and various competitors (VAX MIPS by year)
#
Year    "Archimedes (ARM2, ARM3)" "Amiga (68000, 68020, 68030, 68040)" "Compaq Deskpro (80386, 80486, Pentium)" "Mac II, Quadra/Centris (68020, 68030, 68040)"
1987    2.8     0.5     2.1     1.6
1988    2.8     1.5     3.5     2.1
1989    2.8     3.0     6.6     3.3
1990    10.5    3.6     14.7    6.1
1991    13.8    3.6     19.2    8.3
1992    13.8    18.7    28.5    10.6
1993    15.1    21.6    40.3    13.1

Since gnuplot is more flexible and more capable in parsing data files, we get the opportunity to tabulate the data in a more readable way, also adding some commentary without it becoming messy. I have left out the copious comments in the actual source data file to avoid cluttering this article.

And gnuplot needs a script, requiring a little familiarisation with its script syntax. We can see that various options are required, along with axis information and some tweaks to the eventual appearance:

set terminal svg enhanced size 1280,960 font "DejaVu Sans,24"
set output 'Archimedes_performance.svg'
set title "Performance evolution of the Archimedes and various competitors"
set xlabel "Year"
set ylabel "VAX MIPS"
set yrange [0:*]
set style data histogram
set style histogram cluster gap 1
set style fill solid border -1
set key top left reverse Left
set boxwidth 0.8
set xtics scale 0
plot 'Archimedes_performance.dat' using 2:xtic(1) ti col linecolor rgb "#0080FF", '' u 3 ti col linecolor rgb "#FF8000", '' u 4 ti col linecolor rgb "#80FF80", '' u 5 ti col linecolor rgb "#FF80FF"

The result is a nice SVG file that, when uploaded to Wikimedia Commons, will be converted to other formats for inclusion in Wikipedia articles. The file can then be augmented with the data and the script in a manner that is not entirely elegant, but the result allows people to inspect the inputs and to reproduce the chart themselves. Here is the PNG file that the automation produces for embedding in Wikipedia articles:

Performance evolution of the Archimedes and various competitors: a chart produced by gnuplot and converted from SVG to PNG for Wikipedia usage.

Embedding the chart in a Wikipedia article is as simple as embedding the SVG file, specifying formatting properties appropriate to the context within the article:

[[File:Archimedes performance.svg|thumb|upright=2|Performance evolution of the Archimedes and various competitors]]

The control that gnuplot provides over the appearance is far superior to that of the Graph extension, meaning that the legend in the above figure could be positioned more conveniently, for instance, and there is a helpful gallery of examples that make familiarisation and experimentation with gnuplot more accessible. So I felt rather happy and also vindicated in migrating my charts to gnuplot despite the need to invest a bit of time in the effort.

While there may be people who need the fancy JavaScript-enabled features of the currently deactivated Graph extension in their graphs and charts on Wikipedia, I suspect that many people do not. For that audience, I highly recommend migrating to gnuplot and thereby eliminating dependencies on technologies that are simply unnecessary for the application.

It would be absurd to suggest riding in a spaceship every time we wished to go to the corner shop, knowing full well that more mundane mobility techniques would suffice. Maybe we should adopt similar, proportionate measures of technology adoption and usage in other areas, if only to avoid the inconvenience of seeing solutions being withdrawn for prolonged periods without any form of relief. Perhaps, in many cases, it would be best to leave the spaceship in its hangar after all.

How does the saying go, again?

Monday, February 12th, 2024

If you find yourself in a hole, stop digging? It wasn’t hard to be reminded of that when reading an assertion that a “competitive” Web browser engine needs funding to the tune of at least $100 million a year, presumably on development costs, and “really” $200-300 million.

Web browsers have come a long way since their inception. But they now feature absurdly complicated layout engines, all so that the elements on the screen can be re-jigged at a moment’s notice to adapt to arbitrary changes in the content, and yet they still fail to provide the kind of vanity publishing visuals that many Web designers seem to strive for, ceding that territory to things like PDFs (which, of course, generally provide static content). All along, the means of specifying layout either involves the supposedly elegant but hideously overcomplicated CSS or has scripts galore doing all the work, presumably all pounding the CPU as they do so.

So, we might legitimately wonder whether the “modern Web” is another example of technology for technology’s sake: an effort fuelled by Valley optimism and dubiously earned money that not only undermines interoperability and choice by driving out implementers who are not backed by obscene wealth, but also promotes wastefulness in needing ever more powerful systems to host ever more complicated browsers. Meanwhile, the user experience is constantly degraded: now you, the user, get to indicate whether hundreds of data surveillance companies should be allowed to track your activities under the laughable pretence of “legitimate interest”.

It is entirely justified to ask whether the constant technological churn is giving users any significant benefits or whether they could be using less sophisticated software to achieve the same results. In recent times, I have had to use the UK Government’s Web portal to initiate various processes, and one might be surprised to learn that it provides a clear, clean and generally coherent user experience. Naturally, it could be claimed that such nicely presented pages make good use of the facilities that CSS and the Web platform have to offer, but I think that it provides us with a glimpse into a parallel reality where “less” actually does deliver “more”, because reduced technological complication allows society to focus on matters of more pressing concern.

Having potentially hundreds or thousands of developers beavering away on raising the barrier to entry for delivering online applications is surely another example of how our societies’ priorities can be led astray by self-serving economic interests. We should be able to interact with online services using far simpler technology running on far more frugal devices than multi-core systems with multiple gigabytes of RAM. People used things like Minitel for a lot of the things people are doing today, for heaven’s sake. If you had told systems developers forty years ago that, in the future, instead of just connecting to a service and interacting with it, you would end up connecting to dozens of different services (Google, Facebook, random “adtech” platforms running on dark money) to let them record your habits, siphon off data, and sell you things you don’t want, they would probably have laughed in your face. We were supposed to be living on the Moon by now, were we not?

The modern Web apologist would, of course, insist that the modern browser offers so much more: video, for instance. I was reminded of this a few years ago when visiting the Oslo Airport Express Web site which, at that time, had a pointless video of the train rolling into the station behind the user interface controls, making my browser run rather slowly indeed. As an undergraduate, our group project was to design and implement a railway timetable querying system. On one occasion, our group meeting focusing on the user interface slid, as usual, into unfocused banter where one participant helpfully suggested that behind the primary user interface controls there would have to be “dancing ladies”. To which our only female group member objected, insisting that “dancing men” would also have to be an option. The discussion developed, acknowledging that a choice of dancers would first need to be offered, along with other considerations of the user demographic, even before asking the user anything about their rail journey.

Well, is that not where we are now? But instead of being asked personal questions, a bunch of voyeurs have been watching your every move online and have already deduced the answers to those questions and others. Then, a useless video and random developer excess drains away your computer’s interactivity as you type into treacle, trying to get a sensible result from a potentially unhelpful and otherwise underdeveloped service. How is that hole coming along, again?

Firefox and Monospaced Fonts

Friday, December 8th, 2023

This has been going on for years, but a recent upgrade brought it to my attention and it rather says everything about what is wrong with the way technology is supposedly improved. If you define a style for your Web pages using a monospaced font like Courier, Firefox still decides to convert letter pairs like “fi” and “fl” to ligatures. In other words, it squashes the two letters together into a single character.

Now, I suppose that it does this in such a way that the resulting ligature only occupies the space of a single character, thereby not introducing proportional spacing that would disrupt the alignment of characters across lines, but it does manage to disrupt the distribution of characters and potentially the correspondence of characters between lines. Worst of all, though, this enforced conversion is just ugly.

Here is what WordPress seems to format without suffering from this problem, by explicitly using the “monospace” font-family identifier:

long client_flush(file_t *file);

And here is what happens when Courier is chosen as the font:

long client_flush(file_t *file);

In case theming, browser behaviour, and other factors obscure the effect I am attempting to illustrate, here it is with the ligatures deliberately introduced:

long client_flush(file_t *file);

In fact, the automatic ligatures do remain as two distinct letters crammed into a single space whereas I had to go and find the actual ligatures in LibreOffice’s “special character” dialogue to paste into the example above. One might argue that by keeping the letters distinct, it preserves the original text so that it can be copied and pasted back into a suitable environment, like a program source file or an interactive prompt or shell. But still, when the effect being sought is not entirely obtained, why is anyone actually bothering to do this?

It seems to me that this is yet another example of “design” indoctrination courtesy of the products of companies like Apple and Adobe, combined with the aesthetics-is-everything mentality that values style over substance. How awful it is that someone may put the letter “f” next to the letter “i” or “l” without pulling them closer together and using stylish typographic constructs!

Naturally, someone may jump to the defence of the practice being described here, claiming that what is really happening is kerning, as if someone like me might not have heard of it. Unfortunately for them, I spent quite a bit of time in the early 1990s – quite possibly before some of today’s “design” gurus were born – learning about desktop publishing and typography (for a system that had a coherent outline font system before platforms like the Macintosh and Windows did). Generally, you don’t tend to apply kerning to monospaced fonts like Courier: the big hint is the “monospaced” bit.

Apparently, the reason for this behaviour is something to do with the font library being used, and it should be fixed in future Firefox releases, or at least in ones later than the one I happen to be using in Debian. Workarounds involving configuration files, reminiscent of the early 2000s Linux desktop experience, apparently exist, although I don’t think they really work.
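In the meantime, those defining styles for their own pages may be able to suppress the substitution directly. The following is only a sketch of the idea using the standard font-variant-ligatures property, with the lower-level font-feature-settings property as a fallback; the selectors are illustrative, and whether this overrides the font library behaviour in any given Firefox release is another matter:

```css
/* Ask the browser not to synthesise ligatures in monospaced samples.
   The pre/code selectors are illustrative; adjust to your own markup. */
pre, code {
    font-family: "Courier New", Courier, monospace;
    font-variant-ligatures: none;               /* the standard property */
    font-feature-settings: "liga" 0, "clig" 0;  /* OpenType-level fallback */
}
```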

But anyway, well done to everyone responsible for this mess, whether it was someone’s great typographic “design” vision being imposed on everyone else, or whether it was just that yet more technologies were thrown into the big cauldron and stirred around without any consideration of the consequences. I am sure yet more ingredients will be thrown in to mask the unpleasant taste, also conspiring to make all our computers run more slowly.

Sometimes I think that “modern Web” platform architects have it as their overriding goal to reproduce the publishing solutions of twenty to thirty years ago using hardware hundreds or even thousands of times more powerful, yet delivering something that runs even slower and still producing comparatively mediocre results. As if the aim is to deliver something akin to a turn-of-the-century Condé Nast publication on the Web with gigabytes of JavaScript.

But maybe, at least for the annoyance described here, the lesson is that if something is barely worth doing, largely because it is probably only addressing someone’s offended sense of aesthetics, maybe just don’t bother doing it. There are, after all, plenty of other things in the realm of technology and beyond that more legitimately demand humanity’s attention.

Considering Unexplored Products of the Past: Formulating a Product

Friday, February 10th, 2023

Previously, I described exploring the matter of developing emulation of a serial port, along with the necessary circuitry, for Elkulator, an emulator for the Acorn Electron microcomputer, motivated by a need to provide a way of transferring files into and out of the emulated computer. During this exploration, I had discovered some existing software that had been developed to provide some level of serial “filing system” support on the BBC Microcomputer – the higher-specification sibling of the Electron – with the development of this software having been motivated by an unforeseen need to transfer software to a computer without any attached storage devices.

This existing serial filing system software was a good indication that serial communications could provide the basis of a storage medium. But instead of starting from a predicament involving computers without usable storage facilities, where an unforeseen need motivates the development of a clever workaround, I wanted to consider what such a system might have been like if there had been a deliberate plan from the very beginning to deploy computers that would rely on a serial connection for all their storage needs. Instead of having an implementation of the filing system in RAM, one could have the luxury of putting it into a ROM chip that would be fitted in the computer or in an expansion, and a richer set of features might then be contemplated.

A Smarter Terminal

Once again, my interest in the historical aspects of the technology provided some guidance and some inspiration. When microcomputers started to become popular and businesses and institutions had to decide whether these new products had any relevance to their operations, there was some uncertainty about whether such products were capable enough to be useful or whether they were a distraction from the facilities already available in such organisations. It seems like a lifetime ago now, but having a computer on every desk was not necessarily seen as a guarantee of enhanced productivity, particularly if they did not link up to existing facilities or did not coordinate the work of a number of individuals.

At the start of the 1980s, equipping an office with a computer on every desk and equipping every computer with a storage solution was an expensive exercise. Even disk drives offering only a hundred kilobytes of storage on each removable floppy disk were expensive, and hard disk drives were an especially expensive and precious luxury that were best shared between many users. Some microcomputers were marketed as multi-user systems, encouraging purchasers to connect terminals to them and to share those precious resources: precisely the kind of thing that had been done with minicomputers and mainframes. Such trends continued into the mid-1980s, manifested by products promoted by companies with mainframe origins, such companies perpetuating entrenched tendencies to frame computing solutions in certain ways.

Terminals themselves were really just microcomputers designed for the sole purpose of interacting with a “host” computer, and institutions already operating mainframes and minicomputers would have experienced the need to purchase several of them. Until competition intensified in the terminal industry, such products were not particularly cheap, with the DEC VT220, introduced in 1983, costing $1295. Meanwhile, interest in microcomputers and the possibility of distributing some kinds of computing activity to these new products led to experimentation in some organisations. Some terminal manufacturers responded by offering terminals that also ran microcomputer software.

Much of the popular history of microcomputing, familiar to anyone who follows such topics online, particularly through YouTube videos, focuses on adoption of such technology in the home, with an inevitable near-obsession with gaming. The popular history of institutional adoption often focuses on the upgrade parade from one generation of computer to the next. But there is a lesser-told history involving the experimentation that took place at the intersection of microcomputing and minicomputing or mainframe computing. In universities, computers like the BBC Micro were apparently informally introduced as terminals for other systems, and terminal ROMs were developed and shared between institutions. However, there seems to have been relatively little mainstream interest in such software as fully promoted commercial products, although Acornsoft – Acorn’s software outlet – did adopt such a ROM to sell as their Termulator product.

The Acorn Electron, introduced at £199, had a “proper” keyboard and the ability to display 80 columns of text, unlike various other popular microcomputers. Indeed, it may have been the lowest-priced computer to be able to display 80 columns of relatively high definition text as standard, such capabilities requiring extra cards for machines like the Apple II and the Commodore 64. Considering the much lower price of such a computer, the ongoing experimentation underway at the time with its sibling machine on alternative terminal solutions, and the generally favourable capabilities of both these machines, it seems slightly baffling that more was not done to pursue opportunities to introduce a form of “intelligent terminal” or “hybrid terminal” product to certain markets.

VIEW in 80 columns on the Acorn Electron.

None of this is to say that institutional users would have been especially enthusiastic. In some institutions, budgets were evidently generous enough that considerable sums of money would be spent acquiring workstations that were sometimes of questionable value. But in others, the opportunity to make savings, to explore other ways of working, and perhaps also to explicitly introduce microcomputing topics such as software development for lower-specification hardware would have been worthy of some consideration. An Electron with a decent monochrome monitor, like the one provided with the M2105, plus some serial hardware, could have comprised a product sold for perhaps as little as £300.

The Hybrid Terminal

How would a “hybrid terminal” solution work, how might it have been adopted, and what might it have been used for? Through emulation and by taking advantage of the technological continuity in multi-user systems from the 1980s to the present day, we can attempt to answer such questions. Starting with communications technologies familiar in the world of the terminal, we might speculate that a serial connection would be the most appropriate and least disruptive way of interfacing a microcomputer to a multi-user system.

Although multi-user systems, like those produced by Digital Equipment Corporation (DEC), might have offered network connectivity, it is likely that such connectivity was proprietary, expensive in terms of the hardware required, and possibly beyond the interfacing capabilities of most microcomputers. Meanwhile, Acorn’s own low-cost networking solution, Econet, would not have been directly compatible with these much higher-end machines. Acorn’s involvement in network technologies is also more complicated than often portrayed, but as far as Econet is concerned, only much later machines would more conveniently bridge the different realms of Econet and standards-based higher-performance networks.

Moreover, it remains unlikely that operators and suppliers of various multi-user systems would have been enthusiastic about fitting dedicated hardware and installing dedicated software for the purpose of having such systems communicate with third-party computers using a third-party network technology. I did find it interesting that someone had also adapted Acorn’s network filing system that usually runs over Econet to work instead over a serial connection, which presumably serves files out of a particular user account. Another discovery I made was a serial filing system approach by someone who had worked at Acorn who wanted to transfer files between a BBC Micro system and a Unix machine, confirming that such functionality was worth pursuing. (And there is also a rather more complicated approach involving more exotic Acorn technology.)

Indeed, to be successful, a hybrid terminal approach would have to accommodate existing practices and conventions as far as might be feasible in order to not burden or disturb the operators of these existing systems. One motivation from an individual user’s perspective might be to justify introducing a computer on their desk, to be able to have it take advantage of the existing facilities, and to augment those facilities where it might be felt that they are not flexible or agile enough. Such users might request help from the operators, but the aim would be to avoid introducing more support hassles, which would easily arise if introducing a new kind of network to the mix. Those operators would want to be able to deploy something and have it perform a role without too much extra thought.

I considered how a serial link solution might achieve this. An existing terminal would be connected to, say, a Unix machine and be expected to behave like a normal client, allowing the user to log into their account. The microcomputer would send some characters down the serial line to the Unix “host”, causing it to present the usual login prompt, and the user would then log in as normal. They would then have the option of conducting an interactive session, making their computer like a conventional terminal, but there would also be the option of having the Unix system sit in the background, providing other facilities on request.

Logging into a remote service via a serial connection.

The principal candidates for these other facilities would be file storage and printing. Both of these things were centrally managed in institutions, often available via the main computing service, and the extensible operating system of the Electron and related microcomputers invites the development of software to integrate the core support for these facilities with such existing infrastructure. Files would be loaded from the user’s account on the multi-user system and saved back there again. Printing would spool the printed data to files somewhere in the user’s home directory for queuing to centralised printing services.
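The host-side role can be sketched as a small server program answering file requests arriving over a byte stream, such as a serial line. Every concrete detail below – the LOAD/SAVE commands, the framing, the function name – is invented purely for illustration of the idea, not a description of any actual implementation:

```python
# A toy sketch of the host-side idea: a server on the multi-user machine
# answers file requests arriving over a byte stream. All protocol details
# here (LOAD/SAVE commands, framing, names) are hypothetical.

import os


def serve(reader, writer, base_dir):
    """Handle one session: 'LOAD name' returns 'OK <length>' plus the
    file's bytes, 'SAVE name <length>' stores the bytes that follow the
    request line, and 'BYE' (or end of stream) closes the session."""
    while True:
        line = reader.readline().decode("ascii", "replace").strip()
        if not line or line == "BYE":
            break
        parts = line.split()
        if len(parts) < 2:
            break
        # Keep all paths inside the user's own directory, mirroring the
        # suggestion that files live in the user's account on the host.
        path = os.path.join(base_dir, os.path.basename(parts[1]))
        if parts[0] == "LOAD":
            try:
                with open(path, "rb") as f:
                    data = f.read()
                writer.write(b"OK %d\n" % len(data) + data)
            except OSError:
                writer.write(b"ERR\n")
        elif parts[0] == "SAVE":
            with open(path, "wb") as f:
                f.write(reader.read(int(parts[2])))
            writer.write(b"OK\n")
    writer.write(b"BYE\n")
```

In practice, the reader and writer would wrap the serial device behind the login session, but any pair of binary streams will do, which also makes the idea easy to exercise with in-memory buffers.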

Attempting an Implementation

I wanted to see how such a “serial computing environment” would work in practice, how it would behave, what kinds of applications might benefit, and what kind of annoyances it might have. After all, it might be an interesting idea or a fun idea, but it need not be a particularly good one. The first obstacle was that of understanding how the software elements would work, primarily on the Electron itself, from the tasks that I would want the software to perform down to the way the functionality would be implemented. On the host or remote system, I was rather more convinced that something could be implemented since it would mostly be yet another server program communicating over a stream, with plenty of modern Unix conveniences to assist me along the way.

As it turned out, my investigations began with a trip away from home and the use of a different, and much more constrained, development environment involving an ARM-based netbook. Fortunately, Elkulator and the different compilers and tools worked well enough on that development hardware to make the exercise approachable. Another unusual element was that I was going to mostly rely on the original documentation in the form of the actual paper version of the Acorn Electron Advanced User Guide for information on how to write the software for the Electron. It was enlightening coming back to this book after a few decades for assistance on a specific exercise, even though I have perused the book many times in its revised forms online, because returning to it with a focus on a particular task led me to find that the documentation in the book was often vague or incomplete.

Although the authors were working in a different era and presumably under a degree of time pressure, I feel that the book in some ways exhibits various traits familiar to those of us working in the software industry, these indicating a lack of rigour and of sufficient investment in systems documentation. For this, I mostly blame the company who commissioned the work and then presumably handed over some notes and told the authors to fill in the gaps. As if to strengthen such perceptions of hurriedness and lack of review, it also does not help that “system” is mis-spelled “sysem” in a number of places in the book!

Nevertheless, certain aspects of the book were helpful. The examples, although focusing on one particular use-case, did provide helpful detail in deducing the correct way of using certain mechanisms, even if they elected to avoid the correct way of performing other tasks. Acorn’s documentation had a habit of being “preachy” about proper practices, only to see its closest developers ignore those practices, anyway. Eventually, on returning from my time away, I was able to fill in some of the gaps, although by this time I had a working prototype that was able to do basic things like initiate a session on the host system and to perform some file-related operations.

There were, and still are, a lot of things that needed, and still need, improvement with my implementation. The way that the operating system needs to be extended to provide extra filing system functionality involves plenty of programming interfaces, plenty of things to support, and also plenty of opportunities for things to go wrong. The VIEW word processor makes use of interfaces for both whole-file loading and saving as well as random-access file operations. Missing out support for one or the other will probably not yield the desired level of functionality.

There are also intricacies with regard to switching printing on and off – this typically being done using control characters sent through the output stream – and of “spool” files which capture character output. And filing system ROMs need to be initialised through a series of “service calls”, these being largely documented, but the overall mechanism is left largely undescribed in the documentation. It is difficult enough deciphering the behaviour of the Electron’s operating system today, with all the online guidance available in many forms, so I cannot imagine how difficult it would have been as a third party to effectively develop applications back in the day.

Levels of Simulation

To support the activities of the ROM software in the emulated Electron, I had to develop a server program running on my host computer. As noted above, this was not onerous, especially since I had already written a program to exercise the serial communications and to interact with the emulated serial port. I developed this program further to respond to commands issued by my ROM, performing host operations and returning results. For example, the CAT command produces a “catalogue” of files in a host directory, and so my server program performs a directory listing operation, collects the names of the files, and then sends them over the virtual serial link to the ROM for it to display to the user.
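The general shape of such a command processor can be sketched in a few lines of Python. The command names, the line-oriented framing and the replies below are illustrative assumptions, not the actual protocol used by my implementation:

```python
#!/usr/bin/env python3
# A minimal sketch of the host-side command processor idea: read
# line-oriented commands from the (serial) stream and send replies.
# Command names and framing are hypothetical, for illustration only.

import os
import sys

def handle_cat(directory="."):
    """Return the names of files in a directory, one per line."""
    return "\n".join(sorted(os.listdir(directory)))

def serve(stream_in=sys.stdin, stream_out=sys.stdout):
    for line in stream_in:
        command = line.strip()
        if command == "CAT":
            stream_out.write(handle_cat() + "\n")
        elif command.startswith("DIR "):
            os.chdir(command[4:])       # navigate, like Acorn's *DIR
            stream_out.write("OK\n")
        elif command == "BYE":
            break
        else:
            stream_out.write("ERROR\n")
        stream_out.flush()

if __name__ == "__main__":
    serve()
```

In the real arrangement, the streams would be the serial link to the Electron rather than standard input and output, but the dispatching logic is much the same.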

To make the experience somewhat authentic and to approximate to an actual deployment environment, I included a simulation of the login prompt so that the user of the emulated Electron would have to log in first, with the software also having to deal with a logged out (or not yet logged in) condition in a fairly graceful way. To ensure that they are logged in, a user selects the Serial Computing Environment using the *SCE command, this explicitly selecting the serial filing system, and the login dialogue is then presented if the user has not yet logged into the remote host. Once logged in, the ROM software should be able to test for the presence of the command processor that responds to issued commands, only issuing commands if the command processor has signalled its presence.

Although this models a likely deployment environment, I wanted to go a bit further in terms of authenticity, and so I decided to make the command processor a separate program that would be installed in a user account on a Unix machine. The user’s profile script would be set up to run the command processor, so that when they logged in, this program would automatically run and be ready for commands. I was first introduced to such practices in my first workplace where a menu-driven, curses-based program I had written was deployed so that people doing first-line technical support could query the database of an administrative system without needing to be comfortable with the Unix shell environment.

For complete authenticity I would actually want to have the emulated Electron contact a Unix-based system over a physical serial connection, but for now I have settled for an arrangement whereby a pseudoterminal is created to run the login program, with the terminal output presented to the emulator. Instead of seeing a simulated login dialogue, the user now interacts with the host system’s login program, allowing them to log into a real account. At that point, the command processor is invoked by the shell and the user gets back control.

Obtaining a genuine login dialogue from a Unix system.

To prevent problems with certain characters, the command processor configures the terminal to operate in raw mode. Apart from that, it operates mostly as it did when run together with the login simulation which did not have to concern itself with such things as terminals and login programs.
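For the curious, the pseudoterminal arrangement can be approximated with Python's pty module. In this sketch, an echo command stands in for the login program, and the master side is simply read to completion rather than being bridged to the emulator's serial port; the raw-mode call mirrors the terminal configuration just described:

```python
#!/usr/bin/env python3
# Sketch of the pseudoterminal arrangement: spawn a program on a pty
# and collect its output from the master side. Here "echo" stands in
# for /bin/login; in the real setup the master end would be bridged
# to the emulator's serial port instead of being read to completion.

import os
import pty
import termios
import tty

def run_on_pty(argv):
    pid, master_fd = pty.fork()
    if pid == 0:                        # child: becomes the spawned program
        os.execvp(argv[0], argv)
    try:
        tty.setraw(master_fd)           # raw mode: pass bytes untranslated
    except termios.error:
        pass
    chunks = []
    while True:
        try:
            data = os.read(master_fd, 1024)
        except OSError:                 # EIO when the child side closes
            break
        if not data:
            break
        chunks.append(data)
    os.waitpid(pid, 0)
    return b"".join(chunks)

if __name__ == "__main__":
    print(run_on_pty(["echo", "login:"]))
```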

Some Applications

This effort was motivated by the need or desire to be able to access files from within Elkulator, particularly from applications such as VIEW. Naturally, VIEW is really just one example from the many applications available for the Electron, but since it interacts with a range of functionality that this serial computing environment provides, it serves to showcase such functionality fairly well. Indeed, some of the screenshots featured in this and the previous article show VIEW operating on text that was saved and loaded over the serial connection.

Accessing files involves some existing operating system commands, such as *CAT (often abbreviated to *.) to list the catalogue of a storage medium. Since a Unix host supports hierarchical storage, whereas the Electron’s built-in command set only really addresses the needs of a flat storage medium (as provided by various floppy disk filing systems for Electron and BBC Micro), the *DIR command has been introduced from Acorn’s hierarchical filing systems (such as ADFS) to navigate between directories, which is perhaps confusing to anyone familiar with other operating systems, such as the different variants of DOS and their successors.

Using catalogue and directory traversal commands.

VIEW allows documents to be loaded and saved in a number of ways, but as a word processor it also needs to be able to print these documents. This might be done using a printer connected to a parallel port, but it makes a bit more sense to instead allow the serial printer to be selected and for printing to occur over the serial connection. However, it is not sufficient to merely allow the operating system to take over the serial link and to send the printed document, if only because the other side of this link is not a printer! Indeed, the command processor is likely to be waiting for commands and to see the incoming data as ill-formed input.

The chosen solution was to intercept attempts to send characters to a serial printer, buffering them and then sending the buffered data in special commands to the command processor. This in turn would write the printed characters to a “spool” file for each printing session. From there, these files could be sent to an appropriate printer. This would give the user rather more control over printing, allowing them to process the printout with Unix tools, or to select one particular physical printer out of the many potentially available in an organisation. In the VIEW environment, and in the MOS environment generally, there is no built-in list of printers or printer selection dialogue.
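The host side of this scheme amounts to little more than appending bytes to a per-session file. A minimal sketch, with hypothetical timestamp-based file naming, might look like this:

```python
#!/usr/bin/env python3
# Sketch of the host-side spool handling: each printing session's
# intercepted printer bytes are appended to a fresh "spool" file,
# from which they could later be sent to a real printer or
# postprocessed. The naming scheme here is illustrative.

import os
import time

class Spooler:
    def __init__(self, directory):
        self.directory = directory
        self.current = None

    def start_session(self):
        name = "printout-%d.spool" % int(time.time())
        self.current = open(os.path.join(self.directory, name), "wb")
        return self.current.name

    def write(self, data):
        if self.current is None:        # lazily open a file per session
            self.start_session()
        self.current.write(data)

    def end_session(self):
        if self.current is not None:
            self.current.close()
            self.current = None
```

From there, a spool file could be handed to a tool like lpr or processed with other Unix utilities, as described above.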

Since the kinds of printers anticipated for use with VIEW might well have been rather different from the kinds connected to multi-user systems, it is likely that some processing would be desirable where different text styles and fonts have been employed. Today, projects like PrinterToPDF exist to work with old-style printouts, but it is conceivable that either the “printer driver generator” in the View suite or some postprocessing tool might have been used to produce directly printable output. With unstyled text, however, the printouts are generally readable and usable, as the following excerpt illustrates.

               A  brief report on the experience
               of using VIEW as a word processor
               four decades on.

Using VIEW on the Acorn  Electron  is  an  interesting  experience  and  a
glimpse  into  the  way  word  processing  was  once done. Although I am a
dedicated user of Vim, I am under no  illusions  of  that  program's  word
processing  capabilities: it is deliberately a screen editor based on line
editor  heritage,  and  much  of  its  operations  are  line-oriented.  In
contrast, VIEW is intended to provide printed output: it presents the user
with a  ruler  showing  the  page margins and tab stops, and it even saves
additional   rulers   into  the  stored  document   in   their   on-screen
representations. Together with its default typewriter-style  behaviour  of
allowing  the  cursor  to  be moved into empty space and of overwriting or
replacing text, there is a quaint feel to it.

Since VIEW is purely text-based, I can easily imagine converting its formatting codes to work with troff. That would then broaden the output options. Interestingly, the Advanced User Guide was written in VIEW and then sent to a company for typesetting, so perhaps a workflow like this would have been useful for the authors back then.

A major selling point of the Electron was its provision of BBC BASIC as the built-in language. As the BBC Micro had started to become relatively widely adopted in schools across the United Kingdom, a less expensive computer offering this particular dialect of BASIC was attractive to purchasers looking for compatibility with school computers at home. Obviously, there is a need to be able to load and save BASIC programs, and this can be done using the serial connection.

Loading a BASIC program from the Unix host.

Beyond straightforward operations like these, BASIC also provides random-access file operations through various keywords and constructs, utilising the underlying operating system interfaces that invoke filing system operations to perform such work. VIEW also appears to use these operations, so it seems sensible not to ignore them, even if many programmers might have preferred to use bulk transfer operations – the standard load and save – to get data in and out of memory quickly.
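On the host, these random-access operations map quite naturally onto ordinary file handles. The following sketch is an assumption about how such a mapping might look, pairing Acorn-style operations (OPENIN, BGET#, PTR#, CLOSE#) with Python file objects; the real protocol and its error handling are rather more involved:

```python
#!/usr/bin/env python3
# Sketch of mapping Acorn-style random-access file operations onto
# host files: each open file gets a small integer handle, and byte
# reads and pointer moves are delegated to a Python file object.
# Handle numbering and error handling are simplified for illustration.

class FileHandles:
    def __init__(self):
        self.handles = {}
        self.next_handle = 1

    def open(self, path, mode="rb"):            # OPENIN / OPENOUT
        handle = self.next_handle
        self.handles[handle] = open(path, mode)
        self.next_handle += 1
        return handle

    def bget(self, handle):                     # BGET#: read one byte
        data = self.handles[handle].read(1)
        return data[0] if data else -1          # -1 signals end of file

    def set_ptr(self, handle, position):        # PTR#=: move the pointer
        self.handles[handle].seek(position)

    def close(self, handle):                    # CLOSE#
        self.handles.pop(handle).close()
```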

A BASIC program reading and showing a file.

Interactions between printing, the operating system’s own spooling support, outputting characters and reading and writing data are tricky. A degree of experimentation was required to make these things work together. In principle, it should be possible to print and spool at the same time, even with output generated by the remote host that has been sent over the serial line for display on the Electron!

Of course, with the Electron acting as a hybrid terminal, the exercise would not be complete without terminal functionality. Here, I wanted to avoid going down another rabbit hole by implementing a full terminal emulator, but I still wanted to demonstrate the invocation of a shell on the Unix host and the ability to run commands. Showing just another shell session transcript would be rather dull, so here I present the perusal of a Python program that generates control codes to change the text colour on the Electron, along with the program’s effects:
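The general shape of such a program is easy to reconstruct: on Acorn machines, VDU 17 followed by a colour number selects the text colour, so a script need only emit character 17 before each piece of text. This is a guess at the general idea rather than the actual program shown in the screenshot:

```python
#!/usr/bin/env python3
# Illustrative sketch: emit the Acorn VDU 17 (COLOUR) control
# sequence before each line of text so that the Electron displays
# the lines in different colours. Not the actual program depicted.

import sys

def coloured(text, colour):
    # VDU 17,<n> selects text colour n on Acorn machines
    return "\x11" + chr(colour) + text

if __name__ == "__main__":
    for i, word in enumerate(["red", "green", "yellow"]):
        sys.stdout.write(coloured(word + "\n", i + 1))
```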

Interaction with the shell featuring multiple text colours.

As a bitmapped terminal, the Electron is capable of much more than this. Although limited to moderate resolutions by the standards of the fanciest graphics terminals even of that era, there are interesting possibilities for Unix programs and scripts to generate graphical output.

A chart generated by a Python program showing workstation performance results.

Sending arbitrary character codes requires a bit of terminal configuration magic so that line feeds do not get translated into other things (the termios manual page is helpful, here, suggesting the ONLCR flag as the culprit), but the challenge, as always, is to discover the piece of the stack of technologies that is working against you. Similar things can be said on the Electron as well, with its own awkward confluence of character codes for output and output control, requiring the character output state to be tracked so that certain values do not get misinterpreted in the wrong context.
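In Python terms, the adjustment suggested by the termios manual page amounts to clearing ONLCR in the terminal's output flags. A minimal sketch:

```python
#!/usr/bin/env python3
# Clear the ONLCR output flag on a terminal so that line feeds are
# not expanded into carriage return/line feed pairs on output.

import termios

def disable_onlcr(fd):
    attrs = termios.tcgetattr(fd)
    attrs[1] &= ~termios.ONLCR          # attrs[1] is the oflag field
    termios.tcsetattr(fd, termios.TCSANOW, attrs)
```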

Others have investigated terminal connectivity on Acorn’s 8-bit microcomputers and demonstrated other interesting ways of producing graphical output from Unix programs. Acornsoft’s Termulator could even emulate a Tektronix 4010 graphical terminal. Curiously, Termulator also supported file transfer between a BBC Micro and the host machine, although only as a dedicated mode and limited to ASCII-only text files, leaving the hybrid terminal concept unexplored.

Reflections and Remarks

I embarked on this exercise with some cautiousness, knowing that plenty of uncertainties lay ahead in implementing a functional piece of software, and there were plenty of frustrating moments as some of the different elements of the rather underdocumented software stack conspired to produce undesirable behaviour. In addition, the behaviour of my serial emulation code had a confounding influence, requiring some low-level debugging (tracing execution within the emulator instruction by instruction, noting the state of the emulated CPU), some slowly dawning realisations, and some adjustments to hopefully make it work in a more cooperative fashion.

There are several areas of potential improvement. I first programmed in 6502 assembly language maybe thirty-five years ago, and although I managed to get some sprite and scrolling routines working, I never wrote any large programs, nor had to interact with the operating system frameworks. I personally find the 6502 primitive, rigid, and not particularly conducive to higher-level programming techniques, and I found myself writing some macros to take away the tedium of shuffling values between registers and the stack, constantly aware of various pitfalls with regard to corrupting registers.

My routines extending the operating system framework possibly do not do things the right way or misunderstand some details. That, I will blame on the vague documentation as well as any mistakes made micromanaging the registers. Particularly frustrating was the way that my ROM code would be called with interrupts disabled in certain cases. This made implementation challenging when my routines needed to communicate over the serial connection, when such communication itself requires interrupts to be enabled. Quite what the intention of the MOS designers was in such circumstances remains something of a mystery. While writing this article, I realised that I could have implemented the printing functionality in a different way, and this might have simplified things, right up to the point where I saw, thanks to the debugger provided by Elkulator, that the routines involved are called – surprise! – with interrupts disabled.

Performance could be a lot better, partly because my own code undoubtedly requires optimisation. The existing software stack is probably optimised to a reasonable extent, but there are various persistent background activities that probably steal CPU cycles unnecessarily. One unfortunate contributor to the performance limitations is the hardware architecture of the Electron itself. Indeed, I discovered while testing in one of the 80-column display modes that serial transfers were not reliable at the default transfer rate of 9600 baud and instead needed to be slowed down to only 2400 baud. Some diagnosis confirmed that the software was not reading data from the serial chip quickly enough, causing an overflow condition and lost data.
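A back-of-the-envelope calculation illustrates the constraint. With one start bit, eight data bits and one stop bit, each byte occupies ten bit times on the wire:

```python
# Rough timing check for the serial overflow problem: with a start
# bit, eight data bits and a stop bit, a byte takes ten bit times,
# so the interrupt handler must drain the serial chip within one
# byte interval to avoid overrun.

def byte_interval_ms(baud, bits_per_byte=10):
    """Milliseconds between consecutive bytes at a given baud rate."""
    return 1000.0 * bits_per_byte / baud

print(byte_interval_ms(9600))   # roughly one millisecond per byte
print(byte_interval_ms(2400))   # roughly four times as long
```

So at 9600 baud the interrupt handler has barely a millisecond per byte, whereas dropping to 2400 baud quadruples that budget, consistent with the behaviour observed.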

Motivated by cost reduction and product positioning considerations – the desire to avoid introducing a product that might negatively affect BBC Micro sales – the Electron was deliberately designed to use a narrow data bus to fewer RAM chips than would otherwise have been used, with a seemingly clever technique being employed to allow the video circuitry to get the data at the desired rate to produce a high-resolution or high-bandwidth display. Unfortunately, the adoption of the narrow data bus, facilitated by the adoption of this particular technique, meant that the CPU could only ever access RAM at half its rated speed. And with the narrow data bus, the video circuitry effectively halts the CPU altogether for a substantial portion of its time in high-bandwidth display modes. Since serial communications handling relies on the delivery and handling of interrupts, if the CPU is effectively blocked from responding quickly enough, it can quickly fall behind if the data is arriving and the interrupts are occurring too often.

That does raise the issue of reliability and of error correction techniques. Admittedly, this work relies on a reliable connection between the emulated Electron and the host. Some measures are taken to improve the robustness of the communication when messages are interrupted so that the host in particular is not left trying to send or receive large volumes of data that are no longer welcome or available, and other measures are taken to prevent misinterpretation of stray data received in a different and thus inappropriate context. I imagine that I may have reinvented the wheel badly here, but these frustrations did provide a level of appreciation of the challenges involved.

Some Broader Thoughts

It is possible that Acorn, having engineered the Electron too aggressively for cost, made the machine less than ideal for the broader range of applications for which it was envisaged. That said, it should have been possible to revise the design and produce a more performant machine. Experiments suggest that a wider data path to RAM would have helped with the general performance of the Electron, but to avoid most of the interrupt handling problems experienced with the kind of application being demonstrated here, the video system would have needed to employ its existing “clever” memory access technique in conjunction with that wider data path so as to be able to share the bandwidth more readily with the CPU.

Contingency plans should have been made to change or upgrade the machine, if that had eventually been deemed necessary, starting at the point in time when the original design compromises were introduced. Such flexibility and forethought would also have made a product with a longer appeal to potential purchasers, as opposed to a product that risked being commercially viable for only a limited period of time. However, it seems that the lessons accompanying such reflections on strategy and product design were rarely learned by Acorn. If lessons were learned, they appear to have reinforced a particular mindset and design culture.

Virtue is often made of the Acorn design philosophy and the sometimes rudely expressed and dismissive views of competing technologies that led the company to develop the ARM processor. This approach enabled comparatively fast and low-cost systems to be delivered by introducing a powerful CPU to do everything in a system from running applications to servicing interrupts for data transfers, striving for maximal utilisation of the available memory bandwidth by keeping the CPU busy. That formula worked well enough at the low end of the market, but when the company tried to move upmarket once again, its products were unable to compete with those of other companies. Ultimately, this sealed the company’s fate, even if more fortuitous developments occurred to keep ARM in the running.

(In the chart shown earlier demonstrating graphical terminal output and illustrating workstation performance, circa 1990, Acorn’s R260 workstation is depicted as almost looking competitive until one learns that the other workstations depicted arrived a year earlier and that the red bar showing floating-point performance only applies to Acorn’s machine three years after its launch. It would not be flattering to show the competitors at that point in history, nor would it necessarily be flattering to compare whole-system performance, either, if any publication sufficiently interested in such figures had bothered to do so. There is probably an interesting story to be told about these topics, particularly how Acorn’s floating-point hardware arrived so late, but I doubt that there is the same willingness to tell it as there is to re-tell the usual celebratory story of ARM for the nth time.)

Acorn went on to make the Communicator as a computer that would operate in a kind of network computing environment, relying on network file servers to provide persistent storage. It reused some of the technology in the Electron and the BT Merlin M2105, particularly the same display generator and its narrow data bus to RAM, but ostensibly confining that aspect of the Electron’s architecture to a specialised role, and providing other facilities for applications and, as in the M2105, for interaction with peripherals. Sadly, the group responsible in Acorn had already been marginalised and eventually departed, apparently looking to pursue the concept elsewhere.

As for this particular application of an old computer, one that was largely left uncontemplated as a product, I think there probably was some mileage in deploying microcomputers in this way, and not only within companies like Acorn, where such computers were being developed and used, or within software development companies with their own sophisticated needs, where minicomputers like the DEC VAX would have been available for certain corporate or technical functions. Public (or semi-public) access terminals were fairly common in universities, and later microcomputers were also adopted in academia due to their low cost and apparently sufficient capabilities.

Although such adoption appears to have focused on terminal applications, it cannot have been beyond the wit of those involved to consider closer integration between the microcomputing and multi-user environments. In further and higher education, students will have had microcomputing experience and would have been able to leverage their existing skills whilst learning new ones. They might have brought their microcomputers along with them, giving them the opportunity to transfer or migrate their existing content – their notes, essays, programs – to the bright and emerging new world of Unix, as well as updating their expertise.

As for updating my own expertise, it has been an enlightening experience in some ways, and I may well continue to augment the implemented functionality, fix and improve things, and investigate the possibilities this work brings. I hope that this rather lengthy presentation of the effort has provided insights into experiences of the past that was and the past that might have been.

Considering Unexplored Products of the Past: Emulating an Expansion

Wednesday, February 8th, 2023

In the last couple of years, possibly in common with quite a few other people, certainly people of my vintage, and undoubtedly those also interested in retrocomputing, I have found myself revisiting certain aspects of my technological past. Fortunately, sites like the Internet Archive make this very easy indeed, allowing us to dive into publications from earlier eras and to dredge up familiar and not so familiar magazine titles and other documentation. And having pursued my retrocomputing interest for a while, participating in forums, watching online videos, even contributing to new software and hardware developments, I have found myself wanting to review some of the beliefs and perceptions that I and other people have had of the companies and products we grew up with.

One of the products of personal interest to me is the computer that got me and my brother started with writing programs (as well as playing games): the Acorn Electron, a product of Acorn Computers of Cambridge in the United Kingdom. Much can be said about the perceived chronology of this product’s development and introduction, the actual chronology, and its impact on its originator and on wider society, but that surely deserves a separate treatment. What I can say is that reviewing the archives and other knowledge available to us now can give a deeper understanding of the processes involved in the development of the Electron, the technological compromises made, and the corporate strategy that led to its creation and eventually its discontinuation.

By Bilby - Own work, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=10957142

The Acorn Electron
(Picture attribution: By Bilby, Own work, CC BY 3.0)

It has been popular to tell simplistic narratives about Acorn Computers, to reduce its history to a few choice moments as the originator of the BBC Microcomputer and the ARM processor, but to do so is to neglect a richer and far more interesting story, even if the fallibility of some of the heroic and generally successful characters involved may be exposed by telling some of that story. And for those who wonder how differently some aspects of computing history might have turned out, exploring that story and the products involved can be an adventure in itself, filling in the gaps of our prior experiences with new insights, realisations and maybe even glimpses into opportunities missed and what might have been if things had played out differently.

At the Rabbit Hole

Reading about computing history is one thing, but this tale is about actually doing things with old software, emulation, and writing new software. It started off with a discussion about the keyboard shortcuts for a word processor and the differences between the keyboards on the Acorn Electron and its higher-specification predecessor, the BBC Microcomputer. Having acquainted myself with the circuitry of the Electron, how its keyboard is wired up, and how the software accesses it, I was obviously intrigued by these apparent differences, but I was also intrigued by the operation of the word processor in question, Acornsoft’s VIEW.

Back in the day, as people like to refer to the time when these products were first made available, such office or productivity applications were just beyond my experience. Although it was slightly fascinating to read about them, most of my productive time was spent writing programs, mostly trying to write games. I had actually seen an office suite written by Psion on the ACT Sirius 1 in the early 1980s, but word processors were the kind of thing used in offices or, at the very least, by people who had a printer so that they could print the inevitable letters that everyone would be needing to write.

Firing up an Acorn Electron emulator, specifically Elkulator, I discovered that one of the participants in the discussion was describing keyboard shortcuts that didn’t match those described in a magazine article from the era, the article’s descriptions proving correct as I tried them out for myself. It turned out that the participant in question was using the BBC Micro version of VIEW on the Electron and was working around the mismatch in keyboard layouts. Although all of this was much ado about virtually nothing, it did two things. Firstly, it made me finally go in and fix Elkulator’s keyboard configuration dialogue, and secondly, it made me wonder how convenient it would be to explore old software in a productive way in an emulator.

Reconciling Keyboards

Having moved to Norway many years ago now, I use a Norwegian keyboard layout, and this has previously been slightly problematic when using emulators for older machines. Many years ago, I used and even contributed some minor things to another emulator, ElectrEm, which had a nice keyboard configuration dialogue. The Electron’s keyboard corresponds to certain modern keyboards pretty well, at least as far as the alphanumeric keys are concerned. More challenging are the symbols and control-related keys, in particular the Electron’s special Caps Lock/Function key which sits where many people now have their Tab key.

Obviously, there is a need to be able to tell an emulator which keys on a modern keyboard are going to correspond to the keys on the emulated machine. Being derived from an emulator for the BBC Micro, however, Elkulator’s keyboard configuration dialogue merely presented a BBC Micro keyboard on the screen and required the user to guess which “Beeb” key might correspond to an Electron one. Having put up with this situation for some time, I finally decided to fix this once and for all. The process of doing so is not particularly interesting, so I will spare you the details of doing things with the Allegro toolkit and the Elkulator source code, but I was mildly pleased with the result:

The revised keyboard configuration dialogue in Elkulator.

By adding support for redefining the Break key in a sensible way, I was also finally able to choose a key that desktop environments don’t want to interfere with: F12 might work for Break, but Ctrl-F12 makes KDE/Plasma do something I don’t want, and yet Ctrl-Break is quite an important key combination when using an Electron or BBC Micro. Why Break isn’t a normal key on these machines is another story in itself, but here is an example of redefining it and even allowing multiple keys on a modern keyboard to act as Break on the emulated computer:

Redefining the Break key in Elkulator.

Being able to confidently choose and use keys made it possible to try out VIEW in a more natural way. But this then led to another issue: how might I experiment with such software productively? It would be good to write documents and to be able to extract them from the emulator, rather than see them disappear when the emulator is closed.

Real and Virtual Machines

One way to get text out of a system, whether it is a virtual system like the emulated Electron or a real machine, is to print it. I vaguely remembered some support for printing from Elkulator and was reminded by my brother that he had implemented such support himself a while ago as a quick way of getting data out of the emulated system. But I also wanted to be able to get data into the emulated system as well, and the parallel interface typically used by the printer is not bidirectional on the Electron. So, I would need to look further for a solution.

Elkulator does, in fact, support reading from and writing to disk (or disc) images. The unexpanded Electron supports read/write access to cassettes (or tapes), but Elkulator does not support writing to tapes, probably because the usability considerations are rather complicated: one would need to allow the user to control the current position on a tape, and all this would do is remind everyone how inconvenient tapes are. Meanwhile, writing to disk images would be fairly convenient within the emulator, but one would then need tools to access the files within those images outside the emulator.

Some emulators for various systems also support the notion of a host filesystem (or filing system) where some special support has been added to make the emulated machine see another peripheral and to communicate with it, this peripheral really being a program on the host machine (the machine that is running the emulator). I could have written such support, although it would also have needed accompanying software for the emulated machine, but this approach would have led me down a path of doing something specific to emulation. And I have a principle of sorts: if I am going to change the way an emulated machine behaves, it has to be rooted in some kind of reality and not just enhance the emulated machine in a way that the original, “real” machine never could have been.

Building on Old Foundations

As noted earlier, I have an interest in the way that old products were conceived and the roles for which those products were intended by their originators. The Electron was largely sold as an unexpanded product, offering only power, display and cassette ports, with a general-purpose expansion connector being the gateway to anything else that might have been added to the system later. This was perceived somewhat negatively when the machine was launched because it was anticipated that buyers would probably, at the very least, want to plug joysticks into the Electron to play games. Instead, Acorn offered an expansion unit, the Plus 1, costing another £60, which provided joystick, printer and cartridge connectors.

But this flexibility in expanding the machine meant that it could have been used as the basis for a fairly diverse range of specialised products. In fact, one of the Acorn founders, Chris Curry, enthused about the Electron as a platform for such products, and one such product did actually make it to market, in a way: the BT Merlin M2105 messaging terminal. This terminal combined the Electron with an expansion unit containing circuitry for communicating over a telephone line, a generic serial communications port, a printer port, as well as speech synthesis circuitry and a substantial amount of read-only memory (ROM) for communications software.

Back in the mid-1980s, telecommunications (or “telecoms”) was the next big thing, and enthusiasm for getting a modem and dialling up some “online” service or other (like Prestel) was prevalent in the computing press. For businesses and institutions, there were some good arguments for adopting such technologies, but for individuals the supposed benefits were rather dulled by the considerable costs of acquiring the hardware, buying subscriptions, and the notoriously high telephone call rates of the era. Only the relatively wealthy or the dedicated few pursued this side of data communications.

The M2105 reportedly did some service in the healthcare sector before being repositioned for commercial applications. Along with its successor product, the Acorn Communicator, it enjoyed a somewhat longer lifespan in certain enterprises. For the standard Electron and its accompanying expansions, support for basic communications capabilities was evidently considered important enough to be incorporated into the software of the Plus 1 expansion unit, even though the Plus 1 did not provide any of the specific hardware capabilities for communication over a serial link or a telephone line.

It was this apparently superfluous software capability that I revisited when I started to think about getting files in and out of the emulator. When emulating an Electron with Plus 1, this serial-capable software is run by the emulator, just as it is by a real Electron. On a real system of this kind, a cartridge could be added that provides a serial port and the necessary accompanying circuitry, and the system would be able to drive that hardware. Indeed, such cartridges were produced decades ago. So, if I could replicate the functionality of a cartridge within the emulator, making some code that pretends to be a serial communications chip (or UART) that has been interfaced to the Electron, then I would in principle be able to set up a virtual serial connection between the emulated Electron and my modern host computer.

Emulated Expansions

Modifying Elkulator to add support for serial communications hardware was fairly straightforward, with only a few complications. Expansion hardware on the Electron is generally accessible via a range of memory addresses, with accesses to those addresses actually signalling peripherals as opposed to reading and writing memory. The software provided by the Plus 1 expansion unit is written to expect the serial chip to be accessible via a range of memory locations, with the serial chip accepting values sent to those locations and producing values from those locations on request. The “memory map” through which the chip is exposed in the Electron corresponds directly to the locations or registers in the serial chip – the SCN2681 dual asynchronous receiver/transmitter (DUART) – as described by its datasheet.

In principle, all that is needed is to replicate the functionality described by the datasheet. With this done, the software will drive the chip, the emulated chip will do what is needed, and the illusion will be complete. In practice, a certain level of experimentation is needed to fill in the gaps left by the datasheet and any lack of understanding on the part of the implementer. It did help that the Plus 1 software has been disassembled – some kind of source code regenerated from the binary – so that the details of its operation and its expectations of the serial chip’s operation can be established.
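To make the register-dispatch idea concrete, here is a much-simplified sketch in Python of how reads and writes at the chip’s register offsets might be handled. The offsets and status bits (SRA, RHRA, THRA) follow the 2681 datasheet, but everything else – the names, the queue-based buffering, the single modelled channel – is illustrative rather than a reflection of the actual Elkulator code, which is written in C:

```python
class DUART:
    """A much-simplified sketch of an SCN2681-style DUART, modelling
    channel A only. Register offsets and status bits follow the 2681
    datasheet; baud rates, mode registers and interrupts are ignored."""

    RX_READY = 0x01   # SRA bit 0: a received byte is waiting
    TX_READY = 0x04   # SRA bit 2: the transmitter can accept a byte

    def __init__(self):
        self.rx_queue = []   # bytes waiting to be read by the emulated CPU
        self.tx_log = []     # bytes written by the emulated CPU

    def read(self, offset):
        if offset == 1:                # SRA: status register A
            status = self.TX_READY     # output is always accepted here
            if self.rx_queue:
                status |= self.RX_READY
            return status
        if offset == 3:                # RHRA: receive holding register A
            return self.rx_queue.pop(0) if self.rx_queue else 0
        return 0                       # other registers left unmodelled

    def write(self, offset, value):
        if offset == 3:                # THRA: transmit holding register A
            self.tx_log.append(value)  # the emulator would forward this
```

The emulated CPU’s memory accesses in the chip’s address range would simply be routed to `read` and `write`, with the Plus 1 software polling the status register before each transfer, just as it would on real hardware.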

Moreover, it is possible to save a bit of effort by seeing which features of the chip have been left unused. However, some unused features can be provided with barely any extra effort: the software only drives one serial port, but the chip supports two in largely the same way, so we can keep support for two just in case there is a need in future for such capabilities. Maybe someone might make a real serial cartridge with two ports and want to adapt the existing software, and they could at least test that software under emulation before moving to real hardware.

It has to be mentioned that the Electron’s operating system, known as the Machine Operating System or MOS, is effectively extended by the software provided in the Plus 1 unit. Even the unexpanded machine provides the foundations for adding serial communications and printing capabilities in different ways, and the Plus 1 software merely plugs into that framework. A different kind of serial chip would be driven by different software but it would plug into the same framework. At no point does anyone have to replace the MOS with a patched version, which seems to be the kind of thing that happens with some microcomputers from the same era.

Ultimately, what all of this means is that having implemented the emulated serial hardware, useful things can already be done with it within the bare computing environment provided by the MOS. One can set the output stream to use the serial port and have all the text produced by the system and programs sent over the serial connection. One can select the serial port for the input stream and send text to the computer instead of using the keyboard. And printing over the serial connection is also possible by selecting the appropriate printer type using a built-in system command.
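On the BBC Micro, for example, these selections are made with star commands along the following lines, and the Electron with a Plus 1 plugs into the same framework. The argument values shown are only indicative of the BBC documentation, so the relevant user guide should be consulted for the precise details:

```
*FX 2,1      select the serial port for the input stream
*FX 3,1      send the output stream over the serial port
*FX 5,2      select the serial printer type
```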

In Elkulator, I chose to expose the serial port via a socket connection, with the emulator binding to a Unix domain socket on start-up. I then wrote a simple Python program to monitor the socket, to show any data being sent from the emulator and to send any input from the terminal to the emulator. This permitted the emulated machine to be operated from a kind of remote console and for the emulated machine to be able to print to this console. At last, remote logins are possible on the Electron! Of course, such connectivity was contemplated and incorporated from the earliest days of these products.
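A minimal sketch of such a monitor program might look like the following. The socket path here is hypothetical – the emulator’s actual socket name will differ – and the program simply shuttles bytes in both directions between the terminal and the emulator:

```python
import os
import select
import socket
import sys

# Hypothetical path: use whatever socket the emulator actually binds to.
SOCKET_PATH = "/tmp/elkulator-serial.sock"

def relay_once(sock, in_fd, out_fd):
    """Wait for activity and forward one chunk of data in whichever
    direction is ready. Returns False once either side has closed."""
    ready, _, _ = select.select([sock, in_fd], [], [])
    if sock in ready:
        data = sock.recv(4096)
        if not data:
            return False
        os.write(out_fd, data)       # show the emulator's output
    if in_fd in ready:
        data = os.read(in_fd, 4096)
        if not data:
            return False
        sock.sendall(data)           # send the user's input to the emulator
    return True

def main():
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCKET_PATH)
        while relay_once(sock, sys.stdin.fileno(), sys.stdout.fileno()):
            pass

# main() would be invoked when running against a live emulator.
```

Running this in a terminal after starting the emulator then behaves like a crude remote console: anything the emulated machine sends over its serial port appears on screen, and anything typed is delivered to the emulated machine.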

Filing Options

If the goal of all of this had been to facilitate transfers to and from the emulated machine, this might have been enough, but a simple serial connection is not especially convenient to use. Although a method of squirting a file into the serial link at the Electron could be made convenient for the host computer, at the other end one has to have a program to do something with that file. And once the data has arrived, would it not be most convenient to be able to save that data as a file? We just end up right back where we started: having some data inside the Electron and nowhere to put it! Of course, we could enable disk emulation and store a file on a virtual disk, but then it might just have been easier to make disk image handling outside the emulator more convenient instead.

It seemed to me that the most elegant solution would be to make the serial link act as the means through which the Electron accesses files: instead of doing ad-hoc transfers of data, such data would be transferred as part of operations that deliberately access files. Such ambitions are not unrealistic, and here I could draw on my experience with the platform, having acquired the Acorn Electron Advanced User Guide many, many years ago, in which there are details of implementing filing system ROMs. Again, the operating system had been designed to be extended in order to cover future needs, and this was one of them.

In fact, I had not been the only one to consider a serial filing system, and I had been somewhat aware of another project to make software available via a serial link to the BBC Micro. That project had been motivated by the desire to be able to get software onto that computer where no storage devices were otherwise available, even performing some ingenious tricks to transfer the filing system software to the machine and to have that software operate from RAM. It might have been tempting merely to use this existing software with my emulated serial port, to get it working, and then to get back to trying out applications, loading and saving, and to consider my work done. But I had other ideas in mind…