
The Atomisation of Society

Sunday, April 26th, 2026

Just as it periodically has been, and just as it was with the Bitcoin and blockchain fads, artificial intelligence is back in vogue, and now we must suffer hearing about “AI” all the time, shoehorned into everything technological and plenty of other things besides. I can hardly be in a minority in finding it all tiresome and even somewhat disheartening.

Of course, artificial intelligence is a broad discipline, and there are useful technologies doing useful work that may have been reported with varying levels of hype over the course of recent history. Over the last decade or so, machine learning has received a degree of media attention, often doing things related to mundane topics like pattern recognition, supporting applications in worthy domains such as medicine. Interestingly, activities like machine translation and visual object recognition are increasingly taken for granted by wider society, despite being problems that were once considered difficult to solve or tackle reliably.

If worthy domains are not your thing, and your phone is the centre of your life, you might be able to point it at some foreign language text and see it appear, magically translated, in a language you might understand. But this mostly isn’t what the current wave of “AI” hype is all about, even if such relatively useful applications are used to greenwash that hype.

Has your Internet search experience degraded to the point of near uselessness? Worry not: you can now be distracted by something pretending to be intelligent feeding you some misinformation instead. Never mind that an improved search experience, yielding useful, informative results was well within the technological grasp of the well-resourced corporations involved. Boring old part-of-speech tagging, maybe some semantic networks, and a bit of traditional information retrieval technology could go a long way. But, of course, that wouldn’t perpetuate the predatory advertising-fuelled surveillance economy or attract billions of speculative investment dollars.
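To give a concrete flavour of the unglamorous technology being dismissed here, what follows is a minimal sketch of traditional information retrieval: TF-IDF ranking of a few documents against a query, written in plain Python with only the standard library. The documents and query are invented for illustration, and real search systems refine this scheme considerably.

    # A toy example of "boring old" information retrieval: rank documents
    # against a query by summed TF-IDF term weights. Illustrative only.
    import math
    import re
    from collections import Counter

    def tokenise(text):
        """Lowercase the text and split it into simple word tokens."""
        return re.findall(r"[a-z0-9]+", text.lower())

    def rank(query, documents):
        """Return (score, document) pairs, best match first."""
        token_lists = [tokenise(d) for d in documents]
        n = len(documents)
        # Document frequency: in how many documents does each term occur?
        df = Counter(term for tokens in token_lists for term in set(tokens))
        scores = []
        for tokens in token_lists:
            tf = Counter(tokens)
            scores.append(sum(
                (tf[term] / len(tokens)) * math.log(n / df[term])
                for term in tokenise(query) if term in tf))
        return sorted(zip(scores, documents), reverse=True)

    documents = [
        "The cat sat on the mat.",
        "Cats and dogs are popular pets.",
        "The stock market fell sharply today.",
    ]
    for score, document in rank("cat pets", documents):
        print(f"{score:.3f}  {document}")

Terms appearing in few documents are weighted more heavily, so rare, specific search terms dominate the ranking: no model training and no electricity-hungry data centre required.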

In Britain, it isn’t enough to saddle the National Health Service with market economics, layers of bureaucracy, and numerous consultants and consultancies siphoning off considerable sums at the taxpayer’s expense, all as the fundamental challenges of the service’s information systems – and core activities – remain undiminished. Now, “AI” will drift in, touted by commercial opportunists and predators, new and old, alongside “data science” practitioners aiming to exploit private health information for their own benefit.

And as if by magic, new “efficiencies” will supposedly result. Never mind that without fixing the bread-and-butter issues, no amount of smoke and mirrors can deliver the magic envisaged. But, of course, it is far easier to wish away one’s problems than to confront them, especially if there are commercial interests with products and services to sell. Why not delegate hard problems to some supposedly all-knowing, all-understanding entity that will magically understand all those things for which the impatient politician or manager simply has no time (or intellectual stamina)?

It is also apparently not enough to saddle educators with extra responsibilities that in a well-organised society would be the task of social and healthcare workers and, in some situations, that of police outreach workers and the police service itself, all as funding is perennially slashed in the education sector. Instead, policy-makers sing tunelessly from the “AI” cult’s songsheet, fuelling yet another corrosive societal problem and, as always, dumping the consequences in the laps of educators for “careful consideration”. And, of course, the welfare of the students barely registers on the conscience of those policy-makers as they perform their victory lap.

As Winter Follows Summer

For the most part, the “AI” in the latest round of hype is that most easily understood by, or most relatable to, the average media commentator. It is the gadget that generates written or visual content as if it were some kind of creative being. For their managerial counterpart, whose job title incorporates the dishonestly chosen words “customer satisfaction”, the hammer in the toolbox marked “chatbot” is thus reached for on every digital transformation job, just as it was in the early 2000s for organisations looking to somehow “deepen” the level of interaction with their customers.

Amusingly, many of today’s corporate chatbots are largely the same kind of keyword-scanning “choose an option” counterparts to the classic phone menu as their predecessors were. The most likely reason for this is that the much-hyped large language models, exposed to querying from random people on the Internet, risk incurring reputational damage to any given hapless organisation as they go off-script and start to parrot “poorly sourced” training data, to put it charitably, of which there may be quite a bit.
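For anyone wondering what such a “choose an option” chatbot amounts to, the mechanism can be sketched in a few lines of Python. The keywords and canned replies below are invented, but the essence is no deeper than this: scan for keywords, emit the matching canned text, and fall back to a menu.

    # A toy keyword-scanning chatbot: no language model, just keyword sets
    # mapped to canned replies, with a fallback menu. Illustrative only.
    import re

    RESPONSES = [
        ({"invoice", "bill", "payment"},
         "For billing queries, press 1 or visit our payments page."),
        ({"delivery", "shipping", "order"},
         "For order tracking, press 2 or enter your order number."),
        ({"cancel", "refund"},
         "For cancellations and refunds, press 3."),
    ]

    FALLBACK = "Sorry, I didn't understand. Choose: billing, delivery or refunds."

    def reply(message):
        """Return the first canned reply whose keywords appear in the message."""
        words = set(re.findall(r"[a-z0-9]+", message.lower()))
        for keywords, response in RESPONSES:
            if words & keywords:
                return response
        return FALLBACK

    print(reply("Where is my delivery?"))      # matches the "order tracking" rule
    print(reply("I want to talk to a human"))  # falls through to the menu

Unlike a large language model, such a thing can never go off-script, which is precisely why risk-averse organisations keep deploying it.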

And it is the matter of what is used to feed the “AI” monster that should concern those of us in Free Software and every other industry that creates content. It is here that we find out who our allies are, who is really committed to a free and fair society, and who it is that instead chooses to betray us. Unsurprisingly, those who betray us are largely the same group as those who, once employed by a “rich uncle” in the form of some corporate behemoth dangling incentives and a comfortable lifestyle, try to explain away the problematic behaviour of that rich uncle. Worse still, they tend to demand the silencing of any uncomfortable criticism of either the behemoth or themselves in their sordid pact.

You will find these people regularly trolling others in any venue where comments may be left to supposedly discuss the headline content, basking in their privilege and telling everybody else how little those on the outside really know. As layoffs roll in across the technology sector once again, it remains to be seen whether the enthusiasm for defending some corporation’s questionable ethical record remains buoyant or whether there might be some buyer’s remorse amongst those who took the easy money. But of course, these people will glide through another gilded door, ushered in by some other member of the old boys’ network. After all, some of them have been trying to undermine Free Software for years.

This time round, the layoffs are justified by management using the very thing that the corporate apologists seek to defend. Perhaps the most enthusiastic apologists for “AI” have not been shown the door just yet, but plenty of their colleagues have. Then again, there are plenty of people who see a harmful phenomenon and think: “That could never happen to me!” After all, they are the special ones. They might concede that something has come along to “shake up” the industry, but as the special ones, they are the ones who will adapt, who will turn this disruptive force into their “competitive advantage”.

In any online discussion related to “AI” and the software industry, you can expect to read the same old guff. Some self-proclaimed top-flight developer will tell everyone how much “more productive” they are by getting “AI” tooling to generate all those unit tests that they previously had to write by hand. Never mind that their claimed “insane level of productivity” is tempered by their own admitted need to review the slop created for them, a result of describing their needs in “plain English” when it has been known for decades that defining a functional specification for a piece of software, especially in a natural language, is one of the harder tasks in systems development.

Many of these people will claim that this lets them “spend more time on the creative part” of their job, never mind that the kind of person who writes such drivel is usually the kind of person that most observers would struggle to categorise as creative. But their advocacy for systems that systematically trawl other people’s work, undergo “training” on it, and transform it for eventual regurgitation in a hopefully distinct and thus plausibly “original” form, together with their sudden enthusiasm for the creative side of their work, will find its reward in the end. After all, creative jobs have been amongst the first to be eliminated in this new wave of “AI”, as managers (aping their tech idols) declare that graphic designers are no longer needed, just as I learned in one enlightening, spontaneous conversation a year and a half or so ago, or as anyone can discover just by following the news.

Your manager will surely not care how much more quickly all those unit tests are getting written – not that they cared before, exactly – nor will they value your “creativity” when the latest toy is dangled in front of them promising to do your job. Your peers will not care about any kind of epiphany you have, either. They will still be the special ones and you, like the rest of us, just jealous of their success. You will have had your chance to change the world for the better, and you will have blown that chance.

The Hoarding of Privilege

One might expect technologists to be attuned to the perils of technology for themselves and society, but one should never underestimate the lure of selfishness and self-interest, of not wishing to know about any negative consequences of their work or their interests. As one might expect, solidarity is the first casualty.

One sees talented developers who have spent their entire career writing their own code, tinkering with “AI” coding tools and enthusing about the output, wasting other people’s time pontificating about why the assembly language generated for some 8-bit microprocessor isn’t as plausible as a Python script for a task solved independently hundreds of times and published to the Internet every single time. One sees talented developers make nice new applications, only then to go off and use “AI” for the artwork or for other “creative” elements of the work.

They might say that they aren’t talented enough to do those tasks themselves, so they can justify delegating them to a machine. Maybe they just want something that fills the gap, or maybe they don’t want to pay for such work. But even if everyone is doing all of this for fun and for free, why do they not involve others in their creative exercise, people who might be able to draw pictures or write prose?

It sounds to me as if they just do not value such creative endeavours or the role of those in society who are talented enough to pursue those endeavours. And as they hit up “AI” to conjure up some art, featuring a particular subject, done in a particular style, they perhaps purposefully neglect the creators of the works that fed the “AI” in the first place: those very same people they breeze past as they go and hit all the buttons.

It must be so wonderful to be in a position of having made plenty of money in a profession that was once valued, maybe even lucrative, to now be able to check out for the rest of one’s career. To pull up the ladder on the next generation, eliminate their opportunities, and to indulge in that age-old practice of berating them for being lazy and not good enough. To indulge in “vibe coding” personally, maybe even promote it educationally, but then to label as decadent all recent practitioners in one’s own profession, simply not made of the same stuff as folk were in the “good old days”.

One encounters people who say that they are glad that they are retired or will soon be retiring. For some of them, this may be a legitimate reaction to an increasingly tiresome and possibly even hostile workplace culture, looking forward to some well-deserved relief tinged with remorse for a vocation that no longer delivers the satisfaction it once did. For others, it just sounds like they are content to leave the world in a worse state than it was, and to abandon the job of reversing this damage to subsequent generations: hardly an isolated occurrence in the modern era.

It can be all too easy for people to downplay the effects of new technologies, often with claims that appeal to ideas about “human nature”. As children go through their entire childhood exposed to technologies that are inadequately supervised and often exploitatively designed, some might claim that schoolchildren or students seeking to cheat on their assignments “isn’t a real issue”. And with this, we are presumably supposed to move on. But anyone who knows any teachers will also know that plagiarism is very much “a real issue”, both for students and teachers.

With ready access to online tools, even encouragement to use them, the temptation is too strong for many students, particularly those who have been struggling throughout their education. With a diminishing incentive to learn, and besieged by social pressures amplified by technology, plagiarism starts to look like all they have left in their arsenal. When we hear stories about disengaged young people and their apathy, imagine being the person who has to try and motivate these young people, especially when it seems like every last person with influence over this situation has abandoned everyone affected.

And plagiarism is, in fact, such an issue that Microsoft, having flooded the zone with “AI” toys, offers other toys to detect plagiarism. There is good money to be made by escalating a situation into a conflict and equipping both sides, as those in certain other professions and industries know all too well. I have read tales of educational institutions being infatuated with “AI” to the point of rolling it out, often excused in numerous ways, but mostly amounting to someone in a position of power wanting to try all the toys.

(And given that school administrations are routinely surrendering their classrooms to gambling industry practices and data surveillance predators, “AI” probably does seem like just another fun toy to play with.)

I have heard tales of decision-makers and executives trying to mitigate the harmful effects of their decisions, related to “AI” and otherwise, by leaning into the consumerisation of education and seeking to wave through students who may have resorted to plagiarism. Students who just want the piece of paper, so that they can get on all those celebrated ladders of society – good employment, decent housing – before they are pulled up and out of reach entirely.

But what will become of such students when they move on to the next stage of their education or development, increasingly out of their depth, feeling less and less adequate, confident, fulfilled or secure in their own existence? At what point do we justifiably label the abdication of responsibility, by those in authority and those who seek to profit from such misery, the abuse that it arguably is?

“You start with the hope that the next swipe will bring a reward, and eventually it just feels good to see the images floating upwards. And you believe that you’ll get something out of it in the end. But that’s the same dirty trick that gambling machines use. The house always wins.”

When society fails to protect its members as they grow up, how can those in privileged positions criticise people, overwhelmed and desensitised by torrents of increasingly harmful content, for not participating in or engaging with society? Maybe those elected and appointed to protect our interests could do exactly that, instead of looking out only for themselves and the vested interests who have been lining their party’s pockets.

Quality Uncontrolled

One potentially common element in the workplace and the classroom is how each group perceives the materials they encounter, along with their attitudes towards the cultural works of their own society. For children growing up in earlier times, one might expect the consumption of books, magazines, newspapers, television series, films, popular music and other such works to have played a defining role in their perception of culture and its value.

But after years of relentless and cynical commercialisation, children have increasingly been growing up in an environment of relentlessly derivative entertainment content. Endless comic book franchise reboots and the like, produced largely to keep trademarks and other “intellectual property” warm and out of the public domain rather than to tell new and original stories, may have habituated people to expect less and less from culture and to place a lower value on it.

Instead of being engaged by cultural works, those works become mere wallpaper, something to have on in the background while other, more harmful, forms of engagement steal the attention of the viewer or listener. And with their perspectives and beliefs unengaged and unchallenged by mainstream culture, people risk being sucked into a bubble all of their own, all too readily and easily fed by “AI”, reinforcing their flawed views and pandering to their prejudices.

Why would you want “AI” to generate content for your own consumption? For someone habituated to having their favourite superhero meet and/or fight their other favourite superhero in endless re-runs of generic, commercial filler, it isn’t such a big step to go on and have their tummy rubbed all the time by unchallenging and probably inaccurate content that happens to appeal to their existing biases, now readily produced in bulk and on demand by “AI”.

For some of us, the process of creating something new is part of the eventual reward, and the process of researching, understanding and communicating something is engaging all by itself. But the degradation of culture to a mere consumable risks the marginalisation or even eradication of the creative process, the investigative process, of historical and critical enquiry, all so that people can be soothed and entertained.

There is a relatively well-known quote about AI being supposed to free up a person’s time for pursuing their art by taking care of the cleaning and the laundry, but instead “AI” has made them and their art redundant and leaves them with the cleaning and the laundry. What might this have to do with the workplace? Well, to answer that, we have to address the phenomenon of the software product and its gradual erosion.

Angst or Realism?

When the well-known technologist Tim Bray raised the issue of “AI Angst”, alongside broader issues, the accompanying off-site discussion of the article predictably featured the usual consumerist “works for me” and aspirational “more productive” tropes. But it also exposed the sentiment that “AI” is quite able to take the joy and the motivation out of doing work and being in a profession. Amongst those who had the most enthusiasm for vibe coding were claims that only “5% to 20% of the work is interesting” in a modern programming job. So, what does that say about the state of the industry?

One factor might be that the industry is subject to a continual inundation of fads and trends, often imposed on the profession from outside and facilitated by opportunists who can persuade management and executives of productivity boosts and reductions in expenditure. Another is that enthusiasm for “AI” is evidently higher amongst those whose development involves a high level of “donkey work”. This leads to a pertinent question: have technologists failed their profession by perpetuating cumbersome, inadequate technologies?

If you have ever had to work with certain widely deployed, well-resourced software available today, you will be familiar with the sensation of having to seek advice or guidance on how some aspect of the software works or is to be used, only to discover that there is no documentation, or that the standard of documentation is so dismal that one can only wonder how people have allowed such software to be used so widely and in such critical applications in the first place. Not that wondering about it is in any way helpful.

Over the years, various remedies have been thrust onto the scene, from plain old Internet search, hoping to dig up some gems of wisdom, through discussion forums aspiring to provide relief to adherents of particular technologies, to the now-familiar question-and-answer site concept popularised by the likes of Stack Overflow and its affiliates. But alongside the frustration that inadequate documentation may bring is the frustration and general disillusionment of having to continually ask questions that should have been definitively answered long ago, with those answers recorded coherently for posterity in actual documentation, sometimes relying on vain, petulant and uncooperative individuals.

We might then understand why some people could be tempted to use “AI” chatbots, desperately trying to get insight into things that should have been made clear by actual human beings in the first place. The chatbot might be misleading or counterfactual, but it is probably going to act somewhat cooperatively, not shame the user about their lack of knowledge, or withhold information to coerce a form of deference. Well, at least not yet, anyway.

This sorry situation is a consequence of the way the software industry has been heading over the last few decades. Under the banner of agility, competitiveness, and that crucial “time to market”, forever to be optimised and minimised, things like documentation are critically undervalued. “Read the code!” exhorts one techbro to the other, advocating the complete elimination of any commentary, deemed unnecessary because “the code is the documentation” and because what they coded was obvious to them when they were “in the zone”.

Never mind that the commentary would have revealed the intended behaviour of the code, along with the deficiencies of the code that was actually written. But, of course, no-one had any time for that, let alone any time for writing a coherent guide to the software, its architecture, and how other people might interact with it in various ways as users of different kinds and developers needing to fix and extend that code. All of that stuff was low-end work according to the techbro, as are other kinds of programming that they do not personally value. (Still, the collision of the butch “rewrite everything in Rust” brigade and “AI” can provide some entertainment and maybe an occasional reality check.)

It also does not help that documentation has joined the other facets of development in being subjected to various technological and methodological fads that do more to create opportunities for certain industry players than to improve the quality and breadth of material available. Just as one’s heart sinks when the primary source of guidance for a project turns out to be a GitHub repository page, so it does when one encounters a documentation site that was produced by Sphinx, with plenty of hastily scribbled sections littered with admonitions and caveats.

Tim’s use of the term “angst” rightfully received a strongly worded response from one commenter, not least because labelling people’s reactions as if they were those people’s weaknesses in having to respond to seemingly overwhelming pressures is what Tim might himself regard as victim blaming. Tim seems like a generally good guy with a great deal of self-awareness, but I imagine that his position in the industry, along with that of elements of his readership, some probably having done very nicely indeed from their tenure at various West Coast behemoths, makes the general demographic less than reflective when they lament the shocking lack of alternatives to their favourite cloud services.

To an extent, it is a bit like former politicians who take on moral causes after their positions in power are over and as their influence steadily diminishes. Where were they when other people needed their help in furthering those causes? When adjustments to policy, so easily done, might have made a significant difference. Similarly, with Free Software and the provision of sustainable, ethical technology more broadly, the dominant ideology promoted vigorously by West Coast capitalism often seems to involve discretionary support to worthy causes by wealthy people, many of whom did just fine while those worthy causes were kept marginalised, and when various structural causes of inequality, noted by Tim himself, were conveniently ignored because that would involve wealthy people paying a bit more tax.

The Idiocracy

There is always some idiot who pipes up in online discussions about how they asked some chatbot about some topic, “and it said this”. It is almost like they want a prize for having done something that anyone else could have done. And what does this kind of “idle wondering” usage of chatbots, indulged by the consumption of colossal amounts of electricity, actually serve? If anyone were really interested in a topic, and were to have the knowledge and understanding to actually assess chatbot output, then they probably wouldn’t be using one in the first place.

That leaves the most likely kind of user as the sort of middle manager who doesn’t understand anything, who just pushes the paper on to someone who is supposed to “action” the information they were given. At what point is the human, relaying information they don’t understand to impress people about “what they know”, just a conduit for the activities of machines? And are they not then just a puppet participating in the game of assessing whether the machines are intelligent or not, abdicating their own genuine intelligence in the process?

To be fair to those wishing to put questions to machines and expecting some kind of summarised response, questions and other naturally constructed prose could conceivably help formulate more effective queries than simple combinations of keywords. The grammatical roles of words and the relationships between those words can disambiguate between the different meanings of some words and constrain the context of any particular search for information. Returning a summary that paraphrases the materials that were found might help in determining whether those materials happen to cover the right topics.
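As a rough illustration of this point, the sketch below uses dependency parsing from the spaCy library (assuming it and its small English model are installed) to keep only the grammatically salient words of a question, attaching their roles rather than treating the input as a bag of keywords. The function and the example questions are my own invention, and the exact parses depend on the model used.

    # Extract grammatically salient words, with their roles, from a question.
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def structured_query(question):
        """Keep subjects, objects and the main verb, labelled by role."""
        return [(token.dep_, token.lemma_)
                for token in nlp(question)
                if token.dep_ in ("nsubj", "dobj", "pobj", "ROOT")]

    # The word "python" plays different grammatical roles in these two
    # questions, which a plain keyword query would simply conflate.
    print(structured_query("Which python bit the zookeeper?"))
    print(structured_query("Which python library parses dates?"))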

But what I discovered recently when being offered an “AI” search summary for a particular, highly specific, search term was the kind of thing people describe as hallucination. More accurately, though, it was reminiscent of when a lazy student of a certain age, or someone who is “hustling” to impress their equally superficial superiors, states something as a fact and references a bunch of supposed supporting material, only for none of that material to actually confirm the fact or quote the term in question, let alone define that term as the thing confidently stated in their opening sentence.

Once upon a time, when machines were meant to be doing things that were described as “reasoning”, researchers were expected to show how the machine had worked through to its conclusion, revealing things like “facts” and “inferences”, and thus providing insights into the state of the machine’s encoded “knowledge base”. One would then be able to verify for oneself whether such a conclusion was sound or not. But now, it would seem, nobody really cares about accuracy or correctness, but only whether the damned machine has the right kind of “swagger” that would convince a room full of imbeciles with something that might sound credible enough to them.

And we are entering a time when we should be concerned about whether people can readily check the correctness of “AI” output. It was always bad enough when a search engine would produce results in which the search terms didn’t appear, even in a derived form, meaning that some “search engine optimisation” vulture had deceived the search engine with a bucket of false keywords stuffed somewhere into a page or site, but now the search providers have an excuse to push “AI search”.

Having trained their models on the bulk of the Web, they can then push out arbitrary “summaries” while deliberately obscuring the content that those models consumed. Thus, they can deny plagiarism by making source material difficult to find, while also omitting or excluding content that contradicts or refutes any dubious assertions or conclusions presented as fact by their services. They arguably don’t even have to try very hard to degrade the experience, giving the whole exercise the air of plausible deniability.

As with predatory social media, the training of “AI” also has considerable emotional costs for the exploited people who do the work of training it. Not only are they not encouraged to properly correct disinformation, but they may not be suitably qualified to do so: after all, everyone can only be knowledgeable about so much. Having seen behind the curtain, their advice is not to trust it. For “AI” companies hyping up their products, and for the idiocracy, the illusion of the perfect, shiny, all-knowing android has to be upheld at all costs: for the former so that the money can keep rolling in, and for the latter so that they can presumably look cleverer than they actually are.

The tiresome industry insider will criticise anyone suggesting that the average chatbot exhibits only superficial characteristics of intelligence, hand-waving towards mysterious models while playing the “you don’t know what I know” card, but it isn’t unreasonable to suggest that the whole phenomenon is barely above the level of a parlour trick. Does the chatbot really understand anything or do people just read meaning into what is effectively just banter? Is it merely a new Eliza for the disinformation age?

The Apologists

There are always plenty of people willing to pipe up when something that amuses or delights them is criticised in some way. One will undoubtedly encounter people who cosplay their fantasies of living in the future by infuriatingly using the term “an AI” in an unsarcastic and completely sincere way to refer to a system supposedly employing AI. In doing so, they effectively attribute general intelligence and even sentience to those systems. Criticise such sloppiness and you’ll get something like this in return:

“Nobody had any problem calling it artificial intelligence back when it was far less capable than it is now.”

Once again, such people fail to appreciate the distinction between artificial intelligence as a discipline and the promotion of the discipline in broader society, where despite the buzz in superficial media circles, applications of AI were often met with much scepticism. It was frequently obvious that the systems involved might have been exhibiting traits of ostensibly “intelligent” behaviour, but these systems could not be regarded as being intelligent in their own right or in a general sense.

Even with huge amounts of data and computing power brought to bear on such matters, chatbot output still has the smoke-and-mirrors character familiar from earlier demonstrations of the capabilities of AI. Sadly, popular culture is now configured to amplify the hype as opposed to deconstruct it. So, even when this is pointed out, people believe that researchers in earlier times would have been awestruck by today’s “AI” chatbots, when in fact they would almost immediately recognise the phenomenon. Indeed, they would be disturbed by the way society has embraced such technologies unquestioningly, perhaps observing that earlier demonstrations were mere laboratory experiments, potentially dangerous if they escaped and proliferated.

What has changed since those earlier times is the broader availability of computing power that delivers a more convincing “demo effect” of the technology. This makes decision-makers think they can dispense with humans and roll out the chatbots. Couple this with the way that the populace has been conditioned and manipulated in terms of expecting and accepting less from public services, private companies, their employers, in their careers, and in their lives more generally, and it is not surprising that people are consuming such technologies. Those who believe that they are only doing so recreationally are predictably dismissive about the negative effects on vulnerable people who might end up using such services because all of the responsible, humane options have been eliminated for the sake of “convenience”.

There is also the arrogance that people seek to exude of being more comfortable with technological change than the average person, even as they reveal their own insecurity about the nature of intelligence. To me, the idea that other creatures possess a range of intelligence is indisputable, and we are regularly presented with observations made about intelligence in the natural world, animal behaviour and cognition, that should merely confirm that we as humans are not quite as special as we might think. Many people engage constructively with such observations and show that they can readily accept notions of more pervasive intelligence in nature.

Meanwhile, the insecure but outwardly confident technophile presumably scoffs at the notion that, say, animals might employ forms of reasoning or possess forms of cognition or information processing that could rival those of humans. They would exhort others to not attribute “human” traits to other species, even as they readily ascribe fanciful characteristics to machines running mystery payloads of software, presumably because humans were involved in their creation.

Some apologists play the inevitability card, that this is simply another change following on from many that society has already absorbed and survived, and that this will somehow be digested, too, seeing society adapt and life go on as before. But here, even those who appreciate the challenges these technologies pose do not seem to understand the cultural and societal calculations that accompanied earlier forms of technological change. There are long-enduring cultures on this planet that have had social rules about the depiction of the human form for thousands, maybe tens of thousands of years, and yet our arrogant “modern” societies think we have nothing to learn from such cultures.

Again, the apologists might reveal their ignorance by scoffing at pigments, paints and dyes as technology, but those technologies gave humans the ability to choose how they might be portrayed. We live in an age where the application of computational technologies may seek to eliminate any distinction between observed reality and precisely concocted fantasy in a way that is still scarcely believable even now. Fake pictures, video and audio can be presented as genuine representations and recordings of reality, leaving their audiences deceived.

Maybe older cultures did not need to see the emergence of such technology before they understood some fundamental social lessons that still remain unabsorbed by our supposedly “advanced” societies today. And while it might be said that perhaps those older cultures needed thousands of years to reach their own conclusions, we might also observe that our own societies do not have the luxury of thousands of years to tackle the effects of this corrosive fakery. This might even remind you of another pressing, existential threat to humanity.

One notable incident in this regard involved content creator Jeff Geerling, whose voice was cloned by a supplier of technology products for use in advertisements. Fortunately, the company in question backed down when challenged over the unlikely similarity of its synthesised voice to Geerling’s own, and the incident was resolved relatively amicably and with incredible grace by Geerling himself. The incident highlighted the proliferation of such tools and their widespread availability for all kinds of applications.

Naturally, there are apologists for such tools, too, of the same school that presumably insists that nuclear weapons are not inherently bad, just how they might be used. Our societies show little sign of resilience in the face of such threats. Instead of robust legislation, regulation and education, we are left with the usual predictable media commentary about “scams”, with various workarounds to “stay safe”. And those same media outlets still promote the social media lifestyle, exhorting everyone to share their lives online and to feed the predatory social media monster.

The apologists might regard “AI” as harmless fun or empowering (to them), but there is an arms race in progress and an industry dealing in what might be described as information weaponry, building on the delivery mechanisms of predatory social media, to further degrade society’s resilience, making it impossible to trust voices, images, video and content. We might like to believe that we are the sophisticated ones, living in a “modern” society, in contrast to cultures where images of the human form are forbidden. One might wonder whether such rules are less about “taboos” – a culturally loaded term – and more about such societies having a fundamental realisation that our own societies fail to appreciate or understand, dismissing it with arrogant talk of our own “progress”.

Media coverage of “deepfake this” or “deepfake that” predictably circles around sensationalism, involving deceased celebrities if the audience risks being unmoved. But on a more ordinary level, is it progress to no longer be sure that the person sounding like or looking like a member of your family on the other end of a voice or video call really is that person? Is it progress to need a codeword to be somewhat sure?

And is it progress to only be sure if you are standing there in the same room as them, until the day when some ghoul, possibly one who has grown tired of monetising deceased celebrities, eventually introduces lifelike androids to impersonate random people? Even before that miserable day arrives, is it progress if some other ghoul decides to “deepfake” deceased relatives to torment those who are in mourning?

One of the more interesting contributions to the discussion about “AI angst” was the one giving the Vatican’s view on “AI”, which elicited some considered responses. Naturally, the Catholic church is hardly the most popular institution for a variety of reasons, but one has to concede that on matters of theology and philosophy, on issues that shape humanity’s view of itself and its relationship with the natural world, the institution can hardly be considered to be staffed with lightweights, even if we might disagree – sometimes strongly – with its position on certain social issues.

Paying for Other People’s Privilege

As with many of society’s ills, one can always choose to metaphorically put one’s head in the sand and ignore those problems that mostly seem to only affect somebody else. Regardless of whether one does so, however, such problems have an annoying habit of landing in one’s lap, anyway. A few months ago, I got a mail from my hosting provider telling me that a “lot of traffic” visiting one of my sites was “causing excessive CPU usage and disruptive service for other customers”. Of course, this was due to a multitude of client addresses hammering my site – a repository browser – and crawling all over every last published resource.

It was suggested that I put my site “behind Cloudflare” since they offer various services to mitigate the effects of “bot” traffic, with my only guidance being a link to a blog about a service that Cloudflare offers. There was no guidance about what obligations towards Cloudflare I might have as a result, whether there might be payment involved, or whether Cloudflare might want or get something else from me, were I to sign up. In the end, I just put authentication in front of my site and re-enabled it, hoping that the bots would eventually give up if all they saw was the appropriate HTTP “authentication required” response.
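The principle of that last measure is easily sketched. Below is a toy WSGI application, using only the Python standard library, that answers every unauthenticated request with the HTTP 401 “authentication required” response mentioned above; the real site presumably relied on the provider’s own mechanisms, and the credentials here are obviously placeholders.

    # A toy WSGI middleware demanding HTTP basic authentication, so that
    # crawlers only ever see a 401 response. Illustrative only.
    import base64
    from wsgiref.simple_server import make_server

    USERNAME, PASSWORD = "me", "not-this-please"  # placeholders

    def protected(app):
        """Wrap a WSGI app, demanding basic authentication on every request."""
        expected = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
        def middleware(environ, start_response):
            if environ.get("HTTP_AUTHORIZATION", "") == f"Basic {expected}":
                return app(environ, start_response)
            start_response("401 Unauthorized",
                           [("WWW-Authenticate", 'Basic realm="repository"'),
                            ("Content-Type", "text/plain")])
            return [b"Authentication required.\n"]
        return middleware

    def site(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"The repository browser would live here.\n"]

    if __name__ == "__main__":
        make_server("localhost", 8000, protected(site)).serve_forever()

A well-behaved crawler should give up on a site that never yields anything but 401 responses, although, as the episode above suggests, today’s “AI” scrapers are not known for being well-behaved.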

Consumerists would point out that I’ve been doing all of this wrong, of course. Firstly, they would tell me that I should have moved my repositories to GitHub for the convenience of having Microsoft do the hosting and me paying nothing for the privilege. They would tell me that having GitHub suck all of my content into its “AI” consumption engine, for regurgitation to other users through their copyright laundering “AI” tools, is “no big deal” and that I should be happy to see my code used in so many other places. They would presumably purr about GitHub’s cumbersome and overworked user interface and its “collaborative tools” that, in classic Microsoft style, try and insert the service into everybody’s workflow.

But by not “going with the flow”, I pay for the privilege of offering a service on my own terms: contributing to an independent hosting provider and keeping their business viable, allowing other interested parties access to my software, and generally trying to uphold my own privacy and that of those who wish to interact with me and my content. The end result is that I, and others who try to uphold our autonomy and defend our own interests, are punished by the Internet equivalent of looters and pilferers. And the consumerist response involves an outcome that is not entirely unintended on the part of the Internet’s dominant corporate interests.

Just as it is rather convenient for the likes of Microsoft to promote “AI” tools in education, only to also offer “AI” plagiarism detection tools in their ubiquitous services, it also seems rather convenient that the degraded Internet environment, increasingly subverted to feed “AI” products, just happens to drive people towards services and platforms run by the Internet’s behemoths that offer “AI” products and services as part of their headline feature sets.

Naturally, such corporations would claim innocence of any random botnet involvement, rather like how conventional suppliers of physical products routinely claim ignorance of bad things in their supply chains, but ultimately the finger-pointing is of diminishing significance. If someone cried “gold” and now an entire landscape has been obliterated, does it matter who was driving which excavator as everyone piled in to make their fortune?

The coercion used by software companies and service providers to “opt in” customers and users is, of course, entirely deliberate. Their goal is to make everyone complicit in mass copyright infringement and, amplifying classic right-wing hypocrisy that appeals to “personal responsibility” yet weakens regulation and enforcement, normalise sentiments that “everybody is doing it” and so rule-makers should “not bother” trying to curtail antisocial behaviour.

Such collective antisocial behaviour affects Free Software in other ways. It causes a degradation of software quality and Free Software contributions, potentially for malicious purposes, grinding down contributors. All of this effectively conspires against independent, modestly-resourced Free Software projects, at the very least inhibiting or shutting down open collaboration, and at worst entirely eliminating such projects as alternatives to well-resourced corporate projects.

People may claim to be applying “AI” tools with the best of intentions, but those using the tools seem to be happy for them to blatantly plagiarise other works and then effectively mark their own exam, denying what is obvious to any moderately competent observer. Or in the ominous words of one such practitioner:

“Beats me. AI decided to do so and I didn’t question it.”

And, as we have seen before, the result is a degradation in Free Software offerings, of impaired desktop experiences, of inscrutable technologies promoted by vested interests and proliferated unquestioningly, and the continuing need for all of us who advocate Free Software adoption to apologise for the state of the software we recommend, as proprietary software companies cash in on the perpetual “lack of polish” or other deficiencies, perceived or real, of that software. Fewer and fewer viable choices remain, driving the average person into the arms of the monopolists, just as they always intended.

If nothing else, Free Software advocates should be pushing back hard against “AI” proliferation, but many of them won’t. I know to my personal cost what adhering to a set of principles entails, but many people are rather more flexible when it comes to following through on what they supposedly believe in. Free Software advocacy typically sits at the intersection of at least two professions: software development and legal practice. As software developers, plenty of people claim to understand the nuances of AI, readily telling you that you have it all wrong in your criticism of the technology.

And software practitioners are often frustrated by law practitioners and their inability to correctly perceive and interpret the nature of technology. Yet, if the legal profession has concerns about AI, why should other professions accept its use so unconditionally? But that is what people do: people who should know better, who claim expertise and the right to lecture others, and then somehow wave away any concerns because it suits them. When legal cases collapse due to “vibe lawyering”, justice may be denied and crimes may go unpunished. When software fails due to vibe coding, the effects risk being just as serious and, in some cases, even worse.

Practitioners, institutions and policy-makers seem to have, for their own reasons, an inability to confront awkward problems and the grind of getting the job done. Consider the case of a catastrophic data leak which imperilled thousands of people due to inappropriate tool usage and a lazy office culture, where people couldn’t even be bothered to add two words to an e-mail subject line as the only, almost laughable, technical safeguard against data breaches.

When that becomes just another opportunity to peddle and to apply “AI”, it not only demonstrates that everyone responsible has effectively “checked out” and cannot be bothered to think very hard about the basics, despite the eager software practitioner’s tiresome refrain of “security!” in every technical forum they dignify with their presence. It also shows the ethical deficit these people have with the rest of society and the people whose lives depend on their diligence. Because we know that “AI” will be just another scapegoat when things go wrong, excuses will be made, and the “vibe” will go on.

Lawmakers, meanwhile, seem infatuated with their new toy, as they also were with predatory social media, delighted to merely rub shoulders with indulged foreign oligarchs, potentially eyeing the possibilities of lucrative sidelines or post-political positions, instead of developing and furthering the interests of those that elected them to office. As has been the case with other topics of concern, notably software patenting, it seems that lawmakers can be very happy to listen to selfish commercial interests from beyond their electoral boundaries instead of the people they are supposed to represent. (Hint: “the south-west of England” is not the region one such lawmaker was elected to represent.)

Thus, blatant plagiarism, pilfering and infringement under the pretence of a “creative” act seems entirely reasonable to distracted lawmakers, never mind that letting some of the highest-valued corporations on the planet have free and unencumbered access to the lucrative output of a nation’s supposedly prized creative industries is likely to plunge those industries into economic ruin. In the case of the United Kingdom, this would be only another chapter of the nation’s leadership stupidly squandering what remains of the cultural “soft power” that the nation once had, only instead of doing so to pander to the bigoted, the ignorant and the deceived, it would be to the kind of people who gladly facilitated that earlier deception.

Some might claim that “expanding copyright” to prevent “AI” misuse of content is wrong, noting that training activities are perfectly legal and justifiable (obviously ignoring the costs incurred by those of us who pay for our Web hosting), and likening the publication of a model to “publishing facts about copyrighted works”. But what about the publication of “AI”-generated works? The suggested “simple way” to “protect artists from AI predation” involves withholding the application of copyright to such works, preventing Big Content from monetising such works, and thus deterring Big Content from adopting “AI” and firing its creative workers.

While that sounds like a great economic “hack”, it doesn’t confront the broader phenomenon of the cheapening of content at all. Big Content has arguably already pivoted to technology, streaming, and the like, but even if they might suffer from such policies, it does not mean that creators will gain. There are plenty of “slop” creators out there today whose business models do not rely on asserting copyright for their works. Will manufacturers of weird jigsaw puzzles care? They just want a stream of free stuff to slap on their products, and what if someone clones them? Well, that was last season’s product.

Allowing the “AI” peddlers to consume and regurgitate copyrighted works without constraint, allowing them to circumvent copyright under the pretence of “creativity”, is still harmful even if the peddlers cannot “protect” those regurgitated works. If someone consumes a Free Software application or library, waves the magic wand of “AI”, and then publishes it, the mere availability of this dubious derivative cheapens the original software, undermines its licensing by steering potential users to a “public domain” clone, reduces incentives to continue developing the original software, and thereby reduces its viability. Flooding the zone with such slop may benefit corporations wanting to circumvent Free Software licensing, but it does not benefit Free Software.

The Atomisation

There are numerous social and economic threats from the introduction of technology under the banner of “AI” that might justifiably elicit a negative reaction from a lot of people. When such reactions are articulated, the response from the cheerleaders of “AI” tends to involve labelling them as “emotional”, “touchy” and “irrational”. Of course, this is just a cynical way of avoiding any kind of constructive discussion about the impact of the accompanying harmful economic agenda, reminiscent of all of the other shifts in industrial policy that left people disadvantaged and impoverished.

In previous transitions involving something that could be described as automation, there was always the chance that those whose manual work was eliminated by the introduction of machines might still benefit from the exercise. When the production of textiles or clothing was to be automated, for instance, there might conceivably have been work to be had in applying one’s expertise to the design of the machines themselves. And there might also have been a limit to the automation, preserving opportunities for those particularly skilled in those tasks beyond the capabilities of the machines.

But today’s enthusiasm for “AI” suggests that whole categories of jobs will be eliminated, that no-one will write code any more, or write prose, draw, paint, make music, and so on. And the intention of those setting this agenda is that there will not be any possibility of somehow migrating to either the automation side of this transition, especially since coding in “AI” companies is meant to be left to the “AI”, or to more specialised forms of paid labour.

Previous transitions were handled very poorly indeed. In Britain, the phrase “on your bike” was largely the economic strategy of the Thatcher government as it gutted various industries, effectively condemning regions of the country to underdevelopment, unemployment and hardship, exacerbating inequalities within the country and divisions that persist to this day. Those too young to know or to remember might recognise some of the cultural phenomena from that earlier time because we see them again now in similarly potent forms, not that they ever really went away.

The practice of “divide and rule” is used to pit disadvantaged and marginalised groups against each other, steadily degrading other sections of society to make them poorer, weaker and too concerned with their own survival to question the general direction of society. Foreigners, immigrants, those with health problems, those not blessed with wealth, those who have otherwise experienced misfortune that blights their lives, and others are conditioned to expect less from their own lives, to feel guilty about their own situation and for needing or expecting help, or even asking for it.

Accompanying this is the promotion of “charity” and demonisation of taxation. How can anyone argue against charity, one might ask, if it is to do good in the world? One can argue against charity being a phenomenon that, instead of being a way of helping others, is used to diminish the help dispensed by society and to make such help entirely discretionary, conditional on the whims of the supposedly generous donor. Such a phenomenon serves only the wealthy who then get to choose where society’s money is spent, instead of paying their taxes and allowing broader society to make such decisions for itself.

Thatcher’s Britain was famous for the pursuit of selfishness, but our societies today face what might be called the “atomisation” of society where everyone is encouraged to pursue their own particular agenda and reward their own selfishness. Every time someone pipes up with the idiotic remark “nah, I’m good”, especially when concern is expressed for the weak or defenceless in society, it is merely another expression of the more culturally established (and derided) “I’m alright, Jack”.

But what such selfish remarks effectively signal is “more for me” from someone who already has plenty. And it obviously signals “less for everyone else” regardless of whether everyone else can manage with less or not. It is where people are encouraged to look out for their own personal interests at the expense of society, not realising that society makes their Amazon Prime deliveries possible in the first place.

Those impressed (or maybe bamboozled) by “AI” may remain unconvinced that the phenomenon might be an unsustainable and unprofitable bubble, one that consumes more investment than it can ever pay back, and that its most “disruptive” form has no genuine or necessary applications. After all, it is very popular and, for some, a nice little component in their plump investment portfolio featuring Nvidia and the other technological horsemen of the apocalypse. Why on Earth would anyone question its viability? Or in other phrasing:

“What are the millions of people using GPT for, if it’s a solution looking for a problem?”

The answer to such a question is this: it is for hollowing out forms of work that are fulfilling or professional, leaving the tedious, hard-to-automate stuff to human beings who will end up doing commoditised work, with all the “hustle” for that work and the driving down of salaries that it entails. All so that people whose livelihoods and lifestyles have been ringfenced one way or another can be entertained for a few seconds at a time with their “AI” videos and other “slop”.

The Degradation of Expectations

As neoliberalism took hold, with the privatisation of public services and the deregulation of various markets, consumerism was the shiny trinket that was used to distract from any hardship that was experienced or from the structural weaknesses being introduced. Having more choice in the shops may have been welcome, and if you couldn’t afford to buy anything, there was plenty of credit sloshing around to let you join the party. As for those public services, there were shares you could buy to join that party, too.

Decades of neoliberalism have given us consumerism as a solution for everything, seeing the replacement of basic, universal services with market-driven “consumer choice” involving a bunch of different “providers” who may or may not offer the same quality of products or levels of service that the customer would have a right to expect. Exciting as it may be for some to be able to choose between different flavours of utility provider, postal service, train or bus company, and so on, it all rewards the people with plenty of time, money and a propensity for getting bored easily to “shop around for the best deals”, leaving everyone else disadvantaged by unscrupulous businesses who, according to neoliberalism, merely occupy a place in the market appropriate for the “value to the customer” that they deliver.

Even where public services are maintained, consumerism and the attraction of new toys amongst ostensibly bored or ideologically fixated politicians can easily start to corrupt those services for the benefit of private operators whose only interest and instinct is to make money while they can. Having new toys to play with supposedly makes life more exciting for similarly bored and easily distracted members of the general population, never mind that they burden other public services with the consequences of indulging antisocial behaviour and make the lives of other individuals miserable.

When such companies have, for example, a record of perpetuating their abuse of healthcare professionals overwhelmed during a pandemic, what right do they have to make demands of anyone so that they can keep going with business as usual? But compliant politicians will continue to pander to them. After all, the religion of neoliberalism elevates those companies and their greedy founders to objects of worship.

And if those companies can damage publicly run services to the point of sufficient public dissatisfaction, those politicians can claim that the state always fails at everything and should leave such “business” to business. Naturally, those politicians do not offer to resign from their own jobs, although they are, I suppose, already doing the bidding of private enterprise, just without being willing to forgo their publicly funded salaries.

In both the public and private realms, many of us are presumably familiar with certain trends. When interacting with companies to obtain help or support with their products and services, or when attempting to navigate the public bureaucracy, one might recognise what I call the notion of “penalty laps” to compensate for cheapened and hollowed-out services, making the customer or the user spend time doing unnecessary and pointless work to prove that they deserve actual support.

One cannot simply communicate with another human being, but must instead interact with a chatbot first, which merely parrots information that is readily available and often of little help to anyone actually requiring support. Or maybe a long sequence of questions, deployed on a Web site or over the telephone, must be carefully navigated before a human can be summoned to communicate. Such inconvenience is used to herd the increasingly unhappy customer or user into other channels, framed as being more “convenient” and almost certainly in the form of an “app”.

The easily amused or distracted customer or user may think that they are getting better service for free, but they are in fact subsidising the operating costs of the institution concerned. And so, businesses and institutions continue their externalisation of operating expenses, insisting that customers or users provide their own equipment to interact with a business or service, paying the advertised costs as well as the hidden costs of acquiring a nice phone, insuring it, replacing it when those institutions decide that it is “too old”. You pay to work for them now, in case you didn’t realise.

Proof of “work” may have been the selling point of ruinous, dubious and entirely unnecessary cryptocurrency schemes, but making people continually prove that they are somehow worthy is an established trait of an exploitative society. Given that the neoliberal society will continue to eliminate decent, fulfilling work, one might expect efficient mechanisms to help people find other opportunities. But instead of making opportunities for people who seek help, such a society and its institutions have such people effectively doing useless busy work, applying for non-existent jobs in a largely fictional “market”.

And with no economic strategy or vision, but with a worldview that involves pulling the public purse strings as tightly as possible, they perversely create plenty of jobs in the bureaucracy for people to administer penalties and to deliver judgement on other people’s personal situations. Didn’t apply for enough meaningless non-jobs or unsuitable, informal, casual work? Then the entitled people in the bureaucracy, aspiring to be like their managers and political leaders and cultivating a belief that they “deserve” their opportunities (unlike those lazy people on their books), will deny the help that people seek just to turn their lives around.

Because the attitude in the neoliberal economies is that looking for a job should actually be a job. So, the branch of the state administering work-related benefits is mostly there to coerce people into looking for jobs that they aren’t suitable for, causing huge volumes of speculative applications that even the applicants know are senseless, making it harder for genuine recruitment to occur.

And then there are all the fake job adverts, either posted to cover up nepotism or corrupt practices, or to puff up some company’s image, or to give functionaries something to do. And people wonder why it is that there’s no real economic growth, allowing our glorious leaders to claim that there is no money to spend on building up society, that improving the quality of life for those who need it will just have to wait.

“We can’t afford it” is the perpetual excuse for a lack of public services and crumbling infrastructure. Can’t get to see a doctor or another healthcare specialist? With the zone flooded with “AI”, desperate people turn to desperate solutions with disastrous results. We should, of course, expect better support for the people who need it in our societies, from actual human professionals, but that requires investment and commitment. Expect to be fobbed off with technological toys that cosplay the experience of interacting with professionals instead. Unless you are wealthy or well-connected, because then only the best will do.

“AI” is just the latest escalation in the practice of “divide and rule”, facilitating the targeting of individuals to such a degree that the powerful can go beyond merely targeting minorities and smaller groups while having to indulge the majority to keep them passive and broadly supportive of such cruelty. With “AI”, the powerful can potentially pull each person apart from those closest to them, corrupting their communications and poisoning their relationships, to a degree not even achieved through the manipulation of people’s lives by predatory social media. Populists and the affluent will gladly embrace “AI” for all the amusement it offers and for as long as it lasts, unaware that there may no longer be a “safe” majority for them to hide within any more.

It is not exactly an exaggeration to consider “AI” as an existential threat requiring collective action, not least because of its ruinous power and resource consumption, coupled with the ineffective measures to tackle climate change that are formulated by those politicians always encouraging us to wait for better times. Such times will never arrive if the barons of “AI” and others who prioritise their own wealth are setting the agenda, of course.

There seem to be plenty of people who think that by enthusing about ruinous practices like “AI”, parroting the rhetoric of the oligarchs, and otherwise only caring about things that benefit them personally, they will somehow get to join that club of the wealthy, that they too will get to go on the spaceship.

To those people, I can only say this: instead of joining the club, even you will find yourself alone, having been torn from the fabric of the society you helped to obliterate, but instead of living your best life, you will be miserable and there will be nobody left to defend you.

And by the way, there is no spaceship.

The Scandisplaining of Digital Freedoms

Monday, April 6th, 2026

Recently, the Norwegian Consumer Council has been enjoying a degree of publicity for a campaign they have been running about the “enshittification” of the Internet, riffing on the overused term coined by Cory Doctorow to describe the deliberate degradation of products and services given an absence of real choice and competition in the marketplace. Naturally, international news organisations have lapped this up as another example of supposedly progressive Scandinavian social and political priorities. That plucky Norway could show the rest of the world how to deal with predatory Big Tech.

As always, the story is rather more nuanced if one is more familiar with how things typically go in Norway and, I can well imagine, the rest of Scandinavia. First of all, one can justifiably wonder where these people have been living for the past quarter century. Time and again, Free Software advocates have pointed out that a reliance on proprietary software and platforms ultimately harms individuals, institutions and societies. Over ten years ago now, I myself sought to prevent the introduction of a proprietary groupware platform in a public institution that had been my employer. By the time I was meeting the hostile and dismissive leadership of that institution, I wasn’t even working there any more. The meeting ended with the overpromoted head of the institution, flanked by his privileged and/or hectoring enforcers, insisting that “Microsoft would never do anything that wasn’t in their customers’ best interests”. In the sitcom version of events, cue the laughter track.

I ended up doing some legwork in my own time to dig into the nature of the commercial arrangements between the institution and its supplier, but all the “commercially sensitive” bits involving actual monetary amounts were redacted. My own motivation to pursue the matter was rather tempered by the fact that some of those who felt that this commercial arrangement impeded various workplace freedoms of theirs would not pursue the matter themselves. After all, they didn’t want their nice salary and other bespoke workplace arrangements in their permanent employment position endangered by any kind of actual activism. Evidently, this was the job of the guy whose temporary contract had ended. After I presented my findings, nothing further happened, and those precious freedoms were not generally upheld. But technical workarounds let various people pretend that business could proceed as usual. Their nests remained fully feathered. Screw the plebs: they would have to get used to Exchange, anyway.

I wish I could claim prescience in the whole affair, but it was pretty obvious how things would end up going. I remarked that at some point, “on-premises” Exchange would be phased out in favour of a cloud-based solution, likely to be what I tend to call Office 360. Fast-forward to recent times, and of course that is exactly what has been happening, with the institution presumably pleading poverty. Why not just make your employees customers of a foreign corporation, regardless of the wrapping of the institutional package? They have to take that deal whether they like it or not. There may have been some chatter about these new arrangements; I cannot currently consult my own archives to check, but I would have been right to say “I told you so”. Some of the supposed champions of freedom may be more concerned that with a full-on migration to the cloud, all those neat workarounds of theirs might finally become obsolete. Maybe it will finally be time for them to face up to everybody else’s reality.

Once Upon a Time

For a time, Norway had a public agency that was meant to promote Free Software and interoperability in the public sector. Lobbied by the usual proprietary software vultures, the incoming right-wing government happily shut it down in a wave of the usual austerity that such governments love to inflict on public institutions, public infrastructure, and the wider population, just as they slash taxes for the wealthy in the name of “wealth creation”. Precisely these kinds of political choices, familiar from countries with more obvious records of punitive austerity, like Britain under the likes of Margaret Thatcher, John Major, David Cameron, and the subsequent clown car parade of prime ministers in the last Conservative administration, degrade societal resilience and undermine things like digital and technological sovereignty that are now suddenly in vogue.

An international audience might be surprised that supposedly egalitarian and progressive Norway might exhibit such traits. Comparable political shifts in Sweden and Denmark, undoubtedly inspired by the cruelty-enabling culture of “personal aspiration” (selfishness, in other words) promoted by British Conservatism, have similarly gone unnoticed or have been gradually forgotten. That a bunch of people in Norway haven’t managed to follow along rather suggests that the tradition of navel-gazing is alive and surprisingly well. After all, if the gravy train kept running from your station, then what was the problem again, exactly?

This latest initiative’s open letter to the Norwegian government notes that the French public sector made concerted efforts to introduce Free Software from 2012 onwards. How quickly people forget that back in 2012, that soon-to-be-culled Norwegian public agency was trying to bring the Norwegian public sector round to undertaking similar kinds of endeavour, doing it the celebrated Scandinavian way of not treading on too many toes. Naturally, such an approach was never going to be resistant to the kind of predatory corporate interests who routinely siphon billions of crowns, pounds, euros and dollars from the public sector locally and internationally for supplying their mediocre and often blatantly deficient products and services, subjecting governments and thus taxpayers to coercive, ruinous and yet seemingly perpetual contracts.

Those previous efforts might have been envisaged as a viable means to a righteous end, but they may have ended up being regarded simply as a nice supplement for those already engaging in the kind of advocacy that makes people feel like they’re “doing something”. It was another voice in the chorus of righteousness, and with an accompanying annual conference, it was yet another venue to talk about things and congratulate each other, rather than do those things necessary to actually advance the cause. It was even held in Svalbard on one occasion, if I remember correctly, because nothing says more about a commitment to sustainability than having a bunch of people jet off to the realm of polar bears and those melting ice floes.

So, what things would have advanced the cause, then? Well, the first thing would have been to actually fund Free Software at scale and to make sure that when people tout solutions for widespread use, they are actually fit for the job. And no, gathering up a bunch of existing projects and promoting them is absolutely not the same thing. When I investigated Free Software groupware solutions, the popular wisdom was that Kolab had “solved” groupware many years earlier. It turned out that Kolab had been rewritten as version 3 and was inadequate in a number of ways: a half-finished solution. Efforts to engage with the developers proved futile. Despite pitching the software as a collaboratively developed Free Software project, all they really cared about was whether the software would support the operations of a now-liquidated Swiss company riding the privacy bandwagon and largely targeting the Jason Bourne brigade.

Such experiences made me suspect that Kolabs 1 and 2 might not have been adequate all along, either, possibly pitched as a good-enough solution to a problem that hadn’t been fully understood, all to serve various big-fish, small-pond commercial interests. Later on, I discovered Zarafa, which became Kopano, and wished I had found it earlier. It may have been a better choice, not least because the dopes insisting on Microsoft Everything would have seen the Web interface and thought that it was straight out of Redmond, unlike Microsoft’s own Web-based Outlook solution, perversely. Sadly, as a sign of our depressing times, Kopano is now becoming (or has become) a cloud-only product.

It may seem obvious, but it still needs saying: general advocacy and encouragement isn’t sufficient; people need working solutions. And experience also shows that one cannot leave it to “the market”, whatever that is in Free Software. For many years, I have used KMail to read and send e-mail. It remains surprisingly usable today, “surprisingly” because its developers decided at one point to adopt some weird middleware layer called Akonadi, entranced by the promises made by Microsoft and/or Apple to deliver pervasive “desktop search” capabilities in their own products. Whatever Microsoft and Apple actually delivered, having more likely abandoned or scaled back those promises, I am now compelled to run the command “akonadictl restart” almost every day to “unwedge” my mail client and get to see newly arrived mail.

(It also didn’t help that the developers introduced MySQL – now MariaDB – into the mix, and that in the maintenance of that product, which throughout its existence under its various names could uncharitably be described as Monty Widenius’ Flying Shitshow, someone decided to bump a version number in a minor (or actually a patch-level) release that caused the whole stack of software to refuse to access the arguably unnecessary database underpinning KMail, making my mail inaccessible. Fortunately, my case was heard within Debian, and remedies were eventually applied. Before that, I had to recompile the package with an appropriate workaround. A victory for Free Software pragmatism, but good luck to the average user suddenly staring down a potentially indefinite e-mail outage.)

Free Software groupware applications, like the overall desktop experience, stopped showing year-on-year progress in functionality some time ago. Already degraded in various ways when the developers of such technology became distracted by what the big players said they would be doing, the arrival of social media seemed to make some developers believe that the era of the mail program had ended altogether. It apparently became more important to some of those developers to add “share on Facebook” menu items to random applications than to ensure that their applications were still usefully serving their loyal users.

(Observations that technologies like ActivityPub and applications like Mastodon can supplant boring old e-mail and that they have shown considerable growth, and yet remain in a niche, rather overlook – indeed, neglect – the fundamental variety in groupware and collaborative technologies. A strategy of “ActivityPub everywhere” is like keeping the big hammer and throwing away all the other tools in the toolbox. One might suspect that it is only now gaining traction because there are people who want a similar kind of buzz to the one they get from their favourite doomscrolling services but feel bad going back to the same, increasingly disreputable dealer.)

The lesson here is that someone firstly needs to develop functional software, but then to check and double-check that functionality, as well as continuously verifying whether the software meets people’s needs. This cannot be left to random developers or to companies. The big Linux distributions never really cared enough about the average user to finish the job, merely bundling stuff and maybe hiring developers to either dabble with their projects or to make them only good enough for narrow corporate advantage. As far as Red Hat’s bottom line was concerned, all that ever really mattered was a placeholder desktop good enough to do a bit of point-and-click system administration for a bunch of file and print servers propping up a bunch of Windows desktops, or for software development most likely involving Java and targeting “the enterprise”. Such companies happily make their own employees use proprietary software and services for the kinds of tasks that the average user does, regardless of whether they might be using Free Software office and groupware suites instead.

The right approach would have been a concerted government initiative resistant to lobbying and corruption, not mere advocacy, nudging and cajoling. Genuine standards and interoperability could have been mandated and corrupted pseudo-standards like Microsoft’s fast-tracked office formats rejected. Agencies like Statistics Norway should have been taken to task for stipulating “.doc” as their chosen “interoperable” format, with those responsible sent back to finish, or maybe even begin, their education. One might have learned from experiences in other countries, like that of the public key encryption software Gpg4win in Germany, where a genuine governmental need transformed the financial viability of the GnuPG software project from one which had been chronically underfunded and practically relying on the charity of its principal developer to a thriving, viable enterprise.

Proprietary software lobbyists had criticised Norway’s earlier soft-touch efforts, claiming that the public agency concerned was subsidising uncompetitive software that was presumably the work of hippies and communists. There was one case of a public institution wanting to give money to a Free Software project in the realm of PDF generation, if I recall correctly. Upon discovering that it was Free Software, decision-makers refused to make the donation: after all, if those people were giving their code away, why pay after the fact? Such paper-pushing idiots evidently failed to understand that such windfalls may only happen once. Some of them would undoubtedly and routinely use the Norwegian word for “farmers” in the pejorative way for people they might consider ignorant, and yet farmers manage to understand that harvests do not magically occur and re-occur without cultivation and sustenance.

The right approach would also have involved mandating Free Software for publicly funded projects and for public infrastructure, as advocated by the FSFE’s Public Money Public Code campaign. Proprietary software interests would undoubtedly howl at such stipulations, claiming that their secret sauce software, supposedly written by Top Men, would be unfairly excluded from such markets. But just as even some ostensibly left-wing politicians have forgotten, “markets” only exist at the indulgence of governments and regulators, and they only operate in the public interest if properly framed and regulated. Don’t want to give your customers the freedom to maintain the code they are paying for? Feel free to seek opportunities elsewhere, then. Cushy lock-in deals for the locally well-connected should have gone the way of Norsk Data when that company fell to Earth.

Planet Norway

Attitudes to societal threats seem to be remarkably relaxed in our supposedly enlightened democracies and their institutions. The casual, pervasive use of predatory social media platforms continues, propped up by state institutions claiming that they need to have a presence in all the different channels, but where one suspects that a few managers and their appointees just want to play with some toys and puff up their public profiles. Instead of leveraging the resources of the state and providing reliable channels of communication, such bodies post announcements, updates and nonsense via foreign-owned hate speech venues. Norwegian political party leaders even decided at one point that they had to promote themselves on Snapchat, egged on by one of the national broadcasters. Now, we see the unquestioning adoption and promotion of “AI” and chatbots by institutions that risk being obliterated by such technologies.

At the individual level, concerns about “screen time” and the use of tablets and other devices in places like primary schools have been aired and may well be justified, given the likely developmental impact on children of such devices, but it all comes across as pearl-clutching when one suspects that the lifestyle of the vocal parents probably revolves around their phone, “apps”, streaming services, and rather too much screen time of their own. And some of those concerned about screen time would probably drop their objections if a study conveniently came along to assuage their worries. It is just like those very Scandinavian traits of stuffing tobacco products into one’s mouth or spending time at the tanning studio, neither of which are actually healthy. Over the years, various pseudo-academic figures and findings have occasionally floated up into public prominence, insisting that such things are perfectly fine. Why wouldn’t that happen with technology? The companies involved could certainly afford to pay for a bit of fake research and a few willing advocates.

One gets the impression that many of the different factions that might coalesce around a campaign about “enshittification” aren’t really trying to achieve systemic change: they merely want to negotiate a better deal. Such people knew what they were getting into by using free-of-charge services and often explicitly rejecting genuinely free alternatives which cost only modest sums to run. Such people were also aware that their data might wander off into the cloud and away to places where it would be mined and exploited for all it is worth, but caring about it was just too much bother. In institutions, all it takes is for Microsoft to “pinky swear” that it complies with data protection regulations. Institutional capabilities are then run down and alternatives abandoned, just so the toys can be unpacked, sending non-trivial sums overseas instead of cultivating knowledge, opportunities and wealth locally.

One wonders how seriously people really want to take such matters, or whether they just want a hobby and to feel good about a bit of casual activism. I am reminded of the climate litigation brought against the Norwegian state for its continuing policy of fossil fuel extraction, noting that climate change presents an existential threat reaching far beyond the confines of the nation, affecting the population of the entire planet, but where the country’s constitution at least worries about the state of the nation for future generations. The outcome – a defeat for the litigants – might not merely be described as positioning Norway as having “first world problems”: that would be business as usual. Instead, it might reasonably be described as situating Norway on a planet of its very own. One where the mere accumulation of money protects and even “benefits future generations”, evidently.

Hobby activism is typical of places where the stakes remain relatively low for those doing the campaigning, but on Planet Norway it arguably reaches another level, where hardship along any given dimension is often perceived to be a problem that only foreigners in poorer countries experience (or poorer planets, maybe). Or it is marginalised and framed as something that only affects a vanishingly small number of people, conveniently aided by policies and attitudes that seek to hide those who are struggling and blame them for their own predicament. But a reckoning is surely overdue even in matters of preference as opposed to need. After all, what good is it to advocate that children learn to code if nothing is done about the way “AI” is devastating Free Software and undermining paid work?

Unlike outsider perceptions of the money flowing freely in Norway thanks to the oil fund, the purse strings are generally not loosened for those whose professions have been gutted. And for younger people, they are more likely to be told to take shifts at Ikea than get the help they need. Yes, this is actually a thing. It also explains why on one local recruitment site, when filling out one’s employment history, the default value in the employer field reads (or read when I last checked) “Ikea Furuset”: one of the two full-scale Ikea stores serving the Oslo area. Not that there’s anything wrong with working at Ikea, but I doubt that the kind of aspirational parents hoping to give their little darlings a head start in the world, funding their higher education and other ambitions, would see it as a fitting venue for their offspring’s many talents.

Yet Another Elephant in the Room

This latest campaign recommends Free Software and open protocols in public procurement, which is what previous efforts pretty much did, too. The accompanying report even suggests funding alternatives, but then delegates this to discretionary funds and foundations, conveniently avoiding the structural issues. But the very reason why “dominant big tech companies have deep pockets” is through a perversion of economic incentives. Firstly, they have cultivated an “expectation of zero” where individual and institutional customers expect software and services to cost nothing. Thus, any investment in software is regarded as unnecessary because those nice corporations are giving away shiny free stuff. They also front-run various standards to make any kind of competition ruinously expensive to pursue.

And yet software cannot be developed without expenditure, and certainly not with the latest instrument being used to sustain those cultivated consumer expectations. “AI”, which is hyped to make it seem like software can be whipped up at a moment’s notice at no cost, relies on industrial scale plagiarism, colossal data centre and hardware investments, and ruinous levels of power consumption. What has funded these scorched earth tactics is a corrosive business model that inflicts highly lucrative but consequence-free, unpoliced advertising channels on billions of people. Developers at predominantly American technology companies aren’t expected to work for free, after all. We are apparently meant to feel sorry for them. Some of them have to pay gentrification-level prices for their homes in places gentrified by themselves and their colleagues. Their bosses expect huge bonuses, their own yacht, island, spaceship…

Claiming the juvenile right of unrestricted free speech to drive engagement, Big Tech has largely allowed unregulated commerce to proceed, undermining traditional safeguards, endangering individuals, and threatening and even shuttering viable, responsible businesses. Any efforts that ignore such structural issues will fail to find the money required to make a difference. I, or my Web publisher, may be held to account for what I write, but anything goes on the predatory Big Tech platforms, whether it is the puerile variant of “free speech” cultivated by the increasingly fragile American dream, or whether it is fraudulent or outright illegal advertising promoting dishonest and criminal enterprises. Allowing such business to continue as usual simply enriches these predators while impoverishing ourselves.

The operators of those platforms are getting a free ride at a severe cost to us and our societies. It may not seem like it to the random punter getting “a great deal”, but we all pay for that deal in the end. And even little Norway has its own commerce platforms that look the other way, especially where anything related to the property bubble is concerned. Why not advertise your rental property using the fancy sales prospectus from a few years ago when you or a previous owner bought the property from the developers? Oh, “caveat emptor”, of course. How about advertising a property with a non-existent address? How much money is laundered even in little Norway, or is that kind of thing only done by foreign people in less enlightened countries? Maybe even the same people whose oil is “dirty” while Norway’s is “clean”.

Anything short of changing the flawed terms under which those companies operate, which one can barely believe are legal in the first place, is nothing more than consumerist tinkering. Naturally, there will be howling from entitled consumers, happy to have random people scurrying around with their urgent shopping deliveries, just as established, essential services like postal mail risk being degraded to the point of near uselessness or even eliminated altogether (which is something that might blow the minds of media people in countries like the UK where postal services still largely hold up and where deliveries still happen six days a week). But society cannot pander to people’s elevated levels of personal entitlement forever, despite the best efforts of populist politicians living in their own bubble of affluence.

I suppose I could be accused of being a simple “outsider” who still doesn’t understand Norway after all these years. Recently, I read a ridiculous piece claiming that Norwegian culture and society does not tend to focus on personalities. That would be news to readers of gossip magazines, newspapers, and even the finance magazine I would read in a medical specialist’s waiting room, keeping us all up to date on which members of the Norwegian financial and legal elites were suing which other members of those elites, all while name-dropping and generally cultivating individuals as movers and shakers.

But instead of cherry picking parts of the nation’s broader, pan-Scandinavian heritage – Janteloven in that particular case – and perpetuating all the other familiar tropes, so often featured in selective, favourable cultural projection delivered through credulous or lazy journalistic coverage, maybe lessons could be learned from one of Hans Christian Andersen’s more famous tales. It really doesn’t take very much to point out that notions of Scandinavian preparedness in the face of digital exploitation and its accompanying threats are somewhat overstated. Anyone can take a closer look at the emperor’s reputation as a fully clothed, well-tailored individual, if they actually care to.

I just wouldn’t recommend anyone holding their breath for too long in anticipation of decisive, credible Scandinavian action that might show the wider world the way forward. After that sharp intake of breath witnessing the emperor in the buff, please exhale or you might eventually expire. Just like in other countries, there would have to be a cultural shift away from shopping for “big brand” software, wheeling in the consultants, having retreats and “away days” learning about the next set of goodies on the proprietary software treadmill, and treating predatory social media platforms as merely a harmless guilty pleasure. Maybe local commerce wouldn’t have to be mediated by “apps” and trillion dollar corporations, either. And there would have to be more than tame, hobbyist lobbying and performative activism: quite the challenge when rocking the boat in countries like Norway is simply never done.

But it all made for a good story about Scandinavia leading the way, as usual, so “job done”, I guess.

Adding Multicore Support for a MIPS Board to the Fiasco Microkernel

Monday, December 1st, 2025

I thought that before I forget, I should try and write down some of the different things I have tackled recently with regard to L4Re and the different frameworks and libraries I have been developing. There have been many pauses in this work this year, and it just hasn’t been a priority to record various achievements, particularly since the effort is generally ongoing.

Multicore MIPS

As established members of my readership may recall, my entry point into L4Re was initially focused on establishing support for MIPS-based single-board computers and devices. Despite developing support for many of the peripherals in the Ingenic JZ4780 used by the MIPS Creator CI20, one thing I never did was to enable dual-core processing in L4Re and the Fiasco microkernel in particular. I may have thought that such support was already present, but, well, that was an unreasonable belief.

This year, I was indulged with a board based on a later SoC that also has two processing cores. It supposedly has two threads per core as well, but I don’t actually believe it, based on the way the unit in the chip responsible for core management behaves. In the almost illicit documentation I have, there is also no real mention of scheduling threads in hardware, so I am left to guess whether these threads behave like cores. And whatever hardware interface is provided does not seem to be the MIPS MT implementation, either, since it is reported as not being present.

These days, one has to rely on archived copies of MIPS documentation after MIPS Technologies threw all of that overboard to bet the farm on RISC-V, which is admittedly a reasonable bet. Former owner Imagination Technologies disowned the architecture and purged any documentation they might have had from their Web site, although I think that any links to toolchains might still work, perhaps prudently, to uphold any Free Software licence obligations that remain.

(Weirdly, MIPS Technologies recently got themselves acquired by GlobalFoundries, meaning that they are now owned by the part of AMD that was sold off when AMD decided it was too costly to fabricate their own chips, becoming “fabless” instead, just as MIPS had always been.)

I also wonder what the benefit of simultaneous multithreading (SMT) is on MIPS over plain old symmetric multiprocessing (SMP) using multiple cores. Conceptually, SMT is meant to use fewer resources than SMP by sharing common resources between execution units, eliminating much of the costly functionality needed by dedicated cores. But MIPS is different from other architectures in that it does not, for example, maintain page tables in hardware for processes/tasks, which are the sort of costly things that one might want to see eliminated.

Instead, MIPS employs a translation lookaside buffer (TLB) to handle virtual memory mappings, and each “virtual processing element” (VPE) is apparently provided with an independent TLB in the MIPS MT implementation. It seems we can think about a VPE more or less as a conventional processor or core, given the general description of it. However, each “thread context” (TC) within a VPE may share a TLB with other TCs, although they will have their own address space identifier (ASID), meaning that their own memory mappings may differ from those of other threads in the same process or task. Given that the ASID would typically be used to define independent address spaces at a process or task level, this seems like an odd decision. One can be forgiven for being confused!

In any case, I needed to familiarise myself with the documentation and with work previously done in the Linux kernel. That kernel work, it turned out, was a work of bravery seemingly based on the incomplete knowledge that we still rely on today. Unfortunately, it seems that certain details are incorrect, these pertaining to the interrupt request management and the arrangement of the appropriate hardware registers. The Linux support appeared to use a bank of registers that, instead of applying to interrupts directed at the second core in particular, seems to be some kind of global control for interrupts on both cores. Sadly, the contributor of this code is no longer communicative, and I just hope that he is well and finding satisfaction in whatever he does now.

Into Fiasco

The very first thing I had to do in Fiasco, however, was to get it working on this SoC. This was rather frustrating, and in the end, the problem was in some cache initialisation code. Because of course it was. Then, my efforts within Fiasco to support multiple cores were informed by existing support in the microkernel for initialising additional processors. Reacquainting myself with the kernel bootstrap process, I found that the architecture-specific entry point is the bootstrap_arch method, provided by the kernel thread implementation. This invokes the boot_all_secondary_cpus method, provided by the platform control abstraction.

Although there is support for the MIPS Concurrency Manager (CM) present in Fiasco, it is not useful for this particular SoC, so a certain amount of dedicated support code was going to be required. Indeed, the existing mechanisms were defined to use the MIPS CM to initialise these “secondary CPUs” in the generic platform control implementation. Fortunately, the C++ framework employed by Fiasco permits the overriding of operations, and I was able to provide, fairly cleanly, my own board/product-specific implementation using the appropriate functionality, which I would write myself.
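
To give a rough idea of the shape of such an override, here is a minimal sketch, with all names being my own hypothetical stand-ins rather than actual Fiasco identifiers, and with Fiasco’s own overriding machinery differing in its details. The Ccu helper is fleshed out below.

// A minimal sketch, not actual Fiasco code: the generic, CM-based
// start-up of secondary CPUs is replaced by one driving the SoC's
// Core Control Unit. All names here are hypothetical stand-ins.

#include <cstdint>

namespace Ccu
{
  void set_entry_point(unsigned core, uintptr_t address); // hypothetical
  void start_core(unsigned core);                         // hypothetical
}

extern "C" void _secondary_entry(); // common entry routine for started cores

class Platform_control_board
{
public:
  static constexpr unsigned Num_cores = 2;

  // Stands in for the board-specific boot_all_secondary_cpus override.
  static void boot_all_secondary_cpus()
  {
    for (unsigned core = 1; core < Num_cores; ++core)
      {
        Ccu::set_entry_point(core, reinterpret_cast<uintptr_t>(&_secondary_entry));
        Ccu::start_core(core);
      }
  }
};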

That additional functionality was support for the SoC’s own Core Control Unit (CCU), which is something that appears to be thankfully much simpler than the MIPS CM. The CCU provides facilities to start cores, to monitor and control interrupts, and to support inter-core communication using mailboxes. Of particular interest was the ability to start cores, to permit the cores to communicate via the mailboxes, and for such communication to generate mailbox interrupts. For the most part, the abstraction supporting the CCU is fairly simple, however.
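
To make that concrete, the Ccu helper used above might look roughly like the following, a hedged sketch in which the base address and register offsets are invented placeholders standing in for the values given in the (corrected) hardware documentation:

// A hedged sketch of a simple CCU abstraction. The base address and
// register offsets are invented placeholders, not the SoC's real values.

#include <cstdint>

namespace Ccu
{
  constexpr uintptr_t Base = 0x13000000; // placeholder base address

  inline volatile uint32_t *reg(uintptr_t offset)
  { return reinterpret_cast<volatile uint32_t *>(Base + offset); }

  constexpr uintptr_t Reset_control = 0x00; // per-core reset bits (placeholder)
  constexpr uintptr_t Entry_base    = 0x10; // per-core entry addresses (placeholder)
  constexpr uintptr_t Mailbox_base  = 0x20; // one mailbox register per core (placeholder)

  // Tell a core where to begin executing once released.
  inline void set_entry_point(unsigned core, uintptr_t address)
  { *reg(Entry_base + 4 * core) = static_cast<uint32_t>(address); }

  // Release a core from reset so that it starts at its entry point.
  inline void start_core(unsigned core)
  {
    uint32_t control = *reg(Reset_control);
    *reg(Reset_control) = control & ~(1u << core);
  }

  // Write to a core's mailbox; the write also raises a mailbox
  // interrupt on the target core, supporting inter-core communication.
  inline void post_mailbox(unsigned core, uint32_t value)
  { *reg(Mailbox_base + 4 * core) = value; }
}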

Interrupt Handling

Perhaps the most challenging part of the exercise was that of describing the interrupt handling configuration. Although I was familiar with the code that connects the processor-level interrupts to the different peripheral interrupt handlers, it was not clear to me how these handlers would be allocated for, and assigned to, each processor or core. Indeed, it was not even clear how the interrupts behaved in the hardware.

I suppose I could have a long rant about the hardware documentation. Having already needed to dig up an address for the CCU, I noticed that the addresses for the interrupt controller in the manual for the chip were simply fictitious, very possibly originating in a copy-paste operation, given that the register banks conflicted with the clock and power management unit. More digging eventually revealed the actual location of these banks. One helpful aspect of the manual, however, was the information implicitly given about the spacing of these register banks, even though I think the number of banks is also a fiction, bound up with the issue of how many cores and/or threads the chip actually has.

The way that the chip appears to work is that each core can enable and mask (ignore) individual interrupts. It does so using its own coprocessor registers at the MIPS architecture level, but to identify and control the individual interrupt sources, it uses registers provided in the appropriate bank by the interrupt controller. In a single-core processor, there is only one set of registers, and the single core can switch them on and off for its own benefit. But with multiple cores, each core can apparently choose to receive or ignore interrupts, leaving the others to decide for themselves. And if we ignore the top-level control, we might even allow one core to set the preferences for itself and other cores, since it can access all of the register banks for the different cores in the interrupt controller.

Now, the fundamental interrupt handling for this family of chips has been consistent throughout my exposure to L4Re and Fiasco, with a specialisation of Irq_chip_gen providing access to the interrupt controller registers. Since this abstraction for the interrupt controller unit works and is still applicable, instead of trying to make it juggle multiple register banks, I decided to wrap it in another abstraction that would allow interrupts to be associated with a specific core or with all cores, replicating the same fundamental interface, and that would redirect operations to the individual core-specific units according to the association made for a given interrupt.
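
The following sketch conveys the idea, with Irq_chip_core standing in for the existing Irq_chip_gen specialisation managing one core’s register bank, and all other names being illustrative: a front-end records whether an interrupt is associated with one core or with all cores and forwards operations to the appropriate core-specific unit or units.

// A hedged sketch of the wrapping idea described above.

#include <array>
#include <cstdint>

constexpr unsigned Num_cores = 2;
constexpr unsigned Num_irqs  = 64;
constexpr uint8_t  All_cores = 0xff; // sentinel: deliver to every core

struct Irq_chip_core // stand-in for the existing Irq_chip_gen specialisation
{
  void unmask(unsigned irq) { (void)irq; /* set the enable bit in this core's bank */ }
  void mask(unsigned irq)   { (void)irq; /* clear the enable bit in this core's bank */ }
};

class Irq_chip_multi
{
  std::array<Irq_chip_core, Num_cores> _units; // one unit per register bank
  std::array<uint8_t, Num_irqs> _assoc{};      // core index or All_cores

public:
  // Associate an interrupt with a specific core, or with all cores.
  void associate(unsigned irq, uint8_t core) { _assoc[irq] = core; }

  // Replicate the underlying interface, redirecting to the right unit(s).
  void unmask(unsigned irq)
  {
    if (_assoc[irq] == All_cores)
      for (auto &unit : _units) unit.unmask(irq);
    else
      _units[_assoc[irq]].unmask(irq);
  }

  void mask(unsigned irq)
  {
    if (_assoc[irq] == All_cores)
      for (auto &unit : _units) unit.mask(irq);
    else
      _units[_assoc[irq]].mask(irq);
  }
};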

IPIs

In my exploration of the interrupt handling code, I repeatedly encountered the acronym IPI, which turns out to mean inter-processor interrupt, where one core may raise an interrupt in another core. Although it was apparent that the CCU’s mailbox facilities would be the vehicle to support such interrupts, it was not immediately obvious how these interrupts might be used within Fiasco. In fact, it was while trying to operate the kernel debugger, JDB, that I discovered one of their applications: the debugger needs to halt secondary cores or processors to be able to safely inspect the system.

Thus, I attempted to provide an IPI implementation by following the general pattern of implementations for other platforms, such as the RISC-V support within Fiasco, relying on that architecture’s closer heritage to MIPS than that of, say, ARM or x86(-64), and generally hoping that the supporting code would provide more helpful clues. Sadly, there is not much formal guidance on such matters in the form of documentation or other explanatory materials for Fiasco, at least as far as I have discovered.

One aspect of my implementation that I suspect could be improved involves the representation of the different IPI conditions, along with usage of the atomic_reset operation in the generic MIPS IPI support. This employs an array to hold the different interrupt conditions occurring for a core, rather like a big status register occupying numerous words instead of bits, with atomic_reset obtaining the appropriate interrupt status and clearing any signalled condition.

Given that the CCU is able to maintain core-specific state in the mailbox registers, one might envisage a core setting bits in such registers to signal IPI conditions, with the interrupted core clearing these individually, doing so safely by using the locking mechanism provided by the CCU. However, since merely augmenting the existing IPI status management with the interrupt delivery and signalling mechanisms of the CCU seemed to result in a functioning system, I did not feel particularly motivated to continue.
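
What I actually ended up with resembles the following hedged sketch, in which the request values and helper names are illustrative rather than Fiasco’s own: the sender sets a bit for the requested operation and pokes the target’s mailbox, and the interrupted core atomically obtains and clears its pending set, mirroring the role of atomic_reset in the generic MIPS support.

// A hedged sketch of mailbox-signalled IPI status management.
// Request values and helper names are illustrative; post_mailbox is
// the CCU operation sketched earlier.

#include <atomic>
#include <cstdint>

constexpr unsigned Num_cores = 2;

enum Ipi_request : uint32_t
{
  Global_request = 1u << 0, // e.g. a scheduling or remote-call request
  Debug_halt     = 1u << 1, // e.g. JDB halting the secondary cores
};

namespace Ccu { void post_mailbox(unsigned core, uint32_t value); }

std::atomic<uint32_t> ipi_pending[Num_cores];

// Sender: record the condition, then raise the target's mailbox interrupt.
void ipi_send(unsigned core, Ipi_request request)
{
  ipi_pending[core].fetch_or(request, std::memory_order_release);
  Ccu::post_mailbox(core, 1);
}

// Receiver: obtain and clear every pending condition in a single step.
uint32_t ipi_receive(unsigned core)
{
  return ipi_pending[core].exchange(0, std::memory_order_acquire);
}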

Bad Bits

It should be mentioned that throughout all of this effort, I encountered rather severe problems with the UART on the development board concerned. These would manifest themselves as garbled output from the board along with a lack of responsiveness or faulty interpretation of input. Characters in the usual ASCII range would be punctuated and eventually overrun by “special” characters, some non-printable, and efforts to continue the session would typically need to be abandoned. This obviously did not help the troubleshooting of either my boot payloads or the kernel debugger.

Some analysis of the problem was required, and in an attempt to understand why certain wholly normal characters would be transformed to abnormal ones, I wrote out some character values in their transmitted forms, also incorporating extra elements of the transmission:

0x0d is 00001101 - repeated: 0000110100001101 - stop bit exposed: 000011011000011011000011011000011011
0xc3 is 11000011                                                         -------- -------- --------

In trying to reproduce the observed character values, I looked for ways in which the bitstream would be misinterpreted and yield these erroneous characters. This exercise would repeatedly succeed, suggesting some kind of slippage in the acquisition of characters. It turned out that this was an anticipated problem, and the omission of appropriate level shifters for the UART pins meant that the signalling was effectively unreliable. A fix was introduced on subsequent board revisions, and in fact, the board was more generally refined and completed, since I had been using what was effectively a prototype.
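
For the curious, the misinterpretation can be reproduced with a few lines of code following the construction shown above (eight data bits, most significant bit first, followed by a stop bit, with start bits left out of the model): reading the repeated 0x0d frame at a bit offset of seven yields the mysterious 0xc3.

// Reproducing the slippage analysis: repeat the 0x0d frame (eight data
// bits plus a stop bit, as in the alignment shown above) and read off a
// byte at each possible bit offset. Offset 0 gives 0x0d; offset 7, 0xc3.

#include <cstdio>
#include <string>

int main()
{
  const std::string frame = "000011011"; // 0x0d with stop bit appended
  std::string stream;
  for (int i = 0; i < 5; ++i)
    stream += frame; // a run of repeated characters on the wire

  for (int offset = 0; offset < 9; ++offset)
    {
      unsigned value = 0;
      for (int bit = 0; bit < 8; ++bit)
        value = (value << 1) | static_cast<unsigned>(stream[offset + bit] - '0');
      std::printf("offset %d -> 0x%02x\n", offset, value);
    }
  return 0;
}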

General Remarks

A degree of familiarisation was required with the mechanisms in Fiasco for some common activities. Certain constructs exist such as per-CPU allocators that need to be used where resources are to be allocated for, and assigned to, individual CPUs. These allocator constructs provide a convenience operation allowing the “current” CPU, being the one running the code, to obtain its own resource. Although the details now elude me, there were some frustrations in deploying these constructs under certain circumstances, but I seemed to figure something out that got me to where I wanted to be in the end.
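
As a rough illustration of the pattern, such a per-CPU container might be imagined as follows; Fiasco’s actual constructs differ in their details, and the names here are mine.

// A hedged sketch of the per-CPU resource pattern described: one slot
// per CPU, plus a convenience accessor for the CPU running the code.

#include <array>

constexpr unsigned Max_cpus = 2;

// Stand-in for the kernel's own notion of the current CPU's identity,
// which would really be derived from hardware or per-CPU kernel state.
unsigned current_cpu() { return 0; }

template <typename T>
class Per_cpu
{
  std::array<T, Max_cpus> _slots{};

public:
  T &cpu(unsigned n) { return _slots[n]; }       // a specific CPU's resource
  T &current() { return _slots[current_cpu()]; } // the calling CPU's resource
};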

I also wanted to avoid changing existing Fiasco code, instead augmenting it with the necessary specialisations for this hardware. Apart from the existing cache initialisation routines, this was largely successful. In principle, I now have some largely clean patches that I could potentially submit upstream, especially since they have now stopped insisting on contributor licence agreements. I do wonder if they are still interested in this family of SoCs, however.

I might also stick my neck out at this point and note that if anyone is interested in development boards with MIPS-based processors and a degree of compatibility with Raspberry Pi HATs, they might get in contact with me, so that their interest can be recorded and the likelihood increased of such boards being produced for purchase.

Common Threads of Computer Company History

Tuesday, August 5th, 2025

When descending into the vaults of computing history, as I have found myself doing in recent years, and with the volume of historical material now available for online perusal, it has finally become possible to re-evaluate some of the mythology cultivated by certain technological communities, some of it dating from around the time when such history was still being played out. History, it is said, is written by the winners, and this is true to a large extent. How often have we seen Apple being given the credit for many technological developments that were actually pioneered elsewhere?

But the losers, if they may be considered that, also have their own narratives about the failure of their own favourites. In “A Tall Tale of Denied Glory”, I explored some myths about Commodore’s Amiga Unix workstation and how it was claimed that this supposedly revolutionary product was destined for success, cheered on by the big names in workstation computing, only to be defeated by Commodore’s own management. The story turned out to be far more complicated than that, but it illustrates that in an earlier age, when there was more limited awareness of an industry with broader horizons than many could contemplate, everyone could gather round a simplistic tale and vent their frustration at the outcome.

Although different technological communities, typically aligned with certain manufacturers, did interact with each other in earlier eras, even if the interactions mostly focused on advocacy and argument about who had chosen the best system, there was always the chance of learning something from each other. However, few people probably had the opportunity to immerse themselves in the culture and folklore of many such communities at once. Today, we have the luxury of going back and learning about what we might have missed, reading people’s views, and even watching television programmes and videos made about the systems and platforms we just didn’t care for at the time.

It was actually while searching for something else, as most great discoveries seem to happen, that I encountered some more mentions of the Amiga Unix rumours, these being relatively unremarkable in their familiarity, although some of them were qualified by a claim by the person airing these rumours (for the nth time) that they had, in fact, worked for Sun. Of course, they could have been the mailboy for all I know, and my threshold for authority in potential source material for this matter is now set so high that it would probably have to be Scott McNealy for me to go along with these fanciful claims. However, a respondent claimed that a notorious video documenting the final days of Commodore covered the matter.

I will not link to this video for a number of reasons, the most trivial of which is that it just drags on for far too long. And, of course, one thing it does not substantially cover is the matter under discussion. A single screen of text just parrots the claims seen elsewhere about Sun planning to “OEM” the Amiga 3000UX without providing any additional context or verification. Maybe the most interesting thing for me was to see that Commodore were using Apollo workstations running the Mentor Graphics CAD suite, but then so were many other companies at one point in time or other.

In the video, we are confronted with the demise of a company, the accompanying desolation, camaraderie under adversity, and plenty of negative, angry, aggressive emotion coupled with regressive attitudes that cannot simply be explained away or excused, try as some commentators might. I found myself exploring yet another rabbit hole with a few amusing anecdotes and a glimpse into an era for which many people now have considerable nostalgia, but one that yielded few new insights.

Now, many of us may have been in similar workplace situations ourselves: hopeless, perhaps even deluded, management; a failing company shedding its workforce; the closure of the business altogether. Often, those involved may have sustained a belief in the merits of the enterprise and in its products and people, usually out of the necessity to keep going, whether or not the management might have bungled the company’s strategy and led it down a potentially irreversible path towards failure.

Such beliefs in the company may have been forged in earlier, more successful times, as a company grows and its products are favoured over those of the competition. A belief that one is offering something better than the competition can be highly motivating. Uncalibrated against the changing situation, however, it can lead to complacency and the experience of helplessly watching as the competition recover and recapture the market. Trapped in the moment, the sequence of events leading to such eventualities can be hard to unravel, and objectivity is usually left as a matter for future observers.

Thus, the belief often emerges that particular companies faced unique challenges, particularly by the adherents of those companies, simply because everything was so overwhelming and inexplicable when it all happened, like a perfect storm making an unexpected landfall. But, being aware of what various companies experienced, and in peeking over the fence or around the curtain at what yet another company may have experienced, it turns out that the stories of many of these companies all have some familiar, common themes. This should hardly surprise us: all of these companies will have operated largely within the same markets and faced common challenges in doing so.

A Tale of Two Companies

The successful microcomputer vendors of the 1980s, which were mostly those that actually survived the decade, all had to transition from one product generation to the next. Acorn, Apple and Commodore all managed to do so, moving up from 8-bit systems to more sophisticated systems using 32-bit architectures. But these transitions only got them so far, both in terms of hardware capabilities and the general sophistication of their systems, and by the early 1990s, another update to their technological platforms was due.

Acorn had created the ARM processor architecture, and this had mostly kept the company competitive in terms of hardware performance in its traditional markets. But it had chosen a compromised software platform, RISC OS, on which to base its Archimedes systems. It had also introduced a couple of Unix workstation products, themselves based on the Archimedes hardware, but these were trailing the pace in a much more competitive market. Acorn needed the newly independent ARM company to make faster, more capable chips, or it would need to embrace other processor architectures. Without such a boost forthcoming, it dropped Unix and sought to expand in “longshot” markets like set-top boxes for video-on-demand and network computing.

Commodore had a somewhat easier time of it, at least as far as processors were concerned, riding on the back of what Motorola had to offer, which had been good enough during much of the 1980s. Like Acorn, Commodore made their own graphics chips and had enjoyed a degree of technical superiority over mainstream products as a result, but as Acorn had experienced, the industry had started to catch up, leading to a scramble to either deliver something better or to go with the mainstream. Unlike Acorn, Commodore did do a certain amount of business actually going with the mainstream and selling IBM-compatible PCs, although the increasing commoditisation of that business led the company to disengage and to focus on its own technologies.

Commodore had its own distractions, too. While Acorn pursued set-top boxes for high-bandwidth video-on-demand and interactive applications on metropolitan area networks, Commodore tried to leverage its own portfolio rather more directly, trading on its strengths in gaming and multimedia, hoping to be the one who might unite these things coherently and lucratively. In the late 1980s and early 1990s, Japanese games console manufacturers had embraced the Compact Disc format, but NEC’s PC Engine CD-ROM² and Sega’s Mega-CD largely bolted CD technology onto existing consoles. Philips and Sony, particularly the former, had avoided direct competition with games consoles, pitching their CD-i technology at the rather more sedate “edutainment” market instead.

With CDTV, Commodore attempted to enter the same market as Philips, downplaying the device’s Amiga 500 foundations and fast-tracking the product to market, only belatedly offering the missing CD-ROM drive option for its best-selling Amiga 500 system that would allow existing customers to largely recreate the same configuration themselves. Both CD-i and CDTV were considered failures, but Commodore wouldn’t let go, eventually following up with one of the company’s final products, the CD32, aiming more directly at the console market. Although a relative success against the lacklustre competition, it came too late to save the company, which had entered a steep decline only to be driven to bankruptcy by a patent aggressor.

Whether plucky little Commodore would have made a comeback without financial headwinds and patent industry predators is another matter. Early multimedia consoles had unconvincing video playback capabilities without full-motion video hardware add-ons, but systems like the 3DO Interactive Multiplayer sought to strengthen the core graphical and gaming capabilities of such products, introducing hardware-accelerated 3D graphics and high-quality audio. Within only a year or so of the CD32’s launch, more complete systems such as the Sega Saturn and, crucially, the Sony PlayStation would be available. Commodore’s game may well have been over, anyway.

Back in Cambridge, a few months after Commodore’s demise, Acorn entered into a collaboration with an array of other local technology, infrastructure and media companies to deliver network services offering “interactive television”, video-on-demand, and many of the amenities (shopping, education, collaboration) we take for granted on the Internet today, including access to the Web of that era. Although Acorn’s core technologies were amenable to such applications, they did need strengthening in some respects: as with the multimedia consoles, video decoding hardware was a prerequisite for Acorn’s set-top boxes, and although Acorn had developed its own competent software-based video decoding technology, the market was coalescing around the MPEG standard. Fortunately for Acorn, MPEG decoder hardware was gradually becoming a commodity.

Despite this interactive services trial being somewhat informative about the application of the technologies involved, the video-on-demand boom fizzled out, perhaps demonstrating to Acorn once again that deploying fancy technologies in a relatively affluent region of the country for motivated, well-served early adopters generally does not translate into broader market adoption, particularly when that adoption depends on entrenched utility providers having to break open their corporate wallets and spend millions, if not billions, on infrastructure investments that would not repay themselves for years or even decades. The experience forced Acorn to refocus its efforts on the emerging network computer trend, sending the company down another path that led mostly nowhere.

Such distractions arguably served both companies poorly, causing them to neglect their core product lines and to either ignore or downplay the increasing uncompetitiveness of those products. Commodore’s efforts to go upmarket and enter the potentially lucrative Unix market had begun too late and proceeded too slowly, starting with work around Motorola 68020-based systems that could have opened a small window of opportunity at the low end of the market had it been done rather earlier. Unix on the 68000 family was a tried and tested affair, delivered by numerous companies and supplied by established Unix porting houses. All Commodore needed to do was to bring its legendary differentiation to the table.

Indeed, Acorn’s one-time stablemate, Torch Computers, pioneered low-end graphical Unix computing around the earlier 68010 processor with its Triple X workstation, seeking to upgrade to the 68020 with its Quad X workstation, but it had been hampered by a general lack of financing and an owner increasingly unwilling to continue such financing. Coincidentally, at more or less the same time that the assets of Torch, whose 68030-based workstation had been under development, were finally being dispersed, Commodore demonstrated the 68030-based Amiga 3000 ahead of its impending release. By the time its Unix variant arrived, Commodore needed to bring far more to the table than it could reasonably offer.

Acorn themselves also struggled in their own moves upmarket. While the ARM had arrived with a reputation for superior performance against machines costing far more, the march of progress had eroded that lead. The designers of the ARM had made a virtue of a processor being able to make efficient use of its memory bandwidth, as opposed to letting the memory sit around idle as the processor digested each instruction. This facilitated cheaper systems where, in line with the design of Acorn’s 8-bit computers, the processor would take on numerous roles within the system, including that of performing data transfers on behalf of hardware peripherals. It did this quite effectively, obviating the need for costly interfacing circuitry that would let hardware peripherals access the memory directly themselves.

But for more powerful systems, the architectural constraints can be rather different. A processor that is supposedly inefficient in its dealings with memory may at least benefit from peripherals directly accessing memory independently, raising the general utilisation of the memory in the system. And even a processor that is highly effective at keeping itself busy and highly efficient at utilising the memory might be better off untroubled by interrupts from hardware devices needing it to do work for them. There is also the matter of how closely coupled the processor and memory should be. When 8-bit processors ran at around the same speed as their memory devices, it made sense to maximise the use of that memory, but as processors increased in speed and memory struggled to keep pace, it made sense to decouple the two.

Other RISC processors such as those from MIPS arrived on the market making deliberate use of fast memory caches to sustain those processors’ efficient memory utilisation while acknowledging the increasing disparity between processor and memory speeds. When upgrading the ARM, Acorn had to introduce a cache in its ARM3 to try and keep pace, doing so to acclaim amongst its customers as they saw a huge jump in performance. But such a jump was long overdue, coming after Acorn’s first Unix workstation had shipped and been largely overlooked by the wider industry.

Acorn’s second generation of workstations, being two configurations of the same basic model, utilised the ARM3 but lacked a hardware floating-point unit. Commodore could rely on the good old 68881 from Motorola, but Acorn’s FPA10 floating-point accelerator arrived three years or so after those ARM3-based systems had been launched and two years later than expected. Indeed, it arrived so late that only days after its announcement, Acorn discontinued its Unix workstation effort altogether.

It is claimed that Commodore might have skipped the 68030 and gone straight for the 68040 in its Unix workstation, but indications are that the 68040 was probably scarce and expensive at first, and soon only Apple would be left as a major volume customer for the product. All of the other big Motorola 68000 family customers had migrated to other architectures or were still planning to do so, and this was what Commodore themselves resolved to do, formulating plans for an ambitious new chipset called Hombre, based around Hewlett-Packard’s PA-RISC architecture, that was never realised.

[Chart: performance of Amiga and workstation systems in approximate chronological order of introduction, showing how Unix workstation performance steadily improved, largely through the introduction of successively faster RISC processors.]

Acorn, meanwhile, finally got a chip upgrade from ARM in the form of the rather modest ARM6 series, choosing to develop new systems around the ARM600 and ARM610 variants, along with systems using upgraded sound and video hardware. One additional benefit of the newer ARM chips was an integrated memory management unit more suitable for Unix implementations than the one originally developed for the ARM. For followers of the company, such incoming enhancements provided a measure of hope that its products would remain broadly competitive in hardware terms with mainstream personal computers.

Perhaps most important to Acorn users at the time, given the modest gains they might see from the ARM600/610, was the prospect of better graphical capabilities, but Acorn chose not to release its intermediate designs along the way to its grand new system. And so, along came the Risc PC: a machine with two processor sockets and logic to allow one of the processors to be an x86-compatible processor that could run PC software. Acorn thus gave the whole hardware-based PC accelerator card concept another largely futile outing, failing to learn that while existing users may enjoy dabbling with software from another platform, such a capability hardly ever attracts new customers in any serious numbers. Even Commodore had probably learned that lesson by then.

Nevertheless, Acorn’s Risc PC would have been a somewhat credible platform for Unix, had Acorn not cancelled its own efforts in that realm. Prominent commentators and enthusiastic developers seized the moment, and with Free Software Unix implementations such as NetBSD and FreeBSD emerging from the shadow of litigation cast upon them, a community effort could be credibly pursued. Linux was also ported to ARM, although that work had actually begun on Acorn’s older A5000 model.

Acorn never seized this opportunity properly, however. Although the company entered the network computer market in pursuit of some of Larry Ellison’s billions, expectations of the software in network computers had also increased: after all, network computers have many of the responsibilities of sophisticated minicomputers and workstations. But Acorn was still wedded to RISC OS and, for the most part, to ARM. It ultimately transpired that while RISC OS might present quite a nice graphical interface, it was actually NetBSD that could provide the versatility and reliability being sought for such endeavours.

And as the 1990s got underway, the mundane personal computer started needing some of those workstation capabilities, too, eventually erasing the distinction between these two product categories. Tooling up for Unix might have seemed like a luxury, but it had been an exercise in technological necessity. Acorn’s RISC OS had its attractions, notably various user interface paradigms that really should have become more commonplace, together with a scalable vector font system that rendered anti-aliased characters on screen years before Apple or Microsoft managed to, one that permitted the accurate reproduction of those fonts on a dot-matrix printer, a laser printer, and everything in-between.

But the foundations of RISC OS were a legacy from Acorn’s 8-bit era, laid down hastily in an arguably cynical fashion to get the Archimedes out of the door and to postpone the consequences. Commodore inevitably had similar problems with its own legacy software technology, ostensibly more modern than Acorn’s when it was introduced in the Amiga, even having some heritage from another Cambridge endeavour. Acorn might have ported its differentiating technologies to Unix, following the path taken by Torch and its close relative, IXI, also using the opportunity to diversify its hardware options.

In all of this consideration given to Acorn and Commodore, it might seem that Apple, mentioned many paragraphs earlier, has been forgotten. In fact, Apple went through many of the same trials and ordeals as its smaller rivals. Indeed, having made so much money from the Macintosh, Apple’s own attempts to modernise itself and its products involved such a catalogue of projects and initiatives that even summarising them would expand this article considerably.

Only Apple would buy a supercomputer in an attempt to devise its own processor architecture – Aquarius – only to fail to follow through, eventually being rescued by the pairing of IBM and Motorola, themselves humbled by unanticipated declines in their financial and market circumstances. Or have several operating system projects – Opus, Pink, Star Trek, NuKernel, Copland – that were all started but never really finished. Or get into personal digital assistants with the unfairly maligned Newton, or consider redesigning the office entirely with its Workspace 2000 collaboration. And yet Apple ended up acquiring NeXT, revamping its technologies along that company’s lines, and still barely made it to the end of the decade.

The Final Chapters

Commodore got almost half-way through the 1990s before bankruptcy beckoned. Motorola’s 68060, informed by the work on the chip manufacturer’s abandoned 88000 RISC architecture, provided a considerable performance boost to its more established architecture, even if that architecture now trailed the pack, perhaps only matching previous generations of SPARC and MIPS processors, and played second fiddle to PowerPC in Motorola’s own line-up.

Acorn’s customers would be slightly luckier. Digital’s StrongARM almost entirely eclipsed ARM’s rather sedate ARM7-based offerings, except in floating-point performance, where a single system-on-chip product, the ARM7500FE, retained an advantage. This infusion of new technology was a blessing and a curse for Acorn and its devotees. The Risc PC could not make full use of this performance, and a new machine would be needed to do so, one that would also get a long-overdue update in a range of core industry technologies.

Commodore’s devotees tend to make much of the company’s mismanagement. Whether deserved or otherwise, one may now judge whether the company was truly unique in this regard. As Acorn’s network computer ambitions were curtailed, market conditions became more unfavourable to its increasingly marginalised platform, and the lack of investment in that core platform started to weigh heavily on the company and its customers. A shift in management resulted in a shift in business and yet another endeavour being initiated.

Acorn’s traditional business units were run down and the company’s next generation of personal computer hardware cancelled, yet a somewhat tangential silicon design business was effectively being incubated elsewhere within the organisation. Meanwhile, Acorn, sitting on a substantial number of shares in ARM, supposedly presented a vulnerability for the latter and its corporate stability. So, a plan was hatched that saw Acorn sold off to a division of an investment bank based in a tax haven, its shares in ARM liberated, and its assets dispersed at rather low prices. Those assets, of course, included the newly incubated silicon design operation, bought by various figures in Acorn’s “senior management”.

Just as Commodore’s demise left customers and distributors seemingly abandoned, so did Acorn’s. While Commodore went through the indignity of rescues and relaunches, Acorn itself disappeared into the realms of anonymous holding companies, surfacing only occasionally in reports of product servicing agreements and other unglamorous matters. Acorn’s product lines were kept going for as long as was feasible by distributors who had paid for the privilege, but with the decades of institutional experience of the organisation having been terminated almost overnight, there was never likely to be a glorious resurgence of its computer systems. Its software platform was developed further, primarily for set-top box applications, and survives today more as a curiosity than a contender.

In recent days, efforts have been made by Commodore devotees to secure the rights to trademarks associated with the company, these having apparently been licensed by various holding companies over the years. Various Acorn trademarks were also offloaded to licensors, leading to at least one opportunistic but ill-conceived and largely unwelcome attempt to trade on nostalgia and to cosplay the brand. Whether such attempts might occur in future remains uncertain: Acorn’s legacy intersects with that of the BBC, ARM and other institutions, and there is perhaps more sensitivity about how its trademarks might be used.

In all of this, I don’t want to downplay the reasons often given for these companies’ demise, Commodore’s in particular. In reading accounts of people who worked for the company, it is clear that it was not a well-run workplace, with exploitative and abusive behaviour featuring disturbingly often. Instead, I wish to highlight the lack of understanding in the communities around these companies and the attribution of success or failure to explanations that do not really hold up.

For instance, the Acorn Electron may have consumed many resources in its development and delivery, but it did not lead to Acorn’s “downfall”, as was claimed by one absurd comment I read recently. Acorn’s rescue by Olivetti was the consequence of several other things, too, including an ill-advised excursion into the US market, an attempt to move upmarket with an inadequate product range, some curious procurement and logistics practices, and a lack of capital from previous stock market flotations. And if there had been such a “downfall”, such people would not be piping up constantly about ARM being “the chip in everyone’s phone”, as is tiresomely fashionable these days. ARM might well have been just a short footnote in some dry text about processor architectures.

In these companies, some management decisions may have made sense, while others were clearly ill-considered. Similarly, those building the products could only do so much given the technological choices that had already been made. But more intriguing than the actual intrigues of business is to consider what these companies might have learned from each other, what the product developers might have borrowed from each other had they been able to, and what they might have achieved had they been able to collaborate somehow. Instead, both companies went into decline and ultimately fell, divided by the barriers of competition.

Update: It seems that the chart did not have the correct value for the Amiga 4000/040, due to a missing conversion from VAX MIPS to something resembling the original Dhrystone score. Thus, in integer performance as measured by this benchmark, the 68040 at 25MHz was broadly comparable to the R3000 at 25MHz, but was also already slipping behind faster R3000 parts even before the SuperSPARC and R4000 emerged.

On a tale of two pull requests

Sunday, June 15th, 2025

I was going to leave a comment on “A tale of two pull requests”, but would need to authenticate myself via one of the West Coast behemoths. So, for the benefit of readers of the FSFE Community Planet, here is my irritable comment in a more prominent form.

I don’t think I appreciate either the silent treatment or the aggression typically associated with various Free Software projects. Both communicate in some way that contributions are not really welcome: that the need for such contributions isn’t genuine, perhaps, or that the contributor somehow isn’t working hard enough or isn’t good enough to have their work integrated. Never mind that the contributor will, in many cases, be doing it in their own time and possibly even to fix something that was supposed to work in the first place.

All these projects complain about taking on the maintenance burden from contributions, yet they constantly churn up their own code and make work for themselves and any contributors still hanging on for the ride. There are projects that I used to care about that I just don’t care about any more. Primarily, for me, this would be Python: a technology I still use in my own conservative way, but where the drama and performance of Python’s own development can just play themselves out to their own disastrous conclusion as far as I am concerned. I am simply beyond caring.

Too bad that all the scurrying around, trying to appeal to perceived market needs while ignoring actual ones, along with a stubborn determination to ignore instructive prior art in the various areas being improved, needlessly or otherwise, fails to acknowledge the frustrating experience of many of Python’s users today. Amongst other things, a parade of increasingly incoherent packaging tools just drives users away, heaping regret on those who chose the technology in the first place. Perhaps someone’s corporate benefactor should have invested in properly addressing these challenges, but that patronage was purely opportunism, as some are sadly now discovering.

Let the core developers of these technologies do end-user support and fix up their own software for a change. If it doesn’t happen, why should I care? It isn’t my role to sustain whatever lifestyle these people feel that they’re entitled to.

Consumerists Never Really Learn

Thursday, May 15th, 2025

Via an article about a Free Software initiative hoping to capitalise on the discontinuation of Microsoft Windows 10, I saw that the consumerists at Which? had published their own advice. Predictably, it mostly emphasises workarounds that merely perpetuate the kind of bad choices Which? has promoted over the years, along with yet more shopping opportunities.

Those workarounds involve either continuing to delegate control to the same company whose abandonment of its users is the very topic of the article, or switching to another surveillance economy supplier who will inevitably do the same when they deem it convenient. Meanwhile, the shopping opportunities involve buying a new computer – as one would entirely expect from Which? – or upgrading your existing computer, but only “if you’re using a desktop”. I guess adding more memory to a laptop or switching to solid-state media, both things that have rejuvenated a laptop from over a decade ago that continues to happily run Linux, is beyond comprehension at Which? headquarters.

Only eventually do they suggest Ubuntu, presumably because it is the only Linux distribution they have heard of. I personally suggest Debian. That laptop happily running Linux was running Ubuntu, since that is what it was shipped with, but then Ubuntu first broke upgrades in an unhelpful way, hawking commercial support in the update interface to the confusion of the laptop’s principal user (and, by extension, to my confusion as I attempted to troubleshoot this anomalous behaviour), and also managed to put out a minor release of Dippy Dragon, or whatever it was, that was broken and rendered the machine unbootable without appropriate boot media.

Despite it being a known issue, they left the broken image around for people to download and use instead of fixing their mess and issuing a further update. That this happened during the lockdown years, when I wasn’t able to go and fix the problem in person, and when the laptop was also needed for things like interacting with public health services, merely reinforced my already dim view of some of Ubuntu’s release practices. Fortunately, some Debian installation media rescued the situation, and a switch to Debian was the natural outcome. It isn’t as if Ubuntu actually has any real benefits over Debian any more, anyway. If anything, the dubious custodianship of Ubuntu has made Debian the more sensible choice.

As for Which? and their advice, had the organisation actually used its special powers to shake up the corrupt computing industry, instead of offering little more than consumerist hints and tips, all the while neglecting the fundamental issues of trust, control, information systems architecture, sustainability and the kind of fair competition that the organisation is supposed to promote, then their readers wouldn’t be facing down an October deadline to fix a computer that Which? probably recommended in the first place, loaded up with anti-virus nonsense and other workarounds for the ecosystem they have lazily promoted over the years.

And maybe the British technology sector would be more than just the odd “local computer repair shop” scratching a living at one end of the scale, a bunch of revenue collectors for the US technology industry pulling down fat public sector contracts and soaking up unlimited amounts of taxpayer money at the other, and relatively little to mention in between. But that would entail more than casual shopping advice and fist-shaking at the consequences of a consumerist culture that the organisation did little to moderate, at least while it could consider itself both watchdog and top dog.

Dual Screen CI20

Sunday, December 15th, 2024

Following on from yesterday’s post, where a small display was driven over SPI from the MIPS Creator CI20, it made sense to exercise the HDMI output again. With a few small fixes to the configuration files demonstrating that the HDMI output still worked, I suppose one thing just had to be done: driving both displays at the same time.

The MIPS Creator CI20 driving an SPI display and a monitor via HDMI.

Thus, two separate instances of the spectrum example, each utilising its own framebuffer, potentially multiplexed with other programs (though that is not actually done here), are displayed on their own screens. All it required was a configuration that started all the right programs and wired them up.

Again, we may contemplate what the CI20 was probably supposed to be: some kind of set-top box providing access to media files stored on memory cards or flash memory, possibly even downloaded from the Internet. On such a device, developed further into a product, there might well have been a front panel display indicating the status of the device, the current media file details, or just something as simple as the time and date.

Here, an LCD is used, and not in any sensible orientation for such a product, either. We would want to use some kind of right-angle connector to make it face towards the viewer. Once upon a time, vacuum fluorescent displays were common for such applications, but I could imagine a simple, backlit, low-resolution monochrome LCD being an alternative now, maybe with RGB backlighting to suit the user’s preferences.

Then again, for prototyping, a bright LCD like this, decadent though it may seem, somehow manages to be cheaper than much simpler backlit, character matrix displays. And I also wonder how many people ever attached two displays to their CI20.

Testing Newer Work on Older Boards

Saturday, December 14th, 2024

Since I’ve been doing some housekeeping in my low-level development efforts, I had to get the MIPS Creator CI20 out and make sure I hadn’t broken too much, also checking that the newer enhancements could be readily ported to the CI20’s pinout and peripherals. It turns out that the Pimoroni Pirate Audio speaker board works just fine on the primary expansion header, at least to use the screen, and doesn’t need the backlight pin connected, either.

The Pirate Audio speaker hat on the MIPS Creator CI20.

Of course, the CI20 was designed to be pinout-compatible with the original Raspberry Pi, which had a 26-pin expansion header. This was replaced by a 40-pin header in subsequent Raspberry Pi models, presumably wrongfooting various suppliers of accessories, but the real difficulties will have been experienced by those with these older boards, needing to worry about whether newer, 40-pin “hat” accessories could be adapted.

To access the Pirate Audio hat’s audio support, some additional wiring would, in principle, be necessary, but the CI20 doesn’t expose I2S functionality via its headers. (The CI20 has a more ambitious audio architecture involving a codec built into the JZ4780 SoC and a wireless chip capable of Bluetooth audio, not that I’ve ever exercised this even under Linux.) So, this demonstration is about as far as we can sensibly get with the CI20. I also tested the Waveshare panel and it seemed to work, too. More testing remains, of course!

A Small Update

Friday, December 6th, 2024

Following swiftly on from my last article, I decided to take the opportunity to extend my framebuffer components to support an interface utilised by the L4Re framework’s Mag component, which is a display multiplexer providing a kind of multiple window environment. I’m not sure if Mag is really supported any more, but it provided the basis of a number of L4Re examples for a while, and I brought it into use for my own demonstrations.

Eventually, having needed to remind myself of some of the details of my own software, I managed to deploy the collection of components required, each with its own specialised task, most pertinently a SoC-specific SPI driver and a newly extended display-specific framebuffer driver. The framebuffer driver could now be connected directly to Mag in the Lua-based coordination script used by the Ned initialisation program, which starts up programs within L4Re, and Mag could now request a region of memory from the framebuffer driver for further use by other programs.

All of this extra effort merely provided another way of delivering a familiar demonstration, that being the colourful, mesmerising spectrum example once provided as part of the L4Re software distribution. This example also uses the programming interface mentioned above to request a framebuffer from Mag. It then plots its colourful output into this framebuffer.
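To illustrate the client side of this arrangement, here is a minimal sketch, assuming the L4Re::Util::Video::Goos_fb convenience wrapper and a “fb” capability wired up by the coordination script; the exact interface may differ between L4Re versions and from my own components, so treat this as indicative only:

    // Sketch of a client obtaining and using a framebuffer in L4Re.
    #include <l4/re/util/video/goos_fb>
    #include <l4/re/video/goos>
    #include <cstring>

    int main()
    {
      // Obtain the framebuffer via the "fb" capability handed to this
      // program by the coordination script.
      L4Re::Util::Video::Goos_fb fb("fb");

      // Map the pixel buffer into this task's address space.
      char *pixels = static_cast<char *>(fb.attach_buffer());

      // Interrogate the view for its geometry and layout.
      L4Re::Video::View::Info info;
      if (fb.view_info(&info))
        return 1;

      // Fill the view with a single value as a stand-in for the
      // spectrum example's plotting.
      std::memset(pixels + info.buffer_offset, 0xff,
                  info.bytes_per_line * info.height);

      // Tell the display multiplexer that the region has changed.
      fb.refresh(0, 0, info.width, info.height);
      return 0;
    }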

The result is familiar from earlier articles:

The spectrum example on a screen driven by the ILI9486 controller.

The significant difference, however, is that underneath the application programs, a combination of interchangeable components provides the necessary adaptation to the combination of hardware devices involved. And the framebuffer component can now completely replace the fb-drv component that was also part of the L4Re distribution, thereby eliminating a dependency on a rather cumbersome and presumably obsolete piece of software.

Recent Progress

Monday, December 2nd, 2024

The last few months have not always been entirely conducive to making significant progress with various projects, particularly my ongoing investigations and experiments with L4Re, but I did manage to reacquaint myself with my previous efforts sufficiently to finally make some headway in November. This article recounts some of the more significant accomplishments, modest as they might be, to give an impression of how such work is undertaken.

Previously, I had managed to get my software to do somewhat useful things on MIPS-based single-board computer hardware, showing graphical content on a small screen. Various problems had arisen with regard to one revision of a single-board computer for which the screen was originally intended, causing me to shift my focus to more general system functionality within L4Re. With the arrival of the next revision of the board, I leveraged this general functionality, combining it with support for memory cards, to get my minimalist system to operate on the board itself. I rather surprised myself getting this working, it must be said.

Returning to the activity at the start of November, there were still some matters to be resolved. In parallel to my efforts with L4Re, I had been trying to troubleshoot the board’s operation under Linux. Linux is, in general, a topic upon which I do not wish to waste my words. However, with the newer board revision, I had also acquired another, larger, screen and had been investigating its operation, and there were performance-related issues experienced under Linux that needed to be verified under other conditions. This is where a separate software environment can be very useful.

Plugging a Leak

Before turning my attention to the larger screen, I had been running a form of stress test with the smaller screen, updating it intensively while also performing read operations from the memory card. What this demonstrated was that there were no obvious bandwidth issues with regard to data transfers occurring concurrently. Translating this discovery back to Linux remains an ongoing exercise, unfortunately. But another problem arose within my own software environment: after a while, the filesystem server would run out of memory. I felt that this problem now needed to be confronted.

Since I tend to make such problems for myself, I suspected a memory leak in some of my code, despite trying to be methodical in the way that allocated objects are handled. I considered various tools that might localise this particular leak, with AddressSanitizer and LeakSanitizer being potentially useful, merely requiring recompilation and being available for a wide selection of architectures as part of GCC. I also sought to demonstrate the problem in a virtual environment, this simply involving appropriate test programs running under QEMU. Unfortunately, the sanitizer functionality could not be linked into my binaries, at least with the Debian toolchains that I am using.

Eventually, I resolved to use simpler techniques. Wondering if the memory allocator might be fragmenting memory, I introduced a call to malloc_stats, simply to get an impression of the state of the heap. After failing to gain much insight into the problem, I rolled up my sleeves and decided to look through my code for anything I might have done with regard to allocating memory, checking whether I had overlooked anything as I sought to assemble a working system from its numerous pieces.
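The probe itself amounts to almost nothing, assuming a C library offering this glibc-style extension:

    // malloc_stats is provided by the C library (glibc and compatible
    // implementations) and prints heap statistics to standard error.
    #include <malloc.h>

    void report_heap()
    {
      malloc_stats();
    }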

Sure enough, I had introduced an allocation for “convenience” in one kind of object, making a pool of memory available to that object if no specific pool had been presented to it. The memory pool itself would release its own memory upon disposal, but in focusing on getting everything working, I had neglected to introduce the corresponding top-level disposal operation. With this remedied, my stress test was now able to run seemingly indefinitely.
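A reconstruction of the kind of oversight involved, with purely illustrative names rather than those of my actual components, might look like this:

    // An object creates a "convenience" pool when none is supplied,
    // but nothing originally disposed of that default pool.
    class Pool
    {
      // Allocates memory on behalf of its users and releases that
      // memory upon its own disposal.
    };

    class FileObject
    {
      Pool *_pool;
      bool _owns_pool; // true only if the pool was created here

    public:
      explicit FileObject(Pool *pool = nullptr)
      : _pool(pool != nullptr ? pool : new Pool()),
        _owns_pool(pool == nullptr)
      {}

      ~FileObject()
      {
        // The missing top-level disposal: without this, every object
        // created without an explicit pool leaked its own pool.
        if (_owns_pool)
          delete _pool;
      }
    };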

Separating Displays and Devices

I would return to my generic system support later, but the need to exercise the larger screen led me to consider the way I had previously introduced support for screens and displays. The smaller screen employs SPI as the communications mechanism between the SoC and the display controller, as does the larger screen, and I had implemented support for the smaller screen as a library combining the necessary initialisation and pixel data transfer code with code that would directly access the SPI peripheral using a SoC-specific library.

Clearly, this functionality needed to be separated into two distinct parts: the code retaining the details of initialising and operating the display via its controller, and the code performing the SPI communication for a specific SoC. Not doing this could require us to needlessly build multiple variants of the display driver for different SoCs or platforms, when in principle we should only need one display driver with knowledge of the controller and its peculiarities, this then being combined using interprocess communication with a single, SoC-specific driver for the communications.
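In outline, and with hypothetical names standing in for the actual interfaces, the separation resembles the following, with interprocess communication connecting the two parts in practice:

    #include <cstdint>
    #include <cstddef>

    // The channel ultimately provided by a SoC-specific SPI server.
    struct SpiChannel
    {
      virtual void transfer(const uint8_t *out, size_t len) = 0;
      virtual ~SpiChannel() {}
    };

    // The display driver knows the controller's commands, not the SoC.
    class St7789Display
    {
      SpiChannel &_spi;

    public:
      explicit St7789Display(SpiChannel &spi) : _spi(spi) {}

      void init()
      {
        // "Sleep out" is a standard MIPI DBI/DCS command understood
        // by this controller.
        const uint8_t sleep_out = 0x11;
        _spi.transfer(&sleep_out, 1);
        // ...the remaining controller-specific initialisation...
      }
    };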

A few years ago now, I had in fact implemented a “server” in L4Re to perform short SPI transfers on the Ben NanoNote, this to control the display backlight. It became appropriate to enhance this functionality to allow programs to make longer transfers using data held in shared memory, all of this occurring without those programs having privileged access to the underlying SPI peripheral in the SoC. Alongside the SPI server appropriate for the Ben NanoNote’s SoC, servers would be built for other SoCs, and only the appropriate one would be started on a given hardware device. This would then mediate access to the SPI peripheral, accepting requests from client programs within the established L4Re software architecture.

One important element in the enhanced SPI server functionality is the provision of shared memory that can be used for DMA transfers. Fortunately, this is mostly a matter of using the appropriate settings when requesting memory within L4Re, even though the mechanism has been made somewhat more complicated in recent times. It was also fortunate that I previously needed to consider such matters when implementing memory card support, saving me time in considering them now. The result is that a client program should be able to write into a memory region and the SPI server should be able to send the written data directly to the display controller without any need for additional copying.
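As a sketch, and assuming the current L4Re allocation and region manager interfaces (the flag spellings have changed across releases, so treat this as indicative rather than definitive), obtaining such memory might look like this:

    #include <l4/re/env>
    #include <l4/re/mem_alloc>
    #include <l4/re/rm>
    #include <l4/re/dataspace>
    #include <l4/re/util/cap_alloc>

    void *alloc_dma_buffer(unsigned long size)
    {
      auto ds = L4Re::Util::cap_alloc.alloc<L4Re::Dataspace>();
      if (!ds.is_valid())
        return nullptr;

      // Request physically contiguous, pinned memory suitable for
      // transfers performed by the SPI peripheral. Error paths skip
      // cleanup here for brevity.
      if (L4Re::Env::env()->mem_alloc()->alloc(size, ds,
            L4Re::Mem_alloc::Continuous | L4Re::Mem_alloc::Pinned))
        return nullptr;

      // Attach the dataspace at a suitable free address in this task.
      void *virt = nullptr;
      if (L4Re::Env::env()->rm()->attach(&virt, size,
            L4Re::Rm::F::Search_addr | L4Re::Rm::F::RW,
            L4::Ipc::make_cap_rw(ds)))
        return nullptr;

      return virt;
    }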

Complementing the enhanced SPI servers are framebuffer components that use these servers to configure each kind of display, each providing an interface to their own client programs which, in turn, access the display and provide visual content. The smaller screen uses an ST7789 controller and is therefore supported by one kind of framebuffer component, whereas the larger screen uses an ILI9486 controller and has its own kind of component. In principle, the display controller support could be organised so that common code is reused and that support for additional controllers would only need specialisations to that generic code. Both of these controllers seem to implement the MIPI DBI specifications.

The particular display board housing the larger screen presented some additional difficulties, being very peculiarly designed: it presents what would seem to be an SPI interface to the hardware connecting to the board, but the ILI9486 controller’s parallel interface is apparently used on the board itself, with some shift registers and logic faking the serial interface to the outside world. This complicates the communications, requiring 16-bit values to be sent where 8-bit values would be used in genuine SPI command traffic.

The motivation for this weird design is presumably that of squeezing a little extra performance out of the controller that is only available when transferring pixel data via the parallel interface, especially desired by those making low-cost retrogaming systems with the Raspberry Pi. Various additional tweaks were needed to make the ILI9486 happy, such as an explicit reset pulse, with this being incorporated into my simplistic display component framework. Much more work is required in this area, and I hope to contemplate such matters in the not-too-distant future.
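For illustration, and with names of my own invention, the accommodation required by the shift-register arrangement amounts to something like the following: each 8-bit command or parameter byte is widened to a 16-bit unit, the upper byte padded, before being handed over for transfer.

    #include <cstdint>
    #include <cstddef>
    #include <vector>

    // Widen controller command/parameter bytes into 16-bit units for
    // the shift-register board; 16-bit pixel data needs no such help.
    std::vector<uint16_t> widen_for_board(const uint8_t *bytes,
                                          size_t len)
    {
      std::vector<uint16_t> out;
      out.reserve(len);
      for (size_t i = 0; i != len; i++)
        out.push_back(bytes[i]); // 0x2c is sent as 0x002c, and so on
      return out;
    }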

Discoveries and Remedies

Further testing brought some other issues to the fore. With one of the single-board computers, I had been using a microSD card with a capacity of about half a gigabyte, which would make it a traditional SD or SDSC (standard capacity) card, at least according to the broader SD card specifications. With another board, I had been using a card with a sixteen gigabyte capacity or thereabouts, aligning it with the SDHC (high capacity) format.

Starting to exercise my code a bit more on this larger card exposed memory mapping issues when accessing the card as a single region: on the 32-bit MIPS architecture used by the SoC, a pointer simply cannot address this entire region, and thus some pointer arithmetic occurred that had undesirable consequences. Constraining the size of mapped regions seemed like the easiest way of fixing this problem, at least for now.
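The approach can be summarised by a sketch, with illustrative names standing in for the real mapping operations: card offsets live in a 64-bit type, and only bounded windows are ever mapped, keeping 32-bit pointer arithmetic within a single mapped region.

    #include <cstdint>

    const uint32_t WINDOW_SIZE = 16 * 1024 * 1024; // 16 MB windows

    // Stand-ins for the real region mapping operations.
    uint8_t *map_window(uint64_t base, uint32_t size);
    void unmap_window(uint8_t *mapped);

    struct Window
    {
      uint64_t base = 0;         // card offset of the window's start
      uint8_t *mapped = nullptr; // mapped address of the window
    };

    uint8_t *access(Window &w, uint64_t offset)
    {
      uint64_t base = offset - (offset % WINDOW_SIZE);

      // Remap if the offset falls outside the current window.
      if (w.mapped == nullptr || base != w.base)
      {
        if (w.mapped != nullptr)
          unmap_window(w.mapped);
        w.base = base;
        w.mapped = map_window(base, WINDOW_SIZE);
      }

      return w.mapped + (offset - w.base);
    }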

More sustained testing revealed a couple of concurrency issues. One involved a path of invocation via a method testing for access to filesystem objects: I had overlooked that this method, deliberately omitting use of a mutex, could be called from another component, thus circumventing the concurrency measures already in place. I may well have refactored components at some point, forgetting about this particular possibility.
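The pattern involved is a familiar one, reconstructed here with purely hypothetical names: a variant of an operation deliberately omits locking because its callers are expected to hold the mutex already, and the fix is to ensure that external entry points only ever arrive via the locked path.

    #include <mutex>

    class FilesystemObject
    {
      std::mutex _lock;

      // Intended only for callers already holding _lock.
      bool _accessible_unlocked(const char *name)
      {
        // ...check the object's permissions and state...
        return name != nullptr;
      }

    public:
      bool accessible(const char *name)
      {
        // External callers must always take this locked path.
        std::lock_guard<std::mutex> guard(_lock);
        return _accessible_unlocked(name);
      }
    };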

Another issue was an oversight in the way an object providing access to file content releases its memory pages for other objects to use before terminating, part of the demand paging framework that has been developed. I had managed to overlook a window between two operations where an object seeking to acquire a page from the terminating object might obtain exclusive access to a page, but upon attempting to notify the terminating object, find that it has since been deallocated. This caused memory access errors.

Strangely, I had previously noticed one side of this potential situation in the terminating object, even writing up some commentary in the code, but I had failed to consider the other side of it lurking between those two operations. Building in the missing support involved getting the terminating object to wait for its counterparts, so that they may notify it about pages they were in the process of removing from its control. Hopefully, this resolves the problem, but perhaps the lesson is that if something anomalous is occurring, exhibiting certain unexpected effects, the cause should not be ignored or assumed to be harmless.
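In outline, and again with illustrative names rather than those of the actual framework, the waiting arrangement resembles the following: before it can be deallocated, the terminating object waits until counterparts have notified it about any pages they are still in the process of removing.

    #include <mutex>
    #include <condition_variable>

    class TerminatingObject
    {
      std::mutex _lock;
      std::condition_variable _idle;
      int _pending = 0; // removals currently in progress elsewhere

    public:
      void removal_started()
      {
        std::lock_guard<std::mutex> guard(_lock);
        ++_pending;
      }

      void removal_finished()
      {
        std::lock_guard<std::mutex> guard(_lock);
        if (--_pending == 0)
          _idle.notify_all();
      }

      // Called during termination, before deallocation can proceed.
      void wait_for_counterparts()
      {
        std::unique_lock<std::mutex> guard(_lock);
        _idle.wait(guard, [this]{ return _pending == 0; });
      }
    };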

All of this proves to be quite demanding work, having to consider many aspects of a system at a variety of levels and across a breadth of components. Nevertheless, modest progress continues to be made, even if it is entirely on my own initiative. Hopefully, it remains of interest to a few of my readers, too.