Paul Boddie's Free Software-related blog


Archive for the ‘public sector’ Category

The Atomisation of Society

Sunday, April 26th, 2026

Just as it was with the Bitcoin and blockchain fads before it, artificial intelligence is back in vogue, and now we must suffer hearing about “AI” all the time, shoehorned into everything technological and plenty of other things besides. I can hardly be in a minority in finding it all tiresome and even somewhat disheartening.

Of course, artificial intelligence is a broad discipline, and there are useful technologies doing useful work that may have been reported with varying levels of hype over the course of recent history. Over the last decade or so, machine learning has received a degree of media attention, often doing things related to mundane topics like pattern recognition, supporting applications in worthy domains such as medicine. Interestingly, activities like machine translation and visual object recognition are increasingly taken for granted by wider society, despite being problems that were once considered difficult to solve or tackle reliably.

If worthy domains are not your thing, and your phone is the centre of your life, you might be able to point it at some foreign language text and see it appear, magically translated, in a language you might understand. But this mostly isn’t what the current wave of “AI” hype is all about, even if such relatively useful applications are used to greenwash that hype.

Has your Internet search experience degraded to the point of near uselessness? Worry not: you can now be distracted by something pretending to be intelligent feeding you some misinformation instead. Never mind that an improved search experience, yielding useful, informative results was well within the technological grasp of the well-resourced corporations involved. Boring old part-of-speech tagging, maybe some semantic networks, and a bit of traditional information retrieval technology could go a long way. But, of course, that wouldn’t perpetuate the predatory advertising-fuelled surveillance economy or attract billions of speculative investment dollars.
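To illustrate how far “boring old” techniques can go, here is a minimal sketch of traditional TF-IDF document ranking in plain Python. The toy corpus, the crude tokenisation and the smoothing choice are all my own illustrative assumptions, not anyone’s production system:

```python
import math
from collections import Counter

def tokenise(text):
    # Crude whitespace tokenisation with lowercasing; a real system
    # would also stem words and perhaps tag parts of speech.
    return text.lower().split()

def tf_idf_rank(documents, query):
    # Score each document against the query using the classic TF-IDF
    # weighting from traditional information retrieval, returning
    # document indices ordered from most to least relevant.
    doc_tokens = [tokenise(d) for d in documents]
    n = len(documents)

    def idf(term):
        # Smoothed inverse document frequency: rare terms score higher.
        df = sum(1 for tokens in doc_tokens if term in tokens)
        return math.log((n + 1) / (df + 1)) + 1

    scores = []
    for i, tokens in enumerate(doc_tokens):
        counts = Counter(tokens)
        score = sum((counts[t] / len(tokens)) * idf(t)
                    for t in tokenise(query))
        scores.append((score, i))
    return [i for score, i in sorted(scores, reverse=True)]

docs = [
    "the cat sat on the mat",
    "dogs and cats living together",
    "a treatise on software documentation",
]
print(tf_idf_rank(docs, "software documentation"))  # → [2, 1, 0]
```

No machine learning, no data centre, no speculative investment dollars: a few dozen lines suffice to rank documents usefully for exact-term queries.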

In Britain, it isn’t enough to saddle the National Health Service with market economics, layers of bureaucracy, and numerous consultants and consultancies siphoning off considerable sums at the taxpayer’s expense, all as the fundamental challenges of the service’s information systems – and core activities – remain undiminished. Now, “AI” will drift in, touted by commercial opportunists and predators, new and old, alongside “data science” practitioners aiming to exploit private health information for their own benefit.

And as if by magic, new “efficiencies” will supposedly result. Never mind that without fixing the bread-and-butter issues, no amount of smoke and mirrors can deliver the magic envisaged. But, of course, it is far easier to wish away one’s problems than to confront them, especially if there are commercial interests with products and services to sell. Why not delegate hard problems to some supposedly all-knowing, all-understanding entity that will magically understand all those things for which the impatient politician or manager simply has no time (or intellectual stamina)?

It is also apparently not enough to saddle educators with extra responsibilities that in a well-organised society would be the task of social and healthcare workers and, in some situations, that of police outreach workers and the police service itself, all as funding is perennially slashed in the education sector. Instead, policy-makers sing tunelessly from the “AI” cult’s songsheet, fuelling yet another corrosive societal problem and, as always, dumping the consequences in the laps of educators for “careful consideration”. And, of course, the welfare of the students barely registers on the conscience of those policy-makers as they perform their victory lap.

As Winter Follows Summer

For the most part, the “AI” in the latest round of hype is that most easily understood by, or most relatable to, the average media commentator. It is the gadget that generates written or visual content as if it were some kind of creative being. For their managerial counterpart with the job title incorporating the dishonestly chosen words “customer satisfaction”, the hammer in their toolbox marked “chatbot” is thus the “go to” tool for every digital transformation job, just as it was in the early 2000s for organisations looking to somehow “deepen” the level of interaction with their customers.

Amusingly, many of today’s corporate chatbots are largely the same kind of keyword-scanning “choose an option” counterparts to the classic phone menu as their predecessors were. The most likely reason for this is that the much-hyped large language models, exposed to querying from random people on the Internet, risk inflicting reputational damage on any given hapless organisation as they go off-script and start to parrot “poorly sourced” training data, to put it charitably, of which there may be quite a bit.
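Indeed, such a keyword-scanning chatbot is trivial to sketch. The following toy Python example, with made-up keywords and canned responses of my own invention, captures the essence of what many of these “AI-powered” assistants actually do:

```python
import re

# A lookup table of keywords mapped to canned responses: no language
# model involved, just the classic phone-menu approach in text form.
RESPONSES = {
    "refund": "Please see our returns policy or reply REFUND to proceed.",
    "delivery": "Deliveries usually arrive within 3-5 working days.",
    "password": "Use the 'forgotten password' link on the sign-in page.",
}

FALLBACK = ("Sorry, I didn't understand. Please choose: "
            + ", ".join(sorted(RESPONSES)))

def reply(message):
    # Scan the message for known keywords; the first match wins,
    # and anything unrecognised falls back to the option menu.
    words = re.findall(r"[a-z]+", message.lower())
    for keyword, response in RESPONSES.items():
        if keyword in words:
            return response
    return FALLBACK

print(reply("Where is my delivery?"))
print(reply("I demand to speak to a manager"))
```

Anything off-script lands on the fallback menu, which is precisely why such systems cannot embarrass their operators in the way an unsupervised language model can.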

And it is the matter of what is used to feed the “AI” monster that should concern those of us in Free Software and every other industry that creates content. It is here that we find out who our allies are, who is really committed to a free and fair society, and who it is that instead chooses to betray us. Unsurprisingly, those who betray us are largely the same group as those who, once employed by a “rich uncle” in the form of some corporate behemoth dangling incentives and a comfortable lifestyle, try to explain away the problematic behaviour of that rich uncle. Worse still, they tend to demand the silencing of any uncomfortable criticism of either the behemoth or themselves in their sordid pact.

You will find these people regularly trolling others in any venue where comments may be left to supposedly discuss the headline content, basking in their privilege and telling everybody else what little everybody on the outside really knows. As layoffs roll in across the technology sector once again, it remains to be seen whether the enthusiasm for defending some corporation’s questionable ethical record remains buoyant or whether there might be some buyer’s remorse amongst those who took the easy money. But of course, these people will glide through another gilded door, ushered in by some other member of the old boys’ network. After all, some of them have been trying to undermine Free Software for years.

This time round, the layoffs are justified by management using the very thing that the corporate apologists seek to defend. Perhaps the most enthusiastic apologists for “AI” have not been shown the door just yet, but plenty of their colleagues have. Then again, there are plenty of people who see a harmful phenomenon and think: “That could never happen to me!” After all, they are the special ones. They might concede that something has come along to “shake up” the industry, but as the special ones, they are the ones who will adapt, who will turn this disruptive force into their “competitive advantage”.

In any online discussion related to “AI” and the software industry, you can expect to read the same old guff. Some self-proclaimed top-flight developer will tell everyone how much “more productive” they are by getting “AI” tooling to generate all those unit tests that they previously had to write by hand. Never mind that their claimed “insane level of productivity” is tempered by their own admitted need to review the slop created for them, a result of them describing their needs in “plain English” when it has been known for decades that the definition of a functional specification for a piece of software, especially when done in a natural language, is one of the harder tasks in systems development.

Many of these people will claim that this lets them “spend more time on the creative part” of their job, never mind that the kind of person who writes such drivel is usually the kind of person that most observers would struggle to categorise as creative. But their advocacy for systems that systematically trawl other people’s work, undergo “training” on it, and transform it for eventual regurgitation in a hopefully distinct and thus plausibly “original” form, together with their sudden enthusiasm for the creative side of their work, will find its reward in the end. After all, creative jobs have been amongst the first to be eliminated in this new wave of “AI”, as managers (aping their tech idols) declare that graphic designers are no longer needed, just as I learned in one enlightening, spontaneous conversation a year and a half or so ago, or as anyone can discover just by following the news.

Your manager will surely not care how much more quickly all those unit tests are getting written – not that they cared before, exactly – nor will they value your “creativity” when the latest toy is dangled in front of them promising to do your job. Your peers will not care about any kind of epiphany you have, either. They will still be the special ones and you, like the rest of us, just jealous of their success. You will have had your chance to change the world for the better, and you will have blown that chance.

The Hoarding of Privilege

One might expect technologists to be attuned to the perils of technology for themselves and society, but one should never underestimate the lure of selfishness and self-interest, of not wishing to know about any negative consequences of their work or their interests. As one might expect, solidarity is the first casualty.

One sees talented developers who have spent their entire career writing their own code, tinkering with “AI” coding tools and enthusing about the output, wasting other people’s time pontificating about why the assembly language generated for some 8-bit microprocessor isn’t as plausible as a Python script for a task solved independently hundreds of times and published to the Internet every single time. One sees talented developers make nice new applications, only then to go off and use “AI” for the artwork or for other “creative” elements of the work.

They might say that they aren’t talented enough to do those tasks themselves, so they can justify delegating them to a machine. Maybe they just want something that fills the gap, or maybe they don’t want to pay for such work. But even if everyone is doing all of this for fun and for free, why do they not involve others in their creative exercise, people who might be able to draw pictures or write prose?

It sounds to me as though they just do not value such creative endeavours or the role of those in society who are talented enough to pursue those endeavours. And as they hit up “AI” to conjure up some art, featuring a particular subject, done in a particular style, they perhaps purposefully neglect the creators of the works that fed the “AI” in the first place: those very same people they breeze past as they go and hit all the buttons.

It must be so wonderful to be in a position of having made plenty of money in a profession that was once valued, maybe even lucrative, to now be able to check out for the rest of one’s career. To pull up the ladder on the next generation, eliminate their opportunities, and to indulge in that age-old practice of berating them for being lazy and not good enough. To indulge in “vibe coding” personally, maybe even promote it educationally, but then to label as decadent all recent practitioners in one’s own profession, simply not made of the same stuff as folk were in the “good old days”.

One encounters people who say that they are glad that they are retired or will soon be retiring. For some of them, this may be a legitimate reaction to an increasingly tiresome and possibly even hostile workplace culture, looking forward to some well-deserved relief tinged with remorse for a vocation that no longer delivers the satisfaction it once did. For others, it just sounds like they are content to leave the world in a worse state than it was, and to abandon the job of reversing this damage to subsequent generations: hardly an isolated occurrence in the modern era.

It can be all too easy for people to downplay the effects of new technologies, often with claims that appeal to ideas about “human nature”. As children go through their entire childhood exposed to technologies that are inadequately supervised and often exploitatively designed, some might claim that schoolchildren or students seeking to cheat on their assignments “isn’t a real issue”. And with this, we are presumably supposed to move on. But anyone who knows any teachers will also know that plagiarism is very much “a real issue”, both for students and teachers.

With ready access to online tools, even encouragement to use them, the temptation is too strong for many students, particularly those who have been struggling throughout their education. With a diminishing incentive to learn, and besieged by social pressures amplified by technology, plagiarism starts to look like all they have left in their arsenal. When we hear stories about disengaged young people and their apathy, imagine being the person who has to try and motivate these young people, especially when it seems like every last person with influence over this situation has abandoned everyone affected.

And plagiarism is, in fact, such an issue that Microsoft, having flooded the zone with “AI” toys, offers other toys to detect plagiarism. There is good money to be made by escalating a situation into a conflict and equipping both sides, as those in certain other professions and industries know all too well. I have read tales of educational institutions being infatuated with “AI” to the point of rolling it out, often excused in numerous ways, but mostly amounting to someone in a position of power wanting to try all the toys.

(And given that school administrations are routinely surrendering their classrooms to gambling industry practices and data surveillance predators, “AI” probably does seem like just another fun toy to play with.)

I have heard tales of decision-makers and executives trying to mitigate the harmful effects of their decisions, related to “AI” and otherwise, by leaning into the consumerisation of education and seeking to wave through students who may have resorted to plagiarism. Students who just want the piece of paper, so that they can get on all those celebrated ladders of society – good employment, decent housing – before they are pulled up and out of reach entirely.

But what will become of such students when they move on to the next stage of their education or development, increasingly out of their depth, feeling less and less adequate, confident, fulfilled or secure in their own existence? At what point do we justifiably label the abdication of responsibility, by those in authority and those who seek to profit from such misery, the abuse that it arguably is?

“You start with the hope that the next swipe will bring a reward, and eventually it just feels good to see the images floating upwards. And you believe that you’ll get something out of it in the end. But that’s the same dirty trick that gambling machines use. The house always wins.”

When society fails to protect its members as they grow up, how can those in privileged positions criticise people, overwhelmed and desensitised by torrents of increasingly harmful content, for not participating in or engaging with society? Maybe those elected and appointed to protect our interests could do exactly that, instead of looking out only for themselves and the vested interests who have been lining their party’s pockets.

Quality Uncontrolled

One potentially common element in the workplace and the classroom is how each group perceives the materials they encounter, along with their attitudes towards the cultural works of their own society. For children growing up in earlier times, one might expect the consumption of books, magazines, newspapers, television series, films, popular music and other such works to have played a defining role in their perception of culture and its value.

But after years of relentless and cynical commercialisation, children have increasingly been growing up in an environment of relentlessly derivative entertainment content. Instead of new and original stories, endless comic book franchise reboots and the like, largely to keep trademarks and other “intellectual property” warm and out of the public domain, may have habituated people to expect less and less from culture and to place a lower value on it.

Instead of being engaged by cultural works, those works become mere wallpaper, something to have on in the background while other, more harmful, forms of engagement steal the attention of the viewer or listener. And with their perspectives and beliefs unengaged and unchallenged by mainstream culture, people risk being sucked into a bubble all of their own, all too readily and easily fed by “AI”, reinforcing their flawed views and pandering to their prejudices.

Why would you want “AI” to generate content for your own consumption? For someone habituated to having their favourite superhero meet and/or fight their other favourite superhero in endless re-runs of generic, commercial filler, it isn’t such a big step to go on and have one’s tummy rubbed all the time by unchallenging and probably inaccurate content that happens to appeal to one’s existing biases, now readily produced in bulk and on demand by “AI”.

For some of us, the process of creating something new is part of the eventual reward, and the process of researching, understanding and communicating something is engaging all by itself. But the degradation of culture to a mere consumable risks the marginalisation or even eradication of the creative process, the investigative process, of historical and critical enquiry, all so that people can be soothed and entertained.

There is a relatively well-known quote about AI being supposed to free up a person’s time for pursuing their art by taking care of the cleaning and the laundry, but instead “AI” has made them and their art redundant and leaves them with the cleaning and the laundry. What might this have to do with the workplace? Well, to answer that, we have to address the phenomenon of the software product and its gradual erosion.

Angst or Realism?

When the well-known technologist Tim Bray raised the issue of “AI Angst”, alongside broader issues, the accompanying off-site discussion of the article predictably featured the usual consumerist “works for me” and aspirational “more productive” tropes. But it also exposed the sentiment that “AI” is quite able to take the joy and the motivation out of doing work and being in a profession. From those with the most enthusiasm for vibe coding came claims that only “5% to 20% of the work is interesting” in a modern programming job. So, what does that say about the state of the industry?

One factor might be that the industry is subject to a continual inundation of fads and trends, often imposed on the profession from outside and facilitated by opportunists who can persuade management and executives of productivity boosts and reductions in expenditure. Another is that enthusiasm for “AI” is evidently higher amongst those whose development work involves a high level of “donkey work”. This leads to a pertinent question: have technologists failed their profession by perpetuating cumbersome, inadequate technologies?

If you have ever had to work with certain widely available, well-resourced software today, you will be familiar with the sensation of having to seek advice or guidance on how some aspect of the software works or is to be used, only to discover that there is no documentation, or that the standard of documentation is so dismal that one can only wonder how people have allowed such software to be used so widely and in such critical applications in the first place. Not that wondering about it is in any way helpful.

Over the years, various remedies have been thrust onto the scene, from plain old Internet search hoping to dig up some gems of wisdom, to discussion forums aspiring to provide relief to adherents of particular technologies, to the now-familiar question-and-answer site concept popularised by the likes of Stack Overflow and its affiliates. But alongside the frustration that inadequate documentation may bring is the frustration and general disillusionment of having to continually ask questions, sometimes of vain, petulant and uncooperative individuals, that should have been definitively answered long ago, with those answers recorded coherently for posterity in actual documentation.

We might then understand why some people could be tempted to use “AI” chatbots, desperately trying to get insight into things that should have been made clear by actual human beings in the first place. The chatbot might be misleading or counterfactual, but it is probably going to act somewhat cooperatively, not shame the user about their lack of knowledge, or withhold information to coerce a form of deference. Well, at least not yet, anyway.

This sorry situation is a consequence of the way the software industry has been heading over the last few decades. Under the banner of agility, competitiveness, and that crucial “time to market”, forever to be optimised and minimised, things like documentation are critically undervalued. “Read the code!” exhorts one techbro to the other, advocating the complete elimination of any commentary, deemed unnecessary because “the code is the documentation” and because what they coded was obvious to them when they were “in the zone”.

Never mind that the commentary would have revealed the intended behaviour of the code, along with the deficiencies of the code that was actually written. But, of course, no-one had any time for that, let alone any time for writing a coherent guide to the software, its architecture, and how other people might interact with it in various ways as users of different kinds and developers needing to fix and extend that code. All of that stuff was low-end work according to the techbro, as are other kinds of programming that they do not personally value. (Still, the collision of the butch “rewrite everything in Rust” brigade and “AI” can provide some entertainment and maybe an occasional reality check.)

It also does not help that documentation has joined the other facets of development in being subjected to various technological and methodological fads that do more to create opportunities for certain industry players than to improve the quality and breadth of material available. Just as one’s heart sinks when the primary source of guidance for a project turns out to be a GitHub repository page, so it does when one encounters a documentation site produced by Sphinx, with plenty of hastily scribbled sections littered with “admonitions” and caveats.

Tim’s use of the term “angst” rightfully received a strongly worded response from one commenter, not least because labelling people’s reactions to seemingly overwhelming pressures as if those reactions were weaknesses is what Tim might himself regard as victim blaming. Tim seems like a generally good guy with a great deal of self-awareness, but I imagine that his position in the industry, along with that of elements of his readership, some probably having done very nicely indeed from their tenure at various West Coast behemoths, makes the general demographic less than reflective when they lament the shocking lack of alternatives to their favourite cloud services.

To an extent, it is a bit like former politicians who take on moral causes after their positions in power are over and as their influence steadily diminishes. Where were they when other people needed their help in furthering those causes? When adjustments to policy, so easily done, might have made a significant difference. Similarly, with Free Software and the provision of sustainable, ethical technology more broadly, the dominant ideology promoted vigorously by West Coast capitalism often seems to involve discretionary support to worthy causes by wealthy people, many of whom did just fine while those worthy causes were kept marginalised, and when various structural causes of inequality, noted by Tim himself, were conveniently ignored because that would involve wealthy people paying a bit more tax.

The Idiocracy

There is always some idiot who pipes up in online discussions about how they asked some chatbot about some topic, “and it said this”. It is almost like they want a prize for having done something that anyone else could have done. And what purpose does this kind of “idle wondering” use of chatbots, indulged at the cost of colossal amounts of electricity, actually serve? If anyone were really interested in a topic, and were to have the knowledge and understanding to actually assess chatbot output, then they probably wouldn’t be using one in the first place.

That leaves the most likely kind of user as the sort of middle manager who doesn’t understand anything, who just pushes the paper on to someone who is supposed to “action” the information they were given. At what point is the human, relaying information they don’t understand to impress people about “what they know”, just a conduit for the activities of machines? And are they not then just a puppet participating in the game of assessing whether the machines are intelligent or not, abdicating their own genuine intelligence in the process?

To be fair to those wishing to put questions to machines and expecting some kind of summarised response, questions and other naturally constructed prose could conceivably help formulate more effective queries than simple combinations of keywords. The grammatical roles of words and the relationships between those words can disambiguate between the different meanings of some words and constrain the context of any particular search for information. Returning a summary that paraphrases the materials that were found might help in determining whether those materials happen to cover the right topics.

But what I discovered recently when being offered an “AI” search summary for a particular, highly specific, search term was the kind of thing people describe as hallucination. More accurately, though, it was reminiscent of when a lazy student of a certain age, or someone who is “hustling” to impress their equally superficial superiors, states something as a fact and references a bunch of supposed supporting material, only for none of that material to actually confirm the fact or quote the term in question, let alone define that term as the thing confidently stated in their opening sentence.

Once upon a time, when machines were meant to be doing things that were described as “reasoning”, researchers were expected to show how the machine had worked through to its conclusion, revealing things like “facts” and “inferences”, and thus providing insights into the state of the machine’s encoded “knowledge base”. One would then be able to verify for oneself whether such a conclusion was sound or not. But now, it would seem, nobody really cares about accuracy or correctness, but only whether the damned machine has the right kind of “swagger” that would convince a room full of imbeciles with something that might sound credible enough to them.

And we are entering a time when we should be concerned about how people might readily check the correctness of “AI” queries. It was always bad enough when a search engine would produce results in which the search terms didn’t appear, even in a derived form, because some “search engine optimisation” vulture had deceived the search engine with a bucket of false keywords stuffed somewhere into a page or site. But now the search providers have an excuse to push “AI search”.

Having trained their models on the bulk of the Web, they can then push out arbitrary “summaries” while deliberately obscuring the content that those models consumed. Thus, they can deny plagiarism by making source material difficult to find, while also omitting or excluding content that contradicts or refutes any dubious assertions or conclusions presented as fact by their services. They arguably don’t even have to try very hard to degrade the experience, giving the whole exercise the air of plausible deniability.

As with predatory social media, the training of “AI” also has considerable emotional costs for the people who are largely exploited in the work of training it. Not only are they not encouraged to properly correct disinformation, but they may not be suitably qualified to do so. After all, everyone can only be knowledgeable about so much. Having seen behind the curtain, their advice is not to trust it. For “AI” companies, hyping up their products, and for the idiocracy, the illusion of the perfect, shiny, all-knowing android has to be upheld at all costs: for the former so that the money can keep rolling in, and for the latter so that they can presumably look cleverer than they actually are.

The tiresome industry insider will criticise anyone suggesting that the average chatbot exhibits only superficial characteristics of intelligence, hand-waving towards mysterious models while playing the “you don’t know what I know” card, but it isn’t unreasonable to suggest that the whole phenomenon is barely above the level of a parlour trick. Does the chatbot really understand anything or do people just read meaning into what is effectively just banter? Is it merely a new Eliza for the disinformation age?
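For those unfamiliar with Eliza, Weizenbaum’s 1966 program needed only a handful of pattern-matching rules to sustain the illusion of conversation. A minimal sketch in Python, with invented rules rather than Weizenbaum’s originals, shows how little machinery is required for people to read meaning into banter:

```python
import re

# Eliza-style rules: regular expressions that reflect the user's own
# words back at them. No understanding is involved anywhere.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

def eliza(utterance):
    # Apply the first matching rule; otherwise fall back to a
    # content-free prompt that keeps the "conversation" going.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(eliza("I need a holiday"))       # → Why do you need a holiday?
print(eliza("The weather is terrible"))  # → Please tell me more.
```

Weizenbaum was famously disturbed by how readily people confided in this trick; scale the pattern-matching up by many orders of magnitude and the question above rather answers itself.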

The Apologists

There are always plenty of people willing to pipe up when something that amuses or delights them is criticised in some way. One will undoubtedly encounter people who cosplay their fantasies of living in the future by infuriatingly using the term “an AI” in an unsarcastic and completely sincere way to refer to a system supposedly employing AI. In doing so, they effectively attribute general intelligence and even sentience to those systems. Criticise such sloppiness and you’ll get something like this in return:

“Nobody had any problem calling it artificial intelligence back when it was far less capable than it is now.”

Once again, such people fail to appreciate the distinction between artificial intelligence as a discipline and the promotion of the discipline in broader society, where despite the buzz in superficial media circles, applications of AI were often met with much scepticism. It was frequently obvious that the systems involved might have been exhibiting traits of ostensibly “intelligent” behaviour, but these systems could not be regarded as being intelligent in their own right or in a general sense.

Even with huge amounts of data and computing power brought to bear on such matters, chatbot output still has the smoke-and-mirrors character familiar from earlier demonstrations of the capabilities of AI. Sadly, popular culture is now configured to amplify the hype as opposed to deconstructing it. So, even pointing this out has people believing that researchers in earlier times would have been awestruck by today’s “AI” chatbots, when in fact they would almost immediately recognise the phenomenon. Indeed, they would be disturbed by the way society has embraced such technologies unquestioningly, perhaps observing that earlier demonstrations were mere laboratory experiments, potentially dangerous if they escaped and proliferated.

What has changed since those earlier times is the broader availability of computing power that delivers a more convincing “demo effect” of the technology. This makes decision-makers think they can dispense with humans and roll out the chatbots. Couple this with the way that the populace has been conditioned and manipulated in terms of expecting and accepting less from public services, private companies, their employers, in their careers, and in their lives more generally, and it is not surprising that people are consuming such technologies. Those who believe that they are only doing so recreationally are predictably dismissive about the negative effects on vulnerable people who might end up using such services because all of the responsible, humane options have been eliminated for the sake of “convenience”.

There is also the arrogance of those who seek to exude a greater comfort with technological change than the average person, even as they reveal their own insecurity about the nature of intelligence. To me, the idea that other creatures possess a range of intelligence is indisputable, and we are regularly presented with observations made about intelligence in the natural world, animal behaviour and cognition, that should merely confirm that we as humans are not quite as special as we might think. Many people engage constructively with such observations and show that they can readily accept notions of more pervasive intelligence in nature.

Meanwhile, the insecure but outwardly confident technophile presumably scoffs at the notion that, say, animals might employ forms of reasoning or possess forms of cognition or information processing that could rival those of humans. They would exhort others to not attribute “human” traits to other species, even as they readily ascribe fanciful characteristics to machines running mystery payloads of software, presumably because humans were involved in their creation.

Some apologists play the inevitability card, that this is simply another change following on from many that society has already absorbed and survived, and that this will somehow be digested, too, seeing society adapt and life go on as before. But here, even those who appreciate the challenges these technologies pose do not seem to understand the cultural and societal calculations that accompanied earlier forms of technological change. There are long-enduring cultures on this planet that have had social rules about the depiction of the human form for thousands, maybe tens of thousands of years, and yet our arrogant “modern” societies think we have nothing to learn from such cultures.

Again, the apologists might reveal their ignorance by scoffing at pigments, paints and dyes as technology, but they gave humans the ability to choose how they might be portrayed. We live in an age where the application of computational technologies may seek to eliminate any distinction between observed reality and precisely concocted fantasy in a way that is still scarcely believable even now. Fake pictures, video and audio can be presented as genuine representations and recordings of reality, leaving their audiences deceived.

Maybe older cultures did not need to see the emergence of such technology before they understood some fundamental social lessons that still remain unabsorbed by our supposedly “advanced” societies today. And while it might be said that perhaps those older cultures needed thousands of years to reach their own conclusions, we might also observe that our own societies do not have the luxury of thousands of years to tackle the effects of this corrosive fakery. This might even remind you of another pressing, existential threat to humanity.

One notable incident in this regard involved content creator Jeff Geerling, whose voice was cloned by a supplier of technology products to use in advertisements. Fortunately, the company in question backed down when challenged over the unlikely similarity of its synthesised voice to that of Geerling, and the incident was resolved relatively amicably and with incredible grace by Geerling himself. The incident highlighted the proliferation of such tools and their widespread availability for all kinds of applications.

Naturally, there are apologists for such tools, too, of the same school that presumably insists that nuclear weapons are not inherently bad, just how they might be used. Our societies show little sign of resilience in the face of such threats. Instead of robust legislation, regulation and education, we are left with the usual predictable media commentary about “scams” with various workarounds to “stay safe”. And, those same media outlets still promote the social media lifestyle, exhorting everyone to share their lives online and to feed the predatory social media monster.

The apologists might regard “AI” as harmless fun or empowering (to them), but there is an arms race in progress and an industry dealing in what might be described as information weaponry, building on the delivery mechanisms of predatory social media, to further degrade society’s resilience, making it impossible to trust voices, images, video and content. We might like to believe that we are the sophisticated ones, living in a “modern” society, in contrast to cultures where images of the human form are forbidden. One might wonder whether such rules are less about “taboos” – a culturally loaded term – and more about such societies having a fundamental realisation that our own societies fail to appreciate or understand, dismissing it with arrogant talk of our own “progress”.

Media coverage of “deepfake this” or “deepfake that” predictably circles around sensationalism, involving deceased celebrities if the audience risks being unmoved. But on a more ordinary level, is it progress to no longer be sure that the person sounding like or looking like a member of your family on the other end of a voice or video call really is that person? Is it progress to need a codeword to be somewhat sure?

And is it progress to only be sure if you are standing there in the same room as them, until the day when some ghoul, possibly one who has grown tired of monetising deceased celebrities, eventually introduces lifelike androids to impersonate random people? Even before that miserable day arrives, is it progress if some other ghoul decides to “deepfake” deceased relatives to torment those who are in mourning?

One of the more interesting contributions to the discussion about “AI angst” was the one giving the Vatican’s view on “AI”, which elicited some considered responses. Naturally, the Catholic church is hardly the most popular institution for a variety of reasons, but one has to concede that on matters of theology and philosophy, on issues that shape humanity’s view of itself and its relationship with the natural world, the institution can hardly be considered to be staffed with lightweights, even if we might disagree – sometimes strongly – with its position on certain social issues.

Paying for Other People’s Privilege

As with many of society’s ills, one can always choose to metaphorically put one’s head in the sand and ignore those problems that mostly seem to only affect somebody else. Regardless of whether one does so, however, such problems have an annoying habit of landing in one’s lap, anyway. A few months ago, I got a mail from my hosting provider telling me that a “lot of traffic” visiting one of my sites was “causing excessive CPU usage and disruptive service for other customers”. Of course, this was due to a multitude of client addresses hammering my site – a repository browser – and crawling all over every last published resource.

It was suggested that I put my site “behind Cloudflare” since they offer various services to mitigate the effects of “bot” traffic, with my only guidance being a link to a blog about a service that Cloudflare offers. There was no guidance about what obligations towards Cloudflare I might have as a result, whether there might be payment involved, or whether Cloudflare might want or get something else from me, were I to sign up. In the end, I just put authentication in front of my site and re-enabled it, hoping that the bots would eventually give up if all they saw was the appropriate HTTP “authentication required” response.
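The stopgap described above can be sketched in outline. What follows is a minimal, hypothetical illustration of the challenge-response mechanism involved, not the actual server configuration used: unauthenticated requests receive the HTTP 401 response that a crawler would see, while requests carrying the expected Basic credentials are let through. The credentials and realm name are invented for the example.

```python
import base64

# Invented credentials and realm for illustration only; a real site would use
# the web server's own authentication support (e.g. an htpasswd file).
USERNAME = "reader"
PASSWORD = "letmein"
REALM = "repository"

def check_auth(header):
    """Return True if the Authorization header carries the expected
    HTTP Basic credentials."""
    if not header or not header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(header[len("Basic "):]).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return False
    return decoded == f"{USERNAME}:{PASSWORD}"

def respond(auth_header):
    """Decide a request's fate: a 401 challenge for unauthenticated
    clients (all a crawler would ever see), 200 otherwise."""
    if check_auth(auth_header):
        return 200, {}
    return 401, {"WWW-Authenticate": f'Basic realm="{REALM}"'}
```

A crawler receiving nothing but 401 responses and the `WWW-Authenticate` challenge has no content left to harvest, which was the hope expressed above.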

Consumerists would point out that I’ve been doing all of this wrong, of course. Firstly, they would tell me that I should have moved my repositories to GitHub for the convenience of having Microsoft do the hosting and me paying nothing for the privilege. They would tell me that having GitHub suck all of my content into its “AI” consumption engine, for regurgitation to other users through their copyright laundering “AI” tools, is “no big deal” and that I should be happy to see my code used in so many other places. They would presumably purr about GitHub’s cumbersome and overworked user interface and its “collaborative tools” that, in classic Microsoft style, try and insert the service into everybody’s workflow.

But the end result of me not “going with the flow” and instead paying for the privilege of offering a service on my own terms, contributing to an independent hosting provider and keeping their business viable, allowing other interested parties access to my software, and generally trying to uphold my own privacy and that of those who wish to interact with me and my content, is that I and others who try and uphold our autonomy and defend our own interests are punished by the Internet equivalent of looters and pilferers. And the consumerist response involves an outcome that is not entirely unintended on the part of the Internet’s dominant corporate interests.

Just as it is rather convenient for the likes of Microsoft to promote “AI” tools in education, only to also offer “AI” plagiarism detection tools in their ubiquitous services, it also seems rather convenient that the degraded Internet environment, increasingly subverted to feed “AI” products, just happens to drive people towards services and platforms run by the Internet’s behemoths that offer “AI” products and services as part of their headline feature sets.

Naturally, such corporations would claim innocence of any random botnet involvement, rather like how conventional suppliers of physical products routinely claim ignorance of bad things in their supply chains, but ultimately the finger-pointing is of diminishing significance. If someone cried “gold” and now an entire landscape has been obliterated, does it matter who was driving which excavator as everyone piled in to make their fortune?

The coercion used by software companies and service providers to “opt in” customers and users is, of course, entirely deliberate. Their goal is to make everyone complicit in mass copyright infringement and, amplifying classic right-wing hypocrisy that appeals to “personal responsibility” yet weakens regulation and enforcement, normalise sentiments that “everybody is doing it” and so rule-makers should “not bother” trying to curtail antisocial behaviour.

Such collective antisocial behaviour affects Free Software in other ways. It causes a degradation of software quality and Free Software contributions, potentially for malicious purposes, grinding down contributors. All of this effectively conspires against independent, modestly-resourced Free Software projects, at the very least inhibiting or shutting down open collaboration, and at worst entirely eliminating such projects as alternatives to well-resourced corporate projects.

People may claim to be applying “AI” tools with the best of intentions, but those using the tools seem to be happy for them to blatantly plagiarise other works and then effectively mark their own exam, denying what is obvious to any moderately competent observer. Or in the ominous words of one such practitioner:

“Beats me. AI decided to do so and I didn’t question it.”

And, as we have seen before, the result is a degradation in Free Software offerings, of impaired desktop experiences, of inscrutable technologies promoted by vested interests and proliferated unquestioningly, and the continuing need for all of us who advocate Free Software adoption to apologise for the state of the software we recommend, as proprietary software companies cash in on the perpetual “lack of polish” or other deficiencies, perceived or real, of that software. Fewer and fewer viable choices remain, driving the average person into the arms of the monopolists, just as they always intended.

If nothing else, Free Software advocates should be pushing back hard against “AI” proliferation, but many of them won’t. I know to my personal cost what adhering to a set of principles entails, but many people are rather more flexible when it comes to following through on what they supposedly believe in. Free Software advocacy typically sits at the intersection of at least two professions: software development and legal practice. As software developers, plenty of people claim to understand the nuances of AI, readily telling you that you have it all wrong in your criticism of the technology.

And software practitioners are often frustrated by law practitioners and their inability to correctly perceive and interpret the nature of technology. Yet, if the legal profession has concerns about AI, why should other professions accept its use so unconditionally? But that is what people do: people who should know better, who claim expertise and the right to lecture others, and then somehow wave away any concerns because it suits them. When legal cases collapse due to “vibe lawyering”, justice may be denied and crimes may go unpunished. When software fails due to vibe coding, the effects risk being just as serious and, in some cases, even worse.

Practitioners, institutions and policy-makers seem to have, for their own reasons, an inability to confront awkward problems and the grind of getting the job done. Consider the case of a catastrophic data leak which imperiled thousands of people due to inappropriate tool usage and a lazy office culture where people couldn’t even be bothered to add two words to an e-mail subject line as the only, almost laughable, technical safeguard against data breaches.

When that becomes just another opportunity to peddle and to apply “AI”, it not only demonstrates that everyone responsible has effectively “checked out” and cannot be bothered to think very hard about the basics, despite the eager software practitioner’s tiresome refrain of “security!” in every technical forum they dignify with their presence. It also shows the ethical deficit these people have with the rest of society and the people whose lives depend on their diligence. Because we know that “AI” will be just another scapegoat when things go wrong, excuses will be made, and the “vibe” will go on.

Lawmakers, meanwhile, seem infatuated with their new toy, as they also were with predatory social media, delighted to merely rub shoulders with indulged foreign oligarchs, potentially eyeing the possibilities of lucrative sidelines or post-political positions, instead of developing and furthering the interests of those who elected them to office. As has been the case with other topics of concern, notably software patenting, it seems that lawmakers can be very happy to listen to selfish commercial interests from beyond their electoral boundaries instead of the people they are supposed to represent. (Hint: “the south-west of England” is not the region one such lawmaker was elected to represent.)

Thus, blatant plagiarism, pilfering and infringement under the pretense of a “creative” act seems entirely reasonable to distracted lawmakers, never mind that letting some of the highest valued corporations on the planet have free and unencumbered access to the lucrative output of a nation’s supposedly prized creative industries is likely to plunge those industries into economic ruin. In the case of the United Kingdom, this would be only another chapter of the nation’s leadership stupidly squandering what remains of the cultural “soft power” that the nation once had, only instead of doing so to pander to the bigoted, the ignorant and the deceived, it would be to the kind of people who gladly facilitated that earlier deception.

Some might claim that “expanding copyright” to prevent “AI” misuse of content is wrong, noting that training activities are perfectly legal and justifiable (obviously ignoring the costs incurred by those of us who pay for our Web hosting), and likening the publication of a model to “publishing facts about copyrighted works”. But what about the publication of “AI”-generated works? The suggested “simple way” to “protect artists from AI predation” involves withholding the application of copyright to such works, preventing Big Content from monetising such works, and thus deterring Big Content from adopting “AI” and firing its creative workers.

While that sounds like a great economic “hack”, it doesn’t confront the broader phenomenon of the cheapening of content at all. Big Content has arguably already pivoted to technology, streaming, and the like, but even if they might suffer from such policies, it does not mean that creators will gain. There are plenty of “slop” creators out there today whose business models do not rely on asserting copyright for their works. Will manufacturers of weird jigsaw puzzles care? They just want a stream of free stuff to slap on their products, and what if someone clones them? Well, that was last season’s product.

Allowing the “AI” peddlers to consume and regurgitate copyrighted works without constraint, allowing them to circumvent copyright under the pretense of “creativity”, is still harmful even if the peddlers cannot “protect” those regurgitated works. If someone consumes a Free Software application or library, waves the magic wand of “AI”, and then publishes it, the mere availability of this dubious derivative cheapens the original software, undermines its licensing by steering potential users to a “public domain” clone, reduces incentives to continue developing the original software, and thereby reduces its viability. Flooding the zone with such slop may benefit corporations wanting to circumvent Free Software licensing, but it does not benefit Free Software.

The Atomisation

There are numerous social and economic threats from the introduction of technology under the banner of “AI” that might justifiably elicit a negative reaction from a lot of people. When such reactions are articulated, the response from “AI’s” cheerleaders tends to involve labelling them as “emotional”, “touchy” and “irrational”. Of course, this is just a cynical way of avoiding any kind of constructive discussion about the impact of the accompanying harmful economic agenda, reminiscent of all of the other shifts in industrial policy that left people disadvantaged and impoverished.

In previous transitions involving something that could be described as automation, there was always the chance that those whose manual work was eliminated by the introduction of machines might still benefit from the exercise. When the production of textiles or clothing was to be automated, for instance, there might conceivably have been work to be had in applying one’s expertise to the design of the machines themselves. And there might also have been a limit to the automation, preserving opportunities for those particularly skilled in those tasks beyond the capabilities of the machines.

But today’s enthusiasm for “AI” suggests that whole categories of jobs will be eliminated, that no-one will write code any more, or write prose, draw, paint, make music, and so on. And the intention of those setting this agenda is that there will not be any possibility of somehow migrating to either the automation side of this transition, especially since coding in “AI” companies is meant to be left to the “AI”, or to more specialised forms of paid labour.

Previous transitions were handled very poorly indeed. In Britain, the phrase “on your bike” was largely the economic strategy of the Thatcher government as it gutted various industries, effectively condemning regions of the country to underdevelopment, unemployment and hardship, exacerbating inequalities within the country and divisions that persist to this day. Those too young to know or to remember might recognise some of the cultural phenomena from that earlier time because we see them again now in similarly potent forms, not that they ever really went away.

The practice of “divide and rule” is used to pit disadvantaged and marginalised groups against each other, steadily degrading other sections of society to make them poorer, weaker and too concerned with their own survival to question the general direction of society. Foreigners, immigrants, those with health problems, those not blessed with wealth, those who have otherwise experienced misfortune that blights their lives, and others are conditioned to expect less from their own lives, to feel guilty about their own situation and for needing or expecting help, or even asking for it.

Accompanying this is the promotion of “charity” and demonisation of taxation. How can anyone argue against charity, one might ask, if it is to do good in the world? One can argue against charity being a phenomenon that, instead of being a way of helping others, is used to diminish the help dispensed by society and to make such help entirely discretionary, conditional on the whims of the supposedly generous donor. Such a phenomenon serves only the wealthy who then get to choose where society’s money is spent, instead of paying their taxes and allowing broader society to make such decisions for itself.

Thatcher’s Britain was famous for the pursuit of selfishness, but our societies today face what might be called the “atomisation” of society where everyone is encouraged to pursue their own particular agenda and reward their own selfishness. Every time someone pipes up with the idiotic remark “nah, I’m good”, especially when concern is expressed for the weak or defenceless in society, it is merely another expression of the more culturally established (and derided) “I’m alright, Jack”.

But what such selfish remarks effectively signal is “more for me” from someone who already has plenty. And it obviously signals “less for everyone else” regardless of whether everyone else can manage with less or not. It is where people are encouraged to look out for their own personal interests at the expense of society, not realising that society makes their Amazon Prime deliveries possible in the first place.

Those impressed (or maybe bamboozled) by “AI” may remain unconvinced that the phenomenon might be an unsustainable and unprofitable bubble, one that consumes more investment than it can ever pay back, and that its most “disruptive” form has no genuine or necessary applications. After all, it is very popular and, for some, a nice little component in their plump investment portfolio featuring Nvidia and the other technological horsemen of the apocalypse. Why on Earth would anyone question its viability? Or in other phrasing:

“What are the millions of people using GPT for, if it’s a solution looking for a problem?”

The answer to such a question is this: it is for hollowing out forms of work that are fulfilling or professional, leaving the tedious, hard-to-automate stuff to human beings who will end up doing commoditised work, with all the “hustle” for that work and the driving down of salaries that it entails. All so that people whose livelihoods and lifestyles have been ringfenced one way or another can be entertained for a few seconds at a time with their “AI” videos and other “slop”.

The Degradation of Expectations

As neoliberalism took hold, with the privatisation of public services and the deregulation of various markets, consumerism was the shiny trinket that was used to distract from any hardship that was experienced or from the structural weaknesses being introduced. Having more choice in the shops may have been welcome, and if you couldn’t afford to buy anything, there was plenty of credit sloshing around to let you join the party. As for those public services, there were shares you could buy to join that party, too.

Decades of neoliberalism have given us consumerism as a solution for everything, seeing the replacement of basic, universal services with market-driven “consumer choice” involving a bunch of different “providers” who may or may not offer the same quality of products or levels of service that the customer would have a right to expect. Exciting as it may be for some to be able to choose between different flavours of utility provider, postal service, train or bus company, and so on, it all rewards the people with plenty of time, money and a propensity for getting bored easily to “shop around for the best deals”, leaving everyone else disadvantaged by unscrupulous businesses who, according to neoliberalism, merely occupy a place in the market appropriate for the “value to the customer” that they deliver.

Even where public services are maintained, consumerism and the attraction of new toys amongst ostensibly bored or ideologically fixated politicians can easily start to corrupt those services for the benefit of private operators whose only interest and instinct is to make money while they can. Having new toys to play with supposedly makes life more exciting for similarly bored and easily distracted members of the general population, never mind that they burden other public services with the consequences of indulging antisocial behaviour and make the lives of other individuals miserable.

When such companies have, for example, a record of perpetuating their abuse of healthcare professionals overwhelmed during a pandemic, what right do they have to make demands of anyone so that they can keep going with business as usual? But compliant politicians will continue to pander to them. After all, the religion of neoliberalism elevates those companies and their greedy founders to objects of worship.

And if those companies can damage publicly run services to the point of sufficient public dissatisfaction, those politicians can claim that the state always fails at everything and should leave such “business” to business. Naturally, those politicians do not offer to resign from their own jobs, although they are, I suppose, already doing the bidding of private enterprise, just not being willing to forego their publicly funded salary.

In both the public and private realms, many of us are presumably familiar with certain trends. When interacting with companies to obtain help or support with their products and services, or when attempting to navigate the public bureaucracy, one might recognise what I call the notion of “penalty laps” to compensate for cheapened and hollowed-out services, making the customer or the user spend time doing unnecessary and pointless work to prove that they deserve actual support.

One cannot simply communicate with another human being, but must instead interact with a chatbot first, which merely parrots information that is readily available and often of little help to anyone actually requiring support. Or maybe a long sequence of questions, deployed on a Web site or over the telephone, must be carefully navigated before a human can be summoned to communicate. Such inconvenience is used to herd the increasingly unhappy customer or user into other channels, framed as being more “convenient” and almost certainly in the form of an “app”.

The easily amused or distracted customer or user may think that they are getting better service for free, but they are in fact subsidising the operating costs of the institution concerned. And so, businesses and institutions continue their externalisation of operating expenses, insisting that customers or users provide their own equipment to interact with a business or service, paying the advertised costs as well as the hidden costs of acquiring a nice phone, insuring it, replacing it when those institutions decide that it is “too old”. You pay to work for them now, in case you didn’t realise.

Proof of “work” may have been the selling point of ruinous, dubious and entirely unnecessary cryptocurrency schemes, but making people continually prove that they are somehow worthy is an established trait of an exploitative society. Given that the neoliberal society will continue to eliminate decent, fulfilling work, one might expect efficient mechanisms to help people find other opportunities. But instead of making opportunities for people who seek help, such a society and its institutions have such people effectively doing useless busy work applying for non-existent jobs in a largely fictional “market”.

And with no economic strategy or vision, but with a worldview that involves pulling the public purse strings as tightly as possible to close the purse, they perversely create plenty of jobs in the bureaucracy for people to administer penalties and to deliver judgement on other people’s personal situations. Didn’t apply for enough meaningless non-jobs or unsuitable, informal, casual work? Then the entitled people in the bureaucracy aspiring to be like their managers and political leaders, cultivating a belief that they “deserve” their opportunities unlike those lazy people on their books, will deny the help that people seek just to be able to turn their lives around.

Because the attitude in the neoliberal economies is that looking for a job should actually be a job. So, the branch of the state administering work-related benefits is mostly there to coerce people into looking for jobs that they aren’t suitable for, causing huge volumes of speculative applications that even the applicants know are senseless, making it harder for genuine recruitment to occur.

And then there are all the fake job adverts, either posted to cover up nepotism or corrupt practices, or to puff up some company’s image, or to give functionaries something to do. And people wonder why it is that there’s no real economic growth, allowing our glorious leaders to claim that there is no money to spend on building up society, that improving the quality of life for those who need it will just have to wait.

“We can’t afford it” is the perpetual excuse for a lack of public services and crumbling infrastructure. Can’t get to see a doctor or another healthcare specialist? With the zone flooded with “AI”, desperate people turn to desperate solutions with disastrous results. We should, of course, expect better support for the people who need it in our societies, from actual human professionals, but that requires investment and commitment. Expect to be fobbed off with technological toys that cosplay the experience of interacting with professionals instead. Unless you are wealthy or well-connected, because then only the best will do.

“AI” is just the latest escalation in the practice of “divide and rule”, facilitating the targeting of individuals to such a degree that the powerful can go beyond merely targeting minorities and smaller groups while having to indulge the majority to keep them passive and broadly supportive of such cruelty. With “AI”, the powerful can potentially pull each person apart from those closest to them, corrupting their communications and poisoning their relationships, to a degree not even achieved through the manipulation of people’s lives by predatory social media. Populists and the affluent will gladly embrace “AI” for all the amusement it offers and for as long as it lasts, unaware that there may no longer be a “safe” majority for them to hide within any more.

It is not exactly an exaggeration to consider “AI” as an existential threat requiring collective action, not least because of its ruinous power and resource consumption, coupled with the ineffective measures to tackle climate change that are formulated by those politicians always encouraging us to wait for better times. Such times will never arrive if the barons of “AI” and others who prioritise their own wealth are setting the agenda, of course.

There seem to be plenty of people who think that by enthusing about ruinous practices like “AI”, parroting the rhetoric of the oligarchs, and otherwise only caring about things that benefit them personally, they will somehow get to join that club of the wealthy, that they too will get to go on the spaceship.

To those people, I can only say this: instead of joining the club, even you will find yourself alone, having been torn from the fabric of the society you helped to obliterate, but instead of living your best life, you will be miserable and there will be nobody left to defend you.

And by the way, there is no spaceship.

The Scandisplaining of Digital Freedoms

Monday, April 6th, 2026

Recently, the Norwegian Consumer Council has been enjoying a degree of publicity for a campaign they have been running about the “enshittification” of the Internet, riffing on the overused term coined by Cory Doctorow to describe the deliberate degradation of products and services given an absence of real choice and competition in the marketplace. Naturally, international news organisations have lapped this up as another example of supposedly progressive Scandinavian social and political priorities. That plucky Norway could show the rest of the world how to deal with predatory Big Tech.

As always, the story is rather more nuanced to anyone familiar with how things typically go in Norway and, I can well imagine, the rest of Scandinavia. First of all, one can justifiably wonder where these people have been living for the past quarter century. Time and again, Free Software advocates have pointed out that a reliance on proprietary software and platforms ultimately harms individuals, institutions and societies. Over ten years ago now, I myself sought to prevent the introduction of a proprietary groupware platform in a public institution that had been my employer. By the time I was meeting the hostile and dismissive leadership of that institution, I wasn’t even working there any more. The meeting ended with the overpromoted head of the institution, flanked by his privileged and/or hectoring enforcers, insisting that “Microsoft would never do anything that wasn’t in their customers’ best interests”. In the sitcom version of events, cue the laughter track.

I ended up doing some legwork in my own time to dig into the nature of the commercial arrangements between the institution and its supplier, but all the “commercially sensitive” bits involving actual monetary amounts were redacted. My own motivation to pursue the matter was rather tempered by the fact that some of those who felt that this commercial arrangement impeded various workplace freedoms of theirs would not pursue the matter themselves. After all, they didn’t want their nice salary and other bespoke workplace arrangements in their permanent employment position endangered by any kind of actual activism. Evidently, this was the job of the guy whose temporary contract had ended. Having presented my findings, nothing further happened and those precious freedoms were not generally upheld. But technical workarounds let various people pretend that business could proceed as usual. Their nests remained fully feathered. Screw the plebs: they would have to get used to Exchange, anyway.

I wish I could claim prescience in the whole affair, but it was pretty obvious how things would end up going. I remarked that at some point, “on-premises” Exchange would be phased out in favour of a cloud-based solution, likely to be what I tend to call Office 360. Fast-forward to recent times, and of course that is exactly what has been happening, with the institution presumably pleading poverty. Why not just make your employees customers of a foreign corporation, regardless of the wrapping of the institutional package? They have to take that deal whether they like it or not. There may have been some chatter about these new arrangements; I cannot currently consult my own archives to check, but I would have been right to say “I told you so”. Some of the supposed champions of freedom may be more concerned that with a full-on migration to the cloud, all those neat workarounds of theirs might finally become obsolete. Maybe it will finally become time for them to face up to everybody else’s reality.

Once Upon a Time

For a time, Norway had a public agency that was meant to promote Free Software and interoperability in the public sector. Lobbied by the usual proprietary software vultures, the incoming right-wing government happily shut it down in a wave of the usual austerity that such governments love to inflict on public institutions, public infrastructure, and the wider population, just as they slash taxes for the wealthy in the name of “wealth creation”. Precisely these kinds of political choices, familiar from countries with more obvious records of punitive austerity, like Britain under the likes of Margaret Thatcher, John Major, David Cameron, and the subsequent clown car parade of prime ministers in the last Conservative administration, degrade societal resilience and undermine things like digital and technological sovereignty that are now suddenly in vogue.

An international audience might be surprised that supposedly egalitarian and progressive Norway might exhibit such traits. Comparable political shifts in Sweden and Denmark, undoubtedly inspired by the cruelty-enabling culture of “personal aspiration” (selfishness, in other words) promoted by British Conservatism, have similarly gone unnoticed or have been gradually forgotten. That a bunch of people in Norway haven’t managed to follow along rather suggests that the tradition of navel-gazing is alive and surprisingly well. After all, if the gravy train kept running from your station, then what was the problem again, exactly?

This latest initiative’s open letter to the Norwegian government notes that the French public sector made concerted efforts to introduce Free Software from 2012 onwards. How quickly people forget that back in 2012, that soon-to-be-culled Norwegian public agency was trying to bring the Norwegian public sector round to undertaking similar kinds of endeavour, doing it the celebrated Scandinavian way of not treading on too many toes. Naturally, such an approach was never going to be resistant to the kind of predatory corporate interests who routinely siphon billions of crowns, pounds, euros and dollars from the public sector locally and internationally for supplying their mediocre and often blatantly deficient products and services, subjecting governments and thus taxpayers to coercive, ruinous and yet seemingly perpetual contracts.

Those previous efforts might have been envisaged as a viable means to a righteous end, but they may have ended up being regarded simply as a nice supplement for those already engaging in the kind of advocacy that makes people feel like they’re “doing something”. It was another voice in the chorus of righteousness, and with an accompanying annual conference, it was yet another venue to talk about things and congratulate each other, rather than do those things necessary to actually advance the cause. It was even held in Svalbard on one occasion, if I remember correctly, because nothing says more about a commitment to sustainability than having a bunch of people jet off to the realm of polar bears and those melting ice floes.

So, what things would have advanced the cause, then? Well, the first thing would have been to actually fund Free Software at scale and to make sure that when people tout solutions for widespread use, they are actually fit for the job. And no, gathering up a bunch of existing projects and promoting them is absolutely not the same thing. When I investigated Free Software groupware solutions, the popular wisdom was that Kolab had “solved” groupware many years earlier. It turned out that Kolab had been rewritten as version 3 and was inadequate in a number of ways: a half-finished solution. Efforts to engage with the developers proved futile. Despite pitching the software as a collaboratively developed Free Software project, all they really cared about was whether the software would support the operations of a now-liquidated Swiss company riding the privacy bandwagon and largely targeting the Jason Bourne brigade.

Such experiences made me suspect that Kolabs 1 and 2 might not have been adequate all along, either, possibly pitched as a good-enough solution to a problem that hadn’t been fully understood, all to serve various big-fish, small-pond commercial interests. Later on, I discovered Zarafa, which became Kopano, and wished I had found it earlier. It may have been a better choice, not least because the dopes insisting on Microsoft Everything would have seen the Web interface and thought that it was straight out of Redmond, unlike Microsoft’s own Web-based Outlook solution, perversely. Sadly, as a sign of our depressing times, Kopano is now becoming (or has become) a cloud-only product.

It may seem obvious, but it still needs saying: general advocacy and encouragement isn’t sufficient; people need working solutions. And experience also shows that one cannot leave it to “the market”, whatever that is in Free Software. For many years, I have used KMail to read and send e-mail. It remains surprisingly usable today, “surprisingly” because its developers decided at one point to adopt some weird middleware layer called Akonadi, entranced by the promises made by Microsoft and/or Apple to deliver pervasive “desktop search” capabilities in their own products. Whether Microsoft or Apple actually delivered on, or more likely abandoned or scaled back, those promises, I am now compelled to run the command “akonadictl restart” almost every day to “unwedge” my mail client and get to see newly arrived mail.

(It also didn’t help that the developers introduced MySQL – now MariaDB – into the mix. In the maintenance of that product, which throughout its existence under its various names could uncharitably be described as Monty Widenius’ Flying Shitshow, someone decided to bump a version number in a minor (or actually a patch-level) release, causing the whole stack of software to refuse to access the arguably unnecessary database underpinning KMail and making my mail inaccessible. Fortunately, my case was heard within Debian, and remedies were eventually applied. Before that, I had to recompile the package with an appropriate workaround. A victory for Free Software pragmatism, but good luck to the average user suddenly staring down a potentially indefinite e-mail outage.)
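For anyone resigned to the same daily ritual, the restart can at least be scheduled rather than typed. A hypothetical sketch using a user crontab entry – assuming akonadictl is on the PATH and that restarting Akonadi at a fixed time suits one’s working habits – might look like this:

```shell
# Hypothetical crontab fragment (add via "crontab -e"), not an endorsed fix:
# restart Akonadi at 06:30 each morning, logging any complaints for later.
30 6 * * * akonadictl restart >> "$HOME/.akonadi-restart.log" 2>&1
```

Whether this papers over the problem acceptably depends on the failure mode, of course; the real remedy remains fixing, or doing without, the middleware itself.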

Free Software groupware applications, like the overall desktop experience, stopped showing year-on-year progress in functionality some time ago. Already degraded in various ways when the developers of such technology became distracted by what the big players said they would be doing, the arrival of social media seemed to make some developers believe that the era of the mail program had ended altogether. It apparently became more important to some of those developers to add “share on Facebook” menu items to random applications than to ensure that their applications were still usefully serving their loyal users.

(Observations that technologies like ActivityPub and applications like Mastodon can supplant boring old e-mail and that they have shown considerable growth, and yet remain in a niche, rather overlook – indeed, neglect – the fundamental variety in groupware and collaborative technologies. A strategy of “ActivityPub everywhere” is like keeping the big hammer and throwing away all the other tools in the toolbox. One might suspect that it is only now gaining traction because there are people who want a similar kind of buzz to the one they get from their favourite doomscrolling services but feel bad going back to the same, increasingly disreputable dealer.)

The lesson here is that someone firstly needs to develop functional software, but then to check and double-check that functionality, as well as continuously verifying whether the software meets people’s needs. This cannot be left to random developers or to companies. The big Linux distributions never really cared enough about the average user to finish the job, merely bundling stuff and maybe hiring developers to either dabble with their projects or to make them only good enough for narrow corporate advantage. As far as Red Hat’s bottom line was concerned, all that ever really mattered was a placeholder desktop good enough to do a bit of point-and-click system administration for a bunch of file and print servers propping up a bunch of Windows desktops, or for software development most likely involving Java and targeting “the enterprise”. Such companies happily make their own employees use proprietary software and services for the kinds of tasks that the average user does, regardless of whether they might be using Free Software office and groupware suites instead.

The right approach would have been a concerted government initiative resistant to lobbying and corruption, not mere advocacy, nudging and cajoling. Genuine standards and interoperability could have been mandated and corrupted pseudo-standards like Microsoft’s fast-tracked office formats rejected. Agencies like Statistics Norway should have been taken to task for stipulating “.doc” as their chosen “interoperable” format, with those responsible sent back to finish, or maybe even begin, their education. One might have learned from experiences in other countries, like that of the public key encryption software Gpg4win in Germany, where a genuine governmental need transformed the financial viability of the GnuPG software project from one which had been chronically underfunded and practically relying on the charity of its principal developer to a thriving, viable enterprise.

Proprietary software lobbyists had criticised Norway’s earlier soft-touch efforts, claiming that the public agency concerned was subsidising uncompetitive software that was presumably the work of hippies and communists. There was one case of a public institution wanting to give money to a Free Software project in the realm of PDF generation, if I recall correctly. Upon discovering that it was Free Software, decision-makers refused to make the donation: after all, if those people were giving their code away, why pay after the fact? Such paper-pushing idiots evidently failed to understand that such windfalls may only happen once. Some of them would undoubtedly and routinely use the Norwegian word for “farmers” in the pejorative way for people they might consider ignorant, and yet farmers manage to understand that harvests do not magically occur and recur without cultivation and sustenance.

The right approach would also have involved mandating Free Software for publicly funded projects and for public infrastructure, as advocated by the FSFE’s Public Money Public Code campaign. Proprietary software interests would undoubtedly howl at such stipulations, claiming that their secret sauce software, supposedly written by Top Men, would be unfairly excluded from such markets. But just as even some ostensibly left-wing politicians have forgotten, “markets” only exist at the indulgence of governments and regulators, and they only operate in the public interest if properly framed and regulated. Don’t want to give your customers the freedom to maintain the code they are paying for? Feel free to seek opportunities elsewhere, then. Cushy lock-in deals for the locally well-connected should have gone the way of Norsk Data when that company fell to Earth.

Planet Norway

Attitudes to societal threats seem to be remarkably relaxed in our supposedly enlightened democracies and their institutions. The casual, pervasive use of predatory social media platforms continues, propped up by state institutions claiming that they need to have a presence in all the different channels, but where one suspects that a few managers and their appointees just want to play with some toys and puff up their public profiles. Instead of leveraging the resources of the state and providing reliable channels of communication, such bodies post announcements, updates and nonsense via foreign-owned hate speech venues. Norwegian political party leaders even decided at one point that they had to promote themselves on Snapchat, egged on by one of the national broadcasters. Now, we see the unquestioning adoption and promotion of “AI” and chatbots by institutions that risk being obliterated by such technologies.

At the individual level, concerns about “screen time” and the use of tablets and other devices in places like primary schools have been aired and may well be justified, given the likely developmental impact on children of such devices, but it all comes across as pearl-clutching when one suspects that the lifestyle of the vocal parents probably revolves around their phone, “apps”, streaming services, and rather too much screen time of their own. And some of those concerned about screen time would probably drop their objections if a study conveniently came along to assuage their worries. It is just like those very Scandinavian traits of stuffing tobacco products into one’s mouth or spending time at the tanning studio, neither of which are actually healthy. Over the years, various pseudo-academic figures and findings have occasionally floated up into public prominence, insisting that such things are perfectly fine. Why wouldn’t that happen with technology? The companies involved could certainly afford to pay for a bit of fake research and a few willing advocates.

One gets the impression that many of the different factions that might coalesce around a campaign about “enshittification” aren’t really trying to achieve systemic change: they merely want to negotiate a better deal. Such people knew what they were getting into by using free-of-charge services and often explicitly rejecting genuinely free alternatives which cost only modest sums to run. Such people were also aware that their data might wander off into the cloud and away to places where it would be mined and exploited for all it is worth, but caring about it was just too much bother. In institutions, all it takes is for Microsoft to “pinky swear” that it complies with data protection regulations. Institutional capabilities are then run down and alternatives abandoned, just so the toys can be unpacked, sending non-trivial sums overseas instead of cultivating knowledge, opportunities and wealth locally.

One wonders how seriously people really want to take such matters, or whether they just want a hobby and to feel good about a bit of casual activism. I am reminded of the climate litigation brought against the Norwegian state for its continuing policy of fossil fuel extraction, noting that climate change presents an existential threat reaching far beyond the confines of the nation, affecting the population of the entire planet, but where the country’s constitution at least worries about the state of the nation for future generations. The outcome – a defeat for the litigants – might not merely be described as positioning Norway as having “first world problems”: that would be business as usual. Instead, it might reasonably be described as situating Norway on a planet of its very own. One where the mere accumulation of money protects and even “benefits future generations”, evidently.

Hobby activism is typical of places where the stakes remain relatively low for those doing the campaigning, but on Planet Norway it arguably reaches another level, where hardship along any given dimension is often perceived to be a problem that only foreigners in poorer countries experience (or poorer planets, maybe). Or it is marginalised and framed as something that only affects a vanishingly small number of people, conveniently aided by policies and attitudes that seek to hide those who are struggling and blame them for their own predicament. But a reckoning is surely overdue even in matters of preference as opposed to need. After all, what good is it to advocate that children learn to code if nothing is done about the way “AI” is devastating Free Software and undermining paid work?

Contrary to outsider perceptions of money flowing freely in Norway thanks to the oil fund, the purse strings are generally not loosened for those whose professions have been gutted. Younger people, meanwhile, are more likely to be told to take shifts at Ikea than to get the help they need. Yes, this is actually a thing. It also explains why on one local recruitment site, when filling out one’s employment history, the default value in the employer field reads (or read when I last checked) “Ikea Furuset”: one of the two full-scale Ikea stores serving the Oslo area. Not that there’s anything wrong with working at Ikea, but I doubt that the kind of aspirational parents hoping to give their little darlings a head start in the world, funding their higher education and other ambitions, would see it as a fitting venue for their offspring’s many talents.

Yet Another Elephant in the Room

This latest campaign recommends Free Software and open protocols in public procurement, which is what previous efforts pretty much did, too. The accompanying report even suggests funding alternatives, but then delegates this to discretionary funds and foundations, conveniently avoiding the structural issues. But the very reason why “dominant big tech companies have deep pockets” is through a perversion of economic incentives. Firstly, they have cultivated an “expectation of zero” where individual and institutional customers expect software and services to cost nothing. Thus, any investment in software is regarded as unnecessary because those nice corporations are giving away shiny free stuff. They also front-run various standards to make any kind of competition ruinously expensive to pursue.

And yet software cannot be developed without expenditure, and certainly not with the latest instrument being used to sustain those cultivated consumer expectations. “AI”, which is hyped to make it seem like software can be whipped up at a moment’s notice at no cost, relies on industrial scale plagiarism, colossal data centre and hardware investments, and ruinous levels of power consumption. What has funded these scorched earth tactics is a corrosive business model that inflicts highly lucrative but consequence-free, unpoliced advertising channels on billions of people. Developers at predominantly American technology companies aren’t expected to work for free, after all. We are apparently meant to feel sorry for them. Some of them have to pay gentrification-level prices for their homes in places gentrified by themselves and their colleagues. Their bosses expect huge bonuses, their own yacht, island, spaceship…

Claiming the juvenile right of unrestricted free speech to drive engagement, Big Tech has largely allowed unregulated commerce to proceed, undermining traditional safeguards, endangering individuals, and threatening and even shuttering viable, responsible businesses. Any efforts that ignore such structural issues will fail to find the money required to make a difference. I, or my Web publisher, may be held to account for what I write, but anything goes on the predatory Big Tech platforms, whether it is the puerile variant of “free speech” cultivated by the increasingly fragile American dream, or whether it is fraudulent or outright illegal advertising promoting dishonest and criminal enterprises. Allowing such business to continue as usual simply enriches these predators while impoverishing ourselves.

The operators of those platforms are getting a free ride at a severe cost to us and our societies. It may not seem like it to the random punter getting “a great deal”, but we all pay for that deal in the end. And even little Norway has its own commerce platforms that look the other way, especially where anything related to the property bubble is concerned. Why not advertise your rental property using the fancy sales prospectus from a few years ago when you or a previous owner bought the property from the developers? Oh, “caveat emptor”, of course. How about advertising a property with a non-existent address? How much money is laundered even in little Norway, or is that kind of thing only done by foreign people in less enlightened countries? Maybe even the same people whose oil is “dirty” while Norway’s is “clean”.

Anything short of changing the flawed terms under which those companies operate, which one can barely believe are legal in the first place, is nothing more than consumerist tinkering. Naturally, there will be howling from entitled consumers, happy to have random people scurrying around with their urgent shopping deliveries, just as established, essential services like postal mail risk being degraded to the point of near uselessness or even eliminated altogether (which is something that might blow the minds of media people in countries like the UK where postal services still largely hold up and where deliveries still happen six days a week). But society cannot pander to people’s elevated levels of personal entitlement forever, despite the best efforts of populist politicians living in their own bubble of affluence.

I suppose I could be accused of being a simple “outsider” who still doesn’t understand Norway after all these years. Recently, I read a ridiculous piece claiming that Norwegian culture and society do not tend to focus on personalities. That would be news to readers of gossip magazines, newspapers, and even the finance magazine I would read in a medical specialist’s waiting room, keeping us all up to date on which members of the Norwegian financial and legal elites were suing which other members of those elites, all while name-dropping and generally cultivating individuals as movers and shakers.

But instead of cherry picking parts of the nation’s broader, pan-Scandinavian heritage – Janteloven in that particular case – and perpetuating all the other familiar tropes, so often featured in selective, favourable cultural projection delivered through credulous or lazy journalistic coverage, maybe lessons could be learned from one of Hans Christian Andersen’s more famous tales. It really doesn’t take very much to point out that notions of Scandinavian preparedness in the face of digital exploitation and its accompanying threats are somewhat overstated. Anyone can take a closer look at the emperor’s reputation as a fully clothed, well-tailored individual, if they actually care to.

I just wouldn’t recommend that anyone hold their breath for too long in anticipation of decisive, credible Scandinavian action that might show the wider world the way forward. After that sharp intake of breath witnessing the emperor in the buff, please exhale or you might eventually expire. Just like in other countries, there would have to be a cultural shift away from shopping for “big brand” software, wheeling in the consultants, having retreats and “away days” learning about the next set of goodies on the proprietary software treadmill, and treating predatory social media platforms as merely a harmless guilty pleasure. Maybe local commerce wouldn’t have to be mediated by “apps” and trillion dollar corporations, either. And there would have to be more than tame, hobbyist lobbying and performative activism: quite the challenge when rocking the boat in countries like Norway is simply never done.

But it all made for a good story about Scandinavia leading the way, as usual, so “job done”, I guess.

Pessimistic perspectives on technological sustainability

Tuesday, August 16th, 2022

I was recently perusing the Retro Computing Forum when I stumbled across a mention of Collapse OS. If your anxiety levels have not already been maxed out during the last few years of climate breakdown, psychological warfare, pandemic, and actual warmongering, accompanied by supply chain breakdowns, initially in technology and exacerbated by overconsumption and spivcoin, now also in commodities and exacerbated by many of those other factors (particularly the warmongering), then perhaps focusing on societal and civilisational collapse isn’t going to improve your mood or your outlook. Unusually, then, after my last, rather negative post on such topics, may I be the one to introduce some constructive input and perhaps even some slight optimism?

If I understand the motivations behind Collapse OS correctly, it is meant to provide a modest computing environment that can work on well-understood, commonplace, easily repaired and readily sourced hardware, with the software providing the environment itself being maintainable on the target hardware, as opposed to being cross-built on more powerful hardware and then deployed to simpler, less capable hardware. The envisaged scenario for its adoption is a world where powerful new hardware is no longer produced or readily available and where people must scavenge and “make do” with the hardware already produced. Although civilisation may have brought about its own collapse, the consolation is that so much hardware will have been strewn across the planet for a variety of purposes that even after semiconductor fabrication and sophisticated manufacturing have ceased, there will remain a bounty of hardware usable for people’s computational needs (whatever they may be).

I am not one to try and predict the future, and I don’t really want to imagine it as being along the same lines as the plot for one of Kevin Costner’s less successful movies, either, but I feel that Collapse OS and its peers, in considering various dystopian scenarios and strategies to mitigate their impacts, may actually offer more than just a hopefully sufficient kind of preparedness for a depressing future. In that future, without super-fast Internet, dopamine-fired social media, lifelike gaming, and streaming video services with huge catalogues of content available on demand, everyone has to accept that far less technology will be available to them: they get no choice in the matter. Investigating how they might manage is at the very least an interesting thought experiment. But we would be foolish to consider such matters as purely part of a possible future and not instructive in other ways.

An Overlap of Interests

As readers of my previous articles will be aware, I have something of an interest in older computers, open source hardware, and sustainable computing. Older computers lend themselves to analysis and enhancement even by individuals with modest capabilities and tools because they employ technologies that may have been regarded as “miniaturised” when they were new, but they were still amenable to manual assembly and repair. Similarly, open source hardware has grown to a broad phenomenon because the means to make computing systems and accessories has now become more accessible to individuals, as opposed to being the preserve of large and well-resourced businesses. Where these activities experience challenges, it is typically in the areas that have not yet become quite as democratised, such as semiconductor fabrication at the large-scale integration level, along with the development and manufacture of more advanced technology, such as components and devices that would be competitive with off-the-shelf commercial products.

Some of the angst around open source hardware concerns the lack of investment it receives from those who would benefit from it, but much of that investment would largely be concerned with establishing an ability to maintain some kind of parity with modern, proprietary hardware. Ignoring such performance-led requirements and focusing on simpler hardware projects, as many people already do, brings us a lot closer to retrocomputing and a lot closer to the constrained hardware scenario envisaged by Collapse OS. My own experiments with PIC32-based microcontrollers are not too far removed from this, and it would not be inconceivable to run a simple environment in the 64K of RAM and 256K of flash memory of the PIC32MX270, this being much more generous than many microcomputers and games consoles of the 1980s.

Although I relied on cross-compilation to build the programs that would run on the minimal hardware of the PIC32 microcontroller, Collapse OS emphasises self-hosting: that it is possible to build the software within the running software itself. After all, how sustainable would a frugal computing environment be if it needed a much more powerful development system to fix and improve it? For Collapse OS, such self-hosting is enabled by the use of the Forth programming language, as explained by the rationale for switching to Forth from a system implemented in assembly language. Such use of Forth is not particularly unusual: its frugal demands were prized in the microcomputer era and earlier, with its creator Charles Moore describing the characteristics of a computer designed to run Forth as needing around 8K of RAM and 8K of ROM, this providing a complete interactive system.
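To give a flavour of why Forth suits such frugal, self-hosting systems, everything reduces to a data stack and a small dictionary of words. The following toy evaluator – a sketch in Python purely for illustration, and nothing to do with Collapse OS’s actual code – shows the core interpreter loop in a few lines:

```python
def forth_eval(source, stack=None):
    """Interpret a whitespace-separated string of numbers and Forth-style
    words, returning the resulting data stack.  Only a handful of
    primitives are implemented; a real Forth would look words up in a
    dictionary that the running system itself can extend."""
    stack = [] if stack is None else stack
    for token in source.split():
        if token == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == "-":
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif token == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif token == "dup":            # duplicate the top of the stack
            stack.append(stack[-1])
        elif token == "swap":           # exchange the top two items
            stack[-2], stack[-1] = stack[-1], stack[-2]
        else:                           # anything else is a number literal
            stack.append(int(token))
    return stack

print(forth_eval("2 3 + 4 *"))   # (2 + 3) * 4, postfix style
print(forth_eval("10 dup *"))    # 10 squared
```

A real Forth adds a dictionary and colon definitions, so new words are compiled from within the running system itself; that is precisely the property that makes self-hosting on tiny hardware plausible.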

(If you are interested in self-hosting and bootstrapping, one place to start might be the bootstrapping wiki.)

For a short while, Forth was perhaps even thought to be the hot new thing in some circles within computing. One fairly famous example was the Jupiter Ace microcomputer, developed by former Sinclair Research designers, offering a machine that followed on fairly closely from Sinclair’s rudimentary ZX81. But in the high-minded way one might have expected from the Sinclair stable and the Cambridge scene, it offered Forth as its built-in language in response to all the other microcomputers offering “unstructured” BASIC dialects. Worthy as such goals might have been, the introduction of a machine with outdated hardware specifications condemned it in its target market as a home computer: it provided primitive black-and-white display output against competitors with multi-colour graphics, and limited amounts of memory as competitors launched with far more fitted as standard. Interestingly, the Z80 processor at the heart of the Ace was the primary target of Collapse OS, and one might wonder whether the latter could actually be ported to the former, which would be an interesting project if any hardware collector wants to give it a try!

Other Forth-based computers were delivered, such as the Canon Cat: an unusual “information appliance” that might have formed the basis of Apple’s Macintosh had that project not been diverted towards following up on the Apple Lisa. Dedicated Forth processors were even delivered, as Moore had anticipated back in 1980, reminiscent of the Lisp machine era. However, one hardware-related legacy of Forth is the Open Firmware standard, where a Forth environment provides an interactive command-line interface to a system’s bootloader. Collapse OS fits in pretty well with that kind of application of Forth. Curiously, someone did contact me when I first wrote about my PIC32 experiments, this person maintaining their own microcontroller Forth implementation, and in the context of this article I have re-established contact because I never managed to properly follow up on the matter.

Changing the Context

According to a broad interpretation of the Collapse OS hardware criteria, the PIC32MX270 would actually not be a bad choice. Like the AVR microcontrollers and the microprocessors of the 1980s, PIC32MX microcontrollers are available in convenient dual in-line packages, but unlike those older microprocessors they also offer the 32-bit MIPS architecture that is nicer to program than the awkward instruction sets of the likes of the Z80 and 6502, no matter how much nostalgia colours people’s preferences. However, instead of focusing on hardware suitability in a resource-constrained future, I want to consider the messages of simplicity and sustainability that underpin the Collapse OS initiative and might be relevant to the way we practise computing today.

When getting a PIC32 microcontroller to produce a video signal, part of the motivation was just to see how straightforward it might be to make a simple “single chip” microcomputer. Like many microcomputers back in the 1980s, it became tempting to consider how it might be used to deliver graphical demonstrations and games, but I also wondered what kind of role such a system might have in today’s world. Similar projects, including the first versions of the Maximite, have emphasised such things as well, along with interfacing and educational applications (such as learning BASIC). Indeed, many low-end microcontroller-based computers attempt to recreate and to emphasise the sparse interfaces of 1980s microcomputers as a distraction-free experience for learning and teaching.

Eliminating distractions is a worthy goal, whether those distractions are things that we can conveniently seek out when our attention wanders, such as all our favourite, readily accessible Internet content, or whether they come in the form of the notifications that plague “modern” user interfaces. Another is simply reducing the level of consumption involved in our computational activities: civilisational collapse would certainly impose severe limits on that kind of consumption, but it would seem foolish to acknowledge that and then continue on the same path of ever-increasing consumption that also increasingly fails to deliver significant improvements in the user experience. When desktop applications, mobile “apps”, and Web sites frequently offer sluggish and yet overly-simplistic interfaces that are more infuriating than anything else, it might be wise to audit our progress and reconsider how we do certain things.

Human nature has us constantly exploring the boundaries of what is possible with technology, but some things which captivate people at any given point on the journey of technological progress may turn out to be distracting diversions from the route ultimately taken. In my trawl of microcomputing history over the last couple of years, I was reminded of an absurd but illustrative example of how certain technological exercises seem to become the all-consuming focus of several developers, almost being developed for the sake of it, before the fad in question flames out and everybody moves on. That example concerned “morphing” software, inspired by visual effects from movies such as Terminator 2, but operating on a simpler, less convincing level.

Suddenly, such effects were all over the television and for a few months in late 1993, everyone was supposedly interested in making the likeness of one famous person slowly change into the likeness of another, never mind that it really required a good library of images, this being somewhat before widespread digital imaging and widespread image availability. Fast-forward a few years, and it all seemed like a crazy mass delusion best never spoken of again. We might want to review our own time’s obsessions with performative animations and effects, along with the peculiarities of touch-based interfaces, the assumption of pervasive and fast connectivity, and how these drive hardware consumption and obsolescence.

Once again, some of this comes back to asking how people managed to do things in earlier times and why things sometimes seem so complicated now. Thinking back to the 1980s era of microcomputing, my favourite 8-bit computer of those times was the Acorn Electron, this being the one I had back then, and it was certainly possible to equip it to do word processing to a certain level. Acorn even pitched an expanded version as a messaging terminal for British Telecom, although I personally think that they could have made more of such opportunities, especially given the machine’s 80-column text capabilities being made available at such a low price. The user experience would not exactly be appealing by today’s standards, but then nor would that of Collapse OS, either.

When I got my PIC32 experiment working reasonably, I asked myself if it would be sufficient for tasks like simple messaging and writing articles like this. The answer, assuming that I would enhance that effort to use a USB keyboard and external storage, is probably the same as whether anyone might use a Maximite for such applications: it might not be as comfortable as on a modern system but it would be possible in some way. Given the tricks I used, certain things would actually be regressions from the Electron, such as the display resolution. Conversely, the performance of a 48MHz MIPS-based processor is obviously going to be superior to a 2MHz 6502, even when having to generate the video signal, thus allowing for some potential in other areas.

Reversing Technological Escalation

Using low-specification hardware for various applications today, considering even the PIC32 as low-spec and ignoring the microcomputers of the past, would also need us to pare back the demands that such applications have managed to accumulate over the years. As more powerful, higher-performance hardware has become available, software, specifications and standards have opportunistically grown to take advantage of that extra power, leaving many people bewildered by the result: their new computer being just as slow as their old one, for example.

Standards can be particularly vulnerable where entrenched interests drive hardware consumption whilst seeking to minimise the level of adaptation their own organisations will need to undertake in order to deliver solutions based on such standards. A severely constrained computing device may not have the capacity or performance to handle all the quirks of a “full fat” standard, but it might handle an essential core of that standard, ignoring all the edge cases and special treatment for certain companies’ products. Just as important, the developers of an implementation handling a standard also may not have the capacity or tenacity for a “full fat” standard, but they may do a reasonable job handling one that cuts out all the corporate cruft.
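To make the point about essential cores concrete, consider how small the core of a standard like HTTP really is, compared to its full modern form with compression, caching, cookies and the rest. A severely constrained client could get by with something like the following sketch, which speaks plain HTTP/1.0 over a socket; this is a rough illustration of the principle, not a complete or robust client:

```python
# The essential core of an HTTP request: a request line, a Host header,
# a blank line. Everything else in the "full fat" standard is optional
# for a constrained client.
import socket

def build_request(host, path="/"):
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii")

def fetch(host, path="/", port=80):
    # Send the minimal request and read the response until the server
    # closes the connection (HTTP/1.0 semantics, no keep-alive).
    with socket.create_connection((host, port)) as s:
        s.sendall(build_request(host, path))
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)
```

A dozen lines suffice to interact with a well-behaved server, which is the kind of core that both a constrained device and a small development team can realistically implement and maintain.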

And beyond the technology needed to perform some kind of transaction as part of an activity, we might reconsider what is necessary to actually perform that activity. Here, we may consider the more blatant case of the average “modern” Web site or endpoint, where an activity may end up escalating and involving the performance of a number of transactions, many of which are superfluous and, in the case of the pervasive cult of analytics, exploitative. What once may have been a simple Web form is often now an “experience” where the browser connects to dozens of sites, where all the scripts poll the client computer into oblivion, and where the functionality somehow doesn’t manage to work, anyway (as I recently experienced on one airline’s Web site).

Technologists and their employers may drive consumption, but so do their customers. Public institutions, utilities and other companies may lazily rely on easily procured products and services, these insisting (for “security” or “the best experience”) that only the latest devices or devices from named vendors may be used to gain access. Here, the opposite of standardisation occurs, where adherence to brand names dictates the provision of service, compounded by the upgrade treadmill familiar from desktop computing, bringing back memories of Microsoft and Intel ostensibly colluding to get people to replace their computer as often as possible.

A Broader Brush

We don’t need to go back to retrocomputing levels of technology to benefit from re-evaluating the prevalent technological habits of our era. I have previously discussed single-board computers like the MIPS Creator CI20 which, in comparison to contemporary boards from the Raspberry Pi series, was fairly competitive in terms of specification and performance (having twice the RAM of the Raspberry Pi Models A+, B and B+). Although hardly offering conventional desktop performance upon its introduction, the CI20 would have made a reasonable workstation in certain respects in earlier times: its 1GHz CPU and 1GB of RAM should certainly be plenty for many applications even now.

Sadly, starting up and using the main two desktop environments on the CI20 is an exercise in patience, and I recommend trying something like the MATE desktop environment just for something responsive. Using a Web browser like Firefox is a trial, and extensive site blocking is needed just to prevent the browser wanting to download things from all over the place, as it tries to do its bit in shoring up Google’s business model. My father was asking me the other day why a ten-year-old computer might be slow on a “modern” Web site but still perfectly adequate for watching video. I would love to hear the Firefox and Chrome developers, along with the “architects of the modern Web”, give any explanation for this that doesn’t sound like they are members of some kind of self-realisation cult.

If we can envisage a microcomputer, either a vintage one or a modern microcontroller-based one, performing useful computing activities, then we can most certainly envisage machines of ten or so years ago, even ones behind the performance curve, doing so as well. And by realising that, we might understand that we even have the power to slow down the engineered obsolescence of computing hardware and bring usable hardware back into use. Since not everyone on the planet can afford the latest and greatest, we might even put usable hardware into the hands of more people who might benefit from it.

Naturally, this perspective is rather broader than one that only considers a future of hardship and scarcity, but hardship and scarcity are part of the present, just as they have always been part of the past. Applying many of the same concerns and countermeasures to today’s situation, albeit in less extreme forms, means that we have the power to mitigate today’s situation and, if we are optimistic, perhaps steer it away from becoming the extreme situation that the Collapse OS initiative seeks to prepare for.

Concrete Steps

I have, in the past, been accused of complaining about injustices too generally for my complaints to be taken seriously, never mind such injustices being blatant and increasingly obvious in our modern societies and expressed through the many crises of our times. So how might we seek to mitigate widespread hardware obsolescence and technology-driven overconsumption? Some suggestions in a concise list for those looking for actionable things:

  • Develop, popularise and mandate lightweight formats, protocols and standards
  • Encourage interoperability and tolerance for multiple user interfaces, clients and devices
  • Insist on an unlimited “right to repair” for computing devices including the software
  • Encourage long-term thinking in software and systems development

And now for some elucidation…

Mandatory Accessible Standards

This suggestion has already been described above, but where it would gain its power is in the idea of mandating that public institutions and businesses would be obliged to support lightweight formats, protocols and standards, and not simply as an implementation detail for their chosen “app”, like a REST endpoint might be, but actually as a formal mechanism providing service to those who would interact with those institutions. This would make the use of a broad range of different devices viable, and in the case of special-purpose devices for certain kinds of users, particularly those who would otherwise be handed a smartphone and told to “get with it”, it would offer a humane way of accessing services that is currently denied to them.

For simple dialogue-based interactions, existing formats such as JSON might even be sufficient as they are. I am reminded of a paper I read when putting together my degree thesis back in the 1990s, where the idea was that people would be able to run programs safely in their mail reader, with one example being that of submitting forms.


T-shirt ordering dialogues shown by Safe-Tcl running in a mail program, offering the recipient the chance to order some merchandise that might not be as popular now.

In that paper, most of the emphasis was on the safety of the execution environment as opposed to the way in which the transaction was to be encoded, but it is not implausible that one might have encoded the details of the transaction – the T-shirt size (with the recipient’s physical address presumably already being known to the sender) – in a serialised form of the programming language concerned (Safe-Tcl) as opposed to just dumping some unstructured text in the body of a mail. I would need to dig out my own thesis to see what ideas I had for serialised information. Certainly, such transactions, even embellished with other details and choices and with explanatory information, prompts and questions, do not require megabytes of HTML, CSS, JavaScript, images, videos and so on.
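To give a sense of scale, a dialogue-based transaction of this kind could be encoded today in a few dozen bytes of JSON. The sketch below shows a hypothetical encoding of the T-shirt exchange; the field names are invented for illustration and are not taken from the Safe-Tcl paper or any existing standard:

```python
import json

# A hypothetical dialogue-based transaction: the service describes the
# choice it needs, and the client replies with a tiny structured answer.
# All field names here are invented for illustration only.
request = {
    "form": "t-shirt-order",
    "prompt": "Choose a size for your free T-shirt.",
    "fields": [
        {"name": "size", "type": "choice",
         "options": ["S", "M", "L", "XL"]},
    ],
}

response = {"form": "t-shirt-order", "size": "L"}

encoded = json.dumps(response)
print(len(encoded), "bytes:", encoded)   # tens of bytes, not megabytes
```

A client on very modest hardware could render the request with whatever interface conventions suit the device, and the complete round trip involves less data than a single tracking script on a typical modern form.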

Interoperability and Device Choice

One thing that the Web was supposed to liberate us from was the insistence that to perform a particular task, we needed a particular application, and that particular application was only available on a particular platform. In the early days, HTML was deliberately simplistic in its display capabilities, and people had to put up with Web pages that looked very plain until things like font tags allowed people to go wild. With different factions stretching HTML in all sorts of directions, CSS was introduced to let people apply presentation attributes to documents, supposedly without polluting or corrupting the original HTML that would remain semantically pure. We all know how this turned out, particularly once the Web 2.0 crowd got going.

Back in the 1990s, I worked on an in-house application at my employer that used a document model inspired by SGML (as HTML had been). The graphical user interface to the documents being exchanged initially offered a particular paradigm when dealing with collections of data items: the one embraced by the Macintosh’s Finder when showing directory hierarchies in what we would now call a tree view. Unfortunately, users seemed to find expanding and hiding things by clicking on small triangles to be annoying, and so alternative presentation approaches were explored. Interestingly, the original paradigm would be familiar even now to those using generic XML editor software, but many people would accept that while such low-level editing capabilities are nice to have, higher-level representations of the data are usually much preferable.

Such user preferences could quite easily be catered to through the availability of client software that works in the way they expect, rather than the providers of functionality or the operators of services trying to gauge what the latest fashions in user interfaces might be, as we have all seen when familiar Web sites change to mimic something one would expect to see on a smartphone, even with a large monitor on a desk with plenty of pixels to spare. With well-defined standards, if a client device or program were to see that it needed to allow a user to peruse a large collection of items or to choose a calendar date, it would defer to the conventions of that device or platform, giving the user the familiarity they expect.

This would also allow clients and devices with a wide range of capabilities to be used. The Web tried to deliver a reasonable text-only experience for a while, but most sites can hardly be considered usable in a textual browser these days. And although there is an “accessibility story” for the Web, it largely appears to involve retrofitting sites with semantic annotations to help users muddle through the verbose morass encoded in each page. Certainly, the Web of today does do one thing reasonably by mixing up structure and presentation: it can provide a means of specifying and navigating new kinds of data that might be unknown to the client, showing them something more than a text box. A decent way of extending the range of supported data types would be needed in any alternative, but it would need to spare everyone suddenly having scripts running all over the place.

Rights to Repair

The right to repair movement has traditionally been focused on physical repairs to commercial products, making sure that even if the manufacturer has abandoned a product and really wants you to buy something new from them, you can still choose to have the product repaired so that it can keep serving you well for some time to come. But if hardware remains capable enough to keep doing its job, and if we are able to slow down or stop the forces of enforced obsolescence, we also need to make sure that the software running on the hardware may also be repaired, maintained and updated. A right to repair very much applies to software.

Devotees of the cult of the smartphone, those who think that there is an “app” for everything, should really fall silent with shame. Not just for shoehorning every activity they can think of onto a device that is far from suitable for everyone, and not just for mandating commercial relationships with large multinational corporations for everyone, but also for the way that happily functioning smartphones have to be discarded because they run software that is too old and cannot be fixed or upgraded. Demanding the right to exercise the four freedoms of Free Software for our devices means that we get to decide when those devices are “too old” for what we want to use them for. If a device happens to be no longer usable for its original activity even after some Free Software repairs, we can repurpose it for something else, instead of having the vendor use those familiar security scare stories and pretend that they are acting in our best interests.

Long-Term Perspectives

If we are looking to preserve the viability of our computing devices by demanding interoperability to give them a chance to participate in the modern world and by demanding that they may be repaired, we also need to think about how the software we develop may itself remain viable, both in terms of the ability to run the software on available devices as well as the ability to keep maintaining, improving and repairing it. That potentially entails embracing unfashionable practices because “modern” practices do not exactly seem conducive to the kind of sustainable activities we have in mind.

I recently had the opportunity to contemplate the deployment of software in “virtual environments” containing entire software stacks, each running its own Web server program that would receive its traffic from another Web server program running in the same virtual machine, all of this running in some cloud infrastructure. It was either that or using containers containing whole software distributions, these being deployed inside virtual machines containing their own software distributions. All because people like to use the latest and greatest stuff for everything, this stuff being constantly churned by fashionable development methodologies and downloaded needlessly over and over again from centralised Internet services run by monopolists.

Naturally, managing gigabytes of largely duplicated software is worlds, if not galaxies, away from the modest computing demands of things like Collapse OS, but it would be distasteful to anyone even a decade ago and shocking to anyone even a couple of decades ago. Unfashionable as it may seem now, software engineering courses once emphasised things like modularity and the need for formal interfaces between modules in systems. And a crucial benefit of separating out functionality into modules is to allow those modules to mature, do the job they were designed for, and to recede into the background and become something that can be relied upon and not need continual, intensive maintenance. There is almost nothing better than writing a library that one may use constantly but never need to touch again.

Thus, the idea that a precarious stack of precisely versioned software is required to deliver a solution is absurd, but it drives the attitude that established software distributions only deliver “old” software, and it drives the demand for wasteful container or virtual environment solutions whose advocates readily criticise traditional distributions whilst pilfering packages from them. Or as Docker users might all too easily say, “FROM debian:sid”. Part of the problem is that it is easy to rely on methods of mass consumption to solve problems with software – if something is broken, just update and see if it fixes it – but such attitudes permeate the entire development process, leading to continual instability and a software stack constantly in flux.

Dealing with a multitude of software requirements is certainly a challenging problem that established operating systems struggle to resolve convincingly, despite all the shoehorning of features into the Linux technology stack. Nevertheless, the topic of operating system design is rather outside the scope of this article. Closer to relevance is the matter of how people seem reluctant to pick a technology and stick with it, particularly in the realm of programming languages. Then again, I covered much of this before and fairly recently, too. Ultimately, we want to be establishing software stacks that people can readily acquaint themselves with decades down the line, without the modern-day caveats that “feature X changed in version Y” and that if you were not there at the time, you have quite the job to do to catch up with that and everything else that went on, including migrations to a variety of source management tools and venues, maybe even completely new programming languages.

A Different Mindset

If anything, Collapse OS makes us consider a future beyond tomorrow, next week, next year, or a few years’ time. Even if the wheels do not start falling off the vehicle of human civilisation, there are still plenty of other things that can go away without much notice. Corporations like Apple and Google might stick around, whether that is good news or not, but it does not stop them from pulling the plug on products and services. Projects and organisations do not always keep going forever, not least because they are often led by people who cannot keep going forever, either.

There are ways we can mitigate these threats to sustainability and longevity, however. We can share our software through accessible channels without insisting that others use those monopolist-run centralised hosting services. We can document our software so that others have a chance of understanding what we were thinking when we wrote it. We can try and keep the requirements for our software modest and give people a chance to deploy it on modest hardware. And we might think about what kind of world we are leaving behind and whether it is better than the world we were born into.

Public Money, Public Code, Public Control

Thursday, September 14th, 2017

An interesting article published by the UK Government Digital Service was referenced in a response to the LWN.net coverage of the recently-launched “Public Money, Public Code” campaign. Arguably, the article focuses a little too much on “in the open” and perhaps not enough on the matter of control. Transparency is a good thing, collaboration is a good thing, no-one can really argue about spending less tax money and getting more out of it, but it is the matter of control that makes this campaign and similar initiatives so important.

In one of the comments on the referenced article you can already see the kind of resistance that this worthy and overdue initiative will meet. There is this idea that the public sector should just buy stuff from companies and not be in the business of writing software. Of course, this denies the reality of delivering solutions where you have to pay attention to customer needs and not just have some package thrown through the doorway of the customer as big bucks are exchanged for the privilege. And where the public sector ends up managing its vendors, you inevitably get an under-resourced customer paying consultants to manage those vendors, maybe even their own consultant colleagues. Guess how that works out!

There is a culture of proprietary software vendors touting their wares or skills to public sector departments, undoubtedly insisting that their products are a result of their own technological excellence and that they are doing their customers a favour by merely doing business with them. But at the same time, those vendors need a steady – perhaps generous – stream of revenue consisting largely of public money. Those vendors do not want their customers to have any real control: they want their customers to be obliged to come back year after year for updates, support, further sales, and so on; they want more revenue opportunities rather than their customers empowering themselves and collaborating with each other. So who really needs whom here?

Some of these vendors undoubtedly think that the public sector is some kind of vehicle to support and fund enterprises. (Small- and medium-sized enterprises are often mentioned, but the big winners are usually the corporate giants.) Some may even think that the public sector is a vehicle for “innovation” where publicly-funded work gets siphoned off for businesses to exploit. Neither of these things cultivate a sustainable public sector, nor do they even create wealth effectively in wider society: they lock organisations into awkward, even perilous technological dependencies, and they undermine competition while inhibiting the spread of high-quality solutions and the effective delivery of services.

Unfortunately, certain flavours of government hate the idea that the state might be in a role of actually doing anything itself, preferring that its role be limited to delegating everything to “the market” where private businesses will magically do everything better and cheaper. In practice, under such conditions, some people may benefit (usually the rich and well-represented) but many others often lose out. And it is not unknown for the taxpayer to have to pick up the bill to fix the resulting mess that gets produced, anyway.

We need sustainable public services and a sustainable software-producing economy. By insisting on Free Software – public code – we can build the foundations of sustainability by promoting interoperability and sharing, maximising the opportunities for those wishing to improve public services by upholding proper competition and establishing fair relationships between customers and vendors. But this also obliges us to be vigilant to ensure that where politicians claim to support this initiative, they do not try and limit its impact by directing money away from software development to the more easily subverted process of procurement, while claiming that procured systems need not be subject to the same demands.

Indeed, we should seek to expand our campaigning to cover public procurement in general. When public money is used to deliver any kind of system or service, it should not matter whether the code existed in some form already or not: it should be Free Software. Otherwise, we indulge those who put their own profits before the interests of a well-run public sector and a functioning society.

The Academic Barriers of Commercialisation

Monday, January 9th, 2017

Last year, the university through which I obtained my degree celebrated a “milestone” anniversary, meaning that I got even more announcements, notices and other such things than I was already getting from them before. Fortunately, not everything published into this deluge is bound up in proprietary formats (as one brochure was, sitting on a Web page in Flash-only form) or only reachable via a dubious “Libyan link-shortener” (as certain things were published via a social media channel that I have now quit). It is indeed infuriating to see one of the links in a recent HTML/plain text hybrid e-mail message using a redirect service hosted on the university’s own alumni sub-site, sending the reader to a bit.ly URL, which will redirect them off into the great unknown and maybe even back to the original site. But such things are what one comes to expect on today’s Internet with all the unquestioning use of random “cloud” services, each one profiling the unsuspecting visitor and betraying their privacy to make a few extra cents or pence.

But anyway, upon following a more direct – but still redirected – link to an article on the university Web site, I found myself looking around to see what gets published there these days. Personally, I find the main university Web site rather promotional and arguably only superficially informative – you can find out the required grades to take courses along with supposed student approval ratings and hypothetical salary expectations upon qualifying – but it probably takes more digging to get at the real detail than most people would be willing to do. I wouldn’t mind knowing what they teach now in their computer science courses, for instance. I guess I’ll get back to looking into that later.

Gatekeepers of Knowledge

However, one thing did catch my eye as I browsed around the different sections, encountering the “technology transfer” department with the expected rhetoric about maximising benefits to society: the inevitable “IP” policy in all its intimidating length, together with an explanatory guide to that policy. Now, I am rather familiar with such policies from my time at my last academic employer, having been obliged to sign some kind of statement of compliance at one point, but then apparently not having to do so when starting a subsequent contract. It was not as if enlightenment had come calling at the University of Oslo between these points in time such that the “IP rights” agreement now suddenly didn’t feature in the hiring paperwork; it was more likely that such obligations had presumably been baked into everybody’s terms of employment as yet another example of the university upper management’s dubious organisational reform and questionable human resources practices.

Back at Heriot-Watt University, credit is perhaps due to the authors of the explanatory guide for trying to explain the larger policy document, because most people are unlikely to get through that much longer document and retain a clear head. But one potentially unintended reason for credit is that, by being presented with a much less opaque treatment of the policy and its motivations, we are able to see with enhanced clarity many of the damaging misconceptions that have sadly become entrenched in higher education and academia, including the ways in which such policies actually do conflict with the sharing of knowledge that academic endeavour is supposed to be all about.

So, we get the sales pitch about new things needing investment…

However, often new technologies and inventions are not fully developed because development needs investment, and investment needs commercial returns, and to ensure commercial returns you need something to sell, and a freely available idea cannot be sold.

If we ignore various assumptions about investment or the precise economic mechanisms supposedly required to bring about such investment, we can immediately note that ideas on their own aren’t worth anything anyway, freely available or not. Although the Norwegian Industrial Property Office (or the Norwegian Patent Office if we use a more traditional name) uses the absurd vision slogan “turning ideas into values” (it should probably read “value”, but whatever), this perhaps says more about greedy profiteering through the sale of government-granted titles bound to arbitrary things than it does about what kinds of things have any kind of inherent value that you can take to the bank.

But assuming that we have moved beyond the realm of simple ideas and have entered the realm of non-trivial works, we find that we have also entered the realm of morality and attitude management:

That is why, in some cases, it is better for your efforts not to be published immediately, but instead to be protected and then published, for protection gives you something to sell, something to sell can bring in investment, and investment allows further development. Therefore in the interests of advancing the knowledge within the field you work in, it is important that you consider the commercial potential of your work from the outset, and if necessary ensure it is properly protected before you publish.

Once upon a time, the most noble pursuit in academic research was to freely share research with others so that societal, scientific and technological progress could be made. Now it appears that the average researcher should treat it as their responsibility to conceal their work from others, seek “protection” on it, and then release the encumbered details for mere perusal and the conditional participation of those once-valued peers. And they should, of course, be wise to the commercial potential of the work, whatever that is. Naturally, “intellectual property” offices in such institutions have an “if in doubt, see us” policy, meaning that they seek to interfere with research as soon as possible, and should someone fail to have “seen them”, that person’s loyalty may very well be called into question as if they had somehow squandered their employer’s property. In some institutions, this could very easily get people marginalised or “reorganised” if not immediately or obviously fired.

The Rewards of Labour

It is in matters of property and ownership where things get very awkward indeed. Many people would accept that employees of an organisation are producing output that becomes the property of that organisation. What fewer people might accept is that the customers of an organisation are also subject to having their own output taken to be the property of that organisation. The policy guide indicates that even undergraduate students may also be subject to an obligation to assign ownership of their work to the university: those visiting the university supposedly have to agree to this (although it doesn’t say anything about what their “home institution” might have to say about that), and things like final year projects are supposedly subject to university ownership.

So, just because you as a student have a supervisor bound by commercialisation obligations, you end up not only paying tuition fees to get your university education (directly or through taxation), but you also end up having your own work taken off you because it might be seen as some element in your supervisor’s “portfolio”. I suppose this marks a new low in workplace regulation and standards within a sector that already skirts the law with regard to how certain groups are treated by their employers.

One can justifiably argue that employees of academic institutions should not be allowed to run away with work funded by those institutions, particularly when such funding originally comes from other sources such as the general public. After all, such work is not exactly the private property of the researchers who created it, and to treat it as such would deny it to those whose resources made it possible in the first place. Any claims about “rightful rewards” needing to be given are arguably made to confuse rational thinking on the matter: after all, with appropriate salaries, the researchers are already being rewarded for doing work that interests and stimulates them (unlike a lot of people in the world of work). One can argue that academics increasingly suffer from poorer salaries, working conditions and career stability, but such injustices are not properly remedied by creating other injustices to supposedly level things out.

A policy around what happens with the work done in an academic institution is important. But just as individuals should not be allowed to treat broadly-funded work as their own private property, neither should the institution itself claim complete ownership and consider itself entitled to do what it wishes with the results. It may be acting as a facilitator to allow research to happen, but by seeking to intervene in the process of research, it risks acting as an inhibitor. Consider the following note about “confidential information”:

This is, in short, anything which, if you told people about, might damage the commercial interests of the university. It specifically includes information relating to intellectual property that could be protected, but isn’t protected yet, and which if you told people about couldn’t be protected, and any special know how or clever but non patentable methods of doing things, like trade secrets. It specifically includes all laboratory notebooks, including those stored in an electronic fashion. You must be very careful with this sort of information. This is of particular relevance to something that may be patented, because if other people know about it then it can’t be.

Anyone working in even a moderately paranoid company may have read things like this. But here the context is an environment where knowledge should be shared to benefit and inform the research community. Instead, one gets the impression that the wish to control the propagation of knowledge is so great that some people would rather see the details of “clever but non patentable methods” destroyed than passed on openly for others to benefit from. Indeed, one must question whether “trade secrets” should even feature in a university environment at all.

Of course, the obsession with “laboratory notebooks”, “methods of doing things” and “trade secrets” in such policies betrays the typical origins of such drives for commercialisation: the apparently rich pickings to be had in the medical, pharmaceutical and biosciences domains. It is hardly a coincidence that the University of Oslo intensified its dubious “innovation” efforts under a figurehead with a background (or an interest) in exactly those domains: with a narrow personal focus, an apparent disdain for other disciplines, and a wider commercial atmosphere that gives such a strategy a “dead cert” air of impending fortune, we should perhaps expect no more of such a leadership creature (and his entourage) than the sum of that creature’s instincts and experiences. But then again, we should demand more from such people when their role is to cultivate an institution of learning and not to run a private research organisation at the public’s expense.

The Dirty Word

At no point in the policy guide does the word “monopoly” appear. Given that such a largely technical institution would undoubtedly be performing research where the method of “protection” would involve patents being sought, omitting the word “monopoly” might be that document’s biggest flaw. Heriot-Watt University originates from the merger of two separate institutions, one of which was founded by the well-known pioneer of steam engine technology, James Watt.

Recent discussion of Watt’s contributions to the development and proliferation of such technology has brought up claims that Watt’s own patents – the things that undoubtedly made him wealthy enough to fund an educational organisation – actually held up progress in the domain concerned for a number of decades. While he was clearly generous and sensible enough to spend his money on worthy causes, one can always ask whether the benefits from the subsequent use of such wealth can justify the questionable practices that resulted in its accumulation, particularly if those practices can be regarded as having had negative effects on society and may even have increased wealth inequality.

Questioning philanthropy is not a particularly fashionable thing to do. In capitalist societies, wealthy people are often seen as having made their fortunes in an honest fashion, enjoying a substantial “benefit of the doubt” that this was what really occurred. Criticising a rich person giving money to ostensibly good causes is seen as unkind to both the generous donor and to those receiving the donations. But we should question the means through which the likes of Bill Gates (in our time) and James Watt (in his own time) made their fortunes and the power that such fortunes give to such people to direct money towards causes of their own personal choosing, not to mention the way in which wealthy people also choose to influence public policy and the use of money given by significantly less wealthy individuals – the rest of us – gathered through taxation.

But back to monopolies. Can they really be compatible with the pursuit and sharing of knowledge that academia is supposed to be cultivating? Just as it should be shocking that secretive “confidentiality” rules exist in an academic context, it should appal us that researchers are encouraged to be competitively hostile towards their peers.

Removing the Barriers

It appears that some well-known institutions understand that the unhindered sharing of their work is their primary mission. MIT Media Lab now encourages the licensing of software developed under its roof as Free Software, not requiring special approval or any other kind of institutional stalling that often seems to take place as the “innovation” vultures pick over the things they think should be monetised. Although proprietary licensing still appears to be an option for those within the Media Lab organisation, at least it seems that people wanting to follow their principles and make their work available as Free Software can do so without being made to feel bad about it.

As an academic institution, we believe that in many cases we can achieve greater impact by sharing our work.

So says the director of the MIT Media Lab. It says a lot about the times we live in that this needs to be said at all. Free Software licensing is, as a mechanism to encourage sharing, a natural choice for software, but we should also expect similar measures to be adopted for other kinds of works. Papers and articles should at the very least be made available using content licences that permit sharing, even if the licence variants chosen by authors might seek to prevent the misrepresentation of parts of their work by prohibiting remixes or derived works. (This may sound overly restrictive, but one should consider the way in which scientific articles are routinely misrepresented by climate change and climate science deniers.)

Free Software has encouraged an environment where sharing is safely and routinely done. Licences like the GNU General Public Licence seek to shield recipients from things like patent threats, particularly from organisations which might appear to want to share their works, but which might be tempted to use patents to regulate the further use of those works. Even in realms where patents have traditionally been tolerated, attempts have been made to shield others from the effects of patents, intended or otherwise: the copyleft hardware movement demands that shared hardware designs are patent-free, for instance.

In contrast, one might think that despite the best efforts of the guide’s authors, all the precautions and behavioural self-correction it encourages might just drive the average researcher to distraction. Or, just as likely, to ignoring most of the guidelines and feigning ignorance if challenged by their “innovation”-obsessed superiors. But in the drive to monetise every last ounce of effort there is one statement that is worth remembering:

If intellectual property is not assigned, this can create problems in who is allowed to exploit the work, and again work can go to waste due to a lack of clarity over who owns what.

In other words, in an environment where everybody wants a share of the riches, it helps to have everybody’s interests out in the open so that there may be no surprises later on. Now, it turns out that unclear ownership and overly casual management of contributions is something that has occasionally threatened Free Software projects, resulting in more sophisticated thinking about how contributions are managed.

And it is precisely this combination of Free Software licensing, or something analogous for other domains, with proper contribution and attribution management that will extend safe and efficient sharing of knowledge to the academic realm. Researchers just cannot have the same level of confidence when dealing with the “technology transfer” offices of their institution and of other institutions. Such offices only want to look after themselves while undermining everyone beyond the borders of their own fiefdoms.

Divide and Rule

It is unfortunate that academic institutions feel that they need to “pull their weight” and have to raise funds to make up for diminishing public funding. By turning their backs on the very reason for their own existence and seeking monopolies instead of sharing knowledge, they unwittingly participate in the “divide and rule” tactics blatantly pursued in the political arena: that everyone must fight each other for all that is left once the lion’s share of public funding has been allocated to prestige megaprojects and schemes that just happen to benefit the well-connected, the powerful and the influential people in society the most.

A properly-funded education sector is an essential component of a civilised society, and its institutions should not be obliged to “sharpen their elbows” in the scuffle for funding and thus deprive others of knowledge just to remain viable. Sadly, while austerity politics remains fashionable, it may be up to us in the Free Software realm to remind academia of its obligations and to show that sustainable ways of sharing knowledge exist and function well in the “real world”.

Indeed, it is up to us to keep such institutions honest and to prevent advocates of monopoly-driven “innovation” from being able to insist that their way is the only way, because just as “divide and rule” politics erects barriers between groups in wider society, commercialisation erects barriers that inhibit the essential functions of academic pursuit. And such barriers ultimately risk extinguishing academia altogether, along with all the benefits its institutions bring to society. If my university were not reinforcing such barriers with its “IP” policy, maybe its anniversary as a measure of how far we have progressed from monopolies and intellectual selfishness would have been worth celebrating after all.

The BBC Micro and the BBC Micro Bit

Sunday, March 22nd, 2015

At least on certain parts of the Internet as well as in other channels, there has been a degree of excitement about the announcement by the BBC of a computing device called the “Micro Bit”, with the BBC’s plan to give one of these devices to each child starting secondary school, presumably in September 2015, attracting particular attention amongst technology observers and television licence fee-payers alike. Details of the device are a little vague at the moment, but the announcement, along with discussions of the role of the corporation and previous initiatives of this nature, provides me with an opportunity to look back at the original BBC Microcomputer, evaluate some of the criticisms (and myths) around the associated Computer Literacy Project, and to consider the things that were done right and wrong, with the latter hopefully not about to be repeated in this latest endeavour.

As the public record reveals, at the start of the 1980s, the BBC wanted to engage its audience beyond television programmes describing the growing microcomputer revolution, and it was decided that to do this and to increase computer literacy generally, it would need to be able to demonstrate various concepts and technologies on a platform that would be able to support the range of activities to which computers were being put to use. Naturally, a demanding specification was constructed – clearly, the scope of microcomputing was increasing rapidly, and there was a lot to demonstrate – and various manufacturers were invited to deliver products that could be used as this reference platform. History indicates that a certain amount of acrimony followed – a complete description of which could fill an entire article of its own – but ultimately only Acorn Computers managed to deliver a machine that could do what the corporation was asking for.

An Ambitious Specification

It is worth considering what the BBC Micro was offering in 1981, especially when considering ill-informed criticism of the machine’s specifications by people who either prefer other systems or who felt that participating in the development of such a machine was none of the corporation’s business. The technologies to be showcased by the BBC’s programme-makers and supported by the materials and software developed for the machine included full-colour graphics, multi-channel sound, 80-column text, Viewdata/Teletext, cassette and diskette storage, local area networking, interfacing to printers, joysticks and other input/output devices, as well as to things like robots and user-developed devices. Although it is easy to pick out one or two of these requirements, move forward a year or two, increase the budget two- or three-fold, or any combination of these things, and then nominate various other computers, there really were few existing systems that could deliver all of the above, at least at an affordable price at the time.

Some microcomputers of the early 1980s:

Apple II Plus: RAM up to 64K; text 40 x 25 (upper case only); graphics 280 x 192 (6 colours), 40 x 48 (16 colours); 1979; £1500 or more
Commodore PET 4032/8032: RAM 32K; text 40/80 x 25; graphics characters only (2 colours); 1980; £800 (4032), £1030 (8032), including monochrome monitor
Commodore VIC-20: RAM 5K; text 22 x 23; graphics 176 x 184 (8 colours); 1980 (1981 outside Japan); £199
IBM PC (Model 5150): RAM 16K up to 256K; text 40/80 x 25; graphics 640 x 200 (2 colours), 320 x 200 (4 colours); 1981; £1736 (including monochrome monitor, presumably with 16K or 64K)
BBC Micro (Model B): RAM 32K; text 80/40/20 x 32/24, Teletext; graphics 640 x 256 (2 colours), 320 x 256 (2/4 colours), 160 x 256 (4/8 colours); 1981; £399 (originally £335)
Research Machines LINK 480Z: RAM 64K (expandable to 256K); text 40 x 24 (optional 80 x 24); graphics 160 x 72, 80 x 72 (2 colours), expandable to 640 x 192 (2 colours), 320 x 192 (4 colours), 190 x 96 (8 colours or 16 shades); 1981; £818
ZX Spectrum: RAM 16K or 48K; text 32 x 24; graphics 256 x 192 (16 colours applied using attributes); 1982; £125 (16K), £175 (48K)
Commodore 64: RAM 64K; text 40 x 25; graphics 320 x 200 (16 colours applied using attributes); 1982; £399

Perhaps the closest competitor, already being used in a fairly limited fashion in educational establishments in the UK, was the Commodore PET. However, it is clear that despite the adaptability of that system, its display capabilities were becoming increasingly uncompetitive, and Commodore had chosen to focus on the chipsets that would power the VIC-20 and Commodore 64 instead. (The designer of the PET went on to make the very capable, and understandably more expensive, Victor 9000/Sirius 1.) That Apple products were notoriously expensive and, indeed, the target of Commodore’s aggressive advertising did not seem to prevent them from capturing the US education market from the PET, but they always remained severely uncompetitive in the UK as commentators of the time indeed noted.

Later, the ZX Spectrum and Commodore 64 were released. Technology was progressing rapidly, and in hindsight one might have advocated waiting around until more capable and cheaper products came to market. However, given the need to fulfil virtually all aspects of the ambitious specification at an affordable price, it can be argued that no suitable alternative product would have become available until the release of the Amstrad CPC series in 1984. Even then, these Amstrad computers actually benefited from the experience accumulated in the UK computing industry from the introduction of the BBC Micro: they were, if anything, an iteration within the same generation of microcomputers and would even have used the same 6502 CPU as the BBC Micro had it not been for time-to-market pressures and the readily-available expertise with the Zilog Z80 CPU amongst those in the development team. And yet, specific aspects of the specification would still have been unfulfilled: the Amstrad machines lacked the BBC Micro’s hardware support for Teletext displays, although it would have been possible to emulate these with a bitmapped display and suitable software.

Arise Sir Clive

Much has been made of the disappointment of Sir Clive Sinclair that his computers were not adopted by the BBC as products to be endorsed and targeted at schools. Sinclair made his name developing products that were competitive on price, often seeking cost-reduction measures to reach attractive pricing levels, but such measures also served to make his products less desirable. If one reads reviews of microcomputers from the early 1980s, many reviewers explicitly mention the quality of the keyboard provided by the computers being reviewed: “typewriter” keyboards with keys that “travel” appear to be much preferred over the “calculator” keyboards provided by computers like the ZX Spectrum, Oric 1 or Newbury NewBrain, and they appear to be vastly preferred over the “membrane” keyboards employed by the ZX80, ZX81 and Atari 400.

For target audiences in education, business, and in the home, it would have been inconceivable to promote a product with anything less than a “proper” keyboard. Ultimately, the world had to wait until the ZX Spectrum +2 released in 1986 for a Spectrum with such a keyboard, and that occurred only after the product line had been acquired by Amstrad. (One might also consider the ZX Spectrum+ in 1984, but its keyboard was more of a hybrid of the calculator keyboard that had been used before and the “full-travel” keyboards provided by its competitors.)

Some people claim that they owe nothing to the BBC Micro and everything to the ZX Spectrum (or, indeed, the computer they happened to own) for their careers in computing. Certainly, the BBC Micro was an expensive purchase for many people, although contrary to popular assertion it was not any more expensive than the Commodore 64 upon that computer’s introduction in the UK, and for those of us who wanted BBC compatibility at home on a more reasonable budget, the Acorn Electron was really the only other choice. But it would be as childish as the playground tribalism that had everyone insist that their computer was “the best” to insist that the BBC Micro had no influence on computer literacy in general, or on the expectations of what computer systems should provide. Many people who owned a ZX Spectrum freely admit that the BBC Micro coloured their experiences, some even subsequently seeking to buy one or one of its successors and to go on to build a successful software development career.

The Costly IBM PC

Some commentators seem to consider the BBC Micro as having been an unnecessary diversion from the widespread adoption of the IBM PC throughout British society. As was the case everywhere else, the de-facto “industry standard” of the PC architecture and DOS captured much of the business market and gradually invaded the education sector from the top down, although significantly better products existed both before and after its introduction. It is tempting with hindsight to believe that by placing an early bet on the success of the then-new PC architecture, business and education could have somehow benefited from standardising on the PC and DOS applications. And there has always been the persistent misguided belief amongst some people that schools should be training their pupils/students for a narrow version of “the world of work”, as opposed to educating them to be able to deal with all aspects of their lives once their school days are over.

What many people forget or fail to realise is that the early 1980s witnessed rapid technological improvement in microcomputing, that there were many different systems and platforms, some already successful and established (such as CP/M), and others arriving to disrupt ideas of what computing should be like (the Xerox Alto and Star having paved the way for the Apple Lisa and Macintosh, the Atari ST, and so on). It was not clear that the IBM PC would be successful at all: IBM had largely avoided embracing personal computing, and although the system was favourably reviewed and seen as having the potential for success, thanks to IBM’s extensive sales organisation, other giants of the mainframe and minicomputing era such as DEC and HP were pursuing their own personal computing strategies. Moreover, existing personal computers were becoming entrenched in certain markets, and early adopters were building a familiarity with those existing machines that was reflected in publications and materials available at the time.

Despite the technical advantages of the IBM PC over much of the competition at the beginning of the 1980s, it was also substantially more expensive than the mass-market products arriving in significant numbers, aimed at homes, schools and small businesses. With many people remaining intrigued but unconvinced by the need for a personal computer, it would have been impossible for a school to justify spending almost £2000 (probably around £8000 today) on something without proven educational value. Software would also need to be purchased, and the procurement of expensive and potentially non-localised products would have created even more controversy.

Ultimately, the Computer Literacy Project stimulated the production of a wide range of locally-produced products at relatively inexpensive prices, and while there may have been a few years of children learning BBC BASIC instead of one of the variants of BASIC for the IBM PC (before BASIC became a deprecated aspect of DOS-based computing), it is hard to argue that those children missed out on any valuable experience using DOS commands or specific DOS-based products, especially since DOS became a largely forgotten environment itself as technological progress introduced new paradigms and products, making “hard-wired”, product-specific experience obsolete.

The Good and the Bad

Not everything about the BBC Micro and its introduction can be considered unconditionally good. Choices needed to be made to deliver a product that could fulfil the desired specification within certain technological constraints. Some people like to criticise BBC BASIC as being “non-standard”, for example, which neglects the diversity of BASIC dialects that existed at the dawn of the 1980s. Typically, for such people “standard” equates to “Microsoft”, but back then Microsoft BASIC was a number of different things. Commodore famously paid a one-off licence fee to use Microsoft BASIC in its products, but the version for the Commodore 64 was regarded as lacking user-friendly support for graphics primitives and other interesting hardware features. Meanwhile, the MSX range of microcomputers featured Microsoft Extended BASIC, which did provide convenient access to hardware features, although the MSX range was not the success at the low end of the market that Microsoft had probably desired to complement its increasing influence at the higher end through the IBM PC. And it is informative in this regard to see just how many variants of Microsoft BASIC were produced, thanks to Microsoft’s widespread licensing of its software.
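To illustrate the contrast, BBC BASIC built graphics operations into the language itself, whereas Commodore 64 BASIC offered no such statements, leaving programmers to manipulate hardware registers directly with POKE. A rough sketch of what this meant in practice (the BBC statements are standard BBC BASIC; the C64 lines shown only change screen colours, since full bitmap plotting required a much longer sequence of POKEs to the VIC-II chip):

```basic
REM BBC BASIC: select a graphics mode and draw a diagonal line
MODE 1
GCOL 0,2
MOVE 0,0
DRAW 1279,1023

REM Commodore 64 BASIC had no MODE, MOVE or DRAW; even changing
REM the border and background colours meant poking registers:
REM POKE 53280,0 : POKE 53281,0
```

The point is not that the Commodore 64 was less capable in hardware terms, but that its bundled BASIC exposed none of that capability conveniently, which is precisely the criticism the reviewers of the time were making.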

Nevertheless, the availability of one company’s products does not make a standard, particularly if interoperability between those products is limited. Neither BBC BASIC nor Microsoft BASIC can be regarded as anything other than de-facto standards in their own territories, and it is nonsensical to regard one as non-standard when the other has largely the same characteristics as a proprietary product in widespread use, even if it was licensed to others, as indeed both Microsoft BASIC and BBC BASIC were. Genuine attempts to standardise BASIC did indeed exist, notably BASICODE, which was used in the distribution of programs via public radio broadcasts. One suspects that people making casual remarks about standard and non-standard things remain unaware of such initiatives. Meanwhile, Acorn did deliver implementations of other standards-driven programming languages such as COMAL, Pascal, Logo, Lisp and Forth, largely adhering to any standards subject to the limitations of the hardware.

However, what undermined the BBC Micro and Acorn’s related initiatives over time was the control that they as a single vendor had over the platform and its technologies. At the time, a “winner takes all” mentality prevailed: Commodore under Jack Tramiel had declared a “price war” on other vendors and had caused difficulties for new and established manufacturers alike, with Atari eventually being sold to Tramiel (who had resigned from Commodore) by Warner Communications, but many companies disappeared or were absorbed by others before half of the decade had passed. Indeed, Acorn, who had released the Electron to compete with Sinclair Research at the lower end of the market, and who had been developing product lines to compete in the business sector, experienced financial difficulties and was ultimately taken over by Olivetti; Sinclair, meanwhile, experienced similar difficulties and was acquired by Amstrad. In such a climate, ideas of collaboration seemed far from everybody’s minds.

Since then, the protagonists of the era have been able to reflect on such matters, with Acorn co-founder Hermann Hauser admitting that it may have been better to license Acorn’s Econet local area networking technology to interested competitors like Commodore. Although the sentiments might have something to do with revenues and influence – it was at Acorn that the ARM processor was developed, sowing the seeds of a successful licensing business today – the rest of us may well ask what might have happened had the market’s participants of the era cooperated on things like standards and interoperability, helping their customers to protect their investments in technology, and building a bigger “common” market for third-party products. What if they had competed on bringing technological improvements to market without demanding that people abandon their existing purchases (and cause confusion amongst their users) just because those people happened to already be using products from a different vendor? It is interesting to see the range of available BBC BASIC implementations and to consider a world where such building blocks could have been adopted by different manufacturers, providing a common, heterogeneous platform built on cooperation and standards, not the imposition of a single hardware or software platform.

But That Was Then

Back then, as Richard Stallman points out, proprietary software was the norm. It would have been even more interesting had the operating systems and the available software for microcomputers been Free Software, but that may have been asking too much at the time. And although computer designs were often shared and published, a tendency to prevent copying of commercial computer designs prevailed, with Acorn and Sinclair both employing proprietary integrated circuits mostly to reduce complexity and increase performance, but partly to obfuscate their hardware designs, too. Thus, it may have been too much to expect something like the BBC Micro to have been open hardware to any degree “back in the day”, although circuit diagrams were published in publicly-available technical documentation.

But we have different expectations now. We expect software to be freely available for inspection, modification and redistribution, knowing that this empowers the end-users and reassures them that the software does what they want it to do, and that they remain in control of their computing environment. Increasingly, we also expect hardware to exhibit the same characteristics, perhaps only accepting that some components are particularly difficult to manufacture and that there are physical and economic restrictions on how readily we may practise the modification and redistribution of a particular device. Crucially, we demand control over the software and hardware we use, and we reject attempts to prevent us from exercising that control.

The big lesson to be learned from the early 1980s, to be considered now in the mid-2010s, is not how to avoid upsetting a popular (but ultimately doomed) participant in the computing industry, as some commentators might have everybody believe. It is to avoid developing proprietary solutions that favour specific organisations and that, despite the general benefits of increased access to technology, ultimately disempower the end-user. And in this era of readily available Free Software and open hardware platforms, the lesson to be learned is to strengthen such existing platforms and to work with them, letting those products and solutions participate and interoperate with the newly-introduced initiative in question.

The BBC Micro was a product of its time and its development was very necessary to fill an educational need. Contrary to the laziest of reports, the Micro Bit plays a different role as an accessory rather than as a complete home computer, at least if we may interpret the apparent intentions of its creators. But as a product of this era, our expectations for the Micro Bit are greater: we expect transparency and interoperability, the ability to make our own (just as one can with the Arduino, as long as one does not seek to call it an Arduino without asking permission from the trademark owner), and the ability to control exactly how it works. Whether there is a need to develop a completely new hardware solution remains an unanswered question, but we may assert that should it be necessary, such a solution should be made available as open hardware to the maximum possible extent. And of course, the software controlling it should be Free Software.

As we edge gradually closer to September and the big deployment, it will be interesting to assess how the device and the associated initiative measure up to our expectations. Let us hope that the right lessons from the days of the BBC Micro have indeed been learned!

When the Truth Comes Out

Monday, April 28th, 2014

A few months ago, I wrote about the decision taken by the management of the University of Oslo to introduce Microsoft Exchange as its groupware solution. In that text, I referred to remarks concerning Roundcube and why that solution supposedly could not be used at UiO in the future. Now that the university’s IT department has published a news item mentioning SquirrelMail, I have the opportunity to dig up and share some details from my conversations with the spokesperson for the project that evaluated Exchange and other solutions.

Looking around the Web, it does not seem unusual for organisations to have introduced Roundcube alongside SquirrelMail because of concerns about accessibility (“universell utforming”, or UU). But in conversations with the project, I was told that various shortcomings in accessibility support were a relevant reason why Roundcube could not become part of a future webmail solution at UiO. Now the truth comes out:

“When Roundcube was introduced as UiO’s primary webmail service, SquirrelMail was allowed to live on in parallel because Roundcube had some shortcomings related to accessibility. By the time these had been remedied, the management had decided that e-mail and calendaring were to be consolidated in a new system.”

Two things capture our attention here:

  1. That Roundcube had shortcomings “when it was introduced”, even though Roundcube had been in use for some years before the dubious process to evaluate e-mail and calendar solutions was set in motion.
  2. That the improvements happened, conveniently, to arrive too late to influence the management’s decision to introduce Exchange.

Last summer, without publicly sharing the project group’s claims about shortcomings in Roundcube, I asked others whether there were known deficiencies and room for improvement in Roundcube in this area. Was accessibility really something that hindered the adoption of Roundcube? I familiarised myself with accessibility technologies and tried some of them with Roundcube to see whether the situation could be improved through my own efforts. It may be that there were serious shortcomings in Roundcube back in 2011 when the project group began its work – I choose not to claim any such thing – but since no such shortcomings surfaced in the project’s final report in 2012 (where Roundcube was in fact recommended as part of the open candidate), we must conclude that such concerns were long gone and that the university’s own webmail service, even though it has been adapted to the organisation’s own visual profile (which may have implications for maintenance), was and remains accessible to all users.

If we dare to believe that the excerpt above tells the truth, we must now conclude that the management’s decision took place long before the very process that was supposed to underpin that decision had been concluded. And here we must regard the words “the management had decided” in a different light from the usual one – where the management first draws on the expertise within the organisation and then takes an informed decision – by assuming that, as previously written, somebody “had to have” something they liked, took the decision they wanted to take anyway, and then got others to come up with excuses and a justification that appears reasonable enough to outsiders.

It is one thing that an adequate and functioning IT solution that also happens to be Free Software is talked down, while apparently unpopular proprietary solutions such as Fronter fall so far short that problems reported in 2004 were being raised again in 2012, with the vendor doing little more than promising to “work on accessibility” going forward. It is another thing that sustainable investment in Free Software seems so alien to decision-makers at UiO that people would rather turn the working day of ordinary employees upside down – as the replacement of the e-mail infrastructure has done for some – than investigate the possibility of undertaking relatively small development tasks to upgrade existing systems, sparing people from having to “[work] day and night” during “the last few months” (and here, of course, we are not referring to people in the management).

That employees in the IT department were gagged regarding the process and had to insist that parts of the basis for the decision not reach the public. That rank-and-file IT staff must run here and there to make sure that nobody becomes too dissatisfied with the outcome of a decision that neither they nor other ordinary employees had any real influence over. That others must adapt themselves to the preferences of people who really should have no say in how others carry out their daily work. All of this says something about the management culture and the state of democracy within the University of Oslo.

But the truth is slowly coming out.

I Must Have… Therefore We Must Have…

Monday, February 17th, 2014

A couple of years ago, the University of Oslo initiated a process to evaluate groupware solutions (systems that receive, store and send the organisation’s e-mail messages while also allowing the storage and sharing of calendar information such as appointments and meetings). The process considered an unknown number of solutions, dropped all but four main candidates, and declined to evaluate one of the candidates because it was the calendar system already in use at the university. The outcome of the process was a summary of the three remaining solutions, with advantages and disadvantages described in a 23-page report.

The report’s conclusion was that one of the solutions did not live up to expectations regarding usability or openness, one had adequate usability and was based on Free Software and open standards, and one was regarded as the most user-friendly even though it was based on a proprietary product and proprietary technologies. It was written that the university could “if need be” operate both of the latter two solutions, but that the open one should be preferred unless particular strategic considerations steered the choice towards the proprietary one, and that if that were the case, the implementation would involve a considerable amount of work for the institution’s IT organisation.

One may well criticise the way the process itself was conducted, but having read the report, one would have thought that a public organisation that often struggles to cover all its needs with adequate funding would choose the solution that continues the “tradition” of open solutions and does not burden the organisation with a “resource-intensive deployment” and other disadvantages. The project’s conclusion, however, was that the university should introduce Microsoft Exchange – the proprietary solution – thereby replacing substantial parts of the institution’s e-mail infrastructure, migrating stored messages to the proprietary Exchange solution, and moving users to new programs and systems.

One begins to get the feeling that somewhere in the management hierarchy somebody said “I must have”, and since nobody had denied them anything they “had to have” earlier in life, they got what they “had to have” in this situation too. One also gets the feeling that the report in its final draft was worded so that the decision-makers could quickly dismiss the disadvantages and risks and thus get the “green light” for a choice they wanted to make from the very beginning of the process: when it is stated in “black and white” that the deployment of the pre-selected solution is feasible, they feel they have everything they need to simply go ahead, whatever it may cost.

For Novelty’s Sake

But back to the process. The first thing that surprised me about the process was that implementing entirely new systems was being considered at all. The process had its origin in an apparent need for an “integrated” e-mail and calendar solution: something regarded as rather unimportant by many, although one cannot deny that some would find such an integration of services useful. But the task of integrating services need not involve the introduction of a single large system that covers all functional areas and replaces every existing system that had anything to do with the services being integrated: such a simplistic notion of how technological systems work would lead one to insist that the entire Internet be run from a single mainframe or on a single technological platform; anyone who has looked into how the Internet works knows that this is not the case (even if some organisations would prefer that everyone could be surveilled in one place).

It may be that the decision-makers actually believe in inadequate and oversimplified models of how technology is applied, or that they simply choose to dismiss reality with a “get it done” mentality that is easily combined with the delusion that one product and one vendor is, as a rule, “the solution”. But the infrastructure of most organisations will always consist of different systems, often with different origins, which frequently work together to deliver what appears to outsiders as a single solution or service. This has, in fact, been exactly the situation within the university’s e-mail infrastructure until now. If the current infrastructure lacks a connection between e-mail messages and calendar data, why can one not add a new component or service to realise the missing integration that people long for?

It then becomes interesting to take a closer look at solutions that resemble the existing infrastructure the university uses, and at products and projects that deliver the missing part of that infrastructure. If similar solutions and corresponding infrastructure descriptions exist, especially if they are available as Free Software just like the software the university already uses, and if the distance between what is run now and a future “complete” solution consists only of a few additional components and a little work, would it not be interesting to look more closely at such things first?

Groupware: A Perspective

I have been interested in groupware and calendar solutions for quite some time. Some years ago I developed a browser-based application for handling personal communications and appointments, and it used existing technologies and standards to exchange, display and edit the information that was stored and processed. Although I eventually became less convinced by the particular way I had chosen to implement the application, I had nevertheless gained an insight into the technologies used and the standards that exist for exchanging groupware data. After all, however messages and calendar information are stored and handled, one must still interact with other programs and systems. And after I became more involved with wiki systems and got the opportunity to implement a calendar service for one of the most widely used wiki solutions, I became aware once again of standardisation efforts, practical matters in the implementation of groupware systems and, not least, what kinds of existing solutions were out there.
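The standards for exchanging calendar data alluded to here include iCalendar (RFC 5545), the interchange format that any such system must ultimately produce and consume. As a rough sketch only – the names and times below are invented for illustration – a minimal event object might be built and read back like this:

```python
# A minimal iCalendar (RFC 5545) object of the kind groupware systems
# exchange. This is an illustrative sketch, not a complete implementation:
# real iCalendar data also requires line folding, escaping and more.

def make_vevent(uid, start, end, summary):
    """Build a minimal VCALENDAR containing a single VEVENT.

    Dates are strings in iCalendar basic UTC format, e.g. 19970714T170000Z.
    """
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//Example//Example Calendar//EN",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTART:{start}",
        f"DTEND:{end}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    # iCalendar mandates CRLF line endings.
    return "\r\n".join(lines) + "\r\n"

def parse_summary(ical_text):
    """Naively extract the SUMMARY property from a single-event calendar."""
    for line in ical_text.splitlines():
        if line.startswith("SUMMARY:"):
            return line[len("SUMMARY:"):]
    return None

event = make_vevent("meeting-1@example.org",
                    "20140217T100000Z", "20140217T110000Z",
                    "Project meeting")
print(parse_summary(event))  # prints: Project meeting
```

However the server side stores its data, it is this kind of textual representation that clients, webmail services and calendar stores pass between themselves, which is what makes integration by adding components, rather than wholesale replacement, plausible.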

One groupware system I had heard about many years ago was Kolab, which consists of a number of well-established components and programs (which are also Free Software), the product and the project behind it having been founded to strengthen the server-side offering so that client programs such as Evolution and KMail could communicate with complete groupware services that would also be Free Software. Such a need was identified when people and organisations associated with Kolab delivered an e-mail solution built on KMail (with encryption as a key element) to a part of the German state. Why use Free Software only on the client side, and thereby have to endure shifting conditions and varying support for open standards and interoperability on the server side?

Almost There

Looking at Kolab, it is striking how much the solution has in common with the university’s current infrastructure: both use LDAP as the basis for authentication, both use common antispam and antivirus technologies, and Roundcube features prominently as the webmail solution. Even though some functions are provided by different pieces of software when comparing Kolab with UiO’s infrastructure, one can still argue that the distance between the components used on each side is not too great: one could either replace a component in favour of the one Kolab prefers to use, or adapt Kolab so that it uses the existing component instead of its own preferred one. Either way of handling such differences could draw on the documentation and expertise found in numerous places on the Web and in other forms such as books and – unsurprisingly – the organisation’s own specialists.

One does not really need to “switch” to Kolab at all, though: one could adopt the parts that cover the gaps in the current infrastructure, or one could adopt other components that can do such jobs. The point is that alternatives exist that lie quite close to the infrastructure in use today, and when one must choose between a grand reimplementation of that infrastructure and a smaller upgrade that will probably deliver much the same result, one ought to justify and document the decision not to look more closely at such nearby solutions, instead of simply dropping them from the evaluation in silence and hoping that nobody would notice they disappeared without being mentioned at all.

A document summarising the project’s work states precisely that SOGo – the preferred open solution – acts as a link between systems already in operation. One might well think that an evaluation of Kolab would have informed the evaluation of SOGo, and vice versa, so that the understanding of a possible upgrade solution could have become more thorough, perhaps even leading to a blend of the most interesting elements being adopted so that the organisation gets the best out of both (and even more) solutions.

Through the Eye of the Needle

As written above, I am interested in groupware and decided to find out why a solution such as Kolab (and similar solutions, of course) could be absent from the final report. Although it would not be right to publish the conversations between the project’s spokesperson and me, I can nevertheless summarise what I learned through the exchange of messages: a simple overview of which solutions were considered (apart from those discussed in the report) probably does not exist, and the project participants could not immediately remember why Kolab was not carried through to the final report. Thus, when the report dismisses everything not described in its text, as if there had been a thorough process to sift out everything that did not meet the project’s needs, this can be regarded almost as pure bluff: there is no evidence that the evaluation of available solutions was in any way comprehensive or balanced.

One could argue that publishing a report on the Web that selects worthy systems for an organisation’s groupware needs harms nobody – the conclusions will naturally depend a great deal on what kind of organisation is being discussed, and one can always read through the text and judge the competence of the authors – but it is also the case that others in the same situation will often use materials resembling what they themselves must produce as a starting point for their own evaluation work. When well-known systems are omitted, readers may conclude that such systems must have had fundamental shortcomings or been completely out of the question: why else would they not have been included? It is great that organisations carry out such work in an open manner, but one should not underestimate the propaganda value of a report that drops certain systems without justification, so that the systems remaining can be interpreted as the very best or the only relevant ones, while other systems simply do not measure up and therefore have no place in such evaluations at other institutions.

After some back and forth, I was finally told that Kolab had not shown sufficient signs of life in the developer and user community around the project, and that this was the reason the software was not evaluated further; the examples given concerned the project’s documentation, which was not sufficiently up to date. Although the project has room for improvement in some areas – I myself suggested that something be done about the project’s wiki service in the period after I concluded my conversations with the university’s project participants and began familiarising myself with the situation regarding Free Software and groupware – it must be said that the feedback seemed somewhat hasty and arbitrary: the Kolab project’s mailing lists show a relatively high level of activity, and there is no sign that anyone from the university contacted the project’s developer community to look more deeply into possible improvements in documentation, future plans, and opportunities to reuse expertise and materials for components and systems that Kolab has in common with other solutions, including those used at the university.

Even stranger were the project group’s comments concerning Roundcube. The report described Roundcube as an adequate solution that is not only in use at the university to this day, but would also have been used as a replacement for the webmail component of SOGo. Suddenly, according to the project’s spokesperson, Roundcube was not good enough in one named area, yet the report devoted no space whatsoever to such alleged shortcomings: odd, when one considers that it would surely have been a very good opportunity to describe those shortcomings and thereby simplify the basis for the decision considerably. It may be that such things were discovered after the report’s publication, but one gets the impression – justified or not – that such things can also be invented after the fact to justify a decision that does not necessarily need to be anchored in the facts.

(I was asked not to share the project group’s views on Roundcube with others in public, and although I suggested that they raise the alleged shortcomings directly with the Roundcube project, I am not convinced that they had any intention whatsoever of doing so. How convenient that one can criticise something or someone without giving them the opportunity to explain or defend themselves!)

Finally, the report devoted a fair amount of space to describing proprietary Exchange technologies and how other systems could be made to use them, while open and free solutions had to be adapted to Outlook and that product’s dependence on proprietary communication in order to function with all features enabled. Relatively little space and priority was given to alternative clients. Despite concerns about Thunderbird – the preferred e-mail client until now – and how it is to be extended with calendar functionality, I have never seen Kontact or KMail mentioned a single time, not even in a Linux context, despite the fact that Kontact has been available for years in the university’s mandated Linux distribution – Red Hat Enterprise Linux – and works perfectly well with the e-mail systems now to be scrapped. It may be that people regard Kontact as old-fashioned – an attitude I encountered on IRC a few weeks ago without it being substantiated any further – but it is a mature e-mail client with calendar functionality that has worked for many people over a long period. Curious that nobody involved in the evaluation work will mention this software.

Simply Must Have

It begins with an organisation that has well-functioning systems that could be built upon so that new needs are satisfied. However, a process was run that appears to have started from the premise that nothing can be done with these well-functioning systems, and that they must be replaced, by means of a “resource-intensive deployment”, in favour of a proprietary system that will create a deeper dependence on a notorious vendor (and monopolist). And to bolster this dubious premise, well-known solutions were omitted from the evaluations that were made, apparently so that a simpler and “suitable” decision could be taken.

The impression that remains suggests that the process was not necessarily used to inform decisions, but that decisions informed or directed the process. Why this may have happened is perhaps a story for another time: a story about differing perspectives on ethics, democracy, and investment in knowledge and expertise. And as one might expect from what is written above, it is not necessarily a story that puts the organisation’s “conductors” in a particularly good light.

When the Focus Is on Brands and Not Standards

Friday, January 24th, 2014

It was interesting to read a letter in Uniforum entitled “Digitale samarbeidsverktøy i undervisning” (“Digital collaboration tools in teaching”): something that pushes all the right buttons in today’s academic landscape, with its increasing focus on online learning, broader access to courses, and many other things perhaps motivated more by “prestige” and “capturing customers” than by increasing general access to an institution’s knowledge and expertise. The letter’s authors describe widespread use of videoconferencing solutions, praise a proprietary solution they apparently like rather a lot, and refer to the university’s recommendations for technical solutions. After digging a little to find these recommendations, one quickly sees that they concern three proprietary (or partly proprietary) systems: Adobe Connect, Skype, and the specialised videoconferencing equipment found in certain meeting rooms.

In all likelihood this is a consequence of the consumer society we live in: that people think in terms of products and brands rather than standards, and that after discovering an exciting product, they are eager to raise awareness of it among everyone with similar needs. But when one becomes product- and brand-focused, one can quickly lose sight of the real problem and the real solution. That so many people keep insisting on Ibux when they could buy generic medication for a fraction of the brand’s price is just one example of people no longer being used to assessing the actual circumstances, preferring to resort to brands for a quick and easy answer they do not have to think much about.

Perhaps one should not place so very much blame on ordinary computer users when they make such mistakes, especially when large parts of the public sector in this country focus on proprietary products which, if they make use of genuine standards at all, tend to mix them with proprietary technologies in order to steer the customer towards a commitment to a few vendors for an almost unlimited time to come. But it is a little disappointing that “green” representatives do not consider sustainable technological solutions – those made available as Free Software and using documented and open standards – when one would expect such representatives to propose sustainable solutions in other areas.

The Organisational Panic Button and the Magic Single Vendor Delusion

Wednesday, November 27th, 2013

I have had reason to consider the way organisations make technology choices in recent months, particularly where the public sector is concerned, and although my conclusions may not come as a surprise to some people, I think they sum up fairly well how bad decisions get made even if the intentions behind them are supposedly good ones. Approaching such matters from a technological point of view – being informed about things like interoperability, systems diversity, the way people adopt and use technology, and the details of how various technologies work – it can be easy to forget that decisions around acquisitions and strategies are often taken by people who have no appreciation of such things and no time or inclination to consider them either. As far as decision-makers are concerned, such things are mere details that obscure the dramatic solution that shows them off as dynamic leaders getting things done.

Assuming the Position

So, assume for a moment that you are a decision-maker with decisions to make about technology, that you have in your organisation some problems that may or may not have technology as their root cause, and that because you claim to listen to what people in your organisation have to say about their workplace, you feel that clear and decisive measures are required to solve some of those problems. First of all, it is important to make sure that when people complain about something, they are not mixing that thing up with something else that really makes their life awkward, but let us assume that you and your advisers are aware of that issue and are good at getting to the heart of the real problem, whatever that may be. Next, people may ask for all sorts of things that they want but do not actually need – “an iPad in every meeting room, elevator and lavatory cubicle!” – and even if you also like the sound of such wild ideas, you also need to be able to restrain yourself and to acknowledge that it would simply be imprudent to indulge every whim of the workforce (or your own). After all, neither they nor you are royalty!

With distractions out of the way, you can now focus on the real problems. But remember: as an executive with no time for detail, the nuances of a supposedly technological problem – things like why people really struggle with some task in their workplace and what technical issues might be contributing to this discomfort – these things are distractions, too. As someone who has to decide a lot of things, you want short and simple summaries and to give short and simple remedies, delegating to other people to flesh out the details and to make things happen. People might try to get you to understand the detail, but you can always delegate the task of entertaining such explanations and representations to other people, naturally telling them not to waste too much time on such things while executing the plan.

Architectural ornamentation in central Oslo

On the Wrong Foot

So, let us just consider what we now know (or at least suspect) about the behaviour of someone in an executive position who has an organisation-wide problem to solve. They need to demonstrate leadership, vision and intent, certainly: it is worth remembering that such positions are inherently political, and if there is anything we should all know about politics by now, it is that it is often far more attractive to make one’s mark, define one’s legacy, fulfil one’s vision and reserve one’s place in the history books than it is to just keep things running efficiently and smoothly and to keep people generally satisfied with their lot in life. This principle alone explains why the city of Oslo is so infatuated with prestige projects and wants to host the Winter Olympics in a few years’ time (presumably things like functioning public transport, education, healthcare, even an electoral process that does not almost deliberately disenfranchise entire groups of voters, will all be faultless by then). It is far more exciting being a politician if you can continually announce exciting things, leaving the non-visionary stuff to your staff.

Executives also like to keep things as uncluttered as possible, even if the very nature of a problem is complicated, and at their level in the organisation they want the explanations and the directives to be as simple as possible. Again, this probably explains the “rip it up and start over” mentality that one sees in government, especially after changes in government even if consecutive governments have ideological similarities: it is far better to be seen to be different and bold than to be associated with your discredited predecessors.

But what do these traits lead to? Well, let us return to an organisational problem with complicated technical underpinnings. Naturally, decision-makers at the highest levels will not want to be bored with the complications – at the classic “10,000-foot” view, nothing should be allowed to encroach on the elegant clarity of the decision – and even the consideration of those complications may be discouraged amongst those tasked to implement the solution. Such complications may be regarded as a legacy of an untidy and unruly past that was not properly governed or supervised (and are thus mere symptoms of an underlying malaise that must be dealt with), and the need to consider them may draw time and resources away from an “urgently needed” solution that deals with the issue no matter what it takes.

How many times have we been told “not to spend too much time” on something? And yet, that thing may need to be treated thoroughly so that it does not recur over and over again. And as too many people have come to realise or experience, blame very often travels through delegation: people given a task to see through are often deprived of resources to do it adequately, but this will not shield them from recriminations and reprisals afterwards.

It should not demand too much imagination to realise that certain important things will be sacrificed or ignored within such a decision-making framework. Executives will seek simplistic solutions that almost favour an ignorance of the actual problem at hand. Meanwhile, the minions or underlings doing the work may seek to stay as close as possible to the exact word of the directive handed down to them from on high, abandoning any objective assessment of the problem domain, so as to be able to say, if or when things go wrong, that they were only following the instructions given to them: that as everything falls to pieces, it was the very nature of the vision that led to its demise, not the work they did, and certainly not any initiative they might have taken to do something “unsanctioned” themselves.

The Magic Single Vendor Temptation

We can already see that an appreciation of the finer points of a problem will be an early casualty in the flawed framework described above, but when pressure also exists to “just do something” and when possible tendencies to “make one’s mark” lie just below the surface, decision-makers also do things like ignore the best advice available to them, choosing instead to just go over the heads of the people they employ to have opinions about matters of technology. Such antics are not uncommon: there must be thousands or even millions of people with the experience of seeing consultants breeze into their workplace and impart opinions about the work being done that are supposedly more accurate, insightful and valuable than the actual experiences of the people paid to do that very work. But sometimes hubris can get the better of the decision-maker, to the extent that they believe their own experiences are somehow more valid than those of the supposed experts on the payroll who cannot seem to make up their minds about something as mundane as which technology to use.

And so, the executive may be tempted to take a page from their own playbook: maybe they used a product in one of their previous organisations that had something to do with the problem area; maybe they know someone in their peer group who has an opinion on the topic; maybe they can also show that they “know about these things” by choosing such a product. And with so many areas of life now effectively remedied by going and buying a product that instantly eradicates any deficiency, need, shortcoming or desire, why would this not work for some organisational problem? “What do you mean ‘network provisioning problems’? I can get the Internet on my phone! Just tell everybody to do that!”

When the tendency to avoid complexity meets the apparent simplicity of consumerism (and of solutions encountered in their final form in the executive’s previous endeavours), the temptation to solve a problem at a single stroke or a single click of the “buy” button becomes great indeed. So what if everyone affected by the decision has different needs? The product will surely meet all those needs: the vendor will make sure of that. And if the vendor cannot deliver, then perhaps those people should reconsider their needs. “I’ve seen this product work perfectly elsewhere. Why do you people have to be so awkward?” After all, the vendor can work magic: the salespeople practically told us so!

Nothing wrong here: a public transport "real time" system failure; all the trains are arriving "now"

The Threat to Diversity

In those courses in my computer science degree that dealt with the implementation of solutions at the organisational level, as opposed to the actual implementation of software, attempts were made to impress upon us students the need to consider the requirements of any given problem domain because any solution that neglects the realities of the problem domain will struggle with acceptance and flirt with failure. Thus, the impatient executive approach involving the single vendor and their magic product that “does it all” and “solves the problem” flirts openly and readily with failure.

Technological diversity within an organisation frequently exists for good reason, not to irritate decision-makers and their helpers, and the larger the organisation the larger the potential diversity to be found. Extrapolating from narrow experiences – insisting that a solution must be good enough for everyone because “it is good enough for my people” – risks neglecting the needs of large sections of an organisation and denying the benefits of diversity within the organisation. In turn, this risks the health of those parts of an organisation whose needs have now been ignored.

But diversity goes beyond what people happen to be using to do their job right now. By maintaining the basis for diversity within an organisation, it remains possible to retain the freedom for people to choose the most appropriate systems and platforms for their work. Conversely, undermining diversity by imposing a single vendor solution on everyone, especially when such solutions also neglect open standards and interoperability, threatens the ability for people to make choices central to their own work, and thus threatens the vitality of that work itself.

Stories abound of people in technical disciplines who “also had to have a Windows computer” to do administrative chores like fill out their expenses, hours, travel claims, and all the peripheral tasks in a workplace, even though they used a functioning workstation or other computer that would have been adequate to perform the same chores within a framework that might actually have upheld interoperability and choice. Who pays for all these extra computers, and who benefits from such redundancy? And when some bright spark in the administration suggests throwing away the “special” workstation, putting administrative chores above the real work, what damage does this do to the working environment, to productivity, and to the capabilities of the organisation?

Moreover, the threat to diversity is more serious than many people presumably understand. Any single vendor solution imposed across an organisation also threatens the independence of the institution when that solution also informs and dictates the terms under which other solutions are acquired and introduced. Any decision-maker who regards their “one product for everybody” solution as adequate in one area may find themselves supporting a “one vendor for everything” policy that infects every aspect of the organisation’s existence, especially if they are deluded enough to think that they are getting a “good deal” by buying all their things from that one vendor and thus unquestioningly going along with it all for “economic reasons”. At that point, one has to wonder whether the organisation itself is in control of its own acquisitions, systems or strategies any longer.

Somebody Else’s Problem

People may find it hard to get worked up about the tools and systems their employer uses. Surely, they think, what people have chosen to run a part of the organisation is a matter only for those who work with that specific thing from one day to the next. When other people complain about such matters, it is easy to marginalise them and to accuse them of making trouble for the sake of doing so. But such reactions are short-sighted: when other people’s tools are being torn out and replaced by something less than desirable, bystanders may not feel any urgency to react or even think about showing any sympathy at all, but when tendencies exist to tackle other parts of an organisation with simplistic rationalisation exercises, who knows whose tools might be the next ones to be tampered with?

And from what we know from unfriendly solutions that shun interoperability and that prefer other solutions from the same vendor (or that vendor’s special partners), when one person’s tool or system gets the single vendor treatment, it is not necessarily only that person who will be affected: suddenly, other people who need to exchange information with that person may find themselves having to “upgrade” to a different set of tools that are now required for them just to be able to continue that exchange. One person’s loss of control may mean that many people lose control of their working environment, too. The domino effect that follows may result in an organisation transformed for the worse based only on the uninformed gut instincts of someone with the power to demand that something be done the apparently easy way.

Inconvenience: a crane operating over one pavement while sitting on the other, with a sign reading "please use the pavement on the other side"

Getting the Message Across

For those of us who want to see Free Software and open standards in organisations, the dangers of the top-down single vendor strategy are obvious, but other people may find it difficult to relate to the issues. There are, however, analogies that can be illustrative, and as I perused a publication related to my former employer I came across an interesting complaint that happens to nicely complement an analogy I had been considering for a while. The complaint in question is about some supplier management software that insists that bank account numbers can only have 18 digits at most, but this fails to consider the situation where payments to Russian and Chinese accounts might need account numbers with more than 18 digits, and the complainant vents his frustration at “the new super-elite of decision makers” who have decided that they know better than the people actually doing the work.
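To make the complaint concrete, here is a small sketch in Python of the kind of validation rule being described, next to a more accommodating alternative. The function names and the sample account number are purely illustrative; the 18-digit ceiling comes from the complaint above, while the wider limits rest on facts about real formats: IBANs under ISO 13616 may run to 34 alphanumeric characters, and Russian domestic account numbers are 20 digits long, so a hard 18-digit cap is guaranteed to reject legitimate accounts.

```python
def rigid_account_check(account_number: str) -> bool:
    """The supplier system's apparent rule: digits only, 18 at most."""
    return account_number.isdigit() and len(account_number) <= 18


def flexible_account_check(account_number: str) -> bool:
    """A more accommodating rule: allow alphanumeric identifiers up to
    the 34-character maximum that ISO 13616 permits for IBANs, which
    also covers longer all-digit domestic formats."""
    cleaned = account_number.replace(" ", "")
    return cleaned.isalnum() and 1 <= len(cleaned) <= 34


# An illustrative 20-digit, Russian-style account number: it fails the
# rigid check (20 digits exceed the hard-coded 18) but passes the
# flexible one.
sample_account = "40702810900000005555"
print(rigid_account_check(sample_account))     # False
print(flexible_account_check(sample_account))  # True
```

The point of the contrast is not the particular limits but who chose them: the rigid rule encodes one context's assumptions as a universal constraint, which is exactly the pattern the complainant is objecting to.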

If that “super-elite” were to call all the shots, their solution would surely involve making everyone get an account with an account number that could only ever have 18 digits. “Not supported by your bank? Change bank! Not supported in your country? Change your banking country!” They might not stop there, either: why not just insist on everyone having an account at just one organisation-mandated bank? “Who cares if you don’t want a customer relationship with another bank? You want to get paid, don’t you?”

At one former employer of mine, setting up a special account at a particular bank was actually how things were done. But, ignoring peculiarities related to the nature of certain kinds of institutions, making everyone needlessly conform through some dubiously justified, executive-imposed initiative – whether it be requiring them to have an account with the organisation’s bank, or requiring them to use only certain vendor-sanctioned software (and, as a consequence, to buy certain vendor-sanctioned products so that they may have a chance of using them at work or to interact with their workplace from home) – is an imposition too far. Rationalisation is a powerful argument for shaking things up, but it is often wielded by those who do not care how much inconvenience it transfers from the organisation to the individual and to other parties.

Bearing the Costs

We have seen how the organisational cost of short-sighted, buy-and-forget decision-making can end up being borne by those whose interests have been ignored or marginalised “for the good of the organisation”, and we can see how this can very easily impose costs across the whole organisation, too. But another aspect of this way of deciding things can also be costly: in the hurry to demonstrate the banishment of an organisational problem with a flourish, incremental solutions that might have dealt with the problem more effectively can become as marginalised as the influence of the people tasked with the job of seeing any eventual solution through. When people are loudly demanding improvements and solutions, an equally dramatic response probably does not involve reviewing the existing infrastructure, identifying areas that can provide significant improvement without significant inconvenience or significant additional costs, and committing to improve the existing solutions quietly and effectively.

Thus, when faced with disillusionment – that people may have decided for themselves that whatever it was that they did not like is now beyond redemption – decision-makers are apt to pander to such disillusionment by replacing any existing thing with something completely new. Especially if it reinforces their own blinkered view of an organisational problem or “confirms” what they “already know”, decision-makers may gladly embrace such dramatic acts as a demonstration of the resolve expected of a decisive leader as they stand to look good by visibly banishing the source of disillusionment. But when such pandering neglects relatively inexpensive, incremental improvements and instead incurs significant costs and disruptions for the organisation, one can justifiably question the motivations behind such dramatic acts and the level of competence brought to bear on resolving the original source of discomfort.

The electrical waste collection

Mission Accomplished?

Thinking that putting down money with a single vendor will solve everybody’s problems, purging diversity from an organisation and stipulating the uniformity encouraged by that vendor, is an overly simplistic and even deluded approach to organisational change. Change in any organisation can be very expensive and must therefore be managed carefully. Change for the sake of change is therefore incredibly irresponsible. And change imposed to gratify the perception of change or progress, made on a superficial basis and incurring unnecessary and avoidable burdens within an organisation whilst risking that organisation’s independence and viability, is nothing other than indefensible.

Be wary of the “single vendor fixes it all” delusion, especially if all the signs point to a decision made at the highest levels of your organisation: it is the sign of the organisational panic button being pressed while someone declares “Mission Accomplished!” Because at the same time they will be thinking “We will have progress whatever the cost!” And you, not them, will be the one bearing the cost.