Paul Boddie's Free Software-related blog



The Atomisation of Society

Sunday, April 26th, 2026

Just as it periodically has been, and just as it was with the Bitcoin and blockchain fads, artificial intelligence is back in vogue, and now we must suffer hearing about “AI” all the time, shoehorned into everything technological and plenty of other things besides. I can hardly be in a minority in finding it all tiresome and even somewhat disheartening.

Of course, artificial intelligence is a broad discipline, and there are useful technologies doing useful work that may have been reported with varying levels of hype over the course of recent history. Over the last decade or so, machine learning has received a degree of media attention, often doing things related to mundane topics like pattern recognition, supporting applications in worthy domains such as medicine. Interestingly, activities like machine translation and visual object recognition are increasingly taken for granted by wider society, despite being problems that were once considered difficult to solve or tackle reliably.

If worthy domains are not your thing, and your phone is the centre of your life, you might be able to point it at some foreign language text and see it appear, magically translated, in a language you might understand. But this mostly isn’t what the current wave of “AI” hype is all about, even if such relatively useful applications are used to greenwash that hype.

Has your Internet search experience degraded to the point of near uselessness? Worry not: you can now be distracted by something pretending to be intelligent feeding you some misinformation instead. Never mind that an improved search experience, yielding useful, informative results was well within the technological grasp of the well-resourced corporations involved. Boring old part-of-speech tagging, maybe some semantic networks, and a bit of traditional information retrieval technology could go a long way. But, of course, that wouldn’t perpetuate the predatory advertising-fuelled surveillance economy or attract billions of speculative investment dollars.
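To make the point concrete, here is a toy sketch, in Python and with entirely made-up example documents, of the kind of “boring old” information retrieval just mentioned: an inverted index mapping terms to the documents containing them, answering a query by intersecting the sets for each term. Real engines add ranking, stemming and much more, but the core mechanism really is this unglamorous.

```python
# A toy inverted index: term -> set of document identifiers.
# The documents below are invented purely for illustration.
from collections import defaultdict

documents = {
    1: "free software and the surveillance economy",
    2: "machine translation for foreign language text",
    3: "advertising drives the surveillance economy",
}

# Build the index from the document texts.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    """Return identifiers of documents containing every query term."""
    terms = query.split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

print(sorted(search("surveillance economy")))
```

No machine learning, no data centre, just a dictionary of sets: precisely the sort of well-understood technology that could have kept search results honest.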

In Britain, it isn’t enough to saddle the National Health Service with market economics, layers of bureaucracy, and numerous consultants and consultancies siphoning off considerable sums at the taxpayer’s expense, all as the fundamental challenges of the service’s information systems – and core activities – remain undiminished. Now, “AI” will drift in, touted by commercial opportunists and predators, new and old, alongside “data science” practitioners aiming to exploit private health information for their own benefit.

And as if by magic, new “efficiencies” will supposedly result. Never mind that without fixing the bread-and-butter issues, no amount of smoke and mirrors can deliver the magic envisaged. But, of course, it is far easier to wish away one’s problems than to confront them, especially if there are commercial interests with products and services to sell. Why not delegate hard problems to some supposedly all-knowing, all-understanding entity that will magically understand all those things for which the impatient politician or manager simply has no time (or intellectual stamina)?

It is also apparently not enough to saddle educators with extra responsibilities that in a well-organised society would be the task of social and healthcare workers and, in some situations, that of police outreach workers and the police service itself, all as funding is perennially slashed in the education sector. Instead, policy-makers sing tunelessly from the “AI” cult’s songsheet, fuelling yet another corrosive societal problem and, as always, dumping the consequences in the laps of educators for “careful consideration”. And, of course, the welfare of the students barely registers on the conscience of those policy-makers as they perform their victory lap.

As Winter Follows Summer

For the most part, the “AI” in the latest round of hype is that most easily understood by, or most relatable to, the average media commentator. It is the gadget that generates written or visual content as if it were some kind of creative being. For their managerial counterpart with the job title incorporating the dishonestly chosen words “customer satisfaction”, the hammer in their toolbox marked “chatbot” is thus the “go to” tool for every digital transformation job, just as it was in the early 2000s for organisations looking to somehow “deepen” the level of interaction with their customers.

Amusingly, many of today’s corporate chatbots are largely the same kind of keyword-scanning “choose an option” counterparts to the classic phone menu as their predecessors were. The most likely reason for this is that the much-hyped large language models, exposed to querying from random people on the Internet, risk incurring reputational damage to any given hapless organisation as they go off-script and start to parrot “poorly sourced” training data, to put it charitably, of which there may be quite a bit.

And it is the matter of what is used to feed the “AI” monster that should concern those of us in Free Software and every other industry that creates content. It is here that we find out who our allies are, who is really committed to a free and fair society, and who it is that instead chooses to betray us. Unsurprisingly, those who betray us are largely the same group as those who, once employed by a “rich uncle” in the form of some corporate behemoth dangling incentives and a comfortable lifestyle, try to explain away the problematic behaviour of that rich uncle. Worse still, they tend to demand the silencing of any uncomfortable criticism of either the behemoth or themselves in their sordid pact.

You will find these people regularly trolling others in any venue where comments may be left to supposedly discuss the headline content, basking in their privilege and telling everybody else what little everybody on the outside really knows. As layoffs roll in across the technology sector once again, it remains to be seen whether the enthusiasm for defending some corporation’s questionable ethical record remains buoyant or whether there might be some buyer’s remorse amongst those who took the easy money. But of course, these people will glide through another gilded door, ushered in by some other member of the old boys’ network. After all, some of them have been trying to undermine Free Software for years.

This time round, the layoffs are justified by management using the very thing that the corporate apologists seek to defend. Perhaps the most enthusiastic apologists for “AI” have not been shown the door just yet, but plenty of their colleagues have. Then again, there are plenty of people who see a harmful phenomenon and think: “That could never happen to me!” After all, they are the special ones. They might concede that something has come along to “shake up” the industry, but as the special ones, they are the ones who will adapt, who will turn this disruptive force into their “competitive advantage”.

In any online discussion related to “AI” and the software industry, you can expect to read the same old guff. Some self-proclaimed top-flight developer will tell everyone how much “more productive” they are by getting “AI” tooling to generate all those unit tests that they previously had to write by hand. Never mind that their claimed “insane level of productivity” is tempered by their own admitted need to review the slop created for them, a result of them describing their needs in “plain English” when it has been known for decades that the definition of a functional specification for a piece of software, especially when done in a natural language, is one of the harder tasks in systems development.

Many of these people will claim that this lets them “spend more time on the creative part” of their job, never mind that the kind of person who writes such drivel is usually the kind of person that most observers would struggle to categorise as creative. But their advocacy for systems that systematically trawl other people’s work, undergo “training” on it, and transform it for eventual regurgitation in a hopefully distinct and thus plausibly “original” form, and their sudden enthusiasm for the creative side of their work, will find its reward in the end. After all, creative jobs have been amongst the first to be eliminated in this new wave of “AI”, as managers (aping their tech idols) declare that graphic designers are no longer needed, just as I learned in one enlightening, spontaneous conversation a year and a half or so ago, or as anyone can discover just by following the news.

Your manager will surely not care how much more quickly all those unit tests are getting written – not that they cared before, exactly – nor will they value your “creativity” when the latest toy is dangled in front of them promising to do your job. Your peers will not care about any kind of epiphany you have, either. They will still be the special ones and you, like the rest of us, just jealous of their success. You will have had your chance to change the world for the better, and you will have blown that chance.

The Hoarding of Privilege

One might think that technologists would be attuned to the perils of technology for themselves and society, but one should never underestimate the lure of selfishness and self-interest, of not wishing to know about any negative consequences of one’s work or one’s interests. As one might expect, solidarity is the first casualty.

One sees talented developers who have spent their entire career writing their own code, tinkering with “AI” coding tools and enthusing about the output, wasting other people’s time pontificating about why the assembly language generated for some 8-bit microprocessor isn’t as plausible as a Python script for a task solved independently hundreds of times and published to the Internet every single time. One sees talented developers make nice new applications, only then to go off and use “AI” for the artwork or for other “creative” elements of the work.

They might say that they aren’t talented enough to do those tasks themselves, so they can justify delegating them to a machine. Maybe they just want something that fills the gap, or maybe they don’t want to pay for such work. But even if everyone is doing all of this for fun and for free, why do they not involve others in their creative exercise, people who might be able to draw pictures or write prose?

It seems to me that they just do not value such creative endeavours or the role of those in society who are talented enough to pursue them. And as they hit up “AI” to conjure up some art, featuring a particular subject, done in a particular style, they perhaps purposefully neglect the creators of the works that fed the “AI” in the first place: those very same people they breeze past as they go and hit all the buttons.

It must be so wonderful to be in a position of having made plenty of money in a profession that was once valued, maybe even lucrative, to now be able to check out for the rest of one’s career. To pull up the ladder on the next generation, eliminate their opportunities, and to indulge in that age-old practice of berating them for being lazy and not good enough. To indulge in “vibe coding” personally, maybe even promote it educationally, but then to label as decadent all recent practitioners in one’s own profession, simply not made of the same stuff as folk were in the “good old days”.

One encounters people who say that they are glad that they are retired or will soon be retiring. For some of them, this may be a legitimate reaction to an increasingly tiresome and possibly even hostile workplace culture, looking forward to some well-deserved relief tinged with remorse for a vocation that no longer delivers the satisfaction it once did. For others, it just sounds like they are content to leave the world in a worse state than it was, and to abandon the job of reversing this damage to subsequent generations: hardly an isolated occurrence in the modern era.

It can be all too easy for people to downplay the effects of new technologies, often with claims that appeal to ideas about “human nature”. As children go through their entire childhood exposed to technologies that are inadequately supervised and often exploitatively designed, some might claim that cheating on assignments by schoolchildren or students “isn’t a real issue”. And with this, we are presumably supposed to move on. But anyone who knows any teachers will also know that plagiarism is very much “a real issue”, both for students and teachers.

With ready access to online tools, even encouragement to use them, the temptation is too strong for many students, particularly those who have been struggling throughout their education. With a diminishing incentive to learn, and besieged by social pressures amplified by technology, plagiarism starts to look like all they have left in their arsenal. When we hear stories about disengaged young people and their apathy, imagine being the person who has to try and motivate these young people, especially when it seems like every last person with influence over this situation has abandoned everyone affected.

And plagiarism is, in fact, such an issue that Microsoft, having flooded the zone with “AI” toys, offers other toys to detect plagiarism. There is good money to be made by escalating a situation into a conflict and equipping both sides, as those in certain other professions and industries know all too well. I have read tales of educational institutions being infatuated with “AI” to the point of rolling it out, often excused in numerous ways, but mostly amounting to someone in a position of power wanting to try all the toys.

(And given that school administrations are routinely surrendering their classrooms to gambling industry practices and data surveillance predators, “AI” probably does seem like just another fun toy to play with.)

I have heard tales of decision-makers and executives trying to mitigate the harmful effects of their decisions, related to “AI” and otherwise, by leaning into the consumerisation of education and seeking to wave through students who may have resorted to plagiarism. Students who just want the piece of paper, so that they can get on all those celebrated ladders of society – good employment, decent housing – before they are pulled up and out of reach entirely.

But what will become of such students when they move on to the next stage of their education or development, increasingly out of their depth, feeling less and less adequate, confident, fulfilled or secure in their own existence? At what point do we justifiably label the abdication of responsibility, by those in authority and those who seek to profit from such misery, the abuse that it arguably is?

“You start with the hope that the next swipe will bring a reward, and eventually it just feels good to see the images floating upwards. And you believe that you’ll get something out of it in the end. But that’s the same dirty trick that gambling machines use. The house always wins.”

When society fails to protect its members as they grow up, how can those in privileged positions criticise people, overwhelmed and desensitised by torrents of increasingly harmful content, for not participating in or engaging with society? Maybe those elected and appointed to protect our interests could do exactly that, instead of looking out only for themselves and the vested interests who have been lining their party’s pockets.

Quality Uncontrolled

One potentially common element in the workplace and the classroom is how each group perceives the materials they encounter, along with their attitudes towards the cultural works of their own society. For children growing up in earlier times, one might expect the consumption of books, magazines, newspapers, television series, films, popular music and other such works to have played a defining role in their perception of culture and its value.

But after years of relentless and cynical commercialisation, children have increasingly been growing up in an environment of relentlessly derivative entertainment content. Instead of new and original stories, endless comic book franchise reboots and the like, largely to keep trademarks and other “intellectual property” warm and out of the public domain, may have habituated people to expect less and less from culture and to place a lower value on it.

Instead of being engaged by cultural works, those works become mere wallpaper, something to have on in the background while other, more harmful, forms of engagement steal the attention of the viewer or listener. And with their perspectives and beliefs unengaged and unchallenged by mainstream culture, people risk being sucked into a bubble all of their own, all too readily and easily fed by “AI”, reinforcing their flawed views and pandering to their prejudices.

Why would you want “AI” to generate content for your own consumption? For anyone habituated to having their favourite superhero meet and/or fight their other favourite superhero in endless re-runs of generic, commercial filler, it isn’t such a big step to go on and have their tummy rubbed all the time by unchallenging and probably inaccurate content that happens to appeal to their existing biases, now readily produced in bulk and on demand by “AI”.

For some of us, the process of creating something new is part of the eventual reward, and the process of researching, understanding and communicating something is engaging all by itself. But the degradation of culture to a mere consumable risks the marginalisation or even eradication of the creative process, the investigative process, of historical and critical enquiry, all so that people can be soothed and entertained.

There is a relatively well-known quote about AI being supposed to free up a person’s time for pursuing their art by taking care of the cleaning and the laundry, but instead “AI” has made them and their art redundant and leaves them with the cleaning and the laundry. What might this have to do with the workplace? Well, to answer that, we have to address the phenomenon of the software product and its gradual erosion.

Angst or Realism?

When the well-known technologist Tim Bray raised the issue of “AI Angst”, alongside broader issues, the accompanying off-site discussion of the article predictably featured the usual consumerist “works for me” and aspirational “more productive” tropes. But it also exposed the sentiment that “AI” is also quite able to take the joy and the motivation out of doing work and being in a profession. Amongst those who had the most enthusiasm for vibe coding were claims that only “5% to 20% of the work is interesting” in a modern programming job. So, what does that say about the state of the industry?

One factor might be that the industry is subject to a continual inundation of fads and trends, often imposed on the profession from outside, facilitated by opportunists who can persuade management and executives of productivity boosts and reductions in expenditure. Another is that enthusiasm for “AI” is evidently higher amongst those whose development work involves a high level of “donkey work”. This leads to a pertinent question: have technologists failed their profession by perpetuating cumbersome, inadequate technologies?

If you have ever had to work with certain widely used, well-resourced software today, you will be familiar with the sensation of seeking advice or guidance on how some aspect of the software works or is to be used, only to discover that there is no documentation, or that the standard of documentation is so dismal that one can only wonder how such software came to be used so widely and in such critical applications in the first place. Not that wondering about it is in any way helpful.

Over the years, various remedies have been thrust onto the scene, from plain old Internet search, hoping to dig up some gems of wisdom, to discussion forums aspiring to provide relief to adherents of particular technologies, to the now-familiar question-and-answer site concept popularised by the likes of Stack Overflow and its affiliates. But alongside the frustration that inadequate documentation may bring is the frustration and general disillusionment of having to continually ask questions, sometimes of vain, petulant and uncooperative individuals, questions that should have been definitively answered long ago, with those answers recorded coherently for posterity in actual documentation.

We might then understand why some people could be tempted to use “AI” chatbots, desperately trying to get insight into things that should have been made clear by actual human beings in the first place. The chatbot might be misleading or counterfactual, but it is probably going to act somewhat cooperatively, not shame the user about their lack of knowledge, or withhold information to coerce a form of deference. Well, at least not yet, anyway.

This sorry situation is a consequence of the way the software industry has been heading over the last few decades. Under the banner of agility, competitiveness, and that crucial “time to market”, forever to be optimised and minimised, things like documentation are critically undervalued. “Read the code!” exhorts one techbro to the other, advocating the complete elimination of any commentary, deemed unnecessary because “the code is the documentation” and because what they coded was obvious to them when they were “in the zone”.

Never mind that the commentary would have revealed the intended behaviour of the code, along with the deficiencies of the code that was actually written. But, of course, no-one had any time for that, let alone any time for writing a coherent guide to the software, its architecture, and how other people might interact with it in various ways as users of different kinds and developers needing to fix and extend that code. All of that stuff was low-end work according to the techbro, as are other kinds of programming that they do not personally value. (Still, the collision of the butch “rewrite everything in Rust” brigade and “AI” can provide some entertainment and maybe an occasional reality check.)

It also does not help that documentation has joined the other facets of development in being subjected to various technological and methodological fads that do more to create opportunities for certain industry players than to improve the quality and breadth of material available. Just as one’s heart sinks as the primary source of guidance for a project turns out to be a GitHub repository page, so it does when one encounters a documentation site that was produced by Sphinx, with plenty of hastily scribbled sections littered with “admonitions” and caveats.

Tim’s use of the term “angst” rightfully received a strongly worded response from one commenter, not least because labelling people’s reactions to seemingly overwhelming pressures as personal weaknesses is something Tim might himself regard as victim blaming. Tim seems like a generally good guy with a great deal of self-awareness, but I imagine that his position in the industry, along with that of elements of his readership, some probably having done very nicely indeed from their tenure at various West Coast behemoths, makes the general demographic less than reflective when they lament the shocking lack of alternatives to their favourite cloud services.

To an extent, it is a bit like former politicians who take on moral causes after their positions in power are over and as their influence steadily diminishes. Where were they when other people needed their help in furthering those causes? When adjustments to policy, so easily done, might have made a significant difference. Similarly, with Free Software and the provision of sustainable, ethical technology more broadly, the dominant ideology promoted vigorously by West Coast capitalism often seems to involve discretionary support to worthy causes by wealthy people, many of whom did just fine while those worthy causes were kept marginalised, and when various structural causes of inequality, noted by Tim himself, were conveniently ignored because that would involve wealthy people paying a bit more tax.

The Idiocracy

There is always some idiot who pipes up in online discussions about how they asked some chatbot about some topic, “and it said this”. It is almost like they want a prize for having done something that anyone else could have done. And what does this kind of “idle wondering” usage of chatbots, fuelled by the consumption of colossal amounts of electricity, actually serve? If anyone were really interested in a topic, and were to have the knowledge and understanding to actually assess chatbot output, then they probably wouldn’t be using one in the first place.

That leaves the most likely kind of user as the sort of middle manager who doesn’t understand anything, who just pushes the paper on to someone who is supposed to “action” the information they were given. At what point is the human, relaying information they don’t understand to impress people about “what they know”, just a conduit for the activities of machines? And are they not then just a puppet participating in the game of assessing whether the machines are intelligent or not, abdicating their own genuine intelligence in the process?

To be fair to those wishing to put questions to machines and expecting some kind of summarised response, questions and other naturally constructed prose could conceivably help formulate more effective queries than simple combinations of keywords. The grammatical roles of words and the relationships between those words can disambiguate between the different meanings of some words and constrain the context of any particular search for information. Returning a summary that paraphrases the materials that were found might help in determining whether those materials happen to cover the right topics.

But what I discovered recently when being offered an “AI” search summary for a particular, highly specific, search term was the kind of thing people describe as hallucination. More accurately, though, it was reminiscent of a lazy student of a certain age, or someone “hustling” to impress their equally superficial superiors, stating something as a fact and referencing a bunch of supposed supporting material, only for none of that material to actually confirm the fact or quote the term in question, let alone define that term as the thing confidently stated in the opening sentence.

Once upon a time, when machines were meant to be doing things that were described as “reasoning”, researchers were expected to show how the machine had worked through to its conclusion, revealing things like “facts” and “inferences”, and thus providing insights into the state of the machine’s encoded “knowledge base”. One would then be able to verify for oneself whether such a conclusion was sound or not. But now, it would seem, nobody really cares about accuracy or correctness, but only whether the damned machine has the right kind of “swagger” that would convince a room full of imbeciles with something that might sound credible enough to them.
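For illustration, here is a minimal sketch of that older, auditable style of machine “reasoning”, using invented facts and rules: a forward-chaining loop over an explicit knowledge base that records every inference it makes, so that anyone can work back through the trace and verify the conclusion for themselves.

```python
# A minimal forward-chaining inference engine. Facts and rules are
# invented for illustration; the point is the auditable trace.
facts = {"socrates_is_human"}
rules = [
    # (premises, conclusion)
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

trace = []  # record of every inference step, for later inspection
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire a rule when all its premises are known facts and the
        # conclusion is new; repeat until nothing more can be inferred.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            trace.append((sorted(premises), conclusion))
            changed = True

# The machine's conclusion can now be checked step by step.
for premises, conclusion in trace:
    print(f"from {premises} infer {conclusion}")
```

The trace is the whole point: each conclusion is tied to the facts and rule that produced it, which is precisely what the current generation of systems declines to offer.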

And we are entering a time when we should be concerned about how people might readily check the correctness of “AI” query responses. It was always bad enough when a search engine would produce results in which the search terms didn’t appear, even in a derived form, meaning that some “search engine optimisation” vulture had deceived the search engine with a bucket of false keywords stuffed somewhere into a page or site. But now the search providers have an excuse to push “AI search”.

Having trained their models on the bulk of the Web, they can then push out arbitrary “summaries” while deliberately obscuring the content that those models consumed. Thus, they can deny plagiarism by making source material difficult to find, while also omitting or excluding content that contradicts or refutes any dubious assertions or conclusions presented as fact by their services. They arguably don’t even have to try very hard to degrade the experience, giving the whole exercise the air of plausible deniability.

As with predatory social media, the training of “AI” also has considerable emotional costs for the people whose largely exploited labour goes into training it. Not only are they not encouraged to properly correct disinformation, but they may not be suitably qualified to do so. After all, everyone can only be knowledgeable about so much. Having seen behind the curtain, their advice is not to trust it. For “AI” companies, hyping up their products, and for the idiocracy, the illusion of the perfect, shiny, all-knowing android has to be upheld at all costs: for the former so that the money can keep rolling in, and for the latter so that they can presumably look cleverer than they actually are.

The tiresome industry insider will criticise anyone suggesting that the average chatbot exhibits only superficial characteristics of intelligence, hand-waving towards mysterious models while playing the “you don’t know what I know” card, but it isn’t unreasonable to suggest that the whole phenomenon is barely above the level of a parlour trick. Does the chatbot really understand anything or do people just read meaning into what is effectively just banter? Is it merely a new Eliza for the disinformation age?
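For those who never met the original, a minimal Eliza-style responder, with invented patterns, shows just how little machinery is needed to sustain such banter: keyword matching and canned reflections, nothing more.

```python
# A minimal Eliza-style responder. The patterns and templates are
# invented for illustration; the original used a richer script, but
# the mechanism, pattern matching plus canned reflection, is the same.
import re

patterns = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]

def respond(utterance):
    """Return a canned response for the first matching pattern."""
    for pattern, template in patterns:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am worried about AI"))
```

That a few dozen such rules convinced people in the 1960s that they were being understood is exactly why the question above is worth asking of today’s far larger, but not obviously different in kind, systems.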

The Apologists

There are always plenty of people willing to pipe up when something that amuses or delights them is criticised in some way. One will undoubtedly encounter people who cosplay their fantasies of living in the future by infuriatingly using the term “an AI” in an unsarcastic and completely sincere way to refer to a system supposedly employing AI. In doing so, they effectively attribute general intelligence and even sentience to those systems. Criticise such sloppiness and you’ll get something like this in return:

“Nobody had any problem calling it artificial intelligence back when it was far less capable than it is now.”

Once again, such people fail to appreciate the distinction between artificial intelligence as a discipline and the promotion of the discipline in broader society, where despite the buzz in superficial media circles, applications of AI were often met with much scepticism. It was frequently obvious that the systems involved were exhibiting traits of ostensibly “intelligent” behaviour, but these systems could not be regarded as being intelligent in their own right or in a general sense.

Even with huge amounts of data and computing power brought to bear on such matters, chatbot output still has the smoke-and-mirrors character familiar from earlier demonstrations of the capabilities of AI. Sadly, popular culture is now configured to amplify the hype as opposed to deconstruct it. So, even pointing this out has people believing that researchers in earlier times would have been awestruck by today’s “AI” chatbots, when in fact they would almost immediately recognise the phenomenon. Indeed, they would be disturbed by the way society has embraced such technologies unquestioningly, perhaps observing that earlier demonstrations were mere laboratory experiments, potentially dangerous if they escaped and proliferated.

What has changed since those earlier times is the broader availability of computing power that delivers a more convincing “demo effect” of the technology. This makes decision-makers think they can dispense with humans and roll out the chatbots. Couple this with the way that the populace has been conditioned and manipulated in terms of expecting and accepting less from public services, private companies, their employers, in their careers, and in their lives more generally, and it is not surprising that people are consuming such technologies. Those who believe that they are only doing so recreationally are predictably dismissive about the negative effects on vulnerable people who might end up using such services because all of the responsible, humane options have been eliminated for the sake of “convenience”.

There is also the arrogance people exude in claiming to be more comfortable with technological change than the average person, even as they reveal their own insecurity about the nature of intelligence. To me, the idea that other creatures possess a range of intelligence is indisputable, and we are regularly presented with observations made about intelligence in the natural world, animal behaviour and cognition, that should merely confirm that we as humans are not quite as special as we might think. Many people engage constructively with such observations and show that they can readily accept notions of more pervasive intelligence in nature.

Meanwhile, the insecure but outwardly confident technophile presumably scoffs at the notion that, say, animals might employ forms of reasoning or possess forms of cognition or information processing that could rival those of humans. They would exhort others to not attribute “human” traits to other species, even as they readily ascribe fanciful characteristics to machines running mystery payloads of software, presumably because humans were involved in their creation.

Some apologists play the inevitability card, that this is simply another change following on from many that society has already absorbed and survived, and that this will somehow be digested, too, seeing society adapt and life go on as before. But here, even those who appreciate the challenges these technologies pose do not seem to understand the cultural and societal calculations that accompanied earlier forms of technological change. There are long-enduring cultures on this planet that have had social rules about the depiction of the human form for thousands, maybe tens of thousands of years, and yet our arrogant “modern” societies think we have nothing to learn from such cultures.

Again, the apologists might reveal their ignorance by scoffing at pigments, paints and dyes as technology, but they gave humans the ability to choose how they might be portrayed. We live in an age where the application of computational technologies may seek to eliminate any distinction between observed reality and precisely concocted fantasy in a way that is still scarcely believable even now. Fake pictures, video and audio can be presented as genuine representations and recordings of reality, leaving their audiences deceived.

Maybe older cultures did not need to see the emergence of such technology before they understood some fundamental social lessons that still remain unabsorbed by our supposedly “advanced” societies today. And while it might be said that perhaps those older cultures needed thousands of years to reach their own conclusions, we might also observe that our own societies do not have the luxury of thousands of years to tackle the effects of this corrosive fakery. This might even remind you of another pressing, existential threat to humanity.

One notable incident in this regard involved content creator Jeff Geerling, whose voice was cloned by a supplier of technology products to use in advertisements. Fortunately, the company in question backed down when challenged over the unlikely similarity of its synthesised voice to Geerling’s own, and the incident was resolved relatively amicably and with incredible grace by Geerling himself. The incident highlighted the proliferation of such tools and their widespread availability for all kinds of applications.

Naturally, there are apologists for such tools, too, of the same school that presumably insists that nuclear weapons are not inherently bad, just how they might be used. Our societies show little sign of resilience in the face of such threats. Instead of robust legislation, regulation and education, we are left with the usual predictable media commentary about “scams” with various workarounds to “stay safe”. And, those same media outlets still promote the social media lifestyle, exhorting everyone to share their lives online and to feed the predatory social media monster.

The apologists might regard “AI” as harmless fun or empowering (to them), but there is an arms race in progress and an industry dealing in what might be described as information weaponry, building on the delivery mechanisms of predatory social media, to further degrade society’s resilience, making it impossible to trust voices, images, video and content. We might like to believe that we are the sophisticated ones, living in a “modern” society, in contrast to cultures where images of the human form are forbidden. One might wonder whether such rules are less about “taboos” – a culturally loaded term – and more about such societies having a fundamental realisation that our own societies fail to appreciate or understand, dismissing it with arrogant talk of our own “progress”.

Media coverage of “deepfake this” or “deepfake that” predictably circles around sensationalism, involving deceased celebrities if the audience risks being unmoved. But on a more ordinary level, is it progress to no longer be sure that the person sounding like or looking like a member of your family on the other end of a voice or video call really is that person? Is it progress to need a codeword to be somewhat sure?

And is it progress to only be sure if you are standing there in the same room as them, until the day when some ghoul, possibly one who has grown tired of monetising deceased celebrities, eventually introduces lifelike androids to impersonate random people? Even before that miserable day arrives, is it progress if some other ghoul decides to “deepfake” deceased relatives to torment those who are in mourning?

One of the more interesting contributions to the discussion about “AI angst” was the one giving the Vatican’s view on “AI”, eliciting some considered responses. Naturally, the Catholic church is hardly the most popular institution for a variety of reasons, but one has to concede that on matters of theology and philosophy, on issues that shape humanity’s view of itself and its relationship with the natural world, the institution can hardly be considered to be staffed with lightweights, even if we might disagree – sometimes strongly – with its position on certain social issues.

Paying for Other People’s Privilege

As with many of society’s ills, one can always choose to metaphorically put one’s head in the sand and ignore those problems that mostly seem to only affect somebody else. Regardless of whether one does so, however, such problems have an annoying habit of landing in one’s lap, anyway. A few months ago, I got an e-mail from my hosting provider telling me that a “lot of traffic” visiting one of my sites was “causing excessive CPU usage and disruptive service for other customers”. Of course, this was due to a multitude of client addresses hammering my site – a repository browser – and crawling all over every last published resource.

It was suggested that I put my site “behind Cloudflare” since they offer various services to mitigate the effects of “bot” traffic, with my only guidance being a link to a blog about a service that Cloudflare offers. There was no guidance about what obligations towards Cloudflare I might have as a result, whether there might be payment involved, or whether Cloudflare might want or get something else from me, were I to sign up. In the end, I just put authentication in front of my site and re-enabled it, hoping that the bots would eventually give up if all they saw was the appropriate HTTP “authentication required” response.
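For anyone wondering what that kind of barrier looks like in practice, plain HTTP basic authentication is enough: every unauthenticated request, crawlers included, simply receives a 401 response. A minimal sketch for an Apache-hosted site might look like the following, where the file paths and realm name are purely illustrative and not the actual configuration used here:

```apacheconf
# Require credentials for the whole site; anonymous
# crawlers only ever see a "401 Unauthorized" response.
AuthType Basic
AuthName "Private repository browser"
AuthUserFile /home/example/.htpasswd
Require valid-user
```

The password file itself would be created with `htpasswd -c /home/example/.htpasswd someuser` and kept outside the document root so that it cannot itself be fetched.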

Consumerists would point out that I’ve been doing all of this wrong, of course. Firstly, they would tell me that I should have moved my repositories to GitHub for the convenience of having Microsoft do the hosting and me paying nothing for the privilege. They would tell me that having GitHub suck all of my content into its “AI” consumption engine, for regurgitation to other users through their copyright laundering “AI” tools, is “no big deal” and that I should be happy to see my code used in so many other places. They would presumably purr about GitHub’s cumbersome and overworked user interface and its “collaborative tools” that, in classic Microsoft style, try and insert the service into everybody’s workflow.

But the end result of not “going with the flow” – of paying for the privilege of offering a service on my own terms, contributing to an independent hosting provider and keeping their business viable, allowing other interested parties access to my software, and generally trying to uphold my own privacy and that of those who wish to interact with me and my content – is that I and others who try and uphold our autonomy and defend our own interests are punished by the Internet equivalent of looters and pilferers. And the consumerist response involves an outcome that is not entirely unintended on the part of the Internet’s dominant corporate interests.

Just as it is rather convenient for the likes of Microsoft to promote “AI” tools in education, only to also offer “AI” plagiarism detection tools in their ubiquitous services, it also seems rather convenient that the degraded Internet environment, increasingly subverted to feed “AI” products, just happens to drive people towards services and platforms run by the Internet’s behemoths that offer “AI” products and services as part of their headline feature sets.

Naturally, such corporations would claim innocence of any random botnet involvement, rather like how conventional suppliers of physical products routinely claim ignorance of bad things in their supply chains, but ultimately the finger-pointing is of diminishing significance. If someone cried “gold” and now an entire landscape has been obliterated, does it matter who was driving which excavator as everyone piled in to make their fortune?

The coercion used by software companies and service providers to “opt in” customers and users is, of course, entirely deliberate. Their goal is to make everyone complicit in mass copyright infringement and, amplifying classic right-wing hypocrisy that appeals to “personal responsibility” yet weakens regulation and enforcement, normalise sentiments that “everybody is doing it” and so rule-makers should “not bother” trying to curtail antisocial behaviour.

Such collective antisocial behaviour affects Free Software in other ways. It causes a degradation of software quality and Free Software contributions, potentially for malicious purposes, grinding down contributors. All of this effectively conspires against independent, modestly-resourced Free Software projects, at the very least inhibiting or shutting down open collaboration, and at worst entirely eliminating such projects as alternatives to well-resourced corporate projects.

People may claim to be applying “AI” tools with the best of intentions, but those using the tools seem to be happy for them to blatantly plagiarise other works and then effectively mark their own exam, denying what is obvious to any moderately competent observer. Or in the ominous words of one such practitioner:

“Beats me. AI decided to do so and I didn’t question it.”

And, as we have seen before, the result is a degradation in Free Software offerings, of impaired desktop experiences, of inscrutable technologies promoted by vested interests and proliferated unquestioningly, and the continuing need for all of us who advocate Free Software adoption to apologise for the state of the software we recommend, as proprietary software companies cash in on the perpetual “lack of polish” or other deficiencies, perceived or real, of that software. Fewer and fewer viable choices remain, driving the average person into the arms of the monopolists, just as they always intended.

If nothing else, Free Software advocates should be pushing back hard against “AI” proliferation, but many of them won’t. I know to my personal cost what adhering to a set of principles entails, but many people are rather more flexible when it comes to following through on what they supposedly believe in. Free Software advocacy typically sits at the intersection of at least two professions: software development and legal practice. As software developers, plenty of people claim to understand the nuances of AI, readily telling you that you have it all wrong in your criticism of the technology.

And software practitioners are often frustrated by law practitioners and their inability to correctly perceive and interpret the nature of technology. Yet, if the legal profession has concerns about AI, why should other professions accept its use so unconditionally? But that is what people do: people who should know better, who claim expertise and the right to lecture others, and then somehow wave away any concerns because it suits them. When legal cases collapse due to “vibe lawyering”, justice may be denied and crimes may go unpunished. When software fails due to vibe coding, the effects risk being just as serious and, in some cases, even worse.

Practitioners, institutions and policy-makers seem to have, for their own reasons, an inability to confront awkward problems and the grind of getting the job done. Consider the case of a catastrophic data leak which imperiled thousands of people due to inappropriate tool usage and a lazy office culture where people couldn’t even be bothered to add two words to an e-mail subject line as the only, almost laughable, technical safeguard against data breaches.

When that becomes just another opportunity to peddle and to apply “AI”, it not only demonstrates that everyone responsible has effectively “checked out” and cannot be bothered to think very hard about the basics, despite the eager software practitioner’s tiresome refrain of “security!” in every technical forum they dignify with their presence. It also shows the ethical deficit these people have with the rest of society and with the people whose lives depend on their diligence. Because we know that “AI” will be just another scapegoat when things go wrong, excuses will be made, and the “vibe” will go on.

Lawmakers, meanwhile, seem infatuated with their new toy, as they also were with predatory social media, delighted to merely rub shoulders with indulged foreign oligarchs, potentially eyeing the possibilities of lucrative sidelines or post-political positions, instead of developing and furthering the interests of those that elected them to office. As has been the case with other topics of concern, notably software patenting, it seems that lawmakers can be very happy to listen to selfish commercial interests from beyond their electoral boundaries instead of the people they are supposed to represent. (Hint: “the south-west of England” is not the region one such lawmaker was elected to represent.)

Thus, blatant plagiarism, pilfering and infringement under the pretense of a “creative” act seems entirely reasonable to distracted lawmakers, never mind that letting some of the highest valued corporations on the planet have free and unencumbered access to the lucrative output of a nation’s supposedly prized creative industries is likely to plunge those industries into economic ruin. In the case of the United Kingdom, this would be only another chapter of the nation’s leadership stupidly squandering what remains of the cultural “soft power” that the nation once had, only instead of doing so to pander to the bigoted, the ignorant and the deceived, it would be to the kind of people who gladly facilitated that earlier deception.

Some might claim that “expanding copyright” to prevent “AI” misuse of content is wrong, noting that training activities are perfectly legal and justifiable (obviously ignoring the costs incurred by those of us who pay for our Web hosting), and likening the publication of a model to “publishing facts about copyrighted works”. But what about the publication of “AI”-generated works? The suggested “simple way” to “protect artists from AI predation” involves withholding the application of copyright to such works, preventing Big Content from monetising such works, and thus deterring Big Content from adopting “AI” and firing its creative workers.

While that sounds like a great economic “hack”, it doesn’t confront the broader phenomenon of the cheapening of content at all. Big Content has arguably already pivoted to technology, streaming, and the like, but even if they might suffer from such policies, it does not mean that creators will gain. There are plenty of “slop” creators out there today whose business models do not rely on asserting copyright for their works. Will manufacturers of weird jigsaw puzzles care? They just want a stream of free stuff to slap on their products, and what if someone clones them? Well, that was last season’s product.

Allowing the “AI” peddlers to consume and regurgitate copyrighted works without constraint, allowing them to circumvent copyright under the pretense of “creativity”, is still harmful even if the peddlers cannot “protect” those regurgitated works. If someone consumes a Free Software application or library, waves the magic wand of “AI”, and then publishes it, the mere availability of this dubious derivative cheapens the original software, undermines its licensing by steering potential users to a “public domain” clone, reduces incentives to continue developing the original software, and thereby reduces its viability. Flooding the zone with such slop may benefit corporations wanting to circumvent Free Software licensing, but it does not benefit Free Software.

The Atomisation

There are numerous social and economic threats from the introduction of technology under the banner of “AI” that might justifiably elicit a negative reaction from a lot of people. When such reactions are articulated, the response from “AI’s” cheerleaders tends to involve labelling them as “emotional”, “touchy” and “irrational”. Of course, this is just a cynical way of avoiding any kind of constructive discussion about the impact of the accompanying harmful economic agenda, reminiscent of all of the other shifts in industrial policy that left people disadvantaged and impoverished.

In previous transitions involving something that could be described as automation, there was always the chance that those whose manual work was eliminated by the introduction of machines might still benefit from the exercise. When the production of textiles or clothing was to be automated, for instance, there might conceivably have been work to be had in applying one’s expertise to the design of the machines themselves. And there might also have been a limit to the automation, preserving opportunities for those particularly skilled in those tasks beyond the capabilities of the machines.

But today’s enthusiasm for “AI” suggests that whole categories of jobs will be eliminated, that no-one will write code any more, or write prose, draw, paint, make music, and so on. And the intention of those setting this agenda is that there will not be any possibility of somehow migrating to either the automation side of this transition, especially since coding in “AI” companies is meant to be left to the “AI”, or to more specialised forms of paid labour.

Previous transitions were handled very poorly indeed. In Britain, the phrase “on your bike” was largely the economic strategy of the Thatcher government as it gutted various industries, effectively condemning regions of the country to underdevelopment, unemployment and hardship, exacerbating inequalities within the country and divisions that persist to this day. Those too young to know or to remember might recognise some of the cultural phenomena from that earlier time because we see them again now in similarly potent forms, not that they ever really went away.

The practice of “divide and rule” is used to pit disadvantaged and marginalised groups against each other, steadily degrading other sections of society to make them poorer, weaker and too concerned with their own survival to question the general direction of society. Foreigners, immigrants, those with health problems, those not blessed with wealth, those who have otherwise experienced misfortune that blights their lives, and others are conditioned to expect less from their own lives, to feel guilty about their own situation and for needing or expecting help, or even asking for it.

Accompanying this is the promotion of “charity” and demonisation of taxation. How can anyone argue against charity, one might ask, if it is to do good in the world? One can argue against charity being a phenomenon that, instead of being a way of helping others, is used to diminish the help dispensed by society and to make such help entirely discretionary, conditional on the whims of the supposedly generous donor. Such a phenomenon serves only the wealthy who then get to choose where society’s money is spent, instead of paying their taxes and allowing broader society to make such decisions for itself.

Thatcher’s Britain was famous for the pursuit of selfishness, but our societies today face what might be called the “atomisation” of society where everyone is encouraged to pursue their own particular agenda and reward their own selfishness. Every time someone pipes up with the idiotic remark “nah, I’m good”, especially when concern is expressed for the weak or defenceless in society, it is merely another expression of the more culturally established (and derided) “I’m alright, Jack”.

But what such selfish remarks effectively signal is “more for me” from someone who already has plenty. And it obviously signals “less for everyone else” regardless of whether everyone else can manage with less or not. It is where people are encouraged to look out for their own personal interests at the expense of society, not realising that society makes their Amazon Prime deliveries possible in the first place.

Those impressed (or maybe bamboozled) by “AI” may remain unconvinced that the phenomenon might be an unsustainable and unprofitable bubble, one that consumes more investment than it can ever pay back, and that its most “disruptive” form has no genuine or necessary applications. After all, it is very popular and, for some, a nice little component in their plump investment portfolio featuring Nvidia and the other technological horsemen of the apocalypse. Why on Earth would anyone question its viability? Or in other phrasing:

“What are the millions of people using GPT for, if it’s a solution looking for a problem?”

The answer to such a question is this: it is for hollowing out forms of work that are fulfilling or professional, leaving the tedious, hard-to-automate stuff to human beings who will end up doing commoditised work, with all the “hustle” for that work and the driving down of salaries that it entails. All so that people whose livelihoods and lifestyles have been ringfenced one way or another can be entertained for a few seconds at a time with their “AI” videos and other “slop”.

The Degradation of Expectations

As neoliberalism took hold, with the privatisation of public services and the deregulation of various markets, consumerism was the shiny trinket that was used to distract from any hardship that was experienced or from the structural weaknesses being introduced. Having more choice in the shops may have been welcome, and if you couldn’t afford to buy anything, there was plenty of credit sloshing around to let you join the party. As for those public services, there were shares you could buy to join that party, too.

Decades of neoliberalism have given us consumerism as a solution for everything, seeing the replacement of basic, universal services with market-driven “consumer choice” involving a bunch of different “providers” who may or may not offer the same quality of products or levels of service that the customer would have a right to expect. Exciting as it may be for some to be able to choose between different flavours of utility provider, postal service, train or bus company, and so on, it all rewards people with plenty of time, money and a propensity for getting bored easily who can “shop around for the best deals”, leaving everyone else disadvantaged by unscrupulous businesses who, according to neoliberalism, merely occupy a place in the market appropriate for the “value to the customer” that they deliver.

Even where public services are maintained, consumerism and the attraction of new toys amongst ostensibly bored or ideologically fixated politicians can easily start to corrupt those services for the benefit of private operators whose only interest and instinct is to make money while they can. Having new toys to play with supposedly makes life more exciting for similarly bored and easily distracted members of the general population, never mind that they burden other public services with the consequences of indulging antisocial behaviour and make the lives of other individuals miserable.

When such companies have, for example, a record of perpetuating their abuse of healthcare professionals overwhelmed during a pandemic, what right do they have to make demands of anyone so that they can keep going with business as usual? But compliant politicians will continue to pander to them. After all, the religion of neoliberalism elevates those companies and their greedy founders to objects of worship.

And if those companies can damage publicly run services to the point of sufficient public dissatisfaction, those politicians can claim that the state always fails at everything and should leave such “business” to business. Naturally, those politicians do not offer to resign from their own jobs, although they are, I suppose, already doing the bidding of private enterprise, just without being willing to forgo their publicly funded salary.

In both the public and private realms, many of us are presumably familiar with certain trends. When interacting with companies to obtain help or support with their products and services, or when attempting to navigate the public bureaucracy, one might recognise what I call the notion of “penalty laps” to compensate for cheapened and hollowed-out services, making the customer or the user spend time doing unnecessary and pointless work to prove that they deserve actual support.

One cannot simply communicate with another human being, but must instead interact with a chatbot first, which merely parrots information that is readily available and often of little help to anyone actually requiring support. Or maybe a long sequence of questions, deployed on a Web site or over the telephone, must be carefully navigated before a human can be summoned to communicate. Such inconvenience is used to herd the increasingly unhappy customer or user into other channels, framed as being more “convenient” and almost certainly in the form of an “app”.

The easily amused or distracted customer or user may think that they are getting better service for free, but they are in fact subsidising the operating costs of the institution concerned. And so, businesses and institutions continue their externalisation of operating expenses, insisting that customers or users provide their own equipment to interact with a business or service, paying the advertised costs as well as the hidden costs of acquiring a nice phone, insuring it, replacing it when those institutions decide that it is “too old”. You pay to work for them now, in case you didn’t realise.

Proof of “work” may have been the selling point of ruinous, dubious and entirely unnecessary cryptocurrency schemes, but making people continually prove that they are somehow worthy is an established trait of an exploitative society. Given that the neoliberal society will continue to eliminate decent, fulfilling work, one might expect efficient mechanisms to help people find other opportunities. But instead of making opportunities for people who seek help, such a society and its institutions have such people effectively doing useless busy work applying for non-existent jobs in a largely fictional “market”.

And with no economic strategy or vision, but with a worldview that involves pulling the public purse strings as tightly as possible to close the purse, they perversely create plenty of jobs in the bureaucracy for people to administer penalties and to deliver judgement on other people’s personal situations. Didn’t apply for enough meaningless non-jobs or unsuitable, informal, casual work? Then the entitled people in the bureaucracy aspiring to be like their managers and political leaders, cultivating a belief that they “deserve” their opportunities unlike those lazy people on their books, will deny the help that people seek just to be able to turn their lives around.

Because the attitude in the neoliberal economies is that looking for a job should actually be a job. So, the branch of the state administering work-related benefits is mostly there to coerce people into looking for jobs that they aren’t suitable for, causing huge volumes of speculative applications that even the applicants know are senseless, making it harder for genuine recruitment to occur.

And then there are all the fake job adverts, either posted to cover up nepotism or corrupt practices, or to puff up some company’s image, or to give functionaries something to do. And people wonder why it is that there’s no real economic growth, allowing our glorious leaders to claim that there is no money to spend on building up society, that improving the quality of life for those who need it will just have to wait.

“We can’t afford it” is the perpetual excuse for a lack of public services and crumbling infrastructure. Can’t get to see a doctor or another healthcare specialist? With the zone flooded with “AI”, desperate people turn to desperate solutions with disastrous results. We should, of course, expect better support for the people who need it in our societies, from actual human professionals, but that requires investment and commitment. Expect to be fobbed off with technological toys that cosplay the experience of interacting with professionals instead. Unless you are wealthy or well-connected, because then only the best will do.

“AI” is just the latest escalation in the practice of “divide and rule”, facilitating the targeting of individuals to such a degree that the powerful can go beyond merely targeting minorities and smaller groups while having to indulge the majority to keep them passive and broadly supportive of such cruelty. With “AI”, the powerful can potentially pull each person apart from those closest to them, corrupting their communications and poisoning their relationships, to a degree not even achieved through the manipulation of people’s lives by predatory social media. Populists and the affluent will gladly embrace “AI” for all the amusement it offers and for as long as it lasts, unaware that there may no longer be a “safe” majority for them to hide within any more.

It is not exactly an exaggeration to consider “AI” as an existential threat requiring collective action, not least because of its ruinous power and resource consumption, coupled with the ineffective measures to tackle climate change that are formulated by those politicians always encouraging us to wait for better times. Such times will never arrive if the barons of “AI” and others who prioritise their own wealth are setting the agenda, of course.

There seem to be plenty of people who think that by enthusing about ruinous practices like “AI”, parroting the rhetoric of the oligarchs, and otherwise only caring about things that benefit them personally, they will somehow get to join the club of the wealthy, and that they too will get to go on the spaceship.

To those people, I can only say this: instead of joining the club, even you will find yourself alone, having been torn from the fabric of the society you helped to obliterate, but instead of living your best life, you will be miserable and there will be nobody left to defend you.

And by the way, there is no spaceship.

The Scandisplaining of Digital Freedoms

Monday, April 6th, 2026

Recently, the Norwegian Consumer Council has been enjoying a degree of publicity for a campaign they have been running about the “enshittification” of the Internet, riffing on the overused term coined by Cory Doctorow to describe the deliberate degradation of products and services given an absence of real choice and competition in the marketplace. Naturally, international news organisations have lapped this up as another example of supposedly progressive Scandinavian social and political priorities. That plucky Norway could show the rest of the world how to deal with predatory Big Tech.

As always, the story is rather more nuanced if one is more familiar with how things typically go in Norway and, I can well imagine, the rest of Scandinavia. First of all, one can justifiably wonder where these people have been living for the past quarter century. Time and again, Free Software advocates have pointed out that a reliance on proprietary software and platforms ultimately harms individuals, institutions and societies. Over ten years ago now, I myself sought to prevent the introduction of a proprietary groupware platform in a public institution that had been my employer. By the time I was meeting the hostile and dismissive leadership of that institution, I wasn’t even working there any more. The meeting ended with the overpromoted head of the institution, flanked by his privileged and/or hectoring enforcers, insisting that “Microsoft would never do anything that wasn’t in their customers’ best interests”. In the sitcom version of events, cue the laughter track.

I ended up doing some legwork in my own time to dig into the nature of the commercial arrangements between the institution and its supplier, but all the “commercially sensitive” bits involving actual monetary amounts were redacted. My own motivation to pursue the matter was rather tempered by the fact that some of those who felt that this commercial arrangement impeded various workplace freedoms of theirs would not pursue the matter themselves. After all, they didn’t want their nice salary and other bespoke workplace arrangements in their permanent employment position endangered by any kind of actual activism. Evidently, this was the job of the guy whose temporary contract had ended. Having presented my findings, nothing further happened and those precious freedoms were not generally upheld. But technical workarounds let various people pretend that business could proceed as usual. Their nests remained fully feathered. Screw the plebs: they would have to get used to Exchange, anyway.

I wish I could claim prescience in the whole affair, but it was pretty obvious how things would end up going. I remarked that at some point, “on-premises” Exchange would be phased out in favour of a cloud-based solution, likely to be what I tend to call Office 360. Fast-forward to recent times, and of course that is exactly what has been happening, with the institution presumably pleading poverty. Why not just make your employees customers of a foreign corporation, regardless of the wrapping of the institutional package? They have to take that deal whether they like it or not. Now, there may have been some chatter about these new arrangements. I cannot currently consult my own archives to check, but I would have been right to say “I told you so”. Some of the supposed champions of freedom may be more concerned that with a full-on migration to the cloud, all those neat workarounds of theirs might finally become obsolete. Maybe it will finally be time for them to face up to everybody else’s reality.

Once Upon a Time

For a time, Norway had a public agency that was meant to promote Free Software and interoperability in the public sector. Lobbied by the usual proprietary software vultures, the incoming right-wing government happily shut it down in a wave of the usual austerity that such governments love to inflict on public institutions, public infrastructure, and the wider population, just as they slash taxes for the wealthy in the name of “wealth creation”. Precisely these kinds of political choices, familiar from countries with more obvious records of punitive austerity, like Britain under the likes of Margaret Thatcher, John Major, David Cameron, and the subsequent clown car parade of prime ministers in the last Conservative administration, degrade societal resilience and undermine things like digital and technological sovereignty that are now suddenly in vogue.

An international audience might be surprised that supposedly egalitarian and progressive Norway might exhibit such traits. Comparable political shifts in Sweden and Denmark, undoubtedly inspired by the cruelty-enabling culture of “personal aspiration” (selfishness, in other words) promoted by British Conservatism, have similarly gone unnoticed or have been gradually forgotten. That a bunch of people in Norway haven’t managed to follow along rather suggests that the tradition of navel-gazing is alive and surprisingly well. After all, if the gravy train kept running from your station, then what was the problem again, exactly?

This latest initiative’s open letter to the Norwegian government notes that the French public sector made concerted efforts to introduce Free Software from 2012 onwards. How quickly people forget that back in 2012, that soon-to-be-culled Norwegian public agency was trying to bring the Norwegian public sector round to undertaking similar kinds of endeavour, doing it the celebrated Scandinavian way of not treading on too many toes. Naturally, such an approach was never going to be resistant to the kind of predatory corporate interests who routinely siphon billions of crowns, pounds, euros and dollars from the public sector locally and internationally for supplying their mediocre and often blatantly deficient products and services, subjecting governments and thus taxpayers to coercive, ruinous and yet seemingly perpetual contracts.

Those previous efforts might have been envisaged as a viable means to a righteous end, but they may have ended up being regarded simply as a nice supplement for those already engaging in the kind of advocacy that makes people feel like they’re “doing something”. It was another voice in the chorus of righteousness, and with an accompanying annual conference, it was yet another venue to talk about things and congratulate each other, rather than do those things necessary to actually advance the cause. It was even held in Svalbard on one occasion, if I remember correctly, because nothing says more about a commitment to sustainability than having a bunch of people jet off to the realm of polar bears and those melting ice floes.

So, what things would have advanced the cause, then? Well, the first thing would have been to actually fund Free Software at scale and to make sure that when people tout solutions for widespread use, they are actually fit for the job. And no, gathering up a bunch of existing projects and promoting them is absolutely not the same thing. When I investigated Free Software groupware solutions, the popular wisdom was that Kolab had “solved” groupware many years earlier. It turned out that Kolab had been rewritten as version 3 and was inadequate in a number of ways: a half-finished solution. Efforts to engage with the developers proved futile. Despite pitching the software as a collaboratively developed Free Software project, all they really cared about was whether the software would support the operations of a now-liquidated Swiss company riding the privacy bandwagon and largely targeting the Jason Bourne brigade.

Such experiences made me suspect that Kolabs 1 and 2 might not have been adequate all along, either, possibly pitched as a good-enough solution to a problem that hadn’t been fully understood, all to serve various big-fish, small-pond commercial interests. Later on, I discovered Zarafa, which became Kopano, and wished I had found it earlier. It may have been a better choice, not least because the dopes insisting on Microsoft Everything would have seen the Web interface and thought that it was straight out of Redmond, unlike Microsoft’s own Web-based Outlook solution, perversely. Sadly, as a sign of our depressing times, Kopano is now becoming (or has become) a cloud-only product.

It may seem obvious, but it still needs saying: general advocacy and encouragement isn’t sufficient; people need working solutions. And experience also shows that one cannot leave it to “the market”, whatever that is in Free Software. For many years, I have used KMail to read and send e-mail. It remains surprisingly usable today, “surprisingly” because its developers decided at one point to adopt some weird middleware layer called Akonadi, entranced by the promises made by Microsoft and/or Apple to deliver pervasive “desktop search” capabilities in their own products. Whether Microsoft or Apple actually delivered or, more likely, abandoned or scaled back those promises, I am now compelled to run the command “akonadictl restart” almost every day to “unwedge” my mail client and get to see newly arrived mail.

(It also didn’t help that the developers introduced MySQL – now MariaDB – into the mix, and that in the maintenance of that product, which throughout its existence under its various names could uncharitably be described as Monty Widenius’ Flying Shitshow, someone decided to bump a version number in a minor (or actually a patch-level) release that caused the whole stack of software to refuse to access the arguably unnecessary database underpinning KMail, making my mail inaccessible. Fortunately, my case was heard within Debian, and remedies were eventually applied. Before that, I had to recompile the package with an appropriate workaround. A victory for Free Software pragmatism, but good luck to the average user suddenly staring down a potentially indefinite e-mail outage.)
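As an aside, the daily “akonadictl restart” ritual can at least be automated. The sketch below wraps the restart in a check, only prodding Akonadi when its status output does not mention a running server; the “running” marker is an assumption about the status text emitted by current KDE PIM releases, so adjust it to match your own setup:

```shell
#!/bin/sh
# Hedged sketch: restart Akonadi only when it looks wedged or stopped.
# Assumes akonadictl's "status" and "restart" subcommands, and that a
# healthy server mentions "running" somewhere in its status output.

needs_restart() {
    # $1: captured output of "akonadictl status"
    case "$1" in
        *running*) return 1 ;;  # server reports itself up; leave it alone
        *)         return 0 ;;  # stopped, wedged or unknown; restart it
    esac
}

# Only act on systems where akonadictl is actually installed.
if command -v akonadictl >/dev/null 2>&1; then
    if needs_restart "$(akonadictl status 2>&1)"; then
        akonadictl restart
    fi
fi
```

Dropped into cron or a systemd timer, something like this would at least spare the manual incantation, although it obviously papers over the underlying problem rather than fixing it.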

Free Software groupware applications, like the overall desktop experience, stopped showing year-on-year progress in functionality some time ago. Already degraded in various ways when the developers of such technology became distracted by what the big players said they would be doing, the arrival of social media seemed to make some developers believe that the era of the mail program had ended altogether. It apparently became more important to some of those developers to add “share on Facebook” menu items to random applications than to ensure that their applications were still usefully serving their loyal users.

(Observations that technologies like ActivityPub and applications like Mastodon can supplant boring old e-mail and that they have shown considerable growth, and yet remain in a niche, rather overlook – indeed, neglect – the fundamental variety in groupware and collaborative technologies. A strategy of “ActivityPub everywhere” is like keeping the big hammer and throwing away all the other tools in the toolbox. One might suspect that it is only now gaining traction because there are people who want a similar kind of buzz to the one they get from their favourite doomscrolling services but feel bad going back to the same, increasingly disreputable dealer.)

The lesson here is that someone firstly needs to develop functional software, but then to check and double-check that functionality, as well as continuously verifying whether the software meets people’s needs. This cannot be left to random developers or to companies. The big Linux distributions never really cared enough about the average user to finish the job, merely bundling stuff and maybe hiring developers to either dabble with their projects or to make them only good enough for narrow corporate advantage. As far as Red Hat’s bottom line was concerned, all that ever really mattered was a placeholder desktop good enough to do a bit of point-and-click system administration for a bunch of file and print servers propping up a bunch of Windows desktops, or for software development most likely involving Java and targeting “the enterprise”. Such companies happily make their own employees use proprietary software and services for the kinds of tasks that the average user does, regardless of whether they might be using Free Software office and groupware suites instead.

The right approach would have been a concerted government initiative resistant to lobbying and corruption, not mere advocacy, nudging and cajoling. Genuine standards and interoperability could have been mandated and corrupted pseudo-standards like Microsoft’s fast-tracked office formats rejected. Agencies like Statistics Norway should have been taken to task for stipulating “.doc” as their chosen “interoperable” format, with those responsible sent back to finish, or maybe even begin, their education. One might have learned from experiences in other countries, like that of the public key encryption software Gpg4win in Germany, where a genuine governmental need transformed the financial viability of the GnuPG software project from one which had been chronically underfunded and practically relying on the charity of its principal developer to a thriving, viable enterprise.

Proprietary software lobbyists had criticised Norway’s earlier soft-touch efforts, claiming that the public agency concerned was subsidising uncompetitive software that was presumably the work of hippies and communists. There was one case of a public institution wanting to give money to a Free Software project in the realm of PDF generation, if I recall correctly. Upon discovering that it was Free Software, decision-makers refused to make the donation: after all, if those people were giving their code away, why pay after the fact? Such paper-pushing idiots evidently failed to understand that such windfalls may only happen once. Some of them would undoubtedly and routinely use the Norwegian word for “farmers” as a pejorative for people they might consider ignorant, and yet farmers manage to understand that harvests do not magically occur and re-occur without cultivation and sustenance.

The right approach would also have involved mandating Free Software for publicly funded projects and for public infrastructure, as advocated by the FSFE’s Public Money Public Code campaign. Proprietary software interests would undoubtedly howl at such stipulations, claiming that their secret sauce software, supposedly written by Top Men, would be unfairly excluded from such markets. But just as even some ostensibly left-wing politicians have forgotten, “markets” only exist at the indulgence of governments and regulators, and they only operate in the public interest if properly framed and regulated. Don’t want to give your customers the freedom to maintain the code they are paying for? Feel free to seek opportunities elsewhere, then. Cushy lock-in deals for the locally well-connected should have gone the way of Norsk Data when that company fell to Earth.

Planet Norway

Attitudes to societal threats seem to be remarkably relaxed in our supposedly enlightened democracies and their institutions. The casual, pervasive use of predatory social media platforms continues, propped up by state institutions claiming that they need to have a presence in all the different channels, but where one suspects that a few managers and their appointees just want to play with some toys and puff up their public profiles. Instead of leveraging the resources of the state and providing reliable channels of communication, such bodies post announcements, updates and nonsense via foreign-owned hate speech venues. Norwegian political party leaders even decided at one point that they had to promote themselves on Snapchat, egged on by one of the national broadcasters. Now, we see the unquestioning adoption and promotion of “AI” and chatbots by institutions that risk being obliterated by such technologies.

At the individual level, concerns about “screen time” and the use of tablets and other devices in places like primary schools have been aired and may well be justified, given the likely developmental impact on children of such devices, but it all comes across as pearl-clutching when one suspects that the lifestyle of the vocal parents probably revolves around their phone, “apps”, streaming services, and rather too much screen time of their own. And some of those concerned about screen time would probably drop their objections if a study conveniently came along to assuage their worries. It is just like those very Scandinavian traits of stuffing tobacco products into one’s mouth or spending time at the tanning studio, neither of which are actually healthy. Over the years, various pseudo-academic figures and findings have occasionally floated up into public prominence, insisting that such things are perfectly fine. Why wouldn’t that happen with technology? The companies involved could certainly afford to pay for a bit of fake research and a few willing advocates.

One gets the impression that many of the different factions that might coalesce around a campaign about “enshittification” aren’t really trying to achieve systemic change: they merely want to negotiate a better deal. Such people knew what they were getting into by using free-of-charge services and often explicitly rejecting genuinely free alternatives which cost only modest sums to run. Such people were also aware that their data might wander off into the cloud and away to places where it would be mined and exploited for all it is worth, but caring about it was just too much bother. In institutions, all it takes is for Microsoft to “pinky swear” that it complies with data protection regulations. Institutional capabilities are then run down and alternatives abandoned, just so the toys can be unpacked, sending non-trivial sums overseas instead of cultivating knowledge, opportunities and wealth locally.

One wonders how seriously people really want to take such matters, or whether they just want a hobby and to feel good about a bit of casual activism. I am reminded of the climate litigation brought against the Norwegian state for its continuing policy of fossil fuel extraction, noting that climate change presents an existential threat reaching far beyond the confines of the nation, affecting the population of the entire planet, but where the country’s constitution at least worries about the state of the nation for future generations. The outcome – a defeat for the litigants – might not merely be described as positioning Norway as having “first world problems”: that would be business as usual. Instead, it might reasonably be described as situating Norway on a planet of its very own. One where the mere accumulation of money protects and even “benefits future generations”, evidently.

Hobby activism is typical of places where the stakes remain relatively low for those doing the campaigning, but on Planet Norway it arguably reaches another level, where hardship along any given dimension is often perceived to be a problem that only foreigners in poorer countries experience (or poorer planets, maybe). Or it is marginalised and framed as something that only affects a vanishingly small number of people, conveniently aided by policies and attitudes that seek to hide those who are struggling and blame them for their own predicament. But a reckoning is surely overdue even in matters of preference as opposed to need. After all, what good is it to advocate that children learn to code if nothing is done about the way “AI” is devastating Free Software and undermining paid work?

Contrary to outsider perceptions of money flowing freely in Norway thanks to the oil fund, the purse strings are generally not loosened for those whose professions have been gutted. Younger people, meanwhile, are more likely to be told to take shifts at Ikea than to get the help they need. Yes, this is actually a thing. It also explains why on one local recruitment site, when filling out one’s employment history, the default value in the employer field reads (or read when I last checked) “Ikea Furuset”: one of the two full-scale Ikea stores serving the Oslo area. Not that there’s anything wrong with working at Ikea, but I doubt that the kind of aspirational parents hoping to give their little darlings a head start in the world, funding their higher education and other ambitions, would see it as a fitting venue for their offspring’s many talents.

Yet Another Elephant in the Room

This latest campaign recommends Free Software and open protocols in public procurement, which is what previous efforts pretty much did, too. The accompanying report even suggests funding alternatives, but then delegates this to discretionary funds and foundations, conveniently avoiding the structural issues. But the very reason “dominant big tech companies have deep pockets” is a perversion of economic incentives. Firstly, they have cultivated an “expectation of zero” where individual and institutional customers expect software and services to cost nothing. Thus, any investment in software is regarded as unnecessary because those nice corporations are giving away shiny free stuff. They also front-run various standards to make any kind of competition ruinously expensive to pursue.

And yet software cannot be developed without expenditure, and certainly not with the latest instrument being used to sustain those cultivated consumer expectations. “AI”, which is hyped to make it seem like software can be whipped up at a moment’s notice at no cost, relies on industrial-scale plagiarism, colossal data centre and hardware investments, and ruinous levels of power consumption. What has funded these scorched-earth tactics is a corrosive business model that inflicts highly lucrative but consequence-free, unpoliced advertising channels on billions of people. Developers at predominantly American technology companies aren’t expected to work for free, after all. We are apparently meant to feel sorry for them. Some of them have to pay gentrification-level prices for their homes in places gentrified by themselves and their colleagues. Their bosses expect huge bonuses, their own yacht, island, spaceship…

Claiming the juvenile right of unrestricted free speech to drive engagement, Big Tech has largely allowed unregulated commerce to proceed, undermining traditional safeguards, endangering individuals, and threatening and even shuttering viable, responsible businesses. Any efforts that ignore such structural issues will fail to find the money required to make a difference. I, or my Web publisher, may be held to account for what I write, but anything goes on the predatory Big Tech platforms, whether it is the puerile variant of “free speech” cultivated by the increasingly fragile American dream, or whether it is fraudulent or outright illegal advertising promoting dishonest and criminal enterprises. Allowing such business to continue as usual simply enriches these predators while impoverishing ourselves.

The operators of those platforms are getting a free ride at a severe cost to us and our societies. It may not seem like it to the random punter getting “a great deal”, but we all pay for that deal in the end. And even little Norway has its own commerce platforms that look the other way, especially where anything related to the property bubble is concerned. Why not advertise your rental property using the fancy sales prospectus from a few years ago when you or a previous owner bought the property from the developers? Oh, “caveat emptor”, of course. How about advertising a property with a non-existent address? How much money is laundered even in little Norway, or is that kind of thing only done by foreign people in less enlightened countries? Maybe even the same people whose oil is “dirty” while Norway’s is “clean”.

Anything short of changing the flawed terms under which those companies operate, which one can barely believe are legal in the first place, is nothing more than consumerist tinkering. Naturally, there will be howling from entitled consumers, happy to have random people scurrying around with their urgent shopping deliveries, just as established, essential services like postal mail risk being degraded to the point of near uselessness or even eliminated altogether (which is something that might blow the minds of media people in countries like the UK where postal services still largely hold up and where deliveries still happen six days a week). But society cannot pander to people’s elevated levels of personal entitlement forever, despite the best efforts of populist politicians living in their own bubble of affluence.

I suppose I could be accused of being a simple “outsider” who still doesn’t understand Norway after all these years. Recently, I read a ridiculous piece claiming that Norwegian culture and society do not tend to focus on personalities. That would be news to readers of gossip magazines, newspapers, and even the finance magazine I would read in a medical specialist’s waiting room, keeping us all up to date on which members of the Norwegian financial and legal elites were suing which other members of those elites, all while name-dropping and generally cultivating individuals as movers and shakers.

But instead of cherry-picking parts of the nation’s broader, pan-Scandinavian heritage – Janteloven in that particular case – and perpetuating all the other familiar tropes, so often featured in selective, favourable cultural projection delivered through credulous or lazy journalistic coverage, maybe lessons could be learned from one of Hans Christian Andersen’s more famous tales. It really doesn’t take very much to point out that notions of Scandinavian preparedness in the face of digital exploitation and its accompanying threats are somewhat overstated. Anyone can take a closer look at the emperor’s reputation as a fully clothed, well-tailored individual, if they actually care to.

I just wouldn’t recommend anyone holding their breath for too long in anticipation of decisive, credible Scandinavian action that might show the wider world the way forward. After that sharp intake of breath witnessing the emperor in the buff, please exhale or you might eventually expire. Just like in other countries, there would have to be a cultural shift away from shopping for “big brand” software, wheeling in the consultants, having retreats and “away days” learning about the next set of goodies on the proprietary software treadmill, and treating predatory social media platforms as merely a harmless guilty pleasure. Maybe local commerce wouldn’t have to be mediated by “apps” and trillion dollar corporations, either. And there would have to be more than tame, hobbyist lobbying and performative activism: quite the challenge when rocking the boat in countries like Norway is simply never done.

But it all made for a good story about Scandinavia leading the way, as usual, so “job done”, I guess.

Some thoughts about technological sustainability

Saturday, February 12th, 2022

It was interesting to see an apparently recent article “On the Sustainability of Free Software” published by the FSFE in the context of the Upcycling Android campaign. I have been advocating for sustainable Free Software for some time. When I wasn’t posting articles about my own Python-like language, electronics projects or microkernel-based system development, it seems that I was posting quite a few about sustainable software, hardware and technology.

So, I hardly feel it necessary to go back through much of the same material again. Frustratingly, very little has improved over the years, it would seem: some new initiatives emerge, and such things always manage to excite some people, but the same old underlying causes of a general lack of sustainability remain, these including access to affordable, long-lasting and supportable hardware, and the properly funded development of Free Software and the hardware that would run it.

Of course, I wouldn’t even be bothered to write this if I didn’t feel that there might be some positive insights to share, and recent events have prompted me to do so. Hopefully, I can formulate them concisely and constructively in the following paragraphs.

The Open Hardware Crisis

Alright, so that was a provocative heading – hardly positive or constructive – and with so much hardware hacking (of the good kind) going on these days, it might be tempting to ask “what crisis?” Well, evidently, some people think there has been a crisis around the certification of hardware by the Free Software Foundation: that the Respects Your Freedom criteria don’t really help get hardware designed and made that would support (or be supported by) Free Software; that the criteria fail to recognise practical limitations with some elements of hardware, imposing rigid and wasteful measures that fail to enhance the rights of users and that might even impair the longevity and repairability of devices.

A lot of the hardware we rely on nowadays depends on features that cannot easily be supported by Free Software. The system I use has integrated graphics that require proprietary firmware from AMD to work in any half-decent way, as do many processors and interfacing chips, some not even working in any real way at all without it. Although FPGA technology has become more accessible and has invigorated the field of open hardware, there are still considerable challenges around the availability of Free Software toolchains for those kinds of devices. It is also not completely clear how programmable logic devices intersect with the realm of Free Software. Should people expect the corresponding source code and the means to generate a “bitstream” for the FPGAs in a system? I would think so, given appropriate licensing, but I am not familiar enough with the legal and regulatory constraints to say so with any certainty.

The discussion around this may sound like a storm in a teacup, especially if you do not follow the appropriate organisations, figures, and their mailing lists (and I recommend saving your time and not bothering to, either), but it also sounds rather like tactics have prevailed over strategy. The fact is that without hardware being made to run Free Software, there isn’t really going to be much of a Free Software movement. So, instead of recommending increasingly ancient phones as “ethical gifts” or hoping that the latest crop of “Linux phones” will deliver a package that will not only run Linux but also prove to be usable as a phone, maybe a move away from consumerism is advised. Consumerism, of course, being the tendency to solve every problem by choosing the “best deal” the market happens to be offering today.

What the likes of the FSF need to do is to invest in hardware platforms that are amenable to the deployment of Free Software. This does not necessarily mean totally rejecting hardware if it has unfortunate characteristics such as proprietary firmware, particularly if there is no acceptable alternative, but the initiative has to start somewhere, however imperfect that somewhere might be. Much as we would all like to spend thousands of dollars on hardware that meets some kind of liberty threshold, most of us don’t have that kind of money and would accept some kind of compromise (just as we have to do most of the time, anyway), especially if we felt there was a chance to make up for any deficiencies later on. Trying to start from an impossible position means that there is no “here and now”, never mind “later on”.

Sadly, several attempts at open hardware platforms have struggled and could not be sustained. Some of these were criticised for having some supposed flaw or other that apparently made them unacceptable to the broader Free Software community, and yet they could have led to products that might have remedied such supposed flaws. Meanwhile, consumerist instincts had all the money chasing the latest projects, and yet here we are in 2022, barely any better off than we were in 2012. Had the FSF and company actually supported hardware projects that sought to advance their own vision, as opposed to just casually endorsing projects and hoping they came good, we might be in a better place by now.

The Ethical Software Crisis

One depressingly recurring theme is the lack of support given to Free Software developers and to Free Software development, even as billion-to-trillion-dollar corporations bank substantial profits on the back of Free Software. As soon as some random developer deletes his JavaScript package from some repository or other, or even switches it out for something that breaks the hyperactive “continuous integration” of hundreds or thousands of projects, everyone laments the fragility of “the system” and embeds that XKCD cartoon with the precarious structure that you’ve all presumably seen. Business-as-usual, however, is soon restored.

Many of the remedies for overworked, underpaid, burned-out developers have the same, familiar consumerist or neoliberal traits. Trait number one is, of course, to let a million projects bloom, carefully selecting the winners and discarding any that fail to keep up with the constant technological churn that also plagues our societies. Beyond that, things like bounties and donations are proposed, and funding platforms helpfully materialise to facilitate the transaction, themselves mostly funded not by bounties and donations (other than the cut of other people’s bounties and donations, of course) but by venture capital money. Because who would want to be going from one “gig” to the next when they could actually get a salary?

And it is revealing that organisations engaging in Free Software development tend to have an enthusiasm not for hiring actual developers but for positions like “community manager”, frequently with the responsibility for encouraging contributions from that desirable stream of eager volunteers. Alongside this, funding is sought from a variety of sources, some of which are public institutions or progressive organisations perhaps sensing a growing crisis and feeling that something should be done. Other sources are perhaps more about doing “philanthropic work” on behalf of wealthy patrons, although I would think twice about taking money from people whose wealth has been built on the back of facilitating psychological warfare on entire populations, undermining public health policies and climate change mitigation, enabling inter-ethnic violence and hate generally, and providing a broadcasting platform for extremists. But as they say, beggars can’t be choosers, right?

It may, of course, be argued that big companies are big employers of Free Software developers. Certainly, lots of people seem to work on Free Software projects in companies like, ahem, Blue Hat. And some of that corporate development does deliver usable software, or at least it helps to mitigate the usability issues of the software being produced elsewhere, maybe in another part of the very same corporation. Large, stable organisations may well be the key to providing developers with secure incomes and the space to focus on producing high-quality, well-designed, long-lasting software. Then again, such organisations sometimes exhibit ethical deficiencies in their own collective activities: aggressively protecting revenue streams by limiting interoperability, reducing costs through offshoring, asserting patents against others, and imposing needless technological change on their customers and the broader market simply to achieve a temporary competitive advantage.

Free Software organisations should be advocating for quality, stable employment for software developers. For too long, Free Software has been perceived as something for nothing where “everybody else” pays, even as organisations and individuals happily pay substantial sums for hardware and for proprietary software. Deferring to “the market” does nobody any good in the end: “the market” will only pay for what it absolutely has to, and businesses doing nicely selling solutions (who might claim that “the market” works for them and should be good enough for everyone) all too frequently rely on practically invisible infrastructure projects that they get for free. It arguably doesn’t matter whether it is public institutions, as opposed to businesses, that end up hiring people, as long as they get decent contracts and aren’t at risk of all being laid off because some right-wing government wants to slash taxes for rich people, as tends to happen every few years.

And Free Software organisations should be advocating for ethical software development. Although the public mood in general may lag rather too far behind that of more informed commentary, the awareness many of us have of the substantial ethical concerns around various applications of computing – artificial intelligence, social media/manipulation platforms, surveillance, “cryptocurrencies”, and so on – requires us to uphold our principles, to recognise where our own principles fall short, and to embrace other causes that seek to safeguard the rights of individuals, the health of our societies, and the viability of our planet as our home.

The Accessible Infrastructure Crisis

Some of that ethical software development would also recognise the longevity we hope our societies may ultimately have. And yet we have every reason to worry about our societies becoming less equitable, less inclusive, and less accessible. The unquestioning adoption of technology-driven, consumerist solutions has led to many of the interactions we as individuals have with institutions and providers of infrastructure being mediated by random companies who have inserted themselves into every kind of transaction they have perceived as highly profitable. Meanwhile, technologists have often embraced change through newer and newer technology for its own sake, not for the sake of actual progress or for making life easier.

While devices like smartphones have been liberating for many, providing capabilities that one could only have dreamed of a few decades ago, they also introduce the risk of imposing relationships and restrictions on individuals to the point where those unable to acquire or use technological devices may find themselves excluded from public facilities, commercial transactions, and even voting in elections or other processes of participatory democracy. Such conditions may be the result of political ideology, the outsourcing and offshoring of supposedly non-essential activities, and the trimming back of the public sector, with any consequences, conflicts of interest, and even corrupt dealings being ignored or deliberately overlooked, dismissed as “nothing that would happen here”.

The risk to Free Software and to our societies is that we as individuals no longer collectively control our infrastructure through our representatives, nor do we control the means of interacting with it, the adoption of technology, or the pace at which such technology is introduced and obsoleted. When the response to problems with supposedly public infrastructure is to “get a new phone” or “upgrade your computer”, we are actually being exploited by corporate interests and their facilitators. Anyone participating in such a cynical deployment of technology must, I suppose, reconcile their sense of a job well done with the sight of their fellow citizens being obstructed, impoverished or even denied their rights.

Although Free Software organisations have tried to popularise unencumbered sources of mobile software and to promote techniques and technologies to lengthen the lifespan of mobile devices, more fundamental measures are required to reverse the harmful course taken by many of our societies. Some of these measures are political or social, and some are technological. All of them are necessary.

We must reject the notion that progress is dependent on technological consumption. While computers and computing devices have managed to keep getting faster, despite warnings that such trends would meet their demise one way or another, improvements in their operational effectiveness in many regards have been limited. We may be able to view higher quality video today than we could ten years ago, and user interfaces may be pushing around many more pixels, but the results of our interactions are not necessarily more substantial. Yet, the ever-increasing demands of things like Web browsers mean that systems become obsolete and are replaced with newer, faster systems to do exactly the same things in any qualitative sense. This wastefulness, burdening individuals with needless expenditure and burdening the environment with even more consumption, must stop.

We must demand interoperability and openness with regard to public infrastructure and even commercial platforms. It should be forbidden to insist that specific products be used to interact with public services and amenities or with commercial operators. The definition and adoption of genuinely open standards would be central to any such demands, and we would need to insist that such standards must encompass every aspect of such interactions and activities, without permitting companies to extend them in incompatible, proprietary ways that would deliberately undermine such initiatives.

We must insist that individuals never be under any obligation to commercial interests when interacting with public infrastructure: that their obligations are only to the public bodies concerned. And, similarly, we should insist that when dealing with companies, they may not also require us to enter into ongoing commercial relationships with other companies, purely as a condition of any transaction. It is unacceptable, for example, that individuals should need such a relationship with a foreign technology company purely to gain access to essential services or to conduct purchases or similar transactions.

Ideas for Remedies

In conclusion, we need to care about technological freedoms – our choices of hardware and software, things like online privacy, and all that – but we also need to recognise and care about the social, economic and political conditions that threaten such freedoms. We can’t expect to set up a nice Free Software computing system and to use it forever when forces in society compel us to upgrade every few years. Nor can we expect people to make the hardware for such systems, let alone at an affordable price, when technological indulgence drives the sophistication of hardware to levels where investment in open hardware production is prohibitively costly. And we can’t expect to use our Free Software systems if consumerist and/or corrupt choices sacrifice interoperability and pander to entrenched commercial interests.

The vision here, if we can even call it that, is that we might embrace the “essential” nature of our computing needs and thus embrace hardware with adequate levels of sophistication that could, if everyone were honest with themselves, get the job done just fine. Years ago, people used to say how “Linux is great because it works on older systems”, but these days you apparently cannot even install some distributions with less than 1GB of RAM, and Blue Hat is apparently going to put all the bloat of a “modern” Web browser into its installer. And once we have installed our system, do we really need a video playing in the background of a Web page as we navigate a simple list of train times? Do we even need something as sophisticated as a Web browser for that at all?

Embracing mature, proven, reliable, well-understood hardware would help hardware designers to get their efforts right, and if hardware standards and modularity were adopted, there would still be the chance to introduce improvements and enhancements. Such hardware characteristics would also help with the software support: instead of rushing the long, difficult journey of introducing support for poorly documented components from unhelpful manufacturers eager to retire those components and to start making future products, the aim would be to support components with long commercial lifespans whose software support is well established and hopefully facilitated by the manufacturer. And with suitably standardised or modular hardware, creativity and refinement could be directed towards aspects of the hardware that are often neglected or underestimated, such as ergonomics and other elements of traditional product design.

With stable hardware, there might be more software options, too. Although many would propose “just putting Linux on it” for any given device, one need only consider the realm of smartphones to realise that such convenient answers are not necessarily the most obviously correct ones, particularly for certain definitions of “Linux”. Instead of choosing Linux because it probably supports the hardware, only to find that as much time is spent fixing that support and swimming against the currents of upstream development as would have been spent implementing that support elsewhere, other systems with more desirable properties could be considered and deployed. We might even encourage different systems to share functionality, instead of wrapping it up in a specific framework that resists portability. Such systems would also aspire to avoid the churn throughout the GNU-plus-Linux-plus-graphical-stack-of-the-day familiar to many of us, potentially allowing us to use familiar software over much longer periods than we have generally been allowed to before, retrocomputing platforms aside.

But to let all of this happen and to offer a viable foundation, we must also ensure that such systems can be used in the wider world. Otherwise, this would merely be an exercise in retrocomputing. Now, there is an argument that there are plenty of existing standards that might facilitate this vision, and perhaps going along with the famous saying about standards (the good thing being that there are so many to choose from), we might wish to avoid the topic of yet another widely-referenced XKCD cartoon by actually adopting some of them instead of creating more of them. That is not to say that we would necessarily want to go along with the full breadth of some standards, however. XML deliberately narrowed down SGML to be a more usable technology, despite its own reputation for complexity. Since some standards were probably “front-run” by companies wishing to elevate their own products, within which various proposed features were already implemented, thus forcing their competitors to play catch up, it is entirely possible that various features are superfluous or frivolous.

There have already been attempts to simplify the Web or to make a simpler Web-like platform, Gemini being one of them, and there are persuasive arguments that such technologies should be considered as separate from the traditional Web. After all, the best of intentions in delivering a simple, respectful experience can easily be undermined by enthusiasm for the latest frameworks and fashions, or by the insistence that less than respectful techniques and technologies – user surveillance, to take one example – be introduced to “help understand” or “better serve” users. A distinct technology might offer easy ways of resisting such temptations by simply failing to support them conveniently, but the greater risk is that it might not even get adopted significantly at all.

New standards might well be necessary, but revising and reforming existing ones might well be more productive, and there is merit in focusing standards on the essentials. After all, people used the Web for real work twenty years ago, too. And some would argue that today’s Web is just reimplementing the client-server paradigm but with JavaScript on the front end, grinding your CPU, with application-specific communications conducted between the browser and the server. Such communications will, for the most part, be unspecified and prone to changing and breaking, with interoperability the last thing on anyone’s mind. Formalising such communications, and adopting technologies more appropriate to each device and to each user, might actually be beneficial: instead of megabytes of JavaScript passing across the network and through the browser, the user would get to choose how they access such services, which programs they might use, rather than having “an experience” foisted upon them.
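To make the contrast concrete, here is a minimal sketch of the idea, using an entirely hypothetical plain-text format for the train-times example mentioned earlier (the record layout, field names and “DEPARTURE” tag are all invented for illustration, not any real standard). The point is that once the service’s output is a small, specified format rather than an application bundle, a client fits in a dozen lines of standard-library code, and anyone can write their own:

```python
def parse_departures(text):
    """Parse a hypothetical, line-oriented departures format.

    Each record is a line of the form:
        DEPARTURE|<time>|<platform>|<destination>
    Lines not starting with the DEPARTURE tag (comments, headers,
    future record types) are ignored, which leaves room for the
    format to evolve without breaking old clients.
    """
    departures = []
    for line in text.splitlines():
        if not line.startswith("DEPARTURE|"):
            continue
        _tag, time, platform, destination = line.split("|", 3)
        departures.append(
            {"time": time, "platform": platform, "destination": destination}
        )
    return departures


# A sample response, as such a formalised service might deliver it:
sample = """\
# timetable for station: EXAMPLE (hypothetical format)
DEPARTURE|12:05|2|Oslo S
DEPARTURE|12:19|1|Bergen
"""

for dep in parse_departures(sample):
    print(dep["time"], dep["destination"])
```

Whether the transport is HTTP, Gemini or something else is then a separate decision from the data format itself, which is precisely the kind of separation the megabytes-of-JavaScript approach denies the user.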

Such an approach would actually return us to something close to the original vision of the Web. But standards surely have to be seen as the basis of the Free Software we might hope to use, and as the primary vehicle for the persuasion of others. Public institutions and businesses care about reaching the biggest possible audience, and this has brought us to a rather familiar sight: the anointment of two viable players in a particular market and no others. Back in the 1990s, the two chosen ones in desktop computing were Microsoft and Apple, the latter kept afloat by the former, which thereby avoided being perceived as a de-facto monopoly and becoming subject to proper regulation. Today, Apple and Google are the gatekeepers in mobile computing, with even Microsoft being an unwelcome complication.

Such organisations want to offer solutions that supposedly reach “the most users”, will happily commission “apps” for the big two players (and Microsoft, sometimes, because habits and favouritism die hard), and will probably shy away from suggesting other solutions, labelling them as confusing or unreliable, mostly because they just don’t want to care: their job is done, the boxes ticked, more effort gives no more reward. But standards offer the possibility of reaching every user, of meeting legal accessibility requirements, and potentially allowing such organisations to delegate the provision of solutions to their favourite entity: “the market”. Naturally, some kind of validation of standards compliance would probably be required, but this need not be overly restrictive nor the business of every last government department or company.

So, I suppose a combination of genuinely open standards facilitating Free Software and accessible public and private services, with users able to adopt and retain open and long-lasting hardware, might be a glimpse of some kind of vision. How people might make good enough money to be able to live decently is another question entirely, but then again, perhaps cultivating simpler, durable, sustainable infrastructure might create opportunities in the development of products that use it, allowing people to focus on improving those products, that infrastructure and the services they collectively deliver as opposed to chasing every last fad and fashion, running faster and faster and yet having the constant feeling of falling behind. As many people seem to experience in many other aspects of their lives.

Well, I hope the positivity was in there somewhere!

The Academic Barriers of Commercialisation

Monday, January 9th, 2017

Last year, the university through which I obtained my degree celebrated a “milestone” anniversary, meaning that I got even more announcements, notices and other such things than I was already getting from them before. Fortunately, not everything published into this deluge is bound up in proprietary formats (as one brochure was, sitting on a Web page in Flash-only form) or only reachable via a dubious “Libyan link-shortener” (as certain things were published via a social media channel that I have now quit). It is indeed infuriating to see one of the links in a recent HTML/plain text hybrid e-mail message using a redirect service hosted on the university’s own alumni sub-site, sending the reader to a bit.ly URL, which will redirect them off into the great unknown and maybe even back to the original site. But such things are what one comes to expect on today’s Internet with all the unquestioning use of random “cloud” services, each one profiling the unsuspecting visitor and betraying their privacy to make a few extra cents or pence.

But anyway, upon following a more direct – but still redirected – link to an article on the university Web site, I found myself looking around to see what gets published there these days. Personally, I find the main university Web site rather promotional and arguably only superficially informative – you can find out the required grades to take courses along with supposed student approval ratings and hypothetical salary expectations upon qualifying – but it probably takes more digging to get at the real detail than most people would be willing to do. I wouldn’t mind knowing what they teach now in their computer science courses, for instance. I guess I’ll get back to looking into that later.

Gatekeepers of Knowledge

However, one thing did catch my eye as I browsed around the different sections, encountering the “technology transfer” department with the expected rhetoric about maximising benefits to society: the inevitable “IP” policy in all its intimidating length, together with an explanatory guide to that policy. Now, I am rather familiar with such policies from my time at my last academic employer, having been obliged to sign some kind of statement of compliance at one point, but then apparently not having to do so when starting a subsequent contract. It was not as if enlightenment had come calling at the University of Oslo between these points in time such that the “IP rights” agreement now suddenly didn’t feature in the hiring paperwork; it was more likely that such obligations had presumably been baked into everybody’s terms of employment as yet another example of the university upper management’s dubious organisational reform and questionable human resources practices.

Back at Heriot-Watt University, credit is perhaps due to the authors of their explanatory guide for trying to explain the larger policy document, because it is likely that most people aren’t going to get through that much longer document and retain a clear head. But one potentially unintended reason for credit is that by being presented with a much less opaque treatment of the policy and its motivations, we are able to see with enhanced clarity many of the damaging misconceptions that have sadly become entrenched in higher education and academia, including the ways in which such policies actually do conflict with the sharing of knowledge that academic endeavour is supposed to be all about.

So, we get the sales pitch about new things needing investment…

However, often new technologies and inventions are not fully developed because development needs investment, and investment needs commercial returns, and to ensure commercial returns you need something to sell, and a freely available idea cannot be sold.

If we ignore various assumptions about investment or the precise economic mechanisms supposedly required to bring about such investment, we can immediately note that ideas on their own aren’t worth anything anyway, freely available or not. Although the Norwegian Industrial Property Office (or the Norwegian Patent Office if we use a more traditional name) uses the absurd vision slogan “turning ideas into values” (it should probably read “value”, but whatever), this perhaps says more about greedy profiteering through the sale of government-granted titles bound to arbitrary things than it does about what kinds of things have any kind of inherent value that you can take to the bank.

But assuming that we have moved beyond the realm of simple ideas and have entered the realm of non-trivial works, we find that we have also entered the realm of morality and attitude management:

That is why, in some cases, it is better for your efforts not to be published immediately, but instead to be protected and then published, for protection gives you something to sell, something to sell can bring in investment, and investment allows further development. Therefore in the interests of advancing the knowledge within the field you work in, it is important that you consider the commercial potential of your work from the outset, and if necessary ensure it is properly protected before you publish.

Once upon a time, the most noble pursuit in academic research was to freely share research with others so that societal, scientific and technological progress could be made. Now it appears that the average researcher should treat it as their responsibility to conceal their work from others, seek “protection” on it, and then release the encumbered details for mere perusal and the conditional participation of those once-valued peers. And they should, of course, be wise to the commercial potential of the work, whatever that is. Naturally, “intellectual property” offices in such institutions have an “if in doubt, see us” policy, meaning that they seek to interfere with research as soon as possible, and should someone fail to have “seen them”, that person’s loyalty may very well be called into question as if they had somehow squandered their employer’s property. In some institutions, this could very easily get people marginalised or “reorganised” if not immediately or obviously fired.

The Rewards of Labour

It is in matters of property and ownership where things get very awkward indeed. Many people would accept that employees of an organisation are producing output that becomes the property of that organisation. What fewer people might accept is that the customers of an organisation are also subject to having their own output taken to be the property of that organisation. The policy guide indicates that even undergraduate students may be subject to an obligation to assign ownership of their work to the university: those visiting the university supposedly have to agree to this (although it doesn’t say anything about what their “home institution” might have to say about that), and things like final year projects are supposedly subject to university ownership.

So, just because you as a student have a supervisor bound by commercialisation obligations, you end up not only paying tuition fees to get your university education (directly or through taxation), but you also end up having your own work taken off you because it might be seen as some element in your supervisor’s “portfolio”. I suppose this marks a new low in workplace regulation and standards within a sector that already skirts the law with regard to how certain groups are treated by their employers.

One can justifiably argue that employees of academic institutions should not be allowed to run away with work funded by those institutions, particularly when such funding originally comes from other sources such as the general public. After all, such work is not exactly the private property of the researchers who created it, and to treat it as such would deny it to those whose resources made it possible in the first place. Any claims about “rightful rewards” needing to be given are arguably made to confuse rational thinking on the matter: after all, with appropriate salaries, the researchers are already being rewarded for doing work that interests and stimulates them (unlike a lot of people in the world of work). One can argue that academics increasingly suffer from poorer salaries, working conditions and career stability, but such injustices are not properly remedied by creating other injustices to supposedly level things out.

A policy around what happens with the work done in an academic institution is important. But just as individuals should not be allowed to treat broadly-funded work as their own private property, neither should the institution itself claim complete ownership and consider itself entitled to do what it wishes with the results. It may be acting as a facilitator to allow research to happen, but by seeking to intervene in the process of research, it risks acting as an inhibitor. Consider the following note about “confidential information”:

This is, in short, anything which, if you told people about, might damage the commercial interests of the university. It specifically includes information relating to intellectual property that could be protected, but isn’t protected yet, and which if you told people about couldn’t be protected, and any special know how or clever but non patentable methods of doing things, like trade secrets. It specifically includes all laboratory notebooks, including those stored in an electronic fashion. You must be very careful with this sort of information. This is of particular relevance to something that may be patented, because if other people know about it then it can’t be.

Anyone working in even a moderately paranoid company may have read things like this. But here the context is an environment where knowledge should be shared to benefit and inform the research community. Instead, one gets the impression that the wish to control the propagation of knowledge is so great that some people would rather see the details of “clever but non patentable methods” destroyed than passed on openly for others to benefit from. Indeed, one must question whether “trade secrets” should even feature in a university environment at all.

Of course, the obsession with “laboratory notebooks”, “methods of doing things” and “trade secrets” in such policies betrays the typical origins of such drives for commercialisation: the apparently rich pickings to be had in the medical, pharmaceutical and biosciences domains. It is hardly a coincidence that the University of Oslo intensified its dubious “innovation” efforts under a figurehead with a background (or an interest) in exactly those domains: with a narrow personal focus, an apparent disdain for other disciplines, and a wider commercial atmosphere that gives such a strategy a “dead cert” air of impending fortune, we should perhaps expect no more of such a leadership creature (and his entourage) than the sum of that creature’s instincts and experiences. But then again, we should demand more from such people when their role is to cultivate an institution of learning and not to run a private research organisation at the public’s expense.

The Dirty Word

At no point in the policy guide does the word “monopoly” appear. Given that such a largely technical institution would undoubtedly be performing research where the method of “protection” would involve patents being sought, omitting the word “monopoly” might be that document’s biggest flaw. Heriot-Watt University originates from the merger of two separate institutions, one of which was founded by the well-known pioneer of steam engine technology, James Watt.

Recent discussion of Watt’s contributions to the development and proliferation of such technology has brought up claims that Watt’s own patents – the things that undoubtedly made him wealthy enough to fund an educational organisation – actually held up progress in the domain concerned for a number of decades. While he was clearly generous and sensible enough to spend his money on worthy causes, one can always challenge whether the questionable practices that resulted in the accumulation of such wealth can justify the benefits from the subsequent use of that wealth, particularly if those practices can be regarded as having had negative effects on society and may even have increased wealth inequality.

Questioning philanthropy is not a particularly fashionable thing to do. In capitalist societies, wealthy people are often seen as having made their fortunes in an honest fashion, enjoying a substantial “benefit of the doubt” that this was what really occurred. Criticising a rich person giving money to ostensibly good causes is seen as unkind to both the generous donor and to those receiving the donations. But we should question the means through which the likes of Bill Gates (in our time) and James Watt (in his own time) made their fortunes and the power that such fortunes give to such people to direct money towards causes of their own personal choosing, not to mention the way in which wealthy people also choose to influence public policy and the use of money given by significantly less wealthy individuals – the rest of us – gathered through taxation.

But back to monopolies. Can they really be compatible with the pursuit and sharing of knowledge that academia is supposed to be cultivating? Just as it should be shocking that secretive “confidentiality” rules exist in an academic context, it should appal us that researchers are encouraged to be competitively hostile towards their peers.

Removing the Barriers

It appears that some well-known institutions understand that the unhindered sharing of their work is their primary mission. MIT Media Lab now encourages the licensing of software developed under its roof as Free Software, not requiring special approval or any other kind of institutional stalling that often seems to take place as the “innovation” vultures pick over the things they think should be monetised. Although proprietary licensing still appears to be an option for those within the Media Lab organisation, at least it seems that people wanting to follow their principles and make their work available as Free Software can do so without being made to feel bad about it.

As an academic institution, we believe that in many cases we can achieve greater impact by sharing our work.

So says the director of the MIT Media Lab. It says a lot about the times we live in that this needs to be said at all. Free Software licensing is, as a mechanism to encourage sharing, a natural choice for software, but we should also expect similar measures to be adopted for other kinds of works. Papers and articles should at the very least be made available using content licences that permit sharing, even if the licence variants chosen by authors might seek to prohibit the misrepresentation of parts of their work by prohibiting remixes or derived works. (This may sound overly restrictive, but one should consider the way in which scientific articles are routinely misrepresented by climate change and climate science deniers.)

Free Software has encouraged an environment where sharing is safely and routinely done. Licences like the GNU General Public Licence seek to shield recipients from things like patent threats, particularly from organisations which might appear to want to share their works, but which might be tempted to use patents to regulate the further use of those works. Even in realms where patents have traditionally been tolerated, attempts have been made to shield others from the effects of patents, intended or otherwise: the copyleft hardware movement demands that shared hardware designs are patent-free, for instance.

In contrast, one might think that despite the best efforts of the guide’s authors, all the precautions and behavioural self-correction it encourages might just drive the average researcher to distraction. Or, just as likely, to ignoring most of the guidelines and feigning ignorance if challenged by their “innovation”-obsessed superiors. But in the drive to monetise every last ounce of effort there is one statement that is worth remembering:

If intellectual property is not assigned, this can create problems in who is allowed to exploit the work, and again work can go to waste due to a lack of clarity over who owns what.

In other words, in an environment where everybody wants a share of the riches, it helps to have everybody’s interests out in the open so that there may be no surprises later on. Now, it turns out that unclear ownership and overly casual management of contributions is something that has occasionally threatened Free Software projects, resulting in more sophisticated thinking about how contributions are managed.

And it is precisely this combination of Free Software licensing, or something analogous for other domains, with proper contribution and attribution management that will extend safe and efficient sharing of knowledge to the academic realm. Researchers just cannot have the same level of confidence when dealing with the “technology transfer” offices of their institution and of other institutions. Such offices only want to look after themselves while undermining everyone beyond the borders of their own fiefdoms.

Divide and Rule

It is unfortunate that academic institutions feel that they need to “pull their weight” and have to raise funds to make up for diminishing public funding. By turning their backs on the very reason for their own existence and seeking monopolies instead of sharing knowledge, they unwittingly participate in the “divide and rule” tactics blatantly pursued in the political arena: that everyone must fight each other for all that is left once the lion’s share of public funding has been allocated to prestige megaprojects and schemes that just happen to benefit the well-connected, the powerful and the influential people in society the most.

A properly-funded education sector is an essential component of a civilised society, and its institutions should not be obliged to “sharpen their elbows” in the scuffle for funding and thus deprive others of knowledge just to remain viable. Sadly, while austerity politics remains fashionable, it may be up to us in the Free Software realm to remind academia of its obligations and to show that sustainable ways of sharing knowledge exist and function well in the “real world”.

Indeed, it is up to us to keep such institutions honest and to prevent advocates of monopoly-driven “innovation” from being able to insist that their way is the only way, because just as “divide and rule” politics erects barriers between groups in wider society, commercialisation erects barriers that inhibit the essential functions of academic pursuit. And such barriers ultimately risk extinguishing academia altogether, along with all the benefits its institutions bring to society. If my university were not reinforcing such barriers with its “IP” policy, maybe its anniversary as a measure of how far we have progressed from monopolies and intellectual selfishness would have been worth celebrating after all.

On Not Liking Computers

Monday, November 21st, 2016

Adam Williamson recently wrote about how he no longer really likes computers. This attracted many responses from people who misunderstood him and decided to dispense career advice, including doses of the usual material about “following one’s passion” or “changing one’s direction” (which usually involves becoming some kind of “global nomad”), which do make me wonder how some of these people actually pay their bills. Do they have a wealthy spouse or wealthy parents or “an inheritance”, or do they just do lucrative contracting for random entities whose nature or identities remain deliberately obscure to avoid thinking about where the money for those jobs really comes from? Particularly the latter would be the “global nomad” way, as far as I can tell.

But anyway, Adam appears to like his job: it’s just that he isn’t interested in technological pursuits outside working hours. At some level, I think we can all sympathise with that. For those of us who have similarly pessimistic views about computing, it’s worth presenting a list of reasons why we might not be so enthusiastic about technology any more, particularly for those of us who also care about the ethical dimensions, not merely whether the technology itself is “any good” or whether it provides a sufficient intellectual challenge. By the way, this is my own list: I don’t know Adam from, well, Adam!

Lack of Actual Progress

One may be getting older and noticing that the same technological problems keep occurring again and again, never getting resolved, while seeing people with no sense of history provoke change for change’s – not progress’s – sake. After a while, or when one gets to a certain age, one expects technology to just work and that people might have figured out how to get things to communicate with each other, or whatever, by building on what went before. But then it usually seems to be the case that some boy genius or other wanted a clear run at solving such problems from scratch, developing lots of flashy features but not the mundane reliability that everybody really wanted.

People then get told that such “advanced” technology is necessarily complicated. Whereas once upon a time, you could pick up a telephone, dial a number, have someone answer, and conduct a half-decent conversation, now you have to make sure that the equipment is all connected up properly, that all the configurations are correct, that the Internet provider isn’t short-changing you or trying to suppress your network traffic. And then you might dial and not get through, or you might have the call mysteriously cut out, or the audio quality might be like interviewing a gang of squabbling squirrels speaking from the bottom of a dustbin/trashcan.

Depreciating Qualifications

One may be seeing a profession that requires a fair amount of educational investment – which, thanks to inept/corrupt politicians, also means a fair amount of financial investment – become devalued to the point that its practitioners are regarded as interchangeable commodities who can be coerced into working for as little as possible. So much for the “knowledge economy” when its practitioners risk ending up earning less than people doing so-called “menial” work who didn’t need to go through a thorough higher education or keep up an ongoing process of self-improvement to remain “relevant”. (Not that there’s anything wrong with “menial” work: without people doing unfashionable jobs, everything would grind to a halt very quickly, whereas quite a few things I’ve done might as well not exist, so little difference they made to anything.)

Now we get told that programming really will be the domain of “artificial intelligence” this time around. That instead of humans writing code, “high priests” will merely direct computers to write the software they need. Of course, such stuff sounds great in Wired magazine, but it is rather amusing to anyone with any actual experience of software projects. Unfortunately, politicians (and other “thought leaders”) read such things one day and then slash away at budgets the next. And in a decade’s time, we’ll be suffering the same “debate” about a lack of “engineering talent” with the same “insights” from the usual gaggle of patent lobbyists and vested interests.

Neoliberal Fantasy Economics

One may have encountered the “internship” culture where as many people as possible try to get programmers and others in the industry to work for nothing, making them feel as if they need to do so in order to prove their worth for a hypothetical employment position or to demonstrate that they are truly committed to some corporate-aligned goal. One reads or hears people advocating involvement in “open source” not to uphold the four freedoms (to use, share, modify and distribute software), but instead to persuade others to “get on the radar” of an employer whose code has been licensed as Free Software (or something pretending to be so) largely to get people to work for them for free.

Now, I do like the idea of employers getting to know potential employees by interacting in a Free Software project, but it should really only occur when the potential employee is already doing something they want to do because it interests them and is in their interests. And no-one should be persuaded into doing work for free on the vague understanding that they might get hired for doing so.

The Expendable Volunteers

One may have seen the exploitation of volunteer effort where people are made to feel that they should “step up” for the benefit of something they believe in, often requiring volunteers to sacrifice their own time and money to do such free work, and often seeing those volunteers being encouraged to give money directly to the cause, as if all their other efforts were not substantial contributions in themselves. While striving to make a difference around the edges of their own lives, volunteers are often working in opposition to well-resourced organisations whose employees have the luxury of countering such volunteer efforts on a full-time basis and with a nice salary. Those people can go home in the evenings and at weekends and tune it all out if they want to.

No wonder volunteers burn out or decide that they just don’t have time or aren’t sufficiently motivated any more. The sad thing is that some organisations ignore this phenomenon because there are plenty of new volunteers wanting to “get active” and “be visible”, perhaps as a way of marketing themselves. Then again, some communities are content to alienate existing users if they can instead attract the mythical “10x” influx of new users to take their place, so we shouldn’t really be surprised, I suppose.

Blame the Powerless

One may be exposed to the culture that if you care about injustices or wrongs then bad or unfortunate situations are your responsibility even if you had nothing to do with their creation. This culture pervades society and allows the powerful to do what they like, to then make everyone else feel bad about the consequences, and to virtually force people to just accept the results if they don’t have the energy at the end of a busy day to do the legwork of bringing people to account.

So, those of us with any kind of conscience at all might already be supporting people trying to do the right thing like helping others, holding people to account, protecting the vulnerable, and so on. But at the same time, we aren’t short of people – particularly in the media and in politics – telling us how bad things are, with an air of expectation that we might take responsibility for something supposedly done on our behalf that has had grave consequences. (The invasion and bombing of foreign lands is one depressingly recurring example.) Sadly, the feeling of powerlessness many people have, as the powerful go round doing what they like regardless, is exploited by the usual cynical “divide and rule” tactics of other powerful people who merely see the opportunities in the misuse of power and the misery it causes. And so, selfishness and tribalism proliferate, demotivating anyone wanting the world to become a better place.

Reversal of Liberties

One may have had the realisation that technology is no longer merely about creating opportunities or making things easier, but is increasingly about controlling and monitoring people and making things complicated and difficult. That sustainability is sacrificed so that companies can cultivate recurring and rich profit opportunities by making people dependent on obsolete products that must be replaced regularly. And that technology exacerbates societal ills rather than helping to eradicate them.

We have the modern Web whose average site wants to “dial out” to a cast of recurring players – tracking sites, content distribution networks (providing advertising more often than not), font resources, image resources, script resources – all of which contribute to making the “signal-to-noise” ratio of the delivered content smaller and smaller all the time. Where everything has to maintain a channel of communication to random servers to constantly update them about what the user is doing, where they spent most of their time, what they looked at and what they clicked on. All of this requiring hundreds of megabytes of program code and data, burning up CPU time, wasting energy, making computers slow and steadily obsolete, forcing people to throw things away and to buy more things to throw away soon enough.

We have the “app” ecosystem experience, with restrictions on access, competition and interoperability, with arbitrarily-curated content: the walled gardens that the likes of Apple and Microsoft failed to impose on everybody at the dawn of the “consumer Internet” but do so now under the pretences of convenience and safety. We have social networking empires that serve fake news to each person’s little echo chamber, whipping up bubbles of hate and distracting people from what is really going on in the world and what should really matter. We have “cloud” services that often offer mediocre user experiences but which offer access from “any device”, with users opting in to both the convenience of being able to get their messages or files from their phone and the surveillance built into such services for commercial and governmental exploitation.

We have planned obsolescence designed into software and hardware, with customers obliged to buy new products to keep doing the things they want to do with those products and to keep it a relatively secure experience. And we have dodgy batteries sealed into devices, with the obligation apparently falling on the customers themselves to look after their own safety and – when the product fails – the impact of that product on the environment. By burdening the hapless user of technology with so many caveats that their life becomes dominated by them, those things become a form of tyranny, too.

Finding Meaning

Many people need to find meaning in their work and to feel that their work aligns with their own priorities. Some people might be able to do work that is unchallenging or uninteresting and then pursue their interests and goals in their own time, but this may be discouraging and demotivating over the longer term. When people’s work is not orthogonal to their own beliefs and interests but instead actively undermines them, the result is counterproductive and even damaging to those beliefs and interests and to others who share them.

For example, developing proprietary software or services in a full-time job, although potentially intellectually challenging, is likely to undermine any realistic level of commitment in one’s own free time to Free Software that does the same thing. Some people may prioritise a stimulating job over the things they believe in, feeling that their work still benefits others in a different way. Others may feel that they are betraying Free Software users by making people reliant on proprietary software and causing interoperability problems when those proprietary software users start assuming that everything should revolve around them, their tools, their data, and their expectations.

Although Adam wasn’t framing this shift in perspectives in terms of his job or career, it might have an impact on some people in that regard. I sometimes think of the interactions between my personal priorities and my career. Indeed, the way that Adam can seemingly stash his technological pursuits within the confines of his day job, while leaving the rest of his time for other things, was some kind of vision that I once had for studying and practising computer science. I think he is rather lucky in that his employer’s interests and his own are aligned sufficiently for him to be able to consider his workplace a venue for furthering those interests, doing so sufficiently to not need to try and make up the difference at home.

We live in an era of computational abundance and yet so much of that abundance is applied ineffectively and inappropriately. I wish I had a concise solution to the complicated equation involving technology and its effects on our quality of life, if not for the application of technology in society in general, then at least for individuals, and not least for myself. Maybe a future article needs to consider what we should expect from technology, as its application spreads ever wider, such that the technology we use and experience upholds our rights and expectations as human beings instead of undermining and marginalising them.

It’s not hard to see how even those who were once enthusiastic about computers can end up resenting them and disliking what they have become.

Leaving the PSF

Sunday, May 10th, 2015

It didn’t all start with a poorly-considered April Fools’ joke about hosting a Python conference in Cuba, but the resulting private mailing list discussion managed to persuade me not to continue as a voting member of the Python Software Foundation (PSF). In recent years, upon returning from vacation, discovering tens if not hundreds of messages whipping up a frenzy about some topic supposedly pertinent to the activities of the PSF, and reading through such messages as if to inform my own position on the matter, was undoubtedly one of the chores of being a member. This time, my vacation plans were slightly unusual, so I was at least spared the surprise of getting the bulk of people’s opinions in one big serving.

I was invited to participate in the PSF at a time when it was an invitation-only affair. My own modest contributions to the EuroPython conference were the motivating factor, and it would seem that I hadn’t alienated enough people for my nomination to be opposed. (This cannot be said for some other people who did eventually become members as well after their opponents presumably realised the unkindness of their ways.) Being asked to participate was an honour, although I remarked at the time that I wasn’t sure what contribution I might make to such an organisation. Becoming a Fellow of the FSFE was an active choice I made myself because I align myself closely with the agenda the FSFE chooses to pursue, but the PSF is somewhat more vague or more ambivalent about its own agenda: promoting Python is all very well, but should the organisation promote proprietary software that increases Python adoption, or would this undermine the foundations on which Python was built and is sustained? Being invited to participate in an organisation with often unclear objectives combines a degree of passivity with an awareness that some of the decisions being taken may well contradict some of the principles I have actively chosen to support in other organisations. Such as the FSFE, of course.

Don’t get me wrong: there are a lot of vital activities performed within the PSF. For instance, the organisation has a genuine need to enforce its trademarks and to stop other people from claiming the Python name as their own, and the membership can indeed assist in such matters, as can the wider community. But looking at my archives of the private membership mailing list, a lot of noise has been produced on other, more mundane matters. For a long time, it seemed as if the only business of the PSF membership – as opposed to the board who actually make the big decisions – was to nominate and vote on new members, thus giving the organisation the appearance of only really existing for its own sake. Fortunately, organisational reform has made the matter of recruiting members largely obsolete, and some initiatives have motivated other, more meaningful activities. However, I cannot be the only person who has noted that such activities could largely be pursued outside the PSF and within the broader community instead, as indeed these activities typically are.

PyCon

Some of the more divisive topics that have caused the most noise have had some connection with PyCon: the North American Python conference that mostly replaced the previous International Python Conference series (from back when people thought that conferences had to be professionally organised and run, in contrast to PyCon and most, if not all, other significant Python conferences today). Indeed, this lack of separation between the PSF and PyCon has been a significant concern of mine. I will probably never attend a PyCon, partly because it resides in North America as a physical event, partly because its size makes it completely uninteresting to me as an attendee, and largely because I increasingly find the programme uninteresting for a variety of other reasons. When the PSF members’ time is spent discussing or at least exposed to the discussion of PyCon business, it can just add to the burden of membership for those who wish to focus on the supposed core objectives of the organisation.

What may well be worse, however, is that PyCon exposes the PSF to substantial liability issues. As the conference headed along a trajectory of seemingly desirable and ambitious growth, it collided with the economic downturn caused by the global financial crisis of 2008, incurring a not insignificant loss. Fortunately, this outcome has not since been repeated, and the organisation had sufficient liquidity to avoid any serious consequences. Some have argued that it was precisely because profits from previous years’ conferences had been accumulated that the organisation was able to pay its bills, but such good fortune cannot explain away the fundamental liability and the risks it brings to the viability of the organisation, especially if fortune happens not to be on its side in future.

Volunteering

In recent times, I have been more sharply focused on the way volunteers are treated by organisations who rely on their services to fulfil their mission. Sadly, the PSF has exhibited a poor record in various respects on this matter. Once upon a time, the Python language Web site was redesigned under contract, but the burden of maintenance fell on community volunteers. Over time, discontentment forced the decision to change the technology and a specification was drawn up under a degree of consultation. Unfortunately, the priorities of certain stakeholders – that is, community volunteers doing a fair amount of hard work in their own time – were either ignored or belittled, leaving them confronted with a choice: either adapt to a suboptimal workflow not of their own choosing, spending their own time and energy developing that workflow, or quit and leave it to other people to tidy up the mess that those other people (and the hired contractors) had made.

Understandably, the volunteers quit, leaving a gap in the Web site functionality that took a year to reinstate. But what was most disappointing was the way those volunteers were branded as uncooperative and irresponsible in an act of revisionism by those who clearly failed to appreciate the magnitude of the efforts of those volunteers in the first place. Indeed, the views of the affected volunteers were even belittled when efforts were championed to finally restore the functionality, with it being stated by one motivated individual that the history of the problem was not of his concern. When people cannot themselves choose the basis of their own involvement in a volunteer-run organisation without being vilified for letting people down or for “holding the organisation to ransom”, the latter being a remarkable accusation given the professionalism that was actually shown in supporting a transition to other volunteers, one must question whether such an organisation deserves to attract any volunteers at all.

Politics

As discussion heated up over the PyCon Cuba affair, the usual clash of political views emerged, with each side accusing the other of ignorance and not understanding the political or cultural situation, apparently blinkered by their own cultural and political biases. I remember brazen (and ill-informed) political advocacy being a component in one of the Python community blogging “planets” before I found the other one, back when there was a confusing level of duplication between the two and when nobody knew which one was the “real” one (which now appears to consist of a lot of repetition and veiled commercial advertising), and I find it infuriating when people decide to use such matters as an excuse to lecture others and to promote their own political preferences.

I have become aware of a degree of hostility within the PSF towards the Free Software Foundation, with the latter being regarded as a “political” organisation, perhaps due to hard feelings experienced when the FSF had to educate the custodians of Python about software licensing (which really came about in the first place because of the way Python development had been moved around, causing various legal representatives to play around with the licensing, arguably to make their own mark and to stop others getting all the credit). And I detect a reluctance in some quarters to defend software freedom within the PSF, with a reluctance to align the PSF with other entities that support software and digital freedoms. At least the FSF can be said to have an honest political agenda, where those who support it more or less know where they stand.

In contrast, the PSF seems to cultivate all kinds of internal squabbling and agenda-setting: true politics in the worst sense of the word. On one occasion I was more or less told that my opinion was not welcome or, indeed, could ever be of interest on a topic related to diversity. Thankfully, diversity politics moved to a dedicated mailing list and I was thereafter mostly able to avoid being told by another Anglo-Saxon male that my own perspectives didn’t matter on that or on any other topic. How it is that someone I don’t actually know can presume to know in any detail what perspectives or experiences I might have to offer on any matter remains something of a mystery to me.

Looking through my archives, there appears to be a lot of noise, squabbling, quipping, and recrimination over the last five years or so. In the midst of the recent finger-wagging, someone dared to mention that maybe Cubans, wherever they are, might actually deserve to have a conference. Indeed, other places were mentioned where the people who live there, through no fault of their own, would also be the object of political grandstanding instead of being treated like normal people wanting to participate in a wider community.

I mostly regard April Fools’ jokes as part of a tedious tradition, part of the circus that distracts people away from genuine matters of concern, perhaps even an avenue of passive aggression in certain circles, a way to bully people and then insist – as cowards do – that it was “just a joke”. The lack of a separation of the PSF’s interests combined with the allure of the circus conspired to make fools out of the people involved in creating the joke and of many in the accompanying debate. I find myself uninterested in spending my own time indulging such distractions, especially when those distractions are products of flaws in the organisation that nobody wishes to fix, and when there are more immediate and necessary activities to pursue in the wider arena of Free Software that, as a movement in its own right, some in the PSF practically refuse to acknowledge.

Effects

Leaving the PSF won’t really change any of my commitments, but it will at least reduce the level of background noise I have to deal with. Such an underwhelming and unfortunate assessment is something the organisation will have to rectify in time if it wishes to remain relevant and to deserve the continued involvement of its members. I do have confidence in some of the reform and improvement processes being conducted by volunteers with too little time of their own to pursue them, and I hope that they make the organisation a substantially better and more effective one, as they continue to play to an audience of people with much to say but, more often than not, little to add.

I would have been tempted to remain in the PSF and to pursue various initiatives if the organisation were a multiplier of effect for any given input of effort. Instead, it currently acts as a divider of effect for all the effort one would apparently need to put in to achieve anything. That isn’t how any organisation, let alone one relying on volunteer time and commitment, should be functioning.

A Footnote

On political matters and accusations of ignorance being traded, my own patience is wearing thin indeed, and this probably nudged me into finally making this decision. It probably doesn’t help that I recently made a trip to Britain where election season has been in full swing, with unashamed displays of wilful idiocy openly paraded on a range of topics, indulged by the curated ignorance of the masses, with the continued destruction of British society, nature and the economy looking inevitable as the perpetrators insist they know best now and will undoubtedly in the future protest their innocence when challenged on the legacy of their ruinous rule, adopting the “it wasn’t me” manner of a petulant schoolchild so befitting of the basis of the nepotism that got most of them where they are today.

International Day Against DRM

Wednesday, May 6th, 2015

A discussion on the International Day Against DRM got my attention, and instead of replying on the site in question, I thought I’d write something about it here. The assertion was that “this war has been lost”, to which it was noted that “ownership isn’t for everyone”.

True enough: people are becoming conditioned to accept that they can enjoy nice things but not have any control of them or, indeed, any right to secure them for themselves. So you have the likes of Spotify effectively reinventing commercial radio, where the interface is so soul-crushingly awful that it would almost be more convenient to call the radio station and request that they play a track. Or at least that was my impression when I was confronted with it on someone’s smartphone fairly recently.

Meanwhile, the ignorant will happily trumpet the corporate propaganda claiming that those demanding digital rights are “communists”, when the right to own things to enjoy on your own terms has actually been taken away by those corporations and their pocket legislators. Maybe people should remember that when they’re next out shopping for gadgets or, heaven forbid, voting in a public election.

An Aside on Music

Getting older means that one can happily and justifiably regard a lot of new cultural output as inferior to what came before, so if one happened to stop buying music when DRM was imposed, deciding not to bother with new music doesn’t create such a big problem after all. I have plenty of legitimately purchased music to listen to already, and I didn’t need the potential enjoyment of any new work to be inconvenienced by only being able to play that work on certain devices or on somebody else’s terms.

Naturally, the music industry blames the decline in new music sales on “piracy”, but in fact people just got used to getting their music in more convenient ways, or they decided that they already have enough music and don’t really need any more. I remember how some people would buy a CD or two every weekend just as a treat or to have something new to listen to, and the music industry made a very nice living from this convenient siphoning of society’s disposable income, but that was just a bubble: the prices were low enough for people to not really miss the money, but the prices were also high enough and provided generous-enough margins for the music industry to make a lot of money from such casual purchasers while they could.

Note that I emphasised “potential” above. That’s another thing that the music business got away with for years: the loyalty of their audiences. How many people bought new material from an artist they liked only to discover that it wasn’t as good as they’d hoped? After a while, people just lose interest. This despite the effective state subsidy of the music business through public broadcasters endlessly and annoyingly playing and promoting that industry’s proprietary content. And there is music from even a few years ago that you wouldn’t be able to persuade anyone to sell you any more. It is as if they don’t want your money, or at least not unless it is handed over on precisely their terms, which for a long time now have seemed to involve the customer going back and paying them again and again for something they already bought (under threat of legal penalties for “format shifting” in order to compel such repeat business).

It isn’t a surprise that the focus now is on music (and video) streaming and that actually buying media to play offline is becoming harder and harder. The focus of the content industries is on making it more and more difficult to acquire their content in ways that make it possible to experience that content on sustainable terms. Just as standard music CDs became corrupted with DRM mechanisms that bring future access to the content into doubt, so have newer technologies been encumbered with inconvenient and illegitimate mechanisms to deny people legitimate access. And as the campaign against DRM notes, some of the outcomes are simply discriminatory and shameful.

Our Legacy

Even content that has not been “protected” has proven difficult to recover simply due to technological progress and material, cultural and intellectual decay. It would appal many people that anyone would put additional barriers around content just to maximise revenues when the risk is that the “protectors” of such content will either inadvertently (their competence not being particularly noted) or deliberately (their vindictiveness being especially noted) consign that content to the black hole of prehistory just to stop anyone else actually enjoying it without them benefiting from the act. In some cases, one would think that content destruction is really what the supposed guardians of the content actually want, especially when there’s no more easy money to be made.

Of course, such profiteers don’t actually care about things like cultural legacy or the historical record, but society should care about such things. Regardless of who paid for something to be made – and frequently it was the artist, with the publishers only really offering financing that would most appropriately be described as “predatory” – such content is part of our culture and our legacy. That is why we should resist DRM, why we should not support its proponents when buying devices and content or when electing our representatives, and why we should try to limit copyright terms so that legacy content may stand a chance of being recovered and responsibly archived.

We owe it to ourselves and to future generations to resist DRM.

Lenovo: What Were They Thinking?

Wednesday, February 25th, 2015

In the past few days, there have been plenty of reports of Lenovo shipping products with a form of adware known as Superfish, originating from a company of the same name, that interferes with the normal operation of Web browser software to provide “shopping suggestions” in content displayed by the browser. This would be irritating enough all by itself, but what made the bundled software involved even more worrying was that it also manages to insert itself as an eavesdropper on the user’s supposedly secure communications, meaning that communications conducted between the user and Internet sites such as online banks, merchants, workplaces and private-and-confidential services are all effectively compromised.

Making things worse still, the mechanism employed to pursue this undesirable eavesdropping proved highly insecure in itself, exposing Lenovo customers to attack from others. So, we start this sordid affair with a Lenovo “business decision” about bundling some company’s software and end up with Lenovo’s customers having their security compromised for the dubious “benefit” of being shown additional, unsolicited advertisements in Web pages that didn’t have them in the first place. One may well ask what Lenovo’s decision-makers were thinking.
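To make the eavesdropping concrete: interception of this kind is visible to anyone who inspects the certificate a site presents, because the chain ends up signed by the proxy’s own CA rather than a recognised one. A minimal sketch in Python, operating on the certificate structure that ssl.SSLSocket.getpeercert() returns (the helper names and sample data are my own, fabricated for illustration):

```python
# Sketch: flag a TLS certificate whose issuer matches a known interception CA.
# The certificate structure below mirrors what Python's
# ssl.SSLSocket.getpeercert() returns; the sample data is fabricated.

KNOWN_INTERCEPTION_CAS = {"Superfish, Inc."}

def issuer_organisations(cert):
    """Collect organisation names from the issuer field of a parsed cert."""
    orgs = set()
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "organizationName":
                orgs.add(value)
    return orgs

def looks_intercepted(cert):
    """True if the certificate was issued by a known interception CA."""
    return bool(issuer_organisations(cert) & KNOWN_INTERCEPTION_CAS)

# A certificate as an interception proxy might present it:
proxied = {"issuer": ((("organizationName", "Superfish, Inc."),),
                      (("commonName", "Superfish"),))}

# A certificate from a legitimate CA (name invented):
normal = {"issuer": ((("organizationName", "Example Trust Services"),),)}
```

In the Superfish case the check was even easier than this sketch suggests, because the same self-signed root (and, as it turned out, the same private key) was installed on every affected machine.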

Symptoms of a Disease

Indeed, this affair gives us a fine opportunity to take a critical look at the way the bundling of software has corrupted the sale of personal computers for years, if not decades. First of all, considering the major channels and vendors to which most buyers are exposed, most customers have never been given a choice of operating system, nor the option of buying a computer without one: the most widely-available and widely-advertised computers only offer some Windows variant, and manufacturers typically insist that they cannot offer anything else – or indeed nothing at all – for a variety of feeble reasons. And when asked to provide a refund for this unwanted product that has been forced on the purchaser, some manufacturers even claim that it is free or that someone else has subsidised the cost, and that there is no refund to be had.

This subsidy – some random company acting like a kind of wealthy distant relative paying for the “benefit” of bundled proprietary software – obviously raises competition-related issues, but it also raises the issue of why anyone would want to pay for someone else to get something at no cost. Even in a consumer culture where getting more goodies is seen as surely being a good thing because it means more toys to play with, one cannot help but be a little suspicious: surely something is too good to be true if someone wants to give you things that they would otherwise make you pay for? And now we know that it is: the financial transaction that enriched Lenovo was meant to give Superfish access to its customers’ sensitive information.

Of course, Lenovo’s updated statement on the matter (expect more updates, particularly if people start to talk about class action lawsuits) tries to downplay the foul play: the somewhat incoherent language (example: “Superfish technology is purely based on contextual/image and not behavioral”) denies things like user profiling and uses terminology that is open to quite a degree of interpretation (example: “Users are not tracked nor re-targeted”). What the company lawyers clearly don’t want to talk about is what information was being collected and where it was being whisked off to, keeping the legal attack surface minimal and keeping those denials of negligence strenuous (“we did not know about this potential security vulnerability until yesterday”). Maybe some detail about those “server connections shut down in January” would shed some light on these matters, but the lawyers know that with that comes the risk of exposing a paper trail showing that everybody knew what they were getting into.

Your Money isn’t Good Enough

One might think that going to a retailer, giving them your money, and getting a product to take home would signal the start of a happy and productive experience with a purchase. But it seems that for some manufacturers, getting the customer’s money just isn’t enough: they just have to make a bit of money on the side, and perhaps keep making money from the product after the customer has taken it home, too. Consumer electronics and products from the “content industries” have in particular fallen victim to the introduction of advertising. Even though you thought you had bought something outright, advertisements and other annoyances sneak into the experience, often in the hope that you will pay extra to make them go away.

And so, you get the feeling that your money somehow isn’t good enough for these people. Maybe if you were richer or knew the right people, your money would be good enough and you wouldn’t need to suffer adverts or people spying on you, but you aren’t rich or well-connected and just have to go along with the indignity of it all. Naturally, the manufacturers would take offence at such assertions; they would claim that they have to take bribes (sorry, subsidies) to be able to keep their own prices competitive with the rest of the market, and that everybody else is taking the money anyway. That might be almost believable if it weren’t for the fact that the prices of things like bundled operating systems and “productivity software” – the stuff that you can’t get a refund for – are completely at the discretion of the organisations who make them. (It also doesn’t help these companies that they seem unable to deliver a quality product with a stable set of internal components, or that they introduce stupid hardware features that make their products excruciating to use.)

Everybody Hurts

For the most part, it probably is the case that if you are well-resourced and well-connected, you can buy the most expensive computer with the most expensive proprietary software for it, and maybe the likes of Lenovo won’t have tainted it with their adware-of-the-month. But naturally, proprietary software doesn’t provide you with any inherent assurances that it hasn’t been compromised: only Free Software can offer you that, and even then you must be able to insist on the right to be able to build and install that software on the hardware yourself. Coincidentally, I did once procure a Lenovo computer from a retailer that only supplied them with GNU/Linux preinstalled, with Lenovo being a common choice amongst such retailers because the distribution channel apparently made it possible for them to resell such products without Windows or other proprietary products ever becoming involved.

But sometimes the rich and well-connected become embroiled in surveillance and spying in situations of their own making. Having seen people become so infatuated with Microsoft Outlook that they seemingly need to have something bearing the name on every device they use, it is perhaps not surprising that members of the European Parliament had apparently installed Microsoft’s mobile application bearing the Outlook brand. Unfortunately for them, Microsoft’s “app” sends sensitive information including their authentication credentials off into the cloud, putting their communications (and the safety of their correspondents, in certain cases) at risk.

Some apologists may indeed claim that Microsoft and their friends and partners collecting everybody’s sensitive details for their own convenience is “not an issue for the average user”, but in fact it is a huge issue. When people become conditioned into thinking that surrendering their privacy, accepting the inconveniences of intrusive advertising, always being in debt to the companies from which they have bought things (even when those purchases have actually kept those companies in business), and giving up control of their own belongings are all “normal” things, and that they do not deserve any better, then we all start to lose control over the ways in which we use technology as well as the technologies we are able to use. Notions of ownership and democracy quickly come under attack and are eroded.

What Were They Thinking?

We ultimately risk some form of authority, accountable or otherwise, telling us that we no longer deserve to be able to enjoy things like privacy. Their reasons are always scary ones, but in practice it usually has something to do with them not wanting ordinary people doing unexpected or bothersome things that might question or undermine their own very comfortable (and often profitable) position telling everybody else what to do, what to worry about, what to buy, and so on. And it turns out that a piece of malware that just has to see everything in its rampant quest to monetise every last communication of the unwitting user now gives us a chance to think about how we really want our computers and their suppliers to behave.

So, what were they thinking at Lenovo? That Superfish was an easy way to make a few extra bucks? That their customers don’t deserve anything better than to have their private communications infused with advertising? That their customers don’t need to know that people are tampering with their Internet connection? That the private information of their customers was theirs to sell to anyone offering them some money? Did nobody consider the implications of any of this at all, or was there a complete breakdown in ethics amongst those responsible? Was it negligence or contempt for their own customers that facilitated this pursuit of greed?

Sadly, the evidence from past privacy scandals involving major companies indicates that regulatory or criminal proceedings are unlikely, merely fuelling suspicions that supposed corporate incompetence – the existence of conveniently unlocked backdoors – actually serves various authorities rather nicely. It is therefore up to us to remain vigilant and, of course, to exercise our own forms of reward for those who act in our interests, along with punishment for those whose behaviour is unacceptable in a fair and democratic society.

Maybe after a break from seeing any of it for a while, our business and our money will matter more to Lenovo than that of some shady “advertising” outfit with the dubious and slightly unbelievable objective of showing more adverts to people while they do their online banking. And by then, maybe Lenovo (and everyone else) will let us install whatever software we like on their products, because many people aren’t going to be trusting the bundled software for a long time to come after this. Not that they should ever have trusted it in the first place, of course.

Terms of the Pirates

Tuesday, September 3rd, 2013

According to the privacy policy on the Web site of the UK Pirate Party, “The information generated by the cookie about your use of the website (including your IP address) will be transmitted to and stored by Google on servers in the United States.” Would it not be more appropriate for the pirates to do their own visitor analysis using a Free Software solution like Piwik?

Come on, pirates, you can do better than this!

Licensing in a Post Copyright World: Some Clarifications

Sunday, July 28th, 2013

Every now and then, someone voices their dissatisfaction with the GNU General Public License (GPL). A recent example is the oddly titled Licensing in a Post Copyright World: odd because if anything copyright is getting stronger, even though public opposition to copyright legislation and related measures is also growing. Here I present some necessary clarifications for anyone reading the above article. This is just a layman’s interpretation, not legal advice.

Licence Incompatibility

It is no secret that code licensed only under specific versions of the GPL cannot be combined with code under other specific versions of the GPL such that the resulting combination will have a coherent and valid licence. But why are the licences incompatible? Because the decision was taken to strengthen the GPL in version 3 (GPLv3), but since this means adding more conditions to the licence that were not present in version 2 (GPLv2), and since GPLv2 does not let people who are not the authors of the code involved add new conditions, the additional conditions of GPLv3 cannot be applied to the “GPLv2 only” licensed code. Meanwhile, the “GPLv3 only” licensed code requires these additional conditions and does not allow people who are not the authors of the code to strip them away to make the resulting whole distributable under GPLv2. There are ways to resolve this as I mention below.
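The logic of this incompatibility can be modelled mechanically. The following sketch is purely illustrative (and certainly not legal advice): each licence notice permits a set of GPL versions, and a combined work is only distributable under a version that every component permits.

```python
# Illustrative model of GPL version compatibility; not legal advice.
# A notice is a pair (version, only): "GPLv2 only" is (2, True),
# "GPLv2 or any later version" is (2, False).

ALL_VERSIONS = {1, 2, 3}

def permitted_versions(notice):
    """The set of GPL versions under which the code may be distributed."""
    version, only = notice
    if only:
        return {version}
    return {v for v in ALL_VERSIONS if v >= version}

def combination_versions(*notices):
    """A combined work needs a version acceptable to every component."""
    allowed = set(ALL_VERSIONS)
    for notice in notices:
        allowed &= permitted_versions(notice)
    return allowed

# "GPLv2 only" with "GPLv3 only" yields the empty set: no valid common licence.
# "GPLv2 or later" with "GPLv3 only" yields {3}: the whole can be GPLv3.
```

The empty intersection in the first case is precisely the “incompatibility” described above: neither party’s notice permits a version the other accepts, and neither party may add or strip conditions on the other’s behalf.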

(There apparently was an initiative to make version 2.2 of the GPL as a more incremental revision of the licence, although incorporating AGPLv3 provisions, but according to one of the central figures in the GPL drafting activity, work progressed on GPLv3 instead. I am sure some people wouldn’t have liked the GPLv2.2 anyway, as the AGPLv3 provisions seem to be one of many things they don’t like.)

Unnecessary Amendments

Why is the above explanation about licence compatibility so awkward? Because of the “only” stipulation that people put on their code, against the advice of the authors of the licence. It turns out that some people have so little trust in the organisation that wrote the licence (which they have nevertheless chosen to use) that, in a flourish of self-assertion, they needlessly stipulate “only” instead of “or any later version” and feel that they have mastered the art of licensing.

So the predicament of projects that put “only” everywhere, becoming “stuck” on certain GPL versions, is a situation of their own making, like someone seeing a patch of wet cement and realising that their handprint can be preserved for future generations to enjoy. Other projects suffer from such distrust, too, because even if they use “or any later version” to future-proof their licensing, they can be held back by the “only” crowd if they make use of that crowd’s software, rendering the licence upgrade option ineffective.
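For reference, the wording that the licence itself recommends, and which avoids the problem described above, is the one found in the appendix to GPLv2 (the copyright line here is a placeholder):

```
Copyright (C) <year> <name of author>

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
```

It is the “or (at your option) any later version” clause that the “only” crowd deliberately strikes out, with the consequences discussed here.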

It is somewhat difficult to make licences that request that people play fair while not requiring people to actually do anything to uphold that fairness, so when those who write the licences give some advice, it is somewhat impertinent to reject that advice and then to blame those very people for one’s own mistake later on. Even people who have done the recommended thing, but who suffer from “only” proliferation amongst the things on which their code depends, should be blaming the people who put “only” everywhere, not the people who happened to write the licence in the first place.

A Political Movement

The article mentions that the GPL has become a “political platform”. But the whole notion of copyleft has been political from the beginning because it is all about a social contract between the developers and the end-users: not exactly the preservation of a monopoly on a creative work that the initiators of copyright had in mind. The claim is made that Apple shuns GPLv3 because it is political. In fact, companies like Apple and Nokia chiefly avoid GPLv3 because the patent language has been firmed up and makes those companies commit to not suing recipients of the code at will. (Nokia trumpeted a patent promise at one point, as if the company was exhibiting extreme generosity, but it turned out that they were obliged to license the covered patents because of the terms of GPLv2.) Apple has arguably only accepted the GPL in the past because the company could live with the supposed inconvenience of working with a wider development community on that community’s terms. As projects like WebKit have shown, even when obliged to participate under a copyleft licence, Apple can make collaboration so awkward that some participants (such as Google) would rather cultivate their own fork than deal with Apple’s obsession to control everything.

It is claimed that “the license terms are a huge problem for companies”, giving the example of Apple wanting to lock down their products and forbid anyone from installing anything other than Apple-approved software on devices that they have paid for and have in their own possession, claiming that letting people take control of their devices would obligate manufacturers to “get rid of the devices’ security systems”. In fact, it is completely possible to give the choice to users to either live with the restrictions imposed by the vendor and be able to access whichever online “app” store is offered by that vendor, or to let those users “root” or “jailbreak” their device and to tell them that they must find other sources of software and content. Such choices do not break any security systems at all, or at least not ones that we should be caring very much about.

People like to portray the FSF as being inflexible and opposed to the interests of businesses. However, the separation of the AGPL and the GPL contradicts such convenient assertions. Meanwhile, the article seems to suggest that we should blame the GPL for Apple’s inflexibility, which is, of course, absurd.

Blaming the Messenger

The article blames the AGPLv3 for the proliferation of “open core” business models. Pointing the finger at the licence and blaming it for the phenomenon is disingenuous since one could very easily concoct a licence that requires people to choose either no-cost usage, where they must share their code, or paid usage, where they get to keep their code secret. The means by which people can impose such a choice is their ownership of the code.

Although people can enforce an “open core” model more easily using copyleft licensing as opposed to permissive licensing, this is a product of the copyright ownership or assignment regime in place for a project, not something that magically materialises because a copyleft licence was chosen. It should be remembered that copyleft licences effectively regulate and work best with projects having decentralised ownership. Indeed, people have become more aware of copyright and licensing transfers and assignments perhaps as a result of “open core” business models and centralised project ownership, and they should be distrustful of commercial entities wanting such transfers and assignments to be made, regardless of any Free Software licence chosen, because they designate a privileged status in a project. Skepticism has even been shown towards the preference that projects transfer enforcement rights, if not outright ownership, to the FSF. Such skepticism is only healthy, even if one should probably give the FSF the benefit of the doubt as to the organisation’s intentions, in contrast to some arbitrary company who may change strategy from quarter to quarter.

The article also blames the GPLv3 or the AGPLv3 for the behaviour of “licence trolls”, but this is disingenuous. If Oracle offers a product with a choice of AGPLv3 or a special commercial licence, and if as a consequence those who want permissively licensed software for use in their proprietary products cannot get such software under permissive licences, it is not the fault of any copyleft licence for merely existing: it is the fault (if this is even a matter of blame) of those releasing the software and framing the licence choices. Again, you do not need the FSF’s copyleft licences to exist to offer customers a choice of paying money or making compromises on how they offer their own work.

Of course, if people really cared about the state of projects that have switched licences, they would step up and provide a viable fork of the code starting from a point just before the licence change, but as can often be the case with permissively licensed software and a community of users dependent on a strong vendor, most people who claim to care are really looking for someone else to do the work so that they can continue to enjoy free gifts with as few obligations attached as possible. There are permissively licensed software projects with vibrant development communities, but remaining vibrant requires people to cooperate and for ownership to be distributed, if one really values community development and is not just looking for someone with money to provide free stuff. Addressing fundamental matters of project ownership and governance will get you much further than waving a magic wand and preferring permissive licensing, because you will be affected by those former things whichever way you decide to go with the latter.

Defining the New Normal

The article refers to BusyBox being “infamous” for having its licence enforced. That is a great way of framing reasonable behaviour in such a way as to suggest that people must be perverse for wanting to stand behind the terms under which, and mechanisms through which, they contributed their effort to a project. What is perverse is choosing a licence where such terms and mechanisms are defined and then waiving the obligation to defend it: it would not only be far easier to just choose another licence instead, but it would also be more honest to everyone wanting to use that project as well as everyone contributing to the project, too. The former group would have legal clarity and not the nods and winks of the project leadership; the latter group would know not to waste their time most likely helping people make proprietary software, if that is something they object to.

Indeed, when people contribute to a project it is on the basis of the social contract of the licence. When the licence is a copyleft licence, people will care whether others uphold their obligations. Some people say that they do not want the licence enforced on a project they contribute to. They have a right to express their own preference, but they cannot speak for everyone else who contributed under the explicit social contract that is the licence. Where even one person who has contributed to a project sees their code used against the terms of the licence, that person has the right to demand that the situation be remedied. Denying individuals such rights because “they didn’t contribute very much” or “the majority don’t want to enforce the licence” (or even claiming that people are “holding the project to ransom”) sets a dangerous precedent and risks making the licence unenforceable for such projects as well as leaving the licence itself as a worthless document that has nothing to say about the culture or functioning of the project.

Some people wonder, “Why do you care what people do with your code? You have given it away.” Firstly, you have not given it away: you have shared it with people with the expectation that they will continue to share it. Copyleft licensing is all about the rights of the end-user, not about letting people do what they want with your code so that the end-user gets a binary dropped in their lap with no way of knowing what it is, what it does, or having any way of enjoying the rights given to the people who made that binary. As smartphone purchasers are discovering, binary-only shipments lead to unsustainable computing where devices are made obsolete not by fundamental changes in technology or physical wear and tear but by the unavailability of the fixed, improved or maintained software that keeps such devices viable.

Agreeing on the Licence

Disregarding the incompatibility between GPL versions, as discussed above, it appears more tempting to blame the GPL for situations of GPL-incompatibility than it does to blame other licences written after GPLv2 for causing such incompatibility in the first place. The article mentions that Sun deliberately made the CDDL incompatible with the GPL, presumably because they did not want people incorporating Solaris code into the GNU or Linux projects, thus maintaining that “competitive edge”. We all know how that worked out for Solaris: it can now be considered a legacy platform like AIX, HP-UX, and IRIX. Those who like to talk up GPL incompatibilities also like to overlook the fact that GPLv3 provides additional compatibility with other licences that had not been written in a GPLv2-compatible fashion.

The article mentions MoinMoin as being affected by a need for GPLv2 compatibility amongst its dependencies. In fact, MoinMoin is licensed under the GPLv2 or any later version, so those combining MoinMoin with various Apache Software Licence 2.0 licensed dependencies could distribute the result under GPLv3 or any later version. For those projects who stipulated GPLv2 only (against better advice) or even ones who just want the choice of upgrading the licence to GPLv3 or any later version, it is claimed that projects cannot change this largely because the provenance of the code is frequently uncertain, but the Mercurial project managed to track down contributors and relicensed to GPLv2 or any later version. It is a question of having the will and the discipline to achieve this. If you do not know who wrote your project’s code, not even permissive licences will protect you from claims of tainted code, should such claims ever arise.

The Fear Factor

Contrary to popular belief, all licences require someone to do (or not do) something. When people are not willing to go along with what a licence requires, we get into the territory of licence violation, unless people are taking the dishonest route of not upholding the licence and thus potentially betraying their project’s contributors. And when people fall foul of the licence, either inadvertently or through dishonesty, people want to know what might happen next.

It is therefore interesting that the article chooses to dignify claims of a GPL “death penalty”, given that such claims are largely made by people wanting to scare off others from Free Software, as was indeed shown when there may have been money and reputations to be made by engaging in punditry on the Google versus Oracle case. Not only have the actions taken to uphold the GPL been reasonable (contrary to insinuations about “infamous” reputations), but the licence revision process actually took such concerns seriously: version 3 of the GPL offers increased confidence in what the authors of the GPL family of licences actually meant. Obviously, by shunning GPLv3 and stipulating GPLv2 “only”, recipients of code licensed in such a way do not get the benefit of such increased clarity, but it is still likely that the fact that the licence authors sought to clarify such things may indeed weigh on interpretations of GPLv2, bringing some benefit in any case.

The Scapegoat

People like to invoke outrage by mentioning Richard Stallman’s name and some of the things he has said. Unfortunately for those people, Stallman has frequently been shown to be right. Interestingly, he has been right about issues that people probably did not consider to be of serious concern at the time they were raised. The mentions of patents in GPLv2 not only proved to be far-sighted and useful in ensuring at least a workable level of protection for Free Software developers; they also alerted Free Software communities, motivated people to resist patent expansionism, and anticipated the unfortunate situation of endless, costly litigation that society currently suffers from. Such things are presumably an example of “specific usecases that were relevant at the time the license was written” according to the article, but if licence authors ignore such things, others may choose to consider them and claim some freedom in interpreting the licence on their behalf. In any case, should things like patents and buy-to-rent business models ever become extinct, a tidying up of the licence text for those who cannot bear to be reminded of them will surely do just fine.

Certain elements in the Python community, especially, seem to have a problem with Stallman and copyleft licensing, some blaming disagreements with, and the influence of, the FSF during the Python 1.6 licensing fiasco, where the FSF rightly pointed out that references to venues (“Commonwealth of Virginia”) and having “click to accept” buttons in the licence text (with implicit acceptance through usage) would cause problems. Indeed, it is all very well lamenting that the interactions of licences with local law are not well understood, but one would think that where people have experience with such matters, others might choose to listen to their opinions.

It is a misrepresentation of Stallman’s position to claim that he wants strong copyright, as the article claims: in fact, he appears to want a strengthening of the right to share; copyleft is only a strategy to achieve this in a world with increasingly stronger copyright legislation. His objections to the Swedish Pirate Party’s proposals on five year copyright terms merely follow previous criticisms of additional instruments – in this case end-user licence agreements (EULAs) – that allow some parties to circumvent copyright restrictions on other people’s work whilst imposing additional restrictions – in previous cases, software patents – on their own and others’ works. Finding out what Stallman’s real position is might require a bit of work, but it isn’t secret, and he in fact advocates significantly reduced copyright terms, just as the Pirate Party does. If one is going to describe someone else’s position on a topic, it is best not to claim anything at all if the alternative is to just make stuff up instead.

The article ramps up the ridicule by claiming that the FSF itself claims that “cloud computing is the devil, cell phones are exclusively tracking devices”. Ridiculing those with legitimate concerns about technology and how it is used builds a culture of passive acceptance, playing into the hands of those who will exploit public apathy to do precisely what people labelled as “paranoid” or “radical” had warned everyone about. Recent events have demonstrated the dangers of such fashionable and conformist ridicule and the complacency it builds in society.

All Things to All People

Just as Richard Stallman cannot seemingly be all things to all people – being right about things like the threat of patents, for example, is just so annoying to those who cannot bring themselves to take such matters seriously – so the FSF and the GPL cannot be all things to all people, either. But then they are not claiming to be! The FSF recognises other software licences as Free Software and even recommends non-copyleft licences from time to time.

For those of us who prefer to uphold the rights of the end-user, so that they may exercise control over their computing environment and computing experience, the existence of the GPL and related copyleft licences is invaluable. Such licences may be complicated, but such complications are a product of a world in which various instruments are available to undermine the rights of the end-user. And defining a predictable framework through which such licences may be applied is one of the responsibilities that the FSF has taken upon itself to carry out.

Indeed, few other organisations have been able to offer what the FSF and closely associated organisations have provided over the years in terms of licensing and related expertise. Maybe such lists of complaints about the FSF or the GPL are a continuation of the well-established advertising tradition of attacking a well-known organisation to make another organisation or its products look good. The problem is that nobody really looks good as a result: people believe the bizarre insinuations of political propaganda and are less inclined to check what the facts say on whichever matter is being discussed.

People are more likely to make bad choices when they have only been able to make uninformed choices. The article seeks to inform people about some of the practicalities of licence compatibility, but it overemphasises sources with an axe to grind – and, in some cases, sources with rather dubious motivations – and is thus only likely to drive people away from reliable sources of information, filling the reader’s knowledge gap with innuendo from third parties instead. If the intention is to promote permissive licensing, or merely licences that are shorter than the admittedly lengthy GPL, we would all be better served if those wishing to do so would stick to factual representations of both licensing practice and licence author intent.

And as for choosing a licence, some people have considered such matters before. Seeking to truly understand licences means having all the facts on the table, not just the ones one would like others to consider combined with random conjecture on the subject. I hope I have, at least, brought some of the missing facts to the table.