The Atomisation of Society
Sunday, April 26th, 2026

Just as it always was, and just as it also was with the Bitcoin and blockchain fads, artificial intelligence is back in vogue, and now we must suffer hearing about “AI” all the time, shoehorned into everything technological and plenty of other things besides. I can hardly be in a minority in finding it all tiresome and even somewhat disheartening.
Of course, artificial intelligence is a broad discipline, and there are useful technologies doing useful work that may have been reported with varying levels of hype over the course of recent history. Over the last decade or so, machine learning has received a degree of media attention, often doing things related to mundane topics like pattern recognition, supporting applications in worthy domains such as medicine. Interestingly, activities like machine translation and visual object recognition are increasingly taken for granted by wider society, despite being problems that were once considered difficult to solve or tackle reliably.
If worthy domains are not your thing, and your phone is the centre of your life, you might be able to point it at some foreign language text and see it appear, magically translated, in a language you might understand. But this mostly isn’t what the current wave of “AI” hype is all about, even if such relatively useful applications are used to greenwash that hype.
Has your Internet search experience degraded to the point of near uselessness? Worry not: you can now be distracted by something pretending to be intelligent feeding you some misinformation instead. Never mind that an improved search experience, yielding useful, informative results was well within the technological grasp of the well-resourced corporations involved. Boring old part-of-speech tagging, maybe some semantic networks, and a bit of traditional information retrieval technology could go a long way. But, of course, that wouldn’t perpetuate the predatory advertising-fuelled surveillance economy or attract billions of speculative investment dollars.
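To illustrate just how boring and well-established that old technology is, here is a toy sketch of the kind of traditional information retrieval mentioned above: an inverted index with TF-IDF scoring, built with nothing but the Python standard library. The documents and scoring are, of course, invented for illustration and bear no relation to any real search engine's implementation.

```python
# Toy sketch of "boring old" information retrieval: an inverted index
# with TF-IDF ranking. Purely illustrative; real systems add stemming,
# phrase handling, link analysis, and much more.
import math
from collections import Counter, defaultdict

docs = {
    "d1": "machine translation of foreign language text",
    "d2": "pattern recognition in medical imaging",
    "d3": "information retrieval with an inverted index",
}

index = defaultdict(dict)  # term -> {doc_id: term frequency}
for doc_id, text in docs.items():
    for term, tf in Counter(text.lower().split()).items():
        index[term][doc_id] = tf

def search(query):
    """Rank documents by the summed TF-IDF of matching query terms."""
    scores = Counter()
    n = len(docs)
    for term in query.lower().split():
        postings = index.get(term, {})
        if not postings:
            continue
        idf = math.log(n / len(postings))  # rarer terms weigh more
        for doc_id, tf in postings.items():
            scores[doc_id] += tf * idf
    return [doc_id for doc_id, _ in scores.most_common()]

print(search("information retrieval"))  # the indexing document ranks first
```

No neural networks, no data centres full of GPUs: a ranked result list from a few dozen lines of code, which is rather the point being made above.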
In Britain, it isn’t enough to saddle the National Health Service with market economics, layers of bureaucracy, and numerous consultants and consultancies siphoning off considerable sums at the taxpayer’s expense, all as the fundamental challenges of the service’s information systems – and core activities – remain undiminished. Now, “AI” will drift in, touted by commercial opportunists and predators, new and old, alongside “data science” practitioners aiming to exploit private health information for their own benefit.
And as if by magic, new “efficiencies” will supposedly result. Never mind that without fixing the bread-and-butter issues, no amount of smoke and mirrors can deliver the magic envisaged. But, of course, it is far easier to wish away one’s problems than to confront them, especially if there are commercial interests with products and services to sell. Why not delegate hard problems to some supposedly all-knowing, all-understanding entity that will magically understand all those things for which the impatient politician or manager simply has no time (or intellectual stamina)?
It is also apparently not enough to saddle educators with extra responsibilities that in a well-organised society would be the task of social and healthcare workers and, in some situations, that of police outreach workers and the police service itself, all as funding is perennially slashed in the education sector. Instead, policy-makers sing tunelessly from the “AI” cult’s songsheet, fuelling yet another corrosive societal problem and, as always, dumping the consequences in the laps of educators for “careful consideration”. And, of course, the welfare of the students barely registers on the conscience of those policy-makers as they perform their victory lap.
As Winter Follows Summer
For the most part, the “AI” in the latest round of hype is that most easily understood by, or most relatable to, the average media commentator. It is the gadget that generates written or visual content as if it were some kind of creative being. For their managerial counterpart with the job title incorporating the dishonestly chosen words “customer satisfaction”, the hammer in their toolbox marked “chatbot” is thus the “go to” tool for every digital transformation job, just as it was in the early 2000s for organisations looking to somehow “deepen” the level of interaction with their customers.
Amusingly, many of today’s corporate chatbots are largely the same kind of keyword-scanning “choose an option” counterparts to the classic phone menu as their predecessors were. The most likely reason for this is that the much-hyped large language models, exposed to querying from random people on the Internet, risk incurring reputational damage to any given hapless organisation as they go off-script and start to parrot “poorly sourced” training data, to put it charitably, of which there may be quite a bit.
And it is the matter of what is used to feed the “AI” monster that should concern those of us in Free Software and every other industry that creates content. It is here that we find out who our allies are, who is really committed to a free and fair society, and who it is that instead chooses to betray us. Unsurprisingly, those who betray us are largely the same group as those who, once employed by a “rich uncle” in the form of some corporate behemoth dangling incentives and a comfortable lifestyle, try to explain away the problematic behaviour of that rich uncle. Worse still, they tend to demand the silencing of any uncomfortable criticism of either the behemoth or themselves in their sordid pact.
You will find these people regularly trolling others in any venue where comments may be left to supposedly discuss the headline content, basking in their privilege and telling everybody else what little everybody on the outside really knows. As layoffs roll in across the technology sector once again, it remains to be seen whether the enthusiasm for defending some corporation’s questionable ethical record remains buoyant or whether there might be some buyer’s remorse amongst those who took the easy money. But of course, these people will glide through another gilded door, ushered in by some other member of the old boys’ network. After all, some of them have been trying to undermine Free Software for years.
This time round, the layoffs are justified by management using the very thing that the corporate apologists seek to defend. Perhaps the most enthusiastic apologists for “AI” have not been shown the door just yet, but plenty of their colleagues have. Then again, there are plenty of people who see a harmful phenomenon and think: “That could never happen to me!” After all, they are the special ones. They might concede that something has come along to “shake up” the industry, but as the special ones, they are the ones who will adapt, who will turn this disruptive force into their “competitive advantage”.
In any online discussion related to “AI” and the software industry, you can expect to read the same old guff. Some self-proclaimed top-flight developer will tell everyone how much “more productive” they are by getting “AI” tooling to generate all those unit tests that they previously had to write by hand. Never mind that their claimed “insane level of productivity” is tempered by their own admitted need to review the slop created for them, a result of them describing their needs in “plain English” when it has been known for decades that the definition of a functional specification for a piece of software, especially when done in a natural language, is one of the harder tasks in systems development.
Many of these people will claim that this lets them “spend more time on the creative part” of their job, never mind that the kind of person who writes such drivel is usually the kind of person that most observers would struggle to categorise as creative. But their advocacy for systems that systematically trawl other people’s work, undergo “training” on it, and transform it for eventual regurgitation in a hopefully distinct and thus plausibly “original” form, and their sudden enthusiasm for the creative side of their work, will find its reward in the end. After all, creative jobs have been amongst the first to be eliminated in this new wave of “AI”, as managers (aping their tech idols) declare that graphic designers are no longer needed, just as I learned in one enlightening, spontaneous conversation a year and a half or so ago, or as anyone can discover just by following the news.
Your manager will surely not care how much more quickly all those unit tests are getting written – not that they cared before, exactly – nor will they value your “creativity” when the latest toy is dangled in front of them promising to do your job. Your peers will not care about any kind of epiphany you have, either. They will still be the special ones and you, like the rest of us, just jealous of their success. You will have had your chance to change the world for the better, and you will have blown that chance.
The Hoarding of Privilege
One might think that technologists would be attuned to the perils of technology for themselves and society, but one should never underestimate the lure of selfishness and self-interest, of not wishing to know about any negative consequences of their work or their interests. As one might expect, solidarity is the first casualty.
One sees talented developers who have spent their entire career writing their own code, tinkering with “AI” coding tools and enthusing about the output, wasting other people’s time pontificating about why the assembly language generated for some 8-bit microprocessor isn’t as plausible as a Python script for a task solved independently hundreds of times and published to the Internet every single time. One sees talented developers make nice new applications, only then to go off and use “AI” for the artwork or for other “creative” elements of the work.
They might say that they aren’t talented enough to do those tasks themselves, so they can justify delegating them to a machine. Maybe they just want something that fills the gap, or maybe they don’t want to pay for such work. But even if everyone is doing all of this for fun and for free, why do they not involve others in their creative exercise, people who might be able to draw pictures or write prose?
It sounds to me as if they just do not value such creative endeavours or the role of those in society who are talented enough to pursue those endeavours. And as they hit up “AI” to conjure up some art, featuring a particular subject, done in a particular style, they perhaps purposefully neglect the creators of the works that fed the “AI” in the first place: those very same people they breeze past as they go and hit all the buttons.
It must be so wonderful to be in a position of having made plenty of money in a profession that was once valued, maybe even lucrative, to now be able to check out for the rest of one’s career. To pull up the ladder on the next generation, eliminate their opportunities, and to indulge in that age-old practice of berating them for being lazy and not good enough. To indulge in “vibe coding” personally, maybe even promote it educationally, but then to label as decadent all recent practitioners in one’s own profession, simply not made of the same stuff as folk were in the “good old days”.
One encounters people who say that they are glad that they are retired or will soon be retiring. For some of them, this may be a legitimate reaction to an increasingly tiresome and possibly even hostile workplace culture, looking forward to some well-deserved relief tinged with remorse for a vocation that no longer delivers the satisfaction it once did. For others, it just sounds like they are content to leave the world in a worse state than it was, and to abandon the job of reversing this damage to subsequent generations: hardly an isolated occurrence in the modern era.
It can be all too easy for people to downplay the effects of new technologies, often with claims that appeal to ideas about “human nature”. As children go through their entire childhood exposed to technologies that are inadequately supervised and often exploitatively designed, some might claim that schoolchildren or students seeking to cheat on their assignments “isn’t a real issue”. And with this, we are presumably supposed to move on. But anyone who knows any teachers will also know that plagiarism is very much “a real issue”, both for students and teachers.
With ready access to online tools, even encouragement to use them, the temptation is too strong for many students, particularly those who have been struggling throughout their education. With a diminishing incentive to learn, and besieged by social pressures amplified by technology, plagiarism starts to look like all they have left in their arsenal. When we hear stories about disengaged young people and their apathy, imagine being the person who has to try and motivate these young people, especially when it seems like every last person with influence over this situation has abandoned everyone affected.
And plagiarism is, in fact, such an issue that Microsoft, having flooded the zone with “AI” toys, offers other toys to detect plagiarism. There is good money to be made by escalating a situation into a conflict and equipping both sides, as those in certain other professions and industries know all too well. I have read tales of educational institutions being infatuated with “AI” to the point of rolling it out, often excused in numerous ways, but mostly amounting to someone in a position of power wanting to try all the toys.
(And given that school administrations are routinely surrendering their classrooms to gambling industry practices and data surveillance predators, “AI” probably does seem like just another fun toy to play with.)
I have heard tales of decision-makers and executives trying to mitigate the harmful effects of their decisions, related to “AI” and otherwise, by leaning into the consumerisation of education and seeking to wave through students who may have resorted to plagiarism. Students who just want the piece of paper, so that they can get on all those celebrated ladders of society – good employment, decent housing – before they are pulled up and out of reach entirely.
But what will become of such students when they move on to the next stage of their education or development, increasingly out of their depth, feeling less and less adequate, confident, fulfilled or secure in their own existence? At what point do we justifiably label the abdication of responsibility, by those in authority and those who seek to profit from such misery, the abuse that it arguably is?
When society fails to protect its members as they grow up, how can those in privileged positions criticise people, overwhelmed and desensitised by torrents of increasingly harmful content, for not participating in or engaging with society? Maybe those elected and appointed to protect our interests could do exactly that, instead of looking out only for themselves and the vested interests who have been lining their party’s pockets.
Quality Uncontrolled
One potentially common element in the workplace and the classroom is how each group perceives the materials they encounter, along with their attitudes towards the cultural works of their own society. For children growing up in earlier times, one might expect the consumption of books, magazines, newspapers, television series, films, popular music and other such works to have played a defining role in their perception of culture and its value.
But after years of relentless and cynical commercialisation, children have increasingly been growing up in an environment of relentlessly derivative entertainment content. Instead of new and original stories, endless comic book franchise reboots and the like, largely to keep trademarks and other “intellectual property” warm and out of the public domain, may have habituated people to expect less and less from culture and to place a lower value on it.
Instead of being engaged by cultural works, those works become mere wallpaper, something to have on in the background while other, more harmful, forms of engagement steal the attention of the viewer or listener. And with their perspectives and beliefs unengaged and unchallenged by mainstream culture, people risk being sucked into a bubble all of their own, all too readily and easily fed by “AI”, reinforcing their flawed views and pandering to their prejudices.
Why would you want “AI” to generate content for your own consumption? For someone habituated to having their favourite superhero meet and/or fight their other favourite superhero in endless re-runs of generic, commercial filler, it isn’t such a big step to have their tummy rubbed all the time by unchallenging and probably inaccurate content that happens to appeal to their existing biases, now readily produced in bulk and on demand by “AI”.
For some of us, the process of creating something new is part of the eventual reward, and the process of researching, understanding and communicating something is engaging all by itself. But the degradation of culture to a mere consumable risks the marginalisation or even eradication of the creative process, the investigative process, of historical and critical enquiry, all so that people can be soothed and entertained.
There is a relatively well-known quote about AI being supposed to free up a person’s time for pursuing their art by taking care of the cleaning and the laundry, but instead “AI” has made them and their art redundant and leaves them with the cleaning and the laundry. What might this have to do with the workplace? Well, to answer that, we have to address the phenomenon of the software product and its gradual erosion.
Angst or Realism?
When the well-known technologist Tim Bray raised the issue of “AI Angst”, alongside broader issues, the accompanying off-site discussion of the article predictably featured the usual consumerist “works for me” and aspirational “more productive” tropes. But it also exposed the sentiment that “AI” is quite able to take the joy and the motivation out of doing work and being in a profession. From those with the most enthusiasm for vibe coding came claims that only “5% to 20% of the work is interesting” in a modern programming job. So, what does that say about the state of the industry?
One factor might be that the industry is subject to a continual inundation of fads and trends, often imposed on the profession from outside and facilitated by opportunists who can persuade management and executives of productivity boosts and reductions in expenditure. Another is that enthusiasm for “AI” is evidently higher amongst those whose development work involves a high level of “donkey work”. This leads to a pertinent question: have technologists failed their profession by perpetuating cumbersome, inadequate technologies?
If you have ever had to work with certain widely used, well-resourced software today, you will be familiar with the sensation of seeking advice or guidance on how some aspect of the software works or is meant to be used, only to discover that there is no documentation, or that its standard is so dismal that one can only wonder how people have allowed such software to be used so widely and in such critical applications in the first place. Not that wondering about it is in any way helpful.
Over the years, various remedies have been thrust onto the scene, from plain old Internet search, hoping to dig up some gems of wisdom, to discussion forums aspiring to provide relief to adherents of particular technologies, to the now-familiar question-and-answer site concept popularised by the likes of Stack Overflow and its affiliates. But alongside the frustration that inadequate documentation may bring is the frustration and general disillusionment of having to continually ask questions, sometimes relying on vain, petulant and uncooperative individuals, that should have been definitively answered long ago, with those answers recorded coherently for posterity in actual documentation.
We might then understand why some people could be tempted to use “AI” chatbots, desperately trying to get insight into things that should have been made clear by actual human beings in the first place. The chatbot might be misleading or counterfactual, but it is probably going to act somewhat cooperatively, not shame the user about their lack of knowledge, or withhold information to coerce a form of deference. Well, at least not yet, anyway.
This sorry situation is a consequence of the way the software industry has been heading over the last few decades. Under the banner of agility, competitiveness, and that crucial “time to market”, forever to be optimised and minimised, things like documentation are critically undervalued. “Read the code!” exhorts one techbro to the other, advocating the complete elimination of any commentary, deemed unnecessary because “the code is the documentation” and because what they coded was obvious to them when they were “in the zone”.
Never mind that the commentary would have revealed the intended behaviour of the code, along with the deficiencies of the code that was actually written. But, of course, no-one had any time for that, let alone any time for writing a coherent guide to the software, its architecture, and how other people might interact with it in various ways as users of different kinds and developers needing to fix and extend that code. All of that stuff was low-end work according to the techbro, as are other kinds of programming that they do not personally value. (Still, the collision of the butch “rewrite everything in Rust” brigade and “AI” can provide some entertainment and maybe an occasional reality check.)
It also does not help that documentation has joined the other facets of development in being subjected to various technological and methodological fads that do more to create opportunities for certain industry players than to improve the quality and breadth of material available. Just as one’s heart sinks when the primary source of guidance for a project turns out to be a GitHub repository page, so it does when one encounters a documentation site produced by Sphinx, with plenty of hastily scribbled sections littered with “admonitions” and caveats.
Tim’s use of the term “angst” rightfully received a strongly worded response from one commenter, not least because labelling people’s reactions as if they were those people’s weaknesses in having to respond to seemingly overwhelming pressures is what Tim might himself regard as victim blaming. Tim seems like a generally good guy with a great deal of self-awareness, but I imagine that his position in the industry, along with that of elements of his readership, some probably having done very nicely indeed from their tenure at various West Coast behemoths, makes the general demographic less than reflective when they lament the shocking lack of alternatives to their favourite cloud services.
To an extent, it is a bit like former politicians who take on moral causes after their positions in power are over and as their influence steadily diminishes. Where were they when other people needed their help in furthering those causes? When adjustments to policy, so easily done, might have made a significant difference. Similarly, with Free Software and the provision of sustainable, ethical technology more broadly, the dominant ideology promoted vigorously by West Coast capitalism often seems to involve discretionary support to worthy causes by wealthy people, many of whom did just fine while those worthy causes were kept marginalised, and when various structural causes of inequality, noted by Tim himself, were conveniently ignored because that would involve wealthy people paying a bit more tax.
The Idiocracy
There is always some idiot who pipes up in online discussions about how they asked some chatbot about some topic, “and it said this”. It is almost like they want a prize for having done something that anyone else could have done. And what does this kind of “idle wondering” usage of chatbots, indulged by the consumption of colossal amounts of electricity, actually serve? If anyone were really interested in a topic, and were to have the knowledge and understanding to actually assess chatbot output, then they probably wouldn’t be using one in the first place.
That leaves the most likely kind of user as the sort of middle manager who doesn’t understand anything, who just pushes the paper on to someone who is supposed to “action” the information they were given. At what point is the human, relaying information they don’t understand to impress people about “what they know”, just a conduit for the activities of machines? And are they not then just a puppet participating in the game of assessing whether the machines are intelligent or not, abdicating their own genuine intelligence in the process?
To be fair to those wishing to put questions to machines and expecting some kind of summarised response, questions and other naturally constructed prose could conceivably help formulate more effective queries than simple combinations of keywords. The grammatical roles of words and the relationships between those words can disambiguate between the different meanings of some words and constrain the context of any particular search for information. Returning a summary that paraphrases the materials that were found might help in determining whether those materials happen to cover the right topics.
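The claim that grammatical context can disambiguate word meanings is easily demonstrated with a toy example. The following sketch uses a crude, simplified Lesk-style overlap between the surrounding words of a query and hand-written sense descriptions; the word senses and their context sets are invented for illustration, and no real system works at quite this level of naivety.

```python
# Toy word-sense disambiguation: pick the sense of "bank" whose
# hand-written context set overlaps most with the surrounding words.
# A crude, simplified Lesk-style sketch, for illustration only.
senses = {
    "bank (finance)": {"money", "deposit", "loan", "account", "savings"},
    "bank (river)": {"river", "water", "shore", "fishing", "mud"},
}

def disambiguate(phrase):
    """Return the sense whose context words best match the phrase."""
    context = set(phrase.lower().split())
    return max(senses, key=lambda sense: len(senses[sense] & context))

print(disambiguate("open a bank account to deposit money"))
print(disambiguate("fishing on the river bank"))
```

The point is simply that the neighbouring words in a naturally phrased question carry signal that a bag of keywords discards, which is what makes such queries conceivably more effective.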
But what I discovered recently when being offered an “AI” search summary for a particular, highly specific, search term was the kind of thing people describe as hallucination. More accurately, though, it was reminiscent of when a lazy student of a certain age, or someone who is “hustling” to impress their equally superficial superiors, states something as a fact and references a bunch of supposed supporting material, only for none of that material to actually confirm the fact or quote the term in question, let alone define that term as the thing confidently stated in their opening sentence.
Once upon a time, when machines were meant to be doing things that were described as “reasoning”, researchers were expected to show how the machine had worked through to its conclusion, revealing things like “facts” and “inferences”, and thus providing insights into the state of the machine’s encoded “knowledge base”. One would then be able to verify for oneself whether such a conclusion was sound or not. But now, it would seem, nobody really cares about accuracy or correctness, but only whether the damned machine has the right kind of “swagger” that would convince a room full of imbeciles with something that might sound credible enough to them.
And we are entering a time when we should be concerned about whether people can readily check the correctness of “AI” answers at all. It was always bad enough when a search engine would produce results in many of which the search terms didn’t appear, even in a derived form, meaning that some “search engine optimisation” vulture had deceived the search engine with a bucket of false keywords stuffed somewhere into a page or site, but now the search providers have an excuse to push “AI search”.
Having trained their models on the bulk of the Web, they can then push out arbitrary “summaries” while deliberately obscuring the content that those models consumed. Thus, they can deny plagiarism by making source material difficult to find, while also omitting or excluding content that contradicts or refutes any dubious assertions or conclusions presented as fact by their services. They arguably don’t even have to try very hard to degrade the experience, giving the whole exercise the air of plausible deniability.
As with predatory social media, the training of “AI” also carries considerable emotional costs for the people exploited to perform that training work. Not only are they not encouraged to properly correct disinformation, but they may not be suitably qualified to do so: after all, everyone can only be knowledgeable about so much. Having seen behind the curtain, they advise against trusting it. For the “AI” companies hyping up their products, and for the idiocracy, the illusion of the perfect, shiny, all-knowing android has to be upheld at all costs: for the former so that the money can keep rolling in, and for the latter so that they can presumably appear cleverer than they actually are.
The tiresome industry insider will criticise anyone suggesting that the average chatbot exhibits only superficial characteristics of intelligence, hand-waving towards mysterious models while playing the “you don’t know what I know” card, but it isn’t unreasonable to suggest that the whole phenomenon is barely above the level of a parlour trick. Does the chatbot really understand anything or do people just read meaning into what is effectively just banter? Is it merely a new Eliza for the disinformation age?
The Apologists
There are always plenty of people willing to pipe up when something that amuses or delights them is criticised in some way. One will undoubtedly encounter people who cosplay their fantasies of living in the future by infuriatingly using the term “an AI” in an unsarcastic and completely sincere way to refer to a system supposedly employing AI. In doing so, they effectively attribute general intelligence and even sentience to those systems. Criticise such sloppiness and you’ll get something like this in return:
“Nobody had any problem calling it artificial intelligence back when it was far less capable than it is now.”
Once again, such people fail to appreciate the distinction between artificial intelligence as a discipline and the promotion of the discipline in broader society, where despite the buzz in superficial media circles, applications of AI were often met with much scepticism. It was frequently obvious that the systems involved might have been exhibiting traits of ostensibly “intelligent” behaviour, but these systems could not be regarded as being intelligent in their own right or in a general sense.
Even with huge amounts of data and computing power brought to bear on such matters, chatbot output still has the smoke-and-mirrors character familiar from earlier demonstrations of the capabilities of AI. Sadly, popular culture is now configured to amplify the hype as opposed to deconstruct it. So, even pointing this out has people believing that researchers in earlier times would have been awestruck by today’s “AI” chatbots, when in fact they would almost immediately recognise the phenomenon. Indeed, they would be disturbed by the way society has embraced such technologies unquestioningly, perhaps observing that earlier demonstrations were mere laboratory experiments, potentially dangerous if they escaped and proliferated.
What has changed since those earlier times is the broader availability of computing power that delivers a more convincing “demo effect” of the technology. This makes decision-makers think they can dispense with humans and roll out the chatbots. Couple this with the way that the populace has been conditioned and manipulated in terms of expecting and accepting less from public services, private companies, their employers, in their careers, and in their lives more generally, and it is not surprising that people are consuming such technologies. Those who believe that they are only doing so recreationally are predictably dismissive about the negative effects on vulnerable people who might end up using such services because all of the responsible, humane options have been eliminated for the sake of “convenience”.
There is also the arrogance that people seek to exude of being more comfortable with technological change than the average person, even as they reveal their own insecurity about the nature of intelligence. To me, the idea that other creatures possess a range of intelligence is indisputable, and we are regularly presented with observations made about intelligence in the natural world, animal behaviour and cognition, that should merely confirm that we as humans are not quite as special as we might think. Many people engage constructively with such observations and show that they can readily accept notions of more pervasive intelligence in nature.
Meanwhile, the insecure but outwardly confident technophile presumably scoffs at the notion that, say, animals might employ forms of reasoning or possess forms of cognition or information processing that could rival those of humans. They would exhort others to not attribute “human” traits to other species, even as they readily ascribe fanciful characteristics to machines running mystery payloads of software, presumably because humans were involved in their creation.
Some apologists play the inevitability card, that this is simply another change following on from many that society has already absorbed and survived, and that this will somehow be digested, too, seeing society adapt and life go on as before. But here, even those who appreciate the challenges these technologies pose do not seem to understand the cultural and societal calculations that accompanied earlier forms of technological change. There are long-enduring cultures on this planet that have had social rules about the depiction of the human form for thousands, maybe tens of thousands of years, and yet our arrogant “modern” societies think we have nothing to learn from such cultures.
Again, the apologists might reveal their ignorance by scoffing at pigments, paints and dyes as technology, but they gave humans the ability to choose how they might be portrayed. We live in an age where the application of computational technologies may seek to eliminate any distinction between observed reality and precisely concocted fantasy in a way that is still scarcely believable even now. Fake pictures, video and audio can be presented as genuine representations and recordings of reality, leaving their audiences deceived.
Maybe older cultures did not need to see the emergence of such technology before they understood some fundamental social lessons that still remain unabsorbed by our supposedly “advanced” societies today. And while it might be said that perhaps those older cultures needed thousands of years to reach their own conclusions, we might also observe that our own societies do not have the luxury of thousands of years to tackle the effects of this corrosive fakery. This might even remind you of another pressing, existential threat to humanity.
One notable incident in this regard involved content creator Jeff Geerling, whose voice was cloned by a supplier of technology products to use in advertisements. Fortunately, the company in question backed down when challenged over the unlikely similarity of its synthesised voice to Geerling’s own, and the incident was resolved relatively amicably and with incredible grace by Geerling himself. The incident highlighted the proliferation of such tools and their widespread availability for all kinds of applications.
Naturally, there are apologists for such tools, too, of the same school that presumably insists that nuclear weapons are not inherently bad, just how they might be used. Our societies show little sign of resilience in the face of such threats. Instead of robust legislation, regulation and education, we are left with the usual predictable media commentary about “scams” with various workarounds to “stay safe”. And those same media outlets still promote the social media lifestyle, exhorting everyone to share their lives online and to feed the predatory social media monster.
The apologists might regard “AI” as harmless fun or empowering (to them), but there is an arms race in progress and an industry dealing in what might be described as information weaponry, building on the delivery mechanisms of predatory social media, to further degrade society’s resilience, making it impossible to trust voices, images, video and content. We might like to believe that we are the sophisticated ones, living in a “modern” society, in contrast to cultures where images of the human form are forbidden. One might wonder whether such rules are less about “taboos” – a culturally loaded term – and more about such societies having a fundamental realisation that our own societies fail to appreciate or understand, dismissing it with arrogant talk of our own “progress”.
Media coverage of “deepfake this” or “deepfake that” predictably circles around sensationalism, involving deceased celebrities if the audience risks being unmoved. But on a more ordinary level, is it progress to no longer be sure that the person sounding like or looking like a member of your family on the other end of a voice or video call really is that person? Is it progress to need a codeword to be somewhat sure?
And is it progress to only be sure if you are standing there in the same room as them, until the day when some ghoul, possibly one who has grown tired of monetising deceased celebrities, eventually introduces lifelike androids to impersonate random people? Even before that miserable day arrives, is it progress if some other ghoul decides to “deepfake” deceased relatives to torment those who are in mourning?
One of the more interesting contributions to the discussion about “AI angst” was the one giving the Vatican’s view on “AI”, eliciting some considered responses. Naturally, the Catholic church is hardly the most popular institution for a variety of reasons, but one has to concede that on matters of theology and philosophy, on issues that shape humanity’s view of itself and its relationship with the natural world, the institution can hardly be considered to be staffed with lightweights, even if we might disagree – sometimes strongly – with its position on certain social issues.
Paying for Other People’s Privilege
As with many of society’s ills, one can always choose to metaphorically put one’s head in the sand and ignore those problems that mostly seem to only affect somebody else. Regardless of whether one does so, however, such problems have an annoying habit of landing in one’s lap, anyway. A few months ago, I got a mail from my hosting provider telling me that a “lot of traffic” visiting one of my sites was “causing excessive CPU usage and disruptive service for other customers”. Of course, this was due to a multitude of client addresses hammering my site – a repository browser – and crawling all over every last published resource.
It was suggested that I put my site “behind Cloudflare” since they offer various services to mitigate the effects of “bot” traffic, with my only guidance being a link to a blog about a service that Cloudflare offers. There was no guidance about what obligations towards Cloudflare I might have as a result, whether there might be payment involved, or whether Cloudflare might want or get something else from me, were I to sign up. In the end, I just put authentication in front of my site and re-enabled it, hoping that the bots would eventually give up if all they saw was the appropriate HTTP “authentication required” response.
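For anyone facing the same problem, a minimal sketch of what “putting authentication in front of a site” can look like on a typical Apache-based shared host follows. The paths and realm name here are illustrative, not those of my actual setup:

```apache
# .htaccess in the directory being crawled: every request must
# authenticate, so bots receive a 401 "authentication required"
# response instead of a rendered page.
AuthType Basic
AuthName "Private repository browser"
# Illustrative path to a password file created with something like:
#   htpasswd -c /home/user/.htpasswd visitor
AuthUserFile /home/user/.htpasswd
Require valid-user
```

Serving a cheap 401 response to every unauthenticated request costs far less CPU than rendering repository pages, which is presumably why the bots eventually lose interest.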
Consumerists would point out that I’ve been doing all of this wrong, of course. Firstly, they would tell me that I should have moved my repositories to GitHub for the convenience of having Microsoft do the hosting and me paying nothing for the privilege. They would tell me that having GitHub suck all of my content into its “AI” consumption engine, for regurgitation to other users through their copyright laundering “AI” tools, is “no big deal” and that I should be happy to see my code used in so many other places. They would presumably purr about GitHub’s cumbersome and overworked user interface and its “collaborative tools” that, in classic Microsoft style, try and insert the service into everybody’s workflow.
But the end result of my not “going with the flow” – of paying for the privilege of offering a service on my own terms, contributing to an independent hosting provider and keeping their business viable, allowing other interested parties access to my software, and generally trying to uphold my own privacy and that of those who wish to interact with me and my content – is that I and others who try to uphold our autonomy and defend our own interests are punished by the Internet equivalent of looters and pilferers. And the consumerist response involves an outcome that is not entirely unintended on the part of the Internet’s dominant corporate interests.
Just as it is rather convenient for the likes of Microsoft to promote “AI” tools in education, only to also offer “AI” plagiarism detection tools in their ubiquitous services, it also seems rather convenient that the degraded Internet environment, increasingly subverted to feed “AI” products, just happens to drive people towards services and platforms run by the Internet’s behemoths that offer “AI” products and services as part of their headline feature sets.
Naturally, such corporations would claim innocence of any random botnet involvement, rather like how conventional suppliers of physical products routinely claim ignorance of bad things in their supply chains, but ultimately the finger-pointing is of diminishing significance. If someone cried “gold” and now an entire landscape has been obliterated, does it matter who was driving which excavator as everyone piled in to make their fortune?
The coercion used by software companies and service providers to “opt in” customers and users is, of course, entirely deliberate. Their goal is to make everyone complicit in mass copyright infringement and, amplifying classic right-wing hypocrisy that appeals to “personal responsibility” yet weakens regulation and enforcement, normalise sentiments that “everybody is doing it” and so rule-makers should “not bother” trying to curtail antisocial behaviour.
Such collective antisocial behaviour affects Free Software in other ways. It causes a degradation of software quality and Free Software contributions, potentially for malicious purposes, grinding down contributors. All of this effectively conspires against independent, modestly-resourced Free Software projects, at the very least inhibiting or shutting down open collaboration, and at worst entirely eliminating such projects as alternatives to well-resourced corporate projects.
People may claim to be applying “AI” tools with the best of intentions, but those using the tools seem to be happy for them to blatantly plagiarise other works and then effectively mark their own exam, denying what is obvious to any moderately competent observer. Or in the ominous words of one such practitioner:
And, as we have seen before, the result is a degradation in Free Software offerings, of impaired desktop experiences, of inscrutable technologies promoted by vested interests and proliferated unquestioningly, and the continuing need for all of us who advocate Free Software adoption to apologise for the state of the software we recommend, as proprietary software companies cash in on the perpetual “lack of polish” or other deficiencies, perceived or real, of that software. Fewer and fewer viable choices remain, driving the average person into the arms of the monopolists, just as they always intended.
If nothing else, Free Software advocates should be pushing back hard against “AI” proliferation, but many of them won’t. I know to my personal cost what adhering to a set of principles entails, but many people are rather more flexible when it comes to following through on what they supposedly believe in. Free Software advocacy typically sits at the intersection of at least two professions: software development and legal practice. As software developers, plenty of people claim to understand the nuances of AI, readily telling you that you have it all wrong in your criticism of the technology.
And software practitioners are often frustrated by law practitioners and their inability to correctly perceive and interpret the nature of technology. Yet, if the legal profession has concerns about AI, why should other professions accept its use so unconditionally? But that is what people do: people who should know better, who claim expertise and the right to lecture others, and then somehow wave away any concerns because it suits them. When legal cases collapse due to “vibe lawyering”, justice may be denied and crimes may go unpunished. When software fails due to vibe coding, the effects risk being just as serious and, in some cases, even worse.
Practitioners, institutions and policy-makers seem to have, for their own reasons, an inability to confront awkward problems and the grind of getting the job done. Consider the case of a catastrophic data leak which imperilled thousands of people due to inappropriate tool usage and a lazy office culture where people couldn’t even be bothered to add two words to an e-mail subject line as the only, almost laughable, technical safeguard against data breaches.
When that becomes just another opportunity to peddle and to apply “AI”, it not only demonstrates that everyone responsible has effectively “checked out” and cannot be bothered to think very hard about the basics, despite the eager software practitioner’s tiresome refrain of “security!” in every technical forum they dignify with their presence. It also shows the ethical gulf between these people and the rest of society, including the people whose lives depend on their diligence. Because we know that “AI” will be just another scapegoat when things go wrong, excuses will be made, and the “vibe” will go on.
Lawmakers, meanwhile, seem infatuated with their new toy, as they also were with predatory social media, delighted to merely rub shoulders with indulged foreign oligarchs, potentially eyeing the possibilities of lucrative sidelines or post-political positions, instead of developing and furthering the interests of those that elected them to office. As has been the case with other topics of concern, notably software patenting, it seems that lawmakers can be very happy to listen to selfish commercial interests from beyond their electoral boundaries instead of the people they are supposed to represent. (Hint: “the south-west of England” is not the region one such lawmaker was elected to represent.)
Thus, blatant plagiarism, pilfering and infringement under the pretense of a “creative” act seems entirely reasonable to distracted lawmakers, never mind that letting some of the highest-valued corporations on the planet have free and unencumbered access to the lucrative output of a nation’s supposedly prized creative industries is likely to plunge those industries into economic ruin. In the case of the United Kingdom, this would be only another chapter of the nation’s leadership stupidly squandering what remains of the cultural “soft power” that the nation once had, only instead of doing so to pander to the bigoted, the ignorant and the deceived, it would be to the kind of people who gladly facilitated that earlier deception.
Some might claim that “expanding copyright” to prevent “AI” misuse of content is wrong, noting that training activities are perfectly legal and justifiable (obviously ignoring the costs incurred by those of us who pay for our Web hosting), and likening the publication of a model to “publishing facts about copyrighted works”. But what about the publication of “AI”-generated works? The suggested “simple way” to “protect artists from AI predation” involves withholding the application of copyright to such works, preventing Big Content from monetising such works, and thus deterring Big Content from adopting “AI” and firing its creative workers.
While that sounds like a great economic “hack”, it doesn’t confront the broader phenomenon of the cheapening of content at all. Big Content has arguably already pivoted to technology, streaming, and the like, but even if they might suffer from such policies, it does not mean that creators will gain. There are plenty of “slop” creators out there today whose business models do not rely on asserting copyright for their works. Will manufacturers of weird jigsaw puzzles care? They just want a stream of free stuff to slap on their products, and what if someone clones them? Well, that was last season’s product.
Allowing the “AI” peddlers to consume and regurgitate copyrighted works without constraint, allowing them to circumvent copyright under the pretense of “creativity”, is still harmful even if the peddlers cannot “protect” those regurgitated works. If someone consumes a Free Software application or library, waves the magic wand of “AI”, and then publishes it, the mere availability of this dubious derivative cheapens the original software, undermines its licensing by steering potential users to a “public domain” clone, reduces incentives to continue developing the original software, and thereby reduces its viability. Flooding the zone with such slop may benefit corporations wanting to circumvent Free Software licensing, but it does not benefit Free Software.
The Atomisation
There are numerous social and economic threats from the introduction of technology under the banner of “AI” that might justifiably elicit a negative reaction from a lot of people. When such reactions are articulated, the response from “AI’s” cheerleaders tends to involve labelling them as “emotional”, “touchy” and “irrational”. Of course, this is just a cynical way of avoiding any kind of constructive discussion about the impact of the accompanying harmful economic agenda, reminiscent of all of the other shifts in industrial policy that left people disadvantaged and impoverished.
In previous transitions involving something that could be described as automation, there was always the chance that those whose manual work was eliminated by the introduction of machines might still benefit from the exercise. When the production of textiles or clothing was to be automated, for instance, there might conceivably have been work to be had in applying one’s expertise to the design of the machines themselves. And there might also have been a limit to the automation, preserving opportunities for those particularly skilled in those tasks beyond the capabilities of the machines.
But today’s enthusiasm for “AI” suggests that whole categories of jobs will be eliminated, that no-one will write code any more, or write prose, draw, paint, make music, and so on. And the intention of those setting this agenda is that there will not be any possibility of somehow migrating to either the automation side of this transition, especially since coding in “AI” companies is meant to be left to the “AI”, or to more specialised forms of paid labour.
Previous transitions were handled very poorly indeed. In Britain, the phrase “on your bike” was largely the economic strategy of the Thatcher government as it gutted various industries, effectively condemning regions of the country to underdevelopment, unemployment and hardship, exacerbating inequalities within the country and divisions that persist to this day. Those too young to know or to remember might recognise some of the cultural phenomena from that earlier time because we see them again now in similarly potent forms, not that they ever really went away.
The practice of “divide and rule” is used to pit disadvantaged and marginalised groups against each other, steadily degrading other sections of society to make them poorer, weaker and too concerned with their own survival to question the general direction of society. Foreigners, immigrants, those with health problems, those not blessed with wealth, those who have otherwise experienced misfortune that blights their lives, and others are conditioned to expect less from their own lives, to feel guilty about their own situation and for needing or expecting help, or even asking for it.
Accompanying this is the promotion of “charity” and demonisation of taxation. How can anyone argue against charity, one might ask, if it is to do good in the world? One can argue against charity being a phenomenon that, instead of being a way of helping others, is used to diminish the help dispensed by society and to make such help entirely discretionary, conditional on the whims of the supposedly generous donor. Such a phenomenon serves only the wealthy who then get to choose where society’s money is spent, instead of paying their taxes and allowing broader society to make such decisions for itself.
Thatcher’s Britain was famous for the pursuit of selfishness, but our societies today face what might be called the “atomisation” of society where everyone is encouraged to pursue their own particular agenda and reward their own selfishness. Every time someone pipes up with the idiotic remark “nah, I’m good”, especially when concern is expressed for the weak or defenceless in society, it is merely another expression of the more culturally established (and derided) “I’m alright, Jack”.
But what such selfish remarks effectively signal is “more for me” from someone who already has plenty. And it obviously signals “less for everyone else” regardless of whether everyone else can manage with less or not. It is where people are encouraged to look out for their own personal interests at the expense of society, not realising that society makes their Amazon Prime deliveries possible in the first place.
Those impressed (or maybe bamboozled) by “AI” may remain unconvinced that the phenomenon might be an unsustainable and unprofitable bubble, one that consumes more investment than it can ever pay back, and that its most “disruptive” form has no genuine or necessary applications. After all, it is very popular and, for some, a nice little component in their plump investment portfolio featuring Nvidia and the other technological horsemen of the apocalypse. Why on Earth would anyone question its viability? Or in other phrasing:
“What are the millions of people using GPT for, if it’s a solution looking for a problem?”
The answer to such a question is this: it is for hollowing out forms of work that are fulfilling or professional, leaving the tedious, hard-to-automate stuff to human beings who will end up doing commoditised work, with all the “hustle” for that work and the driving down of salaries that it entails. All so that people whose livelihoods and lifestyles have been ringfenced one way or another can be entertained for a few seconds at a time with their “AI” videos and other “slop”.
The Degradation of Expectations
As neoliberalism took hold, with the privatisation of public services and the deregulation of various markets, consumerism was the shiny trinket that was used to distract from any hardship that was experienced or from the structural weaknesses being introduced. Having more choice in the shops may have been welcome, and if you couldn’t afford to buy anything, there was plenty of credit sloshing around to let you join the party. As for those public services, there were shares you could buy to join that party, too.
Decades of neoliberalism have given us consumerism as a solution for everything, seeing the replacement of basic, universal services with market-driven “consumer choice” involving a bunch of different “providers” who may or may not offer the same quality of products or levels of service that the customer would have a right to expect. Exciting as it may be for some to be able to choose between different flavours of utility provider, postal service, train or bus company, and so on, it all rewards the people with plenty of time, money and a propensity for getting bored easily to “shop around for the best deals”, leaving everyone else disadvantaged by unscrupulous businesses who, according to neoliberalism, merely occupy a place in the market appropriate for the “value to the customer” that they deliver.
Even where public services are maintained, consumerism and the attraction of new toys amongst ostensibly bored or ideologically fixated politicians can easily start to corrupt those services for the benefit of private operators whose only interest and instinct is to make money while they can. Having new toys to play with supposedly makes life more exciting for similarly bored and easily distracted members of the general population, never mind that they burden other public services with the consequences of indulging antisocial behaviour and make the lives of other individuals miserable.
When such companies have, for example, a record of perpetuating their abuse of healthcare professionals overwhelmed during a pandemic, what right do they have to make demands of anyone so that they can keep going with business as usual? But compliant politicians will continue to pander to them. After all, the religion of neoliberalism elevates those companies and their greedy founders to objects of worship.
And if those companies can damage publicly run services to the point of sufficient public dissatisfaction, those politicians can claim that the state always fails at everything and should leave such “business” to business. Naturally, those politicians do not offer to resign from their own jobs, although they are, I suppose, already doing the bidding of private enterprise, just not being willing to forego their publicly funded salary.
In both the public and private realms, many of us are presumably familiar with certain trends. When interacting with companies to obtain help or support with their products and services, or when attempting to navigate the public bureaucracy, one might recognise what I call the notion of “penalty laps” to compensate for cheapened and hollowed-out services, making the customer or the user spend time doing unnecessary and pointless work to prove that they deserve actual support.
One cannot simply communicate with another human being, but must instead interact with a chatbot first, which merely parrots information that is readily available and often of little help to anyone actually requiring support. Or maybe a long sequence of questions, deployed on a Web site or over the telephone, must be carefully navigated before a human can be summoned to communicate. Such inconvenience is used to herd the increasingly unhappy customer or user into other channels, framed as being more “convenient” and almost certainly in the form of an “app”.
The easily amused or distracted customer or user may think that they are getting better service for free, but they are in fact subsidising the operating costs of the institution concerned. And so, businesses and institutions continue their externalisation of operating expenses, insisting that customers or users provide their own equipment to interact with a business or service, paying the advertised costs as well as the hidden costs of acquiring a nice phone, insuring it, replacing it when those institutions decide that it is “too old”. You pay to work for them now, in case you didn’t realise.
Proof of “work” may have been the selling point of ruinous, dubious and entirely unnecessary cryptocurrency schemes, but making people continually prove that they are somehow worthy is an established trait of an exploitative society. Given that the neoliberal society will continue to eliminate decent, fulfilling work, one might expect efficient mechanisms to help people find other opportunities. But instead of making opportunities for people who seek help, such a society and its institutions have such people effectively doing useless busy work applying for non-existent jobs in a largely fictional “market”.
And with no economic strategy or vision, but with a worldview that involves pulling the public purse strings as tightly as possible to close the purse, they perversely create plenty of jobs in the bureaucracy for people to administer penalties and to deliver judgement on other people’s personal situations. Didn’t apply for enough meaningless non-jobs or unsuitable, informal, casual work? Then the entitled people in the bureaucracy aspiring to be like their managers and political leaders, cultivating a belief that they “deserve” their opportunities unlike those lazy people on their books, will deny the help that people seek just to be able to turn their lives around.
Because the attitude in the neoliberal economies is that looking for a job should actually be a job. So, the branch of the state administering work-related benefits is mostly there to coerce people into looking for jobs that they aren’t suitable for, causing huge volumes of speculative applications that even the applicants know are senseless, making it harder for genuine recruitment to occur.
And then there are all the fake job adverts, either posted to cover up nepotism or corrupt practices, or to puff up some company’s image, or to give functionaries something to do. And people wonder why it is that there’s no real economic growth, allowing our glorious leaders to claim that there is no money to spend on building up society, that improving the quality of life for those who need it will just have to wait.
“We can’t afford it” is the perpetual excuse for a lack of public services and crumbling infrastructure. Can’t get to see a doctor or another healthcare specialist? With the zone flooded with “AI”, desperate people turn to desperate solutions with disastrous results. We should, of course, expect better support for the people who need it in our societies, from actual human professionals, but that requires investment and commitment. Expect to be fobbed off with technological toys that cosplay the experience of interacting with professionals instead. Unless you are wealthy or well-connected, because then only the best will do.
“AI” is just the latest escalation in the practice of “divide and rule”, facilitating the targeting of individuals to such a degree that the powerful can go beyond merely targeting minorities and smaller groups while having to indulge the majority to keep them passive and broadly supportive of such cruelty. With “AI”, the powerful can potentially pull each person apart from those closest to them, corrupting their communications and poisoning their relationships, to a degree not even achieved through the manipulation of people’s lives by predatory social media. Populists and the affluent will gladly embrace “AI” for all the amusement it offers and for as long as it lasts, unaware that there may no longer be a “safe” majority for them to hide within.
It is not exactly an exaggeration to consider “AI” as an existential threat requiring collective action, not least because of its ruinous power and resource consumption, coupled with the ineffective measures to tackle climate change that are formulated by those politicians always encouraging us to wait for better times. Such times will never arrive if the barons of “AI” and others who prioritise their own wealth are setting the agenda, of course.
There seem to be plenty of people who think that by enthusing about ruinous practices like “AI”, parroting the rhetoric of the oligarchs, and otherwise only caring about things that benefit them personally, that they will somehow get to join that club of the wealthy, that they too will get to go on the spaceship.
To those people, I can only say this: instead of joining the club, even you will find yourself alone, having been torn from the fabric of the society you helped to obliterate, but instead of living your best life, you will be miserable and there will be nobody left to defend you.
And by the way, there is no spaceship.