Paul Boddie's Free Software-related blog



The Academic Barriers of Commercialisation

Monday, January 9th, 2017

Last year, the university through which I obtained my degree celebrated a “milestone” anniversary, meaning that I got even more announcements, notices and other such things than I was already getting from them before. Fortunately, not everything published into this deluge is bound up in proprietary formats (as one brochure was, sitting on a Web page in Flash-only form) or only reachable via a dubious “Libyan link-shortener” (as certain things were published via a social media channel that I have now quit). It is indeed infuriating to see one of the links in a recent HTML/plain text hybrid e-mail message using a redirect service hosted on the university’s own alumni sub-site, sending the reader to a bit.ly URL, which will redirect them off into the great unknown and maybe even back to the original site. But such things are what one comes to expect on today’s Internet with all the unquestioning use of random “cloud” services, each one profiling the unsuspecting visitor and betraying their privacy to make a few extra cents or pence.

But anyway, upon following a more direct – but still redirected – link to an article on the university Web site, I found myself looking around to see what gets published there these days. Personally, I find the main university Web site rather promotional and arguably only superficially informative – you can find out the required grades to take courses along with supposed student approval ratings and hypothetical salary expectations upon qualifying – but it probably takes more digging to get at the real detail than most people would be willing to do. I wouldn’t mind knowing what they teach now in their computer science courses, for instance. I guess I’ll get back to looking into that later.

Gatekeepers of Knowledge

However, one thing did catch my eye as I browsed around the different sections, encountering the “technology transfer” department with the expected rhetoric about maximising benefits to society: the inevitable “IP” policy in all its intimidating length, together with an explanatory guide to that policy. Now, I am rather familiar with such policies from my time at my last academic employer, having been obliged to sign some kind of statement of compliance at one point, but then apparently not having to do so when starting a subsequent contract. It was not as if enlightenment had come calling at the University of Oslo between these points in time such that the “IP rights” agreement now suddenly didn’t feature in the hiring paperwork; it was more likely that such obligations had simply been baked into everybody’s terms of employment as yet another example of the university upper management’s dubious organisational reform and questionable human resources practices.

Back at Heriot-Watt University, credit is perhaps due to the authors of their explanatory guide for trying to explain the larger policy document, because most people aren’t going to get through that much longer document and retain a clear head. But one potentially unintended reason for credit is that by being presented with a much less opaque treatment of the policy and its motivations, we are able to see with enhanced clarity many of the damaging misconceptions that have sadly become entrenched in higher education and academia, including the ways in which such policies actually do conflict with the sharing of knowledge that academic endeavour is supposed to be all about.

So, we get the sales pitch about new things needing investment…

However, often new technologies and inventions are not fully developed because development needs investment, and investment needs commercial returns, and to ensure commercial returns you need something to sell, and a freely available idea cannot be sold.

If we ignore various assumptions about investment or the precise economic mechanisms supposedly required to bring about such investment, we can immediately note that ideas on their own aren’t worth anything anyway, freely available or not. Although the Norwegian Industrial Property Office (or the Norwegian Patent Office if we use a more traditional name) uses the absurd vision slogan “turning ideas into values” (it should probably read “value”, but whatever), this perhaps says more about greedy profiteering through the sale of government-granted titles bound to arbitrary things than it does about what kinds of things have any kind of inherent value that you can take to the bank.

But assuming that we have moved beyond the realm of simple ideas and have entered the realm of non-trivial works, we find that we have also entered the realm of morality and attitude management:

That is why, in some cases, it is better for your efforts not to be published immediately, but instead to be protected and then published, for protection gives you something to sell, something to sell can bring in investment, and investment allows further development. Therefore in the interests of advancing the knowledge within the field you work in, it is important that you consider the commercial potential of your work from the outset, and if necessary ensure it is properly protected before you publish.

Once upon a time, the most noble pursuit in academic research was to freely share research with others so that societal, scientific and technological progress could be made. Now it appears that the average researcher should treat it as their responsibility to conceal their work from others, seek “protection” on it, and then release the encumbered details for mere perusal and the conditional participation of those once-valued peers. And they should, of course, be wise to the commercial potential of the work, whatever that is. Naturally, “intellectual property” offices in such institutions have an “if in doubt, see us” policy, meaning that they seek to interfere with research as soon as possible, and should someone fail to have “seen them”, that person’s loyalty may very well be called into question as if they had somehow squandered their employer’s property. In some institutions, this could very easily get people marginalised or “reorganised” if not immediately or obviously fired.

The Rewards of Labour

It is in matters of property and ownership where things get very awkward indeed. Many people would accept that employees of an organisation are producing output that becomes the property of that organisation. What fewer people might accept is that the customers of an organisation are also subject to having their own output taken to be the property of that organisation. The policy guide indicates that even undergraduate students may also be subject to an obligation to assign ownership of their work to the university: those visiting the university supposedly have to agree to this (although it doesn’t say anything about what their “home institution” might have to say about that), and things like final year projects are supposedly subject to university ownership.

So, just because you as a student have a supervisor bound by commercialisation obligations, you end up not only paying tuition fees to get your university education (directly or through taxation), but you also end up having your own work taken off you because it might be seen as some element in your supervisor’s “portfolio”. I suppose this marks a new low in workplace regulation and standards within a sector that already skirts the law with regard to how certain groups are treated by their employers.

One can justifiably argue that employees of academic institutions should not be allowed to run away with work funded by those institutions, particularly when such funding originally comes from other sources such as the general public. After all, such work is not exactly the private property of the researchers who created it, and to treat it as such would deny it to those whose resources made it possible in the first place. Any claims about “rightful rewards” needing to be given are arguably made to confuse rational thinking on the matter: after all, with appropriate salaries, the researchers are already being rewarded doing work that interests and stimulates them (unlike a lot of people in the world of work). One can argue that academics increasingly suffer from poorer salaries, working conditions and career stability, but such injustices are not properly remedied by creating other injustices to supposedly level things out.

A policy around what happens with the work done in an academic institution is important. But just as individuals should not be allowed to treat broadly-funded work as their own private property, neither should the institution itself claim complete ownership and consider itself entitled to do what it wishes with the results. It may be acting as a facilitator to allow research to happen, but by seeking to intervene in the process of research, it risks acting as an inhibitor. Consider the following note about “confidential information”:

This is, in short, anything which, if you told people about, might damage the commercial interests of the university. It specifically includes information relating to intellectual property that could be protected, but isn’t protected yet, and which if you told people about couldn’t be protected, and any special know how or clever but non patentable methods of doing things, like trade secrets. It specifically includes all laboratory notebooks, including those stored in an electronic fashion. You must be very careful with this sort of information. This is of particular relevance to something that may be patented, because if other people know about it then it can’t be.

Anyone working in even a moderately paranoid company may have read things like this. But here the context is an environment where knowledge should be shared to benefit and inform the research community. Instead, one gets the impression that the wish to control the propagation of knowledge is so great that some people would rather see the details of “clever but non patentable methods” destroyed than passed on openly for others to benefit from. Indeed, one must question whether “trade secrets” should even feature in a university environment at all.

Of course, the obsession with “laboratory notebooks”, “methods of doing things” and “trade secrets” in such policies betrays the typical origins of such drives for commercialisation: the apparently rich pickings to be had in the medical, pharmaceutical and biosciences domains. It is hardly a coincidence that the University of Oslo intensified its dubious “innovation” efforts under a figurehead with a background (or an interest) in exactly those domains: with a narrow personal focus, an apparent disdain for other disciplines, and a wider commercial atmosphere that gives such a strategy a “dead cert” air of impending fortune, we should perhaps expect no more of such a leadership creature (and his entourage) than the sum of that creature’s instincts and experiences. But then again, we should demand more from such people when their role is to cultivate an institution of learning and not to run a private research organisation at the public’s expense.

The Dirty Word

At no point in the policy guide does the word “monopoly” appear. Given that such a largely technical institution would undoubtedly be performing research where the method of “protection” would involve patents being sought, omitting the word “monopoly” might be that document’s biggest flaw. Heriot-Watt University originates from the merger of two separate institutions, one of which bore the name of the well-known pioneer of steam engine technology, James Watt.

Recent discussion of Watt’s contributions to the development and proliferation of such technology has brought up claims that Watt’s own patents – the things that undoubtedly made him wealthy enough to fund an educational organisation – actually held up progress in the domain concerned for a number of decades. While he was clearly generous and sensible enough to spend his money on worthy causes, one can always challenge whether the benefits from the subsequent use of that wealth can justify the questionable practices that resulted in its accumulation, particularly if those practices can be regarded as having had negative effects on society and may even have increased wealth inequality.

Questioning philanthropy is not a particularly fashionable thing to do. In capitalist societies, wealthy people are often seen as having made their fortunes in an honest fashion, enjoying a substantial “benefit of the doubt” that this was what really occurred. Criticising a rich person giving money to ostensibly good causes is seen as unkind to both the generous donor and to those receiving the donations. But we should question the means through which the likes of Bill Gates (in our time) and James Watt (in his own time) made their fortunes and the power that such fortunes give to such people to direct money towards causes of their own personal choosing, not to mention the way in which wealthy people also choose to influence public policy and the use of money given by significantly less wealthy individuals – the rest of us – gathered through taxation.

But back to monopolies. Can they really be compatible with the pursuit and sharing of knowledge that academia is supposed to be cultivating? Just as it should be shocking that secretive “confidentiality” rules exist in an academic context, it should appal us that researchers are encouraged to be competitively hostile towards their peers.

Removing the Barriers

It appears that some well-known institutions understand that the unhindered sharing of their work is their primary mission. MIT Media Lab now encourages the licensing of software developed under its roof as Free Software, without requiring special approval or the kind of institutional stalling that often seems to take place as the “innovation” vultures pick over the things they think should be monetised. Although proprietary licensing still appears to be an option for those within the Media Lab organisation, at least it seems that people wanting to follow their principles and make their work available as Free Software can do so without being made to feel bad about it.

As an academic institution, we believe that in many cases we can achieve greater impact by sharing our work.

So says the director of the MIT Media Lab. It says a lot about the times we live in that this needs to be said at all. Free Software licensing is, as a mechanism to encourage sharing, a natural choice for software, but we should also expect similar measures to be adopted for other kinds of works. Papers and articles should at the very least be made available using content licences that permit sharing, even if the licence variants chosen by authors might seek to prevent the misrepresentation of parts of their work by prohibiting remixes or derived works. (This may sound overly restrictive, but one should consider the way in which scientific articles are routinely misrepresented by climate change and climate science deniers.)
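To make the software side concrete – and this is purely my own sketch of common practice, not something mandated by the Media Lab or by the policy guide discussed above – releasing a piece of academic code as Free Software can be as simple as attaching a licence declaration to each source file and publishing the result:

  # SPDX-License-Identifier: GPL-3.0-or-later
  #
  # analysis.py - a hypothetical research script, shown here only to
  # illustrate what a Free Software licence declaration looks like.
  #
  # Copyright (C) 2017  A. Researcher
  #
  # This program is free software: you can redistribute it and/or modify
  # it under the terms of the GNU General Public License as published by
  # the Free Software Foundation, either version 3 of the License, or
  # (at your option) any later version.

  def mean(values):
      "Return the arithmetic mean of a sequence of numbers."
      return sum(values) / len(values)

  if __name__ == "__main__":
      print(mean([1.0, 2.0, 3.0, 4.0]))

Choosing between licences and keeping track of contributions from multiple authors obviously needs more care, but the mechanism itself is this lightweight.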

Free Software has encouraged an environment where sharing is safely and routinely done. Licences like the GNU General Public Licence seek to shield recipients from things like patent threats, particularly from organisations which might appear to want to share their works, but which might be tempted to use patents to regulate the further use of those works. Even in realms where patents have traditionally been tolerated, attempts have been made to shield others from the effects of patents, intended or otherwise: the copyleft hardware movement demands that shared hardware designs are patent-free, for instance.

In contrast, one might think that despite the best efforts of the guide’s authors, all the precautions and behavioural self-correction it encourages might just drive the average researcher to distraction. Or, just as likely, to ignoring most of the guidelines and feigning ignorance if challenged by their “innovation”-obsessed superiors. But in the drive to monetise every last ounce of effort there is one statement that is worth remembering:

If intellectual property is not assigned, this can create problems in who is allowed to exploit the work, and again work can go to waste due to a lack of clarity over who owns what.

In other words, in an environment where everybody wants a share of the riches, it helps to have everybody’s interests out in the open so that there may be no surprises later on. Now, it turns out that unclear ownership and overly casual management of contributions is something that has occasionally threatened Free Software projects, resulting in more sophisticated thinking about how contributions are managed.

And it is precisely this combination of Free Software licensing, or something analogous for other domains, with proper contribution and attribution management that will extend safe and efficient sharing of knowledge to the academic realm. Researchers just cannot have the same level of confidence when dealing with the “technology transfer” offices of their institution and of other institutions. Such offices only want to look after themselves while undermining everyone beyond the borders of their own fiefdoms.

Divide and Rule

It is unfortunate that academic institutions feel that they need to “pull their weight” and have to raise funds to make up for diminishing public funding. By turning their backs on the very reason for their own existence and seeking monopolies instead of sharing knowledge, they unwittingly participate in the “divide and rule” tactics blatantly pursued in the political arena: that everyone must fight each other for all that is left once the lion’s share of public funding has been allocated to prestige megaprojects and schemes that just happen to benefit the well-connected, the powerful and the influential people in society the most.

A properly-funded education sector is an essential component of a civilised society, and its institutions should not be obliged to “sharpen their elbows” in the scuffle for funding and thus deprive others of knowledge just to remain viable. Sadly, while austerity politics remains fashionable, it may be up to us in the Free Software realm to remind academia of its obligations and to show that sustainable ways of sharing knowledge exist and function well in the “real world”.

Indeed, it is up to us to keep such institutions honest and to prevent advocates of monopoly-driven “innovation” from being able to insist that their way is the only way, because just as “divide and rule” politics erects barriers between groups in wider society, commercialisation erects barriers that inhibit the essential functions of academic pursuit. And such barriers ultimately risk extinguishing academia altogether, along with all the benefits its institutions bring to society. If my university were not reinforcing such barriers with its “IP” policy, maybe its anniversary as a measure of how far we have progressed from monopolies and intellectual selfishness would have been worth celebrating after all.

Rename This Project

Tuesday, December 13th, 2016

It is interesting how the CPython core developers appear to prefer to spend their time on choosing names for someone else’s fork of Python 2, with some rather expansionist views on trademark applicability, rather than on actually winning over Python 2 users to their cause, which is to make Python 3 the only possible future of the Python language, of course. Never mind that the much broader Python language community still appears to have an overwhelming majority of Python 2 users. And not some kind of wafer-thin, “first past the post”, mandate-exaggerating, Brexit-level majority, but an actual “that doesn’t look so bad but, oh, the scale is logarithmic!” kind of majority.

On the one hand, there are core developers who claim to be very receptive to the idea of other people maintaining Python 2, because the CPython core developers have themselves decided that they cannot bear to look at that code after 2020 and will not issue patches, let alone make new releases, even for the issues that have been worthy of their attention in recent years. Telling people that they are completely officially unsupported applies yet more “stick” and even less “carrot” to those apparently lazy Python 2 users who are still letting the side down by not spending their own time and money on realising someone else’s vision. But apparently, that receptivity extends only so far into the real world.

One often reads and hears claims of “entitlement” when users complain about developers or the output of Free Software projects. Let it be said that I really appreciate what has been delivered over the decades by the Python project: the language has kept programming an interesting activity for me; I still to this day maintain and develop software written in Python; I have even worked to improve the CPython distribution at times, not always successfully. But it should always be remembered that even passive users help to validate projects, and active users contribute in numerous ways to keep projects viable. Indeed, users invest in the viability of such projects. Without such investment, many projects (like many companies) would remain unable to fulfil their potential.

Instead of inflicting burdensome change whose predictable effect is to cause a depreciation of the users’ existing investments and to demand that they make new investments just to mitigate risk and “keep up”, projects should consider their role in developing sustainable solutions that do not become obsolete just because they are not based on the “latest and greatest” of the technology realm’s toys. If someone comes along and picks up this responsibility when it is abdicated by others, then at the very least they should not be given a hard time about it. And at least this “Python 2.8” barely pretends to be anything more than a continuation of things that came before, which is not something that can be said about Python 3 and the adoption/migration fiasco that accompanies it to this day.
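As a small illustration of what that “keeping up” actually demands – a contrived sketch of my own, not code taken from any real project – even trivial Python 2 code has to be reworked before Python 3 will accept it:

  # Python 2 (runs as-is under a Python 2 interpreter):
  #
  #   print "Average:", 7 / 2    # print statement; integer division yields 3
  #
  # The same logic rewritten for Python 3:

  print("Average:", 7 / 2)       # print is now a function; / now yields 3.5

  # Similar reworking awaits byte/Unicode string handling, dictionary methods
  # such as iteritems(), and a long list of renamed standard library modules.

Multiply such changes across a large, working code base and the scale of the investment being demanded of Python 2 users becomes rather clearer.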

On Not Liking Computers

Monday, November 21st, 2016

Adam Williamson recently wrote about how he no longer really likes computers. This attracted many responses from people who misunderstood him and decided to dispense career advice, including doses of the usual material about “following one’s passion” or “changing one’s direction” (which usually involves becoming some kind of “global nomad”), which do make me wonder how some of these people actually pay their bills. Do they have a wealthy spouse or wealthy parents or “an inheritance”, or do they just do lucrative contracting for random entities whose nature or identities remain deliberately obscure to avoid thinking about where the money for those jobs really comes from? Particularly the latter would be the “global nomad” way, as far as I can tell.

But anyway, Adam appears to like his job: it’s just that he isn’t interested in technological pursuits outside working hours. At some level, I think we can all sympathise with that. For those of us who have similarly pessimistic views about computing, it’s worth presenting a list of reasons why we might not be so enthusiastic about technology any more, particularly for those of us who also care about the ethical dimensions, not merely whether the technology itself is “any good” or whether it provides a sufficient intellectual challenge. By the way, this is my own list: I don’t know Adam from, well, Adam!

Lack of Actual Progress

One may be getting older and noticing that the same technological problems keep occurring again and again, never getting resolved, while seeing people with no sense of history provoke change for change’s – not progress’s – sake. After a while, or when one gets to a certain age, one expects technology to just work and that people might have figured out how to get things to communicate with each other, or whatever, by building on what went before. But then it usually seems to be the case that some boy genius or other wanted a clear run at solving such problems from scratch, developing lots of flashy features but not the mundane reliability that everybody really wanted.

People then get told that such “advanced” technology is necessarily complicated. Whereas once upon a time, you could pick up a telephone, dial a number, have someone answer, and conduct a half-decent conversation, now you have to make sure that the equipment is all connected up properly, that all the configurations are correct, that the Internet provider isn’t short-changing you or trying to suppress your network traffic. And then you might dial and not get through, or you might have the call mysteriously cut out, or the audio quality might be like interviewing a gang of squabbling squirrels speaking from the bottom of a dustbin/trashcan.

Depreciating Qualifications

One may be seeing a profession that requires a fair amount of educational investment – which, thanks to inept/corrupt politicians, also means a fair amount of financial investment – become devalued to the point that its practitioners are regarded as interchangeable commodities who can be coerced into working for as little as possible. So much for the “knowledge economy” when its practitioners risk ending up earning less than people doing so-called “menial” work who didn’t need to go through a thorough higher education or keep up an ongoing process of self-improvement to remain “relevant”. (Not that there’s anything wrong with “menial” work: without people doing unfashionable jobs, everything would grind to a halt very quickly, whereas quite a few things I’ve done might as well not exist, so little difference they made to anything.)

Now we get told that programming really will be the domain of “artificial intelligence” this time around. That instead of humans writing code, “high priests” will merely direct computers to write the software they need. Of course, such stuff sounds great in Wired magazine and rather amusing to anyone with any actual experience of software projects. Unfortunately, politicians (and other “thought leaders”) read such things one day and then slash away at budgets the next. And in a decade’s time, we’ll be suffering the same “debate” about a lack of “engineering talent” with the same “insights” from the usual gaggle of patent lobbyists and vested interests.

Neoliberal Fantasy Economics

One may have encountered the “internship” culture where as many people as possible try to get programmers and others in the industry to work for nothing, making them feel as if they need to do so in order to prove their worth for a hypothetical employment position or to demonstrate that they are truly committed to some corporate-aligned goal. One reads or hears people advocating involvement in “open source” not to uphold the four freedoms (to use, share, modify and distribute software), but instead to persuade others to “get on the radar” of an employer whose code has been licensed as Free Software (or something pretending to be so) largely to get people to work for them for free.

Now, I do like the idea of employers getting to know potential employees by interacting in a Free Software project, but it should really only occur when the potential employee is already doing something they want to do because it interests them and is in their interests. And no-one should be persuaded into doing work for free on the vague understanding that they might get hired for doing so.

The Expendable Volunteers

One may have seen the exploitation of volunteer effort where people are made to feel that they should “step up” for the benefit of something they believe in, often requiring volunteers to sacrifice their own time and money to do such free work, and often seeing those volunteers being encouraged to give money directly to the cause, as if all their other efforts were not substantial contributions in themselves. While striving to make a difference around the edges of their own lives, volunteers are often working in opposition to well-resourced organisations whose employees have the luxury of countering such volunteer efforts on a full-time basis and with a nice salary. Those people can go home in the evenings and at weekends and tune it all out if they want to.

No wonder volunteers burn out or decide that they just don’t have time or aren’t sufficiently motivated any more. The sad thing is that some organisations ignore this phenomenon because there are plenty of new volunteers wanting to “get active” and “be visible”, perhaps as a way of marketing themselves. Then again, some communities are content to alienate existing users if they can instead attract the mythical “10x” influx of new users to take their place, so we shouldn’t really be surprised, I suppose.

Blame the Powerless

One may be exposed to the culture that if you care about injustices or wrongs then bad or unfortunate situations are your responsibility even if you had nothing to do with their creation. This culture pervades society and allows the powerful to do what they like, to then make everyone else feel bad about the consequences, and to virtually force people to just accept the results if they don’t have the energy at the end of a busy day to do the legwork of bringing people to account.

So, those of us with any kind of conscience at all might already be supporting people trying to do the right thing like helping others, holding people to account, protecting the vulnerable, and so on. But at the same time, we aren’t short of people – particularly in the media and in politics – telling us how bad things are, with an air of expectation that we might take responsibility for something supposedly done on our behalf that has had grave consequences. (The invasion and bombing of foreign lands is one depressingly recurring example.) Sadly, the feeling of powerlessness many people have, as the powerful go round doing what they like regardless, is exploited by the usual cynical “divide and rule” tactics of other powerful people who merely see the opportunities in the misuse of power and the misery it causes. And so, selfishness and tribalism proliferate, demotivating anyone wanting the world to become a better place.

Reversal of Liberties

One may have had the realisation that technology is no longer merely about creating opportunities or making things easier, but is increasingly about controlling and monitoring people and making things complicated and difficult. That sustainability is sacrificed so that companies can cultivate recurring and rich profit opportunities by making people dependent on obsolete products that must be replaced regularly. And that technology exacerbates societal ills rather than helping to eradicate them.

We have the modern Web whose average site wants to “dial out” to a cast of recurring players – tracking sites, content distribution networks (providing advertising more often than not), font resources, image resources, script resources – all of which contribute to making the “signal-to-noise” ratio of the delivered content smaller and smaller all the time. Where everything has to maintain a channel of communication to random servers to constantly update them about what the user is doing, where they spent most of their time, what they looked at and what they clicked on. All of this requiring hundreds of megabytes of program code and data, burning up CPU time, wasting energy, making computers slow and steadily obsolete, forcing people to throw things away and to buy more things to throw away soon enough.

We have the “app” ecosystem experience, with restrictions on access, competition and interoperability, with arbitrarily-curated content: the walled gardens that the likes of Apple and Microsoft failed to impose on everybody at the dawn of the “consumer Internet” but do so now under the pretences of convenience and safety. We have social networking empires that serve fake news to each person’s little echo chamber, whipping up bubbles of hate and distracting people from what is really going on in the world and what should really matter. We have “cloud” services that often offer mediocre user experiences but which offer access from “any device”, with users opting in to both the convenience of being able to get their messages or files from their phone and the surveillance built into such services for commercial and governmental exploitation.

We have planned obsolescence designed into software and hardware, with customers obliged to buy new products to keep doing the things they want to do with those products and to keep the experience relatively secure. And we have dodgy batteries sealed into devices, with the obligation apparently falling on the customers themselves to look after their own safety and – when the product fails – to deal with the impact of that product on the environment. By burdening the hapless user of technology with so many caveats that their life becomes dominated by them, those things become a form of tyranny, too.

Finding Meaning

Many people need to find meaning in their work and to feel that their work aligns with their own priorities. Some people might be able to do work that is unchallenging or uninteresting and then pursue their interests and goals in their own time, but this may be discouraging and demotivating over the longer term. When people’s work is not orthogonal to their own beliefs and interests but instead actively undermines them, the result is counterproductive and even damaging to those beliefs and interests and to others who share them.

For example, developing proprietary software or services in a full-time job, although potentially intellectually challenging, is likely to undermine any realistic level of commitment in one’s own free time to Free Software that does the same thing. Some people may prioritise a stimulating job over the things they believe in, feeling that their work still benefits others in a different way. Others may feel that they are betraying Free Software users by making people reliant on proprietary software and causing interoperability problems when those proprietary software users start assuming that everything should revolve around them, their tools, their data, and their expectations.

Although Adam wasn’t framing this shift in perspectives in terms of his job or career, it might have an impact on some people in that regard. I sometimes think of the interactions between my personal priorities and my career. Indeed, the way that Adam can seemingly stash his technological pursuits within the confines of his day job, while leaving the rest of his time for other things, was some kind of vision that I once had for studying and practising computer science. I think he is rather lucky in that his employer’s interests and his own are aligned sufficiently for him to be able to consider his workplace a venue for furthering those interests, doing so sufficiently to not need to try and make up the difference at home.

We live in an era of computational abundance and yet so much of that abundance is applied ineffectively and inappropriately. I wish I had a concise solution to the complicated equation involving technology and its effects on our quality of life, if not for the application of technology in society in general, then at least for individuals, and not least for myself. Maybe a future article needs to consider what we should expect from technology, as its application spreads ever wider, such that the technology we use and experience upholds our rights and expectations as human beings instead of undermining and marginalising them.

It’s not hard to see how even those who were once enthusiastic about computers can end up resenting them and disliking what they have become.

Defending the 99%

Monday, October 24th, 2016

In the context of a fairly recent discussion of Free Software licence enforcement on the Linux Kernel Summit mailing list, where Matthew Garrett defended the right of users to enjoy the four freedoms upheld by the GPL, but where Linus Torvalds and others insisted that upstream corporate contributions are more important and that it doesn’t matter if the users get to see the source code, Jonas Öberg made the remarkable claim that…

“It’s almost as if Matthew is talking about benefits for the 1% whereas Linus is aiming for the benefit of the 99%.”

So, if we are to understand this correctly, a highly-privileged and famous software developer, whose position on the “tivoization” of hardware was that users shouldn’t expect to have any control over the software running on their purchases, is now seemingly echoing the sentiments of a billionaire monopolist who once said that users didn’t need to see the source code of the programs they use. That particular monopolist stated that the developers of his company’s software would take care of everything and that the users would “rely on us” because the mere notion of anybody else interacting with the source code was apparently “the opposite of what’s supposed to go on”.

Here, this famous software developer’s message is that corporations may in future enrich his and his colleagues’ work so that a device purchaser may, at some unspecified time in the future, get to enjoy a properly-maintained version of the Linux kernel running inside a purchase of theirs. All the purchaser needs to do is to stop agitating for their rights to the four freedoms and instead effectively rely on them to look after everything. (Where “them” would be the upstream kernel development community incorporating supposedly-cooperative corporate representatives.)

Now, note once again that such kernel code would only appear in some future product, not in the now-obsolete or broken product that the purchaser currently has. So far, the purchaser may be without any proper insight into that product – apart from the dubious consolation of knowing that the vendor likes Linux enough to have embedded it in the product – and they may well be left without any control over what software the product actually ends up running. So much for relying on “them” to look after the pressing present-day needs of users.

And even with any mythical future product unboxed and powered by a more official form of Linux, the message from the vendor may very well be that at no point should the purchaser ever feel entitled to look inside the device at the code, to try and touch it, to modify it, improve or fix it, and they should absolutely not try and use that device as a way of learning about computing, just as the famous developer and his colleagues were able to do when they got their start in the industry. So much for relying on “them” to look after the future needs of users.

(And let us not even consider that a bunch of other code delivered in a product may end up violating other projects’ licences because those projects did not realise that they had to “make friends” with the same group of dysfunctional corporations.)

Somehow, I rather feel that Matthew Garrett is the one with more of an understanding of what it is like to be among the 99%: where you buy something that could potentially be insecure junk as soon as it is unboxed, where the vendor might even arrogantly declare that the licensing does not apply to them. And he certainly has an understanding of what the 99% actually want: to be able to do something about such things right now, rather than to be at the mercy of lazy, greedy and/or predatory corporate practices; to finally get the product with all the features you thought your money had managed to buy you in the first place.

All of this ground-level familiarity seems very much in contrast to that of some other people who presumably only “hear” via second- or third-hand accounts what the average user or purchaser supposedly experiences, whose privilege and connections will probably get “them” what they want or need without any trouble at all. Let us say that in this dispute Matthew Garrett is not the one suffering from what might be regarded as “benevolent dictator syndrome”.

The Misrepresentation of Others

And one thing Jonas managed to get taken in by was the despicable and continued misrepresentation of organisations like the Software Freedom Conservancy, their staff, and their activities. Despite the public record showing otherwise, certain participants in the discussion were only too happy to perpetuate the myth of such organisations being litigious, and to belittle those organisations’ work, in order to justify their own hostile and abusive tone towards decent, helpful and good people.

No-one has ever really been forced to choose between cooperation, encouragement, community-building and the pursuit of enforcement. Indeed, the organisations pursuing responsible enforcement strategies, in reminding people of their responsibilities, are actually encouraging companies to honour licences and to respect the people who chose such licences for their works. The aim is ultimately to integrate today’s licence violators into the community of tomorrow as honest, respectable and respectful participants.

Community-building can therefore occur even when pointing out to people what they have been doing wrong. But without any substance, licences would provide only limited powers in persuading companies to do the right thing. And the substance of licences is rooted in their legal standing, meaning that occasionally a licence-violating entity might need to be reminded that its behaviour may be scrutinised in a legal forum and that the company involved may experience negative financial and commercial effects as a result.

Reminding others that licences have substance and requiring others to observe such licences is not “force”, at least not the kind of illegitimate force that is insinuated by various factions who prefer the current situation of widespread licence violation and lip service to the Linux brand. It is the instrument through which the authors of Free Software works can be heard and respected when all other reasonable channels of redress have been shut down. And, crucially, it is the instrument through which the needs of the end-user, the purchaser, the people who do no development at all – indeed, all of the people who paid good money and who actually funded the product making use of the Free Software at risk, whose money should also be funding the development of that software – can be heard and respected, too.

I always thought that “the 1%” were the people who had “got theirs” already, the privileged, the people who promise the betterment of everybody else’s lives through things like trickle-down economics, the people who want everything to go through them so that they get to say who benefits or not. If pandering to well-financed entities for some hypothetical future pay-off while they conduct business as usual at everybody else’s expense is “for the benefit of the 99%”, then it seems to me that Jonas has “the 1%” and “the 99%” the wrong way round.

EOMA68: The Campaign (and some remarks about recurring criticisms)

Thursday, August 18th, 2016

I have previously written about the EOMA68 initiative and its objective of making small, modular computing cards that conform to a well-defined standard which can be plugged into certain kinds of device – a laptop or desktop computer, or maybe even a tablet or smartphone – providing a way of supplying such devices with the computing power they all need. This would also offer a convenient way of taking your computing environment with you, using it in the kind of device that makes most sense at the time you need to use it, since the computer card is the actual computer and all you are doing is putting it in a different box: switch off, unplug the card, plug it into something else, switch that on, and your “computer” has effectively taken on a different form.

(This “take your desktop with you” by actually taking your computer with you is fundamentally different to various dubious “cloud synchronisation” services that would claim to offer something similar: “now you can synchronise your tablet with your PC!”, or whatever. Such services tend to operate rather imperfectly – storing your files on some remote site – and, of course, expose you to surveillance and convenience issues.)

Well, a crowd-funding campaign has since been launched to fund a number of EOMA68-related products, with an opportunity for those interested to acquire the first round of computer cards and compatible devices, those devices being a “micro-desktop” that offers a simple “mini PC” solution, together with a somewhat radically designed and produced laptop (or netbook, perhaps) that emphasises accessible construction methods (home 3D printing) and alternative material usage (“eco-friendly plywood”). In the interests of transparency, I will admit that I have pledged for a card and the micro-desktop, albeit via my brother for various personal reasons that also kept me from writing about this here before now.

An EOMA68 computer card in a wallet (courtesy Rhombus Tech/Crowd Supply)

Of course, EOMA68 is about more than just conveniently taking your computer with you because it is now small enough to fit in a wallet. Even if you do not intend to regularly move your computer card from device to device, it emphasises various sustainability issues such as power consumption (deliberately kept low), long-term support and matters of freedom (the selection of CPUs that completely support Free Software and do not introduce surveillance backdoors), and device longevity (that when the user wants to upgrade, they may easily use the card in something else that might benefit from it).

This is not modularity to prove some irrelevant hypothesis. It is modularity that delivers concrete benefits to users (that they aren’t forced to keep replacing products engineered for obsolescence), to designers and manufacturers (that they can rely on the standard to provide computing functionality and just focus on their own speciality to differentiate their product in more interesting ways), and to society and the environment (by reducing needless consumption and waste caused by the upgrade treadmill promoted by the technology industries over the last few decades).

One might think that such benefits might be received with enthusiasm. Sadly, it says a lot about today’s “needy consumer” culture that instead of welcoming another choice, some would rather spend their time criticising it, often to the point that one might wonder about their motivations for doing so. Below, I present some common criticisms and some of my own remarks.

(If you don’t want to read about “first world” objections – typically about “new” and “fast” – and are already satisfied by the decisions made regarding more understandable concerns – typically involving corporate behaviour and licensing – just skip to the last section.)

“The A20 is so old and slow! What’s the point?”

The Allwinner A20 has been around for a while. Indeed, its predecessor – the A10 – was the basis of initial iterations of the computer card several years ago. Now, the amount of engineering needed to upgrade the prototypes – previously built around the A10 – to use the A20 instead is minimal, at least in comparison to adopting another CPU (that would probably require a redesign of the circuit board for the card). And hardware prototyping is expensive, especially when unnecessary design changes have to be made, when they don’t always work out as expected, and when extra rounds of prototypes are then required to get the job done. For an initiative with a limited budget, the A20 makes a lot of sense because it means changing as little as possible, benefiting from the functionality upgrade and keeping the risks low.

Obviously, there are faster processors available now, but as the processor selection criteria illustrate, if you cannot support them properly with Free Software and must rely on binary blobs which potentially violate the GPL, it would be better to stick to a more sustainable choice (because that is what adherence to Free Software is largely about) even if that means accepting reduced performance. In any case, at some point, other cards with different processors will come along and offer faster performance. Alternatively, someone will make a dual-slot product that takes two cards (or even a multi-slot product that provides a kind of mini-cluster), and then with software that is hopefully better-equipped for concurrency, there will be ways of improving performance other than finding faster processors and hoping that they meet all the practical and ethical criteria.

“The RasPi 3…”

Lots of people love the Raspberry Pi, it would seem. The original models delivered a cheap, adequate desktop computer for a sum that was competitive even with some microcontroller-based single-board computers that are aimed at electronics projects and not desktop computing, although people tend to overlook rivals like the BeagleBoard and variants that would probably have occupied a similar price point even if the Raspberry Pi had never existed. Indeed, the BeagleBone Black resides in the same pricing territory now, as do many other products. It is interesting that both product families are backed by certain semiconductor manufacturers, and the Raspberry Pi appears to benefit from privileged access to Broadcom products and employees that is denied to others seeking to make solutions using the same SoC (system on a chip).

Now, the first Raspberry Pi models were not entirely at the performance level of contemporary desktop solutions, especially by having only 256MB or 512MB RAM, meaning that any desktop experience had to be optimised for the device. Furthermore, they employed an ARM architecture variant that was not fully supported by mainstream GNU/Linux distributions, in particular the one favoured by the initiative: Debian. So a variant of Debian has been concocted to support the devices – Raspbian – and despite the Raspberry Pi 2 being the first device in the series to employ an architecture variant that is fully supported by Debian, Raspbian is still recommended for it and its successor.

Anyway, the Raspberry Pi 3 having 1GB RAM and being several times faster than the earliest models might be more competitive with today’s desktop solutions, at least for modestly-priced products, and perhaps it is faster than products using the A20. But just like the fascination with MHz and GHz until Intel found that it couldn’t rely on routinely turning up the clock speed on its CPUs, or everybody emphasising the number of megapixels their digital camera had until they discovered image noise, such number games ignore other factors: the closed source hardware of the Raspberry Pi boards, the opaque architecture of the Broadcom SoCs with a closed source operating system running on the GPU (graphics processing unit) that has control over the ARM CPU running the user’s programs, the impracticality of repurposing the device for things like laptops (despite people attempting to repurpose it for such things, anyway), and the organisation behind the device seemingly being happy to promote a variety of unethical proprietary software from a variety of unethical vendors who clearly want a piece of the action.

And finally, with all the fuss about how much faster the opaque Broadcom product is than the A20, the Raspberry Pi 3 has half the RAM of the EOMA68-A20 computer card. For certain applications, more RAM is going to be much more helpful than more cores or “64-bit!”, which makes us wonder why the Raspberry Pi 3 doesn’t support 4GB RAM or more. (Indeed, the current trend of 64-bit ARM products offering memory quantities addressable by 32-bit CPUs seems to have missed the motivation for x86 finally going 64-bit back in the early 21st century, which was largely about efficiently supporting the increasingly necessary amounts of RAM required for certain computing tasks, with Intel’s name for x86-64 actually being at one time “Extended Memory 64 Technology”. Even the DEC Alpha, back in the 1990s, which could be regarded as heralding the 64-bit age in mainstream computing, and which arguably relied on the increased performance provided by a 64-bit architecture for its success, still supported quantities of memory beyond the reach of 32-bit addressing in delivered products when memory was obviously a lot more expensive than it is now.)
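For anyone wondering where the 4GB figure comes from, it is simply the ceiling of a flat 32-bit address space. The following throwaway calculation – my own aside, nothing to do with the campaign or any particular product – makes the point:

  # Memory reachable by a flat 32-bit address space versus a 64-bit one.
  GIB = 2 ** 30                          # bytes in one GiB

  addressable_32 = 2 ** 32               # 4294967296 bytes
  addressable_64 = 2 ** 64               # far beyond anything actually shipped

  print(addressable_32 // GIB, "GiB")    # 4 GiB: the classic 32-bit ceiling
  print(addressable_64 // GIB, "GiB")    # 17179869184 GiB

Pairing a 64-bit CPU with a mere 1GB of RAM therefore looks rather odd if expanded memory support was ever the point of going 64-bit.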

“But the RasPi Zero!”

Sure, who can argue with a $5 (or £4, or whatever) computer with 512MB RAM and a 1GHz CPU that might even be a usable size and shape for some level of repurposing for the kinds of things that EOMA68 aims at: putting a general purpose computer into a wide range of devices? Except that the Raspberry Pi Zero has had persistent availability issues, even ignoring the free give-away with a magazine that had people scuffling in newsagents to buy up all the available copies so they could resell them online at several times the retail price. And it could be perceived as yet another inventory-dumping exercise by Broadcom, given that it uses the same SoC as the original Raspberry Pi.

Arguably, the Raspberry Pi Zero is a more ambiguous follow-on from the Raspberry Pi Compute Module that obviously was (and maybe still is) intended for building into other products. Some people may wonder why the Compute Module wasn’t the same success as the earlier products in the Raspberry Pi line-up. Maybe its lack of success was because any organisation thinking of putting the Compute Module (or, these days, the Pi Zero) in a product to sell to other people is relying on a single vendor. And with that vendor itself relying on a single vendor with whom it currently has a special relationship, a chain of single vendor reliance is formed.

Any organisation wanting to build one of these boards into their product now has to have rather a lot of confidence that the chain will never weaken or break and that at no point will either of those vendors decide that they would rather like to compete in that particular market themselves and exploit their obvious dominance in doing so. And they have to be sure that the Raspberry Pi Foundation doesn’t suddenly decide to get out of the hardware business altogether and pursue those educational objectives that they once emphasised so much instead, or that the Foundation and its manufacturing partners don’t decide for some reason to cease doing business, perhaps selectively, with people building products around their boards.

“Allwinner are GPL violators and will never get my money!”

Sadly, Allwinner have repeatedly delivered GPL-licensed software without providing the corresponding source code, and this practice may even persist to this day. One response to this has referred to the internal politics and organisation of Allwinner, suggesting that some factions try to do the right thing while others act in an unenlightened, licence-violating fashion.

Let it be known that I am no fan of the argument that there are lots of departments in companies and that just because some do some bad things doesn’t mean that you should punish the whole company. To this day, Sony does not get my business because of the unsatisfactorily-resolved rootkit scandal and I am hardly alone in taking this position. (It gets brought up regularly on a photography site I tend to visit where tensions often run high between Sony fanatics and those who use cameras from other manufacturers, but to be fair, Sony also has other ways of irritating its customers.) And while people like to claim that Microsoft has changed and is nice to Free Software, even to the point where people refusing to accept this assertion get criticised, it is pretty difficult to accept claims of change and improvement when the company pulls in significant sums from shaking down device manufacturers using dubious patent claims on Android and Linux: systems it contributed nothing to. And no, nobody will have been reading any patents to figure out how to implement parts of Android or Linux, let alone any belonging to some company or other that Microsoft may have “vacuumed up” in an acquisition spree.

So, should the argument be discarded here as well? Even though I am not too happy about Allwinner’s behaviour, there is the consideration that, as the saying goes, “beggars cannot be choosers”. When very few CPUs exist that meet the criteria desirable for the initiative, some kind of nasty compromise may have to be made. Personally, I would have preferred the option of the Ingenic jz4775 card that was close to being offered in the campaign, although I have seen signs of Ingenic doing binary-only code drops on certain code-sharing sites, and so they do not necessarily have clean hands, either. But they are actually making the source code for such binaries available elsewhere, if you know where to look. Thus it is most likely that they do not really understand the precise obligations of the software licences concerned, as opposed to deliberately withholding the source code.

But it may well be that Chinese companies do not necessarily operate (or are not regulated) on the same principles as certain European, American and Japanese companies, for whom the familiar regime of corporate accountability allows us to judge a company on any wrongdoing: any executives unaware of such wrongdoing have been negligent or ineffective at building the proper processes of supervision and have thus permitted an unethical corporate culture, while any executives aware of such wrongdoing have arguably cultivated an unethical corporate culture themselves. That does not excuse unethical behaviour, but it might at least entertain the idea that by supporting an ethical faction within a company, the unethical factions may be weakened or even eliminated. If that really is how the game is played, of course, and is not just an excuse for finger-pointing where nobody is held to account for anything.

But companies elsewhere should certainly not be looking for a weakening of their accountability structures so as to maintain a similarly convenient situation of corporate hypocrisy: if Sony BMG does something unethical, Sony Imaging should take the bad with the good when they share and exploit the Sony brand; people cannot have things both ways. And Chinese companies should comply with international governance principles, if only to reassure their investors that nasty surprises (and liabilities) do not lie in wait because parts of such businesses were poorly supervised and not held accountable for any unethical activities taking place.

It is up to everyone to make their own decision about this. The policy of the campaign is that the A20 can be supported by Free Software without needing any proprietary software, does not rely on any Allwinner-engineered, licence-violating software (which might be perceived as a good thing), and is merely the first step into a wider endeavour that could be conveniently undertaken with the limited resources available at the time. Later computer cards may ignore Allwinner entirely, especially if the company does not clean up its act, but such cards may never get made if the campaign fails and the wider endeavour never even begins in earnest.

(And I sincerely hope that those who are apparently so outraged by GPL violations actually support organisations seeking to educate and correct companies who commit such violations.)

“You could buy a top-end laptop for that price!”

Sure you could. But this isn’t about a crowd-funding campaign trying to magically compete with an optimised production process that turns out millions of units every year backed by a multi-billion-dollar corporation. It is about highlighting the possibilities of more scalable (down to the economically-viable manufacture of a single unit), more sustainable device design and construction. And by the way, that laptop you were talking about won’t be upgradeable, so when you tire of its performance or if the battery loses its capacity, I suppose you will be disposing of it (hopefully responsibly) and presumably buying something similarly new and shiny by today’s measures.

Meanwhile, with EOMA68, the computing part of the supposedly overpriced laptop will be upgradeable, and with sensible device design the battery (and maybe other things) will be replaceable, too. Over time, EOMA68 solutions should be competitive on price anyway, because larger numbers of them will be produced, but unlike traditional products, the increased usable lifespans of EOMA68 solutions will also offer longer-term savings to their purchasers.

“You could just buy a used laptop instead!”

Sure you could. At some point, though, you will need to be buying a very old laptop just to have a CPU without a surveillance engine and some level of upgrade potential, and the specification might be disappointing to you. Even worse, things don’t last forever, particularly batteries and certain kinds of electronic components. Replacing those things may well be a challenge, and although it is worthwhile to make sure things get reused rather than immediately discarded, you can’t rely on picking up a particular product in the second-hand market forever. And relying on sourcing second-hand items effectively turns them into limited-edition products, whereas the EOMA68 initiative is meant to be concerned with reliably producing widely-available products.

“Why pay more for ideological purity?”

Firstly, words like “ideology”, “religion”, “church”, and so on, might be useful terms for trolls to poison and polarise any discussion, but does anyone not see that expecting suspiciously cheap, increasingly capable products to be delivered in an almost conveyor belt fashion is itself subscribing to an ideology? One that mandates that resources should be procured at the minimum cost and processed and assembled at the minimum cost, preferably without knowing too much about the human rights abuses at each step. Where everybody involved is threatened that at any time their role may be taken over by someone offering the same thing for less. And where a culture of exploitation towards those doing the work grows, perpetuating increasing wealth inequality because those offering the services in question will just lean harder on the workers to meet their cost target (while they skim off “their share” for having facilitated the deal). Meanwhile, no-one buying the product wants to know “how the sausage is made”. That sounds like an ideology to me: one of neoliberalism combined with feigned ignorance of the damage it does.

Anyway, people pay for more sustainable, more ethical products all the time. While the wilfully ignorant may jeer that they could just buy what they regard as the same thing for less (usually being unaware of factors like quality, never mind how these things get made), more sensible people see that the extra they pay provides the basis for a fairer, better society and higher-quality goods.

“There’s no point to such modularity!”

People argue this alongside the assertion that systems are easy to upgrade and that they can independently upgrade the RAM and CPU in their desktop tower system or whatever, although they usually start off by talking about laptops, albeit clearly not the kind of “welded shut” laptops that they or maybe others would apparently prefer to buy (see above). But systems are getting harder to upgrade, particularly portable systems like laptops, tablets and smartphones (with the Fairphone 2 being a rare exception in that it might be upgradeable), and even upgradeable systems are not typically upgraded by most end-users: they may only manage to do so by enlisting the help of more knowledgeable relatives and friends.

I use a 32-bit system that is over 11 years old. It could have more RAM, and I could do the job of upgrading it, but guess how much I would be upgrading it to: 2GB, which is as much as is supported by the two prototyped 32-bit architecture EOMA68 computer card designs (A20 and jz4775). Only certain 32-bit systems actually support more RAM, mostly because it requires the use of relatively exotic architectural features that a lot of software doesn’t support. As for the CPU, there is no sensible upgrade path even if I were sure that I could remove the CPU without causing damage to it or the board. Now, 64-bit systems might offer more options, and in upgradeable desktop systems more RAM might be added, but it still relies on what the chipset was designed to support. Some chipsets may limit upgrades based on either manufacturer pessimism (no-one will be able to afford larger amounts in the near future) or manufacturer cynicism (no-one will upgrade to our next product if they can keep adding more RAM).

EOMA68 makes a trade-off in order to support the upgrading of devices in a way that should be accessible to people who are not experts: no-one should be dealing with circuit boards and memory modules. People who think hardware engineering has nothing to do with compromises should get out of their armchair, join one of the big corporations already doing hardware, and show them how it is done, because I am sure those companies would appreciate such market-dominating insight.

An EOMA68 computer card with the micro-desktop device

An EOMA68 computer card with the micro-desktop device (courtesy Rhombus Tech/Crowd Supply)

Back to the Campaign

But really, the criticisms are not the things to focus on here. Maybe EOMA68 was interesting to you and then you read one of these criticisms somewhere and started to wonder about whether it is a good idea to support the initiative after all. Now, at least you have another perspective on them, albeit from someone who actually believes that EOMA68 provides an interesting and credible way forward for sustainable technology products.

Certainly, this campaign is not for everyone. Above all else it is crowd-funding: you are pledging for rewards, not buying things, even though the aim is to actually manufacture and ship real products to those who have pledged for them. Some crowd-funding exercises never deliver anything because they underestimate the difficulties of doing so, leaving a horde of angry backers with nothing to show for their money. I cannot make any guarantees here, but given that prototypes have been made over the last few years, that videos have been produced with a charming informality that would surely leave no-one seriously believing that “the whole thing was just rendered” (which tends to happen a lot with other campaigns), and given the initiative founder’s stubbornness not to give up, I have a lot of confidence in him to make good on his plans.

(A lot of campaigns underestimate the logistics and, having managed to deliver a complicated technological product, fail to manage the apparently simple matter of “postage”, infuriating their backers by being unable to get packages sent to all the different countries involved. My impression is that logistics expertise is what Crowd Supply brings to the table, and it really surprises me that established freight and logistics companies aren’t dipping their toes in the crowd-funding market themselves, either by running their own services or taking ownership stakes and integrating their services into such businesses.)

Personally, I think that $65 for a computer card that has more RAM than most single-board computers is actually a reasonable price, but I can understand that some of the other rewards seem a bit more expensive than one might have hoped. But these are effectively “limited edition” prices, and the aim of the exercise is not to merely make some things, get them for the clique of backers, and then never do anything like this ever again. Rather, the aim is to demonstrate that such products can be delivered, develop a market for them where the quantities involved will be greater, and thus be able to increase the competitiveness of the pricing, iterating on this hopefully successful formula. People are backing a standard and a concept, with the benefit of actually getting some hardware in return.

Interestingly, one priority of the campaign has been to seek the FSF’s “Respects Your Freedom” (RYF) endorsement. There is already plenty of hardware that employs proprietary software at some level, leaving the user to merely wonder what some “binary blob” actually does. Here, with one of the software distributions for the computer card, all of the software used on the card, together with the policies of the GNU/Linux distribution concerned – a surprisingly awkward obstacle – will need to meet the FSF’s criteria. Thus, the “Libre Tea” card will hopefully be one of the first general purpose computing solutions to actually be designed for RYF certification and to obtain it, too.

The campaign runs until August 26th and has over a thousand pledges. If nothing else, go and take a look at the details and the updates, with the latter providing lots of background including video evidence of how the software offerings have evolved over the course of the campaign. And even if it’s not for you, maybe people you know might appreciate hearing about it, even if only to follow the action and to see how crowd-funding campaigns are done.

Other people’s thoughts on “Freedom and security issues on x86 platforms”

Saturday, July 2nd, 2016

A couple of months ago, we had a brief discussion on the FSFE discussion mailing list about the topic of “Uncorrectable freedom and security issues on x86 platforms“, but it just came to my attention that a bunch of other people were discussing our discussion, too. Hacker News is, of course, so very “meta”, but fortunately they got onto discussing the actual topic as well.

The initial message in the original discussion advocated adopting the Power computing architecture as a primary hardware platform for Free Software. Now, the Hacker News participants were surprised that nobody mentioned SPARC and yet I was sure that SPARC did get mentioned in our discussion. A brief search doesn’t find any mention of it, however, which is embarrassing to admit, given that I do know about things like LEON and even used SPARC-based hardware for many years. (The Sun 4 workstations at my university had SPARC CPUs, for instance.)

I suppose the disconnect here involves price, availability and performance of readily-available products. Certainly, a free hardware SPARC implementation can be synthesised for an FPGA, but the previous discussion covered things like RISC-V in a similar fashion: it’s nice to have the ability to deploy a “soft processor” in an FPGA, but customers of computing products usually expect “hard” CPU performance. And you can at least buy ARM and MIPS CPUs with decent-enough performance which, even if they aren’t free hardware implementations, support Free Software from the very bottom of the software stack.

The participants in the meta-discussion wondered why MIPS became so popular given that there are licensing fees involved, whereas Sun made certain SPARC designs available under the GPL, and given that the SPARC architecture is supposedly royalty-free. For some manufacturers, this is asking the wrong question: they did not seek to license the patent-encumbered versions of the MIPS architecture; like the OpenRISC initiative, they merely implemented the unencumbered versions instead.

It would be nice to have a high-performance, inexpensive, readily-available free hardware CPU for use in free hardware designs. And of course those designs would support Free Software completely. But until that comes to pass, we have to work with what we can get. And indeed, for whichever architecture seems to be favoured for such a role, we also need to have usable and accessible hardware that is compatible with our favoured architecture so that we may prepare ourselves for the day it finally gets rolled out.

There might be a reason why SPARC isn’t so well supported by things like GNU/Linux distributions. Sadly, unlike various competing architectures, inexpensive SPARC products seem to be thin on the ground, and without them the efforts to maintain ports of large Free Software collections inevitably grind to a halt, but I would be only too happy for someone to point me to a source of products that I may have overlooked. There is no inherent reason why SPARC couldn’t be a viable platform for Free Software, regardless of what people may have to say about register windows!

Making Python Programs Faster with Shedskin

Tuesday, February 2nd, 2016

A few months ago, I had the opportunity to combine two of my interests: retrocomputing and Python programming. The latter needs little additional explanation, but the former perhaps requires a few more words. Retrocomputing is the study and use of computing equipment from an earlier era in computing, where such equipment is typically no longer in use, or is no longer in widespread use. My own experiences with microcomputers began in the 1980s and are largely centred upon those manufactured by Acorn Computers, such as the Acorn Electron and BBC Microcomputer.

Some History

One of my earlier initiatives was to attempt to document and understand the functioning of the Acorn Electron’s ULA integrated circuit: the Uncommitted Logic Array employed in the computer to generate video and perform input/output tasks. When Acorn decided to make the Electron as a variant of the BBC Microcomputer, the engineers merged many of the functions performed by separate chips into a single one, for several good and not-so-good reasons:

  • To reduce system complexity: having to connect several components and make sure that they all work correctly and in time with each other can be challenging, and it is arguably best to reduce the number of things that can go wrong by just reducing the number of things involved in the first place.
  • To reduce system cost: production becomes less complicated and savings can potentially be realised by combining discrete components.
  • To deepen the organisation’s experience with integrated circuit design: this being the company that developed the ARM architecture and eventually brought the first ARM chipset (CPU, audio/video controller, memory management unit, and input/output controller) to market.
  • To make a proprietary component that others could not readily clone: this being something of an obsession in the early 1980s marketplace.

Now, the ULA has the job of reading from memory and translating what it reads into a sequence of colour values, thus generating a picture on the screen. However, it has the annoying limitation of locking the CPU out of the memory (in fact, only the RAM which resides in a certain region) while it generates each line of the displayed image. Depending on which “screen mode” is selected (determining the resolution and colour depth), it may still let the CPU access the RAM at a lower speed, or it may effectively suspend the CPU for the entire time taken to generate a single horizontal display line.

The Acorn Electron is consequently considerably slower than the BBC Microcomputer: the BBC Micro employs faster memory and lets the CPU and its own video circuitry take turns with the memory while both run at full speed, whereas the Electron uses cheaper memory as another cost-saving measure. Reviews of the Electron, while generally positive about getting BBC Micro features for less, tended to note this performance degradation with some disappointment.

The Acorn Electron ULA

The Acorn Electron ULA (with socket cover/heatsink removed)

Some Background

Normally, the matter of the CPU stalling for much of its time would be considered a disadvantage, but my brother and I were having a discussion about software running from ROM (or, in fact, in the area of memory not affected by the ULA), with the pitfalls of such software needing to access the RAM and potentially becoming stalled as the CPU finds itself waiting for the ULA to do its work. Somehow the notion arose that the ULA effectively provides a kind of synchronisation mechanism for software needing to run in the period between display lines.

So, a program can read instructions from ROM and then, when it needs to know that the display is not being updated, it can attempt to access RAM. Whether or not the program stalls remains unknown to the program itself, but when it gets the result of accessing the RAM – perhaps immediately, perhaps after a few microseconds – it can be certain that the ULA is not accessing the RAM because it just did so itself.

A screen image showing the update region

An image showing the display update region (the "test card" picture) when the ULA accesses memory, with the black border indicating the region (or time period) during which the CPU may access the lower region of the Electron's memory.

(See the BBC Test Cards for the origin of the above picture.)

One might wonder what kind of program would benefit from synchronising itself to the display line periods. Well, one thing that tended to happen back in the microcomputing era was the trick of reprogramming the display palette – the selection of colours shown on screen – so that a greater number of colours can be displayed simultaneously on the screen than would normally be the case for that particular display configuration. For example, a screen mode normally offering only four colours could instead offer the full range – eight “proper” colours on an Electron – if it switched the palette during screen updates. Even a screen mode offering only two colours could offer eight by employing palette switching.

And with a certain amount of experimentation, a working solution was eventually delivered (with only limited input needed from my side). By running software in a ROM (or from RAM mapped into the same area of memory as a ROM), it became possible to reliably change the palette on a line-by-line basis, bridging the gap between a medium-resolution four-colour mode and a hypothetical eight-colour version that would only become available on Acorn’s ARM-based microcomputers. Although only four colours can be used per display line, each of the 256 display lines can employ four-colour combinations of the full eight colours, not quite making an eight-colour mode – which would, of course, allow eight colours on every display line – but still permitting graphical output much closer to a true eight-colour mode than can be contemplated with a restrictive four-colour mode.

Rainbow lorikeets on a lawn

Rainbow lorikeets on a lawn (left: 4 colours from 8 per display line; right: 8 colours per display line)

Palette Optimisation

With the trick in place to switch the palette on demand, all that remained was to make a program that could take an input image and optimise the colours so that…

  1. Only the eight permitted colours (black, white, primary and secondary colours) are used in the image.
  2. No pixel row (display line) employs more than four different colours.

Expectations were rather low at first. First of all, in this era of bountiful quantities of fresh imagery, most pictures are photographs with plenty of colours, and so the most readily available images first need to be reduced in both resolution and colour depth. And once this is done, the colours must be optimised to meet the second of the above criteria. So, initially, very simple techniques were employed to do both of these things.
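
For illustration, a minimal sketch of such simple techniques might look something like the following, assuming the pixel data is available as rows of (r, g, b) tuples; the function names and the strategy of keeping each row’s four most frequent colours are my own assumptions here, not a description of the actual program:

from collections import Counter

# The eight permitted colours: black, white, primary and secondary colours.
PERMITTED = [(r, g, b) for r in (0, 255) for g in (0, 255) for b in (0, 255)]

def distance(c1, c2):
    # Squared distance between two RGB triplets.
    r1, g1, b1 = c1
    r2, g2, b2 = c2
    return (r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2

def nearest(pixel, colours):
    # Find the closest of the given colours to the pixel.
    return min(colours, key=lambda c: distance(c, pixel))

def reduce_row(row):
    # Map every pixel to a permitted colour, then restrict the row to its
    # four most frequent colours (the second criterion above).
    permitted_row = [nearest(p, PERMITTED) for p in row]
    chosen = [c for c, n in Counter(permitted_row).most_common(4)]
    return [nearest(p, chosen) for p in permitted_row]

def optimise(rows):
    # Apply the two criteria row by row.
    return [reduce_row(row) for row in rows]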

Later, after some discussions, it appeared that integrating the two processing activities and, crucially, applying basic dithering and error propagation techniques, made it possible to represent complicated “deep-colour” images within the limited display representation.

Magnified details of the two representations

Closer examination of the above images reveals how the algorithm attempts to spread the responsibility of representing colours across rows, being restricted to the choice of four colours (in the left image) to produce the appropriate tones that are more readily encoded using eight colours (in the right image).

Tuning the Implementation

To perform the optimisation process, I wrote a program which took an input image, rotated and scaled it to the target resolution, and processed the colours by inspecting the pixel data on each horizontal line (or row) of the image, calculating the appropriate four-colour combination for a line and generating pixel data appropriate for this restricted palette. Since I am probably most comfortable with Python, and since Python also has various convenient image-processing libraries, I found myself developing a short Python program to do the work.
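
A heavily simplified sketch of that pipeline might look like the following, with the Python Imaging Library handling the loading, rotation and scaling, and with the per-row colour work delegated to something like the optimise function sketched earlier; the convert function, the rotation rule and the 320 by 256 target resolution are illustrative assumptions rather than details of the actual program:

from PIL import Image

WIDTH, HEIGHT = 320, 256  # assumed target resolution (256 display lines)

def convert(input_filename, output_filename):
    image = Image.open(input_filename).convert("RGB")

    # Rotate portrait images so that they better fill the landscape target.
    w, h = image.size
    if h > w:
        image = image.rotate(90, expand=True)

    # Scale to the target resolution.
    image = image.resize((WIDTH, HEIGHT))

    # Obtain the pixel data as rows of (r, g, b) tuples and optimise each row.
    pixels = list(image.getdata())
    rows = [pixels[y * WIDTH:(y + 1) * WIDTH] for y in range(HEIGHT)]
    rows = optimise(rows)  # per-row four-colour reduction, as sketched above

    # Write the processed data back out for inspection.
    result = Image.new("RGB", (WIDTH, HEIGHT))
    result.putdata([p for row in rows for p in row])
    result.save(output_filename)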

Now, Python doesn’t usually exhibit the highest performance, particularly for tasks such as this, and as the experimentation with different approaches started to lessen, with the most rewarding approaches making themselves evident, and with the temptation to convert lots of images just to see the results, my attention turned to speeding up the program. Since I had been involved with packaging the Shedskin Python-to-C++ compiler for Debian, and thus the package was right there for me to use, it made sense to give it a try on this code.

At this point, the program was taking around 40 seconds to convert a 16 megapixel photographic image into the appropriate output representation. Despite using the Python Imaging Library to do the rotation and scaling, visiting each pixel in a Python program and doing some simple statistical and arithmetic operations was taking up rather a lot of time. In the past, I have used various “numeric” Python extensions – usually the ones supported by pygame – but things have moved on somewhat incoherently since then, and I also wanted to keep my code straightforward and readable, which is often something that is lost when making code “numeric”.

Time for Shedskin

I have used Shedskin before, and the first thing to think about when using it is how it will interact with non-Python code. Shedskin takes Python code and generates C++ which must then be compiled. If a whole program is translated to C++, the resulting executable can be run with no further work necessary. However, the program of interest here uses libraries that are actually implemented in C and are delivered as shared libraries that are loaded by CPython (the Python virtual machine implementation written in the C programming language). Shedskin cannot generally translate such a program in its entirety.

From the earliest days of Python, it was (and has remained) a common practice to first write library code in Python, desire better performance, and to then rewrite much of that code in the C programming language against the Python/C API (or CPython API) as an “extension module”, which is what these problematic shared libraries are. Such libraries act as part of the CPython virtual machine, more or less, and with suitable implementation choices made for performance, everything using such libraries will run much more quickly. Unfortunately, but understandably, Shedskin doesn’t seek to interact closely with the CPython implementation: it produces C++ code that works with its own runtime libraries.

So, instead of working with a single program file, I split my program up into a main program which deals with CPython extensions such as the Python Imaging Library, whose JPEG and PNG manipulation facilities are too convenient to abandon, along with a library module that does all the computation for this particular application. The library would provide its own image abstraction for pixel-level accesses, but be given the pixel data obtained by the main program, returning the processed data to the main program for saving to a file. By only compiling the library, Shedskin can produce a standalone library file containing hopefully high-performance implementations of the original Python code.

But how can such a library constructed by Shedskin be used? Would we now need to somehow translate the main program into C or C++ in order to be able to use it? Fortunately, but slightly confusingly, Shedskin can also generate the necessary CPython API wrapper around the translated code, making it possible for CPython to load this newly-created library after all.

(One might wonder how this is possible, but if one considers that arbitrary C or C++ code can be wrapped using the CPython API, with the values being sent in and out of a library only being converted at the interface, Shedskin has the luxury of generating something that lives by its own rules internally, with the wrapper doing the necessary “marshalling” of the values as they go in and come out. Such a wrapped library might be frowned upon in the Python world, not being a sophisticated extension module that takes full advantage of the CPython API, but such libraries were, and probably still are, the “bread and butter” of Python’s access to a wide array of tools and technologies.)
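
To make the division of labour concrete, here is a minimal sketch of what the main program might look like, using the module naming settled on later; the process function and its signature are hypothetical placeholders for whatever interface the library actually exposes:

# optimiser.py: the main program, run by CPython, which keeps hold of the
# Python Imaging Library for file handling and hands flat pixel data to the
# compiled library for the actual computation.

from PIL import Image
import optimiserlib  # the Shedskin-compiled extension module

def convert(input_filename, output_filename, width=320, height=256):
    image = Image.open(input_filename).convert("RGB").resize((width, height))
    data = list(image.getdata())                      # flat list of (r, g, b) tuples
    data = optimiserlib.process(data, width, height)  # hypothetical entry point
    result = Image.new("RGB", (width, height))
    result.putdata(data)
    result.save(output_filename)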

Knowing that this is a reasonable approach, I experienced a moment of excessive ambition. I tried to take the newly-broken-out library and compile it using the appropriate option:

shedskin -e optimiser.py

This took quite some time, and things did not go well…

*** SHED SKIN Python-to-C++ Compiler 0.9.2 ***
Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)

[analyzing types..]
****************************88%
*WARNING* reached maximum number of iterations
********************************100%
[generating c++ code..]
*WARNING* 'get_combinations' function not exported (cannot convert argument 'c')
*WARNING* 'get_colours' function not exported (cannot convert return value)
*WARNING* 'balance' function not exported (cannot convert argument 'd')
*WARNING* optimiser.py: expression has dynamic (sub)type: {None, int, tuple}
*WARNING* optimiser.py: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiser.py: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiser.py: expression has dynamic (sub)type: {int, tuple}
*WARNING* optimiser.py: variable 'data' has dynamic (sub)type: {None, int, tuple}
*WARNING* optimiser.py: variable (class SimpleImage, 'data') has dynamic (sub)type: {None, int, tuple}

And then there were many more lines of a more specific nature:

*WARNING* optimiser.py:41: function distance not called!
*WARNING* optimiser.py:50: expression has dynamic (sub)type: {None, int, tuple}
*WARNING* optimiser.py:53: expression has dynamic (sub)type: {None, int, tuple}
*WARNING* optimiser.py:57: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiser.py:57: expression has dynamic (sub)type: {int, tuple}

And so on. Now, if you are thinking that this will probably not end well, you would be right. Running make gives plenty of errors looking as scary as this…

optimiser.cpp: In function '__shedskin__::list<__shedskin__::tuple2<__shedskin__::tuple2<int, int>*, double>*>* __optimiser__::list_comp_0(__shedskin__::pyiter<double>*)':
optimiser.cpp:76:17: error: base operand of '->' is not a pointer
optimiser.cpp:77:21: error: base operand of '->' is not a pointer
optimiser.cpp:78:68: error: invalid conversion from '__shedskin__::tuple2<int, int>*' to 'int' [-fpermissive]
In file included from /usr/share/shedskin/lib/builtin.hpp:1204:0,
                 from optimiser.cpp:1:
/usr/share/shedskin/lib/builtin/tuple.hpp:211:28: error:   initializing argument 2 of '__shedskin__::tuple2<A, B>::tuple2(int, A, B) [with A = int; B = double]' [-fpermissive]
optimiser.cpp:78:70: error: no matching function for call to '__shedskin__::list<__shedskin__::tuple2<__shedskin__::tuple2<int, int>*, double>*>::append(__shedskin__::tuple2<int, double>*)'
optimiser.cpp:78:70: note: candidate is:
In file included from /usr/share/shedskin/lib/builtin.hpp:1203:0,
                 from optimiser.cpp:1:
/usr/share/shedskin/lib/builtin/list.hpp:96:25: note: void* __shedskin__::list<T>::append(T) [with T = __shedskin__::tuple2<__shedskin__::tuple2<int, int>*, double>*]
/usr/share/shedskin/lib/builtin/list.hpp:96:25: note:   no known conversion for argument 1 from '__shedskin__::tuple2<int, double>*' to '__shedskin__::tuple2<__shedskin__::tuple2<int, int>*, double>*'

Of course, it would be unfair to expect this to have worked: Shedskin did indeed give us plenty of warnings! But we should at least try and understand such warnings if we are to make progress. After a few more iterations of this bold strategy, I realised that I had started out in the wrong fashion, presenting Shedskin with code that is too dynamic (and ambiguous) for it to reasonably infer sensible types and produce a compilable C++ representation.

First Things First

I started out by reintroducing some necessary changes gradually. First of all, I made a branch of my code committing the special image abstraction to be used in any future Shedskin-compiled version. Meanwhile, I looked at some things that might generally be performance-degrading in Python, and which might also help Shedskin deduce the program types more readily.

One thing that I had done in the initial implementation was to freely use sequences to represent colour triplets: the red, green and blue values. I was using code like this:

def invert(srgb):
    return tuple(map(lambda x: 1.0 - x, srgb))

This might be elegant – there’s no separate treatment of each element in the triplet – but there is likely to be more overhead in treating the triplet as a sequence, iterating over it, and so on. Moreover, Shedskin is likely to see objects appearing from a collection without any idea of what was put into that collection to start with. So, more mundane code was introduced in its place:

def invert(srgb):
    r, g, b = srgb
    return 1.0 - r, 1.0 - g, 1.0 - b

Such changes reduced the processing time from around 35 seconds to around 25 seconds. After various other performance modifications, such as avoiding the repeated computation of common values, this became around 18 to 19 seconds. Interestingly, merging such modifications into the branch with the special image abstraction reduced the running time to around 16 to 17 seconds. But at this point, obvious optimisation opportunities were more or less exhausted.

It then became time for another attempt to present this to Shedskin. This time, instead of calling the main program main.py and the library optimiser.py, which is not particularly intuitive, I retained the name of the main program as optimiser.py and used the traditional optimiserlib.py naming for the library:

shedskin -e optimiserlib.py

This completed without any warnings, and running make produced a library file, optimiserlib.so, that Python recognises as an extension module. Running the program produced a palette-optimised image in just under 9 seconds!

Making the Difference

So, what were the differences that made Shedskin accept the library code? Since the splitting up of the code into two files makes comparisons between versions awkward, we can first compare the version of the program before our preparatory work with the version of the program that fed into our Shedskin version. There are four areas of change:

  • The introduction of that image abstraction class – SimpleImage – to be used to manipulate images instead of using Python Imaging Library objects. (In fact, this is only used initially as a container for the raw pixel data, with such data being passed to library functions, as described below.)
  • Modifications to access the different components of colour triplets explicitly.
  • Some “common sense” performance modifications, avoiding the repeated computation of things that always produce the same result anyway.
  • Some local name adjustments.

This final point is something that the Shedskin documentation mentions prominently, but it can be easy to forget. Consider the following code (in the balance function of the library module):

dd = dict([(value, f) for f, value in d])

Originally, this looked like this:

d = dict([(value, f) for f, value in d])

This earlier version just reverses a dictionary mapping and assigns the result to the name of the original dictionary. Although this seems harmless enough, Shedskin’s analysis would be complicated substantially by having to deal with a name being reassigned and potentially being associated with completely different kinds of objects, even if in this case we are only dealing with dictionaries. (Even if the same type were only ever involved, as we see here, it would also prevent any analysis being done on the nature of the keys and values in the dictionaries.)

We can see the effect of this by trying to compile the functioning version with the earlier naming scheme reintroduced. First, we see a warning like this:

*WARNING* 'balance' function not exported (cannot convert argument 'd')

Since Shedskin propagates type information around the program, this leads to other problems:

*WARNING* optimiserlib.py: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py: expression has dynamic (sub)type: {int, tuple}
*WARNING* optimiserlib.py: variable (function (function balance, 'list_comp_0'), 'value') has dynamic (sub)type: {int, tuple}
*WARNING* optimiserlib.py: variable (function (function balance, 'list_comp_1'), 'f') has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py: variable (function (function balance, 'list_comp_1'), 'value') has dynamic (sub)type: {int, tuple}

And later in the messages, the more specific errors indicate the problem and its consequences:

*WARNING* optimiserlib.py:100: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py:100: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py:100: expression has dynamic (sub)type: {int, tuple}
*WARNING* optimiserlib.py:102: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py:102: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py:103: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py:103: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py:104: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py:104: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py:105: class 'list' has no method 'items'
*WARNING* optimiserlib.py:105: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py:105: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py:105: expression has dynamic (sub)type: {int, tuple}

And so on. Such warnings, which will cause errors if one tries to compile the result, can seem very intimidating, but within all these details the cause should be identifiable.

But another important aspect of the successful translation of the library module should not be forgotten: it is the matter of choosing the right initial functionality to give to Shedskin. Instead of trying to compile everything, it makes sense to concentrate on things like functions which are fairly self-contained, which do not call potentially vast regions of other program functionality, and which are invoked often, especially in loops that take a long time. Migrating a small piece of functionality at a time helps to keep the troubleshooting activity manageable.

By using the SimpleImage class as a convenient staging post for pixel data, simple library functions could be migrated first, and the library could be kept ignorant of the abstractions being maintained in the main program. Later on, as we shall see below, such abstractions and functions that require them could then be migrated themselves.
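
For illustration, a minimal sketch of such a staging abstraction, written so that Shedskin can infer a single consistent element type, might look like this, with the method names being assumptions rather than those of the actual class:

class SimpleImage:

    "A plain container for pixel data held as a flat list of (r, g, b) tuples."

    def __init__(self, data, size):
        width, height = size
        self.data = data
        self.width = width
        self.height = height

    def get_pixel(self, x, y):
        return self.data[y * self.width + x]

    def put_pixel(self, x, y, value):
        self.data[y * self.width + x] = value

    def get_row(self, y):
        return self.data[y * self.width:(y + 1) * self.width]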

Broadening the Scope

With some core functionality migrated to a compilable extension module, it becomes possible to give other things the Shedskin treatment. At the start of the second attempt to get Shedskin to compile the code, quite a few functions were still being run by the CPython virtual machine, notably various image operations. With the functions called by these operations already migrated, it becomes possible to move each of these operations over into the extension module. The expectation here is that since these functions can now call each other without the overhead of the CPython API, as well as avoiding running code in the virtual machine, further performance improvements will occur.

And since Shedskin’s own runtime library supports certain Python standard library modules in accelerated form, it becomes interesting to migrate operations that employ such module functionality. For example, the get_combinations function can be moved to take advantage of Shedskin’s itertools implementation. And in addition to just moving functions, classes can also be compiled by Shedskin and potentially accessed more quickly, while remaining accessible to normal Python code that hasn’t been compiled.
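
For illustration, assuming that get_combinations enumerates the candidate colour combinations for a display line, a version leaning on itertools (and therefore on Shedskin’s accelerated implementation of that module) might be as simple as the following sketch, whose signature is an assumption:

from itertools import combinations

def get_combinations(colours, n):
    # Return every n-colour combination of the given palette as a list of tuples.
    return list(combinations(colours, n))

For example, get_combinations(PERMITTED, 4), using the hypothetical list of eight permitted colours from the earlier sketch, would yield the 70 possible four-colour subsets.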

Not all of these optimisations give the boost in performance that might be hoped for, perhaps even bringing performance penalties when first introduced. But it is important to look beyond any current, small optimisation to future optimisations that the accumulation of those small optimisations will permit. Moving the SimpleImage class into the extension module permits the migration of other code, with the consequence that the program when run takes around 6 seconds instead of 9 seconds: the apparent optimisation barrier is overcome, leading to yet more gains!

Some Conclusions

It is possible to make very useful performance gains just by revisiting code and writing it in a way that suits the language implementation. Starting out with an execution time of around 40 seconds, it was possible to make the program run in closer to 15 seconds, and that might have been enough for occasional conversion of images. But by adopting Shedskin and applying it to a subset of the program’s functionality, an execution time of 9 seconds was initially obtained. This might not be quite as impressive as the earlier two-and-a-half fold reduction in time (or, of course, a two-and-a-half fold increase in throughput), but it was obtained with little additional effort, with admittedly some planning for it being incorporated into the earlier optimisation work.

Ultimately, reaching around 6 seconds of execution time means that Shedskin was indeed able to more or less match earlier performance gains, but again with little actual optimisation effort. And one can argue that merely using Shedskin helped to inform those earlier gains, anyway. Regardless of what should take the credit, the program ended up running almost seven times faster than when we started out on this journey.

Tools like Shedskin have a slightly controversial place in the Python world. People like to point out that Shedskin really only compiles a “restricted subset” of Python: one that Shedskin is able to analyse. And as the above exercise demonstrates, modifications to programs are likely to be required to take advantage of it. But the resulting programs are still Python programs, and the CPython virtual machine will still run them, just slower – three times slower in this case – than if they were compiled.

Authors of tools like Shedskin and alternative implementations of the Python language have a tough decision to make. They must either deal with the continuously changing “full-fat” version of the language, with new features being added all the time that they have to support, with the risk that they will never catch up and never be considered a “proper” implementation. Or they must impose limitations on the form of the language and the features they support, knowing that even if their software accepts programs written in Python, it will be marginalised and regarded as a distraction from “proper” Python programming. And yet, Python as delivered by CPython offers little in the way of things that Shedskin and similar tools seek to offer, so it should be no wonder that such tools have come to exist.

Instead, one might wonder why it is that the language must continue to evolve in ways that frustrate static analysis and enhancements to program performance and scalability. Indeed, my own interests in the analysis of Python code have been somewhat rekindled by this exercise, but unlike the valiant efforts of other developers (such as the author of Nuitka and his continuing quest to support the compilation of “full-fat” Python), I have no intention of using Python as my gold standard. In essence, Python is and has been a productive language to use – that is why I have used it here and many times before – but the very nature of the language should be open to question and review.

If, by discarding features, something like Python can be more predictable and better-performing, why would exploring this avenue of inquiry be a bad thing? Shedskin shows us that there are benefits in doing so, and I hope that this article has also shown some practical techniques for using and understanding the tool. And maybe this will encourage me and others to make some more progress with our own Python program analysis efforts as well, if only to offer alternatives that seek to deliver on some of the unrealised promise and potential that Python seemed to offer back when we first discovered it, so many years ago now.

Oslo architecture (4 from 8 colours with scaled original)

An architectural picture from Oslo with the left version using 4 colours from 8 on every display line, the right version being a scaled version of the original.

Testing Times for Free Software and Open Hardware

Tuesday, January 12th, 2016

The last few months haven’t been too kind to Free Software and open hardware initiatives in a number of ways. Here, in a shorter form than one might usually expect from me, are some problematic developments on topics that I may have covered in the past year.

Software Freedom Undervalued

About a couple of months ago, the Software Freedom Conservancy started a fund-raising campaign after it became apparent that companies could not be relied upon to support the organisation’s activities. Since the start of the campaign, many individuals have stepped up and pledged financial support of their own, which is very generous of them, as is the support of enlightened organisations that have offered to match individual contributions.

Sadly, such generosity seems not to be shared by many of the largest companies making money from Free Software and from Linux in particular, and thus from the non-financial contributions that make projects like Linux viable in the first place, with many of those even coming from those same generous individuals who have supported the Conservancy financially. And let us consider for a moment why one prominent umbrella organisation’s members might not want to enforce the GPL, especially given that some of them have been successfully prosecuted for violating that licence, in relation to various Free Software projects, in the past.

The Proprietary Instincts of the BBC

The BBC Micro Bit was a topic covered in the last year, when I indicated a degree of caution about the mistakes of the past being repeated needlessly. And indeed, for some time, everything was being done behind the curtain of a non-disclosure agreement (NDA), meaning that very little information was being made available about the device and accompanying materials, and thus very little could be done by the average member of the public to prepare for the availability of the device, let alone develop their own materials, software, accessories or anything else for it.

Since then, a degree of secrecy has been eliminated, and efforts have been made to get the embedded variant of Python known as Micropython working on the board. However, certain parts of that work still appear to be encumbered by NDA, arguably making the effort of developing Python-related materials something of a social networking exercise. Meanwhile, notorious industry monopolist, Microsoft, somehow managed to muscle in on the initiative and take control of the principally-supported method of developing software with the device. I guess people at the BBC and their friends in politics and business don’t always learn from the mistakes of the past, particularly as they spend other people’s money.

The Walled Garden Party’s Hangover for Free Software Development

Just over twelve months ago, I made some observations about the Python core development group’s attraction to GitHub. It seems that the infatuation with the enthusiastic masses and their inevitable unleashing on Python assets, with the expectation of stimulating an exponential upturn in development activity, will now be gratified through a migration of various Python infrastructure components to the proprietary and centralised service that GitHub offers. (I have my doubts as to whether CPython contribution barriers are really the cause of Python’s current malaise, despite the usual clamour for Git and the associated “network effects” amongst a community of self-proclaimed version control wizards whose powers somehow don’t extend to mastering simple workflows with other tools.)

Anatoly Techtonik makes some interesting points, which will presumably go unheard because those involved have all decided not to listen to him any more. One of the more disturbing ones is that the “comparison shopping” mentality, where Free Software developers abandon their colleagues writing various tools and systems in favour of random corporations offering proprietary stuff at no cost, may well result in the Free Software solutions in such areas becoming seen as uncompetitive and unattractive. What those making such foolish decisions fail to realise is that their own projects can easily get the same treatment, if nobody bothers to see beyond the end of their own nose.

The result of all this is less funding and fewer resources for Free Software projects, with potentially fewer contributions, too, as the attraction of supporting “losing” solutions starts to fade. Community-oriented Free Software is arguably grossly underfunded as it is: we don’t really need other Free Software developers abandoning or undermining their colleagues while ridiculing those colleagues’ “ideological purity“. And, of course, volunteer effort will undoubtedly be burned up in the needless migration to the proprietary solution, setting everyone up for another costly transition down the road, which experience indicates is always more work than anyone anticipated (if they even bothered to think ahead at all).

PayPal: Doesn’t Pay, Not Your Pal

It has been a long time since I wrote about the Neo900 project. Things were looking promising: necessary components had been secured, and everyone was just waiting for Nikolaus to finish his work with the Pyra handheld console. And then we learned that PayPal had decided to hold a significant amount of money as a form of “security”, thus cutting off a vital source of funds for actually doing the work. Apparently, PayPal have a habit of doing this kind of thing, on one reported occasion even taking the opportunity to then offer loans to those people they deliberately put in such a difficult position.

If you supported the Neo900 project and pledged funds via PayPal, you need to tell PayPal to actually pay the project. You know: like the verb in their company name. Otherwise, in the worst case, you may not only not get a Neo900 and not see it developed to completion, but you will also have loaned your money to a large corporation for a substantial period and earned no interest on that involuntary loan, perhaps even incurring fees for the privilege. (So, please see the “How to fix it” section of the relevant article.)

Maybe in 2016, people will become a lot clearer about who their real friends are. Let us hope so!

Supporting the Software Freedom Conservancy

Saturday, December 5th, 2015

Daniel Pocock asks whether supporting the Software Freedom Conservancy is the right thing to do, given the recent announcement of a fund-raising drive inviting individuals to sustain the organisation’s activities. The short answer is “yes it is”, but the question and the longer answer are still worth thinking about.

An Overview

A certain focus has been placed on the Conservancy’s licensing compliance activities, which are valuable for a number of reasons that we shall consider in a moment, but let us also consider the other work done by the organisation:

Although other organisations exist to look after Free Software projects in certain ways, many only offer technical facilities to those projects, whereas others rely on copyright assignments or comparable instruments in order to act as stewards for those projects. Unusually, the Conservancy instead offers a framework where projects may delegate responsibility for activities that would otherwise take time away from the vital work of developing software, rather than assuming all responsibility and leadership for a project as a starting point for cooperation.

So, by working with the Conservancy, developers may retain their project’s autonomy while being able to get help from the Conservancy when they need it. Indeed, the merits of the Conservancy’s offerings complement the offerings of other organisations in such a way that Debian has chosen to work more closely with the Conservancy to safeguard the interests of those developers making their work available via the Debian software distribution.

The image of member project logos gives a representative indication of the organisation’s influence and importance in the Free Software world today. Many of these projects provide vital infrastructure and tools that Free Software users and developers rely upon every single day.

Compliance and Enforcement

But what about those licensing compliance or licensing enforcement activities undertaken by the Conservancy? Some people might wonder whether there is a real need to ensure that individuals and organisations adhere to Free Software licences, and if they do not, whether it is worthwhile taking those parties to task on such matters. Others, arguably with their own agenda, may even dislike the very idea of bringing anyone to account for not respecting the Free Software licensing of various works.

First of all, we must ask whether Free Software licences are being violated. The sad answer to this is “on an industrial scale“, given the glut of products being manufactured on the back of Free Software and then sold without even notifying customers of their rights. When companies are approached about the source code for the copyleft-licensed software provided in their products, it is by all accounts a rare occurrence to be directed to a well-managed repository of code that can be built, installed and used on the product. If one is lucky, a hastily-prepared bundle of sources might be thrown over the wall, leaving the enquirer with the task of verifying that it really does generate the originally-shipped software.

And beyond those more favourable outcomes is the case of the mystery “original design manufacturer”, who was merely passing on stuff concocted by the platform vendor, with everybody else insisting that they hardly touched anything and that the software is someone else’s responsibility. Or the manufacturer who declares that they are not affected by the software licensing and that anything they find on the Internet is presumably fair game to use as they please.

Now, some people would advise Free Software developers not to expect too much after contributing anything to a project. Such people would probably also advise developers to use permissive licences: that way, they won’t build up any expectations around what people might do with their work, nor hold out any hopes that others might benefit from seeing the source code down the line.

Certainly, it rather suits some of those people to cultivate the notion that getting one’s code out there into widespread use should be the principal reward for a Free Software developer, not because it actually encourages generosity or delivers a sense of satisfaction or recognition, but because it keeps those developers in their place and discourages them from expecting anything more. Meanwhile, various companies do very nicely out of repackaging and selling such code, denying end-users any insight into – or control over – the code they end up using, and (of course) denying them the right to give away or sell such code to others themselves.

When choosing to use a copyleft licence, Free Software developers are making a valid statement: they are actively stating that anyone who receives their software should enjoy the benefits of being able to modify, install, run and redistribute it to others who would also benefit from it. This is nothing that anyone should be ashamed of, nor should it be something that people should be forced to abandon because others (for whatever reason) do not share the same goals or vision. But at the same time, it perhaps requires more attention to be paid towards those redistributing the software. If others fail to uphold the licence, there needs to be some mechanism in place to demand a remedy.

Some people are obviously never going to like the idea of licence enforcement. For a start, licence violators are not going to like it: it means that they can no longer get away with their shoddy engineering practices and turning a quick profit on code they happened to find online. It has previously been made very apparent that apologists for licence violators are likely to claim that licence enforcement will only “scare away business from open source” (being of the ideological persuasion that considers “open source” as a business productivity tool, as opposed to Free Software which is about end-user freedoms), and they also tend to advocate for more permissively-licensed software so that it will be virtually impossible for the average software outfit not to accidentally clear the resulting lower threshold for licence compliance.

But why should Free Software developers care about the convenience of blatantly profiteering, inept and often hostile companies? No-one forces those companies to use the software, and if they don’t like the licensing terms, they can always go and use something else. The problem here is firmly with the companies in question (and their apologists): they really want to use such software, but they also want to behave as if they own it, all so that they get to decide what kind of licence it might have, and all without having done the hard work of actually writing it themselves. In short, they want it all! Well, forgive the rest of us for not giving them the ingredients of a charmed existence on a silver platter!

Investments of Time and Money

Unfortunately, chasing up licence violators is costly in terms of time and money. The Conservancy actually takes a very gentle approach to seeking licensing compliance when you consider that other people accused of copyright infringement can expect hostile industry bodies working law-enforcement agencies like puppets and performing on-the-spot “audits” (not to mention the endless barrage of messaging about “piracy” aimed at individuals).

Here, Daniel gets on to something fairly important. While certain figures promote the virtues of volunteering one’s own time to write “open source” software, presumably around a day job which does not reward the average developer for writing Free Software, certain aspects of software development and distribution cannot be so easily covered by spontaneous volunteer contributions. Money is required, but that money has to come from somewhere. Again, people can be persuaded to donate their own money (alongside their own time) to help make things happen, but that money also has to come from somewhere.

Sadly, with the cultivation of the notion of the noble volunteer, together with the misguided idea that “open source” should be promoted as the cheap or free-of-charge alternative to “commercial” software, the realm of Free Software development – as far as community-centred projects, not corporate projects, are concerned – has been left chronically underfunded. And when many corporate participants prioritise their own interests, the result is a funding gap that leaves vital projects undone or unfinished, and a more general sustainability problem around how such projects may be started, staffed and supported.

Lately, I have read a few articles about people burning out, perhaps because they took on too much work, and perhaps because they believed that their “marketable skills” would be enhanced by a heavy portfolio of volunteer responsibilities, making them attractive to potential employers. Again, the interests of profit-making businesses are put before the needs or values of the individual, with the individual even feeling obliged to make this so. Indeed, there are commercial interests which gain from Free Software remaining perpetually underfunded: proprietary software vendors can portray Free Software solutions as being less capable and somehow worth less. That results in Free Software projects, whose offerings would be improved and more competitive with more available revenue, actually getting less and less funding, interest and support over time.

We should not be pandering to the interests of those who are effectively impoverishing us, degrading our quality of life, or forcing us to choose between the things we believe in and the means to be able to live a decent-enough life. Quite how we can develop a sustainable stream of funding for projects that would benefit everybody, along with forms of organisation where such work may actually be undertaken, is a topic for another time. However, one way of stopping the exploitation of developers is to uphold the licences through which those developers have shared their contributions, and that requires us to realise that such enforcement efforts also need ongoing funding to become viable and to remain so.

So, of course, I believe that supporting the Software Freedom Conservancy is the right thing to do. And beyond the good work that is done by that organisation, sustained by what is effectively an investment in the continued viability of Free Software in a hostile world, I hope that people will gradually realise that investment is also needed more generally to sustain the creation and maintenance of Free Software.

imip-agent: Integrating Calendaring with E-Mail

Tuesday, November 17th, 2015

Longer ago than I had realised until now, I wrote an article about my ongoing exploration of groupware and, specifically, calendaring. As I noted in that article, I felt that a broader range of options might be needed for those wishing to expand their use of communications technologies beyond plain e-mail and into the structured exchange of other kinds of information, whilst retaining and building upon that e-mail infrastructure.

And I noted that, more often than not, people wanting to increase their ambitions in this regard are confronted with the prospect of abandoning what they already use successfully, instead being obliged to adopt a complete package of technologies, some of which they may not even need. While proprietary software and service vendors might pursue such strategies of persuasion – getting the big sale or the big contract – it is baffling that Free Software projects might also put potential users on the spot in the same way. After all, Free Software is very much about choice and control.

So, as I spelled out in that previous article, there may be some mileage in trying to offer extensions to existing infrastructure so that people can increase their communications capabilities whilst retaining the technologies they already know. And in some depth (and at some length), I described what a mail-centred calendaring solution might need to provide in order to address most people’s needs. Finally, I promised to make my own efforts available in this area so that anyone remotely interested in the topic might get some benefit from it.

Last month, I started a very brief exchange on a Debian- and groupware-related mailing list about such matters, just to see what people interested in groupware projects might think, also attempting to find out what they use for calendaring themselves. (Unfortunately, there don’t seem to be many non-product-specific, public and open places to discuss matters such as this one. Search mail software lists for calendaring discussions and you may even get to see hostility towards anyone mentioning groupware.) Ultimately, to keep the discussion concrete, I decided to announce informally what I have been working on.

Introducing imip-agent

Calendaring and distributed scheduling can be achieved over e-mail using the iMIP standard. My work relies on this standard to function, providing programs that are integrated into mail transfer agents (MTAs) and act as calendaring agents. Thus, I decided to call the project imip-agent.
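
To make this a little more concrete, here is a minimal sketch – not code from imip-agent itself, and with an invented sample message – of how a calendaring agent might recognise an iMIP message: iMIP conveys iCalendar scheduling data in a text/calendar MIME part, whose method parameter (or METHOD property) indicates whether the message is a REQUEST, REPLY, CANCEL and so on.

# Purely illustrative sketch: recognising an iMIP scheduling message.
# (Not imip-agent's own code; the sample message is invented.)

import email

SAMPLE = """\
From: alice@example.com
To: calendar@example.com
Subject: Meeting request
MIME-Version: 1.0
Content-Type: text/calendar; method=REQUEST; charset=utf-8

BEGIN:VCALENDAR
VERSION:2.0
METHOD:REQUEST
BEGIN:VEVENT
UID:20151117-0001@example.com
DTSTART:20151120T100000Z
DTEND:20151120T110000Z
ORGANIZER:mailto:alice@example.com
ATTENDEE:mailto:bob@example.com
END:VEVENT
END:VCALENDAR
"""

def itip_parts(msg):
    "Yield (method, payload) for each text/calendar part of a message."
    for part in msg.walk():
        if part.get_content_type() == "text/calendar":
            method = part.get_param("method")
            payload = part.get_payload(decode=True).decode(
                part.get_content_charset() or "utf-8")
            yield method, payload

msg = email.message_from_string(SAMPLE)
for method, payload in itip_parts(msg):
    # A real agent would now act upon REQUEST, REPLY, CANCEL and so on.
    print("Scheduling method:", method)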

Initially, and as noted previously, my interest in such matters started with the mail handling functionality of Kolab and the component called Wallace that is responsible for responding to requests sent to certain e-mail addresses. Meanwhile, Kolab provided (and maybe still provides) a rather inelegant way of preparing “free/busy” information describing the availability of calendar system participants: a daemon program would run periodically, scanning mailboxes for events stored in special folders, and generate completely new manifests of each user’s schedule. (This may have changed since I last looked at Kolab in any serious manner.)

It occurred to me that the exchange of messages between participants in a scheduling transaction should be sufficient to maintain a live record of each participant’s availability, and that some experimentation would demonstrate the feasibility or infeasibility of such an approach. I had already looked into how existing architectures prepare and consume free/busy information, and felt that I had enumerated the relevant essentials for a viable calendaring architecture based on e-mail exchanges alone.
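
As a rough sketch of the idea – and only a sketch, not imip-agent's actual data model – each participant's busy periods can be accumulated as scheduling messages are processed, so that free/busy information stays current without any periodic rescanning of mailboxes:

# Rough sketch only: maintain free/busy information incrementally as
# scheduling messages are processed, rather than rebuilding it wholesale.

from datetime import datetime

# Maps each participant to a sorted list of (start, end) busy periods.
freebusy = {}

def record_busy(participant, start, end):
    "Record a busy period learnt from a scheduling exchange."
    periods = freebusy.setdefault(participant, [])
    periods.append((start, end))
    periods.sort()

def is_free(participant, start, end):
    "Return True if no recorded period overlaps the given interval."
    for s, e in freebusy.get(participant, []):
        if start < e and s < end:
            return False
    return True

# For example, a REQUEST for a meeting has been accepted by a participant:
record_busy("mailto:bob@example.com",
            datetime(2015, 11, 20, 10, 0), datetime(2015, 11, 20, 11, 0))

print(is_free("mailto:bob@example.com",
              datetime(2015, 11, 20, 10, 30), datetime(2015, 11, 20, 12, 0)))
# False: the proposed slot overlaps the recorded busy period.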

And so I set about learning about mail handling programs and expanding my existing knowledge of calendar-related standards. Fortunately, my work trying to get Kolab configured in a nice way didn’t go entirely to waste after all. However, I also wanted to support different MTAs rather than rely on convoluted Postfix-specific integration mechanisms, and so had to read up on the more convenient and approachable mechanisms that other systems use to integrate with mail pipelines without trying too hard to be “high performance” about it. I also wanted to make it possible for people to adopt a solution that didn’t force them to roll out LDAP in a scary “cross your fingers and run this script” fashion, even if many organisations already rely on LDAP and are comfortable with it.
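
For the sake of illustration only – this is not imip-agent's actual configuration – one of the more approachable and largely MTA-agnostic mechanisms is the long-established practice of piping messages to a command, for instance via an aliases entry, with the command receiving each complete message on standard input:

# Hypothetical /etc/aliases entry handing calendar traffic to a handler:
#
#   calendar: "|/usr/local/bin/imip-handler"
#
# A minimal pipe-invoked handler then looks something like this:

import email
import sys

def main():
    # The MTA delivers one complete message per invocation on stdin.
    message = email.message_from_binary_file(sys.stdin.buffer)
    # A real agent would inspect the message for calendar content here.
    sys.stderr.write("Received: %s\n" % message.get("Subject", "(no subject)"))
    return 0

if __name__ == "__main__":
    sys.exit(main())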

The resulting description of this work is now available on the Web, and an attempt has been made to document the many different aspects of development, deployment and integration. Naturally, it is a work in progress and not a finished product: one step on the road to hopefully becoming a dependable solution involves packaging for Free Software distributions, which would minimise the configuration effort currently demanded of the person setting the software up. But at the same time, the mechanisms for integration with other systems (such as mail, mailboxes and Web servers) still need to be documented so that such work may have a chance to proceed.

Why Bother?

For various reasons unrelated to the work itself, it has taken a bit longer to get to this point than previously anticipated. But the act of making it available is, for me, a very necessary part of what I regard as a contribution to a kind of conversation about what kinds of software and solutions might work for certain groups of people, touching upon topics like how such solutions might be developed and realised. For instance, the handling of calendar data, although already supported by various Python libraries, hasn’t really led to similar Python-based solutions being developed as far as I can tell. Perhaps my contribution can act as an encouragement there.

There are, of course, various Python-based CalDAV servers, but I regard the projects around them as somewhat opaque, and I perceive a common tendency amongst them to provide something resembling a product that covers some specific needs but then leaves those deploying that product with numerous open-ended questions about how they might address related needs. I also wonder whether there should be more library sharing between these projects beyond basic data interpretation, but I know that this is quite difficult to achieve in practice, even though these projects are presumably largely identical in function.

With such things forming the background of Free Software groupware, I can understand why some organisations are pitching complete solutions that aim to do many things. But here, in certain regards, I perceive a lack of opportunity for that conversation I mentioned above: there’s either a monologue with the insinuation that some parties know better than others (or worse, that they have the magic formula to total market domination) or there’s a dialogue with one side not really extending the courtesy of taking the other side’s views or contributions seriously.

And it is clear that those wanting to use such solutions should also be part of a conversation about what, in the end, should work best for them. Now, it is possible that organisations might see the benefit in the incremental approach to improving their systems and services that imip-agent offers. But it is also possible that there are organisations who will contrast imip-agent with a selection of all-in-one solutions, possibly dangled in front of them on special terms by vendors who just want to “close the deal”, and in the comparison shopping exercise that ensues, they will buy into the sales pitch of one of those vendors.

Without a concerted education exercise, that latter group of potential users is never likely to be a serious participant in our conversations (although I would hope that they might ultimately see sense), but the former group should be most welcome to participate in our conversations and thus enrich the wealth of choices and options that we should be offering. They would, I hope, realise that it is not about what they can get out of other people for nothing (or next to nothing), but instead what expertise and guidance they can contribute so that they and others can benefit from a sustainable and durable solution that, above all else, serves them and their needs and interests.

What Next?

Some people might point out that calendaring is only a small portion of what groupware is, if the latter term can even be somewhat accurately defined. This is indeed true. I would like to think that Free Software projects in other domains might enter the picture here to offer a compelling, broader groupware alternative. For instance, despite the apparent focus on chat and real-time communications, one doesn’t hear too much about one of the most popular groupware technologies on the Web today: the wiki. When used effectively, and when the dated rhetoric about wikis being equivalent to anarchy has been silenced by demonstrating effective collaborative editing and content management techniques, a wiki can be a potent tool for collaboration and collective information management.

It also turns out that Free Software calendar clients could do with some improvement. Their deficiencies may be a product of an unfortunate but fashionable fascination with proprietary mail, scheduling and social networking services amongst the community of people who use and develop Free Software. Once again, even though imip-agent seeks to provide only basic calendar client functionality, I hope that it may inform or, at the very least, inspire developers to improve existing programs and bring them up to the expected standard.

Alongside this work, I have other things I want (and need) to be looking at, but I will happily entertain enquiries about how it might be developed further or deployed. It is, after all, Free Software, and given sufficient interest, it should be developed and improved in a collaborative fashion. There are some future plans for it that I take rather seriously, but with the privileges or freedoms granted in the licence, there is nothing stopping it from having a life of its own from now on.

So, if you are interested in this kind of solution and want to know more about it, take a look at the imip-agent site. If nothing else, I hope that it reminds you of the importance of independently-developed solutions for communication and the value in retaining control of the software and systems you rely on for that communication.