Paul Boddie's Free Software-related blog


Archive for the ‘English’ Category

End of Support for Fairphone 1: Some Unanswered Questions

Saturday, September 16th, 2017

I previously followed the goings-on at Fairphone a lot more closely than I have done recently, so after having mentioned the obsolescence risks of the first model in an earlier article, it was interesting to discover a Fairphone blog post explaining why the company will no longer support the Fairphone 1. Some of the reasons given are understandable: they went to market with an existing design, focusing instead on minimising the use of conflict minerals; as a result various parts are no longer manufactured or available; the manufacturer they used even stopped producing phones altogether!

The article does mention batteries, and in the community reaction to the announcement, a lot of concern has been expressed about how long the batteries will remain usable, whether any kind of replacement might be found, and so on. With today’s bewildering proliferation of batteries of different shapes and sizes, often sealed into devices for guaranteed obsolescence, we are surely storing up a great deal of trouble for the future in this realm. But that is a topic for another time.

In the context of my previous articles about Fairphone, however, what is arguably more interesting is why this recent article fails to properly address the issues of software longevity and support. My first reaction to the Fairphone initiative was caution: it was not at all clear that the company had full control over the software stack, at least within the usual level of expectations. Subsequent information confirmed my suspicions: critical software components were being made available only as proprietary software by MediaTek.

To be fair to Fairphone, the company did acknowledge its shortcomings and promise to do better for Fairphone 2, although I decided to withhold judgement on that particular matter. And for the Fairphone 1, some arrangements were apparently made to secure access to certain software components that had been off-limits. But as I noted in an article on the topic, despite the rather emphatic assurances (“Fairphone has control over the Fairphone 1 source code”), the announcement perhaps raised more questions than it gave answers.

Now, it would seem, we do not get our questions answered as such, but we appear to learn a few things nevertheless. As some people noted in the discussion of the most recent announcement – that of discontinuing support for the device altogether – ceasing the sale of parts and accessories is one thing, but what does that have to do with the software? The only mention of software with any kind of detail in the entire discussion appears to be this:

This is a question of copyright. All the stuff that we would be allowed to publish is pretty boring because it is out there already. The juicy parts are proprietary to Mediatek. There are some Fairphone related changes to open source parts. But they are really really minor…

So what do we learn? That “control over the Fairphone 1 source code” is, in reality, the stuff that is Free Software already, plus various Android customisations done by their software vendor, plus some kind of licence for the real-time operating system deployed on the device. But the MediaTek elephant in the room kept on standing there and everyone agreed not to mention it again.

Naturally, I am far from alone in having noticed the apparent discrepancy between the assurances given and the capabilities Fairphone appeared to have. One can now revisit “the possibility of replacing the Android software by alternative operating systems” mentioned in the earlier, more optimistic announcement and wonder whether this was ever truly realistic, whether it might have ended up being dependent on reverse-engineering efforts or MediaTek suddenly having an episode of uncharacteristic generosity.

I guess that “cooperation from license holders and our own resources” said it all. Although the former sounds like the pipe dream it always seemed to be, the latter is understandable given the stated need for the company to focus on newer products and keep them funded. We might conclude from this statement that the licensing arrangements for various essential components involved continuing payments that were a burdensome diversion of company resources towards an increasingly unsupportable old product.

If anything, this demonstrates why Free Software licensing will always be superior to contractual arrangements around proprietary software that only function as long as everyone feels that the arrangement is lucrative enough. With Free Software, the community could take over the maintenance in as seamless a transition as possible, but in this case they are instead presumably left “high and dry” and in need of a persuasive and perpetually-generous “rich uncle” character to do the necessary deals. It is obvious which of these options makes more sense. (I have experienced technology communities where people liked to hope for the latter, and it is entertaining only for a short while.)

It is possible that Fairphone 2 provides a platform that is more robust in the face of sourcing and manufacturing challenges, and that there may be a supported software variant that will ultimately be completely Free Software. But with “binary blobs” still apparently required by Fairphone 2, people are right to be concerned that, as new products are considered, the company’s next move might not take the necessary step in the right direction: one that maintains the flexibility modularity should bring while remedying the continuing difficulties that the software seems to be causing.

With other parties now making loud noises about phones that run Free Software, promising big things that will eventually need to be delivered to be believed, maybe it would not be out of place to suggest that instead of “big bang” funding campaigns for entirely new one-off products, these initiatives start to work together. Maybe someone could develop a “compute module” for the Fairphone 2 modular architecture, if it lends itself to that. If not, maybe people might consider working towards something that would allow everyone to deliver the things they do best. Otherwise, I fear that we will keep seeing the same mistakes occur over and over again.

Three-and-a-half years’ support is not very encouraging for a phone that should promote sustainability, and the software inputs to this unfortunate situation were clear to me as an outsider over four years ago. That cannot be changed now, and so I just hope Fairphone has learned enough from this and from all the other things that have happened since, so that they may make better decisions in the future and make phones that truly are, as they claim themselves, “built to last”.

Public Money, Public Code, Public Control

Thursday, September 14th, 2017

An interesting article published by the UK Government Digital Service was referenced in a response to the LWN.net coverage of the recently-launched “Public Money, Public Code” campaign. Arguably, the article focuses a little too much on working “in the open” and perhaps not enough on the matter of control. Transparency is a good thing, collaboration is a good thing, and no-one can really argue against spending less tax money and getting more out of it, but it is the matter of control that makes this campaign and similar initiatives so important.

In one of the comments on the referenced article you can already see the kind of resistance that this worthy and overdue initiative will meet. There is this idea that the public sector should just buy stuff from companies and not be in the business of writing software. Of course, this denies the reality of delivering solutions where you have to pay attention to customer needs and not just have some package thrown through the doorway of the customer as big bucks are exchanged for the privilege. And where the public sector ends up managing its vendors, you inevitably get an under-resourced customer paying consultants to manage those vendors, maybe even their own consultant colleagues. Guess how that works out!

There is a culture of proprietary software vendors touting their wares or skills to public sector departments, undoubtedly insisting that their products are a result of their own technological excellence and that they are doing their customers a favour by merely doing business with them. But at the same time, those vendors need a steady – perhaps generous – stream of revenue consisting largely of public money. Those vendors do not want their customers to have any real control: they want their customers to be obliged to come back year after year for updates, support, further sales, and so on; they want more revenue opportunities rather than their customers empowering themselves and collaborating with each other. So who really needs whom here?

Some of these vendors undoubtedly think that the public sector is some kind of vehicle to support and fund enterprises. (Small- and medium-sized enterprises are often mentioned, but the big winners are usually the corporate giants.) Some may even think that the public sector is a vehicle for “innovation” where publicly-funded work gets siphoned off for businesses to exploit. Neither of these things cultivates a sustainable public sector, nor does either create wealth effectively in wider society: they lock organisations into awkward, even perilous technological dependencies, and they undermine competition while inhibiting the spread of high-quality solutions and the effective delivery of services.

Unfortunately, certain flavours of government hate the idea that the state might be in a role of actually doing anything itself, preferring that its role be limited to delegating everything to “the market” where private businesses will magically do everything better and cheaper. In practice, under such conditions, some people may benefit (usually the rich and well-represented) but many others often lose out. And it is not unknown for the taxpayer to have to pick up the bill to fix the resulting mess that gets produced, anyway.

We need sustainable public services and a sustainable software-producing economy. By insisting on Free Software – public code – we can build the foundations of sustainability by promoting interoperability and sharing, maximising the opportunities for those wishing to improve public services by upholding proper competition and establishing fair relationships between customers and vendors. But this also obliges us to be vigilant to ensure that where politicians claim to support this initiative, they do not try to limit its impact by directing money away from software development to the more easily subverted process of procurement, while claiming that procured systems should not be subject to the same demands.

Indeed, we should seek to expand our campaigning to cover public procurement in general. When public money is used to deliver any kind of system or service, it should not matter whether the code existed in some form already or not: it should be Free Software. Otherwise, we indulge those who put their own profits before the interests of a well-run public sector and a functioning society.

Stupid Git

Monday, August 28th, 2017

This kind of thing has happened to me a lot in recent times…

$ git pull
remote: Counting objects: 1367008, done.
remote: Compressing objects: 100% (242709/242709), done.
remote: Total 1367008 (delta 1118135), reused 1330194 (delta 1113455)
Receiving objects: 100% (1367008/1367008), 402.55 MiB | 3.17 MiB/s, done.
Resolving deltas: 100% (1118135/1118135), done.
fatal: missing blob object '715c19c45d9adbf565c28839d6f9d45cdb627b15'
error: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git did not send all necessary objects

OK, but now what? At least it hasn’t downloaded gigabytes of data and thrown it all away this time. No, just a few hundred megabytes instead.
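For what it’s worth, the first-aid measures that usually get suggested for this situation appear to be the following, presented here with no guarantee that they will do anything other than consume more time and bandwidth:

$ git fsck --full    # check the object database for corruption
$ git gc --prune=now # repack and hope the problem goes away
$ rm -rf linux       # the blunt instrument: throw it all away...
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git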

Looking around on the Internet, I see various guides that are like having your car engine repeatedly stall at traffic lights and being told to crack open the bonnet/hood and start poking at random components, hoping that the thing will jump back to life. After all, isn’t that supposed to be one of the joys of motoring?

Or to translate the usual kind of response about Git when anyone dares to question its usability: “Do you not understand that you are not merely driving a car but are instead interacting with an extensible automotive platform?” And to think that the idea was just to get from A to B conveniently.

I cannot remember a single time when a Mercurial repository failed to update in such a fashion. But I suppose that we will all have to continue to endure the fashionable clamour for projects to move to Git and onto GitHub so that they can become magically popular and suddenly receive a bounty of development attention from the Internet pixies. Because they all love Git, apparently.

One Step Forward, Two Steps Back

Thursday, August 10th, 2017

I have written about the state of “The Free Software Desktop” before, and how change apparently for change’s sake has made it difficult for those of us with a technology background to provide a stable and reliable computing experience to others who are less technically inclined. It surprises me slightly that I have not written about this topic more often, but the pattern of activity usually goes something like this:

  1. I am asked to upgrade or troubleshoot a computer running a software distribution I have indicated a willingness to support.
  2. I investigate confusing behaviour, offer advice on how to perform certain tasks using the available programs, perhaps install or upgrade things.
  3. I get exasperated by how baroque or unintuitive the experience must be to anyone not “living the dream” of developing software for one of the Free Software desktop environments.
  4. I get very annoyed by the apparent lack of usability of the software and promise myself that I will at least mention how frustrating and unintuitive it is, but then I return home and leave things for a few days.
  5. I end up calming down sufficiently about the matter not to be so bothered about saying something about it after all.

But it would appear that this doesn’t really serve my interests that well because the situation apparently gets no better as time progresses. Back at the end of 2013, it took some opining from a community management “expert” to provoke my own commentary. Now, with recent experience of upgrading a system between Kubuntu long-term support releases, I feel I should commit some remarks to writing just to communicate my frustration while the experience is still fresh in my memory.

Back in 2013, when I last wrote something on the topic, I suppose I was having to manage the transition of Kubuntu from KDE 3 to KDE 4 on another person’s computer, perhaps not having to encounter this yet on my own Debian system. This transition required me to confront the arguably dubious user interface design decisions made for KDE 4. I had to deal with things like the way the desktop background no longer behaved as it had done on most systems for many years, requiring things like the “folder view” widget to show desktop icons. Disappointingly, my most recent experience involved revisiting and replaying some of these annoyances.

Actual Users

It is worth stepping back for a moment and considering how people not “living the dream” actually use computers. Although a desktop cluttered with icons might be regarded as the product of an untidy or disorganised user, like the corporate user who doesn’t understand filesystems and folders and who saves everything to the desktop just to get through the working day, the ability to put arbitrary icons on the desktop background serves as a convenient tool to present a range of important tasks and operations to less confident and less technically-focused users.

Let us consider the perspective of such users for a moment. They may not be using the computer to fill their free time, hang out online, or whatever the kids do these days. Instead, they may have a set of specific activities that require the use of the computer: communicate via e-mail, manage their photographs, read and prepare documents, interact with businesses and organisations.

This may seem quaint to members of the “digital native” generation for whom the interaction experience is presumably a blur of cloud service interactions and social media posts. But unlike the “digital natives” who, if you read the inevitably laughable articles about how today’s children are just wizards at technology and that kind of thing, probably just want to show their peers how great they are, some people actually need to get productive work done.

So, might it not be nice to show a few essential programs and actions as desktop icons to direct the average user? I even had this set up having figured out the “folder view” widget which, as if embarrassed at not having shown up for work in the first place, actually shows the contents of the “Desktop” directory when coaxed into appearing. Problem solved? Well, not forever. (Although there is a slim chance that the problem will solve itself in future.)

The Upgrade

So, Kubuntu had been moaning for a while about a new upgrade being available. And, as is the modern way with KDE and/or Ubuntu, the user is confronted with parade after parade of notifications about all things important, trivial, and everything in between. Admittedly, it was getting to the point where support might be ending for the distribution version concerned, and so the decision was taken to upgrade. In principle this should improve the situation: software should be better supported, more secure, and so on. Sadly, with Ubuntu being a distribution that particularly likes to rearrange the furniture on a continuous basis, it just created more “busy work” for no good reason.

To be fair, the upgrade script did actually succeed. I remember trying something similar in the distant past and it just failing without any obvious remedy. This time, there were some messages nagging about package configuration changes about which I wasn’t likely to have any opinion or useful input. And some lengthy advice about reconfiguring the PostgreSQL server, popping up in some kind of packaging notification, seemed redundant given what the script did to the packages afterwards. I accept that it can be pretty complicated to orchestrate this kind of thing, though.

It was only afterwards that the problems began to surface, beginning with the login manager. Since we are describing an Ubuntu derivative, the default login manager was the Unity-styled one which plays some drum beats when it starts up. After the upgrade, the login manager was obsessed with connecting to the wireless network and wouldn’t be denied the chance to do so, yet it wouldn’t actually connect, even when given the correct password. So, some work needed to be done to install a different login manager and to remove the now-malfunctioning software.

An Empty Desk

Although changing the login manager also changes the appearance of the software and thus the experience of using it, providing an unnecessary distraction from the normal use of the machine and requiring unnecessary familiarisation with the result of the upgrade, at least it solved a problem of functionality that had “gone rogue”. Meanwhile, the matter of configuring the desktop experience has perhaps not yet been completely and satisfactorily resolved.

When the machine in question was purchased, it was running stock Ubuntu. At some point, perhaps sooner rather than later, the Unity desktop became the favoured environment receiving the attention of the Ubuntu developers. Since Unity proved rather ill-suited to users familiar with more traditional desktop paradigms, a switch was made to the KDE environment instead. This is where a degree of peace ended up being made with the annoyingly disruptive changes introduced by KDE 4 and its Plasma environment.

One problem that KDE always seems to have had is that of respecting user preferences and customisations across upgrades. On this occasion, with KDE Plasma 5 now being offered, no exception was made: logging in yielded no “folder view” widgets with all those desktop icons; panels were bare; the desktop background was some stock unfathomable geometric form with blurry edges. I seem to remember someone associated with KDE – maybe even the aforementioned “expert” – saying how great it had been to blow away his preferences and experience the awesomeness of the raw experience, or something. Well, it really isn’t so awesome if you are a real user.

As noted above, persuading the “folder view” widgets to return was easy enough once I had managed to open the fancy-but-sluggish widget browser. This gave me a widget showing the old icons that was too small to show them all at once. So, how do you resize it? Since I don’t use such features myself, I had forgotten that it was previously done by pointing at the widget somehow. But because there wasn’t much in the way of interactive help, I had to search the Web for clues.

This yielded the details of how to resize and move a folder view widget. That’s right: why not emulate the impoverished tablet/phone interaction paradigm and introduce a dubious “long click” gesture instead? Clearly because a “mouseover” gesture is non-existent in the tablet/phone universe, it must be abolished elsewhere. What next? Support only one mouse button because that is how the Mac has always done it? And, given that context menus seem to be available on plenty of other things, it is baffling that one isn’t offered here.

Restoring the desktop icons was easy enough, but showing them all was more awkward because the techniques involved are like stepping back to an earlier era where only a single control is available for resizing, where combinations of moves and resizes are required to get the widget in the right place and at the right size. And then we assume that the icons still do what they had done before which, despite the same programs being available, was not the case: programs didn’t start, but they also gave no indication of why they didn’t start, a failure mode familiar to just about anyone who has used a desktop environment in the last twenty years. Maybe there is a log somewhere with all the errors in it. Who knows? Why is there never any way of troubleshooting this?

One behaviour that I had set up earlier was single-click activation of icons, where programs could be launched with a single mouse click. That no longer works, nor is it obvious how to change it. Clearly the usability police have declared the unergonomic double-click action the “winner”. However, some Qt widgets are still happy with single-click navigation. Try explaining such inconsistencies to anyone already having to remember how to distinguish between multiple programs, what each of them does and doesn’t do, and so on.
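In case anyone else is wondering, the old behaviour could, at least in earlier releases of the software, be requested directly in the configuration, with ~/.config/kdeglobals containing a section like the following (assuming, of course, that the current software still honours it):

[KDE]
SingleClick=true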

The Developers Know Best

All of this was frustrating enough, but when trying to find out whether I could launch programs from the desktop or whether such actions had been forbidden by the usability police, I found that when programs were actually launching they took a long time to do so. Firing up a terminal showed the reason for this sluggishness: Tracker and Baloo were wanting to index everything.

I had previously switched off KDE’s indexing and searching features and had disabled, maybe even uninstalled, Tracker. But the developers and maintainers clearly think that coercion is better than persuasion, that “everyone” wants all their content indexed for “desktop search” or “semantic search” (or whatever they call it now), the modern equivalent of saving everything to the desktop and then rifling through it all afterwards. Since we’re only the stupid users, what would we really have to say about it? So along came Tracker again, primed to waste computing time and storage space, together with another separate solution for doing the same thing, “just in case”, because the different desktop developers cannot work together.
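For those similarly afflicted, the remedies that appear to work, assuming the balooctl and tracker utilities supplied with the software concerned, are along these lines:

$ balooctl disable     # tell Baloo to stop indexing file content
$ tracker reset --hard # terminate Tracker and discard its databases

Naturally, there is no guarantee that some future upgrade will not cheerfully switch everything back on again.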

Amongst other frustrations, the process of booting to the login prompt is slower, and so perhaps switching from Upstart to systemd wasn’t such an entirely positive idea after all. Meanwhile, with reduced scrollbar and control affordances, it would seem that the tendency to mimic Microsoft’s usability disasters continues. I also observed spontaneous desktop crashes and consequently turned off all the fancy visual effects in order to diminish the chances of such crashes recurring in future. (Honestly, most people don’t want Project Looking Glass and similar “demoware” guff: they just want to use their computers.)

Of Entitlement and Sustainable Development

Some people might argue that I am just another “entitled” user who has never contributed anything to these projects and is just whining incorrectly about how bad things are. Well, I do not agree. I enthusiastically gave constructive feedback and filed bugs while I still believed that the developers genuinely wanted to know how they might improve the software. (Admittedly, my enthusiasm had largely faded by the time I had to migrate to KDE 4.) I even wrote software using some of the technologies discussed in this article. I always wanted things to be better and stuck with the software concerned.

And even if I had never done such things, I would, along with other users, still have invested a not inconsiderable amount of effort into familiarising people with the software, encouraging others to use it, and trying to establish it as a sustainable option. As opposed to proprietary software that we neither want to use, nor wish to support, nor are necessarily able to support. Being asked to support some Microsoft product is not only ethically dubious but also frustrating when we end up having to guess our way around the typically confusing and poorly-designed interfaces concerned. And we should definitely resent having to do free technical support for a multi-billion-dollar corporation even if it is to help out other people we know.

I rather feel that the “entitlement” argument comes up when both the results of the development process and the way the development is done are scrutinised. There is this continually perpetuated myth that “open source” can only be done by people when those people have “enough time and interest to work on it”, as if there can be no other motivations or models to sustain the work. This cultivates the idea of the “talented artist” developer lifestyle: that the developers do their amazing thing and that its proliferation serves as some form of recognition of its greatness; that, like art, one should take it or leave it, and that the polite response is to applaud it or to remain silent and not betray a supposed ignorance of what has been achieved.

I do think that the production of Free Software is worthy of respect: after all, I am a developer of Free Software myself and know what has to go into making potentially useful systems. But those producing it should understand that people depend on it, too, and that the respect its users have for the software’s development is just as easily lost as it is earned, indeed perhaps more easily lost. Developers have power over their users, and like anyone in any other position of power, we expect them to behave responsibly. They should also recognise that any legitimate authority they have over their users can only exist when they acknowledge the role of those users in legitimising and validating the software.

In a recent argument about the behaviour of systemd, its principal developer apparently noted that as Free Software, it may be forked and developed as anyone might wish. Although true, this neglects the matter of sustainable software development. If I disagree with the behaviour of some software or of the direction of a software project, and if there is no reasonable way to accommodate this disagreement within the framework of the project, then I must maintain my own fork of that software indefinitely if I am to continue using it.

If others cannot be convinced to participate in this fork, and if other software must be changed to work with the forked code, then I must also maintain forks of other software packages. Suddenly, I might be looking at having to maintain an entire parallel software distribution, all because the developers of one piece of software are too precious to accept other perspectives as being valid and are unwilling to work with others in case it “compromises their vision”, or whatever.

Keeping Up on the Treadmill

Most people feel that they have no choice but to accept the upgrade treadmill, the constant churn of functionality, the shiny new stuff that the developers “living the dream” have convinced their employers or their peers is the best and most efficient way forward. For most users, it just isn’t a practical way of living to deal with the consequences of this in a technical fashion, trying to do all those other people’s jobs again so that they may be done “properly”. So that “most efficient way” ends up incurring inefficiencies and costs amongst everybody else as they struggle to find new ways of doing the things that just worked before.

How frustrating it is that perhaps the only way to cope might be to stop using the software concerned altogether! And how unfortunate it is that for those who do not value Free Software in its own right or who feel that the protections of Free Software are unaffordable luxuries, it probably means that they go and use proprietary software instead and just find a way of rationalising the decision and its inconvenient consequences as part of being a modern consumer engaging in yet another compromised transaction.

So, unhindered by rants like these and by more constructive feedback, the Free Software desktop seems to continue on its way, seemingly taking two steps backward for every one step forward, testing the tolerance even of its most patient users to disruptive change. I dread having to deal with such things again in a few years’ time or even sooner. Maybe CDE will once again seem like an attractive option and bring us full circle for “Unix on the desktop”, saving many people a lot of unnecessary bother. And then the tortoise really will have beaten the hare.

The Mobile Web

Wednesday, July 26th, 2017

I was tempted to reply to a comment on LWN.net’s news article “The end of Flash”, where the following observation was made:

So they create a mobile site with a bit fewer graphics and fewer scripts loading up to try to speed it up.

But I found that I had enough to say that I might as well put it here.

A recent experience I had with one airline’s booking Web site involved an obvious pandering to “mobile” users. But to the designers this seemed to mean oversized widgets on any non-mobile device coupled with a frustratingly sequential mode of interaction, as if Fisher-Price had an enterprise computing division and had been contracted to do the work. A minimal amount of information was displayed at any given time, and even normal widget navigation failed to function correctly. (Maybe this is completely unfair to Fisher-Price as some of their products appear to encourage far more sophisticated interaction.)

And yet, despite all the apparent simplification, the site ran abominably slowly. Every – single – keypress – took – ages – to – process. Even in normal text boxes. My desktop machine is ancient and mostly skipped the needless opening and closing animations on widgets because it just isn’t fast enough to notice that it should have been doing them before the time limit for doing them had passed. And despite fewer graphics and scripts, the site was still heavy on the CPU.

After fighting my way through the booking process, I was pointed to the completely adequate (and actually steadily improving) conventional site that I’d used before but which was now hidden by the new and shiny default experience. And then I noticed a message about customer feedback and the continued availability of the old site: many of the airline’s other customers were presumably so appalled by the new “made for mobile” experience – some of them undoubtedly having to use the site for their jobs, booking travel for colleagues or customers – that they’d let the airline know what they thought. I imagine that some of the conversations were pretty frank.

I suppose that when companies manage to decouple themselves from fads and trends and actually listen to their customers (and not via Twitter), they can be reminded to deliver usable services after all. And I am thankful for the “professional customers” who are presumably all that stand in the way of everyone being obliged to download an “app” to book their flights. Maybe that corporate urge will lead to the next reality check for the airline’s “digital strategists”.

Some Thoughts on Python-Like Languages

Tuesday, June 6th, 2017

A few different things have happened recently that got me thinking about writing something about Python, its future, and Python-like languages. I don’t follow the different Python implementations as closely as I used to, but certain things did catch my attention over the last few months. But let us start with things closer to the present day.

I was neither at the North American PyCon event, nor at the invitation-only Python Language Summit that occurred as part of that gathering, but LWN.net has been reporting the proceedings to its subscribers. One of the presentations of particular interest was covered by LWN.net under the title “Keeping Python competitive”, apparently discussing efforts to “make Python faster”, the challenges faced by different Python implementations, and the limitations imposed by the “canonical” CPython implementation that can frustrate performance improvement efforts.

Here is where this more recent coverage intersects with things I have noticed over the past few months. Every now and again, an attempt is made to speed Python up, sometimes building on the CPython code base and bolting on additional technology to boost performance, sometimes reimplementing the virtual machine whilst introducing similar performance-enhancing technology. When such projects emerge, especially when a large company is behind them in some way, expectations of a much faster Python are considerable.

Thus, when the Pyston reimplementation of Python became more widely known, undertaken by people working at Dropbox (who also happen to employ Python’s creator Guido van Rossum), people were understandably excited. Three years after that initial announcement, however, those ambitious employees now have to continue the work on their own initiative. One might be reminded of an earlier project, Unladen Swallow, which also sought to perform just-in-time compilation of Python code, undertaken by people working at Google (who also happened to employ Python’s creator Guido van Rossum at the time), and which was then abandoned as those people were needed to go and work on other things. Meanwhile, another apparently-broadly-similar project, Pyjion, is being undertaken by people working at Microsoft, albeit as a “side project at work”.

As things stand, perhaps the most dependable alternative implementation of Python, at least if you want one with a just-in-time compiler that is actively developed and supported for “production use”, appears to be PyPy. And this is only because of sustained investment of both time and funding over the past decade and a half into developing the technology and tracking changes in the Python language. Heroically, the developers even try and support both Python 2 and Python 3.

Motivations for Change

Of course, Google, Dropbox and Microsoft presumably have good reasons to try and get their Python code running faster and more efficiently. Certainly, the first two companies will be running plenty of Python to support their services; reducing the hardware demands of delivering those services is definitely a motivation for investigating Python implementation improvements. I guess that there’s enough Python being run at Microsoft to make it worth their while, too. But then again, none of these organisations appear to be resourcing these efforts at anything close to what would be marshalled for their actual products, and I imagine that even similar infrastructure projects originating from such companies (things like Go, for example) have had many more people assigned to them on a permanent basis.

And now, noting the existence of projects like Grumpy – a Python to Go translator – one has to wonder whether there isn’t some kind of strategy change afoot: that it now might be considered easier for the likes of Google to migrate gradually to Go and steadily reduce their dependency on Python than it is to remedy identified deficiencies with Python. Of course, the significant problem remains of translating Python code to Go while still having it interface with code written in C against Python’s extension interfaces, and of maintaining reliability and performance in the result.

Indeed, the matter of Python’s “C API”, used by extensions written in C for Python programs to use, is covered in the LWN.net article. As people have sought to improve the performance of their software, they have been driven to rewrite parts of it in C, interfacing these performance-critical parts with the rest of their programs. Although such optimisation techniques make sense and have been a constant presence in software engineering more generally for many decades, it has almost become the path of least resistance when encountering performance difficulties in Python, even amongst the maintainers of the CPython implementation.

And so, alternative implementations need to either extract C-coded functionality and offer it in another form (maybe even written in Python, can you imagine?!), or they need to have a way of interfacing with it, one that could produce difficulties and impair their own efforts to deliver a robust and better-performing solution. Thus, attempts to mitigate CPython’s shortcomings have actually thwarted the efforts of other implementations to mitigate the shortcomings of Python as a whole.

Is “Python” Worth It?

You may well be wondering, if I didn’t manage to lose you already, whether all of these ambitious and brave efforts are really worth it. Might there be something with Python that just makes it too awkward to target with a revised and supposedly better implementation? Again, the LWN.net article describes sentiments that simpler, Python-like languages might be worth considering, mentioning the Hack language in the context of PHP, although I might also suggest Crystal in the context of Ruby, even though the latter is possibly closer to various functional languages and maybe only bears syntactic similarities to Ruby (although I haven’t actually looked too closely).

One has to be careful with languages that look dynamic but are really rather strict in how types are assigned, propagated and checked. And, should one choose to accept static typing, even with type inference, it could be said that there are plenty of mature languages – OCaml, for instance – that are worth considering instead. As people have experimented with Python-like languages, others have been quick to criticise them for not being “Pythonic”, even if the code one writes is valid Python. But I accept that the challenge for such languages and their implementations is to offer a Python-like experience without frustrating the programmer too much about things which look valid but which are forbidden.

My tuning of a Python program to work with Shedskin needed to be informed about what Shedskin was likely to allow and to reject. As far as I am concerned, as long as this is not too restrictive, and as long as guidance is available, I don’t see a reason why such a Python-like language couldn’t be as valid as “proper” Python. Python itself has changed over the years, and the version I first used probably wouldn’t measure up to what today’s newcomers would accept as Python at all, but I don’t accept that the language I used back in 1995 was not Python: that would somehow be a denial of history and of my own experiences.
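To give a flavour of the restrictions involved, here is a sketch of my understanding of the kind of code an implementation like Shedskin will and will not accept; the details vary between tools, so treat this as illustrative rather than definitive:

# Amenable to type inference: the list is homogeneous and the function
# is only ever applied to one kind of sequence, so a compiler can
# deduce all the types involved without annotations.
def total(values):
    result = 0
    for value in values:
        result += value
    return result

print(total([1, 2, 3]))

# Likely to be rejected by such tools: the list mixes element types and
# the name is then rebound to something of a different type entirely.
items = [1, "two", 3.0]
items = "and now a string"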

Could I actually use something closer to Python 1.4 (or even 1.3) now? Which parts of more recent versions would I miss? And which parts of such ancient Pythons might even be superfluous? In pursuing my interests in source code analysis, I decided to consider such questions in more detail, partly motivated by the need to keep the investigation simple, partly motivated by laziness (that something might be amenable to analysis but more effort than I considered worthwhile), and partly motivated by my own experiences developing Python-based solutions.

A Leaner Python

Usually, after a title like that, one might expect to read about how I made everything in Python statically typed, or that I decided to remove classes and exceptions from the language, or do something else that would seem fairly drastic and change the character of the language. But I rather like the way Python behaves in a fundamental sense, with its classes, inheritance, dynamic typing and indentation-based syntax.

Other languages inspired by Python have had a tendency to diverge noticeably from the general form of Python: Boo, Cobra, Delight, Genie and Nim introduce static typing and (arguably needlessly) change core syntactic constructs; Converge and Mython focus on meta-programming; MyPy is the basis of efforts to add type annotations and “optional static typing” to Python itself. Meanwhile, Serpentine is a project being developed by my brother, David, and is worth looking at if you want to write software for Android, have some familiarity with user interface frameworks like PyQt, and can accept the somewhat moderated type discipline imposed by the Android APIs and the Dalvik runtime environment.

In any case, having already made a few rounds trying to perform analysis on Python source code, I am more interested in keeping the foundations of Python intact and focusing on the less visible characteristics of programs: effectively reading between the lines of the source code by considering how it behaves during execution. Solutions like Shedskin take advantage of restrictions on programs to be able to make deductions about program behaviour. These deductions can be sufficient in helping us understand what a program might actually do when run, as well as helping the compiler make more robust or efficient programs.

And the right kind of restrictions might even help us avoid introducing more disruptive restrictions such as having to annotate all the types in a program in order to tell us similar things (which appears to be one of the main directions of Python in the current era, unfortunately). I would rather lose exotic functionality that I have never really been convinced by, than retain such functionality and then have to tell the compiler about things it would otherwise have a chance of figuring out for itself.

Rocking the Boat

Certainly, being confronted with any list of restrictions, despite the potential benefits, can seem like someone is taking all the toys away. And it can be difficult to deliver the benefits to make up for this loss of functionality, no matter how frivolous some of it is, especially if there are considerable expectations in terms of things like performance. Plenty of people writing alternative Python implementations can attest to that. But there are other reasons to consider a leaner, more minimal, Python-like language and accompanying implementation.

For me, one rather basic reason is merely to inform myself about program analysis, figure out how difficult it is, and hopefully produce a working solution. But beyond that is the need to be able to exercise some level of control over the tools I depend on. Python 2 will in time no longer be maintained by the Python core development community; a degree of agitation has existed for some time to replace it with Python 3 in Free Software operating system distributions. Yet I remain unconvinced about Python 3, particularly as it evolves towards a language that offers “optional” static typing that will inevitably become mandatory (despite assertions that it will always officially be optional) as everyone sprinkles their code with annotations and hopes for the magic fairies and pixies to come along and speed it up, that latter eventuality being somewhat less certain.
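To illustrate what is being asked of everyone, annotated code looks something like this, with the annotations doing nothing at run-time by themselves, instead awaiting interpretation by separate checking tools or some suitably clever implementation:

def greeting(name: str, enthusiasm: int = 1) -> str:
    return "Hello, " + name + "!" * enthusiasm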

There are reasons to consider alternative histories for Python in the form of Python-like languages. People argue about whether Python 3’s Unicode support makes it as suitable for certain kinds of programs as Python 2 has been, with the Mercurial project being notable in its refusal to hurry along behind the Python 3 adoption bandwagon. Indeed, PyPy was devised as a platform for such investigations, being only somewhat impaired in some respects by its rather intensive interpreter generation process (but I imagine there are ways to mitigate this).

Making a language implementation that is adaptable is also important. I like the ability to be able to cross-compile programs, and my own work attempts to make this convenient. Meanwhile, cross-building CPython has been a struggle for many years, and I feel that it says rather a lot about Python core development priorities that even now, with the need to cross-build CPython if it is to be available on mobile platforms like Android, the lack of a coherent cross-building strategy has left those interested in doing this kind of thing maintaining their own extensive patch sets. (Serpentine gets around this problem, as well as the architectural limitations of dropping CPython on an Android-based device and trying to hook it up with the different Android application frameworks, by targeting the Dalvik runtime environment instead.)

No Need for Another Language?

I found it depressingly familiar when David announced his Android work on the Python mobile-sig mailing list and got the following response:

In case you weren't aware, you can just write Android apps and services
in Python, using Kivy.  No need to invent another language.

Fortunately, various other people were more open-minded about having a new toolchain to target Android. Personally, the kind of “just use …” rhetoric reminds me of the era when everyone writing Web applications in Python was exhorted to “just use Zope”, which was a complicated (but admittedly powerful and interesting) framework whose shortcomings were largely obscured and downplayed until enough people had experienced them and felt that progress had to be made by working around Zope altogether and developing other solutions instead. Such zero-sum games – that there is one favoured approach to be promoted, with all others to be terminated or hidden – perhaps inspired by an overly-parroted “only one way to do it” mantra in the Python scene, have been rather damaging to both the community and to the adoption of Python itself.

Not being Python, not supporting every aspect of Python, has traditionally been seen as a weakness when people have announced their own implementations of Python or of Python-like languages. People steer clear of impressive works like PyPy or Nuitka because they feel that these things might not deliver everything CPython does, exactly like CPython does. Which is pretty terrible if you consider the heroic effort that the developer of Nuitka puts in to make his software work as similarly to CPython as possible, even going as far as to support Python 2 and Python 3, just as the PyPy team do.

Solutions like MicroPython have got away with certain incompatibilities with the justification that the target environment is rather constrained. But I imagine that even that project’s custodians get asked whether it can run Django, or whatever the arbitrarily-set threshold for technological validity might be. Never mind whether you would really want to run Django on a microcontroller or even on a phone. And never mind whether large parts of the mountain of code propping up such supposedly essential solutions could actually do with an audit and, in some cases, benefit from being retired and rewritten.

I am not fond of change for change’s sake, but new opportunities often bring new priorities and challenges with them. What then if Python as people insist on it today, with all the extra features added over the years to satisfy various petitioners and trends, is actually the weakness itself? What if the Python-like languages can adapt to these changes, and by having to confront their incompatibilities with hastily-written code from the 1990s and code employing “because it’s there” programming techniques, they can adapt to the changing environment while delivering much of what people like about Python in the first place? What if Python itself cannot?

“Why don’t you go and use something else if you don’t like what Python is?” some might ask. Certainly, Free Software itself is far more important to me than any adherence to Python. But I can also choose to make that other language something that carries forward the things I like about Python, not something that looks and behaves completely differently. And in doing so, at least I might gain a deeper understanding of what matters to me in Python, even if others refuse the lessons and the opportunities such Python-like languages can provide.

VGA Signal Generation with the PIC32

Monday, May 22nd, 2017

It all started after I had designed – and received from fabrication – a circuit board for prototyping cartridges for the Acorn Electron microcomputer. Although some prototyping had already taken place with an existing cartridge, with pins intended for ROM components being routed to drive other things, this board effectively “breaks out” all connections available to a cartridge that has been inserted into the computer’s Plus 1 expansion unit.

Acorn Electron cartridge breakout board

The Acorn Electron cartridge breakout board being used to drive an external circuit

One thing led to another, and soon my brother, David, was interfacing a microcontroller to the Electron in order to act as a peripheral being driven directly by the system’s bus signals. His approach involved having a program that would run and continuously scan the signals for read and write conditions and then interpret the address signals, sending and receiving data on the bus when appropriate.

Having acquired some PIC32 devices out of curiosity, with the idea of potentially interfacing them with the Electron, I finally took the trouble of looking at the datasheet to see whether some of the hard work done by David’s program might be handled by the peripheral hardware in the PIC32. The presence of something called “Parallel Master Port” was particularly interesting.

Operating this function in the somewhat insensitively-named “slave” mode, the device would be able to act like a memory device, with the signalling required by read and write operations mostly being dealt with by the hardware. Software running on the PIC32 would be able to read and write data through this port and be able to get notifications about new data while getting on with doing other things.
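Although the details would deserve an article of their own, a minimal sketch of such a configuration, assuming a PIC32MX device together with the XC32 toolchain and its usual register definitions, might look something like this:

#include <xc.h>

/* Put the Parallel Master Port into slave mode so that the host bus
   performs reads and writes with the signalling handled in hardware.
   Chip select and pin configuration are omitted here and depend
   entirely on the wiring involved. */
void pmp_slave_init(void)
{
    PMCONbits.ON = 0;    /* disable the module while configuring it */
    PMMODEbits.MODE = 0; /* legacy slave mode */
    PMCONbits.ON = 1;    /* enable the module */
}

/* Wait for data written by the host, reading it from the input buffer,
   which also clears the "input buffer full" indication. */
int pmp_read(void)
{
    while (!PMSTATbits.IBF);
    return PMDIN;
}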

So began my journey into PIC32 experimentation, but this article isn’t about any of that, mostly because I put that particular investigation to one side after a degree of experience gave me perhaps a bit too much confidence, and I ended up being distracted by something far more glamorous: generating a video signal using the PIC32!

The Precedents’ Hall of Fame

There are plenty of people who have written up their experiments generating VGA and other video signals with microcontrollers, and a search of the Web turns up some interesting examples.

And there are presumably many more pages on the Web with details of people sending pixel data over a cable to a display of some sort, often trying to squeeze every last cycle out of their microcontroller’s instruction processing unit. But, given an awareness of how microcontrollers should be able to take the burden off the programs running on them, employing peripheral hardware to do the grunt work of switching pins on and off at certain frequencies, maybe it would be useful to find examples of projects where such advantages of microcontrollers had been brought to bear on the problem.

In fact, I was already aware of the Maximite “single chip computer” partly through having seen the cloned version of the original being sold by Olimex – something rather resented by the developer of the Maximite for reasons largely rooted in an unfortunate misunderstanding of Free Software licensing on his part – and I was aware that this computer could generate a VGA signal. Indeed, the method used to achieve this had apparently been written up in a textbook for the PIC32 platform, albeit generating a composite video signal using one of the on-chip SPI peripherals. The Colour Maximite uses three SPI channels to generate one red, one green, and one blue channel of colour information, thus supporting eight-colour graphical output.

But I had been made aware of the Parallel Master Port (PMP) and its “master” mode, used to drive LCD panels with eight bits of colour information per pixel (or, using devices with many more pins than those I had acquired, with sixteen bits of colour information per pixel). Surely it would be possible to generate 256-colour graphical output at the very least?

Information from people trying to use PMP for this purpose was thin on the ground. Indeed, reading again one article that mentioned an abandoned attempt to get PMP working in this way, using the peripheral to emit pixel data for display on a screen instead of a panel, I now see that it actually mentions an essential component of the solution that I finally arrived at. But the author had unfortunately moved away from that successful component in an attempt to get the data to the display at a rate regarded as satisfactory.

Direct Savings

It is one thing to have the means to output data to be sent over a cable to a display. It is another to actually send the data efficiently from the microcontroller. Having contemplated such issues in the past, it was not a surprise that the Maximite and other video-generating solutions use direct memory access (DMA) to get the hardware, as opposed to programs, to read through memory and to write its contents to a destination, which in most cases seemed to be the memory address holding output data to be emitted via a data pin using the SPI mechanism.

I had also envisaged using DMA and was still fixated on using PMP to emit the different data bits to the output circuit producing the analogue signals for the display. Indeed, Microchip promotes the PMP and DMA combination as a way of doing “low-cost controllerless graphics solutions” involving LCD panels, so I felt that there surely couldn’t be much difference between that and getting an image on my monitor via a few resistors on the breadboard.

And so began a tour of different PIC32 features: trying to understand the DMA documentation and the PMP documentation, all the while trying to get a grasp of what the VGA signal actually looks like and the timing constraints of the various synchronisation pulses, and battling various aspects of the MIPS architecture and the PIC32 implementation of it, constantly refining my own perceptions and understanding, and learning perhaps too often that there were things I didn’t know quite enough about before trying them out!

Using VGA to Build a Picture

Before we really start to look at a VGA signal, let us first look at how a picture is generated by the signal on a screen:

VGA Picture Structure

The structure of a display image or picture produced by a VGA signal

The most important detail at this point is the central area of the diagram, filled with horizontal lines representing the colour information that builds up a picture on the display, with the actual limits of the screen being represented here by the bold rectangle outline. But it is also important to recognise that even though there are a number of visible “display lines” within which the colour information appears, the entire “frame” sent to the display actually contains yet more lines, even though they will not be used to produce an image.

Above and below – really before and after – the visible display lines are the vertical back and front porches whose lines are blank because they do not appear on the screen or are used to provide a border at the top and bottom of the screen. Such extra lines contribute to the total frame period and to the total number of lines dividing up the total frame period.

Figuring out how many lines a display will have seems to involve messing around with something called the “generalised timing formula”, and if you have an X server like Xorg installed on your system, you may even have a tool called “gtf” that will attempt to calculate numbers of lines and pixels based on desired screen resolutions and frame rates. Alternatively, you can look up some common sets of figures on sites providing such information.
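For example, asking the gtf tool for an 800×600 picture at a 60Hz refresh rate should produce something like the following, and it is no coincidence that the horizontal sync frequency and the total of 622 lines shown here also appear later in this article:

  $ gtf 800 600 60

    # 800x600 @ 60.00 Hz (GTF) hsync: 37.32 kHz; pclk: 38.22 MHz
    Modeline "800x600_60.00"  38.22  800 832 912 1024  600 601 604 622  -HSync +Vsync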

What a VGA Signal Looks Like

Some sources show diagrams attempting to describe the VGA signal, but many of these diagrams are open to interpretation (in some cases, very much so). They perhaps show the signal for horizontal (display) lines, then other signals for the entire image, but they either do not attempt to combine them, or they instead combine these details ambiguously.

For instance, should the horizontal sync (synchronisation) pulse be produced when the vertical sync pulse is active or during the “blanking” period when no pixel information is being transmitted? This could be deduced from some diagrams but only if you share their authors’ unstated assumptions and do not consider other assertions about the signal structure. Other diagrams do explicitly show the horizontal sync active during vertical sync pulses, but this contradicts statements elsewhere such as “during the vertical sync period the horizontal sync must also be held low”, for instance.

After a lot of experimentation, I found that the following signal structure was compatible with the monitor I use with my computer:

VGA Signal Structure

The basic structure of a VGA signal, or at least a signal that my monitor can recognise

There are three principal components to the signal:

  • Colour information for the pixel or line data forms the image on the display and it is transferred within display lines during what I call the visible display period in every frame
  • The horizontal sync pulse tells the display when each horizontal display line ends, or at least the frequency of the lines being sent
  • The vertical sync pulse tells the display when each frame (or picture) ends, or at least the refresh rate of the picture

The voltage levels appear to be as follows:

  • Colour information should be at 0.7V (although some people seem to think that 1V is acceptable as “the specified peak voltage for a VGA signal”)
  • Sync pulses are supposed to be at “TTL” levels, which apparently can be from 0V to 0.5V for the low state and from 2.7V to 5V for the high state

Meanwhile, the polarity of the sync pulses is also worth noting. In the above diagram, they have negative polarity, meaning that an active pulse is at the low logic level. Some people claim that “modern VGA monitors don’t care about sync polarity”, but since it isn’t clear to me what determines the polarity, and since most descriptions and demonstrations of VGA signal generation seem to use negative polarity, I chose to go with the flow. As far as I can tell, the gtf tool always outputs the same polarity details, whereas certain resources provide signal characteristics with differing polarities.

It is possible, and arguably advisable, to start out trying to generate sync pulses and just grounding the colour outputs until your monitor (or other VGA-capable display) can be persuaded that it is receiving a picture at a certain refresh rate and resolution. Such confirmation can be obtained on a modern display by seeing a blank picture without any “no signal” or “input not supported” messages and by being able to activate the on-screen menu built into the device, in which an option is likely to exist to show the picture details.

How the sync and colour signals are actually produced will be explained later on. This section was merely intended to provide some background and gather some fairly useful details into one place.

Counting Lines and Generating Vertical Sync Pulses

The horizontal and vertical sync pulses are each driven at their own frequency. However, given that there are a fixed number of lines in every frame, it becomes apparent that the frequency of vertical sync pulse occurrences is related to the frequency of horizontal sync pulses, the latter occurring once per line, of course.

With, say, 622 lines forming a frame, the vertical sync will occur once for every 622 horizontal sync pulses, or at a rate that is 1/622 of the horizontal sync frequency or “line rate”. So, if we can find a way of generating the line rate, we can not only generate horizontal sync pulses, but we can also count cycles at this frequency, and every 622 cycles we can produce a vertical sync pulse.

But how do we calculate the line rate in the first place? First, we decide what our refresh rate should be. The “classic” rate for VGA output is 60Hz. Then, we decide how many lines there are in the display including those extra non-visible lines. We multiply the refresh rate by the number of lines to get the line rate:

60Hz * 622 = 37320Hz = 37.320kHz

On a microcontroller, the obvious way to obtain periodic events is to use a timer. Given a particular frequency at which the timer is updated, a quick calculation can be performed to discover how many times a timer needs to be incremented before we need to generate an event. So, given a clock frequency of 24MHz and a line rate of 37.320kHz, we calculate the number of timer increments required to produce the latter from the former:

24MHz / 37.320kHz = 24000000Hz / 37320Hz ≈ 643

So, if we set up a timer that counts up to 642 and then upon incrementing again to 643 actually starts again at zero, with the timer sending a signal when this “wraparound” occurs, we can have a mechanism providing a suitable frequency and then make things happen at that frequency. And this includes counting cycles at this particular frequency, meaning that we can increment our own counter by 1 to keep track of display lines. Every 622 display lines, we can initiate a vertical sync pulse.
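To make this more concrete, here is a minimal sketch in C of how such a timer might be configured, assuming the Timer2 register addresses given in the PIC32MX datasheet; my own code does this in assembly language, so treat this merely as an illustration:

#include <stdint.h>

/* Access a peripheral register via its KSEG1 virtual address. */
#define REG(address) (*(volatile uint32_t *) (address))

#define T2CON REG(0xBF800800)  /* Timer2 control register */
#define TMR2  REG(0xBF800810)  /* Timer2 counter value */
#define PR2   REG(0xBF800820)  /* Timer2 period register */

/* Configure Timer2 to wrap around at the line rate: counting 643
   cycles (0 to 642) of a 24MHz peripheral clock approximates the
   37.320kHz line rate calculated above. */
void timer_setup(void)
{
    T2CON = 0;       /* stop the timer, selecting a 1:1 prescaler */
    TMR2 = 0;        /* count from zero */
    PR2 = 642;       /* wrap around upon the increment to 643 */
    T2CON = 0x8000;  /* set the ON bit to start the timer */
}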

One aspect of vertical sync pulses that has not yet been mentioned is their duration. Various sources suggest that they should last for only two display lines, although the “gtf” tool specifies three lines instead. Our line-counting logic therefore needs to know that it should enable the vertical sync pulse by bringing it low at a particular starting line and then disable it by bringing it high again after two whole lines.

Generating Horizontal Sync Pulses

Horizontal sync pulses take place within each display line, have a specific duration, and they must start at the same time relative to the start of each line. Some video output demonstrations seem to use lots of precisely-timed instructions to achieve such things, but we want to use the peripherals of the microcontroller as much as possible to avoid wasting CPU time. Having considered various tricks involving specially formulated data that might be transferred from memory to act as a pulse, I was looking for examples of DMA usage when I found a mention of something called the Output Compare unit on the PIC32.

What the Output Compare (OC) units do is to take a timer as input and produce an output signal dependent on the current value of the timer relative to certain parameters. In clearer terms, you can indicate a timer value at which the OC unit will cause the output to go high, and you can indicate another timer value at which the OC unit will cause the output to go low. It doesn’t take much imagination to realise that this sounds almost perfect for generating the horizontal sync pulse:

  1. We take the timer previously set up which counts up to 643 and thus divides the display line period into units of 1/643.
  2. We identify where the pulse should be brought low and present that as the parameter for taking the output low.
  3. We identify where the pulse should be brought high and present that as the parameter for taking the output high.

Upon combining the timer and the OC unit, then configuring the output pin appropriately, we end up with a low pulse occurring at the line rate, but at a suitable offset from the start of each line.
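In the C sketch started above, the configuration might look like the following, reusing the REG macro and again taking the register addresses from the datasheet. My understanding is that, in the dual compare continuous pulse mode, OC1R provides the timer value for the rising edge and OC1RS the value for the falling edge, and the edge positions below are purely hypothetical ones to be adjusted for the desired pulse width and placement:

#define OC1CON REG(0xBF803000)  /* Output Compare 1 control register */
#define OC1R   REG(0xBF803010)  /* compare value for the rising edge */
#define OC1RS  REG(0xBF803020)  /* compare value for the falling edge */

/* Produce a low horizontal sync pulse at a fixed position within
   each line period of 643 timer counts. */
void hsync_setup(void)
{
    OC1CON = 0;       /* reset the unit; Timer2 is the time base (OCTSEL = 0) */
    OC1RS = 520;      /* hypothetical: bring the output low here */
    OC1R = 600;       /* hypothetical: bring the output high again here */
    OC1CON = 0x8005;  /* set the ON bit with OCM = 101: continuous pulses */
}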

VGA Display Line Structure

The structure of each visible display line in the VGA signal

In fact, the OC unit also proves useful in actually generating the vertical sync pulses, too. Although we have a timer that can tell us when it has wrapped around, we really need a mechanism to act upon this signal promptly, at least if we are to generate a clean signal. Unfortunately, handling an interrupt will introduce a delay between the timer wrapping around and the CPU being able to do something about it, and it is not inconceivable that this delay may vary depending on what the CPU has been doing.

So, what seems to be a reasonable solution to this problem is to count the lines and upon seeing that the vertical sync pulse should be initiated at the start of the next line, we can enable another OC unit configured to act as soon as the timer value is zero. Thus, upon wraparound, the OC unit will spring into action and bring the vertical sync output low immediately. Similarly, upon realising that the next line will see the sync pulse brought high again, we can reconfigure the OC unit to do so as soon as the timer value again wraps around to zero.
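Continuing the sketch, the line-counting logic might reconfigure a second OC unit along these lines, using the single compare modes that initialise the output and then drive it low or high upon a match (the OCM mode values being my reading of the datasheet):

#define OC2CON REG(0xBF803200)  /* Output Compare 2 control register */
#define OC2R   REG(0xBF803210)  /* compare value for the single match */

/* Bring vertical sync low as soon as the timer wraps around to zero
   at the start of the next line. */
void vsync_begin(void)
{
    OC2CON = 0;
    OC2R = 0;         /* act at timer value zero */
    OC2CON = 0x8002;  /* ON with OCM = 010: output high, driven low on match */
}

/* Bring vertical sync high again at the start of the line after the
   pulse has lasted for the required number of lines. */
void vsync_end(void)
{
    OC2CON = 0;
    OC2R = 0;
    OC2CON = 0x8001;  /* ON with OCM = 001: output low, driven high on match */
}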

Inserting the Colour Information

At this point, we can test the basic structure of the signal and see if our monitor likes it. But none of this is very interesting without being able to generate a picture, and so we need a way of getting pixel information from the microcontroller’s memory to its outputs. We previously concluded that Direct Memory Access (DMA) was the way to go in reading the pixel data from what is usually known as a framebuffer, sending it to another place for output.

As previously noted, I thought that the Parallel Master Port (PMP) might be the right peripheral to use. It provides an output register, confusingly called the PMDIN (parallel master data in) register, that lives at a particular address and whose value is exposed on output pins. On the PIC32MX270, only the least significant eight bits of this register are employed in emitting data to the outside world, and so a DMA destination having a one-byte size, located at the address of PMDIN, is chosen.

The source data is the framebuffer, of course. For various retrocomputing reasons hinted at above, I had decided to generate a picture 160 pixels in width, 256 lines in height, and with each byte providing eight bits of colour depth (allowing up to 256 distinct colours for each pixel). This requires 40 kilobytes and can therefore reside in the 64 kilobytes of RAM provided by the PIC32MX270. It was at this point that I learned a few things about the DMA mechanisms of the PIC32 that didn’t seem completely clear from the documentation.

Now, the documentation talks of “transactions”, “cells” and “blocks”, but I don’t think it describes them as clearly as it could do. Each “transaction” is just a transfer of a four-byte word. Each “cell transfer” is a collection of transactions that the DMA mechanism performs in a kind of batch, proceeding with these as quickly as it can until it either finishes the batch or is told to stop the transfer. Each “block transfer” is a collection of cell transfers. But what really matters is that if you want to transfer a certain amount of data and not have to keep telling the DMA mechanism to keep going, you need to choose a cell size that defines this amount. (When describing this, it is hard not to use the term “block” rather than “cell”, and I do wonder why they assigned these terms in this way because it seems counter-intuitive.)

You can perhaps use the following template to phrase your intentions:

I want to transfer <cell size> bytes at a time from a total of <block size> bytes, reading data starting from <source address>, having <source size>, and writing data starting at <destination address>, having <destination size>.

The total number of bytes to be transferred – the block size – is calculated from the source and destination sizes, with the larger chosen to be the block size. If we choose a destination size less than the source size, the transfers will not go beyond the area of memory defined by the specified destination address and the destination size. What actually happens to the “destination pointer” is not immediately obvious from the documentation, but for our purposes, where we will use a destination size of one byte, the DMA mechanism will just keep writing source bytes to the same destination address over and over again. (One might imagine the pointer starting again at the initial start address, or perhaps stopping at the end address instead.)

So, for our purposes, we define a “cell” as 160 bytes, being the amount of data in a single display line, and we only transfer one cell in a block. Thus, the DMA source is 160 bytes long, and even though the destination size is only a single byte, the DMA mechanism will transfer each of the source bytes into the destination. There is a rather unhelpful diagram in the documentation that perhaps tries to communicate too much at once, leading one to believe that the cell size is a factor in how the destination gets populated by source data, but the purpose of the cell size seems only to be to define how much data is transferred at once when a transfer is requested.

DMA Transfer Mechanism

The transfer of framebuffer data to PORTB using DMA cell transfers (noting that this hints at the eventual approach which uses PORTB and not PMDIN)

In the matter of requesting a transfer, we have already described the mechanism that will allow us to make this happen: when the timer signals the start of a new line, we can use the wraparound event to initiate a DMA transfer. It would appear that the transfer will happen as fast as both the source and the destination will allow, at least as far as I can tell, and so it is probably unlikely that the data will be sent to the destination too quickly. Once the transfer of a line’s pixel data is complete, we can do some things to set up the transfer for the next line, like changing the source data address to point to the next 160 bytes representing the next display line.

(We could actually set the block size to the length of the entire framebuffer – by setting the source size – and have the DMA mechanism automatically transfer each line in turn, updating its own address for the current line. However, I intend to support hardware scrolling, where the address of the first line of the screen can be adjusted so that the display starts part way through the framebuffer, reaches the end of the framebuffer part way down the screen, and then starts again at the beginning of the framebuffer in order to finish displaying the data at the bottom of the screen. The DMA mechanism doesn’t seem to support the necessary address wraparound required to manage this all by itself.)
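To gather the above together, here is how the cell transfer might be configured on DMA channel 0 in the continuing C sketch. The register addresses follow the datasheet, DMA start addresses must be physical ones (hence the masking of the KSEG bits), the destination is PORTB in anticipation of the approach eventually described below, and the Timer2 interrupt number is a hypothetical value that must be checked against the interrupt controller documentation:

#define DMACON   REG(0xBF883000)  /* DMA controller control register */
#define DCH0CON  REG(0xBF883060)  /* channel 0 control */
#define DCH0ECON REG(0xBF883070)  /* channel 0 event control */
#define DCH0SSA  REG(0xBF883090)  /* channel 0 source start address */
#define DCH0DSA  REG(0xBF8830A0)  /* channel 0 destination start address */
#define DCH0SSIZ REG(0xBF8830B0)  /* channel 0 source size */
#define DCH0DSIZ REG(0xBF8830C0)  /* channel 0 destination size */
#define DCH0CSIZ REG(0xBF8830F0)  /* channel 0 cell size */

/* DMA start addresses are physical addresses. */
#define PHYSICAL(address) ((uint32_t) (address) & 0x1FFFFFFF)

#define PORTB_PHYSICAL 0x1F886120  /* PORTB, according to the datasheet */
#define TIMER2_IRQ 9               /* hypothetical: check the interrupt tables */

static uint8_t framebuffer[256][160];

void dma_setup(void)
{
    DMACON = 0x8000;                     /* set the ON bit of the DMA controller */
    DCH0SSA = PHYSICAL(framebuffer);     /* source: the first display line */
    DCH0DSA = PORTB_PHYSICAL;            /* destination: the PORTB register */
    DCH0SSIZ = 160;                      /* source size: one line of pixels */
    DCH0DSIZ = 1;                        /* destination size: a single byte */
    DCH0CSIZ = 160;                      /* cell size: a whole line per request */
    DCH0ECON = (TIMER2_IRQ << 8) | 0x10; /* start a cell transfer on each timer event */
    DCH0CON = 0x93;                      /* CHEN and CHAEN bits with priority 3 */
}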

Output Complications

Having assumed that the PMP peripheral would be an appropriate choice, I soon discovered some problems with the generated output. Although the data that I had stored in the RAM seemed to be emitted as pixels in appropriate colours, there were gaps between the pixels on the screen, despite the documentation vaguely indicating that the PMDIN register was more or less continuously updated. Evidently, the actual output signals were being driven low between each pixel, causing black-level gaps and ruining the result.

I wondered if anything could be done about this issue. PMP is really intended as some kind of memory interface, and it isn’t unreasonable for it to only maintain valid data for certain periods of time, modifying control signals to define this valid data period. That PMP can be used to drive LCD panels is merely a result of those panels themselves upholding this kind of interface. For those of you familiar with microcontrollers, the solution to my problem was probably obvious several paragraphs ago, but it needed me to reconsider my assumptions and requirements before I realised what I should have been doing all along.

Unlike SPI, which concerns itself with the bit-by-bit serial output of data, PMP concerns itself with the multiple-bits-at-once parallel output of data, and all I wanted to do was to present multiple bits to a memory location and have them translated to a collection of separate signals. But, of course, this is exactly how normal I/O (input/output) pins are provided on microcontrollers! They all seem to provide “PORT” registers whose bits correspond to output pins, and if you write a value to those registers, all the pins can be changed simultaneously. (This feature is obscured by platforms like Arduino where functions are offered to manipulate only a single pin at once.)
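As a trivial illustration in the C sketch, with the PORTB address again taken from the datasheet:

#define PORTB REG(0xBF886120)  /* PORTB register, according to the datasheet */

/* Writing a value to PORTB changes all the mapped output pins in a
   single operation, here setting alternate pins high and low. */
void portb_demo(void)
{
    PORTB = 0x55;
}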

And so, I changed the DMA destination to be the PORTB register, which on the PIC32MX270 is the only PORT register with enough bits corresponding to I/O pins to be useful enough for this application. Even then, PORTB does not have a complete mapping from bits to pins: some pins that are available in other devices have been dedicated to specific functions on the PIC32MX270F256B and cannot be used for I/O. So, it turns out that we can only employ at most seven bits of our pixel data in generating signal data:

PORTB Pin Availability on the PIC32MX270F256B
Bit:  15    14    13    12    11    10    9     8     7     6     5     4     3     2     1     0
Pin:  RPB15 RPB14 RPB13 –     RPB11 RPB10 RPB9  RPB8  RPB7  –     RPB5  RPB4  RPB3  RPB2  RPB1  RPB0
(The dashes mark bits without corresponding pins on this device.)

We could target the first byte of PORTB (bits 0 to 7) or the second byte (bits 8 to 15), but either way we will encounter an unmapped bit. So, instead of choosing a colour representation making use of eight bits, we have to make do with only seven.

Initially, not noticing that RPB6 was not available, I was using a “RRRGGGBB” or “332” representation. But persuaded by others in a similar predicament, I decided to choose a representation where each colour channel gets two bits, with a separate intensity bit used to adjust the final intensity of the basic colour result. This also means that greyscale output is possible because the channels can be balanced against each other.

The 2-bit-per-channel plus intensity colours

The colours employing two bits per channel plus one intensity bit, perhaps not shown completely accurately due to circuit inadequacies and the usual white balance issues when taking photographs

It is worth noting at this point that since we have left the 8-bit limitations of the PMP peripheral far behind us now, we could choose to populate two bytes of PORTB at once, aiming for sixteen bits per pixel but actually getting fourteen bits per pixel once the unmapped bits have been taken into account. However, this would double our framebuffer memory requirements for the same resolution, and we don’t have that much memory. There may be devices with more than sixteen bits mapped in the 32-bit PORTB register (or in one of the other PORT registers), but they had better have more memory to be useful for greater colour depths.

Back in Black

One other matter presented itself as a problem. It is all very well generating a colour signal for the pixels in the framebuffer, but what happens at the end of each DMA transfer once a line of pixels has been transmitted? For the portions of the display not providing any colour information, the channel signals should be held at zero, yet it is likely that the last pixel on any given line is not at the lowest possible (black) level. And so the DMA transfer will have left a stray value in PORTB that could then confuse the monitor, producing streaks of colour in the border areas of the display, making the monitor unsure about the black level in the signal, and also potentially confusing some monitors about the validity of the picture, too.

As with the horizontal sync pulses, we need a prompt way of switching off colour information as soon as the pixel data has been transferred. We cannot really use an Output Compare unit because that only affects the value of a single output pin, and although we could wire up some kind of blanking in our external circuit, it is simpler to look for a quick solution within the capabilities of the microcontroller. Fortunately, such a quick solution exists: we can “chain” another DMA channel to the one providing the pixel data, thereby having this new channel perform a transfer as soon as the pixel data has been sent in its entirety. This new channel has one simple purpose: to transfer a single byte of black pixel data. By doing this, the monitor will see black in any borders and beyond the visible regions of the display.
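In the C sketch, the chained channel might be set up as follows, with the chain-related bits in DCH1CON being my interpretation of the datasheet, channel 1 being enabled upon the completion of transfers on the higher-priority channel 0:

#define DCH1CON  REG(0xBF883120)  /* channel 1 control */
#define DCH1SSA  REG(0xBF883150)  /* channel 1 source start address */
#define DCH1DSA  REG(0xBF883160)  /* channel 1 destination start address */
#define DCH1SSIZ REG(0xBF883170)  /* channel 1 source size */
#define DCH1DSIZ REG(0xBF883180)  /* channel 1 destination size */
#define DCH1CSIZ REG(0xBF8831B0)  /* channel 1 cell size */

static const uint8_t black = 0;   /* a single byte of black pixel data */

void blanking_setup(void)
{
    DCH1SSA = PHYSICAL(&black);   /* source: the black byte */
    DCH1DSA = PORTB_PHYSICAL;     /* destination: the PORTB register */
    DCH1SSIZ = 1;
    DCH1DSIZ = 1;
    DCH1CSIZ = 1;
    DCH1CON = 0xB2;               /* CHEN, CHCHN and CHAEN bits with priority 2 */
}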

Wiring Up

Of course, the microcontroller still has to be connected to the monitor somehow. First of all, we need a way of accessing the pins of a VGA socket or cable. One reasonable approach is to obtain something that acts as a socket and that breaks out the different signals from a cable, connecting the microcontroller to these broken-out signals.

Wanting to get something for this task quickly and relatively conveniently, I found a product at a local retailer that provides a “male” VGA connector and screw-adjustable terminals to break out the different pins. But since the VGA cable also has a male connector, I also needed to get a “gender changer” for VGA that acts as a “female” connector in both directions, thus accommodating the VGA cable and the male breakout board connector.

Wiring up to the broken-out VGA connector pins is mostly a matter of following diagrams and the pin numbering scheme, illustrated well enough in various resources (albeit with colour signal transposition errors in some of them). Pins 1, 2 and 3 need some special consideration for the red, green and blue signals, and we will look at them in a moment. However, pins 13 and 14 are the horizontal and vertical sync pins, respectively, and these can be connected directly to the PIC32 output pins in this case, since the 3.3V output from the microcontroller is supposedly compatible with the “TTL” levels. Pins 5 through 8 and pin 10 can be connected to ground, with pin 9 typically being a key or +5V supply pin that is best left unconnected.

We have seen mentions of colour signals with magnitudes of up to 0.7V, but no explicit mention of how they are formed has been presented in this article. Fortunately, everyone is willing to show how they converted their digital signals to an analogue output, with most of them electing to use a resistor network to combine each output pin within a channel to produce a hopefully suitable output voltage.

Here, with two bits per channel, I take the most significant bit for a channel and send it through a 470ohm resistor. Meanwhile, the least significant bit for the channel is sent through a 1000ohm resistor. Thus, the former contributes more to the magnitude of the signal than the latter. If we were only dealing with channel information, this would be as much as we need to do, but here we also employ an intensity bit whose job it is to boost the channels by a small amount, making sure not to allow the channels to pollute each other via this intensity sub-circuit. Here, I feed the intensity output through a 2200ohm resistor and then to each of the channel outputs via signal diodes.
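To get a feel for the numbers involved, here is a small self-contained program estimating the level each output contributes into the monitor’s nominal 75 ohm termination, ignoring the diodes and the loading effects of the other resistors, so the figures are only indicative:

#include <stdio.h>

/* Estimate the voltage produced across a 75 ohm load by a single
   3.3V output pin driving through a series resistor. */
static double level(double resistance)
{
    const double load = 75.0, supply = 3.3;
    return supply * load / (resistance + load);
}

int main(void)
{
    printf("Most significant bit alone:  %.2fV\n", level(470.0));
    printf("Least significant bit alone: %.2fV\n", level(1000.0));
    printf("Intensity alone:             %.2fV\n", level(2200.0));
    return 0;
}

With all the outputs active, the combined level approaches the 0.7V target, although the true figure will be somewhat lower because each resistor loads the others and the diodes drop part of the intensity contribution.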

VGA Output Circuit

The circuit showing connections relevant to VGA output (generic connections are not shown)

The Final Picture

I could probably go on and cover other aspects of the solution, but the fundamental aspects are probably dealt with sufficiently above to help others reproduce this experiment themselves. Populating memory with usable image data, at least in this solution, involves copying data to RAM, and I did experience problems with accessing RAM that are probably related to CPU initialisation (as covered in my previous article) and to synchronising the memory contents with what the CPU has written via its cache.

As for the actual picture data, the RGB-plus-intensity representation is not likely to be the format of most images these days. So, to prepare data for output, some image processing is needed. A while ago, I made a program to perform palette optimisation and dithering on images for the Acorn Electron, and I felt that it was going to be easier to adapt the dithering code than it was to figure out the necessary techniques required for software like ImageMagick or the Python Imaging Library. The pixel data is then converted to assembly language data definition statements and incorporated into my PIC32 program.

VGA output from a PIC32 microcontroller

VGA output from a PIC32 microcontroller, featuring a picture showing some Oslo architecture, with the PIC32MX270 being powered (and programmed) by the Arduino Duemilanove, and with the breadboards holding the necessary resistors and diodes to supply the VGA breakout and, beyond that, the cable to the monitor

To demonstrate control over the visible region, I deliberately adjusted the display frequencies so that the monitor would consider the signal to be carrying an image 800 pixels by 600 pixels at a refresh rate of 60Hz. Since my framebuffer is only 256 lines high, I double the lines to produce 512 lines for the display. It would seem that choosing a line rate to try and produce 512 lines has the monitor trying to show something compatible with the traditional 640×480 resolution and thus lines are lost off the screen. I suppose I could settle for 480 lines or aim for 300 lines instead, but I actually don’t mind having a border around the picture.

The on-screen menu showing the monitor's interpretation of the signal

The on-screen menu showing the monitor's interpretation of the signal

It is worth noting that I haven’t really mentioned a “pixel clock” or “dot clock” so far. As far as the display receiving the VGA signal is concerned, there is no pixel clock in that signal. And as far as we are concerned, the pixel clock is only important when deciding how quickly we can get our data into the signal, not in actually generating the signal. We can generate new colour values as slowly (or as quickly) as we might like, and the result will be wider (or narrower) pixels, but it shouldn’t make the actual signal invalid in any way.

Of course, it is important to consider how quickly we can generate pixels. Previously, I mentioned a 24MHz clock being used within the PIC32, and it is this clock that is used to drive peripherals and this clock’s frequency that will limit the transfer speed. As noted elsewhere, a pixel clock frequency of 25MHz is used to support the traditional VGA resolution of 640×480 at 60Hz. With the possibilities of running the “peripheral clock” in the PIC32MX270 considerably faster than this, it becomes a matter of experimentation as to how many pixels can be supported horizontally.

Some Oslo street art being displayed by the PIC32

Some Oslo street art being displayed by the PIC32

For my own purposes, I have at least reached the initial goal of generating a stable and usable video signal. Further work is likely to involve attempting to write routines to modify the framebuffer, maybe support things like scrolling and sprites, and even consider interfacing with other devices.

Naturally, this project is available as Free Software from its own repository. Maybe it will inspire or encourage you to pursue something similar, knowing that you absolutely do not need to be any kind of “expert” to stubbornly persist and to eventually get results!

Evaluating PIC32 for Hardware Experiments

Friday, May 19th, 2017

Some time ago I became aware of the PIC32 microcontroller platform, perhaps while following various hardware projects, pursuing hardware-related interests, and looking for pertinent documentation. Although I had heard of PIC microcontrollers before, my impression of them was that they were mostly an alternative to other low-end computing products like the Atmel AVR series, but with a development ecosystem arguably more reliant on its vendor than the Atmel products for which tools like avr-gcc and avrdude exist as Free Software and have gone on to see extensive use, perhaps most famously in connection with the Arduino ecosystem.

What made PIC32 stand out when I first encountered it, however, was that it uses the MIPS32 instruction set architecture instead of its own specialised architecture. Consequently, instead of being reliant on the silicon vendor and random third-party tool providers for proprietary tools, the possibility of using widely-used Free Software tools exists. Moreover, with a degree of familiarity with MIPS assembly language, thanks to the Ben NanoNote, I felt that there might be an opportunity to apply some of my elementary skills to another MIPS-based system and gain some useful experience.

Some basic investigation occurred before I made any attempt to acquire hardware. As anyone having to pay attention to the details of hardware can surely attest, it isn’t much fun to obtain something only to find that some necessary tool required for the successful use of that hardware happens to be proprietary, only works on proprietary platforms, or is generally a nuisance in some way that makes you regret the purchase. Looking around at various resources such as the Pinguino Web site gave me some confidence that there were people out there using PIC32 devices with Free Software. (Of course, the eventual development scenario proved to be different from that envisaged in these initial investigations, but previous experience has taught me to expect that things may not go as originally foreseen.)

Some Discoveries

So that was perhaps more than enough of an introduction, given that I really want to focus on some of my discoveries in using my acquired PIC32 devices, hoping that writing them up will help others to use Free Software with this platform. So, below, I will present a few discoveries that may well, for all I know, be “obvious” to people embedded in the PIC universe since it began, or that might be “superfluous” to those happy that Microchip’s development tools can obscure the operation of the hardware to offer a simpler “experience”.

I should mention at this point that I chose to acquire PDIP-profile products for use with a solderless breadboard. This is the level of sophistication at which the Arduino products mostly operate and it allows convenient prototyping involving discrete components and other electronic devices. The evidence from the chipKIT and Pinguino sites suggested that it would be possible to set up a device on a breadboard, wire it up to a power supply with a few supporting components, and then be able to program it. (It turned out that programming involved another approach than indicated by that latter reference, however.)

The 28-pin product I elected to buy was the PIC32MX270F256B-50/SP. I also bought some capacitors that were recommended for wiring up the device. In the picture below, you can just about see the capacitor connecting two of the pins, and there is also a resistor pulling up one of the pins. I recommend obtaining a selection of resistors of different values so as to be able to wire up various circuits as you encounter them. Note that the picture does not show a definitive wiring guide: please refer to the product documentation or to appropriate project documentation for such things.

PIC32 device on a mini-breadboard

PIC32 device on a mini-breadboard connected to an Arduino Duemilanove for power and programming

Programming the Device

Despite the apparent suitability of a program called pic32prog, recommended by the “cheap DIY programmer” guide, I initially found success elsewhere. I suppose it didn’t help that the circuit diagram was rather hard to follow for me, as someone who isn’t really familiar with certain electrical constructs that have been mixed in, arguably without enough explanation.

Initial Recommendation: ArduPIC32

Instead, I looked for a solution that used an Arduino product (not something procured from ephemeral Chinese “auction site” vendors) and found ArduPIC32 living a quiet retirement in the Google Code Archive. Bypassing tricky voltage level conversion and connecting an Arduino Duemilanove with the PIC32 device on its 5V-tolerant JTAG-capable pins, ArduPIC32 mostly seemed to work, although I did alter it slightly to work the way I wanted to and to alleviate the usual oddness with Arduino serial-over-USB connections.

However, I didn’t continue to use ArduPIC32. One reason is that programming using the JTAG interface is slow, but a much more significant reason is that the use of JTAG means that the pins on the PIC32 associated with JTAG cannot be used for other things. This is either something that “everybody” knows or just something that Microchip doesn’t feel is important enough to mention in convenient places in the product documentation. More on this topic below!

Final Recommendation: Pickle (and Nanu Nanu)

So, I did try and use pic32prog and the suggested circuit, but had no success. Then, searching around, I happened to find some very useful resources indeed: Pickle is a GPL-licensed tool for uploading data to different PIC devices including PIC32 devices; Nanu Nanu is a GPL-licensed program that runs on AVR devices and programs PIC32 devices using the ICSP interface. Compiling the latter for the Arduino and uploading it in the usual way (actually done by the Makefile), it can then run on the Arduino and be controlled by the Pickle tool.

Admittedly, I did have some problems with the programming circuit, most likely self-inflicted, but the developer of these tools was very responsive and, as I know myself from being in his position in other situations, provided the necessary encouragement that was perhaps most sorely lacking to get the PIC32 device talking. I used the “sink” or “open drain” circuit so that the Arduino would be driving the PIC32 device using a suitable voltage and not the Arduino’s native 5V. Conveniently, this is the default configuration for Nanu Nanu.

PIC32 on breadboard with Arduino programming circuit

PIC32 on breadboard with Arduino programming circuit (and some LEDs for diagnostic purposes)

I should point out that the Pinguino initiative promotes USB programming similar to that employed by the Arduino series of boards. Even though that should make programming very easy, it is still necessary to program the USB bootloader on the PIC32 device using another method in the first place. And for my experiments involving an integrated circuit on a breadboard, setting up a USB-based programming circuit is a distraction that would have complicated the familiarisation process and would have mostly duplicated the functionality that the Arduino can already provide, even though this two-stage programming circuit may seem a little contrived.

Compiling Code for the Device

This is perhaps the easiest problem to solve, strongly motivating my choice of PIC32 in the first place…

Recommendation: GNU Toolchain

PIC32 uses the MIPS32 instruction set. Since MIPS has been around for a very long time, and since the architecture was prominent in workstations, servers and even games consoles in the late 1980s and 1990s, remaining in widespread use in more constrained products such as routers as this century has progressed, the GNU toolchain (GCC, binutils) has had a long time to comfortably support MIPS. Although the computer you are using is not particularly likely to be MIPS-based, cross-compiling versions of these tools can be built to run on, say, x86 or x86-64 while generating MIPS32 executable programs.

And fortunately, Debian GNU/Linux provides the mipsel-linux-gnu variant of the toolchain packages (at least in the unstable version of Debian) that makes the task of building software simply a matter of changing the definitions for the compiler, linker and other tools in one’s Makefile to use these variants instead of the unprefixed “host” gcc, ld, and so on. You can therefore keep using the high-quality Free Software tools you already know. The binutils-mipsel-linux-gnu package provides an assembler, if you just want to practise your MIPS assembly language, as well as a linker and tools for creating binaries. Meanwhile, the gcc-mipsel-linux-gnu package provides a C compiler.
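For instance, assuming the Debian-packaged tools mentioned above are installed, the relevant definitions in a Makefile might look like this:

CC = mipsel-linux-gnu-gcc
AS = mipsel-linux-gnu-as
LD = mipsel-linux-gnu-ld
OBJCOPY = mipsel-linux-gnu-objcopy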

Perhaps the only drawback of using the GNU tools is that people using the proprietary tools supplied by Microchip and partners will post example code that uses special notation interpreted in a certain way by those products. Fortunately, with some awareness that this is going on, we can still support the necessary functionality in our own way, as described below.

Configuring the Device

With the PIC32, and presumably with the other PIC products, there is a distinct activity of configuring the device when programming it with program code and data. This isn’t so obvious until one reads the datasheets, tries to find a way of changing some behaviour of the device, and then stumbles across the DEVCFG registers. These registers cannot be set in a running program: instead, they are “programmed” before the code is run.

You might wonder what the distinction is between “programming” the device to take the code you have written and “programming” the configuration registers, and there isn’t much difference conceptually. All that matters is that you have your program written into one part of the device’s memory and you also ask for certain data values to be written to the configuration registers. How this is done in the Microchip universe is with something like this:

#pragma config SOMETHING = SOMEVALUE;

With a basic GNU tool configuration, we need to find the equivalent operation and express it in a different way (at least as far as I know, unless someone has implemented an option to support this kind of notation in the GNU tools). The mechanism for achieving this is related to the linker script and is described in a section of this article presented below. For now, we will concentrate on what configuration settings we need to change.

Recommendation: Disable JTAG

As briefly mentioned above, one thing that “everybody” knows, at least if they are using Microchip’s own tools and are busy copying code from Microchip’s examples, is that the JTAG functionality takes over various pins and won’t let you use them, even if you switch on a “peripheral” in the microcontroller that needs to use those pins. Maybe I am naive about how intrusive JTAG should or should not be, but the lesson from this matter is just to configure the device to not have JTAG features enabled and to use the ICSP programming method recommended above instead.

Recommendation: Disable Watchdog Timer

Again, although I am aware of the general concept of a watchdog timer – something that resets a device if it thinks that the device has hung, crashed, or experienced something particularly bad – I did not expect something like this to necessarily be configured by default. In case it is, and one does see lots of code assuming so, then it should be disabled as well. Otherwise, I can imagine that you might experience spontaneous resets for no obvious reason.

Recommendation: Configure the Oscillator and Clocks

If you need to change the oscillator frequency or the origin of the oscillator used by the PIC32, it is perhaps best to do this in the configuration registers rather than try and mess around with this while the device is running. Indeed, some configuration is probably unavoidable even if there is a need to, say, switch between oscillators at run-time. Meanwhile, the relationship between the system clock (used by the processor to execute instructions) and the peripheral clock (used to interact with other devices and to run things like timers) is defined in the configuration registers.

Linker Scripts

So, to undertake the matter of configuration, a way is needed to express the setting of configuration register values in a general way. For this, we need to take a closer look at linker scripts. If you routinely compile and link software, you will be using linker scripts without realising it, because such scripts are telling the tools things about where parts of the program should be stored, what kinds of addresses they should be using, and so on. Here, we need to take more of an interest in such matters.

Recommendation: Expand a Simple Script

Writing linker scripts does not seem like much fun. The syntax is awkward to read and to produce as a human being, and knowledge about tool output is assumed. However, it is possible to start with a simple script that works for someone else in a similar situation and to modify it conservatively in order to achieve what you need. I started out with one that just defined the memory regions and a few sections. To avoid reproducing all the details here, I will just show what the memory regions for a configuration register look like:

  config2                    : ORIGIN = 0xBFC00BF4, LENGTH = 0x4
  physical_config2           : ORIGIN = 0x3FC00BF4, LENGTH = 0x4

This will be written in the MEMORY construct in the script. What they tell us is that config2 is a region of four bytes starting at virtual address 0xBFC00BF4, which is the location of the DEVCFG2 register as specified in the PIC32MX270 documentation. However, this register actually has a physical address of 0x3FC00BF4. The difference between virtual addresses and physical addresses is perhaps easiest to summarise by saying that CPU instructions would use the virtual address when referencing the register, whereas the actual memory of the device employs the physical address to refer to this four-byte region.

Meanwhile, in the SECTIONS construct, there needs to be something like this:

  .devcfg2 : {
        *(.devcfg2)
        } > config2 AT > physical_config2

Now you might understand my remark about the syntax! Nevertheless, what these things do is to tell the tools to put things from a section called .devcfg2 in the physical_config2 memory region and, if there were to be any address references in the data (which there aren’t in this case), to have them use the addresses in the config2 region.

Recommendation: Define Configuration Sections and Values in the Code

Since I have been using assembly language, here is what I do in my actual program source file. Having looked at the documentation and figured out which configuration register I need to change, I introduce a section in the code that defines the register value. For DEVCFG2, it looks like this:

.section .devcfg2, "a"
.word 0xfff9fffb        /* DEVCFG2<18:16> = FPLLODIV<2:0> = 001;
                        DEVCFG2<6:4> = FPLLMUL<2:0> = 111;
                        DEVCFG2<2:0> = FPLLIDIV<2:0> = 011 */

Here, I fully acknowledge that this might not be the most optimal approach, but when you’re learning about all these things at once, you make progress as best you can. In any case, what this does is to tell the assembler to include the .devcfg2 section and to populate it with the specified “word”, which is four bytes on the 32-bit PIC32 platform. This word contains the value of the register which has been expressed in hexadecimal with the most significant digit first.

Returning to our checklist of configuration tasks, what we now need to do is to formulate each one in terms of configuration register values, introduce the necessary sections and values, and adjust the values to contain the appropriate bit patterns. The above example shows how the DEVCFG2 bits are adjusted and set in the value of the register. Here is a short amplification of the operation:

DEVCFG2 Bits
31…28: 1111 (f)
27…24: 1111 (f)
23…20: 1111 (f)
19…16: 1001 (9) – FPLLODIV<2:0> occupies bits 18…16
15…12: 1111 (f)
11…8:  1111 (f)
7…4:   1111 (f) – FPLLMUL<2:0> occupies bits 6…4
3…0:   1011 (b) – FPLLIDIV<2:0> occupies bits 2…0

Here, the bits of interest are those in the groups labelled with field names, and they have been changed to the desired values. It turns out that we can set the other bits to 1 for the functionality we want (or don’t want) in this case.

By the way, the JTAG functionality is disabled in the DEVCFG0 register (JTAGEN, bit 2, on this device). The watchdog timer is disabled in DEVCFG1 (FWDTEN, bit 23, on this device).

Recommendation: Define Regions for Exceptions and Interrupts

The MIPS architecture has the processor jump to certain memory locations when bad things happen (exceptions) or when other things happen (interrupts). We will cover the details of this below, but while we are setting up the places in memory where things will reside, we might as well define where the code to handle exceptions and interrupts will be living:

  .flash : { *(.flash*) } > kseg0_program_mem AT > physical_program_mem

This will be written in the SECTIONS construct. It relies on a memory region being defined, which would appear in the MEMORY construct as follows:

  kseg0_program_mem    (rx)  : ORIGIN = 0x9D000000, LENGTH = 0x40000
  physical_program_mem (rx)  : ORIGIN = 0x1D000000, LENGTH = 0x40000

These definitions allow the .flash section to be placed at 0x9D000000 but actually be written to memory at 0x1D000000.

Initialising the Device

On the various systems I have used in the past, even when working in assembly language I never had to deal with the earliest stages of the CPU’s activity. However, on the MIPS systems I have used in more recent times, I have been confronted with the matter of installing code to handle system initialisation, and this does require some knowledge of what MIPS processors would like to do when something goes wrong or if some interrupt arrives and needs to be processed.

The convention with the PIC32 seems to be that programs are installed within the MIPS KSEG0 region (one of the four principal regions) of memory, specifically at address 0x9FC00000, and so in the linker script we have MEMORY definitions like this:

  kseg0_boot_mem       (rx)  : ORIGIN = 0x9FC00000, LENGTH = 0xBF0
  physical_boot_mem    (rx)  : ORIGIN = 0x1FC00000, LENGTH = 0xBF0

As you can see, this region is far shorter than the 512MB of the KSEG0 region in its entirety. Indeed, 0xBF0 is only 3056 bytes! So, we need to put more substantial amounts of code elsewhere, some of which will be tasked with handling things when they go wrong.

Recommendation: Define Exception and Interrupt Handlers

As we have seen, the matter of defining routines to handle errors and interrupt conditions falls on the unlucky Free Software developer in this case. When something goes wrong, like the CPU not liking an instruction it has just seen, it will jump to a predefined location and try and execute code to recover. By default, with the PIC32, this location will be at address 0x80000000 which is the start of RAM, but if the RAM has not been configured then the CPU will not achieve very much trying to load instructions from that address.

Now, it can be tempting to set the “exception base”, as it is known, to be the place where our “boot” code is installed (at 0x9FC00000). So if something bad does happen, our code will start running again “from the top”, a bit like what used to happen if you ever wrote BASIC code that said…

10 ON ERROR GOTO 10

Clearly, this isn’t particularly desirable because it can mask problems. Indeed, I found that because I had not observed a step in the little dance required to change interrupt locations, my program would be happily restarting itself over and over again upon receiving interrupts. This is where the work done in the linker script starts to pay off: we can move the exception handler, this being the code residing at the exception base, to another region of memory and tell the CPU to jump to that address instead. We should therefore never have unscheduled restarts occurring once this is done.

Again, I have been working in assembly language, so I position my exception handling code using a directive like this:

.section .flash, "a"

exception_handler:
    /* Exception handling code goes here. */

Note that .flash is what we mentioned in the linker script. After this directive, the exception handler is defined so that the CPU has something to jump to. But exceptions are just one kind of condition that may occur, and we also need to handle interrupts. Although we might be able to handle both things together, you can instead position an interrupt handler after the exception handler at a well-defined offset, and the CPU can be told to use that when it receives an interrupt. Here is what I do:

.org 0x200

interrupt_handler:
    /* Interrupt handling code goes here. */

The .org directive tells the assembler that the interrupt handler will reside at an offset of 0x200 from the start of the .flash section. This number isn’t something I have made up: it is defined by the MIPS architecture and will be observed by the CPU when suitably configured.

So that leaves the configuration. Although one sees a lot of advocacy for the “multi-vector” interrupt handling support of the PIC32, possibly because the Microchip example code seems to use it, I wanted to stick with the more widely available “single-vector” support which is what I have effectively described above: you have one place the CPU jumps to and it is then left to the code to identify the interrupt condition – what it is that happened, exactly – and then handle the condition appropriately. (Multi-vector handling has the CPU identify the kind of condition and then choose a condition-specific location to jump to all by itself.)

The following steps are required for this to work properly:

  1. Make sure that the CP0 (co-processor 0) STATUS register has the BEV bit set. Otherwise things will fail seemingly inexplicably.
  2. Set the CP0 EBASE (exception base) register to use the exception handler address. This changes the target of the jump that occurs when an exception or interrupt occurs.
  3. Set the CP0 INTCTL (interrupt control) register to use a non-zero vector spacing, even though this setting probably has no relevance to the single-vector mode.
  4. Make sure that the CP0 CAUSE (exception cause) register has the IV bit set. This tells the CPU that the interrupt handler is at that magic 0x200 offset from the exception handler, meaning that interrupts should be dispatched there instead of to the exception handler.
  5. Now make sure that the CP0 STATUS register has the BEV bit cleared, so that the CPU will now use the new handlers.

To enable exceptions and interrupts, the IE bit in the CP0 STATUS register must be set, but there are also other things that must be done for them to actually be delivered.
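Although I do all of this in assembly language, the numbered steps above can be sketched in C using small inline assembly helpers for the CP0 accesses; the register numbers, select values and bit positions below follow the MIPS documentation, but do verify them before relying on them:

#include <stdint.h>

/* Read and write the CP0 STATUS register (register 12, select 0). */
static inline uint32_t status_get(void)
{
    uint32_t value;
    __asm__ volatile ("mfc0 %0, $12" : "=r" (value));
    return value;
}

static inline void status_set(uint32_t value)
{
    __asm__ volatile ("mtc0 %0, $12" : : "r" (value));
}

extern char exception_handler[];  /* the handler in the .flash section */

void handlers_setup(void)
{
    uint32_t cause;
    uint32_t spacing = 0x20;  /* a non-zero vector spacing */

    /* 1. Set BEV (bit 22) in STATUS. */
    status_set(status_get() | 0x00400000);

    /* 2. Set EBASE (register 15, select 1) to the handler address. */
    __asm__ volatile ("mtc0 %0, $15, 1" : : "r" (exception_handler));

    /* 3. Set the vector spacing in INTCTL (register 12, select 1). */
    __asm__ volatile ("mtc0 %0, $12, 1" : : "r" (spacing));

    /* 4. Set IV (bit 23) in CAUSE (register 13, select 0). */
    __asm__ volatile ("mfc0 %0, $13" : "=r" (cause));
    __asm__ volatile ("mtc0 %0, $13" : : "r" (cause | 0x00800000));

    /* 5. Clear BEV again so that the new handlers are used. */
    status_set(status_get() & ~0x00400000);
}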

Recommendation: Handle the Initial Error Condition

As I found out with the Ben NanoNote, a MIPS device will happily run in its initial state, but it turns out that this state is some kind of “error” state that prevents exceptions and interrupts from being delivered, even though interrupts may have been enabled. This does make some kind of sense: if a program is in the process of setting up interrupt handlers, it doesn’t really want an interrupt to occur before that work is done.

So, the MIPS architecture defines some flags in the CP0 STATUS register to override normal behaviour as a kind of “fail-safe” control. The ERL (error level) bit is set when the CPU starts up, preventing errors (or probably most errors, at least) as well as interrupts from interrupting the execution of the installed code. The EXL (exception level) bit may also be set, preventing exceptions and interrupts from occurring even when ERL is clear. Thus, both of these bits must be cleared for interrupts to be enabled and delivered, and what happens next rather depends on how successful we were in setting up those handlers!
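Reusing the status helpers from the sketch above, leaving this initial state might look like the following, with ERL at bit 2 and EXL at bit 1 of the STATUS register, and with IE at bit 0 enabling interrupts afterwards:

/* Leave the initial error state and enable interrupt delivery. */
void interrupts_enable(void)
{
    status_set(status_get() & ~0x00000006);  /* clear ERL (bit 2) and EXL (bit 1) */
    status_set(status_get() | 0x00000001);   /* set IE (bit 0) */
}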

Recommendation: Set up the Global Offset Table

Something that I first noticed when looking at code for the Ben NanoNote was the initialisation of a register to refer to something called the global offset table (or GOT). As anyone already familiar with MIPS assembly language or the way code is compiled for the architecture will know, programs follow a convention where they refer to objects via a table containing the addresses of those objects. For example:

Global Offset Table
Offset from Start    Member
+0                   0
+4                   address of _start
+8                   address of my_routine
+12                  address of other_routine

But in order to refer to the table, something has to have the address of the table. This is where the $gp register comes in: it holds that address and lets the code access members of the table relative to the address.

What seems to work for setting up the $gp register is this:

        lui $gp, %hi(_GLOBAL_OFFSET_TABLE_)
        ori $gp, $gp, %lo(_GLOBAL_OFFSET_TABLE_)

Here, the upper 16 bits of the $gp register are set with the “high” 16 bits of the value assigned to the special _GLOBAL_OFFSET_TABLE_ symbol, which provides the address of the special .got section defined in the linker script. This value is then combined using a logical “or” operation with the “low” 16 bits of the symbol’s value.

(I’m sure that anyone reading this far will already know that MIPS instructions are fixed length and are the same length as the address being loaded here, so it isn’t possible to fit the address into the load instruction. So each half of the address has to be loaded separately by different instructions. If you look at the assembled output of an assembly language program employing the “li” instruction, you will see that the assembler breaks this instruction down into “lui” and “ori” if it needs to. Note well that you cannot rely on the “la” instruction to load the special symbol into $gp precisely because it relies on $gp having already been set up.)

Rounding Off

I could probably go on about MIPS initialisation rituals, setting up a stack for function calls, and so on, but that isn’t really what this article is about. My intention here is to leave enough clues and reminders for other people in a similar position and perhaps even my future self.

Even though I have some familiarity with the MIPS architecture, I suppose you might be wondering why I am not evaluating more open hardware platforms. I admit that it is partly laziness: I could get into doing FPGA-related stuff and deploy open microcontroller cores, maybe even combining them with different peripheral circuits, but that would be a longer project with a lot of familiarisation involved, plus I would have to choose carefully to get a product supported by Free Software. It is also cheapness: I could have ordered a HiFive1 board and started experimenting with the RISC-V architecture, but that board seems expensive for what it offers, at least once you disregard the pioneering aspects of the product and focus on the applications of interest.

So, for now, I intend to move slowly forwards and gain experiences with an existing platform. In my next article on this topic, I hope to present some of the things I have managed to achieve with the PIC32 and a selection of different components and technologies.

Funding Free Software

Tuesday, March 28th, 2017

My last post raised the issue of funding Free Software, which I regard as an essential way of improving both the user and developer experiences around the use and creation of Free Software. In short, asking people to just volunteer more of their spare time, in order to satisfy the erroneous assumption that Free Software must be free of charge, is neither sustainable nor likely to grow the community around Free Software to which both users and developers belong.

In response, Eric Jones raised some pertinent themes: attitudes to paying for things; judgements about the perceived monetary value of those things; fund-raising skills and what might be called business or economic awareness; the potentially differing predicaments of infrastructure projects and user-facing projects. I gave a quick response in a comment of my own, but I also think that a more serious discussion could be had to try and find real solutions to the problem of sustainable funding for Free Software. I do not think that a long message to the FSFE discussion mailing list is the correct initial step, however, and so this article is meant to sketch out a few thoughts and suggestions as a prelude to a proper discussion.

On Paying for Things

It seems to me that people who can afford it do not generally have problems finding ways to spend their money. For instance, how many people spend tens of euros, dollars, pounds per month on mobile phone subscriptions with possibly-generous data allowances that they do not use? What about insurance, electricity, broadband Internet access, television/cable/streaming services, banking services? How often do we hear various self-appointed “consumer champions” berate people for overspending on such things, insisting that people could shop around and save money? It is almost as if this is the primary purpose of the “developed world” human: to continually monitor the “competitive landscape” as each of its inhabitants adjusts its pricing and terms, nudging their captive revenues upward until such antics are spotted and punished with a rapid customer relationship change.

I may have noted before that many industries have moved to subscription models, presumably having noticed that once people have been parked in a customer relationship with a stable and not-particularly-onerous outgoing payment once a month, they will most likely stay in that relationship unless something severe enough occurs to overcome the effort of switching. Now, I do not advocate taking advantage of people in this way, but disregarding the matter of customer inertia and considering subscription models specifically, it is to be noted that these are used by various companies in the enterprise sector and have even been tried with individual “consumers”. For example, Linspire offered a subscription-based GNU/Linux distribution, and Eazel planned to offer subscription based access to online services providing, amongst other things, access to Free Software repositories.

(The subscription model can also be dubious in situations where things should probably be paid for according to actual usage, rather than some formula being cooked up to preserve a company’s balance sheet. For instance, is it right that electricity should be sold at a fixed per-month subscription cost? Would that product help uphold a company’s obligations to source renewable energy or would it give them some kind of implied permission to buy capacity generated from fossil fuels that has been dumped on the market? This kind of thing is happening now: it is not a mere thought exercise.)

Neither Linspire nor Eazel survived to the present day. The former was unable to resist the temptation to mix proprietary software into its offerings, thus alienating potential customers who might have considered a polished Debian-based distribution product three years before Ubuntu entered the scene. Eazel, meanwhile, burned through its funding and closed up shop even before Linspire appeared, leaving the Nautilus file manager as its primary legacy. Both companies funded Free Software development, with Linspire also funding independent projects, but obviously both decided internally how to direct funds to deserving projects, not so unlike how businesses normally fund Free Software.

On Infrastructure and User-Facing Software

Would individuals respond well to a one-off payment model for Free Software? The “app” model, with accompanying “store”, has clearly been considered as an option by certain companies and communities. Some have sought to provide such a store stocked with Free Software and, in the case of certain commercial services, proprietary applications. Others seek to provide their own particular software in existing stores. One hindrance to the adoption of such stores in the traditional Free Software realm is the availability of mature distribution channels for making software available, and such channels are typically freely accessible unless an enterprise software distribution is involved.

Since there seems to be an ongoing chorus of complaint about how “old” distribution software generally is, one might think that an opportunity exists for more up-to-date applications to be delivered through some kind of store. Effort has gone into developing mechanisms for doing this separately from distributions’ own mechanisms, partly to avoid disturbing the stable base system and partly to deliver newer versions of software that the base system can still support.

One thing that would bother me about such initiatives, even assuming that the source code for each “product” was easily obtained and rebuilt to run in the container-like environment inevitably involved, would be the longevity of the purchased items. Will the software keep working even after my base system has been upgraded? How much effort would I need to put in myself? Is this something else I have to keep track of?

For creators of Free Software, one concern would be whether it would even be attractive for people to obtain their software via such a store. As Eric notes, applications like The GIMP resemble the kind of discrete applications that people routinely pay money for when marketed at them by proprietary software companies. But what about getting a more recent version of a mail server or a database system? Some might claim that such infrastructure things are the realm of the enterprise and that only companies would pay for them; any interested individuals would make up only a small proportion of the wider GIMP-using audience.

(One might argue that the never-ending fixation on discrete applications always was a factor undermining the vision of component-based personal computing systems. Why deliver a reliable component that is reused by everyone else when you could bundle it up inside an “experience” and put your own brand name on it, just as everyone else will do if you make a component instead? The “app” marketplace now is merely this phenomenon taken to a more narcissistic extreme.)

On Sharing the Revenue

This brings us to how any collected funds would be shared out. Corporate donations are typically shared out according to corporate priorities, and communities probably do not get much of a say in the matter. In a per-unit store model, is it fair to channel all revenue from “sales” of The GIMP to that project or should some of that money be reserved for related projects, maybe those on which the application is built or maybe those that are not involved but which could offer complementary solutions? Should the project be expected to distribute the money to other projects if the storefront isn’t doing so itself? What kind of governance should be expected in order to avoid squabbles about people taking an unfair share of the money?
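
To make the sharing question concrete, here is a toy sketch in Python of how a store might split a single payment according to weights declared for an application and the projects beneath it. The project names and proportions are, of course, entirely hypothetical:

    def share_revenue(amount, weights):
        """Split a payment according to declared weights, which map
        each recipient (the application and the projects it builds
        upon) to a non-negative number indicating its share."""
        total = sum(weights.values())
        return {name: amount * weight / total
                for name, weight in weights.items()}

    # A 10 euro "sale" of an application, with weights giving 70%
    # to the application itself and 30% to underlying projects:
    print(share_revenue(10.0, {"GIMP": 7, "GTK": 2, "GEGL": 1}))
    # {'GIMP': 7.0, 'GTK': 2.0, 'GEGL': 1.0}

The arithmetic is trivial, of course: the hard part is agreeing on the weights and on who gets to maintain them, which is precisely the governance question raised above.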

Existing funding platforms like Gratipay, which solicit ongoing contributions or donations as the means of funding, avoid such issues entirely by defining project-level entities which receive the money, leaving it as their responsibility to distribute it further. Maybe this works if the money is not meant to go any further than a small group of people with a well-defined entitlement to the incoming revenues. But relationships between established operating system distributions and upstream projects sometimes become difficult, with distributions demanding an easier product to work with and upstream projects demanding that they not be asked to support those distributions’ users. The addition of money risks aggravating such tensions further, despite having the potential to reduce or eliminate them if the sharing is done right.

On Worthy Projects

This is something about which I should know just a little. My own Free Software endeavours do not really seem to attract many users. It is not that they have attracted no other users besides myself – I have had communications over the years with some reasonably satisfied users – but my projects have not exactly been seen as meeting some urgent and widespread need.

(Having developed a common API for Python Web applications, and having delivered software to provide it, deliberately basing it on existing technologies, I found it particularly galling to see that software labelled by some commentators as contributing to the very technology proliferation problem it was meant to alleviate. But that is another story.)

What makes a worthy project? Does it have to have a huge audience or does it just need to do something useful? Who gets to define “useful”? And in a situation where someone might claim that someone else’s “important” project is a mere vanity exercise – doing something that others have already done, perhaps, or solving a need that “nobody really has” – who are we to believe if the developer seeks funding to further their work? Can people expect to see money for working on what they feel is important, or should others be telling them what they should be doing in exchange for support?

It is great that people are able to deliver Free Software as part of a business because their customers can see how that software addresses their needs. But what about the cases where potential customers are not able to see how some software might be useful, even essential, to them? Is it not part of the definition of marketing to help people realise that they need a product or solution? And in a world where people appear unaware of the risks accompanying proprietary software and services, might it not be part of our portfolio of activities to not only develop solutions before the intended audience has realised their importance, but also to try and communicate their importance to that audience?

Even essential projects have been neglected by those who use them to perform essential functions in their lives or in their businesses. It took a testimonial by Edward Snowden to get people to think about how little funding GNU Privacy Guard was receiving.

On Sources of Funding

Some might claim that there are already plenty of sources of funding for Free Software. Some companies sponsor or make awards to projects or developers – this even happened to me last year after someone graciously suggested my work on imip-agent as being worthy of such an award – and there are ongoing programmes to encourage Free Software participation. Meanwhile, organisations exist within the Free Software realm that receive income and spend some of it on improving Free Software offerings and on covering the costs of providing those works to the broader public.

The Python Software Foundation is an interesting example. It solicits sponsorship from organisations and individuals whilst also receiving substantial revenue from the North American Python conference series, PyCon. It would be irresponsible to just sit on all of this money, of course, and so some of it is awarded as grants or donations towards community activities. This is where it gets somewhat more interesting from the perspective of funding software, as opposed to funding activities around the software. Looking at the PSF board resolutions, one can get an idea of what the money is spent on, and having considered this previously, I made a script that tries to identify each of the different areas of spending (a sketch of the approach appears after the table below). This is somewhat tricky due to the free-format nature of the source data, but the numbers are something like the following for 2016 resolutions:

Category                          Total (USD)   Percentage
Events                                288,051          89%
Other software development             19,398           6%
Conference software development        10,440           3%
Outreach                                6,400           2%
(All items)                           324,289         100%
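
The script itself is not reproduced here, but a minimal sketch of the approach in Python, with entirely invented keyword lists and sample resolution texts (the real data and matching rules differ), might look something like this:

    import re
    from collections import defaultdict

    # Invented keyword lists: more specific categories are listed
    # first so that they match before the more general ones.
    CATEGORIES = {
        "Conference software development": ["conference software"],
        "Events": ["conference", "workshop", "sprint", "meetup"],
        "Other software development": ["development", "port", "implement"],
        "Outreach": ["outreach", "education", "diversity"],
    }

    AMOUNT = re.compile(r"\$([0-9,]+)")

    def categorise(resolutions):
        "Tally the amount found in each free-format resolution by category."
        totals = defaultdict(int)
        for text in resolutions:
            found = AMOUNT.search(text)
            if not found:
                continue
            amount = int(found.group(1).replace(",", ""))
            lowered = text.lower()
            for category, keywords in CATEGORIES.items():
                if any(keyword in lowered for keyword in keywords):
                    totals[category] += amount
                    break
            else:
                totals["Uncategorised"] += amount
        return totals

    def report(totals):
        "Print each category with its share of the overall spending."
        overall = sum(totals.values())
        for category, amount in sorted(totals.items(), key=lambda item: -item[1]):
            print("%-32s %8d %4d%%" % (category, amount, round(100.0 * amount / overall)))
        print("%-32s %8d %4d%%" % ("(All items)", overall, 100))

    # Invented resolutions in the style of the published minutes:
    report(categorise([
        "RESOLVED, that the PSF grant $1,500 to a regional workshop.",
        "RESOLVED, that the PSF grant $2,000 for work on the conference software.",
    ]))

The real difficulty, as noted, lies in the free-format wording of the resolutions rather than in the tallying itself.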

Clearly, PyCon North America relies heavily on the conference management software that has been developed for it over the years – arguably, everyone and their dog wants to write their own conference software instead of improving existing offerings – but let us consider that expenditure a necessity for the PSF and PyCon. That leaves grants for other software development activities accounting for only 6% of the money made available for community-related activities, and one of those four grants was for translation of existing content, not software, into another language.

Now, the PSF’s mission statement arguably emphasises promotion over other matters, with “Python-related open source projects” and “Python-related research” appearing at the end of the list of objectives. Not having been involved in the grant approval process during my time as a PSF member, I cannot say what the obstacles are to getting access to funds to develop Python-related Free Software, but it seems to me that organisations like this particular one are not likely to be playing central roles in broader and more sustainable funding for Free Software projects.

On Bounties and Donations

Outside the normal boundaries of employment, there are a few other ways that have been suggested for getting paid to write Free Software. Donations are what fund organisations like the PSF, ostensibly representing one or more projects, but some individual developers solicit donations themselves, as do smaller projects that have not reached the stage of needing a foundation or other formal entity. More often than not, however, people are just doing this to “keep the lights on” or, rather, to keep down the hosting costs for the Web site that has to serve up their software to everyone. Generally, such “tip jar” funding doesn’t really allow anyone to spend much, if any, paid time on their projects.

At one point in time it seemed that bounties would provide a mechanism whereby users could help improve software projects by offering money if certain tasks were performed, features implemented, shortcomings remedied, and so on. Groups of users could pool their donations to make certain tasks more attractive for developers to work on, thus using good old-fashioned market forces to set rewards according to demand and to give developers an incentive to participate. At least in theory.

There are quite a few problems with bounty marketplaces. First of all, the pooled rewards for a task may not be at all realistic for the work requested. This is often the case on less popular marketplaces, and it used to be the case on virtually every marketplace, although one might say that more money is now making it into the better-known marketplaces. Nevertheless, one still sees bounties like this one for a new garbage collector for LuaJIT currently offering no reward at all, bearing the following comment from one discussion participant: “This really important for our production workload.” [sic] If such a comment, made in such a venue, arrives without any money accompanying it, then I don’t think we would need to look much further for a summarising expression of the industry-wide attitude problem around Free Software funding.

Meanwhile, where the cash is piling up, it doesn’t mean that anyone can actually take the bounty off the table. This bounty for a specific architecture port of LuaJIT attracted $5000 from IBM but has still not been awarded, despite several people supposedly taking it on. And that brings up the next problem: who gets the reward? Unless there is some kind of coordination, and especially if a bounty has grown into something approaching a few months’ salary, people will compete instead of collaborating in order to get their hands on the whole pot. But if a task is large and complicated, and particularly if the participants are speculating and are not sufficiently knowledgeable or experienced in the technologies to complete it, such competition will just leave a trail of abandonment and very little to show for all the effort made.

Admittedly, some projects appear to be making good use of such marketplaces. For example, the organisation behind elementary OS sees some traffic in terms of awarded bounties, although these seem to involve smaller tasks rather than the larger requests that linger for years without resolution. Of course, things can change on the outside without bounties ever being reviewed for their continued relevance. How many Thunderbird bounties are still relevant now? Seven years is a long time even for Thunderbird.

Given a particular project with plenty of spare money coming in from somewhere, I suppose that someone who happens to be “in the zone” to do the work on that project could manage to queue up enough bounty claims to make some kind of living. But if that “somewhere” is a steady and sizeable source of revenue, and given the “gig economy” nature of the working relationship, one has to wonder whether anyone managing to get by in this way isn’t actually being short-changed by the people making the real money. And that leads us to the uncomfortable feeling that such marketplaces, while appearing to reward developers, are just a tool to drive costs down by promoting informal working arrangements and pitting people against each other for work that only needs to be paid for once, no matter how many times it was actually done in parallel.

Of course, bounties don’t really address the issue of where funds come from, although bounty marketplaces might offer ways of soliciting donations and encouraging sponsorship from companies. But one might argue that all they add is a degree of flexibility to the process of passing on donations that does not necessarily serve the interests of the participating developers. Maybe this is why the bounty marketplace featured in the above links, Bountysource, has shifted focus somewhat to compete with the likes of Gratipay and Patreon, seeking to emphasise ongoing funding instead of one-off rewards.

On Crowd-Funding

One significant development in funding mechanisms over the last few years has been the emergence of crowd-funding. Most people familiar with this would probably summarise it as someone promising something in the form of a campaign, asking for money to do that thing, getting the money if some agreed criteria were met, and then hopefully delivering that promised thing. As some people have unfortunately discovered, the last part can sometimes be a problem, resulting in continuing reputational damage for various crowd-funding platforms.

Such platforms have traditionally been popular for things like creative works, arts, crafts, literature, and so on. But hardware has become a popular funding area as people try their hand at designing devices, sourcing materials, getting the devices made, delivering the goods, and dealing with all the logistical challenges these things entail. Probably relatively few of these kinds of crowd-funding campaigns go completely according to plan; things like warranties and support may take a back seat (or not even be in the vehicle at all). The cynical might claim that crowd-funding is a way for people to shirk their responsibilities as a manufacturer and do things on the cheap.

But as far as software is concerned, crowd-funding might avoid the more common pitfalls experienced in other domains. Software developers would merely need to deliver software, not master other areas of expertise on a steep learning curve (sourcing, manufacturing, logistics), and they would benefit from being able to deliver the crucial element of the campaign using the same medium as that involved in getting everyone signed up in the first place: the Internet. So there wouldn’t be the same kind of customs and shipment issues that appear to plague just about every other kind of campaign.

I admit that I do not maintain a sufficient overview when it comes to software-related crowd-funding or, indeed, crowd-funding in general. The two major software campaigns I am aware of are Mailpile and Roundcube Next. Although development has been done on Roundcube Next, and Mailpile is certainly under active development, neither managed to deliver a product within the anticipated schedule. Software development costs are notoriously difficult to estimate, and it is very possible that neither project asked for enough money to pursue their goals with enough resources for timely completion.

One might say that it is no good complaining about things not getting done quickly enough and that people should pitch in and help out. On the one hand, I agree that since such products are Free Software and are available even in their unfinished state, we should take advantage of this to produce a result that satisfies everybody. On the other hand, we cannot keep returning to the “everybody pitch in” approach all the time, particularly when the circumstances provoking such a refrain have arisen precisely where people purposefully tried to cultivate a different, more sustainable way of developing Free Software.

On Reflection

So there it is, a short article that went long again! Hopefully, it provides some useful thoughts about the limitations of existing funding approaches and some clues that might lead to better funding approaches in future.

Making Free Software Work for Everybody

Friday, March 17th, 2017

Another week and another perfect storm of articles and opinions. This time, we start with Jonas Öberg’s “How Free Software is Failing the Users”, where he notes that users don’t always get the opportunity to exercise their rights to improve Free Software. I agree with many of the things Jonas says, but he omits an important factor that is perhaps worth thinking about when reading some of the other articles. Maybe he will return to it in a later article, but I will discuss it here.

Let us consider the other articles. Alanna Irving of Open Collective wrote an interview with Jason Miller about project “maintainer burnout” entitled “Preact: Shattering the Perception that Open Source Must be Free”. It’s worth noting here that Open Collective appears to be a venture-capital-funded platform with similar goals to the more community-led Gratipay and Liberapay, which are funding platforms that enable people to get others to fund their ongoing work. Nolan Lawson of Microsoft describes the demands of volunteer-driven “open source” in “What it feels like to be an open-source maintainer”. In “Life of free software project”, Michal Čihař writes about his own experiences maintaining projects and trying to attract contributions and funding.

When reading about “open source”, one encounters some common themes over and over again: that Free Software (which is almost always referenced as “open source” when these themes are raised) must be free as in cost, and that people volunteer to work on such software in their own time or without any financial reward, often for fun or for the technical challenge. Of course, Free Software has never been about the cost. It probably doesn’t help that the word “free” can communicate the meaning of zero cost, but “free as in freedom” usually gets articulated very early on in any explanation of the concept of Free Software to newcomers.

Even “open source” isn’t about the cost, either. But the “open source” movement started out by differentiating itself from the Free Software movement by advocating the efficiency of producing Free Software instead of emphasising the matter of personal control over such software and the freedom it gives the users of such software. Indeed, the Open Source Initiative tells us this in its mission description:

Open source enables a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of open source is higher quality, better reliability, greater flexibility, lower cost, and an end to predatory vendor lock-in.

It makes Free Software – well, “open source” – sound like a great way of realising business efficiencies rather than being an ethical choice. And with this comes the notion that when it comes to developing software, a brigade of pixies on the Internet will happily work hard to deliver a quality product that can be acquired for free, thus saving businesses money on software licence and development costs.

Thus, everybody now has to work against this perception of no-cost, made-by-magic Free Software. Jonas writes, “I know how to bake bread, but oftentimes I choose to buy bread instead.” Unfortunately, thanks to the idea that the pixies will always be on hand to fix our computers or to make new things, we now have the equivalent of bakers being asked to bake bread for nothing. (Let us ignore the specifics of the analogy here: in some markets it isn’t exactly lucrative to run a bakery, either.)

Jason Miller makes some reasonable observations as he tries to “shatter” this perception. Sadly, as it seems to be with all these funding platforms, there is some way to go. With perhaps one or two exceptions, even the most generously supported projects appear to be drawing a fraction of a single salary as donations or contributions, and it would seem that things like meet-ups and hackerspaces attract funding more readily. I guess that when there are tangible expenses – rental costs, consumables, power and network bills – people are happy to pay such externally-imposed costs. When it comes to valuing the work done by someone, even if one can quote “market rates” and document that person’s hours, everyone can argue about whether it was “really worth that amount”.

Michal Čihař notes…

But the most important thing is to persuade people and companies to give back. You know there are lot of companies relying on your project, but how to make them fund the project? I really don’t know, I still struggle with this as I don’t want to be too pushy in asking for money, but I’d really like to see them to give back.

Sadly, we live in an age of free stuff. If it looks like a project is stalling because of a lack of investment, many people and businesses will look elsewhere instead of stepping up and contributing. Indeed, this is when you see those people and businesses approaching the developers of other projects, telling those developers that they really want to use their project but that it perhaps isn’t yet “good enough” for “the enterprise” or for “professional use”, and that maybe if it only did this and that, then they would use it, and wouldn’t that give it the credibility it clearly didn’t have before? (Even if there are lots of satisfied existing users, and this supposed absence of credibility exists purely in the minds of those shopping around for something else to use.) Oh, and crucially, how about doing the work to make it “good enough” for us for nothing? Thank you very much.

It is in this way that independent Free Software projects are kept marginalised, remaining viable enough to survive (mostly thanks to volunteer effort) but weakened by being continually discarded in favour of something else as soon as a better “deal” can be made and another group of pixies exploited. Such projects are thereby ill-equipped to deal with aggressive proprietary competitors. When fancy features are paraded by proprietary software vendors in front of decision-makers in organisations that should be choosing Free Software, advocates of free and open solutions may struggle to persuade those decision-makers that Free Software solutions can step in and do what they need.

Playing projects off against each other to see which pixies will work the hardest, making developers indulge in competitions to see who can license their code the most permissively (to “reach more people”, you understand), and portraying Free Software development as a way for developers to showcase their skills to potential employers (while really just making them unpaid interns on an indefinite basis) are all examples of the opportunistic underinvestment in Free Software which ultimately just creates opportunities for proprietary software. It also goes a long way towards undermining the viability of the profession in an era when we apparently don’t have enough programmers.

So that was a long rant about the plight of developers, but what does this have to do with the users? Well, first of all, users need to realise that the things they use do not cost nothing to make. Of course, in this age of free stuff (as in stuff that costs no money), they can decide that some program or service just doesn’t “do it for them” any more and switch to a shinier, better thing, but that isn’t made of pixie dust either. All of the free stuff has other hidden costs in terms of diminished privacy, increased surveillance and tracking, dubious data security, possible misuse of their property, and the discovery that certain things that they appear to own weren’t really their property all along.

Users do need to be able to engage with Free Software projects within the conventions of those projects, of course. But they also need the option of saying, “I think this could be better in this regard, but I am not the one to improve it.” And that may need the accompanying option: “Here is some money to pay someone to do it.” Free Software was always about giving control to the users without necessarily demanding that the users become developers (or even technical writers, designers, artists, and so on). A user benefits from their office suite or drawing application being Free Software not only in situations where they have the technical knowledge to improve the software themselves, but also when they can hand the software to someone else and get them to fix or improve it instead. And, yes, that may well involve money changing hands.

Those of us who talk about Free Software and not “open source” don’t need reminding that it is about freedom and not about things being free of charge. But the users, whether they are individuals or organisations, end-users or other developers, may need reminding that you never really get something for nothing, especially when that particular something is actually a rather expensive thing to produce. Giving them the opportunity to cover some of that expense, and not just at “tip jar” levels, might actually help make Free Software work, not just for users, not just for developers, but as a consequence of empowering both groups, for everybody.