Paul Boddie's Free Software-related blog
Paul's activities and perspectives around Free Software
Some Thoughts on Python-Like Languages
June 6th, 2017
A few different things have happened recently that got me thinking about writing something about Python, its future, and Python-like languages. I don’t follow the different Python implementations as closely as I used to, but certain things did catch my attention over the last few months. But let us start with things closer to the present day.
I was neither at the North American PyCon event, nor at the invitation-only Python Language Summit that occurred as part of that gathering, but LWN.net has been reporting the proceedings to its subscribers. One of the presentations of particular interest was covered by LWN.net under the title “Keeping Python competitive”, apparently discussing efforts to “make Python faster”, the challenges faced by different Python implementations, and the limitations imposed by the “canonical” CPython implementation that can frustrate performance improvement efforts.
Here is where this more recent coverage intersects with things I have noticed over the past few months. Every now and again, an attempt is made to speed Python up, sometimes building on the CPython code base and bolting on additional technology to boost performance, sometimes reimplementing the virtual machine whilst introducing similar performance-enhancing technology. When such projects emerge, especially when a large company is behind them in some way, expectations of a much faster Python are considerable.
Thus, when the Pyston reimplementation of Python became more widely known, undertaken by people working at Dropbox (who also happen to employ Python’s creator Guido van Rossum), people were understandably excited. Three years after that initial announcement, however, those ambitious employees now have to continue that work on their own initiative. One might be reminded of an earlier project, Unladen Swallow, which also sought to perform just-in-time compilation of Python code, undertaken by people working at Google (who also happened to employ Python’s creator Guido van Rossum at the time), which was then abandoned as those people were needed to go and work on other things. Meanwhile, another apparently-broadly-similar project, Pyjion, is being undertaken by people working at Microsoft, albeit as a “side project at work”.
As things stand, perhaps the most dependable alternative implementation of Python, at least if you want one with a just-in-time compiler that is actively developed and supported for “production use”, appears to be PyPy. And this is only because of sustained investment of both time and funding over the past decade and a half into developing the technology and tracking changes in the Python language. Heroically, the developers even try and support both Python 2 and Python 3.
Motivations for Change
Of course, Google, Dropbox and Microsoft presumably have good reasons to try and get their Python code running faster and more efficiently. Certainly, the first two companies will be running plenty of Python to support their services; reducing the hardware demands of delivering those services is definitely a motivation for investigating Python implementation improvements. I guess that there’s enough Python being run at Microsoft to make it worth their while, too. But then again, none of these organisations appear to be resourcing these efforts at anything close to what would be marshalled for their actual products, and I imagine that even similar infrastructure projects originating from such companies (things like Go, for example) have had many more people assigned to them on a permanent basis.
And now, noting the existence of projects like Grumpy – a Python to Go translator – one has to wonder whether there isn’t some kind of strategy change afoot: that it now might be considered easier for the likes of Google to migrate gradually to Go and steadily reduce their dependency on Python than it is to remedy identified deficiencies with Python. Of course, the significant problem remains of translating Python code to Go while still having it interface with code written in C against Python’s extension interfaces, and of maintaining reliability and performance in the result.
Indeed, the matter of Python’s “C API”, used by extensions written in C for Python programs to use, is covered in the LWN.net article. As people have sought to improve the performance of their software, they have been driven to rewrite parts of it in C, interfacing these performance-critical parts with the rest of their programs. Although such optimisation techniques make sense and have been a constant presence in software engineering more generally for many decades, it has almost become the path of least resistance when encountering performance difficulties in Python, even amongst the maintainers of the CPython implementation.
And so, alternative implementations need to either extract C-coded functionality and offer it in another form (maybe even written in Python, can you imagine?!), or they need to have a way of interfacing with it, one that could produce difficulties and impair their own efforts to deliver a robust and better-performing solution. Thus, attempts to mitigate CPython’s shortcomings have actually thwarted the efforts of other implementations to mitigate the shortcomings of Python as a whole.
Is “Python” Worth It?
You may well be wondering, if I didn’t manage to lose you already, whether all of these ambitious and brave efforts are really worth it. Might there be something about Python that just makes it too awkward to target with a revised and supposedly better implementation? Again, the LWN.net article describes sentiments that simpler, Python-like languages might be worth considering, mentioning the Hack language in the context of PHP, although I might also suggest Crystal in the context of Ruby, even though the latter is possibly closer to various functional languages and maybe only bears syntactic similarities to Ruby (although I haven’t actually looked too closely).
One has to be careful with languages that look dynamic but are really rather strict in how types are assigned, propagated and checked. And, should one choose to accept static typing, even with type inference, it could be said that there are plenty of mature languages – OCaml, for instance – that are worth considering instead. As people have experimented with Python-like languages, others have been quick to criticise them for not being “Pythonic”, even if the code one writes is valid Python. But I accept that the challenge for such languages and their implementations is to offer a Python-like experience without frustrating the programmer too much about things which look valid but which are forbidden.
My tuning of a Python program to work with Shedskin needed to be informed about what Shedskin was likely to allow and to reject. As far as I am concerned, as long as this is not too restrictive, and as long as guidance is available, I don’t see a reason why such a Python-like language couldn’t be as valid as “proper” Python. Python itself has changed over the years, and the version I first used probably wouldn’t measure up to what today’s newcomers would accept as Python at all, but I don’t accept that the language I used back in 1995 was not Python: that would somehow be a denial of history and of my own experiences.
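As an illustration of the kind of guidance involved – my own example, not drawn from that program – here is code that is perfectly valid Python but that a type-inferring translator in the spirit of Shedskin would typically reject, because names and containers change type in ways that cannot be resolved statically:

```python
# All of this is valid Python. A restricted, type-inferring
# implementation such as Shedskin would typically accept the first
# part but reject the second.

def average(values):
    return sum(values) / len(values)

# Homogeneous data: straightforward for type inference.
print(average([1.0, 2.0, 3.0]))  # 2.0

# Heterogeneous containers and rebinding a name to a different type
# are where such implementations usually draw the line:
mixed = [1, "two", 3.0]    # one list, three element types
x = 42                     # x is an integer...
x = "now a string"         # ...and now it is not
```

The point being that the restrictions fall on things which look valid, which is precisely why guidance from the tool matters.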
Could I actually use something closer to Python 1.4 (or even 1.3) now? Which parts of more recent versions would I miss? And which parts of such ancient Pythons might even be superfluous? In pursuing my interests in source code analysis, I decided to consider such questions in more detail, partly motivated by the need to keep the investigation simple, partly motivated by laziness (that something might be amenable to analysis but more effort than I considered worthwhile), and partly motivated by my own experiences developing Python-based solutions.
A Leaner Python
Usually, after a title like that, one might expect to read about how I made everything in Python statically typed, or that I decided to remove classes and exceptions from the language, or do something else that would seem fairly drastic and change the character of the language. But I rather like the way Python behaves in a fundamental sense, with its classes, inheritance, dynamic typing and indentation-based syntax.
Other languages inspired by Python have had a tendency to diverge noticeably from the general form of Python: Boo, Cobra, Delight, Genie and Nim introduce static typing and (arguably needlessly) change core syntactic constructs; Converge and Mython focus on meta-programming; MyPy is the basis of efforts to add type annotations and “optional static typing” to Python itself. Meanwhile, Serpentine is a project being developed by my brother, David, and is worth looking at if you want to write software for Android, have some familiarity with user interface frameworks like PyQt, and can accept the somewhat moderated type discipline imposed by the Android APIs and the Dalvik runtime environment.
In any case, having already made a few rounds trying to perform analysis on Python source code, I am more interested in keeping the foundations of Python intact and focusing on the less visible characteristics of programs: effectively reading between the lines of the source code by considering how it behaves during execution. Solutions like Shedskin take advantage of restrictions on programs to be able to make deductions about program behaviour. These deductions can be sufficient in helping us understand what a program might actually do when run, as well as helping the compiler make more robust or efficient programs.
And the right kind of restrictions might even help us avoid introducing more disruptive restrictions such as having to annotate all the types in a program in order to tell us similar things (which appears to be one of the main directions of Python in the current era, unfortunately). I would rather lose exotic functionality that I have never really been convinced by, than retain such functionality and then have to tell the compiler about things it would otherwise have a chance of figuring out for itself.
Rocking the Boat
Certainly, being confronted with any list of restrictions, despite the potential benefits, can seem like someone is taking all the toys away. And it can be difficult to deliver the benefits to make up for this loss of functionality, no matter how frivolous some of it is, especially if there are considerable expectations in terms of things like performance. Plenty of people writing alternative Python implementations can attest to that. But there are other reasons to consider a leaner, more minimal, Python-like language and accompanying implementation.
For me, one rather basic reason is merely to inform myself about program analysis, figure out how difficult it is, and hopefully produce a working solution. But beyond that is the need to be able to exercise some level of control over the tools I depend on. Python 2 will in time no longer be maintained by the Python core development community; a degree of agitation has existed for some time to replace it with Python 3 in Free Software operating system distributions. Yet I remain unconvinced about Python 3, particularly as it evolves towards a language that offers “optional” static typing that will inevitably become mandatory (despite assertions that it will always officially be optional) as everyone sprinkles their code with annotations and hopes for the magic fairies and pixies to come along and speed it up, with that latter eventuality being somewhat less certain.
There are reasons to consider alternative histories for Python in the form of Python-like languages. People argue about whether Python 3’s Unicode support makes it as suitable for certain kinds of programs as Python 2 has been, with the Mercurial project being notable in its refusal to hurry along behind the Python 3 adoption bandwagon. Indeed, PyPy was devised as a platform for such investigations, being only somewhat impaired in some respects by its rather intensive interpreter generation process (but I imagine there are ways to mitigate this).
Making a language implementation that is adaptable is also important. I like being able to cross-compile programs, and my own work attempts to make this convenient. Meanwhile, cross-building CPython has been a struggle for many years, and I feel that it says rather a lot about Python core development priorities that even now, with the need to cross-build CPython if it is to be available on mobile platforms like Android, the lack of a coherent cross-building strategy has left those interested in doing this kind of thing maintaining their own extensive patch sets. (Serpentine gets around this problem, as well as the architectural limitations of dropping CPython on an Android-based device and trying to hook it up with the different Android application frameworks, by targeting the Dalvik runtime environment instead.)
No Need for Another Language?
I found it depressingly familiar when David announced his Android work on the Python mobile-sig mailing list and got the following response:
In case you weren't aware, you can just write Android apps and services in Python, using Kivy. No need to invent another language.
Fortunately, various other people were more open-minded about having a new toolchain to target Android. Personally, the kind of “just use …” rhetoric reminds me of the era when everyone writing Web applications in Python was exhorted to “just use Zope”, which was a complicated (but admittedly powerful and interesting) framework whose shortcomings were largely obscured and downplayed until enough people had experienced them and felt that progress had to be made by working around Zope altogether and developing other solutions instead. Such zero-sum games – that there is one favoured approach to be promoted, with all others to be terminated or hidden – perhaps inspired by an overly-parroted “only one way to do it” mantra in the Python scene, have been rather damaging to both the community and to the adoption of Python itself.
Not being Python, not supporting every aspect of Python, has traditionally been seen as a weakness when people have announced their own implementations of Python or of Python-like languages. People steer clear of impressive works like PyPy or Nuitka because they feel that these things might not deliver everything CPython does, exactly like CPython does. Which is pretty terrible if you consider the heroic effort that the developer of Nuitka puts in to make his software work as similarly to CPython as possible, even going as far as to support Python 2 and Python 3, just as the PyPy team do.
Solutions like MicroPython have got away with certain incompatibilities with the justification that the target environment is rather constrained. But I imagine that even that project’s custodians get asked whether it can run Django, or whatever the arbitrarily-set threshold for technological validity might be. Never mind whether you would really want to run Django on a microcontroller or even on a phone. And never mind whether large parts of the mountain of code propping up such supposedly essential solutions could actually do with an audit and, in some cases, benefit from being retired and rewritten.
I am not fond of change for change’s sake, but new opportunities often bring new priorities and challenges with them. What then if Python as people insist on it today, with all the extra features added over the years to satisfy various petitioners and trends, is actually the weakness itself? What if the Python-like languages can adapt to these changes, and by having to confront their incompatibilities with hastily-written code from the 1990s and code employing “because it’s there” programming techniques, they can adapt to the changing environment while delivering much of what people like about Python in the first place? What if Python itself cannot?
“Why don’t you go and use something else if you don’t like what Python is?” some might ask. Certainly, Free Software itself is far more important to me than any adherence to Python. But I can also choose to make that other language something that carries forward the things I like about Python, not something that looks and behaves completely differently. And in doing so, at least I might gain a deeper understanding of what matters to me in Python, even if others refuse the lessons and the opportunities such Python-like languages can provide.
VGA Signal Generation with the PIC32
May 22nd, 2017
It all started after I had designed – and received from fabrication – a circuit board for prototyping cartridges for the Acorn Electron microcomputer. Although some prototyping had already taken place with an existing cartridge, with pins intended for ROM components being routed to drive other things, this board effectively “breaks out” all connections available to a cartridge that has been inserted into the computer’s Plus 1 expansion unit.
One thing led to another, and soon my brother, David, was interfacing a microcontroller to the Electron in order to act as a peripheral being driven directly by the system’s bus signals. His approach involved having a program that would run and continuously scan the signals for read and write conditions and then interpret the address signals, sending and receiving data on the bus when appropriate.
Having acquired some PIC32 devices out of curiosity, with the idea of potentially interfacing them with the Electron, I finally took the trouble of looking at the datasheet to see whether some of the hard work done by David’s program might be handled by the peripheral hardware in the PIC32. The presence of something called “Parallel Master Port” was particularly interesting.
Operating this function in the somewhat insensitively-named “slave” mode, the device would be able to act like a memory device, with the signalling required by read and write operations mostly being dealt with by the hardware. Software running on the PIC32 would be able to read and write data through this port and be able to get notifications about new data while getting on with doing other things.
So began my journey into PIC32 experimentation, but this article isn’t about any of that, mostly because I put that particular investigation to one side after a degree of experience gave me perhaps a bit too much confidence, and I ended up being distracted by something far more glamorous: generating a video signal using the PIC32!
The Precedents’ Hall of Fame
There are plenty of people who have written up their experiments generating VGA and other video signals with microcontrollers. Here are some interesting examples:
- “VGA Video Generator” provides an introduction to VGA signal generation and gives circuit details as well as assembly language programs for AVR microcontrollers
- Craft is an impressive AVR-based “demo platform” producing video and audio
- “VGA On The Arduino With No External Parts Or CPU!” provides some good background detail on VGA signal generation and then describes how to get the secondary serial-to-USB microcontroller on certain Arduino devices to act as a video adapter
- “VGA output using a 36-pin STM32” provides some helpful signal details and implementation suggestions
And there are presumably many more pages on the Web with details of people sending pixel data over a cable to a display of some sort, often trying to squeeze every last cycle out of their microcontroller’s instruction processing unit. But, given an awareness of how microcontrollers should be able to take the burden off the programs running on them, employing peripheral hardware to do the grunt work of switching pins on and off at certain frequencies, maybe it would be useful to find examples of projects where such advantages of microcontrollers had been brought to bear on the problem.
In fact, I was already aware of the Maximite “single chip computer” partly through having seen the cloned version of the original being sold by Olimex – something rather resented by the developer of the Maximite for reasons largely rooted in an unfortunate misunderstanding of Free Software licensing on his part – and I was aware that this computer could generate a VGA signal. Indeed, the method used to achieve this had apparently been written up in a textbook for the PIC32 platform, albeit generating a composite video signal using one of the on-chip SPI peripherals. The Colour Maximite uses three SPI channels to generate one red, one green, and one blue channel of colour information, thus supporting eight-colour graphical output.
But I had been made aware of the Parallel Master Port (PMP) and its “master” mode, used to drive LCD panels with eight bits of colour information per pixel (or, using devices with many more pins than those I had acquired, with sixteen bits of colour information per pixel). Surely it would be possible to generate 256-colour graphical output at the very least?
Information from people trying to use PMP for this purpose was thin on the ground. Indeed, reading again one article that mentioned an abandoned attempt to get PMP working in this way, using the peripheral to emit pixel data for display on a screen instead of a panel, I now see that it actually mentions an essential component of the solution that I finally arrived at. But the author had unfortunately moved away from that successful component in an attempt to get the data to the display at a rate regarded as satisfactory.
Direct Savings
It is one thing to have the means to output data to be sent over a cable to a display. It is another to actually send the data efficiently from the microcontroller. Having contemplated such issues in the past, it was not a surprise that the Maximite and other video-generating solutions use direct memory access (DMA) to get the hardware, as opposed to programs, to read through memory and to write its contents to a destination, which in most cases seemed to be the memory address holding output data to be emitted via a data pin using the SPI mechanism.
I had also envisaged using DMA and was still fixated on using PMP to emit the different data bits to the output circuit producing the analogue signals for the display. Indeed, Microchip promotes the PMP and DMA combination as a way of doing “low-cost controllerless graphics solutions” involving LCD panels, so I felt that there surely couldn’t be much difference between that and getting an image on my monitor via a few resistors on the breadboard.
And so, a tour of different PIC32 features began: trying to understand the DMA documentation and the PMP documentation, all the while trying to get a grasp of what the VGA signal actually looks like and the timing constraints of the various synchronisation pulses, and battling various aspects of the MIPS architecture and the PIC32 implementation of it, constantly refining my own perceptions and understanding, and learning perhaps too often that there were things I didn’t know quite enough about before trying them out!
Using VGA to Build a Picture
Before we really start to look at a VGA signal, let us first look at how a picture is generated by the signal on a screen:
The most important detail at this point is the central area of the diagram, filled with horizontal lines representing the colour information that builds up a picture on the display, with the actual limits of the screen being represented here by the bold rectangle outline. But it is also important to recognise that even though there are a number of visible “display lines” within which the colour information appears, the entire “frame” sent to the display actually contains yet more lines, even though they will not be used to produce an image.
Above and below – really before and after – the visible display lines are the vertical back and front porches whose lines are blank because they do not appear on the screen or are used to provide a border at the top and bottom of the screen. Such extra lines contribute to the total frame period and to the total number of lines dividing up the total frame period.
Figuring out how many lines a display will have seems to involve messing around with something called the “generalised timing formula”, and if you have an X server like Xorg installed on your system, you may even have a tool called “gtf” that will attempt to calculate numbers of lines and pixels based on desired screen resolutions and frame rates. Alternatively, you can look up some common sets of figures on sites providing such information.
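To give a concrete feel for such figures, the commonly published industry-standard timings for 640×480 output at roughly 60Hz can be used to show how the total frame is larger than the visible area, and how the line and refresh rates fall out of the totals (the porch and sync figures below are the widely quoted standard values, not something specific to my own setup):

```python
# Commonly published timings for 640x480 at ~60Hz. The frame sent to
# the display is 800x525, even though only 640x480 of it is visible.

PIXEL_CLOCK = 25_175_000      # Hz, the standard dot clock for this mode

H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 640, 16, 96, 48   # pixels
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 480, 10, 2, 33    # lines

h_total = H_VISIBLE + H_FRONT + H_SYNC + H_BACK   # pixels per line
v_total = V_VISIBLE + V_FRONT + V_SYNC + V_BACK   # lines per frame

line_rate = PIXEL_CLOCK / h_total   # horizontal sync frequency
refresh_rate = line_rate / v_total  # vertical sync frequency

print(h_total, v_total)        # 800 525
print(round(line_rate))        # 31469 (about 31.469kHz)
print(round(refresh_rate, 2))  # 59.94 (Hz)
```

Tools like gtf perform essentially this arithmetic in reverse, searching for totals and a pixel clock that satisfy a desired resolution and refresh rate.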
What a VGA Signal Looks Like
Some sources show diagrams attempting to describe the VGA signal, but many of these diagrams are open to interpretation (in some cases, very much so). They perhaps show the signal for horizontal (display) lines, then other signals for the entire image, but they either do not attempt to combine them, or they instead combine these details ambiguously.
For instance, should the horizontal sync (synchronisation) pulse be produced when the vertical sync pulse is active or during the “blanking” period when no pixel information is being transmitted? This could be deduced from some diagrams but only if you share their authors’ unstated assumptions and do not consider other assertions about the signal structure. Other diagrams do explicitly show the horizontal sync active during vertical sync pulses, but this contradicts statements elsewhere such as “during the vertical sync period the horizontal sync must also be held low”, for instance.
After a lot of experimentation, I found that the following signal structure was compatible with the monitor I use with my computer:
There are three principal components to the signal:
- Colour information for the pixel or line data forms the image on the display and it is transferred within display lines during what I call the visible display period in every frame
- The horizontal sync pulse tells the display when each horizontal display line ends, or at least the frequency of the lines being sent
- The vertical sync pulse tells the display when each frame (or picture) ends, or at least the refresh rate of the picture
The voltage levels appear to be as follows:
- Colour information should be at 0.7V (although some people seem to think that 1V is acceptable as “the specified peak voltage for a VGA signal”)
- Sync pulses are supposed to be at “TTL” levels, which apparently can be from 0V to 0.5V for the low state and from 2.7V to 5V for the high state
Meanwhile, the polarity of the sync pulses is also worth noting. In the above diagram, they have negative polarity, meaning that an active pulse is at the low logic level. Some people claim that “modern VGA monitors don’t care about sync polarity”, but since it isn’t clear to me what determines the polarity, and since most descriptions and demonstrations of VGA signal generation seem to use negative polarity, I chose to go with the flow. As far as I can tell, the gtf tool always outputs the same polarity details, whereas certain resources provide signal characteristics with differing polarities.
It is possible, and arguably advisable, to start out trying to generate sync pulses and just grounding the colour outputs until your monitor (or other VGA-capable display) can be persuaded that it is receiving a picture at a certain refresh rate and resolution. Such confirmation can be obtained on a modern display by seeing a blank picture without any “no signal” or “input not supported” messages and by being able to activate the on-screen menu built into the device, in which an option is likely to exist to show the picture details.
How the sync and colour signals are actually produced will be explained later on. This section was merely intended to provide some background and gather some fairly useful details into one place.
Counting Lines and Generating Vertical Sync Pulses
The horizontal and vertical sync pulses are each driven at their own frequency. However, given that there are a fixed number of lines in every frame, it becomes apparent that the frequency of vertical sync pulse occurrences is related to the frequency of horizontal sync pulses, the latter occurring once per line, of course.
With, say, 622 lines forming a frame, the vertical sync will occur once for every 622 horizontal sync pulses, or at a rate that is 1/622 of the horizontal sync frequency or “line rate”. So, if we can find a way of generating the line rate, we can not only generate horizontal sync pulses, but we can also count cycles at this frequency, and every 622 cycles we can produce a vertical sync pulse.
But how do we calculate the line rate in the first place? First, we decide what our refresh rate should be. The “classic” rate for VGA output is 60Hz. Then, we decide how many lines there are in the display including those extra non-visible lines. We multiply the refresh rate by the number of lines to get the line rate:
60Hz * 622 = 37320Hz = 37.320kHz
On a microcontroller, the obvious way to obtain periodic events is to use a timer. Given a particular frequency at which the timer is updated, a quick calculation can be performed to discover how many times a timer needs to be incremented before we need to generate an event. So, let us say that we have a clock frequency of 24MHz, and a line rate of 37.320kHz, we calculate the number of timer increments required to produce the latter from the former:
24MHz / 37.320kHz = 24000000Hz / 37320Hz ≈ 643
So, if we set up a timer that counts up to 642 and then upon incrementing again to 643 actually starts again at zero, with the timer sending a signal when this “wraparound” occurs, we can have a mechanism providing a suitable frequency and then make things happen at that frequency. And this includes counting cycles at this particular frequency, meaning that we can increment our own counter by 1 to keep track of display lines. Every 622 display lines, we can initiate a vertical sync pulse.
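The whole scheme can be sketched in a few lines, using the article’s own figures (a 622-line frame and a 24MHz clock), with the loop standing in for the timer’s wraparound events:

```python
# A small model of the line-counting scheme: the timer period is
# derived from the clock and line rate, and each timer wraparound
# advances a line counter which triggers one vertical sync per frame.

REFRESH_RATE = 60          # Hz, frames per second
LINES_PER_FRAME = 622
CLOCK = 24_000_000         # Hz

line_rate = REFRESH_RATE * LINES_PER_FRAME   # 37320 Hz
timer_period = round(CLOCK / line_rate)      # timer counts per line

print(line_rate)     # 37320
print(timer_period)  # 643

# Simulate two frames of timer wraparounds driving the line counter.
line = 0
frames = 0
for wraparound in range(2 * LINES_PER_FRAME):
    line += 1
    if line == LINES_PER_FRAME:
        frames += 1    # initiate the vertical sync pulse here
        line = 0

print(frames)  # 2
```

On the actual hardware, of course, the wraparound event arrives as a signal from the timer peripheral rather than as a loop iteration.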
One aspect of vertical sync pulses that has not yet been mentioned is their duration. Various sources suggest that they should last for only two display lines, although the “gtf” tool specifies three lines instead. Our line-counting logic therefore needs to know that it should enable the vertical sync pulse by bringing it low at a particular starting line and then disable it by bringing it high again after two whole lines.
Generating Horizontal Sync Pulses
Horizontal sync pulses take place within each display line, have a specific duration, and they must start at the same time relative to the start of each line. Some video output demonstrations seem to use lots of precisely-timed instructions to achieve such things, but we want to use the peripherals of the microcontroller as much as possible to avoid wasting CPU time. Having considered various tricks involving specially formulated data that might be transferred from memory to act as a pulse, I was looking for examples of DMA usage when I found a mention of something called the Output Compare unit on the PIC32.
What the Output Compare (OC) units do is to take a timer as input and produce an output signal dependent on the current value of the timer relative to certain parameters. In clearer terms, you can indicate a timer value at which the OC unit will cause the output to go high, and you can indicate another timer value at which the OC unit will cause the output to go low. It doesn’t take much imagination to realise that this sounds almost perfect for generating the horizontal sync pulse:
- We take the timer previously set up, which wraps around after 643 counts and thus divides the display line period into units of 1/643.
- We identify where the pulse should be brought low and present that as the parameter for taking the output low.
- We identify where the pulse should be brought high and present that as the parameter for taking the output high.
Upon combining the timer and the OC unit, then configuring the output pin appropriately, we end up with a low pulse occurring at the line rate, but at a suitable offset from the start of each line.
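To make the OC behaviour concrete, here is a small model of one display line. The pulse start and end counts used here are hypothetical illustrations, not the values actually used in the project:

```python
# A small model of an Output Compare unit generating one horizontal
# sync pulse: the output goes low at one timer value and high again at
# another. The two values below are hypothetical, for illustration only.

CYCLES_PER_LINE = 643
PULSE_LOW_AT = 16    # timer value at which the output is taken low
PULSE_HIGH_AT = 92   # timer value at which the output is taken high again

def hsync_level(timer_value):
    "Return the sync output level (1 = high) for a given timer value."
    if PULSE_LOW_AT <= timer_value < PULSE_HIGH_AT:
        return 0
    return 1

line = [hsync_level(t) for t in range(CYCLES_PER_LINE)]
pulse_width = line.count(0)
print(f"pulse lasts {pulse_width} of {CYCLES_PER_LINE} cycles")
```

The real OC unit does this in hardware, of course, with no per-cycle involvement from the CPU.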
In fact, an OC unit also proves useful in generating the vertical sync pulses. Although we have a timer that can tell us when it has wrapped around, we really need a mechanism to act upon this signal promptly, at least if we are to generate a clean signal. Unfortunately, handling an interrupt will introduce a delay between the timer wrapping around and the CPU being able to do something about it, and it is not inconceivable that this delay may vary depending on what the CPU has been doing.
So, what seems to be a reasonable solution to this problem is to count the lines and upon seeing that the vertical sync pulse should be initiated at the start of the next line, we can enable another OC unit configured to act as soon as the timer value is zero. Thus, upon wraparound, the OC unit will spring into action and bring the vertical sync output low immediately. Similarly, upon realising that the next line will see the sync pulse brought high again, we can reconfigure the OC unit to do so as soon as the timer value again wraps around to zero.
Inserting the Colour Information
At this point, we can test the basic structure of the signal and see if our monitor likes it. But none of this is very interesting without being able to generate a picture, and so we need a way of getting pixel information from the microcontroller’s memory to its outputs. We previously concluded that Direct Memory Access (DMA) was the way to go in reading the pixel data from what is usually known as a framebuffer, sending it to another place for output.
As previously noted, I thought that the Parallel Master Port (PMP) might be the right peripheral to use. It provides an output register, confusingly called the PMDIN (parallel master data in) register, that lives at a particular address and whose value is exposed on output pins. On the PIC32MX270, only the least significant eight bits of this register are employed in emitting data to the outside world, and so a DMA destination having a one-byte size, located at the address of PMDIN, is chosen.
The source data is the framebuffer, of course. For various retrocomputing reasons hinted at above, I had decided to generate a picture 160 pixels in width, 256 lines in height, and with each byte providing eight bits of colour depth (allowing up to 256 distinct colours for each pixel). This requires 40 kilobytes and can therefore reside in the 64 kilobytes of RAM provided by the PIC32MX270. It was at this point that I learned a few things about the DMA mechanisms of the PIC32 that didn’t seem completely clear from the documentation.
Now, the documentation talks of “transactions”, “cells” and “blocks”, but I don’t think it describes them as clearly as it could do. Each “transaction” is just a transfer of a four-byte word. Each “cell transfer” is a collection of transactions that the DMA mechanism performs in a kind of batch, proceeding with these as quickly as it can until it either finishes the batch or is told to stop the transfer. Each “block transfer” is a collection of cell transfers. But what really matters is that if you want to transfer a certain amount of data and not have to keep telling the DMA mechanism to keep going, you need to choose a cell size that defines this amount. (When describing this, it is hard not to use the term “block” rather than “cell”, and I do wonder why they assigned these terms in this way because it seems counter-intuitive.)
You can perhaps use the following template to phrase your intentions:
I want to transfer <cell size> bytes at a time from a total of <block size> bytes, reading data starting from <source address>, having <source size>, and writing data starting at <destination address>, having <destination size>.
The total number of bytes to be transferred – the block size – is calculated from the source and destination sizes, with the larger chosen to be the block size. If we choose a destination size less than the source size, the transfers will not go beyond the area of memory defined by the specified destination address and the destination size. What actually happens to the “destination pointer” is not immediately obvious from the documentation, but for our purposes, where we will use a destination size of one byte, the DMA mechanism will just keep writing source bytes to the same destination address over and over again. (One might imagine the pointer starting again at the initial start address, or perhaps stopping at the end address instead.)
So, for our purposes, we define a “cell” as 160 bytes, being the amount of data in a single display line, and we only transfer one cell in a block. Thus, the DMA source is 160 bytes long, and even though the destination size is only a single byte, the DMA mechanism will transfer each of the source bytes into the destination. There is a rather unhelpful diagram in the documentation that perhaps tries to communicate too much at once, leading one to believe that the cell size is a factor in how the destination gets populated by source data, but the purpose of the cell size seems only to be to define how much data is transferred at once when a transfer is requested.
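The behaviour described above, where a one-byte destination causes every source byte to be written to the same address, can be modelled simply. This is only an illustration of the semantics as I understand them, not of the actual hardware:

```python
# Modelling a DMA cell transfer with a 160-byte source (one display
# line) and a one-byte destination (a port register): every source byte
# is written to the same destination address in turn.

def cell_transfer(source, destination_size):
    "Return the sequence of (offset, value) writes to the destination."
    writes = []
    for i, byte in enumerate(source):
        # With a one-byte destination, the destination pointer never
        # advances: each byte lands on the same address (offset 0).
        writes.append((i % destination_size, byte))
    return writes

line = bytes(range(160))    # one line of pixel data
writes = cell_transfer(line, 1)

print(len(writes))                           # 160 writes, one per byte
print(set(offset for offset, _ in writes))   # all to the same offset
```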

The transfer of framebuffer data to PORTB using DMA cell transfers (noting that this hints at the eventual approach which uses PORTB and not PMDIN)
In the matter of requesting a transfer, we have already described the mechanism that will allow us to make this happen: when the timer signals the start of a new line, we can use the wraparound event to initiate a DMA transfer. It would appear that the transfer will happen as fast as both the source and the destination will allow, at least as far as I can tell, and so it is probably unlikely that the data will be sent to the destination too quickly. Once the transfer of a line’s pixel data is complete, we can do some things to set up the transfer for the next line, like changing the source data address to point to the next 160 bytes representing the next display line.
(We could actually set the block size to the length of the entire framebuffer – by setting the source size – and have the DMA mechanism automatically transfer each line in turn, updating its own address for the current line. However, I intend to support hardware scrolling, where the address of the first line of the screen can be adjusted so that the display starts part way through the framebuffer, reaches the end of the framebuffer part way down the screen, and then starts again at the beginning of the framebuffer in order to finish displaying the data at the bottom of the screen. The DMA mechanism doesn’t seem to support the necessary address wraparound required to manage this all by itself.)
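The hardware scrolling arrangement just described amounts to a modulo calculation when choosing each line's source address. A sketch using the framebuffer geometry from this project, with a hypothetical placeholder for the framebuffer's address:

```python
# Computing the DMA source address for each display line when the
# screen starts part way through the framebuffer (hardware scrolling).
# The framebuffer base address below is a hypothetical placeholder.

FRAMEBUFFER = 0xA0000000   # hypothetical RAM address of the framebuffer
LINE_LENGTH = 160          # bytes per display line
LINES = 256                # framebuffer height in lines

def line_address(start_line, display_line):
    "Address of the pixel data for a given line, wrapping at the end."
    line = (start_line + display_line) % LINES
    return FRAMEBUFFER + line * LINE_LENGTH

# Scrolled so that the display begins at line 200 of the framebuffer:
print(hex(line_address(200, 0)))    # line 200
print(hex(line_address(200, 55)))   # line 255, the last in the buffer
print(hex(line_address(200, 56)))   # wraps around to line 0
```

It is this wraparound in the address calculation that the DMA mechanism cannot perform by itself, which is why the source address is updated per line instead.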
Output Complications
Having assumed that the PMP peripheral would be an appropriate choice, I soon discovered some problems with the generated output. Although the data that I had stored in the RAM seemed to be emitted as pixels in appropriate colours, there were gaps between the pixels on the screen. Yet the documentation seemed to vaguely indicate that the PMDIN register was more or less continuously updated. That meant that the actual output signals were being driven low between each pixel, causing black-level gaps and ruining the result.
I wondered if anything could be done about this issue. PMP is really intended as some kind of memory interface, and it isn’t unreasonable for it to only maintain valid data for certain periods of time, modifying control signals to define this valid data period. That PMP can be used to drive LCD panels is merely a result of those panels themselves upholding this kind of interface. For those of you familiar with microcontrollers, the solution to my problem was probably obvious several paragraphs ago, but it needed me to reconsider my assumptions and requirements before I realised what I should have been doing all along.
Unlike SPI, which concerns itself with the bit-by-bit serial output of data, PMP concerns itself with the multiple-bits-at-once parallel output of data, and all I wanted to do was to present multiple bits to a memory location and have them translated to a collection of separate signals. But, of course, this is exactly how normal I/O (input/output) pins are provided on microcontrollers! They all seem to provide “PORT” registers whose bits correspond to output pins, and if you write a value to those registers, all the pins can be changed simultaneously. (This feature is obscured by platforms like Arduino where functions are offered to manipulate only a single pin at once.)
And so, I changed the DMA destination to be the PORTB register, which on the PIC32MX270 is the only PORT register with enough bits corresponding to I/O pins to be useful for this application. Even then, PORTB does not have a complete mapping from bits to pins: some pins that are available on other devices have been dedicated to specific functions on the PIC32MX270F256B and cannot be used for I/O. So, it turns out that we can only employ at most seven bits of our pixel data in generating signal data:
| Bits | 15 | 14 | 13 | 12 | 11 | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
|------|----|----|----|----|----|----|---|---|---|---|---|---|---|---|---|---|
| Pins | RPB15 | RPB14 | RPB13 | | RPB11 | RPB10 | RPB9 | RPB8 | RPB7 | | RPB5 | RPB4 | RPB3 | RPB2 | RPB1 | RPB0 |
We could target the first byte of PORTB (bits 0 to 7) or the second byte (bits 8 to 15), but either way we will encounter an unmapped bit. So, instead of choosing a colour representation making use of eight bits, we have to make do with only seven.
Initially, not noticing that RPB6 was unavailable, I was using an “RRRGGGBB” or “332” representation. But persuaded by others in a similar predicament, I decided to choose a representation where each colour channel gets two bits, with a separate intensity bit used to adjust the final intensity of the basic colour result. This also means that greyscale output is possible, since the channels can be balanced against each other.
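Packing such a pixel is straightforward bit manipulation. Note that the bit positions chosen below are hypothetical: the real layout depends on which port pins end up wired to which resistors in the output circuit.

```python
# Packing two bits per colour channel plus one intensity bit into seven
# bits. The bit positions here are hypothetical illustrations: the real
# layout depends on the wiring of port pins to the resistor network.

def pack_pixel(red, green, blue, intensity):
    "Pack 2-bit channel values and a 1-bit intensity into one byte."
    assert all(0 <= c <= 3 for c in (red, green, blue))
    return (red << 5) | (green << 3) | (blue << 1) | intensity

def unpack_pixel(value):
    "Recover the channel and intensity values from a packed pixel."
    return (value >> 5) & 3, (value >> 3) & 3, (value >> 1) & 3, value & 1

pixel = pack_pixel(3, 2, 1, 1)
print(bin(pixel))            # 0b1110011
print(unpack_pixel(pixel))   # (3, 2, 1, 1)
```

Greyscale values are then simply those pixels where the red, green and blue fields are equal.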

The colours employing two bits per channel plus one intensity bit, perhaps not shown completely accurately due to circuit inadequacies and the usual white balance issues when taking photographs
It is worth noting at this point that since we have left the 8-bit limitations of the PMP peripheral far behind us now, we could choose to populate two bytes of PORTB at once, aiming for sixteen bits per pixel but actually getting fourteen bits per pixel once the unmapped bits have been taken into account. However, this would double our framebuffer memory requirements for the same resolution, and we don’t have that much memory. There may be devices with more than sixteen bits mapped in the 32-bit PORTB register (or in one of the other PORT registers), but they had better have more memory to be useful for greater colour depths.
Back in Black
One other matter presented itself as a problem. It is all very well generating a colour signal for the pixels in the framebuffer, but what happens at the end of each DMA transfer once a line of pixels has been transmitted? For the portions of the display not providing any colour information, the channel signals should be held at zero, yet it is likely that the last pixel on any given line is not at the lowest possible (black) level. And so the DMA transfer will have left a stray value in PORTB that could then confuse the monitor, producing streaks of colour in the border areas of the display, making the monitor unsure about the black level in the signal, and also potentially confusing some monitors about the validity of the picture, too.
As with the horizontal sync pulses, we need a prompt way of switching off colour information as soon as the pixel data has been transferred. We cannot really use an Output Compare unit because that only affects the value of a single output pin, and although we could wire up some kind of blanking in our external circuit, it is simpler to look for a quick solution within the capabilities of the microcontroller. Fortunately, such a quick solution exists: we can “chain” another DMA channel to the one providing the pixel data, thereby having this new channel perform a transfer as soon as the pixel data has been sent in its entirety. This new channel has one simple purpose: to transfer a single byte of black pixel data. By doing this, the monitor will see black in any borders and beyond the visible regions of the display.
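In outline, the chained arrangement behaves like this, with the second channel firing as soon as the first completes. Again, this is only a model of the behaviour, not of the hardware itself:

```python
# Modelling two chained DMA channels: the first sends a line of pixel
# data to the port, and the second, triggered by the first channel's
# completion, sends a single black (zero) byte to restore blanking.

def transfer_line(port_writes, line):
    "First channel: send every pixel byte of the line to the port."
    port_writes.extend(line)

def chained_blank(port_writes):
    "Second channel: send one byte of black pixel data."
    port_writes.append(0)

port_writes = []
line = bytes([0x7F] * 160)    # one line of bright pixels
transfer_line(port_writes, line)
chained_blank(port_writes)    # fires automatically after the line

print(len(port_writes))       # 161 writes in total
print(port_writes[-1])        # the port is left at black (zero)
```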
Wiring Up
Of course, the microcontroller still has to be connected to the monitor somehow. First of all, we need a way of accessing the pins of a VGA socket or cable. One reasonable approach is to obtain something that acts as a socket and that breaks out the different signals from a cable, connecting the microcontroller to these broken-out signals.
Wanting to get something for this task quickly and relatively conveniently, I found a product at a local retailer that provides a “male” VGA connector and screw-adjustable terminals to break out the different pins. But since the VGA cable also has a male connector, I also needed to get a “gender changer” for VGA that acts as a “female” connector in both directions, thus accommodating the VGA cable and the male breakout board connector.
Wiring up to the broken-out VGA connector pins is mostly a matter of following diagrams and the pin numbering scheme, illustrated well enough in various resources (albeit with colour signal transposition errors in some resources). Pins 1, 2 and 3 need some special consideration for the red, green and blue signals, and we will look at them in a moment. However, pins 13 and 14 are the horizontal and vertical sync pins, respectively, and these can be connected directly to the PIC32 output pins in this case, since the 3.3V output from the microcontroller is supposedly compatible with the “TTL” levels. Pins 5 through 10 can be connected to ground.
We have seen mentions of colour signals with magnitudes of up to 0.7V, but no explicit mention of how they are formed has been presented in this article. Fortunately, everyone is willing to show how they converted their digital signals to an analogue output, with most of them electing to use a resistor network to combine each output pin within a channel to produce a hopefully suitable output voltage.
Here, with two bits per channel, I take the most significant bit for a channel and send it through a 470Ω resistor. Meanwhile, the least significant bit for the channel is sent through a 1000Ω resistor. Thus, the former contributes more to the magnitude of the signal than the latter. If we were only dealing with channel information, this would be as much as we need to do, but here we also employ an intensity bit whose job it is to boost the channels by a small amount, making sure not to allow the channels to pollute each other via this intensity sub-circuit. Here, I feed the intensity output through a 2200Ω resistor and then to each of the channel outputs via signal diodes.
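Assuming the monitor terminates each channel with the standard 75Ω, the resulting voltages can be estimated with simple nodal analysis. The diodes in the intensity path are ignored here (they block a low intensity pin from loading the node and add a forward drop), so this is only a rough estimate:

```python
# Estimating a channel's output voltage from the resistor network: 3.3V
# pins drive 470 ohm (MSB), 1000 ohm (LSB) and 2200 ohm (intensity)
# resistors into the monitor's 75 ohm termination. The diodes in the
# intensity path are ignored, so this is only a rough estimate.

SUPPLY = 3.3   # volts from the PIC32 output pins
LOAD = 75.0    # standard VGA termination resistance

def channel_voltage(msb, lsb, intensity):
    "Nodal analysis, each pin driving 3.3V or 0V through its resistor."
    branches = [(msb, 470.0), (lsb, 1000.0), (intensity, 2200.0)]
    current = sum((SUPPLY if on else 0.0) / r for on, r in branches)
    conductance = sum(1.0 / r for on, r in branches) + 1.0 / LOAD
    return current / conductance

print(f"{channel_voltage(1, 1, 1):.2f} V")   # everything driven high
print(f"{channel_voltage(1, 0, 0):.2f} V")   # MSB only
print(f"{channel_voltage(0, 0, 0):.2f} V")   # black
```

With everything driven high, the estimate lands at roughly 0.7V, which matches the expected full-scale level for the colour signals.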
The Final Picture
I could probably go on and cover other aspects of the solution, but the fundamental aspects are probably dealt with sufficiently above to help others reproduce this experiment themselves. Populating memory with usable image data, at least in this solution, involves copying data to RAM, and I did experience problems with accessing RAM that are probably related to CPU initialisation (as covered in my previous article) and to synchronising the memory contents with what the CPU has written via its cache.
As for the actual picture data, the RGB-plus-intensity representation is not likely to be the format of most images these days. So, to prepare data for output, some image processing is needed. A while ago, I made a program to perform palette optimisation and dithering on images for the Acorn Electron, and I felt that it was going to be easier to adapt the dithering code than it was to figure out the necessary techniques required for software like ImageMagick or the Python Imaging Library. The pixel data is then converted to assembly language data definition statements and incorporated into my PIC32 program.

VGA output from a PIC32 microcontroller, featuring a picture showing some Oslo architecture, with the PIC32MX270 being powered (and programmed) by the Arduino Duemilanove, and with the breadboards holding the necessary resistors and diodes to supply the VGA breakout and, beyond that, the cable to the monitor
To demonstrate control over the visible region, I deliberately adjusted the display frequencies so that the monitor would consider the signal to be carrying an image 800 pixels by 600 pixels at a refresh rate of 60Hz. Since my framebuffer is only 256 lines high, I double the lines to produce 512 lines for the display. It would seem that choosing a line rate to produce only 512 lines makes the monitor try to show something compatible with the traditional 640×480 resolution, with lines consequently being lost off the screen. I suppose I could settle for 480 lines or aim for 300 lines instead, but I actually don’t mind having a border around the picture.
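In the idealised case, line doubling is just an integer division when mapping display lines to framebuffer lines. A sketch, assuming the nominal 600 visible lines of the 800×600 mode with the picture centred between black borders (in practice, as noted above, the monitor may crop things differently):

```python
# Mapping display lines to framebuffer lines with line doubling: 256
# framebuffer lines become 512 display lines, assumed here to be
# centred within the nominal 600 visible lines of the 800x600 mode.

VISIBLE_LINES = 600
FRAMEBUFFER_LINES = 256
DOUBLED = FRAMEBUFFER_LINES * 2
BORDER = (VISIBLE_LINES - DOUBLED) // 2   # black border above and below

def framebuffer_line(display_line):
    "Framebuffer line for a display line, or None within the border."
    if BORDER <= display_line < BORDER + DOUBLED:
        return (display_line - BORDER) // 2
    return None   # border: show black

print(framebuffer_line(0))      # None: top border
print(framebuffer_line(44))     # first framebuffer line
print(framebuffer_line(45))     # the same line again (doubling)
print(framebuffer_line(556))    # None: bottom border
```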
It is worth noting that I haven’t really mentioned a “pixel clock” or “dot clock” so far. As far as the display receiving the VGA signal is concerned, there is no pixel clock in that signal. And as far as we are concerned, the pixel clock is only important when deciding how quickly we can get our data into the signal, not in actually generating the signal. We can generate new colour values as slowly (or as quickly) as we might like, and the result will be wider (or narrower) pixels, but it shouldn’t make the actual signal invalid in any way.
Of course, it is important to consider how quickly we can generate pixels. Previously, I mentioned a 24MHz clock being used within the PIC32, and it is this clock that is used to drive peripherals and this clock’s frequency that will limit the transfer speed. As noted elsewhere, a pixel clock frequency of 25MHz is used to support the traditional VGA resolution of 640×480 at 60Hz. With the possibilities of running the “peripheral clock” in the PIC32MX270 considerably faster than this, it becomes a matter of experimentation as to how many pixels can be supported horizontally.
For my own purposes, I have at least reached the initial goal of generating a stable and usable video signal. Further work is likely to involve attempting to write routines to modify the framebuffer, maybe support things like scrolling and sprites, and even consider interfacing with other devices.
Naturally, this project is available as Free Software from its own repository. Maybe it will inspire or encourage you to pursue something similar, knowing that you absolutely do not need to be any kind of “expert” to stubbornly persist and to eventually get results!
Evaluating PIC32 for Hardware Experiments
May 19th, 2017
Some time ago I became aware of the PIC32 microcontroller platform, perhaps while following various hardware projects, pursuing hardware-related interests, and looking for pertinent documentation. Although I had heard of PIC microcontrollers before, my impression was that they were mostly an alternative to other low-end computing products like the Atmel AVR series, but with a development ecosystem arguably more reliant on its vendor than the Atmel products, for which tools like avr-gcc and avrdude exist as Free Software and have gone on to see extensive use, perhaps most famously in connection with the Arduino ecosystem.
What made PIC32 stand out when I first encountered it, however, was that it uses the MIPS32 instruction set architecture instead of its own specialised architecture. Consequently, instead of being reliant on the silicon vendor and random third-party tool providers for proprietary tools, the possibility of using widely-used Free Software tools exists. Moreover, with a degree of familiarity with MIPS assembly language, thanks to the Ben NanoNote, I felt that there might be an opportunity to apply some of my elementary skills to another MIPS-based system and gain some useful experience.
Some basic investigation occurred before I made any attempt to acquire hardware. As anyone having to pay attention to the details of hardware can surely attest, it isn’t much fun to obtain something only to find that some necessary tool required for the successful use of that hardware happens to be proprietary, only works on proprietary platforms, or is generally a nuisance in some way that makes you regret the purchase. Looking around at various resources such as the Pinguino Web site gave me some confidence that there were people out there using PIC32 devices with Free Software. (Of course, the eventual development scenario proved to be different from that envisaged in these initial investigations, but previous experience has taught me to expect that things may not go as originally foreseen.)
Some Discoveries
So that was perhaps more than enough of an introduction, given that I really want to focus on some of my discoveries in using my acquired PIC32 devices, hoping that writing them up will help others to use Free Software with this platform. So, below, I will present a few discoveries that may well, for all I know, be “obvious” to people embedded in the PIC universe since it began, or that might be “superfluous” to those happy that Microchip’s development tools can obscure the operation of the hardware to offer a simpler “experience”.
I should mention at this point that I chose to acquire PDIP-profile products for use with a solderless breadboard. This is the level of sophistication at which the Arduino products mostly operate and it allows convenient prototyping involving discrete components and other electronic devices. The evidence from the chipKIT and Pinguino sites suggested that it would be possible to set up a device on a breadboard, wire it up to a power supply with a few supporting components, and then be able to program it. (It turned out that programming involved another approach than indicated by that latter reference, however.)
The 28-pin product I elected to buy was the PIC32MX270F256B-50/SP. I also bought some capacitors that were recommended for wiring up the device. In the picture below, you can just about see the capacitor connecting two of the pins, and there is also a resistor pulling up one of the pins. I recommend obtaining a selection of resistors of different values so as to be able to wire up various circuits as you encounter them. Note that the picture does not show a definitive wiring guide: please refer to the product documentation or to appropriate project documentation for such things.
Programming the Device
Despite the apparent suitability of a program called pic32prog, recommended by the “cheap DIY programmer” guide, I initially found success elsewhere. I suppose it didn’t help that the circuit diagram was rather hard to follow for me, as someone who isn’t really familiar with certain electrical constructs that have been mixed in, arguably without enough explanation.
Initial Recommendation: ArduPIC32
Instead, I looked for a solution that used an Arduino product (not something procured from ephemeral Chinese “auction site” vendors) and found ArduPIC32 living a quiet retirement in the Google Code Archive. Bypassing tricky voltage level conversion and connecting an Arduino Duemilanove with the PIC32 device on its 5V-tolerant JTAG-capable pins, ArduPIC32 mostly seemed to work, although I did alter it slightly to work the way I wanted to and to alleviate the usual oddness with Arduino serial-over-USB connections.
However, I didn’t continue to use ArduPIC. One reason is that programming using the JTAG interface is slow, but a much more significant reason is that the use of JTAG means that the pins on the PIC32 associated with JTAG cannot be used for other things. This is either something that “everybody” knows or just something that Microchip doesn’t feel is important enough to mention in convenient places in the product documentation. More on this topic below!
Final Recommendation: Pickle (and Nanu Nanu)
So, I did try and use pic32prog and the suggested circuit, but had no success. Then, searching around, I happened to find some very useful resources indeed: Pickle is a GPL-licensed tool for uploading data to different PIC devices including PIC32 devices; Nanu Nanu is a GPL-licensed program that runs on AVR devices and programs PIC32 devices using the ICSP interface. Compiling the latter for the Arduino and uploading it in the usual way (actually done by the Makefile), it can then run on the Arduino and be controlled by the Pickle tool.
Admittedly, I did have some problems with the programming circuit, most likely self-inflicted, but the developer of these tools was very responsive and, as I know myself from being in his position in other situations, provided the necessary encouragement that was perhaps most sorely lacking to get the PIC32 device talking. I used the “sink” or “open drain” circuit so that the Arduino would be driving the PIC32 device using a suitable voltage and not the Arduino’s native 5V. Conveniently, this is the default configuration for Nanu Nanu.
I should point out that the Pinguino initiative promotes USB programming similar to that employed by the Arduino series of boards. Even though that should make programming very easy, it is still necessary to program the USB bootloader on the PIC32 device using another method in the first place. And for my experiments involving an integrated circuit on a breadboard, setting up a USB-based programming circuit would have been a distraction, complicating the familiarisation process and mostly duplicating functionality that the Arduino can already provide, even if the resulting two-stage programming arrangement may seem a little contrived.
Compiling Code for the Device
This is perhaps the easiest problem to solve, strongly motivating my choice of PIC32 in the first place…
Recommendation: GNU Toolchain
PIC32 uses the MIPS32 instruction set. Since MIPS has been around for a very long time, and since the architecture was prominent in workstations, servers and even games consoles in the late 1980s and 1990s, remaining in widespread use in more constrained products such as routers as this century has progressed, the GNU toolchain (GCC, binutils) has had a long time to comfortably support MIPS. Although the computer you are using is not particularly likely to be MIPS-based, cross-compiling versions of these tools can be built to run on, say, x86 or x86-64 while generating MIPS32 executable programs.
And fortunately, Debian GNU/Linux provides the mipsel-linux-gnu variant of the toolchain packages (at least in the unstable version of Debian) that makes the task of building software simply a matter of changing the definitions for the compiler, linker and other tools in one’s Makefile to use these variants instead of the unprefixed “host” gcc, ld, and so on. You can therefore keep using the high-quality Free Software tools you already know. The binutils-mipsel-linux-gnu package provides an assembler, if you just want to practise your MIPS assembly language, as well as a linker and tools for creating binaries. Meanwhile, the gcc-mipsel-linux-gnu package provides a C compiler.
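For example, a Makefile might simply redefine the usual tool variables, with everything else staying much as it would for a native build. This fragment is only a hypothetical illustration of the idea:

```make
# Hypothetical Makefile fragment: point the usual variables at Debian's
# mipsel-linux-gnu cross-toolchain instead of the host tools.
CROSS   = mipsel-linux-gnu-
AS      = $(CROSS)as
LD      = $(CROSS)ld
CC      = $(CROSS)gcc
OBJCOPY = $(CROSS)objcopy
```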
Perhaps the only drawback of using the GNU tools is that people using the proprietary tools supplied by Microchip and partners will post example code that uses special notation interpreted in a certain way by those products. Fortunately, with some awareness that this is going on, we can still support the necessary functionality in our own way, as described below.
Configuring the Device
With the PIC32, and presumably with the other PIC products, there is a distinct activity of configuring the device when programming it with program code and data. This isn’t so obvious until one reads the datasheets, tries to find a way of changing some behaviour of the device, and then stumbles across the DEVCFG registers. These registers cannot be set in a running program: instead, they are “programmed” before the code is run.
You might wonder what the distinction is between “programming” the device to take the code you have written and “programming” the configuration registers, and there isn’t much difference conceptually. All that matters is that you have your program written into one part of the device’s memory and you also ask for certain data values to be written to the configuration registers. How this is done in the Microchip universe is with something like this:
#pragma config SOMETHING = SOMEVALUE;
With a basic GNU tool configuration, we need to find the equivalent operation and express it in a different way (at least as far as I know, unless someone has implemented an option to support this kind of notation in the GNU tools). The mechanism for achieving this is related to the linker script and is described in a section of this article presented below. For now, we will concentrate on what configuration settings we need to change.
Recommendation: Disable JTAG
As briefly mentioned above, one thing that “everybody” knows, at least if they are using Microchip’s own tools and are busy copying code from Microchip’s examples, is that the JTAG functionality takes over various pins and won’t let you use them, even if you switch on a “peripheral” in the microcontroller that needs to use those pins. Maybe I am naive about how intrusive JTAG should or should not be, but the lesson from this matter is just to configure the device to not have JTAG features enabled and to use the ICSP programming method recommended above instead.
Recommendation: Disable Watchdog Timer
Again, although I am aware of the general concept of a watchdog timer – something that resets a device if it thinks that the device has hung, crashed, or experienced something particularly bad – I did not expect something like this to necessarily be configured by default. In case it is, and one does see lots of code assuming so, then it should be disabled as well. Otherwise, I can imagine that you might experience spontaneous resets for no obvious reason.
Recommendation: Configure the Oscillator and Clocks
If you need to change the oscillator frequency or the origin of the oscillator used by the PIC32, it is perhaps best to do this in the configuration registers rather than try and mess around with this while the device is running. Indeed, some configuration is probably unavoidable even if there is a need to, say, switch between oscillators at run-time. Meanwhile, the relationship between the system clock (used by the processor to execute instructions) and the peripheral clock (used to interact with other devices and to run things like timers) is defined in the configuration registers.
Linker Scripts
So, to undertake the matter of configuration, a way is needed to express the setting of configuration register values in a general way. For this, we need to take a closer look at linker scripts. If you routinely compile and link software, you will be using linker scripts without realising it, because such scripts are telling the tools things about where parts of the program should be stored, what kinds of addresses they should be using, and so on. Here, we need to take more of an interest in such matters.
Recommendation: Expand a Simple Script
Writing linker scripts does not seem like much fun. The syntax is awkward to read and to produce as a human being, and knowledge about tool output is assumed. However, it is possible to start with a simple script that works for someone else in a similar situation and to modify it conservatively in order to achieve what you need. I started out with one that just defined the memory regions and a few sections. To avoid reproducing all the details here, I will just show what the memory regions for a configuration register look like:
config2 : ORIGIN = 0xBFC00BF4, LENGTH = 0x4
physical_config2 : ORIGIN = 0x3FC00BF4, LENGTH = 0x4
These lines will be written in the MEMORY construct in the script. They tell us that config2 is a region of four bytes starting at virtual address 0xBFC00BF4, which is the location of the DEVCFG2 register as specified in the PIC32MX270 documentation. However, this register actually has a physical address of 0x3FC00BF4. The difference between virtual addresses and physical addresses is perhaps easiest to summarise by saying that CPU instructions would use the virtual address when referencing the register, whereas the actual memory of the device employs the physical address to refer to this four-byte region.
Meanwhile, in the SECTIONS construct, there needs to be something like this:
.devcfg2 : { *(.devcfg2) } > config2 AT > physical_config2
Now you might understand my remark about the syntax! Nevertheless, what this line does is to tell the tools to put things from a section called .devcfg2 in the physical_config2 memory region and, if there were to be any address references in the data (there are none in this case), to have them use the addresses in the config2 region.
Recommendation: Define Configuration Sections and Values in the Code
Since I have been using assembly language, here is what I do in my actual program source file. Having looked at the documentation and figured out which configuration register I need to change, I introduce a section in the code that defines the register value. For DEVCFG2, it looks like this:
.section .devcfg2, "a"
.word 0xfff9fffb /* DEVCFG2<18:16> = FPLLODIV<2:0> = 001; DEVCFG2<6:4> = FPLLMUL<2:0> = 111; DEVCFG2<2:0> = FPLLIDIV<2:0> = 011 */
Here, I fully acknowledge that this might not be the best approach, but when you’re learning about all these things at once, you make progress as best you can. In any case, what this does is to tell the assembler to include the .devcfg2 section and to populate it with the specified “word”, which is four bytes on the 32-bit PIC32 platform. This word contains the value of the register, which has been expressed in hexadecimal with the most significant digit first.
Returning to our checklist of configuration tasks, what we now need to do is to formulate each one in terms of configuration register values, introduce the necessary sections and values, and adjust the values to contain the appropriate bit patterns. The above example shows how the DEVCFG2 bits are adjusted and set in the value of the register. Here is a short amplification of the operation:
DEVCFG2 Bits

| 31…28 | 27…24 | 23…20 | 19…16 | 15…12 | 11…8 | 7…4 | 3…0 |
|---|---|---|---|---|---|---|---|
| 1111 | 1111 | 1111 | 1001 | 1111 | 1111 | 1111 | 1011 |
| | | | FPLLODIV<2:0> | | | FPLLMUL<2:0> | FPLLIDIV<2:0> |
| f | f | f | 9 | f | f | f | b |

Here, the bits in the columns labelled with field names are those of interest and have been changed to the desired values. It turns out that we can set the other bits as 1 for the functionality we want (or don’t want) in this case.
By the way, the JTAG functionality is disabled in the DEVCFG0 register (JTAGEN, bit 2, on this device). The watchdog timer is disabled in DEVCFG1 (FWDTEN, bit 23, on this device).
Recommendation: Define Regions for Exceptions and Interrupts
The MIPS architecture has the processor jump to certain memory locations when bad things happen (exceptions) or when other things happen (interrupts). We will cover the details of this below, but while we are setting up the places in memory where things will reside, we might as well define where the code to handle exceptions and interrupts will be living:
.flash : { *(.flash*) } > kseg0_program_mem AT > physical_program_mem
This will be written in the SECTIONS construct. It relies on a memory region being defined, which would appear in the MEMORY construct as follows:
kseg0_program_mem (rx) : ORIGIN = 0x9D000000, LENGTH = 0x40000
physical_program_mem (rx) : ORIGIN = 0x1D000000, LENGTH = 0x40000
These definitions allow the .flash section to be placed at 0x9D000000 but actually be written to memory at 0x1D000000.
Initialising the Device
On the various systems I have used in the past, even when working in assembly language I never had to deal with the earliest stages of the CPU’s activity. However, on the MIPS systems I have used in more recent times, I have been confronted with the matter of installing code to handle system initialisation, and this does require some knowledge of what MIPS processors would like to do when something goes wrong or if some interrupt arrives and needs to be processed.
The convention with the PIC32 seems to be that programs are installed within the MIPS KSEG0 region (one of the four principal regions) of memory, specifically at address 0x9FC00000, and so in the linker script we have MEMORY definitions like this:
kseg0_boot_mem (rx) : ORIGIN = 0x9FC00000, LENGTH = 0xBF0
physical_boot_mem (rx) : ORIGIN = 0x1FC00000, LENGTH = 0xBF0
As you can see, this region is far shorter than the 512MB of the KSEG0 region in its entirety. Indeed, 0xBF0 is only 3056 bytes! So, we need to put more substantial amounts of code elsewhere, some of which will be tasked with handling things when they go wrong.
Recommendation: Define Exception and Interrupt Handlers
As we have seen, the matter of defining routines to handle errors and interrupt conditions falls on the unlucky Free Software developer in this case. When something goes wrong, like the CPU not liking an instruction it has just seen, it will jump to a predefined location and try and execute code to recover. By default, with the PIC32, this location will be at address 0x80000000 which is the start of RAM, but if the RAM has not been configured then the CPU will not achieve very much trying to load instructions from that address.
Now, it can be tempting to set the “exception base”, as it is known, to be the place where our “boot” code is installed (at 0x9FC00000). So if something bad does happen, our code will start running again “from the top”, a bit like what used to happen if you ever wrote BASIC code that said…
10 ON ERROR GOTO 10
Clearly, this isn’t particularly desirable because it can mask problems. Indeed, I found that because I had not observed a step in the little dance required to change interrupt locations, my program would be happily restarting itself over and over again upon receiving interrupts. This is where the work done in the linker script starts to pay off: we can move the exception handler, this being the code residing at the exception base, to another region of memory and tell the CPU to jump to that address instead. We should therefore never have unscheduled restarts occurring once this is done.
Again, I have been working in assembly language, so I position my exception handling code using a directive like this:
.section .flash, "a"
exception_handler:
/* Exception handling code goes here. */
Note that .flash is what we mentioned in the linker script. After this directive, the exception handler is defined so that the CPU has something to jump to. But exceptions are just one kind of condition that may occur, and we also need to handle interrupts. Although we might be able to handle both things together, you can instead position an interrupt handler after the exception handler at a well-defined offset, and the CPU can be told to use that when it receives an interrupt. Here is what I do:
.org 0x200
interrupt_handler:
/* Interrupt handling code goes here. */
The .org directive tells the assembler that the interrupt handler will reside at an offset of 0x200 from the start of the .flash section. This number isn’t something I have made up: it is defined by the MIPS architecture and will be observed by the CPU when suitably configured.
So that leaves the configuration. Although one sees a lot of advocacy for the “multi-vector” interrupt handling support of the PIC32, possibly because the Microchip example code seems to use it, I wanted to stick with the more widely available “single-vector” support which is what I have effectively described above: you have one place the CPU jumps to and it is then left to the code to identify the interrupt condition – what it is that happened, exactly – and then handle the condition appropriately. (Multi-vector handling has the CPU identify the kind of condition and then choose a condition-specific location to jump to all by itself.)
The following steps are required for this to work properly:
- Make sure that the CP0 (co-processor 0) STATUS register has the BEV bit set. Otherwise things will fail seemingly inexplicably.
- Set the CP0 EBASE (exception base) register to use the exception handler address. This changes the target of the jump that occurs when an exception or interrupt occurs.
- Set the CP0 INTCTL (interrupt control) register to use a non-zero vector spacing, even though this setting probably has no relevance to the single-vector mode.
- Make sure that the CP0 CAUSE (exception cause) register has the IV bit set. This tells the CPU that the interrupt handler is at that magic 0x200 offset from the exception handler, meaning that interrupts should be dispatched there instead of to the exception handler.
- Now make sure that the CP0 STATUS register has the BEV bit cleared, so that the CPU will now use the new handlers.
To enable exceptions and interrupts, the IE bit in the CP0 STATUS register must be set, but there are also other things that must be done for them to actually be delivered.
Recommendation: Handle the Initial Error Condition
As I found out with the Ben NanoNote, a MIPS device will happily run in its initial state, but it turns out that this state is some kind of “error” state that prevents exceptions and interrupts from being delivered, even though interrupts may have been enabled. This does make some kind of sense: if a program is in the process of setting up interrupt handlers, it doesn’t really want an interrupt to occur before that work is done.
So, the MIPS architecture defines some flags in the CP0 STATUS register to override normal behaviour as some kind of “fail-safe” controls. The ERL (error level) bit is set when the CPU starts up, preventing errors (or probably most errors, at least) as well as interrupts from interrupting the execution of the installed code. The EXL (exception level) bit may also be set, preventing exceptions and interrupts from occurring even when ERL is clear. Thus, both of these bits must be cleared for interrupts to be enabled and delivered, and what happens next rather depends on how successful we were in setting up those handlers!
Recommendation: Set up the Global Offset Table
Something that I first noticed when looking at code for the Ben NanoNote was the initialisation of a register to refer to something called the global offset table (or GOT). For anyone already familiar with MIPS assembly language or the way code is compiled for the architecture, programs follow a convention where they refer to objects via a table containing the addresses of those objects. For example:
| Offset from Start | Member |
|---|---|
| +0 | 0 |
| +4 | address of _start |
| +8 | address of my_routine |
| +12 | address of other_routine |
| … | … |
But in order to refer to the table, something has to have the address of the table. This is where the $gp register comes in: it holds that address and lets the code access members of the table relative to the address.
What seems to work for setting up the $gp register is this:
lui $gp, %hi(_GLOBAL_OFFSET_TABLE_)
ori $gp, $gp, %lo(_GLOBAL_OFFSET_TABLE_)
Here, the upper 16 bits of the $gp register are set with the “high” 16 bits of the value assigned to the special _GLOBAL_OFFSET_TABLE_ symbol, which provides the address of the special .got section defined in the linker script. This value is then combined using a logical “or” operation with the “low” 16 bits of the symbol’s value.
(I’m sure that anyone reading this far will already know that MIPS instructions are fixed length and are the same length as the address being loaded here, so it isn’t possible to fit the address into the load instruction. So each half of the address has to be loaded separately by different instructions. If you look at the assembled output of an assembly language program employing the “li” instruction, you will see that the assembler breaks this instruction down into “lui” and “ori” if it needs to. Note well that you cannot rely on the “la” instruction to load the special symbol into $gp precisely because it relies on $gp having already been set up.)
Rounding Off
I could probably go on about MIPS initialisation rituals, setting up a stack for function calls, and so on, but that isn’t really what this article is about. My intention here is to leave enough clues and reminders for other people in a similar position and perhaps even my future self.
Even though I have some familiarity with the MIPS architecture, I suppose you might be wondering why I am not evaluating more open hardware platforms. I admit that it is partly laziness: I could get into doing FPGA-related stuff and deploy open microcontroller cores, maybe even combining them with different peripheral circuits, but that would be a longer project with a lot of familiarisation involved, plus I would have to choose carefully to get a product supported by Free Software. It is also cheapness: I could have ordered a HiFive1 board and started experimenting with the RISC-V architecture, but that board seems expensive for what it offers, at least once you disregard the pioneering aspects of the product and focus on the applications of interest.
So, for now, I intend to move slowly forwards and gain experiences with an existing platform. In my next article on this topic, I hope to present some of the things I have managed to achieve with the PIC32 and a selection of different components and technologies.
Funding Free Software
March 28th, 2017
My last post raised the issue of funding Free Software, which I regard as an essential way of improving both the user and developer experiences around the use and creation of Free Software. In short, asking people to just volunteer more of their spare time, in order to satisfy the erroneous assumption that Free Software must be free of charge, is neither sustainable nor likely to grow the community around Free Software to which both users and developers belong.
In response, Eric Jones raised some pertinent themes: attitudes to paying for things; judgements about the perceived monetary value of those things; fund-raising skills and what might be called business or economic awareness; the potentially differing predicaments of infrastructure projects and user-facing projects. I gave a quick response in a comment of my own, but I also think that a more serious discussion could be had to try and find real solutions to the problem of sustainable funding for Free Software. I do not think that a long message to the FSFE discussion mailing list is the correct initial step, however, and so this article is meant to sketch out a few thoughts and suggestions as a prelude to a proper discussion.
On Paying for Things
It seems to me that people who can afford it do not generally have problems finding ways to spend their money. For instance, how many people spend tens of euros, dollars, pounds per month on mobile phone subscriptions with possibly-generous data allowances that they do not use? What about insurance, electricity, broadband Internet access, television/cable/streaming services, banking services? How often do we hear various self-appointed “consumer champions” berate people for overspending on such things, telling them that they could shop around and save money? It is almost as if this is the primary purpose of the “developed world” human: to continually monitor the “competitive landscape” as each of its inhabitants adjusts its pricing and terms, nudging their captive revenues upward until such antics are spotted and punished with a rapid customer relationship change.
I may have noted before that many industries have moved to subscription models, presumably having noticed that once people have been parked in a customer relationship with a stable and not-particularly-onerous outgoing payment once a month, they will most likely stay in that relationship unless something severe enough occurs to overcome the effort of switching. Now, I do not advocate taking advantage of people in this way, but disregarding the matter of customer inertia and considering subscription models specifically, it is to be noted that these are used by various companies in the enterprise sector and have even been tried with individual “consumers”. For example, Linspire offered a subscription-based GNU/Linux distribution, and Eazel planned to offer subscription based access to online services providing, amongst other things, access to Free Software repositories.
(The subscription model can also be dubious in situations where things should probably be paid for according to actual usage, rather than some formula being cooked up to preserve a company’s balance sheet. For instance, is it right that electricity should be sold at a fixed per-month subscription cost? Would that product help uphold a company’s obligations to source renewable energy or would it give them some kind of implied permission to buy capacity generated from fossil fuels that has been dumped on the market? This kind of thing is happening now: it is not a mere thought exercise.)
Neither Linspire nor Eazel survived to the present day, the former being unable to resist the temptation to mix proprietary software into its offerings, thus alienating potential customers who might have considered a polished Debian-based distribution product three years before Ubuntu entered the scene. Eazel, meanwhile, burned through its own funding and closed up shop even before Linspire appeared, leaving the Nautilus file manager as its primary legacy. Both companies funded Free Software development, with Linspire funding independent projects, but obviously both companies decided internally how to direct funds to deserving projects, not really so unlike how businesses might normally fund Free Software.
On Infrastructure and User-Facing Software
Would individuals respond well to a one-off payment model for Free Software? The “app” model, with accompanying “store”, has clearly been considered as an option by certain companies and communities. Some have sought to provide such a store stocked with Free Software and, in the case of certain commercial services, proprietary applications. Others seek to provide their own particular software in existing stores. One hindrance to the adoption of such stores in the traditional Free Software realm is the availability of mature distribution channels for making software available, and such channels are typically freely accessible unless an enterprise software distribution is involved.
Since there seems to be an ongoing chorus of complaint about how “old” distribution software generally is, one might think that an opportunity might exist for more up-to-date applications to be delivered through some kind of store. Effort has gone into developing mechanisms for doing this separately from distributions’ own mechanisms, partly to prevent messing up the stable base system, partly to deliver newer versions of software that can be supported by the base system.
One thing that would bother me about such initiatives, even assuming that the source code for each “product” was easily obtained and rebuilt to run in the container-like environment inevitably involved, would be the longevity of the purchased items. Will the software keep working even after my base system has been upgraded? How much effort would I need to put in myself? Is this something else I have to keep track of?
For creators of Free Software, one concern would be whether it would even be attractive for people to obtain their software via such a store. As Eric notes, applications like The GIMP resemble the kind of discrete applications that people routinely pay money for, if marketed at them by proprietary software companies. But what about getting a more recent version of a mail server or a database system? Some might claim that such infrastructure things are the realm of the enterprise and that only companies would pay for this, and if individuals were interested, it would only be a small proportion of the wider GIMP-using audience.
(One might argue that the never-ending fixation on discrete applications always was a factor undermining the vision of component-based personal computing systems. Why deliver a reliable component that is reused by everyone else when you could bundle it up inside an “experience” and put your own brand name on it, just as everyone else will do if you make a component instead? The “app” marketplace now is merely this phenomenon taken to a more narcissistic extreme.)
On Sharing the Revenue
This brings us to how any collected funds would be shared out. Corporate donations are typically shared out according to corporate priorities, and communities probably do not get much of a say in the matter. In a per-unit store model, is it fair to channel all revenue from “sales” of The GIMP to that project or should some of that money be reserved for related projects, maybe those on which the application is built or maybe those that are not involved but which could offer complementary solutions? Should the project be expected to distribute the money to other projects if the storefront isn’t doing so itself? What kind of governance should be expected in order to avoid squabbles about people taking an unfair share of the money?
Existing funding platforms like Gratipay, which solicit ongoing contributions or donations as the means of funding, avoid such issues entirely by defining project-level entities which receive the money, and it is their responsibility to distribute it further. Maybe this works if the money is not meant to go any further than a small group of people with a well-defined entitlement to the incoming revenues, but with relationships between established operating system distributions and upstream projects sometimes becoming difficult, with distributions sometimes demanding an easier product to work with, and with upstream projects sometimes demanding that they not be asked to support those distributions’ users, the addition of money risks aggravating such tensions further despite having the potential to reduce or eliminate them if the sharing is done right.
On Worthy Projects
This is something about which I should know just a little. My own Free Software endeavours do not really seem to attract many users. It is not that they have attracted no other users besides myself – I have had communications over the years with some reasonably satisfied users – but my projects have not exactly been seen as meeting some urgent and widespread need.
(Having developed a common API for Python Web applications, and having delivered software to provide it, deliberately basing it on existing technologies, it was particularly galling to see that particular software labelled by some commentators as contributing to the technology proliferation problem it was meant to alleviate, but that is another story.)
What makes a worthy project? Does it have to have a huge audience or does it just need to do something useful? Who gets to define “useful”? And in a situation where someone might claim that someone else’s “important” project is a mere vanity exercise – doing something that others have already done, perhaps, or solving a need that “nobody really has” – who are we to believe if the developer seeks funding to further their work? Can people expect to see money for working on what they feel is important, or should others be telling them what they should be doing in exchange for support?
It is great that people are able to deliver Free Software as part of a business because their customers can see how that software addresses their needs. But what about the cases where potential customers are not able to see how some software might be useful, even essential, to them? Is it not part of the definition of marketing to help people realise that they need a product or solution? And in a world where people appear unaware of the risks accompanying proprietary software and services, might it not be part of our portfolio of activities to not only develop solutions before the intended audience has realised their importance, but also to try and communicate their importance to that audience?
Even essential projects have been neglected by those who use them to perform essential functions in their lives or in their businesses. It took a testimonial by Edward Snowden to get people to think how little funding GNU Privacy Guard was getting.
On Sources of Funding
Some might claim that there are already plenty of sources of funding for Free Software. Some companies sponsor or make awards to projects or developers – this even happened to me last year after someone graciously suggested my work on imip-agent as being worthy of such an award – and there are ongoing programmes to encourage Free Software participation. Meanwhile, organisations exist within the Free Software realm that receive income and spend some of that income on improving Free Software offerings and to cover the costs of providing those works to the broader public.
The Python Software Foundation is an interesting example. It solicits sponsorship from organisations and individuals whilst also receiving substantial revenue from the North American Python conference series, PyCon. It would be irresponsible to just sit on all of this money, of course, and so some of it is awarded as grants or donations towards community activities. This is where it gets somewhat more interesting from the perspective of funding software, as opposed to funding activities around the software. Looking at the PSF board resolutions, one can get an idea of what the money is spent on, and having considered this previously, I made a script that tries to identify each of the different areas of spending. This is somewhat tricky due to the free-format nature of the source data, but the numbers are something like the following for 2016 resolutions:
| Category | Total (USD) | Percentage |
|---|---|---|
| Events | 288051 | 89% |
| Other software development | 19398 | 6% |
| Conference software development | 10440 | 3% |
| Outreach | 6400 | 2% |
| (All items) | 324289 | 100% |
Clearly, PyCon North America relies heavily on the conference management software that has been developed for it over the years – arguably, everyone and their dog wants to write their own conference software instead of improving existing offerings – but let us consider that expenditure a necessity for the PSF and PyCon. That leaves grants for other software development activities accounting for only 6% of the money made available for community-related activities, and one of those four grants was for translation of existing content, not software, into another language.
Now, the PSF’s mission statement arguably emphasises promotion over other matters, with “Python-related open source projects” and “Python-related research” appearing at the end of the list of objectives. Not having been involved in the grant approval process during my time as a PSF member, I cannot say what the obstacles are to getting access to funds to develop Python-related Free Software, but it seems to me that organisations like this particular one are not likely to be playing central roles in broader and more sustainable funding for Free Software projects.
On Bounties and Donations
Outside the normal boundaries of employment, there are a few other ways that have been suggested for getting paid to write Free Software. Donations are what fund organisations like the PSF, ostensibly representing one or more projects, but some individual developers solicit donations themselves, as do smaller projects that have not reached the stage of needing a foundation or other formal entity. More often than not, however, people are just doing this to “keep the lights on”, or rather, keep the hosting costs down for their Web site that has to serve up their software to everyone. Generally, such “tip jar” funding doesn’t really allow anyone to spend very much, if any, paid time on their projects.
At one point in time it seemed that bounties would provide a mechanism whereby users could help improve software projects by offering money if certain tasks were performed, features implemented, shortcomings remedied, and so on. Groups of users could pool their donations to make certain tasks more attractive for developers to work on, thus using good old-fashioned market forces to set rewards according to demand and to give developers an incentive to participate. At least in theory.
There are quite a few problems with bounty marketplaces. First of all, the pooled rewards for a task may not be at all realistic for the work requested. This is often the case on less popular marketplaces, and it used to be the case on virtually every marketplace, although one might say that more money is making it into the more well-known marketplaces now. Nevertheless, one ends up seeing bounties like this one for a new garbage collector for LuaJIT currently offering no reward at all, bearing the following comment from one discussion participant: “This really important for our production workload.” If that comment, particularly being made in such a venue, was made without any money accompanying it, then I don’t think we would need to look much further for a summarising expression of the industry-wide attitude problem around Free Software funding.
Meanwhile, where the cash is piling up, it doesn’t mean that anyone can actually take the bounty off the table. This bounty for a specific architecture port of LuaJIT attracted $5000 from IBM but still isn’t awarded, despite several people supposedly taking it on. And that brings up the next problem: who gets the reward? Unless there is some kind of coordination, and especially if a bounty is growing into something that might be a few months salary for someone, people will compete instead of collaborate to try and get their hands on the whole pot. But if a task is large and complicated, and particularly if the participants are speculating and not necessarily sufficiently knowledgeable or experienced in the technologies to complete it, such competition will just leave a trail of abandonment and very little to show for all the effort that has been made.
Admittedly, some projects appear to be making good use of such marketplaces. For example, the organisation behind elementary OS seems to have some traffic in terms of awarded bounties, although it might be said that they appear to involve smaller tasks and not some of the larger requests that appear to linger for years, never getting resolved. Of course, things can change on the outside without bounties ever getting reviewed for their continued relevance. How many Thunderbird bounties are still relevant now? Seven years is a long time even for Thunderbird.
Given a particular project with plenty of spare money coming in from somewhere, I suppose that someone who happens to be “in the zone” to do the work on that project could manage to queue up enough bounty claims to make some kind of living. But if that “somewhere” is a steady and sizeable source of revenue, and given the “gig economy” nature of the working relationship, one has to wonder whether anyone managing to get by in this way isn’t actually being short-changed by the people making the real money. And that leads us to the uncomfortable feeling that such marketplaces, while appearing to reward developers, are just a tool to drive costs down by promoting informal working arrangements and pitting people against each other for work that only needs to be paid for once, no matter how many times it was actually done in parallel.
Of course, bounties don’t really address the issue of where funds come from, although bounty marketplaces might offer ways of soliciting donations and encouraging sponsorship from companies. But one might argue that all they add is a degree of flexibility to the process of passing on donations that does not necessarily serve the interests of the participating developers. Maybe this is why the bounty marketplace featured in the above links, Bountysource, has shifted focus somewhat to compete with the likes of Gratipay and Patreon, seeking to emphasise ongoing funding instead of one-off rewards.
On Crowd-Funding
One significant development in funding mechanisms over the last few years has been the emergence of crowd-funding. Most people familiar with this would probably summarise it as someone promising something in the form of a campaign, asking for money to do that thing, getting the money if some agreed criteria are met, and then hopefully delivering that promised thing. As some people have unfortunately discovered, the last part can sometimes be a problem, resulting in continuing reputational damage for various crowd-funding platforms.
Such platforms have traditionally been popular for things like creative works, arts, crafts, literature, and so on. But hardware has become a popular funding area as people try their hand at designing devices, sourcing materials, going and getting the devices made, delivering the goods, and dealing with all the logistical challenges these things entail. Probably relatively few of these kinds of crowd-funding campaigns go completely according to plan; things like warranties and support may take a back seat (or not even be in the vehicle at all). The cynical might claim that crowd-funding is a way of people shirking their responsibilities as a manufacturer and doing things on the cheap.
But as far as software is concerned, crowd-funding might avoid the more common pitfalls experienced in other domains. Software developers would merely need to deliver software, not master other areas of expertise on a steep learning curve (sourcing, manufacturing, logistics), and they would benefit from being able to deliver the crucial element of the campaign using the same medium as that involved in getting everyone signed up in the first place: the Internet. So there wouldn’t be the same kind of customs and shipment issues that appear to plague just about every other kind of campaign.
I admit that I do not maintain a sufficient overview when it comes to software-related crowd-funding or, indeed, crowd-funding in general. The two major software campaigns I am aware of are Mailpile and Roundcube Next. Although development has been done on Roundcube Next, and Mailpile is certainly under active development, neither managed to deliver a product within the anticipated schedule. Software development costs are notoriously difficult to estimate, and it is very possible that neither project asked for enough money to pursue their goals with enough resources for timely completion.
One might say that it is no good complaining about things not getting done quickly enough and that people should pitch in and help out. On the one hand, I agree that since such products are Free Software and are available even in their unfinished state, we should take advantage of this to produce a result that satisfies everybody. On the other hand, we cannot keep returning to the “everybody pitch in” approach all the time, particularly when the circumstances provoking such a refrain have come about when people have purposefully tried to cultivate a different, more sustainable way of developing Free Software.
On Reflection
So there it is, a short article that went long again! Hopefully, it provides some useful thoughts about the limitations of existing funding approaches and some clues that might lead to better funding approaches in future.
Making Free Software Work for Everybody
March 17th, 2017
Another week and another perfect storm of articles and opinions. This time, we start with Jonas Öberg’s “How Free Software is Failing the Users“, where he notes that users don’t always get the opportunity to exercise their rights to improve Free Software. I agree with many of the things Jonas says, but he omits an important factor that is perhaps worth thinking about when reading some of the other articles. Maybe he will return to it in a later article, but I will discuss it here.
Let us consider the other articles. Alanna Irving of Open Collective published an interview with Jason Miller about project “maintainer burnout” entitled “Preact: Shattering the Perception that Open Source Must be Free“. It’s worth noting here that Open Collective appears to be a venture-capital-funded platform with similar goals to the more community-led Gratipay and Liberapay, which are funding platforms that enable people to get others to fund them to do ongoing work. Nolan Lawson of Microsoft describes the demands of volunteer-driven “open source” in “What it feels like to be an open-source maintainer“. In “Life of free software project“, Michal Čihař writes about his own experiences maintaining projects and trying to attract contributions and funding.
When reading about “open source”, one encounters some common themes over and over again: that Free Software (which is almost always referenced as “open source” when these themes are raised) must be free as in cost, and that people volunteer to work on such software in their own time or without any financial reward, often for fun or for the technical challenge. Of course, Free Software has never been about the cost. It probably doesn’t help that the word “free” can communicate the meaning of zero cost, but “free as in freedom” usually gets articulated very early on in any explanation of the concept of Free Software to newcomers.
Nor is “open source” about the cost. But the “open source” movement started out by differentiating itself from the Free Software movement, advocating the efficiency of producing Free Software rather than emphasising the matter of personal control over such software and the freedom it gives to its users. Indeed, the Open Source Initiative tells us this in its mission description:
Open source enables a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of open source is higher quality, better reliability, greater flexibility, lower cost, and an end to predatory vendor lock-in.
It makes Free Software – well, “open source” – sound like a great way of realising business efficiencies rather than being an ethical choice. And with this comes the notion that when it comes to developing software, a brigade of pixies on the Internet will happily work hard to deliver a quality product that can be acquired for free, thus saving businesses money on software licence and development costs.
Thus, everybody now has to work against this perception of no-cost, made-by-magic Free Software. Jonas writes, “I know how to bake bread, but oftentimes I choose to buy bread instead.” Unfortunately, thanks to the idea that the pixies will always be on hand to fix our computers or to make new things, we now have the equivalent of bakers being asked to bake bread for nothing. (Let us ignore the specifics of the analogy here: in some markets it isn’t exactly lucrative to run a bakery, either.)
Jason Miller makes some reasonable observations as he tries to “shatter” this perception. Sadly, as it seems to be with all these funding platforms, there is some way to go. With perhaps one or two exceptions, even the most generously supported projects appear to be drawing a fraction of a single salary as donations or contributions, and it would seem that things like meet-ups and hackerspaces attract funding more readily. I guess that when there are tangible expenses – rental costs, consumables, power and network bills – people are happy to pay such externally-imposed costs. When it comes to valuing the work done by someone, even if one can quote “market rates” and document that person’s hours, everyone can argue about whether it was “really worth that amount”.
Michal Čihař notes…
But the most important thing is to persuade people and companies to give back. You know there are lot of companies relying on your project, but how to make them fund the project? I really don’t know, I still struggle with this as I don’t want to be too pushy in asking for money, but I’d really like to see them to give back.
Sadly, we live in an age of free stuff. If it looks like a project is stalling because of a lack of investment, many people and businesses will look elsewhere instead of stepping up and contributing. Indeed, this is when you see those people and businesses approaching the developers of other projects, telling those developers that they really want to use their project but it perhaps isn’t yet “good enough” for “the enterprise” or for “professional use” and maybe if it only did this and this, then they would use it, and wouldn’t that give it the credibility it clearly didn’t have before? (Even if there are lots of satisfied existing users, and this supposed absence of credibility exists purely in the minds of those shopping around for something else to use.) Oh, and crucially, how about doing the work to make it “good enough” for us for nothing? Thank you very much.
It is in this way that independent Free Software projects are kept marginalised, remaining viable enough to survive (mostly thanks to volunteer effort) but weakened by being continually discarded in favour of something else as soon as a better “deal” can be made and another group of pixies exploited. Such projects are thereby ill-equipped to deal with aggressive proprietary competitors. When fancy features are paraded by proprietary software vendors in front of decision-makers in organisations that should be choosing Free Software, advocates of free and open solutions may struggle to persuade those decision-makers that Free Software solutions can step in and do what they need.
Playing projects against each other to see which pixies will work the hardest, making developers indulge in competitions to see who can license their code the most permissively (to “reach more people”, you understand), portraying Free Software development as some kind of way of showcasing developers’ skills to potential employers (while really just making them unpaid interns on an indefinite basis) are all examples of the opportunistic underinvestment in Free Software which ultimately just creates opportunities for proprietary software. And it also goes a long way to undermining the viability of the profession in an era when we apparently don’t have enough programmers.
So that was a long rant about the plight of developers, but what does this have to do with the users? Well, first of all, users need to realise that the things they use do not cost nothing to make. Of course, in this age of free stuff (as in stuff that costs no money), they can decide that some program or service just doesn’t “do it for them” any more and switch to a shinier, better thing, but that isn’t made of pixie dust either. All of the free stuff has other hidden costs in terms of diminished privacy, increased surveillance and tracking, dubious data security, possible misuse of their property, and the discovery that certain things that they appear to own weren’t really their property all along.
Users do need to be able to engage with Free Software projects within the conventions of those projects, of course. But they also need the option of saying, “I think this could be better in this regard, but I am not the one to improve it.” And that may need the accompanying option: “Here is some money to pay someone to do it.” Free Software was always about giving control to the users but not necessarily demanding that the users become developers (or even technical writers, designers, artists, and so on). A user benefits from their office suite or drawing application being Free Software not only when they have the technical knowledge to work on the software themselves, but also when they can hand the software to someone else and get them to fix or improve it instead. And, yes, that may well involve money changing hands.
Those of us who talk about Free Software and not “open source” don’t need reminding that it is about freedom and not about things being free of charge. But the users, whether they are individuals or organisations, end-users or other developers, may need reminding that you never really get something for nothing, especially when that particular something is actually a rather expensive thing to produce. Giving them the opportunity to cover some of that expense, and not just at “tip jar” levels, might actually help make Free Software work, not just for users, not just for developers, but as a consequence of empowering both groups, for everybody.
The Academic Barriers of Commercialisation
January 9th, 2017
Last year, the university through which I obtained my degree celebrated a “milestone” anniversary, meaning that I got even more announcements, notices and other such things than I was already getting from them before. Fortunately, not everything published into this deluge is bound up in proprietary formats (as one brochure was, sitting on a Web page in Flash-only form) or only reachable via a dubious “Libyan link-shortener” (as certain things were published via a social media channel that I have now quit). It is indeed infuriating to see one of the links in a recent HTML/plain text hybrid e-mail message using a redirect service hosted on the university’s own alumni sub-site, sending the reader to a bit.ly URL, which will redirect them off into the great unknown and maybe even back to the original site. But such things are what one comes to expect on today’s Internet with all the unquestioning use of random “cloud” services, each one profiling the unsuspecting visitor and betraying their privacy to make a few extra cents or pence.
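As an aside, the kind of redirect chain complained about above is easy to reason about mechanically: each hop simply points at the next URL until a final destination is reached. The following sketch models that behaviour in Python with a made-up chain (the URLs and the `follow_redirects` helper are purely illustrative; a real implementation would issue HTTP requests and read `Location` headers, with each intermediate service free to profile the visitor along the way):

```python
def follow_redirects(url, redirect_map, limit=10):
    """Follow a chain of redirects, returning every hop visited.

    redirect_map stands in for the Location headers each service
    would return; the limit guards against redirect loops.
    """
    chain = [url]
    while url in redirect_map:
        if len(chain) > limit:
            raise RuntimeError("too many redirects")
        url = redirect_map[url]
        chain.append(url)
    return chain

# A hypothetical chain like the one described: a university-hosted
# redirect bouncing through a link-shortener to the actual article.
hops = {
    "https://alumni.example.edu/r/abc": "https://bit.ly/xyz",
    "https://bit.ly/xyz": "https://www.example.edu/news/article",
}
print(follow_redirects("https://alumni.example.edu/r/abc", hops))
```

Every entry in the resulting chain is a service that sees the reader's visit, which is precisely the privacy concern raised above.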
But anyway, upon following a more direct – but still redirected – link to an article on the university Web site, I found myself looking around to see what gets published there these days. Personally, I find the main university Web site rather promotional and arguably only superficially informative – you can find out the required grades to take courses along with supposed student approval ratings and hypothetical salary expectations upon qualifying – but it probably takes more digging to get at the real detail than most people would be willing to do. I wouldn’t mind knowing what they teach now in their computer science courses, for instance. I guess I’ll get back to looking into that later.
Gatekeepers of Knowledge
However, one thing did catch my eye as I browsed around the different sections, encountering the “technology transfer” department with the expected rhetoric about maximising benefits to society: the inevitable “IP” policy in all its intimidating length, together with an explanatory guide to that policy. Now, I am rather familiar with such policies from my time at my last academic employer, having been obliged to sign some kind of statement of compliance at one point, but then apparently not having to do so when starting a subsequent contract. It was not as if enlightenment had come calling at the University of Oslo between these points in time such that the “IP rights” agreement now suddenly didn’t feature in the hiring paperwork; it was more likely that such obligations had presumably been baked into everybody’s terms of employment as yet another example of the university upper management’s dubious organisational reform and questionable human resources practices.
Back at Heriot-Watt University, credit is perhaps due to the authors of their explanatory guide to try and explain the larger policy document, because it is most likely that most people aren’t going to get through that much longer document and retain a clear head. But one potentially unintended reason for credit is that by being presented with a much less opaque treatment of the policy and its motivations, we are able to see with enhanced clarity many of the damaging misconceptions that have sadly become entrenched in higher education and academia, including the ways in which such policies actually do conflict with the sharing of knowledge that academic endeavour is supposed to be all about.
So, we get the sales pitch about new things needing investment…
However, often new technologies and inventions are not fully developed because development needs investment, and investment needs commercial returns, and to ensure commercial returns you need something to sell, and a freely available idea cannot be sold.
If we ignore various assumptions about investment or the precise economic mechanisms supposedly required to bring about such investment, we can immediately note that ideas on their own aren’t worth anything anyway, freely available or not. Although the Norwegian Industrial Property Office (or the Norwegian Patent Office if we use a more traditional name) uses the absurd vision slogan “turning ideas into values” (it should probably read “value”, but whatever), this perhaps says more about greedy profiteering through the sale of government-granted titles bound to arbitrary things than it does about what kinds of things have any kind of inherent value that you can take to the bank.
But assuming that we have moved beyond the realm of simple ideas and have entered the realm of non-trivial works, we find that we have also entered the realm of morality and attitude management:
That is why, in some cases, it is better for your efforts not to be published immediately, but instead to be protected and then published, for protection gives you something to sell, something to sell can bring in investment, and investment allows further development. Therefore in the interests of advancing the knowledge within the field you work in, it is important that you consider the commercial potential of your work from the outset, and if necessary ensure it is properly protected before you publish.
Once upon a time, the most noble pursuit in academic research was to freely share research with others so that societal, scientific and technological progress could be made. Now it appears that the average researcher should treat it as their responsibility to conceal their work from others, seek “protection” on it, and then release the encumbered details for mere perusal and the conditional participation of those once-valued peers. And they should, of course, be wise to the commercial potential of the work, whatever that is. Naturally, “intellectual property” offices in such institutions have an “if in doubt, see us” policy, meaning that they seek to interfere with research as soon as possible, and should someone fail to have “seen them”, that person’s loyalty may very well be called into question as if they had somehow squandered their employer’s property. In some institutions, this could very easily get people marginalised or “reorganised” if not immediately or obviously fired.
The Rewards of Labour
It is in matters of property and ownership where things get very awkward indeed. Many people would accept that employees of an organisation are producing output that becomes the property of that organisation. What fewer people might accept is that the customers of an organisation are also subject to having their own output taken to be the property of that organisation. The policy guide indicates that even undergraduate students may be subject to an obligation to assign ownership of their work to the university: those visiting the university supposedly have to agree to this (although it doesn’t say anything about what their “home institution” might have to say about that), and things like final year projects are supposedly subject to university ownership.
So, just because you as a student have a supervisor bound by commercialisation obligations, you end up not only paying tuition fees to get your university education (directly or through taxation), but you also end up having your own work taken off you because it might be seen as some element in your supervisor’s “portfolio”. I suppose this marks a new low in workplace regulation and standards within a sector that already skirts the law with regard to how certain groups are treated by their employers.
One can justifiably argue that employees of academic institutions should not be allowed to run away with work funded by those institutions, particularly when such funding originally comes from other sources such as the general public. After all, such work is not exactly the private property of the researchers who created it, and to treat it as such would deny it to those whose resources made it possible in the first place. Any claims about “rightful rewards” needing to be given are arguably made to confuse rational thinking on the matter: after all, with appropriate salaries, the researchers are already being rewarded for doing work that interests and stimulates them (unlike a lot of people in the world of work). One can argue that academics increasingly suffer from poorer salaries, working conditions and career stability, but such injustices are not properly remedied by creating other injustices to supposedly level things out.
A policy around what happens with the work done in an academic institution is important. But just as individuals should not be allowed to treat broadly-funded work as their own private property, neither should the institution itself claim complete ownership and consider itself entitled to do what it wishes with the results. It may be acting as a facilitator to allow research to happen, but by seeking to intervene in the process of research, it risks acting as an inhibitor. Consider the following note about “confidential information”:
This is, in short, anything which, if you told people about, might damage the commercial interests of the university. It specifically includes information relating to intellectual property that could be protected, but isn’t protected yet, and which if you told people about couldn’t be protected, and any special know how or clever but non patentable methods of doing things, like trade secrets. It specifically includes all laboratory notebooks, including those stored in an electronic fashion. You must be very careful with this sort of information. This is of particular relevance to something that may be patented, because if other people know about it then it can’t be.
Anyone working in even a moderately paranoid company may have read things like this. But here the context is an environment where knowledge should be shared to benefit and inform the research community. Instead, one gets the impression that the wish to control the propagation of knowledge is so great that some people would rather see the details of “clever but non patentable methods” destroyed than passed on openly for others to benefit from. Indeed, one must question whether “trade secrets” should even feature in a university environment at all.
Of course, the obsession with “laboratory notebooks”, “methods of doing things” and “trade secrets” in such policies betrays the typical origins of such drives for commercialisation: the apparently rich pickings to be had in the medical, pharmaceutical and biosciences domains. It is hardly a coincidence that the University of Oslo intensified its dubious “innovation” efforts under a figurehead with a background (or an interest) in exactly those domains: with a narrow personal focus, an apparent disdain for other disciplines, and a wider commercial atmosphere that gives such a strategy a “dead cert” air of impending fortune, we should perhaps expect no more of such a leadership creature (and his entourage) than the sum of that creature’s instincts and experiences. But then again, we should demand more from such people when their role is to cultivate an institution of learning and not to run a private research organisation at the public’s expense.
The Dirty Word
At no point in the policy guide does the word “monopoly” appear. Given that such a largely technical institution would undoubtedly be performing research where the method of “protection” would involve patents being sought, omitting the word “monopoly” might be that document’s biggest flaw. Heriot-Watt University originates from the merger of two separate institutions, one of which was founded by the well-known pioneer of steam engine technology, James Watt.
Recent discussion of Watt’s contributions to the development and proliferation of such technology has brought up claims that Watt’s own patents – the things that undoubtedly made him wealthy enough to fund an educational organisation – actually held up progress in the domain concerned for a number of decades. While he was clearly generous and sensible enough to spend his money on worthy causes, one can always challenge whether the questionable practices that resulted in the accumulation of such wealth can justify the benefits from the subsequent use of that wealth, particularly if those practices can be regarded as having had negative effects on society and may even have increased wealth inequality.
Questioning philanthropy is not a particularly fashionable thing to do. In capitalist societies, wealthy people are often seen as having made their fortunes in an honest fashion, enjoying a substantial “benefit of the doubt” that this was what really occurred. Criticising a rich person giving money to ostensibly good causes is seen as unkind to both the generous donor and to those receiving the donations. But we should question the means through which the likes of Bill Gates (in our time) and James Watt (in his own time) made their fortunes and the power that such fortunes give to such people to direct money towards causes of their own personal choosing, not to mention the way in which wealthy people also choose to influence public policy and the use of money given by significantly less wealthy individuals – the rest of us – gathered through taxation.
But back to monopolies. Can they really be compatible with the pursuit and sharing of knowledge that academia is supposed to be cultivating? Just as it should be shocking that secretive “confidentiality” rules exist in an academic context, it should appal us that researchers are encouraged to be competitively hostile towards their peers.
Removing the Barriers
It appears that some well-known institutions understand that the unhindered sharing of their work is their primary mission. MIT Media Lab now encourages the licensing of software developed under its roof as Free Software, not requiring special approval or any other kind of institutional stalling that often seems to take place as the “innovation” vultures pick over the things they think should be monetised. Although proprietary licensing still appears to be an option for those within the Media Lab organisation, at least it seems that people wanting to follow their principles and make their work available as Free Software can do so without being made to feel bad about it.
As an academic institution, we believe that in many cases we can achieve greater impact by sharing our work.
So says the director of the MIT Media Lab. It says a lot about the times we live in that this needs to be said at all. Free Software licensing is, as a mechanism to encourage sharing, a natural choice for software, but we should also expect similar measures to be adopted for other kinds of works. Papers and articles should at the very least be made available using content licences that permit sharing, even if the licence variants chosen by authors might seek to prevent the misrepresentation of parts of their work by prohibiting remixes or derived works. (This may sound overly restrictive, but one should consider the way in which scientific articles are routinely misrepresented by climate change and climate science deniers.)
Free Software has encouraged an environment where sharing is safely and routinely done. Licences like the GNU General Public Licence seek to shield recipients from things like patent threats, particularly from organisations which might appear to want to share their works, but which might be tempted to use patents to regulate the further use of those works. Even in realms where patents have traditionally been tolerated, attempts have been made to shield others from the effects of patents, intended or otherwise: the copyleft hardware movement demands that shared hardware designs are patent-free, for instance.
In contrast, one might think that despite the best efforts of the guide’s authors, all the precautions and behavioural self-correction it encourages might just drive the average researcher to distraction. Or, just as likely, to ignoring most of the guidelines and feigning ignorance if challenged by their “innovation”-obsessed superiors. But in the drive to monetise every last ounce of effort there is one statement that is worth remembering:
If intellectual property is not assigned, this can create problems in who is allowed to exploit the work, and again work can go to waste due to a lack of clarity over who owns what.
In other words, in an environment where everybody wants a share of the riches, it helps to have everybody’s interests out in the open so that there may be no surprises later on. Now, it turns out that unclear ownership and overly casual management of contributions is something that has occasionally threatened Free Software projects, resulting in more sophisticated thinking about how contributions are managed.
And it is precisely this combination of Free Software licensing, or something analogous for other domains, with proper contribution and attribution management that will extend safe and efficient sharing of knowledge to the academic realm. Researchers just cannot have the same level of confidence when dealing with the “technology transfer” offices of their institution and of other institutions. Such offices only want to look after themselves while undermining everyone beyond the borders of their own fiefdoms.
Divide and Rule
It is unfortunate that academic institutions feel that they need to “pull their weight” and have to raise funds to make up for diminishing public funding. By turning their backs on the very reason for their own existence and seeking monopolies instead of sharing knowledge, they unwittingly participate in the “divide and rule” tactics blatantly pursued in the political arena: that everyone must fight each other for all that is left once the lion’s share of public funding has been allocated to prestige megaprojects and schemes that just happen to benefit the well-connected, the powerful and the influential people in society the most.
A properly-funded education sector is an essential component of a civilised society, and its institutions should not be obliged to “sharpen their elbows” in the scuffle for funding and thus deprive others of knowledge just to remain viable. Sadly, while austerity politics remains fashionable, it may be up to us in the Free Software realm to remind academia of its obligations and to show that sustainable ways of sharing knowledge exist and function well in the “real world”.
Indeed, it is up to us to keep such institutions honest and to prevent advocates of monopoly-driven “innovation” from being able to insist that their way is the only way, because just as “divide and rule” politics erects barriers between groups in wider society, commercialisation erects barriers that inhibit the essential functions of academic pursuit. And such barriers ultimately risk extinguishing academia altogether, along with all the benefits its institutions bring to society. If my university were not reinforcing such barriers with its “IP” policy, maybe its anniversary as a measure of how far we have progressed from monopolies and intellectual selfishness would have been worth celebrating after all.
Rename This Project
December 13th, 2016
It is interesting how the CPython core developers appear to prefer to spend their time on choosing names for someone else’s fork of Python 2, with some rather expansionist views on trademark applicability, rather than on actually winning over Python 2 users to their cause, which is to make Python 3 the only possible future of the Python language, of course. Never mind that the much broader Python language community still appears to have an overwhelming majority of Python 2 users. And not some kind of wafer-thin, “first past the post”, mandate-exaggerating, Brexit-level majority, but an actual “that doesn’t look so bad but, oh, the scale is logarithmic!” kind of majority.
On the one hand, there are core developers who claim to be very receptive to the idea of other people maintaining Python 2, because the CPython core developers have themselves decided that they cannot bear to look at that code after 2020 and will not issue patches, let alone make new releases, even for the issues that have been worthy of their attention in recent years. Telling people that they are completely officially unsupported applies yet more “stick” and even less “carrot” to those apparently lazy Python 2 users who are still letting the side down by not spending their own time and money on realising someone else’s vision. But apparently, that receptivity extends only so far into the real world.
One often reads and hears claims of “entitlement” when users complain about developers or the output of Free Software projects. Let it be said that I really appreciate what has been delivered over the decades by the Python project: the language has kept programming an interesting activity for me; I still to this day maintain and develop software written in Python; I have even worked to improve the CPython distribution at times, not always successfully. But it should always be remembered that even passive users help to validate projects, and active users contribute in numerous ways to keep projects viable. Indeed, users invest in the viability of such projects. Without such investment, many projects (like many companies) would remain unable to fulfil their potential.
Instead of inflicting burdensome change whose predictable effect is to cause a depreciation of the users’ existing investments and to demand that they make new investments just to mitigate risk and “keep up”, projects should consider their role in developing sustainable solutions that do not become obsolete just because they are not based on the “latest and greatest” of the technology realm’s toys. If someone comes along and picks up this responsibility when it is abdicated by others, then at the very least they should not be given a hard time about it. And at least this “Python 2.8” barely pretends to be anything more than a continuation of things that came before, which is not something that can be said about Python 3 and the adoption/migration fiasco that accompanies it to this day.
On Not Liking Computers
November 21st, 2016
Adam Williamson recently wrote about how he no longer really likes computers. This attracted many responses from people who misunderstood him and decided to dispense career advice, including doses of the usual material about “following one’s passion” or “changing one’s direction” (which usually involves becoming some kind of “global nomad”), which do make me wonder how some of these people actually pay their bills. Do they have a wealthy spouse or wealthy parents or “an inheritance”, or do they just do lucrative contracting for random entities whose nature or identities remain deliberately obscure to avoid thinking about where the money for those jobs really comes from? Particularly the latter would be the “global nomad” way, as far as I can tell.
But anyway, Adam appears to like his job: it’s just that he isn’t interested in technological pursuits outside working hours. At some level, I think we can all sympathise with that. For those of us who have similarly pessimistic views about computing, it’s worth presenting a list of reasons why we might not be so enthusiastic about technology any more, particularly for those of us who also care about the ethical dimensions, not merely whether the technology itself is “any good” or whether it provides a sufficient intellectual challenge. By the way, this is my own list: I don’t know Adam from, well, Adam!
Lack of Actual Progress
One may be getting older and noticing that the same technological problems keep occurring again and again, never getting resolved, while seeing people with no sense of history provoke change for change’s – not progress’s – sake. After a while, or when one gets to a certain age, one expects technology to just work and that people might have figured out how to get things to communicate with each other, or whatever, by building on what went before. But then it usually seems to be the case that some boy genius or other wanted a clear run at solving such problems from scratch, developing lots of flashy features but not the mundane reliability that everybody really wanted.
People then get told that such “advanced” technology is necessarily complicated. Whereas once upon a time, you could pick up a telephone, dial a number, have someone answer, and conduct a half-decent conversation, now you have to make sure that the equipment is all connected up properly, that all the configurations are correct, that the Internet provider isn’t short-changing you or trying to suppress your network traffic. And then you might dial and not get through, or you might have the call mysteriously cut out, or the audio quality might be like interviewing a gang of squabbling squirrels speaking from the bottom of a dustbin/trashcan.
Depreciating Qualifications
One may be seeing a profession that requires a fair amount of educational investment – which, thanks to inept/corrupt politicians, also means a fair amount of financial investment – become devalued to the point that its practitioners are regarded as interchangeable commodities who can be coerced into working for as little as possible. So much for the “knowledge economy” when its practitioners risk ending up earning less than people doing so-called “menial” work who didn’t need to go through a thorough higher education or keep up an ongoing process of self-improvement to remain “relevant”. (Not that there’s anything wrong with “menial” work: without people doing unfashionable jobs, everything would grind to a halt very quickly, whereas quite a few things I’ve done might as well not exist, so little difference they made to anything.)
Now we get told that programming really will be the domain of “artificial intelligence” this time around. That instead of humans writing code, “high priests” will merely direct computers to write the software they need. Of course, such stuff sounds great in Wired magazine and rather amusing to anyone with any actual experience of software projects. Unfortunately, politicians (and other “thought leaders”) read such things one day and then slash away at budgets the next. And in a decade’s time, we’ll be suffering the same “debate” about a lack of “engineering talent” with the same “insights” from the usual gaggle of patent lobbyists and vested interests.
Neoliberal Fantasy Economics
One may have encountered the “internship” culture where as many people as possible try to get programmers and others in the industry to work for nothing, making them feel as if they need to do so in order to prove their worth for a hypothetical employment position or to demonstrate that they are truly committed to some corporate-aligned goal. One reads or hears people advocating involvement in “open source” not to uphold the four freedoms (to use, share, modify and distribute software), but instead to persuade others to “get on the radar” of an employer whose code has been licensed as Free Software (or something pretending to be so) largely to get people to work for them for free.
Now, I do like the idea of employers getting to know potential employees by interacting in a Free Software project, but it should really only occur when the potential employee is already doing something they want to do because it interests them and is in their interests. And no-one should be persuaded into doing work for free on the vague understanding that they might get hired for doing so.
The Expendable Volunteers
One may have seen the exploitation of volunteer effort where people are made to feel that they should “step up” for the benefit of something they believe in, often requiring volunteers to sacrifice their own time and money to do such free work, and often seeing those volunteers being encouraged to give money directly to the cause, as if all their other efforts were not substantial contributions in themselves. While striving to make a difference around the edges of their own lives, volunteers are often working in opposition to well-resourced organisations whose employees have the luxury of countering such volunteer efforts on a full-time basis and with a nice salary. Those people can go home in the evenings and at weekends and tune it all out if they want to.
No wonder volunteers burn out or decide that they just don’t have time or aren’t sufficiently motivated any more. The sad thing is that some organisations ignore this phenomenon because there are plenty of new volunteers wanting to “get active” and “be visible”, perhaps as a way of marketing themselves. Then again, some communities are content to alienate existing users if they can instead attract the mythical “10x” influx of new users to take their place, so we shouldn’t really be surprised, I suppose.
Blame the Powerless
One may be exposed to the culture that if you care about injustices or wrongs then bad or unfortunate situations are your responsibility even if you had nothing to do with their creation. This culture pervades society and allows the powerful to do what they like, to then make everyone else feel bad about the consequences, and to virtually force people to just accept the results if they don’t have the energy at the end of a busy day to do the legwork of bringing people to account.
So, those of us with any kind of conscience at all might already be supporting people trying to do the right thing like helping others, holding people to account, protecting the vulnerable, and so on. But at the same time, we aren’t short of people – particularly in the media and in politics – telling us how bad things are, with an air of expectation that we might take responsibility for something supposedly done on our behalf that has had grave consequences. (The invasion and bombing of foreign lands is one depressingly recurring example.) Sadly, the feeling of powerlessness many people have, as the powerful go round doing what they like regardless, is exploited by the usual cynical “divide and rule” tactics of other powerful people who merely see the opportunities in the misuse of power and the misery it causes. And so, selfishness and tribalism proliferate, demotivating anyone wanting the world to become a better place.
Reversal of Liberties
One may have had the realisation that technology is no longer merely about creating opportunities or making things easier, but is increasingly about controlling and monitoring people and making things complicated and difficult. That sustainability is sacrificed so that companies can cultivate recurring and rich profit opportunities by making people dependent on obsolete products that must be replaced regularly. And that technology exacerbates societal ills rather than helping to eradicate them.
We have the modern Web whose average site wants to “dial out” to a cast of recurring players – tracking sites, content distribution networks (providing advertising more often than not), font resources, image resources, script resources – all of which contribute to making the “signal-to-noise” ratio of the delivered content smaller and smaller all the time. Where everything has to maintain a channel of communication to random servers to constantly update them about what the user is doing, where they spent most of their time, what they looked at and what they clicked on. All of this requiring hundreds of megabytes of program code and data, burning up CPU time, wasting energy, making computers slow and steadily obsolete, forcing people to throw things away and to buy more things to throw away soon enough.
We have the “app” ecosystem experience, with restrictions on access, competition and interoperability, with arbitrarily-curated content: the walled gardens that the likes of Apple and Microsoft failed to impose on everybody at the dawn of the “consumer Internet” but do so now under the pretences of convenience and safety. We have social networking empires that serve fake news to each person’s little echo chamber, whipping up bubbles of hate and distracting people from what is really going on in the world and what should really matter. We have “cloud” services that often offer mediocre user experiences but which offer access from “any device”, with users opting in to both the convenience of being able to get their messages or files from their phone and the surveillance built into such services for commercial and governmental exploitation.
We have planned obsolescence designed into software and hardware, with customers obliged to buy new products to keep doing the things they want to do with those products and to keep it a relatively secure experience. And we have dodgy batteries sealed into devices, with the obligation apparently falling on the customers themselves to look after their own safety and – when the product fails – the impact of that product on the environment. By burdening the hapless user of technology with so many caveats that their life becomes dominated by them, those things become a form of tyranny, too.
Finding Meaning
Many people need to find meaning in their work and to feel that their work aligns with their own priorities. Some people might be able to do work that is unchallenging or uninteresting and then pursue their interests and goals in their own time, but this may be discouraging and demotivating over the longer term. When people’s work is not orthogonal to their own beliefs and interests but instead actively undermines them, the result is counterproductive and even damaging to those beliefs and interests and to others who share them.
For example, developing proprietary software or services in a full-time job, although potentially intellectually challenging, is likely to undermine any realistic level of commitment in one’s own free time to Free Software that does the same thing. Some people may prioritise a stimulating job over the things they believe in, feeling that their work still benefits others in a different way. Others may feel that they are betraying Free Software users by making people reliant on proprietary software and causing interoperability problems when those proprietary software users start assuming that everything should revolve around them, their tools, their data, and their expectations.
Although Adam wasn’t framing this shift in perspectives in terms of his job or career, it might have an impact on some people in that regard. I sometimes think of the interactions between my personal priorities and my career. Indeed, the way that Adam can seemingly stash his technological pursuits within the confines of his day job, while leaving the rest of his time for other things, was some kind of vision that I once had for studying and practising computer science. I think he is rather lucky in that his employer’s interests and his own are aligned sufficiently for him to be able to consider his workplace a venue for furthering those interests, doing so sufficiently to not need to try and make up the difference at home.
We live in an era of computational abundance and yet so much of that abundance is applied ineffectively and inappropriately. I wish I had a concise solution to the complicated equation involving technology and its effects on our quality of life, if not for the application of technology in society in general, then at least for individuals, and not least for myself. Maybe a future article needs to consider what we should expect from technology, as its application spreads ever wider, such that the technology we use and experience upholds our rights and expectations as human beings instead of undermining and marginalising them.
It’s not hard to see how even those who were once enthusiastic about computers can end up resenting them and disliking what they have become.
Defending the 99%
October 24th, 2016
In the context of a fairly recent discussion of Free Software licence enforcement on the Linux Kernel Summit mailing list, where Matthew Garrett defended the right of users to enjoy the four freedoms upheld by the GPL, but where Linus Torvalds and others insisted that upstream corporate contributions are more important and that it doesn’t matter if the users get to see the source code, Jonas Öberg made the remarkable claim that…
“It’s almost as if Matthew is talking about benefits for the 1% whereas Linus is aiming for the benefit of the 99%.”
So, if we are to understand this correctly, a highly-privileged and famous software developer, whose position on the “tivoization” of hardware was that users shouldn’t expect to have any control over the software running on their purchases, is now seemingly echoing the sentiments of a billionaire monopolist who once said that users didn’t need to see the source code of the programs they use. That particular monopolist stated that the developers of his company’s software would take care of everything and that the users would “rely on us” because the mere notion of anybody else interacting with the source code was apparently “the opposite of what’s supposed to go on”.
Here, this famous software developer’s message is that corporations may in future enrich his and his colleagues’ work so that a device purchaser may, at some unspecified time in the future, get to enjoy a properly-maintained version of the Linux kernel running inside a purchase of theirs. All the purchaser needs to do is to stop agitating for the rights upheld by the four freedoms and instead effectively rely on them to look after everything. (Where “them” would be the upstream kernel development community incorporating supposedly-cooperative corporate representatives.)
Now, note once again that such kernel code would only appear in some future product, not in the now-obsolete or broken product that the purchaser currently has. So far, the purchaser may be without any proper insight into that product – apart from the dubious consolation of knowing that the vendor likes Linux enough to have embedded it in the product – and they may well be left without any control over what software the product actually ends up running. So much for relying on “them” to look after the pressing present-day needs of users.
And even with any mythical future product unboxed and powered by a more official form of Linux, the message from the vendor may very well be that at no point should the purchaser ever feel entitled to look inside the device at the code, to try and touch it, to modify it, improve or fix it, and they should absolutely not try and use that device as a way of learning about computing, just as the famous developer and his colleagues were able to do when they got their start in the industry. So much for relying on “them” to look after the future needs of users.
(And let us not even consider that a bunch of other code delivered in a product may end up violating other projects’ licences because those projects did not realise that they had to “make friends” with the same group of dysfunctional corporations.)
Somehow, I rather feel that Matthew Garrett is the one with more of an understanding of what it is like to be among the 99%: where you buy something that could potentially be insecure junk as soon as it is unboxed, where the vendor might even arrogantly declare that the licensing does not apply to them. And he certainly has an understanding of what the 99% actually want: to be able to do something about such things right now, rather than to be at the mercy of lazy, greedy and/or predatory corporate practices; to finally get the product with all the features you thought your money had managed to buy you in the first place.
All of this ground-level familiarity seems very much in contrast to that of some other people who presumably only “hear” via second- or third-hand accounts what the average user or purchaser supposedly experiences, whose privilege and connections will probably get “them” what they want or need without any trouble at all. Let us say that in this dispute Matthew Garrett is not the one suffering from what might be regarded as “benevolent dictator syndrome”.
The Misrepresentation of Others
And one thing Jonas managed to get taken in by was the despicable and continued misrepresentation of organisations like the Software Freedom Conservancy, their staff, and their activities. Despite the public record showing otherwise, certain participants in the discussion were only too happy to perpetuate the myth of such organisations being litigious, and to belittle those organisations’ work, in order to justify their own hostile and abusive tone towards decent, helpful and good people.
No-one has ever really been forced to choose between cooperation, encouragement, community-building and the pursuit of enforcement. Indeed, the organisations pursuing responsible enforcement strategies, in reminding people of their responsibilities, are actually encouraging companies to honour licences and to respect the people who chose such licences for their works. The aim is ultimately to integrate today’s licence violators into the community of tomorrow as honest, respectable and respectful participants.
Community-building can therefore occur even when pointing out to people what they have been doing wrong. But without any substance, licences would provide only limited powers in persuading companies to do the right thing. And the substance of licences is rooted in their legal standing, meaning that occasionally a licence-violating entity might need to be reminded that its behaviour may be scrutinised in a legal forum and that the company involved may experience negative financial and commercial effects as a result.
Reminding others that licences have substance and requiring others to observe such licences is not “force”, at least not the kind of illegitimate force that is insinuated by various factions who prefer the current situation of widespread licence violation and lip service to the Linux brand. It is the instrument through which the authors of Free Software works can be heard and respected when all other reasonable channels of redress have been shut down. And, crucially, it is the instrument through which the needs of the end-user, the purchaser, the people who do no development at all – indeed, all of the people who paid good money and who actually funded the product making use of the Free Software at risk, whose money should also be funding the development of that software – can be heard and respected, too.
I always thought that “the 1%” were the people who had “got theirs” already, the privileged, the people who promise the betterment of everybody else’s lives through things like trickle-down economics, the people who want everything to go through them so that they get to say who benefits or not. If pandering to well-financed entities for some hypothetical future pay-off while they conduct business as usual at everybody else’s expense is “for the benefit of the 99%”, then it seems to me that Jonas has “the 1%” and “the 99%” the wrong way round.
EOMA68: The Campaign (and some remarks about recurring criticisms)
August 18th, 2016
I have previously written about the EOMA68 initiative and its objective of making small, modular computing cards that conform to a well-defined standard which can be plugged into certain kinds of device – a laptop or desktop computer, or maybe even a tablet or smartphone – providing a way of supplying such devices with the computing power they all need. This would also offer a convenient way of taking your computing environment with you, using it in the kind of device that makes most sense at the time you need to use it, since the computer card is the actual computer and all you are doing is putting it in a different box: switch off, unplug the card, plug it into something else, switch that on, and your “computer” has effectively taken on a different form.
(This “take your desktop with you” by actually taking your computer with you is fundamentally different to various dubious “cloud synchronisation” services that would claim to offer something similar: “now you can synchronise your tablet with your PC!”, or whatever. Such services tend to operate rather imperfectly – storing your files on some remote site – and, of course, expose you to surveillance as well as various inconveniences.)
Well, a crowd-funding campaign has since been launched to fund a number of EOMA68-related products, with an opportunity for those interested to acquire the first round of computer cards and compatible devices, those devices being a “micro-desktop” that offers a simple “mini PC” solution, together with a somewhat radically designed and produced laptop (or netbook, perhaps) that emphasises accessible construction methods (home 3D printing) and alternative material usage (“eco-friendly plywood”). In the interests of transparency, I will admit that I have pledged for a card and the micro-desktop, albeit via my brother for various personal reasons that also kept me from actually writing about this here until now.
Of course, EOMA68 is about more than just conveniently taking your computer with you because it is now small enough to fit in a wallet. Even if you do not intend to regularly move your computer card from device to device, it emphasises various sustainability issues such as power consumption (deliberately kept low), long-term support and matters of freedom (the selection of CPUs that completely support Free Software and do not introduce surveillance backdoors), and device longevity (that when the user wants to upgrade, they may easily use the card in something else that might benefit from it).
This is not modularity to prove some irrelevant hypothesis. It is modularity that delivers concrete benefits to users (that they aren’t forced to keep replacing products engineered for obsolescence), to designers and manufacturers (that they can rely on the standard to provide computing functionality and just focus on their own speciality to differentiate their product in more interesting ways), and to society and the environment (by reducing needless consumption and waste caused by the upgrade treadmill promoted by the technology industries over the last few decades).
One might think that such benefits might be received with enthusiasm. Sadly, it says a lot about today’s “needy consumer” culture that instead of welcoming another choice, some would rather spend their time criticising it, often to the point that one might wonder about their motivations for doing so. Below, I present some common criticisms and some of my own remarks.
(If you don’t want to read about “first world” objections – typically about “new” and “fast” – and are already satisfied by the decisions made regarding more understandable concerns – typically involving corporate behaviour and licensing – just skip to the last section.)
“The A20 is so old and slow! What’s the point?”
The Allwinner A20 has been around for a while. Indeed, its predecessor – the A10 – was the basis of initial iterations of the computer card several years ago. Now, the amount of engineering needed to upgrade the existing prototypes from the A10 to the A20 is minimal, at least in comparison to adopting another CPU (which would probably require a redesign of the circuit board for the card). And hardware prototyping is expensive, especially when unnecessary design changes have to be made, when they don’t always work out as expected, and when extra rounds of prototypes are then required to get the job done. For an initiative with a limited budget, the A20 makes a lot of sense because it means changing as little as possible, benefiting from the functionality upgrade and keeping the risks low.
Obviously, there are faster processors available now, but as the processor selection criteria illustrate, if you cannot support them properly with Free Software and must rely on binary blobs which potentially violate the GPL, it would be better to stick to a more sustainable choice (because that is what adherence to Free Software is largely about) even if that means accepting reduced performance. In any case, at some point, other cards with different processors will come along and offer faster performance. Alternatively, someone will make a dual-slot product that takes two cards (or even a multi-slot product that provides a kind of mini-cluster), and then with software that is hopefully better-equipped for concurrency, there will be ways of improving performance other than finding faster processors and hoping that they meet all the practical and ethical criteria.
“The RasPi 3…”
Lots of people love the Raspberry Pi, it would seem. The original models delivered a cheap, adequate desktop computer for a sum that was competitive even with some microcontroller-based single-board computers that are aimed at electronics projects and not desktop computing, although people probably overlook rivals like the BeagleBoard and variants that would probably have occupied a similar price point even if the Raspberry Pi had never existed. Indeed, the BeagleBone Black resides in the same pricing territory now, as do many other products. It is interesting that both product families are backed by certain semiconductor manufacturers, and the Raspberry Pi appears to benefit from privileged access to Broadcom products and employees that is denied to others seeking to make solutions using the same SoC (system on a chip).
Now, the first Raspberry Pi models were not entirely at the performance level of contemporary desktop solutions, especially by having only 256MB or 512MB RAM, meaning that any desktop experience had to be optimised for the device. Furthermore, they employed an ARM architecture variant that was not fully supported by mainstream GNU/Linux distributions, in particular the one favoured by the initiative: Debian. So a variant of Debian has been concocted to support the devices – Raspbian – and despite the Raspberry Pi 2 being the first device in the series to employ an architecture variant that is fully supported by Debian, Raspbian is still recommended for it and its successor.
Anyway, the Raspberry Pi 3 having 1GB RAM and being several times faster than the earliest models might be more competitive with today’s desktop solutions, at least for modestly-priced products, and perhaps it is faster than products using the A20. But just like the fascination with MHz and GHz until Intel found that it couldn’t rely on routinely turning up the clock speed on its CPUs, or everybody emphasising the number of megapixels their digital camera had until they discovered image noise, such number games ignore other factors: the closed source hardware of the Raspberry Pi boards, the opaque architecture of the Broadcom SoCs with a closed source operating system running on the GPU (graphics processing unit) that has control over the ARM CPU running the user’s programs, the impracticality of repurposing the device for things like laptops (despite people attempting to repurpose it for such things, anyway), and the organisation behind the device seemingly being happy to promote a variety of unethical proprietary software from a variety of unethical vendors who clearly want a piece of the action.
And finally, with all the fuss about how much faster the opaque Broadcom product is than the A20, the Raspberry Pi 3 has half the RAM of the EOMA68-A20 computer card. For certain applications, more RAM is going to be much more helpful than more cores or “64-bit!”, which makes us wonder why the Raspberry Pi 3 doesn’t support 4GB RAM or more. (Indeed, the current trend of 64-bit ARM products offering memory quantities addressable by 32-bit CPUs seems to have missed the motivation for x86 finally going 64-bit back in the early 21st century, which was largely about efficiently supporting the increasingly necessary amounts of RAM required for certain computing tasks, with Intel’s name for x86-64 actually being at one time “Extended Memory 64 Technology”. Even the DEC Alpha, back in the 1990s, which could be regarded as heralding the 64-bit age in mainstream computing, and which arguably relied on the increased performance provided by a 64-bit architecture for its success, shipped in products supporting quantities of memory beyond the reach of 32-bit addressing, at a time when memory was obviously a lot more expensive than it is now.)
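The addressing arithmetic behind that motivation is easy to check for oneself. As a purely illustrative sketch (not from any vendor documentation), a flat pointer of a given width can reach at most two to the power of that width in bytes, which is where the familiar 4GB ceiling for 32-bit systems comes from:

```python
# Illustrative sketch: why RAM capacity, not arithmetic width alone,
# motivated the move to 64-bit addressing. A flat pointer of n bits
# can address at most 2**n bytes.

GIB = 2 ** 30  # bytes in one gibibyte

def addressable_bytes(pointer_bits):
    """Maximum bytes reachable through a flat pointer of the given width."""
    return 2 ** pointer_bits

# A 32-bit pointer tops out at 4 GiB -- the wall that x86-64
# (once branded "Extended Memory 64 Technology") was meant to remove.
print(addressable_bytes(32) // GIB)  # 4 (GiB)
print(addressable_bytes(64) // GIB)  # 17179869184 (GiB), i.e. 16 EiB
```

So a 64-bit ARM board shipping with only 1GB of RAM is using a fraction of one four-billionth of its theoretical address space, which rather underlines the point being made above.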
“But the RasPi Zero!”
Sure, who can argue with a $5 (or £4, or whatever) computer with 512MB RAM and a 1GHz CPU that might even be a usable size and shape for some level of repurposing for the kinds of things that EOMA68 aims at: putting a general purpose computer into a wide range of devices? Except that the Raspberry Pi Zero has had persistent availability issues, even ignoring the free give-away with a magazine that had people scuffling in newsagents to buy up all the available copies so they could resell them online at several times the retail price. And it could be perceived as yet another inventory-dumping exercise by Broadcom, given that it uses the same SoC as the original Raspberry Pi.
Arguably, the Raspberry Pi Zero is a more ambiguous follow-on from the Raspberry Pi Compute Module that obviously was (and maybe still is) intended for building into other products. Some people may wonder why the Compute Module wasn’t the same success as the earlier products in the Raspberry Pi line-up. Maybe its lack of success was because any organisation thinking of putting the Compute Module (or, these days, the Pi Zero) in a product to sell to other people is relying on a single vendor. And with that vendor itself relying on a single vendor with whom it currently has a special relationship, a chain of single vendor reliance is formed.
Any organisation wanting to build one of these boards into their product now has to have rather a lot of confidence that the chain will never weaken or break and that at no point will either of those vendors decide that they would rather like to compete in that particular market themselves and exploit their obvious dominance in doing so. And they have to be sure that the Raspberry Pi Foundation doesn’t suddenly decide to get out of the hardware business altogether and pursue those educational objectives that they once emphasised so much instead, or that the Foundation and its manufacturing partners don’t decide for some reason to cease doing business, perhaps selectively, with people building products around their boards.
“Allwinner are GPL violators and will never get my money!”
Sadly, Allwinner have repeatedly delivered GPL-licensed software without providing the corresponding source code, and this practice may even persist to this day. One response to this has referred to the internal politics and organisation of Allwinner, suggesting that some factions try to do the right thing while others act in an unenlightened, licence-violating fashion.
Let it be known that I am no fan of the argument that companies have lots of departments and that, just because some of them do bad things, the whole company should not be punished. To this day, Sony does not get my business because of the unsatisfactorily-resolved rootkit scandal, and I am hardly alone in taking this position. (It gets brought up regularly on a photography site I tend to visit, where tensions often run high between Sony fanatics and those who use cameras from other manufacturers, but to be fair, Sony also has other ways of irritating its customers.) And while people like to claim that Microsoft has changed and is nice to Free Software, even to the point where people refusing to accept this assertion get criticised, it is pretty difficult to accept claims of change and improvement when the company pulls in significant sums from shaking down device manufacturers using dubious patent claims on Android and Linux: systems it contributed nothing to. And no, nobody will have been reading any patents to figure out how to implement parts of Android or Linux, let alone any belonging to some company or other that Microsoft may have “vacuumed up” in an acquisition spree.
So, should the argument be discarded here as well? Even though I am not too happy about Allwinner’s behaviour, there is the consideration that, as the saying goes, “beggars cannot be choosers”. When very few CPUs exist that meet the criteria desirable for the initiative, some kind of nasty compromise may have to be made. Personally, I would have preferred to have had the option of the Ingenic jz4775 card that was close to being offered in the campaign, although I have seen signs of Ingenic doing binary-only code drops on certain code-sharing sites, and so they do not necessarily have clean hands, either. They are, however, actually making the source code for such binaries available elsewhere, if you know where to look. Thus it is most likely that they do not really understand the precise obligations of the software licences concerned, as opposed to deliberately withholding the source code.
But it may well be that, unlike certain European, American and Japanese companies, Chinese companies do not necessarily operate (or are regulated) on the familiar principles of corporate accountability. Under the familiar regime, we can judge a company on any wrongdoing: any executives unaware of such wrongdoing have been negligent or ineffective at building the proper processes of supervision and have thus permitted an unethical corporate culture, while any executives aware of such wrongdoing have arguably cultivated an unethical corporate culture themselves. That does not excuse unethical behaviour, but it might at least entertain the idea that by supporting an ethical faction within a company, the unethical factions may be weakened or even eliminated. If that really is how the game is played, of course, and is not just an excuse for finger-pointing where nobody is held to account for anything.
But companies elsewhere should certainly not be looking for a weakening of their accountability structures so as to maintain a similarly convenient situation of corporate hypocrisy: if Sony BMG does something unethical, Sony Imaging should take the bad with the good when they share and exploit the Sony brand; people cannot have things both ways. And Chinese companies should comply with international governance principles, if only to reassure their investors that nasty surprises (and liabilities) do not lie in wait because parts of such businesses were poorly supervised and not held accountable for any unethical activities taking place.
It is up to everyone to make their own decision about this. The policy of the campaign is that the A20 can be supported by Free Software without needing any proprietary software, does not rely on any Allwinner-engineered, licence-violating software (which might be perceived as a good thing), and is merely the first step into a wider endeavour that could be conveniently undertaken with the limited resources available at the time. Later computer cards may ignore Allwinner entirely, especially if the company does not clean up its act, but such cards may never get made if the campaign fails and the wider endeavour never even begins in earnest.
(And I sincerely hope that those who are apparently so outraged by GPL violations actually support organisations seeking to educate and correct companies who commit such violations.)
“You could buy a top-end laptop for that price!”
Sure you could. But this isn’t about a crowd-funding campaign trying to magically compete with an optimised production process that turns out millions of units every year backed by a multi-billion-dollar corporation. It is about highlighting the possibilities of more scalable (down to the economically-viable manufacture of a single unit), more sustainable device design and construction. And by the way, that laptop you were talking about won’t be upgradeable, so when you tire of its performance or if the battery loses its capacity, I suppose you will be disposing of it (hopefully responsibly) and presumably buying something similarly new and shiny by today’s measures.
Meanwhile, with EOMA68, the computing part of the supposedly overpriced laptop will be upgradeable, and with sensible device design the battery (and maybe other things) will be replaceable, too. Over time, EOMA68 solutions should be competitive on price anyway, because larger numbers of them will be produced, but unlike traditional products, the increased usable lifespans of EOMA68 solutions will also offer longer-term savings to their purchasers.
“You could just buy a used laptop instead!”
Sure you could. At some point you will need to be buying a very old laptop just to have a CPU without a surveillance engine and offering some level of upgrade potential, although the specification might be disappointing to you. Even worse, things don’t last forever, particularly batteries and certain kinds of electronic components. Replacing those things may well be a challenge, and although it is worthwhile to make sure things get reused rather than immediately discarded, you can’t rely on picking up a particular product in the second-hand market forever. Relying on sourcing second-hand items is very much an approach suited to limited edition products, whereas the EOMA68 initiative is meant to be concerned with reliably producing widely-available products.
“Why pay more for ideological purity?”
Firstly, words like “ideology”, “religion”, “church”, and so on, might be useful terms for trolls seeking to poison and polarise any discussion, but does anyone not see that expecting suspiciously cheap, increasingly capable products to be delivered in an almost conveyor belt fashion is itself subscribing to an ideology? It is one that mandates that resources be procured, processed and assembled at the minimum cost, preferably without knowing too much about the human rights abuses at each step; where everybody involved is threatened that at any time their role may be taken over by someone offering the same thing for less; and where a culture of exploitation towards those doing the work grows, perpetuating increasing wealth inequality, because those offering the services in question will just lean harder on the workers to meet their cost target (while they skim off “their share” for having facilitated the deal). Meanwhile, no-one buying the product wants to know “how the sausage is made”. That sounds like an ideology to me: one of neoliberalism combined with feigned ignorance of the damage it does.
Anyway, people pay for more sustainable, more ethical products all the time. While the wilfully ignorant may jeer that they could just buy what they regard as the same thing for less (usually being unaware of factors like quality, never mind how these things get made), more sensible people see that the extra they pay provides the basis for a fairer, better society and higher-quality goods.
“There’s no point to such modularity!”
People argue this alongside the assertion that systems are easy to upgrade and that they can independently upgrade the RAM and CPU in their desktop tower system or whatever. They usually start off by talking about laptops, though clearly not the kind of “welded shut” laptops that they or maybe others would apparently prefer to buy (see above). But systems are getting harder to upgrade, particularly portable systems like laptops, tablets and smartphones (with the Fairphone 2 being a rare exception in being something that might be upgradeable), and even upgradeable systems are not typically upgraded by most end-users: they may only manage to do so by enlisting the help of more knowledgeable relatives and friends.
I use a 32-bit system that is over 11 years old. It could have more RAM, and I could do the job of upgrading it, but guess how much I would be upgrading it to: 2GB, which is as much as is supported by the two prototyped 32-bit architecture EOMA68 computer card designs (A20 and jz4775). Only certain 32-bit systems actually support more RAM, mostly because doing so requires relatively exotic architectural features, such as Intel’s Physical Address Extension (PAE), that a lot of software doesn’t support. As for the CPU, there is no sensible upgrade path even if I were sure that I could remove the CPU without causing damage to it or the board. Now, 64-bit systems might offer more options, and in upgradeable desktop systems more RAM might be added, but it still relies on what the chipset was designed to support. Some chipsets may limit upgrades based on either manufacturer pessimism (no-one will be able to afford larger amounts in the near future) or manufacturer cynicism (no-one will upgrade to our next product if they can keep adding more RAM).
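To make the address-width arithmetic concrete, here is a small Python sketch (the function name is mine, purely for illustration) of why plain 32-bit systems top out around 4GB, and how extensions like PAE and full 64-bit addressing raise that ceiling:

```python
def addressable_bytes(address_bits: int) -> int:
    """Maximum directly addressable memory for a given address width."""
    return 2 ** address_bits

GIB = 2 ** 30  # one gibibyte

# Plain 32-bit addressing: 4 GiB in total, shared between RAM and
# device mappings, which is why usable RAM often tops out even lower.
print(addressable_bytes(32) // GIB)   # 4

# PAE widens physical addresses to 36 bits, permitting up to 64 GiB of
# RAM, but each process still sees only a 32-bit virtual address space,
# and not all software copes with PAE-enabled kernels.
print(addressable_bytes(36) // GIB)   # 64

# x86-64 implementations commonly provide 48-bit addressing, lifting
# the limit far beyond any plausible amount of installed RAM.
print(addressable_bytes(48) // GIB)   # 262144
```

The point is simply that each extra address bit doubles the reachable memory, so the jump from 32-bit to 64-bit architectures was about capacity as much as raw speed.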
EOMA68 makes a trade-off in order to support the upgrading of devices in a way that should be accessible to people who are not experts: no-one should be dealing with circuit boards and memory modules. People who think hardware engineering has nothing to do with compromises should get out of their armchair, join one of the big corporations already doing hardware, and show them how it is done, because I am sure those companies would appreciate such market-dominating insight.
Back to the Campaign
But really, the criticisms are not the things to focus on here. Maybe EOMA68 was interesting to you and then you read one of these criticisms somewhere and started to wonder about whether it is a good idea to support the initiative after all. Now, at least you have another perspective on them, albeit from someone who actually believes that EOMA68 provides an interesting and credible way forward for sustainable technology products.
Certainly, this campaign is not for everyone. Above all else it is crowd-funding: you are pledging for rewards, not buying things, even though the aim is to actually manufacture and ship real products to those who have pledged for them. Some crowd-funding exercises never deliver anything because they underestimate the difficulties of doing so, leaving a horde of angry backers with nothing to show for their money. I cannot make any guarantees here, but given that prototypes have been made over the last few years, that videos have been produced with a charming informality that would surely leave no-one seriously believing that “the whole thing was just rendered” (which tends to happen a lot with other campaigns), and given the initiative founder’s stubborn refusal to give up, I have a lot of confidence in him to make good on his plans.
(A lot of campaigns underestimate the logistics and, having managed to deliver a complicated technological product, fail to manage the apparently simple matter of “postage”, infuriating their backers by being unable to get packages sent to all the different countries involved. My impression is that logistics expertise is what Crowd Supply brings to the table, and it really surprises me that established freight and logistics companies aren’t dipping their toes in the crowd-funding market themselves, either by running their own services or taking ownership stakes and integrating their services into such businesses.)
Personally, I think that $65 for a computer card that has more RAM than most single-board computers is a reasonable price, but I can understand that some of the other rewards seem a bit more expensive than one might have hoped. But these are effectively “limited edition” prices, and the aim of the exercise is not merely to make some things, get them to the clique of backers, and then never do anything like this ever again. Rather, the aim is to demonstrate that such products can be delivered, develop a market for them where the quantities involved will be greater, and thus be able to increase the competitiveness of the pricing, iterating on this hopefully successful formula. People are backing a standard and a concept, with the benefit of actually getting some hardware in return.
Interestingly, one priority of the campaign has been to seek the FSF’s “Respects Your Freedom” (RYF) endorsement. There is already plenty of hardware that employs proprietary software at some level, leaving the user to merely wonder what some “binary blob” actually does. Here, for one of the software distributions offered for the computer card, all of the software used on the card – together with the policies of the GNU/Linux distribution concerned, a surprisingly awkward obstacle – will seek to meet the FSF’s criteria. Thus, the “Libre Tea” card will hopefully be one of the first general purpose computing solutions to actually be designed for RYF certification and to obtain it, too.
The campaign runs until August 26th and has over a thousand pledges. If nothing else, go and take a look at the details and the updates, with the latter providing lots of background including video evidence of how the software offerings have evolved over the course of the campaign. And even if it’s not for you, maybe people you know might appreciate hearing about it, even if only to follow the action and to see how crowd-funding campaigns are done.