Paul Boddie's Free Software-related blog


Archive for the ‘Python’ Category

Concise Attribute Initialisation in Lichen… and Python?

Monday, January 22nd, 2018

In my review of 2017, I mentioned a project of mine to make a Python-like language called Lichen that is more amenable to compile-time analysis than Python is, while still having a feature set I might actually be able to use in “real” programs one day. There are a lot of different “moving parts” in the Lichen toolchain, and being preoccupied with various other projects and activities, I haven’t been able to get back into working on it properly in the last few months.

Recently, as I found myself writing Python code for another of my projects, I got to wondering about something in Python that can occur a lot: the initialisation of instance attributes. Here is a classic example:

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

# For illustration, here is how the class is used...
p = Point(640, 512)
print p.x, p.y # 640 512

In this example, having to assign the parameter values to the instance attributes is not much of a hardship. But in more verbose initialisation methods, with more parameters and more attributes involved, writing everything out can be tiresome. Moreover, mistakes can be made, particularly if the interfaces and structures are evolving. Naturally, there is a range of improvements and measures that attempt to alleviate the problem. Here is the most obvious:

class Point:
    def __init__(self, x, y):
        self.x = x; self.y = y

This just puts the same statements on one line, so let us move beyond it to the next attempt:

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

Here, we are actually performing “tuple assignment”, with the parameter values being placed in a tuple whose elements are then assigned to the names in the corresponding positions on the left-hand side of the assignment.

Now, without any Python “magic”, this is probably as far as you can get. The “magic” involves introspection and a feature known as “decorators” (which Lichen doesn’t support) to let us use something like this:

class Point:
    @initialising("x", "y")
    def __init__(self, x, y):
        pass

Here, I am taking inspiration from a collection of actual suggestions and solutions, but none of them look like the above. Indeed, many of them take the approach of initialising attributes using every parameter in the method signature, which isn’t always what you want, although it does seem to be requested every now and again.

Although the above example looks quite nice, the mechanism responsible for performing the attribute assignments will not look as nice; a sketch of one possible implementation appears below. And unless a mode is supported where the names can be omitted, initialising attributes from all parameters (except self) when that is what you want, it is perhaps tiresome to have to write the names out again somewhere else, even more so as strings.
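
Here is that sketch, assuming positional arguments only (one possible mechanism among many, and not especially pretty, which is rather the point):

def initialising(*names):
    def decorator(method):
        def wrapper(self, *args, **kw):
            # Assign each listed attribute from the corresponding
            # positional argument before running the method body.
            for name, value in zip(names, args):
                setattr(self, name, value)
            return method(self, *args, **kw)
        return wrapper
    return decorator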

You will also find people advocating more transparent use of the ** catch-all parameter (also not supported by Lichen), sometimes in response to people worried that writing out lots of assignments is a sign of bad code. This yields solutions like this one:

class Point:
    def __init__(self, **kw):
        for name in ("x", "y"):
            setattr(self, name, kw.get(name))

But keeping named parameters in the signature helps to prevent certain kinds of errors, which is one reason why I don’t intend to support catch-all parameters in Lichen.
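
To illustrate the kind of error involved: with the catch-all version above, a misspelled keyword passes silently, whereas named parameters reject the call immediately.

p = Point(x=640, z=512)   # "z" was meant to be "y"
print p.y                 # None: kw.get("y") quietly supplied a default

# With def __init__(self, x, y), the same call would instead raise a
# TypeError at the point of the mistake.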

But what I wondered is why Python never supported something closer to C++’s initialisation lists. In C++, we might write the code somewhat as follows:

class Point
{
    Number x, y;
public:
    Point(Number x, Number y) : x(x), y(y) {}
};

Here, it is evident that repetition occurs just as in the “magic” Python example, which is something I might want to eliminate. Maybe we would want to have a shorthand for attribute initialisation within the parameter list itself. And then I thought of a possible syntax:

class Point:
    def __init__(self, .x, .y):
        pass

So, any parameter employing a dot before its name would result in the assignment of its value to the instance attribute having the same name. Of course, this wouldn’t support a parameter with one name having its value assigned to an attribute with another name, but I thought it best to stick to the simple cases. “Why not add this to Lichen?” I thought.

And in line with not getting too immersed in the toolchain straight away after such a long break, I decided on some rather simple semantics for this feature: dot-prefixed names would still exist as local names; dot-prefixing would just be a form of shorthand meaning that an assignment would be generated at the very start of the function body. So, the above would really translate to the very first example given at the start of this article or, indeed, the second one, which is equivalent and is reproduced below:

# Lichen-only...                   # Python and Lichen...
class Point:                       class Point:
    def __init__(self, .x, .y):        def __init__(self, x, y):
        pass                               self.x = x; self.y = y

Keeping the sophistication of the feature at an unambitious level, besides letting me slowly familiarise myself again with the code, also helps to deal with potential conflicts with other mechanisms. For example, what if someone wanted to employ a name twice – once dot-prefixed, once unprefixed – like this…?

class Point:
    def __init__(self, .x, .y, x):
        self.intensity = x ** 2

By asserting that the dot-prefixed x is really just x that also initialises the attribute of the same name, we can fall back on the normal rules around parameters and forbid such duplicate names without having to think very hard about temporary names or more exotic mechanisms that might be used to initialise attributes directly. One other thing worth mentioning is that I don’t reserve such parameters exclusively for initialiser methods, so other applications are possible. For example:

class Point:
    def __init__(.x, .y): pass
    def update(.x, .y): pass

Here, I also omit self because Lichen defines it as always being present in methods, anyway. And we could actually make the update method an alias of the initialiser method, too, but let us not get too carried away!

Fortunately, I adopted a parser framework in Lichen, originally written for PyPy, that allows relatively straightforward modification of the language grammar. Conveniently, the grammar changes required for this feature are minimal and I don’t even have to add any extra tokens. That made me wonder whether such a syntax had been suggested for Python at some point or other. Some quick searches haven’t yielded any results, and I can’t be bothered to trawl the different mailing list archives to find mentions of such features. I can easily imagine that such a feature might have been discussed rather early in Python’s lifetime, possibly in the mid-1990s.
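
For a sense of how small such a change can be, consider the pgen-style grammar notation used by CPython 2 and by the PyPy-derived parser framework: permitting an optional dot before a parameter name might amount to amending a single production, roughly as follows (a guess at the shape of the change, not Lichen’s actual grammar):

fpdef: NAME | '(' fplist ')'          # the original parameter production
fpdef: ['.'] NAME | '(' fplist ')'    # amended: an optional '.' prefix

The '.' token already exists in the grammar, which fits with not having to add any extra tokens.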

Arguments for new syntax in Python are often met with arguments against “syntactic sugar”, with such “sugar” introducing more convenient notation or a form of shorthand for particular operations. Over the years, people have argued for more concise ways of referencing instance attributes and class attributes instead of using the almost-special self name (that is rather more special in Lichen). Compound assignments to instance attributes have probably been discussed, too, maybe proposing things like this:

# Compound assignment idea...      # Equivalent assignment...
self.(x, y) = x, y                 self.x, self.y = x, y

In response to such suggestions, people seem to be asked how often they need to write such things, whether it is really such a burden to do so, and whether their programming tools cannot help them write out the conventional assignments semi-automatically instead. Proposed general language constructs may well risk introducing conflicts with other language features in unanticipated ways, and if such constructs only ever get used in certain, rather limited, circumstances then one can justifiably ask whether it is really worth the effort to support them. They will, after all, need people to implement them, test them, maintain them, and keep fixing them long into the future.

As is evident from the discussion of the problem of concise initialisation, Python’s community has grown accustomed to solving simple problems in fairly complicated ways using general mechanisms introduced to support broad classes of functionality. Decorators were introduced into Python as a way of inserting extra code around methods and functions to modify or extend their behaviour, allowing people to tackle such problems by getting that extra code to initialise attributes or to do many other weird, wild and wonderful things. Providing such mechanisms lets the language designers send people elsewhere when those people descend on the designers demanding a quick syntactic fix for a specific problem they might be having.

But it really does surprise me that something as simple as dot-prefixing parameter names never managed to get suggested and quickly introduced into an early version of Python. I did wonder whether other Python-inspired languages might have subconsciously inspired me, but a brief perusal of the Boo, Cobra, Delight and Genie documentation turned up nothing. And so, without any more insight into my inspiration, that is the tale of my first experiment in extending Lichen’s syntax beyond that of Python.

Update

I finally remembered where I had seen the dot-prefixed name notation before. When initialising structures in C, you can explicitly indicate a structure member when specifying a value, and I do this all the time in the code generated for Lichen programs. I even define macros that use this feature. For example:

#define __INTVALUE(VALUE) ((__attr) {.intvalue=((VALUE) << 1) | 1})

So I suppose it shows how long it has been since I had to look at that part of the toolchain! Of course, this is directly initialising a structure member by indicating a value, whereas the Lichen syntax enhancement associates an attribute, which is similar to a member, with a parameter received in a method call. But there are some similarities in purpose, nevertheless.

2017 in Review

Thursday, December 7th, 2017

On Planet Debian there seem to be quite a few regularly-posted articles summarising the work done by various people in Free Software over the month that has most recently passed. I thought it might be useful, personally at least, to review the different things I have been doing over the past year. The difference between this article and many of those others is that the work I describe is not commissioned or generally requested by others, relying instead mainly on my own motivation for it to happen. The rate of progress can vary somewhat as a result.

Learning KiCad

Over the years, I have been playing around with Arduino boards, sensors, displays and things of a similar nature. Although I try to avoid buying more things to play with, sometimes I manage to acquire interesting items regardless, and these aren’t always ready to use with the hardware I have. Last December, I decided to buy a selection of electronics-related items for interfacing and experimentation. Some of these items have yet to be deployed, but others were bought with the firm intention of putting different “spare” pieces of hardware to use, or at least to make them usable in future.

One thing that sits in this category of spare, potentially-usable hardware is a display circuit board that was once part of a desk telephone, featuring a two-line, bitmapped character display, driven by the Hitachi HD44780 LCD controller. It turns out that this hardware is so common and mundane that the Arduino libraries already support it, but the problem for me was being able to interface it to the Arduino. The display board uses a cable with a connector that needs a special kind of socket, and so some research was required to discover the kind of socket involved and how it might be mounted on something else to break the connections out for use with the Arduino.

Fortunately, someone else had done all this research quite some time ago. They had even designed a breakout board to hold such a socket, making it available via the OSH Park board fabricating service. So, to make good on my plan, I ordered the mandatory minimum of three boards, also ordering some connectors from Mouser. When all of these different things arrived, I soldered the socket to the board along with some headers, wired up a circuit, wrote a program to use the LiquidCrystal Arduino library, and to my surprise it more or less worked straight away.

Breakout board for the Molex 52030 connector

Hitachi HD44780 LCD display boards driven by an Arduino

This satisfying experience led me to consider other boards that I might design and get made. Previously, I had only made a board for the Arduino using Fritzing and the Fritzing Fab service, and I had held off looking at other board design solutions, but this experience now encouraged me to look again. After some evaluation of the gEDA tools, I decided that I might as well give KiCad a try, given that it seems to be popular in certain “open source hardware” circles. And after a fair amount of effort familiarising myself with it, with a degree of frustration finding out how to do certain things (and also finding up-to-date documentation), I managed to design my own rather simple board: a breakout board for the Acorn Electron cartridge connector.

Acorn Electron cartridge breakout board (in 3D-printed case section)

In the back of my mind, I have vague plans to do other boards in future, but doing this kind of work can soak up a lot of time and be rather frustrating: you almost have to get into some modified mental state to work efficiently in KiCad. And it isn’t as if I don’t have other things to do. But at least I now know something about what this kind of work involves.

Retro and Embedded Hardware

With the above breakout board in hand, a series of experiments were conducted to see if I could interface various circuits to the Acorn Electron microcomputer. These mostly involved 7400-series logic chips (ICs, integrated circuits) and featured various logic gates and counters. Previously, I had re-purposed an existing ROM cartridge design to break out signals from the computer and make it access a single flash memory chip instead of two ROM chips.

With a dedicated prototyping solution, I was able to explore the implementation of that existing board, determine various aspects of the signal timings that remained rather unclear (despite being successfully handled by the existing board’s logic), and make it possible to consider a dedicated board for a flash memory cartridge. In fact, my brother, David, also wanting to get into board design, later adapted the prototyping cartridge to make such a board.

But this experimentation also encouraged me to tackle some other items in the electronics shipment: the PIC32 microcontrollers that I had acquired because they were MIPS-based chips with somewhat more built-in RAM than the Atmel AVR-based chips used by the average Arduino, and because they could also be used on a breadboard. I hoped that my familiarity with the SoC (system-on-a-chip) in the Ben NanoNote – the Ingenic JZ4720 – might confer some benefits when writing low-level code for the PIC32.

PIC32 on breadboard with Arduino programming circuit (and some LEDs for diagnostic purposes)

I do not need to reproduce an account of my activities here, given that I wrote about the effort involved in getting started with the PIC32 earlier in the year, and subsequently described an unusual application of such a microcontroller that seemed to complement my retrocomputing interests. I have since tried to make that particular piece of work more robust, but deducing the actual behaviour of the hardware has been frustrating, the documentation can be vague when it needs to be accurate, and much of the community discussion is focused on proprietary products and specific software tools rather than techniques. Maybe this will finally push me towards investigating programmable logic solutions in the future.

Compiling a Python-like Language

As things happened, the above hardware activities were actually distractions from something I have been working on for a long time. But at this point in the article, they can serve as a diversion from all the things that seem to involve hardware or low-level software development. Many years ago, I started writing software in Python. Over the years since, alternative implementations of the Python language (the main implementation being CPython) have emerged and seen some use, some continuing to be developed to this day. But around fifteen years ago, it became a bit more common for people to consider whether Python could be compiled to something that runs more efficiently (and more quickly).

I followed some of these projects enthusiastically for a while. Starkiller promised compilation to C++ but never delivered any code for public consumption, although the associated academic thesis might have prompted the development of Shed Skin, which does compile a particular style of Python program to C++ and is available as Free Software. Meanwhile, PyPy elevated to prominence the notion of writing a language and runtime library implementation in the language itself, previously seen with language technologies like Slang, used to implement Squeak/Smalltalk.

Although other projects have also emerged and evolved to attempt the compilation of Python to lower-level languages (Pyrex, Cython, Nuitka, and so on), my interests have largely focused on the analysis of programs so that we may learn about their structure and behaviour before we attempt to run them, this alongside any benefits that might be had in compiling them to something potentially faster to execute. But my interests have also broadened to consider the evolution of the Python language since the point fifteen years ago when I first started to think about the analysis and compilation of Python. The near-mythical Python 3000 became a real thing in the form of the Python 3 development branch, introducing incompatibilities with Python 2 and fragmenting the community writing software in Python.

With the risk of perfectly usable software becoming neglected, its use actively (and destructively) discouraged, it becomes relevant to consider how one might take control of one’s software tools for long-term stability, where tools might be good for decades of use instead of constantly changing their behaviour and obliging their users to constantly change their software. I expressed some of my thoughts about this earlier in the year having finally reached a point where I might be able to reflect on the matter.

So, the result of a great deal of work, informed by experiences and conversations over the years related to previous projects of my own and those of others, is a language and toolchain called Lichen. This language resembles Python in many ways but does not try to be a Python implementation. The toolchain compiles programs to C which can then be compiled and executed like “normal” binaries. Programs can be trivially cross-compiled by any available C cross-compilers, too, which is something that always seems to be a struggle elsewhere in the software world. Unlike other Python compilers or implementations, it does not use CPython’s libraries, nor does it generate in “longhand” the work done by the CPython virtual machine.

One might wonder why anyone should bother developing such a toolchain given its incompatibility with Python and a potential lack of any other compelling reason for people to switch. Given that I had to accept some necessary reductions in the original scope of the project and to limit my level of ambition just to feel remotely capable of making something work, one does need to ask whether the result is too compromised to be attractive to others. At one point, programs manipulating integers were slower when compiled than when they were run by CPython, and this was incredibly disheartening to see, but upon further investigation I noticed that CPython effectively special-cases integer operations. The design of my implementation permitted me to represent integers as tagged references – a classic trick of various language implementations – and this overturned the disadvantage.
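
As a rough illustration of the tagging trick, sketched here in Python purely for exposition (the real representation lives in the generated C):

# The lowest bit of a word distinguishes small integers from object
# references, which are aligned and therefore have a zero low bit.

def tag_int(value):
    return (value << 1) | 1    # shift the value up and set the tag bit

def is_int(ref):
    return ref & 1 == 1        # tag bit set: an integer, not a reference

def untag_int(ref):
    return ref >> 1            # strip the tag to recover the value

assert untag_int(tag_int(640)) == 640

Arithmetic on such references can then proceed without allocating integer objects, which is what overturned the disadvantage.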

For me, just having the possibility of exploring alternative design decisions is interesting. Python’s design is largely done by consensus, with pronouncements made to settle disagreements and to move the process forward. Although this may have served the language well, depending on one’s perspective, it has also meant that certain paths of exploration have not been followed. Certain things have been improved gradually but not radically due to backwards compatibility considerations, this despite the break in compatibility between the Python 2 and 3 branches where an opportunity was undoubtedly lost to do greater things. Lichen is an attempt to explore those other paths without having to constantly justify it to a group of people who may regard such exploration as hostile to their own interests.

Lichen is not really complete: it needs a floating point number type and other useful types; its library is minimal; it could be made more robust; it could be made more powerful. But I find myself surprised that it works at all. Maybe I should have more confidence in myself, especially given all the preparation I did in trying to understand the good and bad aspects of my previous efforts before getting started on this one.

Developing for MIPS-based Platforms

A couple of years ago I found myself wondering if I couldn’t write some low-level software for the Ben NanoNote. One source of inspiration for doing this was “The CI20 bare-metal project“: a series of blog articles discussing the challenges of booting the MIPS Creator CI20 single-board computer. The Ben and the CI20 use CPUs (or SoCs) from the same family: the Ingenic JZ4720 and JZ4780 respectively.

For the Ben, I looked at the different boot payloads, principally those written to support booting from a USB host, but also the version of U-Boot deployed on the Ben. I combined elements of these things with the framebuffer driver code from the Linux kernel supporting the Ben, and to my surprise I was able to get the device to boot up and show a pattern on the screen. Progress has not always been steady, though.

For a while, I struggled to make the CPU leave its initial exception state without hanging, and with the screen as my only debugging tool, it was hard to see what might have been going wrong. Some careful study of the code revealed the problem: the code I was using to write to the framebuffer was using the wrong address region, meaning that as soon as an attempt was made to update the contents of the screen, the CPU would detect a bad memory access and an exception would occur. Such exceptions will not be delivered in the initial exception state, but with that state cleared, the CPU will happily trigger a new exception when the program accesses memory it shouldn’t be touching.
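
For context, MIPS CPUs expose physical memory through fixed virtual address windows, and an access through the wrong window, or to an address outside any valid mapping, is exactly the kind of bad access that raises such an exception. A sketch of the conventional MIPS32 segment arithmetic (illustrative only, not the actual framebuffer code):

# Conventional MIPS32 kernel segments: unmapped windows onto physical memory.
KSEG0_BASE = 0x80000000   # cached access
KSEG1_BASE = 0xA0000000   # uncached access, typical for framebuffers and registers

def kseg1(physical):
    # The uncached virtual address corresponding to a physical address.
    return KSEG1_BASE | physical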

Debugging low-level code on the Ben NanoNote (the hard way)

I have since plodded along introducing user mode functionality, some page table initialisation, trying to read keypresses, eventually succeeding after retracing my steps and discovering my errors along the way. Maybe this will become a genuinely useful piece of software one day.

But one useful purpose this exercise has served is that of familiarising myself with the way these SoCs are organised, the facilities they provide, how these may be accessed, and so on. My brother has the Letux 400 notebook containing yet another SoC in the same family, the JZ4730, which seems to be almost entirely undocumented. This notebook has proven useful under certain circumstances. For instance, it has been used as a kind of appliance for document scanning, driving a multifunction scanner/printer over USB using the enduring SANE project’s software.

However, the Letux 400 is already an old machine, with products based on this hardware platform being almost ten years old, and when originally shipped it used a 2.4 series Linux kernel instead of a more recent 2.6 series kernel. Like many products whose software is shipped as “finished”, this makes the adoption of newer software very difficult, especially if the kernel code is not “upstreamed” or incorporated into the official Linux releases.

As software distributions such as Debian evolve, they depend on newer kernel features, but if a device is stuck on an older kernel (because the special functionality that makes it work on that device is specific to that kernel) then the device, unable to run the newer kernels, gradually becomes unable to run newer versions of the distribution as well. Thus, Debian Etch was the newest distribution version that would work on the 2.4 kernel used by the Letux 400 as shipped.

Fortunately, work had been done to make a 2.6 series kernel work on the Letux 400, and this made Debian Lenny functional. But time passes and even this is now considered ancient. Although David was running some software successfully, there was other software that really needed a newer distribution to be able to run, and this meant considering what it might take to support Debian Squeeze on the hardware. So he set to work adding patches to the 2.6.24 kernel to bring it within the realm of Squeeze support, taking it beyond the bare minimum of 2.6.29 and into the “release candidate” territory of 2.6.30. And this was indeed enough to run Squeeze on the notebook, at least supporting the devices needed to make the exercise worthwhile.

Now, at a much earlier stage in my own experiments with the Ben NanoNote, I had tried without success to reproduce my results on the Letux 400. And I had also made a rather tentative effort at modifying Ben NanoNote kernel drivers to potentially work with the Letux 400 from some 3.x kernel version. David’s success in updating the kernel version led me to look again at the tasks of familiarising myself with kernel drivers, machine details and of supporting the Letux 400 in even newer kernels.

The outcome of this is uncertain at present. Most of the work on updating the drivers and board support has been done, but actual testing of my work still needs to be done, something that I cannot really do myself. That might seem strange: why start something I cannot finish by myself? But how I got started in this effort is also rather related to the topic of the next section.

The MIPS Creator CI20 and L4/Fiasco.OC

Low-level programming on the Ben NanoNote is frustrating unless you modify the device and solder the UART connections to the exposed pads in the battery compartment, thereby enabling a serial connection and allowing debugging information to be sent to a remote display for perusal. My soldering skills are not that great, and I don’t want to damage my device. So debugging was a frustrating exercise. Since I felt that I needed a bit more experience with the MIPS architecture and the Ingenic SoCs, it occurred to me that getting a CI20 might be the way to go.

I am not really a supporter of Imagination Technologies, producer of the CI20, due to the company’s rather hostile attitude towards Free Software around their PowerVR technologies, meaning that of the different graphics acceleration chipsets, PowerVR has been increasingly isolated as a technology that is consistently unsupportable by Free Software drivers. However, the CI20 is well-documented and has been properly supported with Free Software, apart from the PowerVR parts of the hardware, of course. Ingenic were seemingly persuaded to make the programming manual for the JZ4780 used by the CI20 publicly available, unlike the manuals for other SoCs in that family. And the PowerVR hardware is not actually needed to be able to use the CI20.

The MIPS Creator CI20 single-board computer

I had hoped that the EOMA68 campaign would have offered a JZ4775 computer card, and that the campaign might have delivered such a card by now, but with both of these things not having happened I took the plunge and bought a CI20. There were a few other reasons for doing so: I wanted to see how a single-board computer with a decent amount of RAM (1GB) might perform as a working desktop machine; having another computer to offload certain development and testing tasks, rather than run virtual machines, would be useful; I also wanted to experiment with and even attempt to port other operating systems, loosening my dependence on the Linux monoculture.

One of these other operating systems involves two components: the Fiasco.OC microkernel and the L4 Runtime Environment (L4Re). Over the years, microkernels in the L4 family have seen widespread use, and at one point people considered porting GNU Hurd to one of the L4 family microkernels from the Mach microkernel it then used (and still uses). It seems to me like something worth looking at more closely, and fortunately it also seemed that this software combination had been ported to the CI20. However, it turned out that my expectations of building an image, testing the result, and then moving on to developing interesting software were a little premature.

The first real problem was that GCC produced position-independent code that was not called correctly. This meant that upon trying to get the addresses of functions, the program would end up loading garbage addresses and trying to call any code that might be there at those addresses. So some fixes were required. Then, it appeared that the JZ4780 doesn’t support a particular MIPS instruction, meaning that the CPU would encounter this instruction and cause an exception. So, with some guidance, I wrote a handler to decode the instruction and generate the rather trivial result that the instruction should produce. There were also some more generic problems with the microkernel code that had previously been patched but which had not appeared in the upstream repository. But in the end, I got the “hello” program to run.

With a working foundation I tried to explore the hardware just as I had done with the Ben NanoNote, attempting to understand things like the clock and power management hardware, general purpose input/output (GPIO) peripherals, and also the Inter-Integrated Circuit (I2C) peripherals. Some assistance was available in the form of Linux kernel driver code, although the style of code can vary significantly, and it also takes time to “decode” various mechanisms in the Linux code and to unpick the useful bits related to the hardware. I had hoped to get further, but in trying to use the I2C peripherals to talk to my monitor using the DDC protocol, I found that the data being returned was not entirely reliable. This was arguably a distraction from the more interesting task of enabling the display, given that I know what resolutions my monitor supports.

However, all this hardware-related research and detective work at least gave me an insight into mechanisms – software and hardware – that would inform the effort to “decode” the vendor-written code for the Letux 400, making certain things seem a lot more familiar and increasing my confidence that I might be understanding the things I was seeing. For example, the JZ4720 in the Ben NanoNote arranges its hardware registers for GPIO configuration and access in a particular way, but the code written by the vendor for the JZ4730 in the Letux 400 accesses GPIO registers in a different way.

Initially, I might have thought that I was missing some important detail: are the two products really so different, and if not, then why is the code so different? But then, looking at the JZ4780, I encountered another scheme for GPIO register organisation that is different again, but which does have similarities to the JZ4730. With the JZ4780 being publicly documented, the code for the Letux 400 no longer seemed quite so bizarre or unfathomable. With more experience, it is possible to have a little more confidence in one’s understanding of the mechanisms at work.

I would like to spend a bit more time looking at microkernels and alternatives to Linux. While many people presumably think that Linux is running on everything and has “won”, it is increasingly likely that the Linux one sees on devices does not completely control the hardware and is, in fact, virtualised or confined by software systems like L4/Fiasco.OC. I also have reservations about the way Linux is developed and how well it is able to handle the demands of its proliferation onto every kind of device, many of them hooked up to the Internet and being left to fend for themselves.

Developing imip-agent

Alongside Lichen, a project that has been under development for the last couple of years has been imip-agent, allowing calendar-based scheduling activities to be integrated with mail transport agents. I haven’t been able to spend quite as much time on imip-agent this year as I might have liked, although I will also admit that I haven’t always been motivated to spend much time on it, either. Still, there have been brief periods of activity tidying up, fixing, or improving the code. And some interest in packaging the software led me to reconsider some of the techniques used to deploy the software, in particular the way scheduling extensions are discovered, and the way the system configuration is processed (since Debian does not want “executable scripts” in places like /etc, even if those scripts just contain some simple configuration setting definitions).

It is perhaps fairly typical that a project that tries to assess the feasibility of a concept accumulates the necessary functionality in order to demonstrate that it could do a particular task. After such an initial demonstration, the effort of making the code easier to work with, more reliable, more extensible, must occur if further progress is to be made. One intervention that kept imip-agent viable as a project was the introduction of a test suite to ensure that the basic functionality did indeed work. There were other architectural details that I felt needed remedying or improving for the code to remain manageable.

Recently, I have been refining the parts of the code that support editing of calendar objects and the exchange of updates caused by changes to calendar events. Such work is intended to make the Web client easier to understand and to expose such functionality to proper testing. One side-effect of this may be the introduction of a text-based client for people using e-mail programs like Mutt, as well as a potentially usable library for other mail clients. Such tidying up and fixing does not show off fancy new features or argue the case for developing such software in the first place, but I suppose it makes me feel better about the software I have written.

Whither Moin?

There are probably plenty of other little projects of my own that I have started or at least contemplated this year. And there are also projects that are not mine but which I use and which have had contributions from me over the years. One of these is the MoinMoin wiki software that powers a number of Free Software and other Web sites where collaborative editing is made available to the communities involved. I use MoinMoin – or Moin for short – to publish content on the Web myself, and I have encouraged others to use it in the past. However, it worries me now that maintenance has fallen to a level where updates for faults in the software are unlikely to be forthcoming, and where it is no longer clear where such updates should come from.

Earlier in the year, having previously read queries about the static export output from Moin, which can be rather basic and does not necessarily resemble the appearance of the originating wiki, I spent some time considering my own use of Moin for documentation publishing. For some of my projects, I don’t take advantage of the “through the Web” editing of the solution when publishing the public documentation. Instead, I use Moin locally, store the pages in a separate repository, and then make page packages that get installed on a public instance of Moin. This means that I do not have to worry about Web-based authentication and can just have a wiki as a read-only resource.

Obviously, the parts of Moin that I really need here are just the things that parse the wiki formatting (which I regard as more usable than other document markup formats in various respects) and that format the content as HTML. If I could format it as static content with some pages, some stylesheets, some images, with some Web server magic to make the URLs look nice, then that would probably be sufficient. For some things like the automatic generation of SVG from Graphviz-format files, I would also need to have the relevant parsers available, too. Having a complete Web framework, which is what Moin really is, is rather unnecessary with these diminished requirements.

But I do use Moin as a full wiki solution as well, and so it made me wonder whether I shouldn’t try and bring it up to date. Of course, there is already the MoinMoin 2.0 effort that was intended to modernise and tidy up the software, but since this effort made a clean break from Moin 1.x, it was never an attractive choice for those people already using Moin in anything more than a basic sense. Since there wasn’t an established API for extensions, it was not readily usable for many existing sites that rely on such extensions. In a way, Moin 2 has suffered from something that Python 3 only avoided by having a lot more people working on it, including people being paid to work on it, together with a policy of openly shaming those people who had made Python 2 viable – by developing software for it – into spending time migrating their code to Python 3.

I don’t have an obvious plan of action here. Moin perhaps illustrates the fundamental problem facing many Free Software projects, this being a theme that I have discussed regularly this year: how they may remain viable by having people able to dedicate their time to writing and maintaining Free Software without this work being squeezed in around the edges of people’s “actual work” and thus burdening them with yet another obligation in their lives, particularly one that is not rewarded by a proper appreciation of the sacrifice being made.

Plenty of individuals and organisations benefit from Moin, but we live in an age of “comparison shopping” where people will gladly drop one thing if someone offers them something newer and shinier. This is, after all, how everyone ends up using “free” services where the actual costs are hidden. To their credit, when Moin needed to improve its password management, the Python Software Foundation stepped up and funded this work rather than dropping Moin, which is what I had expected given certain Python community attitudes. Maybe other, more well-known organisations that use Moin also support its development, but I don’t really see much evidence of it.

Maybe they should consider doing so. The notion that something else will always come along, developed by some enthusiastic developer “scratching their itch”, is misguided and exploitative. And a failure to sustain Free Software development can only undermine Free Software as a resource, as an activity or a cause, and as the basis of many of those organisations’ continued existence. Many of us like developing Free Software, as I hope this article has shown, but motivation alone does not keep that software coming forever.

Some Thoughts on Python-Like Languages

Tuesday, June 6th, 2017

A few different things have happened recently that got me thinking about writing something about Python, its future, and Python-like languages. I don’t follow the different Python implementations as closely as I used to, but certain things did catch my attention over the last few months. But let us start with things closer to the present day.

I was neither at the North American PyCon event, nor at the invitation-only Python Language Summit that occurred as part of that gathering, but LWN.net has been reporting the proceedings to its subscribers. One of the presentations of particular interest was covered by LWN.net under the title “Keeping Python competitive”, apparently discussing efforts to “make Python faster”, the challenges faced by different Python implementations, and the limitations imposed by the “canonical” CPython implementation that can frustrate performance improvement efforts.

Here is where this more recent coverage intersects with things I have noticed over the past few months. Every now and again, an attempt is made to speed Python up, sometimes building on the CPython code base and bolting on additional technology to boost performance, sometimes reimplementing the virtual machine whilst introducing similar performance-enhancing technology. When such projects emerge, especially when a large company is behind them in some way, expectations of a much faster Python are considerable.

Thus, when the Pyston reimplementation of Python became more widely known, undertaken by people working at Dropbox (who also happen to employ Python’s creator Guido van Rossum), people were understandably excited. Three years after that initial announcement, however, and those ambitious employees now have to continue that work on their own initiative. One might be reminded of an earlier project, Unladen Swallow, which also sought to perform just-in-time compilation of Python code, undertaken by people working at Google (who also happened to employ Python’s creator Guido van Rossum at the time), which was then abandoned as those people were needed to go and work on other things. Meanwhile, another apparently-broadly-similar project, Pyjion, is being undertaken by people working at Microsoft, albeit as a “side project at work”.

As things stand, perhaps the most dependable alternative implementation of Python, at least if you want one with a just-in-time compiler that is actively developed and supported for “production use”, appears to be PyPy. And this is only because of sustained investment of both time and funding over the past decade and a half into developing the technology and tracking changes in the Python language. Heroically, the developers even try and support both Python 2 and Python 3.

Motivations for Change

Of course, Google, Dropbox and Microsoft presumably have good reasons to try and get their Python code running faster and more efficiently. Certainly, the first two companies will be running plenty of Python to support their services; reducing the hardware demands of delivering those services is definitely a motivation for investigating Python implementation improvements. I guess that there’s enough Python being run at Microsoft to make it worth their while, too. But then again, none of these organisations appear to be resourcing these efforts at anything close to what would be marshalled for their actual products, and I imagine that even similar infrastructure projects originating from such companies (things like Go, for example) have had many more people assigned to them on a permanent basis.

And now, noting the existence of projects like Grumpy – a Python to Go translator – one has to wonder whether there isn’t some kind of strategy change afoot: that it now might be considered easier for the likes of Google to migrate gradually to Go and steadily reduce their dependency on Python than it is to remedy identified deficiencies with Python. Of course, the significant problem remains of translating Python code to Go while still having it interface with code written in C against Python’s extension interfaces, and of maintaining reliability and performance in the result.

Indeed, the matter of Python’s “C API”, used by extensions written in C for Python programs to use, is covered in the LWN.net article. As people have sought to improve the performance of their software, they have been driven to rewrite parts of it in C, interfacing these performance-critical parts with the rest of their programs. Although such optimisation techniques make sense and have been a constant presence in software engineering more generally for many decades, it has almost become the path of least resistance when encountering performance difficulties in Python, even amongst the maintainers of the CPython implementation.

And so, alternative implementations need to either extract C-coded functionality and offer it in another form (maybe even written in Python, can you imagine?!), or they need to have a way of interfacing with it, one that could produce difficulties and impair their own efforts to deliver a robust and better-performing solution. Thus, attempts to mitigate CPython’s shortcomings have actually thwarted the efforts of other implementations to mitigate the shortcomings of Python as a whole.

Is “Python” Worth It?

You may well be wondering, if I haven’t managed to lose you already, whether all of these ambitious and brave efforts are really worth it. Might there be something about Python that just makes it too awkward to target with a revised and supposedly better implementation? Again, the LWN.net article describes sentiments that simpler, Python-like languages might be worth considering, mentioning the Hack language in the context of PHP, although I might also suggest Crystal in the context of Ruby, even though the latter is possibly closer to various functional languages and maybe only bears syntactic similarities to Ruby (although I haven’t actually looked too closely).

One has to be careful with languages that look dynamic but are really rather strict in how types are assigned, propagated and checked. And, should one choose to accept static typing, even with type inference, it could be said that there are plenty of mature languages – OCaml, for instance – that are worth considering instead. As people have experimented with Python-like languages, others have been quick to criticise them for not being “Pythonic”, even if the code one writes is valid Python. But I accept that the challenge for such languages and their implementations is to offer a Python-like experience without frustrating the programmer too much about things which look valid but which are forbidden.

When tuning a Python program to work with Shedskin, I needed to be informed about what Shedskin was likely to allow and to reject. As far as I am concerned, as long as this is not too restrictive, and as long as guidance is available, I don’t see a reason why such a Python-like language couldn’t be as valid as “proper” Python. Python itself has changed over the years, and the version I first used probably wouldn’t measure up to what today’s newcomers would accept as Python at all, but I don’t accept that the language I used back in 1995 was not Python: that would somehow be a denial of history and of my own experiences.

Could I actually use something closer to Python 1.4 (or even 1.3) now? Which parts of more recent versions would I miss? And which parts of such ancient Pythons might even be superfluous? In pursuing my interests in source code analysis, I decided to consider such questions in more detail, partly motivated by the need to keep the investigation simple, partly motivated by laziness (in that something might be amenable to analysis but require more effort than I considered worthwhile), and partly motivated by my own experiences developing Python-based solutions.

A Leaner Python

Usually, after a title like that, one might expect to read about how I made everything in Python statically typed, or that I decided to remove classes and exceptions from the language, or do something else that would seem fairly drastic and change the character of the language. But I rather like the way Python behaves in a fundamental sense, with its classes, inheritance, dynamic typing and indentation-based syntax.

Other languages inspired by Python have had a tendency to diverge noticeably from the general form of Python: Boo, Cobra, Delight, Genie and Nim introduce static typing and (arguably needlessly) change core syntactic constructs; Converge and Mython focus on meta-programming; MyPy is the basis of efforts to add type annotations and “optional static typing” to Python itself. Meanwhile, Serpentine is a project being developed by my brother, David, and is worth looking at if you want to write software for Android, have some familiarity with user interface frameworks like PyQt, and can accept the somewhat moderated type discipline imposed by the Android APIs and the Dalvik runtime environment.

In any case, having already made a few rounds trying to perform analysis on Python source code, I am more interested in keeping the foundations of Python intact and focusing on the less visible characteristics of programs: effectively reading between the lines of the source code by considering how it behaves during execution. Solutions like Shedskin take advantage of restrictions on programs to be able to make deductions about program behaviour. These deductions can be sufficient in helping us understand what a program might actually do when run, as well as helping the compiler make more robust or efficient programs.
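
A tiny example of the sort of restriction involved, illustrating the general idea rather than Shedskin’s exact rules:

def area(w, h):
    return w * h        # compilable if w and h can be deduced to be numbers

x = area(640, 512)      # fine: x is deduced to be an integer
x = "not a number"      # reusing x with another type defeats the deduction

Keeping each name at a single, consistent type lets a whole-program analysis deduce types without annotations and generate efficient code accordingly.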

And the right kind of restrictions might even help us avoid introducing more disruptive restrictions such as having to annotate all the types in a program in order to tell us similar things (which appears to be one of the main directions of Python in the current era, unfortunately). I would rather lose exotic functionality that I have never really been convinced by, than retain such functionality and then have to tell the compiler about things it would otherwise have a chance of figuring out for itself.

Rocking the Boat

Certainly, being confronted with any list of restrictions, despite the potential benefits, can seem like someone is taking all the toys away. And it can be difficult to deliver the benefits to make up for this loss of functionality, no matter how frivolous some of it is, especially if there are considerable expectations in terms of things like performance. Plenty of people writing alternative Python implementations can attest to that. But there are other reasons to consider a leaner, more minimal, Python-like language and accompanying implementation.

For me, one rather basic reason is merely to inform myself about program analysis, figure out how difficult it is, and hopefully produce a working solution. But beyond that is the need to be able to exercise some level of control over the tools I depend on. Python 2 will in time no longer be maintained by the Python core development community; a degree of agitation has existed for some time to replace it with Python 3 in Free Software operating system distributions. Yet I remain unconvinced about Python 3, particularly as it evolves towards a language that offers “optional” static typing that will inevitably become mandatory (despite assertions that it will always officially be optional) as everyone sprinkles their code with annotations and hopes for the magic fairies and pixies to come along and speed it up, that latter eventuality being somewhat less certain.

There are reasons to consider alternative histories for Python in the form of Python-like languages. People argue about whether Python 3’s Unicode support makes it as suitable for certain kinds of programs as Python 2 has been, with the Mercurial project being notable in its refusal to hurry along behind the Python 3 adoption bandwagon. Indeed, PyPy was devised as a platform for such investigations, being only somewhat impaired in some respects by its rather intensive interpreter generation process (but I imagine there are ways to mitigate this).

Making a language implementation that is adaptable is also important. I like the ability to be able to cross-compile programs, and my own work attempts to make this convenient. Meanwhile, cross-building CPython has been a struggle for many years, and I feel that it says rather a lot about Python core development priorities that even now, with the need to cross-build CPython if it is to be available on mobile platforms like Android, the lack of a coherent cross-building strategy has left those interested in doing this kind of thing maintaining their own extensive patch sets. (Serpentine gets around this problem, as well as the architectural limitations of dropping CPython on an Android-based device and trying to hook it up with the different Android application frameworks, by targeting the Dalvik runtime environment instead.)

No Need for Another Language?

I found it depressingly familiar when David announced his Android work on the Python mobile-sig mailing list and got the following response:

In case you weren't aware, you can just write Android apps and services
in Python, using Kivy.  No need to invent another language.

Fortunately, various other people were more open-minded about having a new toolchain to target Android. Personally, the kind of “just use …” rhetoric reminds me of the era when everyone writing Web applications in Python was exhorted to “just use Zope”, which was a complicated (but admittedly powerful and interesting) framework whose shortcomings were largely obscured and downplayed until enough people had experienced them and felt that progress had to be made by working around Zope altogether and developing other solutions instead. Such zero-sum games – that there is one favoured approach to be promoted, with all others to be terminated or hidden – perhaps inspired by an overly-parroted “only one way to do it” mantra in the Python scene, have been rather damaging to both the community and to the adoption of Python itself.

Not being Python, not supporting every aspect of Python, has traditionally been seen as a weakness when people have announced their own implementations of Python or of Python-like languages. People steer clear of impressive works like PyPy or Nuitka because they feel that these things might not deliver everything CPython does, exactly like CPython does. Which is pretty terrible if you consider the heroic effort that the developer of Nuitka puts in to make his software work as similarly to CPython as possible, even going as far as to support Python 2 and Python 3, just as the PyPy team do.

Solutions like MicroPython have got away with certain incompatibilities with the justification that the target environment is rather constrained. But I imagine that even that project’s custodians get asked whether it can run Django, or whatever the arbitrarily-set threshold for technological validity might be. Never mind whether you would really want to run Django on a microcontroller or even on a phone. And never mind whether large parts of the mountain of code propping up such supposedly essential solutions could actually do with an audit and, in some cases, benefit from being retired and rewritten.

I am not fond of change for change’s sake, but new opportunities often bring new priorities and challenges with them. What then if Python as people insist on it today, with all the extra features added over the years to satisfy various petitioners and trends, is actually the weakness itself? What if the Python-like languages can adapt to these changes, and by having to confront their incompatibilities with hastily-written code from the 1990s and code employing “because it’s there” programming techniques, they can adapt to the changing environment while delivering much of what people like about Python in the first place? What if Python itself cannot?

“Why don’t you go and use something else if you don’t like what Python is?” some might ask. Certainly, Free Software itself is far more important to me than any adherence to Python. But I can also choose to make that other language something that carries forward the things I like about Python, not something that looks and behaves completely differently. And in doing so, at least I might gain a deeper understanding of what matters to me in Python, even if others refuse the lessons and the opportunities such Python-like languages can provide.

Rename This Project

Tuesday, December 13th, 2016

It is interesting how the CPython core developers appear to prefer to spend their time on choosing names for someone else’s fork of Python 2, with some rather expansionist views on trademark applicability, than on actually winning over Python 2 users to their cause, which is to make Python 3 the only possible future of the Python language, of course. Never mind that the much broader Python language community still appears to have an overwhelming majority of Python 2 users. And not some kind of wafer-thin, “first past the post”, mandate-exaggerating, Brexit-level majority, but an actual “that doesn’t look so bad but, oh, the scale is logarithmic!” kind of majority.

On the one hand, there are core developers who claim to be very receptive to the idea of other people maintaining Python 2, because the CPython core developers have themselves decided that they cannot bear to look at that code after 2020 and will not issue patches, let alone make new releases, even for the issues that have been worthy of their attention in recent years. Telling people that they are completely officially unsupported applies yet more “stick” and even less “carrot” to those apparently lazy Python 2 users who are still letting the side down by not spending their own time and money on realising someone else’s vision. But apparently, that receptivity extends only so far into the real world.

One often reads and hears claims of “entitlement” when users complain about developers or the output of Free Software projects. Let it be said that I really appreciate what has been delivered over the decades by the Python project: the language has kept programming an interesting activity for me; I still to this day maintain and develop software written in Python; I have even worked to improve the CPython distribution at times, not always successfully. But it should always be remembered that even passive users help to validate projects, and active users contribute in numerous ways to keep projects viable. Indeed, users invest in the viability of such projects. Without such investment, many projects (like many companies) would remain unable to fulfil their potential.

Instead of inflicting burdensome change whose predictable effect is to cause a depreciation of the users’ existing investments and to demand that they make new investments just to mitigate risk and “keep up”, projects should consider their role in developing sustainable solutions that do not become obsolete just because they are not based on the “latest and greatest” of the technology realm’s toys. If someone comes along and picks up this responsibility when it is abdicated by others, then at the very least they should not be given a hard time about it. And at least this “Python 2.8” barely pretends to be anything more than a continuation of things that came before, which is not something that can be said about Python 3 and the adoption/migration fiasco that accompanies it to this day.

Making Python Programs Faster with Shedskin

Tuesday, February 2nd, 2016

A few months ago, I had the opportunity to combine two of my interests: retrocomputing and Python programming. The latter needs little additional explanation, but the former perhaps requires a few more words. Retrocomputing is the study and use of computing equipment from an earlier era in computing, where such equipment is typically no longer in use, or is no longer in widespread use. My own experiences with microcomputers began in the 1980s and are largely centred upon those manufactured by Acorn Computers, such as the Acorn Electron and BBC Microcomputer.

Some History

One of my earlier initiatives was to attempt to document and understand the functioning of the Acorn Electron’s ULA integrated circuit: the Uncommitted Logic Array employed in the computer to generate video and perform input/output tasks. When Acorn decided to make the Electron as a variant of the BBC Microcomputer, the engineers merged many of the functions performed by separate chips into a single one, for several good and not-so-good reasons:

  • To reduce system complexity: having to connect several components and make sure that they all work correctly and in time with each other can be challenging, and it is arguably best to reduce the number of things that can go wrong by just reducing the number of things involved in the first place.
  • To reduce system cost: production becomes less complicated and savings can potentially be realised by combining discrete components.
  • To deepen the organisation’s experience with integrated circuit design: this being the company that developed the ARM architecture and eventually brought the first ARM chipset (CPU, audio/video controller, memory management unit, and input/output controller) to market.
  • To make a proprietary component that others could not readily clone: this being something of an obsession in the early 1980s marketplace.

Now, the ULA has the job of reading from memory and translating what it reads into a sequence of colour values, thus generating a picture on the screen. However, it has the annoying limitation of locking the CPU out of the memory (in fact, only the RAM which resides in a certain region) while it generates each line of the displayed image. Depending on which “screen mode” is selected (determining the resolution and colour depth), it may still let the CPU access the RAM at a lower speed, or it may effectively suspend the CPU for the entire time taken to generate a single horizontal display line.

Consequently, the Acorn Electron is considerably slower than the BBC Microcomputer: the BBC Micro employs faster memory and lets the CPU and its own video circuitry take turns accessing it while both run at full speed, whereas the Electron used cheaper memory as another cost-saving measure. Reviews of the Electron, while generally positive about getting BBC Micro features for less, tended to note this performance degradation with some disappointment.

The Acorn Electron ULA (with socket cover/heatsink removed)

Some Background

Normally, the matter of the CPU stalling for much of its time would be considered a disadvantage, but my brother and I were having a discussion about software running from ROM (or, in fact, in the area of memory not affected by the ULA), with the pitfalls of such software needing to access the RAM and potentially becoming stalled as the CPU finds itself waiting for the ULA to do its work. Somehow the notion arose that the ULA effectively provides a kind of synchronisation mechanism for software needing to run in the period between display lines.

So, a program can read instructions from ROM and then, when it needs to know that the display is not being updated, it can attempt to access RAM. Whether or not the program stalls remains unknown to the program itself, but when it gets the result of accessing the RAM – perhaps immediately, perhaps after a few microseconds – it can be certain that the ULA is not accessing the RAM because it just did so itself.

An image showing the display update region (the "test card" picture) when the ULA accesses memory, with the black border indicating the region (or time period) during which the CPU may access the lower region of the Electron's memory.

(See the BBC Test Cards for the origin of the above picture.)

One might wonder what kind of program would benefit from synchronising itself to the display line periods. Well, one thing that tended to happen back in the microcomputing era was the trick of reprogramming the display palette – the selection of colours shown on screen – so that a greater number of colours can be displayed simultaneously than would normally be the case for that particular display configuration. For example, a screen mode normally offering only four colours could instead offer the full range – eight “proper” colours on an Electron – if it switched the palette during screen updates. Even a screen mode offering only two colours could offer eight by employing palette switching.

And with a certain amount of experimentation, a working solution was eventually delivered (with only limited input needed from my side). By running software in a ROM (or from RAM mapped into the same area of memory as a ROM), it became possible to reliably change the palette on a line-by-line basis, bridging the gap between a medium-resolution four-colour mode and a hypothetical eight-colour version that would only become available on Acorn’s ARM-based microcomputers. Although only four colours can be used per display line, each of the 256 display lines can employ four-colour combinations of the full eight colours, not quite making an eight-colour mode – which would, of course, allow eight colours on every display line – but still permitting graphical output much closer to a true eight-colour mode than can be contemplated with a restrictive four-colour mode.

Rainbow lorikeets on a lawn (left: 4 colours from 8 per display line; right: 8 colours per display line)

Palette Optimisation

With the trick in place to switch the palette on demand, all that remained was to make a program that could take an input image and optimise the colours so that…

  1. Only the eight permitted colours (black, white, primary and secondary colours) are used in the image.
  2. No pixel row (display line) employs more than four different colours.

Expectations were rather low at first. In this era of bountiful quantities of fresh imagery, most available pictures are photographs with plenty of colours, so the easiest images to obtain are precisely the ones that first need to be reduced in both resolution and colour depth. Only once this is done can the colours be optimised to meet the second of the above criteria. So, initially, very simple techniques were employed to do both of these things.

Later, after some discussions, it appeared that integrating the two processing activities and, crucially, applying basic dithering and error propagation techniques, made it possible to represent complicated “deep-colour” images within the limited display representation.
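
To make the technique concrete, here is a minimal sketch of one-dimensional error propagation, with invented names and a much simpler policy than the actual program: the quantisation error from each pixel is carried along the row so that neighbouring pixels compensate for the lost precision.

def reduce_row(row, palette):
    # For each (r, g, b) pixel, add the accumulated error, pick the
    # nearest palette colour, and carry the new error to the next pixel.
    output = []
    error = (0.0, 0.0, 0.0)
    for pixel in row:
        target = tuple(p + e for p, e in zip(pixel, error))
        nearest = min(palette,
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(c, target)))
        output.append(nearest)
        error = tuple(t - n for t, n in zip(target, nearest))
    return output

The actual program combines such error propagation with the per-line palette selection described below.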

Magnified details of the two representations

Closer examination of the above images reveals how the algorithm attempts to spread the responsibility of representing colours across rows, being restricted to the choice of four colours (in the left image) to produce the appropriate tones that are more readily encoded using eight colours (in the right image).

Tuning the Implementation

To perform the optimisation process, I wrote a program which took an input image, rotated and scaled it to the target resolution, and processed the colours by inspecting the pixel data on each horizontal line (or row) of the image, calculating the appropriate four-colour combination for a line and generating suitable pixel data appropriate for this restricted palette. Since I am probably most comfortable with Python, and since Python also has various convenient image-processing libraries, I found myself developing a short Python program to do the work.
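
In outline, the per-line processing might look something like the following sketch, with invented names and a far cruder colour-selection policy than the real program uses:

from collections import Counter

def best_four(row):
    # Choose the four most frequently occurring colours on this display
    # line; the actual program is more discerning about which colours
    # can be approximated by the retained ones.
    return [colour for colour, count in Counter(row).most_common(4)]

def remap_row(row, palette):
    # Replace each pixel with the closest of the line's four colours.
    def distance(c1, c2):
        return sum((a - b) ** 2 for a, b in zip(c1, c2))
    return [min(palette, key=lambda c: distance(c, pixel)) for pixel in row]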

Now, Python doesn’t usually exhibit the highest performance, particularly for tasks such as this, and as the experimentation with different approaches started to lessen, with the most rewarding approaches making themselves evident, and with the temptation to convert lots of images just to see the results, my attention turned to speeding up the program. Since I had been involved with packaging the Shedskin Python-to-C++ compiler for Debian, and thus the package was right there for me to use, it made sense to give it a try on this code.

At this point, the program was taking around 40 seconds to convert a 16 megapixel photographic image into the appropriate output representation. Despite using the Python Imaging Library to do the rotation and scaling, visiting each pixel in a Python program and doing some simple statistical and arithmetic operations was taking up rather a lot of time. In the past, I have used various “numeric” Python extensions – usually the ones supported by pygame – but things have moved on somewhat incoherently since then, and I also wanted to keep my code straightforward and readable, which is often something that is lost when making code “numeric”.

Time for Shedskin

I have used Shedskin before, and the first thing to think about when using it is how it will interact with non-Python code. Shedskin takes Python code and generates C++ which must then be compiled. If a whole program is translated to C++, the resulting executable can be run with no further work necessary. However, the program of interest here uses libraries that are actually implemented in C and are delivered as shared libraries that are loaded by CPython (the Python virtual machine implementation written in the C programming language). Shedskin cannot generally translate such a program in its entirety.

From the earliest days of Python, it was (and has remained) a common practice to first write library code in Python, desire better performance, and to then rewrite much of that code in the C programming language against the Python/C API (or CPython API) as an “extension module”, which is what these problematic shared libraries are. Such libraries act as part of the CPython virtual machine, more or less, and with suitable implementation choices made for performance, everything using such libraries will run much more quickly. Unfortunately, but understandably, Shedskin doesn’t seek to interact closely with the CPython implementation: it produces C++ code that works with its own runtime libraries.

So, instead of working with a single program file, I split my program up into a main program which deals with CPython extensions such as the Python Imaging Library, whose JPEG and PNG manipulation facilities are too convenient to abandon, along with a library module that does all the computation for this particular application. The library would provide its own image abstraction for pixel-level accesses, but be given the pixel data obtained by the main program, returning the processed data to the main program for saving to a file. By only compiling the library, Shedskin can produce a standalone library file containing hopefully high-performance implementations of the original Python code.

But how can such a library constructed by Shedskin be used? Would we now need to somehow translate the main program into C or C++ in order to be able to use it? Fortunately, but slightly confusingly, Shedskin can also generate the necessary CPython API wrapper around the translated code, making it possible for CPython to load this newly-created library after all.

(One might wonder how this is possible, but if one considers that arbitrary C or C++ code can be wrapped using the CPython API, with the values being sent in and out of a library only being converted at the interface, Shedskin has the luxury of generating something that lives by its own rules internally, with the wrapper doing the necessary “marshalling” of the values as they go in and come out. Such a wrapped library might be frowned upon in the Python world, not being a sophisticated extension module that takes full advantage of the CPython API, but such libraries were, and probably still are, the “bread and butter” of Python’s access to a wide array of tools and technologies.)

Knowing that this is a reasonable approach, I experienced a moment of excessive ambition. I tried to take the newly-broken-out library and compile it using the appropriate option:

shedskin -e optimiser.py

This took quite some time, and things did not go well…

*** SHED SKIN Python-to-C++ Compiler 0.9.2 ***
Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)

[analyzing types..]
****************************88%
*WARNING* reached maximum number of iterations
********************************100%
[generating c++ code..]
*WARNING* 'get_combinations' function not exported (cannot convert argument 'c')
*WARNING* 'get_colours' function not exported (cannot convert return value)
*WARNING* 'balance' function not exported (cannot convert argument 'd')
*WARNING* optimiser.py: expression has dynamic (sub)type: {None, int, tuple}
*WARNING* optimiser.py: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiser.py: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiser.py: expression has dynamic (sub)type: {int, tuple}
*WARNING* optimiser.py: variable 'data' has dynamic (sub)type: {None, int, tuple}
*WARNING* optimiser.py: variable (class SimpleImage, 'data') has dynamic (sub)type: {None, int, tuple}

And then there were many more lines of a more specific nature:

*WARNING* optimiser.py:41: function distance not called!
*WARNING* optimiser.py:50: expression has dynamic (sub)type: {None, int, tuple}
*WARNING* optimiser.py:53: expression has dynamic (sub)type: {None, int, tuple}
*WARNING* optimiser.py:57: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiser.py:57: expression has dynamic (sub)type: {int, tuple}

And so on. Now, if you are thinking that this will probably not end well, you would be right. Running make gives plenty of errors looking as scary as this…

optimiser.cpp: In function '__shedskin__::list<__shedskin__::tuple2<__shedskin__::tuple2<int, int>*, double>*>* __optimiser__::list_comp_0(__shedskin__::pyiter<double>*)':
optimiser.cpp:76:17: error: base operand of '->' is not a pointer
optimiser.cpp:77:21: error: base operand of '->' is not a pointer
optimiser.cpp:78:68: error: invalid conversion from '__shedskin__::tuple2<int, int>*' to 'int' [-fpermissive]
In file included from /usr/share/shedskin/lib/builtin.hpp:1204:0,
                 from optimiser.cpp:1:
/usr/share/shedskin/lib/builtin/tuple.hpp:211:28: error:   initializing argument 2 of '__shedskin__::tuple2<A, B>::tuple2(int, A, B) [with A = int; B = double]' [-fpermissive]
optimiser.cpp:78:70: error: no matching function for call to '__shedskin__::list<__shedskin__::tuple2<__shedskin__::tuple2<int, int>*, double>*>::append(__shedskin__::tuple2<int, double>*)'
optimiser.cpp:78:70: note: candidate is:
In file included from /usr/share/shedskin/lib/builtin.hpp:1203:0,
                 from optimiser.cpp:1:
/usr/share/shedskin/lib/builtin/list.hpp:96:25: note: void* __shedskin__::list<T>::append(T) [with T = __shedskin__::tuple2<__shedskin__::tuple2<int, int>*, double>*]
/usr/share/shedskin/lib/builtin/list.hpp:96:25: note:   no known conversion for argument 1 from '__shedskin__::tuple2<int, double>*' to '__shedskin__::tuple2<__shedskin__::tuple2<int, int>*, double>*'

Of course, it would be unfair to expect this to have worked: Shedskin did indeed give us plenty of warnings! But we should at least try and understand such warnings if we are to make progress. After a few more iterations of this bold strategy, I realised that I had started out in the wrong fashion, presenting Shedskin with code that is too dynamic (and ambiguous) for it to reasonably infer sensible types and produce a compilable C++ representation.

First Things First

I started out by reintroducing some necessary changes gradually. First of all, I made a branch of my code committing the special image abstraction to be used in any future Shedskin-compiled version. Meanwhile, I looked at some things that might generally be performance-degrading in Python, and which might also help Shedskin deduce the program types more readily.

One thing that I had done in the initial implementation was to freely use sequences to represent colour triplets: the red, green and blue values. I was using code like this:

def invert(srgb):
    # Treat the triplet as a generic sequence, producing a new tuple.
    return tuple(map(lambda x: 1.0 - x, srgb))

This might be elegant – there’s no separate treatment of each element in the triplet – but there is likely to be more overhead in treating the triplet as a sequence, iterating over it, and so on. Moreover, Shedskin is likely to see objects appearing from a collection without any idea of what was put into that collection to start with. So, more mundane code was introduced in its place:

def invert(srgb):
    # Unpack the triplet so each component is handled explicitly.
    r, g, b = srgb
    return 1.0 - r, 1.0 - g, 1.0 - b

Such changes reduced the processing time from around 35 seconds to around 25 seconds. After various other performance modifications, such as avoiding the repeated computation of common values, this became around 18 to 19 seconds. Interestingly, merging such modifications into the branch with the special image abstraction reduced the running time to around 16 to 17 seconds. But at this point, obvious optimisation opportunities were more or less exhausted.

It then became time for another attempt to present this to Shedskin. This time, instead of calling the main program main.py and the library optimiser.py, which is not particularly intuitive, I retained the name of the main program as optimiser.py and used the traditional optimiserlib.py naming for the library:

shedskin -e optimiserlib.py

This completed without any warnings, and running make produced a library file, optimiserlib.so, that Python recognises as an extension module. Running the program produced a palette-optimised image in just under 9 seconds!
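
Conveniently, because the compiled module carries the same name as the original source file, the main program needs no changes at all: when both optimiserlib.so and optimiserlib.py are present, CPython imports the extension module in preference to the source file. The call below is purely illustrative, borrowing the balance function mentioned later in this article:

# In optimiser.py, the main program:
import optimiserlib  # resolves to optimiserlib.so when it has been built

dd = optimiserlib.balance(d)  # illustrative only; the real signature may differ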

Making the Difference

So, what were the differences that made Shedskin accept the library code? Since the splitting up of the code into two files makes comparisons between versions awkward, we can first compare the version of the program before our preparatory work with the version of the program that fed into our Shedskin version. There are four areas of change:

  • The introduction of that image abstraction class – SimpleImage – to be used to manipulate images instead of using Python Imaging Library objects. (In fact, this is only used initially as a container for the raw pixel data, with such data being passed to library functions, as described below.)
  • Modifications to access the different components of colour triplets explicitly.
  • Some “common sense” performance modifications, avoiding the repeated computation of things that always produce the same result anyway.
  • Some local name adjustments.

This final point is something that the Shedskin documentation mentions prominently, but it can be easy to forget. Consider the following code (in the balance function of the library module):

dd = dict([(value, f) for f, value in d])

Originally, this looked like this:

d = dict([(value, f) for f, value in d])

This earlier version just reverses a dictionary mapping and assigns the result to the name of the original dictionary. Although this seems harmless enough, Shedskin’s analysis would be complicated substantially by having to deal with a name being reassigned and potentially being associated with completely different kinds of objects, even if in this case we are only dealing with dictionaries. (Even if the same type were only ever involved, as we see here, it would also prevent any analysis being done on the nature of the keys and values in the dictionaries.)

We can see the effect of this by trying to compile the functioning version with the earlier naming scheme reintroduced. First, we see a warning like this:

*WARNING* 'balance' function not exported (cannot convert argument 'd')

Since Shedskin propagates type information around the program, this leads to other problems:

*WARNING* optimiserlib.py: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py: expression has dynamic (sub)type: {int, tuple}
*WARNING* optimiserlib.py: variable (function (function balance, 'list_comp_0'), 'value') has dynamic (sub)type: {int, tuple}
*WARNING* optimiserlib.py: variable (function (function balance, 'list_comp_1'), 'f') has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py: variable (function (function balance, 'list_comp_1'), 'value') has dynamic (sub)type: {int, tuple}

And later in the messages, the more specific errors indicate the problem and its consequences:

*WARNING* optimiserlib.py:100: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py:100: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py:100: expression has dynamic (sub)type: {int, tuple}
*WARNING* optimiserlib.py:102: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py:102: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py:103: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py:103: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py:104: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py:104: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py:105: class 'list' has no method 'items'
*WARNING* optimiserlib.py:105: expression has dynamic (sub)type: {float, int, tuple2}
*WARNING* optimiserlib.py:105: expression has dynamic (sub)type: {float, int, tuple}
*WARNING* optimiserlib.py:105: expression has dynamic (sub)type: {int, tuple}

And so on. Such warnings, which will cause errors if one tries to compile the result, can seem very intimidating, but within all these details the cause should be identifiable.

But another important aspect of the successful translation of the library module should not be forgotten: it is the matter of choosing the right initial functionality to give to Shedskin. Instead of trying to compile everything, it makes sense to concentrate on things like functions which are fairly self-contained, which do not call potentially vast regions of other program functionality, and which are invoked often, especially in loops that take a long time. Migrating a small piece of functionality at a time helps to keep the troubleshooting activity manageable.

By using the SimpleImage class as a convenient staging post for pixel data, simple library functions could be migrated first, and the library could be kept ignorant of the abstractions being maintained in the main program. Later on, as we shall see below, such abstractions and functions that require them could then be migrated themselves.

Broadening the Scope

With some core functionality migrated to a compilable extension module, it becomes possible to give other things the Shedskin treatment. At the start of the second attempt to get Shedskin to compile the code, quite a few functions were still being run by the CPython virtual machine, notably various image operations. With the functions called by these operations already migrated, it becomes possible to move each of these operations over into the extension module. The expectation here is that since these functions can now call each other without the overhead of the CPython API, as well as avoiding running code in the virtual machine, further performance improvements will occur.

And since Shedskin’s own runtime library supports certain Python standard library modules in accelerated form, it becomes interesting to migrate operations that employ such module functionality. For example, the get_combinations function can be moved to take advantage of Shedskin’s itertools implementation. And in addition to just moving functions, classes can also be compiled by Shedskin and potentially accessed more quickly, while remaining accessible to normal Python code that hasn’t been compiled.
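
As an illustration of that kind of migration, consider a hypothetical reconstruction of get_combinations (this is not the program's actual code) built upon the standard itertools module; once compiled, it benefits from the accelerated implementation in Shedskin's own runtime instead of CPython's:

from itertools import combinations

def get_combinations(colours, n):
    # Enumerate the candidate palettes of size n drawn from the
    # permitted colours.
    return list(combinations(colours, n))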

Not all of these optimisations give the boost in performance that might be hoped for, perhaps even bringing performance penalties when first introduced. But it is important to look beyond any current, small optimisation to future optimisations that the accumulation of those small optimisations will permit. Moving the SimpleImage class into the extension module permits the migration of other code, with the consequence that the program when run takes around 6 seconds instead of 9 seconds: the apparent optimisation barrier is overcome, leading to yet more gains!

Some Conclusions

It is possible to make very useful performance gains just by revisiting code and writing it in a way that suits the language implementation. Starting out with an execution time of around 40 seconds, it was possible to make the program run in closer to 15 seconds, and that might have been enough for occasional conversion of images. But by adopting Shedskin and applying it to a subset of the program’s functionality, an execution time of 9 seconds was initially obtained. This might not be quite as impressive as the earlier two-and-a-half fold reduction in time (or, of course, a two-and-a-half fold increase in throughput), but it was obtained with little additional effort, with admittedly some planning for it being incorporated into the earlier optimisation work.

Ultimately, reaching around 6 seconds of execution time means that Shedskin was indeed able to more or less match earlier performance gains, but again with little actual optimisation effort. And one can argue that merely using Shedskin helped to inform those earlier gains, anyway. Regardless of what should take the credit, the program ended up running almost seven times faster than when we started out on this journey.

Tools like Shedskin have a slightly controversial place in the Python world. People like to point out that Shedskin really only compiles a “restricted subset” of Python: one that Shedskin is able to analyse. And as the above exercise demonstrates, modifications to programs are likely to be required to take advantage of it. But the resulting programs are still Python programs, and the CPython virtual machine will still run them, just slower – three times slower in this case – than if they were compiled.

Authors of tools like Shedskin and alternative implementations of the Python language have a tough decision to make. They must either deal with the continuously changing “full-fat” version of the language, with new features being added all the time that they have to support, with the risk that they will never catch up and never be considered a “proper” implementation. Or they must impose limitations on the form of the language and the features they support, knowing that even if their software accepts programs written in Python, it will be marginalised and regarded as a distraction from “proper” Python programming. And yet, Python as delivered by CPython offers little in the way of things that Shedskin and similar tools seek to offer, so it should be no wonder that such tools have come to exist.

Instead, one might wonder why it is that the language must continue to evolve in ways that frustrate static analysis and enhancements to program performance and scalability. Indeed, my own interests in the analysis of Python code have been somewhat rekindled by this exercise, but unlike the valiant efforts of other developers (such as the author of Nuitka and his continuing quest to support the compilation of “full-fat” Python), I have no intention of using Python as my gold standard. In essence, Python is and has been a productive language to use – that is why I have used it here and many times before – but the very nature of the language should be open to question and review.

If, by discarding features, something like Python can be more predictable and better-performing, why would exploring this avenue of inquiry be a bad thing? Shedskin shows us that there are benefits in doing so, and I hope that this article has also shown some practical techniques for using and understanding the tool. And maybe this will encourage me and others to make some more progress with our own Python program analysis efforts as well, if only to offer alternatives that seek to deliver on some of the unrealised promise and potential that Python seemed to offer back when we first discovered it, so many years ago now.

An architectural picture from Oslo with the left version using 4 colours from 8 on every display line, the right version being a scaled version of the original.

Testing Times for Free Software and Open Hardware

Tuesday, January 12th, 2016

The last few months haven’t been too kind on Free Software and open hardware initiatives in a number of ways. Here, in a shorter form than one might usually expect from me, are some problematic developments on topics that I may have covered in the past year.

Software Freedom Undervalued

About a couple of months ago, the Software Freedom Conservancy started a fund-raising campaign after it became apparent that companies could not be relied upon to support the organisation’s activities. Since the start of the campaign, many individuals have stepped up and pledged financial support of their own, which is very generous of them, as is the support of enlightened organisations that have offered to match individual contributions.

Sadly, such generosity seems not to be shared by many of the largest companies making money from Free Software and from Linux in particular, and thus from the non-financial contributions that make projects like Linux viable in the first place, with many of those even coming from those same generous individuals who have supported the Conservancy financially. And let us consider for a moment why one prominent umbrella organisation’s members might not want to enforce the GPL, especially given that some of them have been successfully prosecuted for violating that licence, in relation to various Free Software projects, in the past.

The Proprietary Instincts of the BBC

The BBC Micro Bit was a topic covered in the last year, when I indicated a degree of caution about the mistakes of the past being repeated needlessly. And indeed, for some time, everything was being done behind the curtain of a non-disclosure agreement (NDA), meaning that very little information was being made available about the device and accompanying materials, and thus very little could be done by the average member of the public to prepare for the availability of the device, let alone develop their own materials, software, accessories or anything else for it.

Since then, a degree of secrecy has been eliminated, and efforts have been made to get the embedded variant of Python known as MicroPython working on the board. However, certain parts of that work still appear to be encumbered by NDA, arguably making the effort of developing Python-related materials something of a social networking exercise. Meanwhile, notorious industry monopolist, Microsoft, somehow managed to muscle in on the initiative and take control of the principally-supported method of developing software with the device. I guess people at the BBC and their friends in politics and business don’t always learn from the mistakes of the past, particularly as they spend other people’s money.

The Walled Garden Party’s Hangover for Free Software Development

Just over twelve months ago, I made some observations about the Python core development group’s attraction to GitHub. It seems that the infatuation with the enthusiastic masses and their inevitable unleashing on Python assets, with the expectation of stimulating an exponential upturn in development activity, will now be gratified through a migration of various Python infrastructure components to the proprietary and centralised service that GitHub offers. (I have my doubts as to whether CPython contribution barriers are really the cause of Python’s current malaise, despite the usual clamour for Git and the associated “network effects” amongst a community of self-proclaimed version control wizards whose powers somehow don’t extend to mastering simple workflows with other tools.)

Anatoly Techtonik makes some interesting points, which will presumably go unheard because those involved have all decided not to listen to him any more. One of the more disturbing ones is that the “comparison shopping” mentality, where Free Software developers abandon their colleagues writing various tools and systems in favour of random corporations offering proprietary stuff at no cost, may well result in the Free Software solutions in such areas becoming seen as uncompetitive and unattractive. What those making such foolish decisions fail to realise is that their own projects can easily get the same treatment, if nobody bothers to see beyond the end of their own nose.

The result of all this is less funding and fewer resources for Free Software projects, with potentially fewer contributions, too, as the attraction of supporting “losing” solutions starts to fade. Community-oriented Free Software is arguably grossly underfunded as it is: we don’t really need other Free Software developers abandoning or undermining their colleagues while ridiculing those colleagues’ “ideological purity”. And, of course, volunteer effort will undoubtedly be burned up in the needless migration to the proprietary solution, setting everyone up for another costly transition down the road, which experience indicates is always more work than anyone anticipated (if they even bothered to think ahead at all).

PayPal: Doesn’t Pay, Not Your Pal

It has been a long time since I wrote about the Neo900 project. Things were looking promising: necessary components had been secured, and everyone was just waiting for Nikolaus to finish his work with the Pyra handheld console. And then we learned that PayPal had decided to hold a significant amount of money as a form of “security”, thus cutting off a vital source of funds for actually doing the work. Apparently, PayPal have a habit of doing this kind of thing, on one reported occasion even taking the opportunity to then offer loans to those people they deliberately put in such a difficult position.

If you supported the Neo900 project and pledged funds via PayPal, you need to tell PayPal to actually pay the project. You know: like the verb in their company name. Otherwise, in the worst case, you may not only not get a Neo900 and not see it developed to completion, but you will also have loaned your money to a large corporation for a substantial period and earned no interest on that involuntary loan, perhaps even incurring fees for the privilege. (So, please see the “How to fix it” section of the relevant article.)

Maybe in 2016, people will become a lot clearer about who their real friends are. Let us hope so!

imip-agent: Integrating Calendaring with E-Mail

Tuesday, November 17th, 2015

Longer ago than I had, until now, realised, I wrote an article about my ongoing exploration of groupware and, specifically, calendaring. As I noted in that article, I felt that a broader range of options may be needed for those wishing to expand their use of communications technologies beyond plain e-mail and into the structured exchange of other kinds of information, whilst retaining and building upon that e-mail infrastructure.

And I noted that, more often than not, people wanting to increase their ambitions in this regard are confronted with the prospect of abandoning what they already use successfully, instead being obliged to adopt a complete package of technologies, some of which they may not even need. While proprietary software and service vendors might pursue such strategies of persuasion – getting the big sale or the big contract – it is baffling that Free Software projects might also put potential users on the spot in the same way. After all, Free Software is very much about choice and control.

So, as I spelled out in that previous article, there may be some mileage in trying to offer extensions to existing infrastructure so that people can increase their communications capabilities whilst retaining the technologies they already know. And in some depth (and at some length), I described what a mail-centred calendaring solution might need to provide in order to address most people’s needs. Finally, I promised to make my own efforts available in this area so that anyone remotely interested in the topic might get some benefit from it.

Last month, I started a very brief exchange on a Debian- and groupware-related mailing list about such matters, just to see what people interested in groupware projects might think, also attempting to find out what they use for calendaring themselves. (Unfortunately, there don’t seem to be many non-product-specific, public and open places to discuss matters such as this one. Search mail software lists for calendaring discussions and you may even get to see hostility towards anyone mentioning groupware.) Ultimately, to keep the discussion concrete, I decided to announce informally what I have been working on.

Introducing imip-agent

Calendaring and distributed scheduling can be achieved over e-mail using the iMIP standard. My work relies on this standard to function, providing programs that are integrated in mail transfer agents (MTAs) acting as calendaring agents. Thus, I decided to call the project imip-agent.
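
For the unfamiliar, an iMIP scheduling message is just an iCalendar object carried in a text/calendar MIME part of an ordinary e-mail. The following is an illustration only – the addresses, times and identifiers are invented – using nothing beyond Python's standard library:

from email.mime.text import MIMEText

# A minimal meeting request: an iCalendar object (RFC 5545) whose METHOD
# property indicates a scheduling operation, as carried by iMIP (RFC 6047).
ICAL = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//imip illustration//EN
METHOD:REQUEST
BEGIN:VEVENT
UID:20151117T120000-0001@example.com
DTSTAMP:20151117T120000Z
DTSTART:20151120T100000Z
DTEND:20151120T110000Z
ORGANIZER:mailto:organiser@example.com
ATTENDEE;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:attendee@example.com
SUMMARY:Project meeting
END:VEVENT
END:VCALENDAR
"""

part = MIMEText(ICAL, "calendar")    # yields a text/calendar MIME part
part.set_param("method", "REQUEST")  # advertises the scheduling method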

Initially, and as noted previously, my interest in such matters started with the mail handling functionality of Kolab and the component called Wallace that is responsible for responding to requests sent to certain e-mail addresses. Meanwhile, Kolab provided (and maybe still provides) a rather inelegant way of preparing “free/busy” information describing the availability of calendar system participants: a daemon program would run periodically, scanning mailboxes for events stored in special folders, and generate completely new manifests of each user’s schedule. (This may have changed since I last looked at Kolab in any serious manner.)

It occurred to me that the exchange of messages between participants in a scheduling transaction should be sufficient to maintain a live record of each participant’s availability, and that some experimentation would demonstrate the feasibility or infeasibility of such an approach. I had already looked into how existing architectures prepare and consume free/busy information, and felt that I had enumerated the relevant essentials for a viable calendaring architecture based on e-mail exchanges alone.

And so I set about learning about mail handling programs and expanding my existing knowledge of calendar-related standards. Fortunately, my work trying to get Kolab configured in a nice way didn’t go entirely to waste after all, although I also wanted to support different MTAs and not use convoluted Postfix-specific integration mechanisms, and so had to read up about more convenient and approachable mechanisms that other systems use to integrate with mail pipelines without trying hard to be all “high performance” about it. And I also wanted to make it possible for people to adopt a solution that didn’t force them to roll out LDAP in a scary “cross your fingers and run this script” fashion, even if many organisations already rely on LDAP and are comfortable with it.

The resulting description of this work is now available on the Web, and an attempt has been made to document the many different aspects of development, deployment and integration. Naturally, it is a work in progress and not a finished product: one step on the road to hopefully becoming a dependable solution involves packaging for Free Software distributions, which would result in the effort currently required to configure the software being minimised for the person setting it up. But at the same time, the mechanisms for integration with other systems (such as mail, mailboxes and Web servers) still need to be documented so that such work may have a chance to proceed.

Why Bother?

For various reasons unrelated to the work itself, it has taken a bit longer to get to this point than previously anticipated. But the act of making it available is, for me, a very necessary part of what I regard as a contribution to a kind of conversation about what kinds of software and solutions might work for certain groups of people, touching upon topics like how such solutions might be developed and realised. For instance, the handling of calendar data, although already supported by various Python libraries, hasn’t really led to similar Python-based solutions being developed as far as I can tell. Perhaps my contribution can act as an encouragement there.

There are, of course, various Python-based CalDAV servers, but I regard the projects around them to be somewhat opaque, and I perceive a common tendency amongst them to provide something resembling a product that covers some specific needs but then leaves those people deploying that product with numerous open-ended questions about how they might address related needs. I also wonder whether there should be more library sharing between these projects for more than basic data interpretation, but I know that this is quite difficult to achieve in practice, even if these projects should be largely functionally identical.

With such things forming the background of Free Software groupware, I can understand why some organisations are pitching complete solutions that aim to do many things. But here, in certain regards, I perceive a lack of opportunity for that conversation I mentioned above: there’s either a monologue with the insinuation that some parties know better than others (or worse, that they have the magic formula to total market domination) or there’s a dialogue with one side not really extending the courtesy of taking the other side’s views or contributions seriously.

And it is clear that those wanting to use such solutions should also be part of a conversation about what, in the end, should work best for them. Now, it is possible that organisations might see the benefit in the incremental approach to improving their systems and services that imip-agent offers. But it is also possible that there are also organisations who will contrast imip-agent with a selection of all-in-one solutions, possibly being dangled in front of them on special terms by vendors who just want to “close the deal”, and in the comparison shopping exercise that ensues, they will buy into the sales pitch of one of those vendors.

Without a concerted education exercise, that latter group of potential users are never likely to be a serious participant in our conversations (although I would hope that they might ultimately see sense), but the former group of potential users should be most welcome to participate in our conversations and thus enrich the wealth of choices and options that we should be offering. They would, I hope, realise that it is not about what they can get out of other people for nothing (or next to nothing), but instead what expertise and guidance they can contribute so that they and others can benefit from a sustainable and durable solution that, above all else, serves them and their needs and interests.

What Next?

Some people might point out that calendaring is only a small portion of what groupware is, if the latter term can even be somewhat accurately defined. This is indeed true. I would like to think that Free Software projects in other domains might enter the picture here to offer a compelling, broader groupware alternative. For instance, despite the apparent focus on chat and real-time communications, one doesn’t hear too much about one of the most popular groupware technologies on the Web today: the wiki. When used effectively, and when the dated rhetoric about wikis being equivalent to anarchy has been silenced by demonstrating effective collaborative editing and content management techniques, a wiki can be a potent tool for collaboration and collective information management.

It also turns out that Free Software calendar clients could do with some improvement. Their deficiencies may be a product of an unfortunate but fashionable fascination with proprietary mail, scheduling and social networking services amongst the community of people who use and develop Free Software. Once again, even though imip-agent seeks to provide only basic functionality as a calendar client, I hope that such functionality may inform or, at the very least, inspire developers to improve existing programs and bring them up to the expected levels of functionality.

Alongside this work, I have other things I want (and need) to be looking at, but I will happily entertain enquiries about how it might be developed further or deployed. It is, after all, Free Software, and given sufficient interest, it should be developed and improved in a collaborative fashion. There are some future plans for it that I take rather seriously, but with the privileges or freedoms granted in the licence, there is nothing stopping it from having a life of its own from now on.

So, if you are interested in this kind of solution and want to know more about it, take a look at the imip-agent site. If nothing else, I hope that it reminds you of the importance of independently-developed solutions for communication and the value in retaining control of the software and systems you rely on for that communication.

An actual user reports on his use of my parallel processing library

Monday, November 16th, 2015

A long time ago, when lots of people complained all the time about how it was seemingly “impossible” to write Python programs that use more than one CPU or CPU core, and given that Unix has always (at least in accessible, recorded history) allowed processes to “fork” and thus create new ones that run the same code, I decided to write a library that makes such mechanisms more accessible within Python programs. I wasn’t the only one thinking about this: another rather similar library called processing emerged, and the Python core development community adopted that one and dropped it into the Python standard library, renaming it multiprocessing.

Now, there are a few differences between multiprocessing and my own library, pprocess, largely because my objectives may have been slightly different. First of all, I had been inspired by renewed interest in the communicating sequential processes paradigm of parallel processing, which came to prominence around the Transputer system and Occam programming language, becoming visible once more with languages like Erlang, and so I aimed to support channels between processes as one of my priorities. Secondly, and following from the previous objective, I wasn’t trying to make multithreading or the kind of “shared everything at your own risk” model easier or even possible. Thirdly, I didn’t care about proprietary operating systems whose support for process-forking was at that time deficient, and presumably still is.

(The way that the deficiencies of Microsoft Windows, either inherently or in the way it is commonly deployed, dictates development priorities in a Free Software project is yet another maddening thing about Python core development that has increased my distance from the whole endeavour over the years, but that’s a rant for another time.)

User Stories

Despite pprocess making it into Debian all by itself – or rather, with the efforts of people who must have liked it enough to package it – I’ve only occasionally had correspondence about it, much of it regarding the package falling out of Debian and being supported in a specialised Debian variant instead. For all the projects I have produced, I just assume now that the very few people using them are either able to fix any problems themselves or are happy enough with the behaviour of the code that there just isn’t any reason for them to switch to something else.

Recently, however, I heard from Kai Staats who is using Python for some genetic programming work, and he had found the pprocess and multiprocessing libraries and was trying to decide which one would work best for him. Perhaps as a matter of chance, multiprocessing would produce pickle errors that he found somewhat frustrating to understand, whereas pprocess would not: this may have been a consequence of pprocess not really trying very hard to provide some of the features that multiprocessing does. Certainly, multiprocessing attempts to provide some fairly nice features, but maybe they fail under certain overly-demanding circumstances. Kai also noted some issues with random number generators that have recently come to prominence elsewhere, interestingly enough.

Some correspondence between us then ensued, and we both gained a better understanding of how pprocess could be applied to his problem, along with some insights into how the documentation for pprocess might be improved. Eventually, success was achieved, and this article serves as a kind of brief response to his conclusion of our discussions. He notes that the multiprocess model inhibits the sharing of global variables, almost as a kind of protection against “bad things”, and that the processes must explicitly communicate their results with each other. I must admit, being so close to my own work that the peculiarities of its use were easily assumed and overlooked, that pprocess really does turn a few things about Python programs on its head.

Update: Kai suggested that I add the following, perhaps in place of the remarks about the sharing of global variables…

Kai notes that pprocess allows passing more than one variable through the multi-core portal at once, identical to a standard Python method. This enables direct translation of a single CPU for-loop into a multiple-CPU model, with nearly the same number of lines of code.
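
For a flavour of that translation, here is a minimal sketch following the pmap interface described in the pprocess documentation (the calculation itself is invented for illustration):

import pprocess

def calculate(args):
    i, j = args
    return i * j

inputs = [(i, j) for i in range(100) for j in range(100)]

# pmap is pprocess's parallel analogue of the built-in map function; the
# limit argument caps the number of child processes in use at any time.
results = list(pprocess.pmap(calculate, inputs, limit=4))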

Two Tales of Concurrency

If you choose to use threads for concurrency in Python, you’ll get the “shared everything at your own risk” model, and you can have your threads modifying global variables as they see fit. Since CPython employs a “global interpreter lock”, the modifications will succeed when considered in isolation – you shouldn’t see things getting corrupted at the lowest level (pointers or references, say) like you might if doing such things unsafely in a systems programming language – but without further measures, such modifications may end up causing inconsistencies in the global data as threads change things in an uncoordinated way.
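
To illustrate (with an example of my own, not from the original discussion), even a statement as innocuous as an increment is a read-modify-write sequence, and a thread switch between the read and the write can lose updates:

import threading

counter = 0

def work():
    global counter
    for _ in range(100000):
        counter += 1  # LOAD, ADD, STORE: a switch can occur in between

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # may well be less than the 400000 one might expect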

Meanwhile, pprocess also lets you change global variables at will. However, other processes just don’t see those changes and continue happily on their way with the old values of those globals.

Mutability is a key concept in Python. For anyone learning Python, it is introduced fairly early on in the form of the distinction between lists and tuples, perhaps as a curiosity at first. But then, when dictionaries are introduced to the newcomer, the notion of mutability becomes somewhat more important because dictionary keys must be immutable (or, at least, the computed hash value of keys must remain the same if sanity is to prevail): try and use a list or even a dictionary as a key and Python will complain. But for many Python programmers, it is the convenience of passing objects into functions and seeing them mutated after the function has completed, whether the object is something as transparent as a list or whether it is an instance of some exotic class, that serves as a reminder of the utility of mutability.

But again, pprocess diverges from this behaviour: pass an otherwise mutable object into a created process as an argument to a parallel function and, while the object will indeed get mutated inside the created “child” process, the “parent” process will see no change to that object upon seeing the function apparently complete. The solution is to explicitly return – or send, depending on the mechanisms chosen – changes to the “parent” if those changes are to be recorded outside the “child”.
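
The underlying “fork” semantics can be demonstrated directly with the standard library, independently of pprocess itself (and only on Unix-like systems):

import os

data = [1, 2, 3]

pid = os.fork()  # the child receives a copy-on-write image of the parent
if pid == 0:
    data.append(4)  # mutates only the child's copy of the list
    os._exit(0)
else:
    os.waitpid(pid, 0)
    print(data)  # still [1, 2, 3]: the parent sees no change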

Awkward Analogies

Kai mentioned to me the notion of a portal or gateway through which data must be transferred. For a deeper thought experiment, I would extend this analogy by suggesting that the portal is situated within a mirror, and that the mirror portrays an alternative reality that happens to be the same as our own. Now, as far as the person looking into the mirror is concerned, everything on the “other side” in the mirror image merely reflects the state of reality on their own side, and initially, when the portal through the mirror is created, this is indeed the case.

But as things start to occur independently on the “other side”, with things changing, moving around, and so on, the original observer remains oblivious to those changes and keeps seeing the state on their own side in the mirror image, believing that nothing has actually changed over there. Meanwhile, an observer on the other side of the portal sees their changes in their own mirror image. They believe that their view reflects reality not only for themselves but for the initial observer as well. It is only when data is exchanged via the portal (or, in the case of pprocess, returned from a parallel function or sent via a channel) that the surprise of previously-unseen data arriving at one of our observers occurs.

Expectations and Opportunities

It may seem very wrong to contradict Python’s semantics in this way, and for all I know the multiprocessing library may try to do clever things behind the scenes to preserve the normal semantics (although that would be quite tricky to achieve). But anyone familiar with the “fork” system call in other languages would recognise and probably accept the semantic “discontinuity”. One thing I’ve learned is that people who are motivated to use such software will not necessarily share the motivations that I and other library developers had in writing it.

However, it has also occurred to me that the behavioural differences caused in programs by pprocess could offer some opportunities. For example, programs can implement transactional behaviour by creating new processes which may or may not return data depending on whether the transactions performed by the new processes succeed or fail. Of course, it is possible to implement such transactional behaviour in Python already with some discipline, but the underlying copy-on-write semantics that allow “fork” and pprocess to function make it much easier and arguably more reliable.
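A minimal sketch of the idea, again using os.fork and a pipe directly rather than pprocess itself: the child works on its copy-on-write view of the data and sends a result back only if the “transaction” succeeds, so a failure leaves the parent’s state untouched:

import os, pickle

data = {"balance": 100}

def risky_update(d):
    d["balance"] -= 150
    if d["balance"] < 0:
        raise ValueError("overdrawn")
    return d

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    os.close(r)
    try:
        result = risky_update(data) # operates on the child's copy only
        os.write(w, pickle.dumps(result)) # commit: send the new state back
    except ValueError:
        pass # abort: send nothing, leaving the parent's state as it was
    os._exit(0)
else:
    os.close(w)
    payload = os.read(r, 65536)
    os.wait()
    if payload:
        data = pickle.loads(payload)

print data # {'balance': 100}: the failed transaction left no trace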

With CPython’s scalability constantly being questioned, despite attempts to provide improved concurrency features (in Python 3, at least), and with a certain amount of enthusiasm in some circles for functional programming and the ability to eliminate side effects in parts of Python programs, maybe such broken expectations point the way to evolved forms of Python that work better for certain kinds of applications or systems.

So, it turns out that user feedback doesn’t have to be about bug reports and support questions and nothing else: it can really get you thinking about larger issues, questioning long-held assumptions, and even be a motivation to look at certain topics once again. But for now, I hope that Kai’s programs keep producing correct data as they scale across the numerous cores of the cluster he is using. We can learn new things about my software another time!

Migrating the Mailman Wiki from Confluence to MoinMoin

Tuesday, May 12th, 2015

Having recently seen an article about the closure of a project featuring that project’s usage of proprietary tools from Atlassian – specifically JIRA and Confluence – I thought I would share my own experiences from the migration of another project’s wiki site that had been using Confluence as a collaborative publishing tool.

Quite some time ago now, a call for volunteers was posted to the FSF blog, asking for people familiar with Python to help out with a migration of the Mailman Wiki from Confluence to MoinMoin. Subsequently, Barry Warsaw followed up on the developers’ mailing list for Mailman with a similar message. Unlike the project at the start of this article, GNU Mailman was (and remains) a vibrant Free Software project, but a degree of dissatisfaction with Confluence, combined with the realisation that such a project should be using, benefiting from, and contributing to Free Software tools, meant that such a migration was seen as highly desirable, if not essential.

Up and Away

Initially, things started off rather energetically, and Bradley Dean initiated the process of fact-finding around Confluence and the Mailman project’s usage of it. But within a few months, things apparently became noticeably quieter. My own involvement probably came about through seeing the ConfluenceConverter page on the MoinMoin Wiki, looking at the development efforts, and seeing if I couldn’t nudge the project along by pitching in with notes about representing Confluence markup format features in the somewhat more conventional MoinMoin wiki syntax. Indeed, it appears that my first contribution to this work occurred as early as late May 2011, but I was more or less content to let the project participants get on with their efforts to understand how Confluence represents its data, how Confluence exposes resources on a wiki, and so on.

But after a while, it occurred to me that the volunteers probably had other things to do and that progress had largely stalled. Although there wasn’t very much code available to perform concrete migration-related tasks, Bradley had gained some solid experience with the XML format employed by exported Confluence data. Combined with my own day-job experience of dealing with very large XML files, this suggested an approach that had worked rather well before: extracting the essential information, including the identifiers and references that communicate the actual structure of the information, as opposed to the hierarchical structure of the XML data itself. With the data available in a more concise and flexible form, it can then be processed in a more convenient fashion, and within a few weeks I had something ready to play with.
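In outline, the extraction looks something like this (a sketch only: the element and attribute names are purely illustrative, not the actual Confluence export schema):

from xml.etree.ElementTree import iterparse

def extract(f):
    records = []
    for event, elem in iterparse(f, events=("end",)):
        if elem.tag == "object":
            # Keep only the identifying details, not the surrounding
            # hierarchy.
            records.append((elem.get("class"), elem.findtext("id")))
            # Discard processed elements: essential with very large files.
            elem.clear()
    return records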

With a day job and other commitments, it isn’t usually possible to prioritise volunteer projects like this, and I soon discovered that some other factors were involved: technological progress, and the tendency for proprietary software and services to be upgraded. What had initially involved the conversion of textual content from one markup format to another now seemed to involve the conversion from two rather different markup formats. All the effort documenting the original Confluence format now seemed to be almost peripheral if not superfluous: any current content on the Mailman Wiki would now be in a completely different format. And volunteer energy seemed to have run out.

A Revival

Time passed. And then the Mailman developers noticed that the Confluence upgrade had made the wiki situation even less bearable (as indeed other Confluence users had noticed and complained about), and that the benefits of such a solution were being outweighed by the inconveniences of the platform. And it was at this point that I realised that it was worthwhile continuing the migration effort: it is bad enough that people feel constrained by a proprietary platform over which they have little control, but it is even worse when it appears that they will have to abandon their content and start over with little or no benefit from all the hard work they have invested in creating and maintaining that content in the first place.

And with that, I started the long process of trying to support not only both markup formats, but also all the features likely to have been used by the Mailman project and those using its wiki. Some might claim that Confluence is “powerful” by supporting a multitude of seemingly exotic features (page relationships, comments, “spaces”, blogs, as well as various kinds of extensions), but many of these features are rarely used or never actually used at all. Meanwhile, as many migration projects can attest, if one inadvertently omits some minor feature that someone regards as essential, one risks never hearing the end of it, especially if the affected users have been soaking up the propaganda from their favourite proprietary vendor (which was thankfully never a factor in this particular situation).

Despite the “long tail” of feature support, we were able to end 2012 with some kind of overview of the scope of the remaining work. And once again I was able to persuade the concerned parties that we should focus on MoinMoin 1.x rather than 2.x, which has proved to be the correct decision given the still-ongoing status of the latter even now in 2015. Of course, I didn’t at that point anticipate how much longer the project would take…

Iterating

Over the next few months, I found time to do more work and to keep the Mailman development community informed, again and again. Keeping people posted is a seemingly minor aspect of such efforts, but it is essential to reassure them that things really are happening: the Mailman community had, in fact, forgotten about the separate mailing list for this project long before activity on it had subsided. One benefit of this was to get feedback on how things were looking as each iteration of the converted content was made available: with something concrete to look at, people tend to remember things that matter to them which they wouldn’t otherwise think of in any abstract discussion about the content.

In such processes, other things tend to emerge that initially aren’t priorities but which have to be dealt with eventually. One of the stated objectives was to have a full history, meaning that all the edits made to the original content would need to be preserved, and for an authentic record, these edits would need to preserve both timestamp and author information. This introduced complications around the import of converted content – it being no longer sufficient to “replay” edits and have them assume the timestamp of the moment they were added to the new wiki – as well as the migration and management of user profiles. Particularly this latter area posed a problem: the exported data from Confluence only contained page (and related) content, not user profile information.

Now, one might not have expected user details to be exportable anyway, given the potential security issues: anyone with sufficient privileges to request a data dump directly from Confluence could otherwise obtain potentially sensitive information about other users. Nevertheless, this presented another challenge where the migration of an entire site is concerned. On this matter, a very pragmatic approach was taken: any user profile pages (of which there were thankfully very few) were retrieved directly over the Web from the existing site, with the existence of user profiles deduced from the author metadata present in the actual exported wiki content. Since we would be asking existing users to re-enable their accounts on the new wiki once it became active, and since we would be avoiding the migration of spammer accounts, this approach seemed a reasonable compromise between convenience and completeness.

Getting There

By November 2013, the end was in sight, with coverage of various “actions” supported by Confluence also supported in the migrated wiki. Such actions are a good example of how things that are on the edges of a migration can demand significant amounts of time. For instance, Confluence supports a PDF export action, and although one might suggest that people just print a page to file from their browser, choosing PDF as the output format, there are reasonable arguments to be made that a direct export might also be desirable. Thus, after a brief survey of existing options for MoinMoin, I decided it would be useful to provide one myself. The conversion of Confluence content had also necessitated the use of more expressive table syntax. Had I not been sufficiently interested in implementing improved table facilities in MoinMoin prior to this work, I would have needed to invest quite a bit of effort in this seemingly peripheral activity.

Again, time passed. Much of the progress occurred off-list at this point. In fact, a degree of confusion, miscommunication and other factors conspired to delay the availability of the infrastructure on which the new wiki would be deployed. Agreement about hosting within the python.org infrastructure had already been reached in October 2013, but the matter seemed to stall despite Barry Warsaw trying to push it along in February and April 2014. Eventually, after I complained on the PSF members’ mailing list at the end of May, some motion occurred on the matter, and in July the task of provisioning the necessary resources began.

After returning from a long vacation in August, the task of performing the final migration and actually deploying the content could finally begin. Here, I was able to rely on expert help from Mark Sapiro who not only checked the results of the migration thoroughly, but also configured various aspects of the mail system functionality (one of the benefits of participating in a mail-oriented project, I guess), and even enhanced my code to provide features that I had overlooked. By September, we were already playing with the migrated content and discussing how the site would be administered and protected from spam and vandalism. By October, Barry was already confident enough to pre-announce the migrated site!

At Long Last

Alas, things stalled again for a while, perhaps due to other commitments of some of the volunteers needed to make the final transition occur, but in January the new Mailman Wiki was finally announced. But things didn’t stop there. One long-standing volunteer, Jim Tittsler, decided that the visual theme of the new wiki would be improved if it were made to match the other Mailman Web resources, and so he went and figured out how to make a MoinMoin theme to do the job! The new wiki just wouldn’t look as good, despite all the migrated content and the familiarity of MoinMoin, if it weren’t for the special theme that Jim put together.

The all-new Mailman Wiki with Jim Tittsler's theme

There have been a few things to deal with after deploying the new wiki. Spam and vandalism have not been a problem because we have implemented a very strict editing policy where people have to request editing access. However, this does not prevent people from registering accounts, even if they never get to use them to do anything. To deal with this, we enabled textcha support for new account registrations, and we also enabled e-mail verification of new accounts. As a result, the considerable volume of new user profiles that were being created (potentially hundreds every hour) has been more or less eliminated.

It has to be said that throughout the process, once it got started in earnest, the Mailman development community has been fantastic, with constructive feedback and encouragement throughout. I have had disappointing things to say about the experience of being a volunteer with regard to certain projects and initiatives, but the Mailman project is not that kind of project. Within the limits of their powers, the Mailman custodians have done their best to enable this work and to see it through to the end.

Some Lessons

I am sure that offers of “for free” usage of certain proprietary tools and services are made in a genuinely generous way by companies like Atlassian who presumably feel that they are helping to make Free Software developers more productive. And I can only say that those interactions I experienced with Contegix, who were responsible for hosting the Confluence instance through which the old Mailman Wiki was deployed, were both constructive and polite. Nevertheless, proprietary solutions are ultimately disempowering: they take away the control over the working environment that users and developers need to have; they direct improvement efforts towards themselves and away from Free Software solutions; they also serve as a means of dissuading people from adopting competing Free Software products by giving an indication that only they can meet the rigorous demands of the activity concerned.

I saw a position in the Norwegian public sector not so long ago for someone who would manage and enhance a Confluence installation. While it is not for me to dictate the tools people choose to do their work, it seems to me that such effort would be better spent enhancing Free Software products and infrastructure instead of remedying the deficiencies of a tool over which the customer ultimately has no control, to which the customer is bound, and where the expertise being cultivated is relevant only to a single product for as long as that product is kept viable by its vendor. Such strategic mistakes occur all too frequently in the Norwegian public sector, with its infatuation with proprietary products and services, but those of us not constrained by such habits can make better choices when choosing tools for our own endeavours.

I encourage everyone to support Free Software tools when choosing solutions for their projects. Such tools may not at first offer precisely the features you are looking for, and you might be tempted to accept the offer of a “for free” product or to use a no-cost “cloud” service; such things can appear to offer an easier path when you might otherwise be confronted with a choice of hosting solutions and deployment issues. But there are whole communities out there who can offer advice and will support their Free Software projects, and there are Free Software organisations who will help you deploy your choice of tools, perhaps even having them ready to use as part of their existing infrastructure.

In the end, by embracing Free Software, you get the control you need over your content in order to manage it sustainably. Surely that is better than having some random company in charge, with the ever-present risk of them one day deciding to discontinue their service and/or, with barely enough notice, discard your data.

Leaving the PSF

Sunday, May 10th, 2015

It didn’t all start with a poorly-considered April Fools’ joke about hosting a Python conference in Cuba, but the resulting private mailing list discussion managed to persuade me not to continue as a voting member of the Python Software Foundation (PSF). In recent years, one of the chores of membership has been returning from vacation to discover tens if not hundreds of messages whipping up a frenzy about some topic supposedly pertinent to the activities of the PSF, and then reading through them all as if to inform my own position on the matter. This time, my vacation plans were slightly unusual, so I was at least spared the surprise of getting the bulk of people’s opinions in one big serving.

I was invited to participate in the PSF at a time when it was an invitation-only affair. My own modest contributions to the EuroPython conference were the motivating factor, and it would seem that I hadn’t alienated enough people for my nomination to be opposed. (This cannot be said for some other people who did eventually become members as well after their opponents presumably realised the unkindness of their ways.) Being asked to participate was an honour, although I remarked at the time that I wasn’t sure what contribution I might make to such an organisation. Becoming a Fellow of the FSFE was an active choice I made myself because I align myself closely with the agenda the FSFE chooses to pursue, but the PSF is somewhat more vague or more ambivalent about its own agenda: promoting Python is all very well, but should the organisation promote proprietary software that increases Python adoption, or would this undermine the foundations on which Python was built and is sustained? Being invited to participate in an organisation with often unclear objectives combines a degree of passivity with an awareness that some of the decisions being taken may well contradict some of the principles I have actively chosen to support in other organisations. Such as the FSFE, of course.

Don’t get me wrong: there are a lot of vital activities performed within the PSF. For instance, the organisation has a genuine need to enforce its trademarks and to stop other people from claiming the Python name as their own, and the membership can indeed assist in such matters, as can the wider community. But looking at my archives of the private membership mailing list, a lot of noise has been produced on other, more mundane matters. For a long time, it seemed as if the only business of the PSF membership – as opposed to the board who actually make the big decisions – was to nominate and vote on new members, thus giving the organisation the appearance of only really existing for its own sake. Fortunately, organisational reform has made the matter of recruiting members largely obsolete, and some initiatives have motivated other, more meaningful activities. However, I cannot be the only person who has noted that such activities could largely be pursued outside the PSF and within the broader community instead, as indeed these activities typically are.

PyCon

Some of the more divisive topics that have caused the most noise have had some connection with PyCon: the North American Python conference that mostly replaced the previous International Python Conference series (from back when people thought that conferences had to be professionally organised and run, in contrast to PyCon and most, if not all, other significant Python conferences today). Indeed, this lack of separation between the PSF and PyCon has been a significant concern of mine. I will probably never attend a PyCon, partly because it resides in North America as a physical event, partly because its size makes it completely uninteresting to me as an attendee, and largely because I increasingly find the programme uninteresting for a variety of other reasons. When the PSF members’ time is spent discussing or at least exposed to the discussion of PyCon business, it can just add to the burden of membership for those who wish to focus on the supposed core objectives of the organisation.

What may well be worse, however, is that PyCon exposes the PSF to substantial liability issues. As the conference headed along a trajectory of seemingly desirable and ambitious growth, it collided with the economic downturn caused by the global financial crisis of 2008, incurring a not insignificant loss. Fortunately, this outcome has not since been repeated, and the organisation had sufficient liquidity to avoid any serious consequences. Some have argued that it was precisely because profits from previous years’ conferences had been accumulated that the organisation was able to pay its bills, but such good fortune cannot explain away the fundamental liability and the risks it brings to the viability of the organisation, especially if fortune happens not to be on its side in future.

Volunteering

In recent times, I have been more sharply focused on the way volunteers are treated by organisations who rely on their services to fulfil their mission. Sadly, the PSF has exhibited a poor record in various respects on this matter. Once upon a time, the Python language Web site was redesigned under contract, but the burden of maintenance fell on community volunteers. Over time, discontentment forced the decision to change the technology, and a specification was drawn up under a degree of consultation. Unfortunately, the priorities of certain stakeholders – that is, community volunteers doing a fair amount of hard work in their own time – were either ignored or belittled. This left them confronted with a choice: adapt to a suboptimal workflow not of their own choosing, spending time and energy developing that workflow in the process, or just quit and leave it to other people to tidy up the mess that those other people (and the hired contractors) had made.

Understandably, the volunteers quit, leaving a gap in the Web site functionality that took a year to reinstate. But what was most disappointing was the way those volunteers were branded as uncooperative and irresponsible in an act of revisionism by those who clearly failed to appreciate the magnitude of the efforts of those volunteers in the first place. Indeed, the views of the affected volunteers were even belittled when efforts were championed to finally restore the functionality, with it being stated by one motivated individual that the history of the problem was not of his concern. When people cannot themselves choose the basis of their own involvement in a volunteer-run organisation without being vilified for letting people down or for “holding the organisation to ransom”, the latter being a remarkable accusation given the professionalism that was actually shown in supporting a transition to other volunteers, one must question whether such an organisation deserves to attract any volunteers at all.

Politics

As discussion heated up over the PyCon Cuba affair, the usual clash of political views emerged, with each side accusing the other of ignorance and not understanding the political or cultural situation, apparently blinkered by their own cultural and political biases. I remember brazen (and ill-informed) political advocacy being a component in one of the Python community blogging “planets” before I found the other one, back when there was a confusing level of duplication between the two and when nobody knew which one was the “real” one (which now appears to consist of a lot of repetition and veiled commercial advertising), and I find it infuriating when people decide to use such matters as an excuse to lecture others and to promote their own political preferences.

I have become aware of a degree of hostility within the PSF towards the Free Software Foundation, with the latter being regarded as a “political” organisation, perhaps due to hard feelings experienced when the FSF had to educate the custodians of Python about software licensing (which really came about in the first place because of the way Python development had been moved around, causing various legal representatives to play around with the licensing, arguably to make their own mark and to stop others getting all the credit). And I detect a reluctance in some quarters to defend software freedom within the PSF, with a reluctance to align the PSF with other entities that support software and digital freedoms. At least the FSF can be said to have an honest political agenda, where those who support it more or less know where they stand.

In contrast, the PSF seems to cultivate all kinds of internal squabbling and agenda-setting: true politics in the worst sense of the word. On one occasion I was more or less told that my opinion was not welcome or, indeed, could ever be of interest on a topic related to diversity. Thankfully, diversity politics moved to a dedicated mailing list and I was thereafter mostly able to avoid being told by another Anglo-Saxon male that my own perspectives didn’t matter on that or on any other topic. How it is that someone I don’t actually know can presume to know in any detail what perspectives or experiences I might have to offer on any matter remains something of a mystery to me.

Looking through my archives, there appears to be a lot of noise, squabbling, quipping, and recrimination over the last five years or so. In the midst of the recent finger-wagging, someone dared to mention that maybe Cubans, wherever they are, might actually deserve to have a conference. Indeed, other places were mentioned where the people who live there, through no fault of their own, would also be the object of political grandstanding instead of being treated like normal people wanting to participate in a wider community.

I mostly regard April Fools’ jokes as part of a tedious tradition, part of the circus that distracts people away from genuine matters of concern, perhaps even an avenue of passive aggression in certain circles, a way to bully people and then insist – as cowards do – that it was “just a joke”. The lack of a separation of the PSF’s interests combined with the allure of the circus conspired to make fools out of the people involved in creating the joke and of many in the accompanying debate. I find myself uninterested in spending my own time indulging such distractions, especially when those distractions are products of flaws in the organisation that nobody wishes to fix, and when there are more immediate and necessary activities to pursue in the wider arena of Free Software that, as a movement in its own right, some in the PSF practically refuse to acknowledge.

Effects

Leaving the PSF won’t really change any of my commitments, but it will at least reduce the level of background noise I have to deal with. Such an underwhelming and unfortunate assessment is something the organisation will have to rectify in time if it wishes to remain relevant and to deserve the continued involvement of its members. I do have confidence in some of the reform and improvement processes being conducted by volunteers with too little time of their own to pursue them, and I hope that they make the organisation a substantially better and more effective one, as they continue to play to an audience of people with much to say but, more often than not, little to add.

I would have been tempted to remain in the PSF and to pursue various initiatives if the organisation were a multiplier of effect for any given input of effort. Instead, it currently acts as a divider of effect for all the effort one would apparently need to put in to achieve anything. That isn’t how any organisation, let alone one relying on volunteer time and commitment, should be functioning.

A Footnote

On political matters and accusations of ignorance being traded, my own patience is wearing thin indeed, and this probably nudged me into finally making this decision. It probably doesn’t help that I recently made a trip to Britain, where election season has been in full swing, with unashamed displays of wilful idiocy openly paraded on a range of topics and indulged by the curated ignorance of the masses. The continued destruction of British society, nature and the economy looks inevitable as the perpetrators insist that they know best now, and they will undoubtedly protest their innocence when challenged in future on the legacy of their ruinous rule, adopting the “it wasn’t me” manner of a petulant schoolchild so befitting of the nepotism that got most of them where they are today.