Paul Boddie's Free Software-related blog


An actual user reports on his use of my parallel processing library

Monday, November 16th, 2015

A long time ago, lots of people complained all the time about how it was seemingly “impossible” to write Python programs that use more than one CPU or CPU core. Since Unix has always (at least in accessible, recorded history) allowed processes to “fork” and thus create new ones that run the same code, I decided to write a library that makes such mechanisms more accessible within Python programs. I wasn’t the only one thinking about this: another rather similar library called processing emerged, and the Python core development community adopted that one and dropped it into the Python standard library, renaming it multiprocessing.

Now, there are a few differences between multiprocessing and my own library, pprocess, largely because my objectives may have been slightly different. First of all, I had been inspired by renewed interest in the communicating sequential processes paradigm of parallel processing, which came to prominence around the Transputer system and Occam programming language, becoming visible once more with languages like Erlang, and so I aimed to support channels between processes as one of my priorities. Secondly, and following from the previous objective, I wasn’t trying to make multithreading or the kind of “shared everything at your own risk” model easier or even possible. Thirdly, I didn’t care about proprietary operating systems whose support for process-forking was at that time deficient, and presumably still is.

(The way that the deficiencies of Microsoft Windows, either inherently or in the way it is commonly deployed, dictates development priorities in a Free Software project is yet another maddening thing about Python core development that has increased my distance from the whole endeavour over the years, but that’s a rant for another time.)
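
Returning to the first of those objectives, here is a minimal sketch of the channel-oriented style, recalled from the pprocess tutorial: the callable handed to pprocess.start receives a channel as its first argument, and anything sent on that channel can be received back in the parent process. (Treat the exact signatures as assumptions to check against the current documentation.)

    import pprocess

    # The callable given to pprocess.start receives a channel as its
    # first argument; anything sent on it arrives in the parent process.
    def compute(channel, n):
        channel.send(n * n)

    # Fork a child process to run compute(channel, 12)...
    channel = pprocess.start(compute, 12)

    # ...and receive the result over the channel in the parent.
    print(channel.receive())  # 144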

User Stories

Despite pprocess making it into Debian all by itself – or rather, with the efforts of people who must have liked it enough to package it – I’ve only occasionally had correspondence about it, much of it regarding the package falling out of Debian and being supported in a specialised Debian variant instead. For all the projects I have produced, I just assume now that the very few people using them are either able to fix any problems themselves or are happy enough with the behaviour of the code that there just isn’t any reason for them to switch to something else.

Recently, however, I heard from Kai Staats who is using Python for some genetic programming work, and he had found the pprocess and multiprocessing libraries and was trying to decide which one would work best for him. Perhaps as a matter of chance, multiprocessing would produce pickle errors that he found somewhat frustrating to understand, whereas pprocess would not: this may have been a consequence of pprocess not really trying very hard to provide some of the features that multiprocessing does. Certainly, multiprocessing attempts to provide some fairly nice features, but maybe they fail under certain overly-demanding circumstances. Kai also noted some issues with random number generators that have recently come to prominence elsewhere, interestingly enough.
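
Whatever the exact cause in Kai’s programs, pickle errors of this kind are not hard to provoke: multiprocessing serialises tasks and results on their way between processes, and the standard pickle module refuses certain objects, lambdas being the classic example.

    from multiprocessing import Pool

    def double(x):
        return x * 2

    if __name__ == "__main__":
        with Pool(4) as pool:
            # Module-level functions can be pickled and shipped to workers...
            print(pool.map(double, [1, 2, 3]))
            # ...but a lambda cannot: this raises a PicklingError because
            # tasks are serialised on their way to the worker processes.
            print(pool.map(lambda x: x * 2, [1, 2, 3]))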

Some correspondence between us then ensued, and we both gained a better understanding of how pprocess could be applied to his problem, along with some insights into how the documentation for pprocess might be improved. Eventually, success was achieved, and this article serves as a kind of brief response to his conclusion of our discussions. He notes that the multi-process model inhibits the sharing of global variables, almost as a kind of protection against “bad things”, and that the processes must explicitly communicate their results to each other. I must admit, being so close to my own work that the peculiarities of its use were easily assumed and overlooked, that pprocess really does turn a few things about Python programs on their heads.

Update: Kai suggested that I add the following, perhaps in place of the remarks about the sharing of global variables…

Kai notes that pprocess allows passing more than one variable through the multi-core portal at once, identical to a standard Python method. This enables direct translation of a single CPU for-loop into a multiple-CPU model, with nearly the same number of lines of code.
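
Something like the following sketch shows the translation Kai describes, using the Map and MakeParallel conveniences as I remember them from the pprocess tutorial (the names and exact behaviour should be checked against the documentation):

    import pprocess

    def evaluate(individual, generation):
        # A stand-in for some CPU-bound fitness calculation.
        return sum(individual) * generation

    population = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

    # The single-CPU loop...
    results = [evaluate(individual, 42) for individual in population]

    # ...and its multiple-CPU counterpart: the loop body becomes a managed
    # parallel callable, invoked with the same arguments as before.
    queue = pprocess.Map(limit=4)
    parallel_evaluate = queue.manage(pprocess.MakeParallel(evaluate))
    for individual in population:
        parallel_evaluate(individual, 42)
    results = list(queue)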

Two Tales of Concurrency

If you choose to use threads for concurrency in Python, you’ll get the “shared everything at your own risk” model, and you can have your threads modifying global variables as they see fit. Since CPython employs a “global interpreter lock”, the modifications will succeed when considered in isolation – you shouldn’t see things getting corrupted at the lowest level (pointers or references, say) like you might if doing such things unsafely in a systems programming language – but without further measures, such modifications may end up causing inconsistencies in the global data as threads change things in an uncoordinated way.
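
A small example of the kind of inconsistency involved: each increment below is a separate read-modify-write sequence, and threads may interleave between those steps, silently losing updates even though no individual operation is corrupted.

    import threading

    counter = 0

    def work():
        global counter
        for _ in range(100000):
            # Not atomic: load counter, add 1, store counter. Another
            # thread may store a stale value in between, losing increments.
            counter += 1

    threads = [threading.Thread(target=work) for _ in range(4)]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()

    # Without a lock around the increment, this may print less than 400000.
    print(counter)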

Meanwhile, pprocess also lets you change global variables at will. However, other processes just don’t see those changes and continue happily on their way with the old values of those globals.
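
The same effect can be demonstrated with the underlying “fork” mechanism directly, which is what pprocess builds upon:

    import os

    message = "original"

    pid = os.fork()
    if pid == 0:
        # Child process: this rebinding affects only the child's copy
        # of the interpreter's state.
        message = "changed in the child"
        os._exit(0)

    os.waitpid(pid, 0)
    print(message)  # still "original" in the parent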

Mutability is a key concept in Python. For anyone learning Python, it is introduced fairly early on in the form of the distinction between lists and tuples, perhaps as a curiosity at first. But then, when dictionaries are introduced to the newcomer, the notion of mutability becomes somewhat more important because dictionary keys must be immutable (or, at least, the computed hash value of keys must remain the same if sanity is to prevail): try and use a list or even a dictionary as a key and Python will complain. But for many Python programmers, it is the convenience of passing objects into functions and seeing them mutated after the function has completed, whether the object is something as transparent as a list or whether it is an instance of some exotic class, that serves as a reminder of the utility of mutability.
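
Both aspects can be seen in a few lines:

    # Immutable keys are acceptable; mutable ones are rejected.
    d = {(1, 2): "tuple keys work"}
    try:
        d[[1, 2]] = "list keys do not"
    except TypeError as error:
        print(error)  # unhashable type: 'list'

    # Mutation through a function call is visible to the caller afterwards.
    def record(results, value):
        results.append(value)

    collected = []
    record(collected, 42)
    print(collected)  # [42]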

But again, pprocess diverges from this behaviour: pass an otherwise mutable object into a created process as an argument to a parallel function and, while the object will indeed get mutated inside the created “child” process, the “parent” process will see no change to that object upon seeing the function apparently complete. The solution is to explicitly return – or send, depending on the mechanisms chosen – changes to the “parent” if those changes are to be recorded outside the “child”.
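
A sketch of this, again assuming the channel-passing convention described above for pprocess.start:

    import pprocess

    def fill(channel, values):
        values.append(42)     # mutates only the child's copy of the list
        channel.send(values)  # so the change must be sent back explicitly

    values = [1, 2, 3]
    channel = pprocess.start(fill, values)

    print(channel.receive())  # [1, 2, 3, 42] - the child's view, sent back
    print(values)             # [1, 2, 3] - the parent's copy is untouched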

Awkward Analogies

Kai mentioned to me the notion of a portal or gateway through which data must be transferred. For a deeper thought experiment, I would extend this analogy by suggesting that the portal is situated within a mirror, and that the mirror portrays an alternative reality that happens to be the same as our own. Now, as far as the person looking into the mirror is concerned, everything on the “other side” in the mirror image merely reflects the state of reality on their own side, and initially, when the portal through the mirror is created, this is indeed the case.

But as things start to occur independently on the “other side”, with things changing, moving around, and so on, the original observer remains oblivious to those changes and keeps seeing the state on their own side in the mirror image, believing that nothing has actually changed over there. Meanwhile, an observer on the other side of the portal sees their changes in their own mirror image. They believe that their view reflects reality not only for themselves but for the initial observer as well. It is only when data is exchanged via the portal (or, in the case of pprocess, returned from a parallel function or sent via a channel) that the surprise of previously-unseen data arriving at one of our observers occurs.

Expectations and Opportunities

It may seem very wrong to contradict Python’s semantics in this way, and for all I know the multiprocessing library may try and do clever things to support the normal semantics behind the scenes (although it would be quite tricky to achieve), but anyone familiar with the “fork” system call in other languages would recognise and probably accept the semantic “discontinuity”. One thing I’ve learned is that it isn’t always possible to assume that people who are motivated to use such software will happen to share the same motivations as I and other library developers may have in writing that software.

However, it has also occurred to me that the behavioural differences caused in programs by pprocess could offer some opportunities. For example, programs can implement transactional behaviour by creating new processes which may or may not return data depending on whether the transactions performed by the new processes succeed or fail. Of course, it is possible to implement such transactional behaviour in Python already with some discipline, but the underlying copy-on-write semantics that allow “fork” and pprocess to function make it much easier and arguably more reliable.
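
As a rough sketch of the idea, here is a hypothetical helper built directly on “fork” and a pipe rather than on pprocess itself: the child mutates its own copy of the state freely, and the parent adopts the result only if the child reports success.

    import os
    import pickle

    def attempt(transaction, state):
        """Run transaction(state) in a child process, returning the new
        state on success and the original state on failure."""
        reader, writer = os.pipe()
        pid = os.fork()
        if pid == 0:
            os.close(reader)
            try:
                result = transaction(state)  # changes stay in the child...
                os.write(writer, pickle.dumps(result))
                os._exit(0)                  # ...unless sent back on success
            except Exception:
                os._exit(1)                  # failure discards everything
        os.close(writer)
        with os.fdopen(reader, "rb") as stream:
            data = stream.read()
        _, status = os.waitpid(pid, 0)
        return pickle.loads(data) if status == 0 else state

    state = {"balance": 100}
    state = attempt(lambda s: {**s, "balance": s["balance"] - 30}, state)
    print(state)  # {'balance': 70} if the transaction succeeded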

With CPython’s scalability constantly being questioned, despite attempts to provide improved concurrency features (in Python 3, at least), and with a certain amount of enthusiasm in some circles for functional programming and the ability to eliminate side effects in parts of Python programs, maybe such broken expectations point the way to evolved forms of Python that possibly work better for certain kinds of applications or systems.

So, it turns out that user feedback doesn’t have to be about bug reports and support questions and nothing else: it can really get you thinking about larger issues, questioning long-held assumptions, and even be a motivation to look at certain topics once again. But for now, I hope that Kai’s programs keep producing correct data as they scale across the numerous cores of the cluster he is using. We can learn new things about my software another time!