Paul Boddie's Free Software-related blog


Migrating the Mailman Wiki from Confluence to MoinMoin

Tuesday, May 12th, 2015

Having recently seen an article about the closure of a project featuring that project’s usage of proprietary tools from Atlassian – specifically JIRA and Confluence – I thought I would share my own experiences from the migration of another project’s wiki site that had been using Confluence as a collaborative publishing tool.

Quite some time ago now, a call for volunteers was posted to the FSF blog, asking for people familiar with Python to help out with a migration of the Mailman Wiki from Confluence to MoinMoin. Subsequently, Barry Warsaw followed up on the developers’ mailing list for Mailman with a similar message. Unlike the project at the start of this article, GNU Mailman was (and remains) a vibrant Free Software project, but a degree of dissatisfaction with Confluence, combined with the realisation that such a project should be using, benefiting from, and contributing to Free Software tools, meant that such a migration was seen as highly desirable, if not essential.

Up and Away

Initially, things started off rather energetically, and Bradley Dean initiated the process of fact-finding around Confluence and the Mailman project’s usage of it. But within a few months, things apparently became noticeably quieter. My own involvement probably came about through seeing the ConfluenceConverter page on the MoinMoin Wiki, looking at the development efforts, and seeing if I couldn’t nudge the project along by pitching in with notes about representing Confluence markup format features in the somewhat more conventional MoinMoin wiki syntax. Indeed, it appears that my first contribution to this work occurred as early as late May 2011, but I was more or less content to let the project participants get on with their efforts to understand how Confluence represents its data, how Confluence exposes resources on a wiki, and so on.

But after a while, it occurred to me that the volunteers probably had other things to do and that progress had largely stalled. Although there wasn’t very much code available to perform concrete migration-related tasks, Bradley had gained some solid experience with the XML format employed by exported Confluence data. That experience, combined with my own experience of dealing with very large XML files in my day job, suggested an approach that had worked rather well with such large files: extracting the essential information, including the identifiers and references that communicate the actual structure of the information, as opposed to the hierarchical structure of the XML data itself. With the data available in a more concise and flexible form, it can then be processed in a more convenient fashion, and within a few weeks I had something ready to play with.
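
To illustrate the idea (and not to reproduce the actual conversion code), here is a minimal sketch in Python of that kind of streaming extraction; the element and attribute names (“object”, “id”, “property”) are assumptions that merely resemble a Confluence-style export, and the real code naturally has to cope with nested references and many more details:

from xml.etree.ElementTree import iterparse

def extract_objects(filename):
    """Stream a large XML export, yielding flat records of identifiers,
    types and property values instead of the nested XML structure."""
    record = None
    for event, elem in iterparse(filename, events=("start", "end")):
        if event == "start" and elem.tag == "object":
            record = {"class": elem.get("class"), "id": None, "properties": {}}
        elif event == "end" and record is not None:
            if elem.tag == "id" and record["id"] is None:
                record["id"] = (elem.text or "").strip()
            elif elem.tag == "property":
                record["properties"][elem.get("name")] = (elem.text or "").strip()
            elif elem.tag == "object":
                yield record
                record = None
                elem.clear()  # discard processed elements to keep memory usage low

With the records in this flat form, cross-references between pages, attachments and versions can be resolved with ordinary lookups rather than repeated traversals of the XML tree.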

With a day job and other commitments, it isn’t usually possible to prioritise volunteer projects like this, and I soon discovered that some other factors were involved: technological progress, and the tendency for proprietary software and services to be upgraded. What had initially involved the conversion of textual content from one markup format to another now seemed to involve the conversion from two rather different markup formats. All the effort documenting the original Confluence format now seemed to be almost peripheral if not superfluous: any current content on the Mailman Wiki would now be in a completely different format. And volunteer energy seemed to have run out.

A Revival

Time passed. And then the Mailman developers noticed that the Confluence upgrade had made the wiki situation even less bearable (as indeed other Confluence users had noticed and complained about), and that the benefits of such a solution were being outweighed by the inconveniences of the platform. And it was at this point that I realised that it was worthwhile continuing the migration effort: it is bad enough that people feel constrained by a proprietary platform over which they have little control, but it is even worse when it appears that they will have to abandon their content and start over with little or no benefit from all the hard work they have invested in creating and maintaining that content in the first place.

And with that, I started the long process of trying to support not only both markup formats, but also all the features likely to have been used by the Mailman project and those using its wiki. Some might claim that Confluence is “powerful” by supporting a multitude of seemingly exotic features (page relationships, comments, “spaces”, blogs, as well as various kinds of extensions), but many of these features are rarely used or never actually used at all. Meanwhile, as many migration projects can attest, if one inadvertently omits some minor feature that someone regards as essential, one risks never hearing the end of it, especially if the affected users have been soaking up the propaganda from their favourite proprietary vendor (which was thankfully never a factor in this particular situation).

Despite the “long tail” of feature support, we were able to end 2012 with some kind of overview of the scope of the remaining work. And once again I was able to persuade the concerned parties that we should focus on MoinMoin 1.x not 2.x, which has proved to be the correct decision given the still-ongoing status of the latter even now in 2015. Of course, I didn’t at that point anticipate how much longer the project would take…

Iterating

Over the next few months, I found time to do more work and to keep the Mailman development community informed again and again, which is a seemingly minor aspect of such efforts but is essential to reassure people that things really are happening: the Mailman community had, in fact, forgotten about the separate mailing list for this project long before activity on it had subsided. One benefit of this was to get feedback on how things were looking as each iteration of the converted content was made available, and with something concrete to look at, people tend to remember things that matter to them that they wouldn’t otherwise think of in any abstract discussion about the content.

In such processes, other things tend to emerge that initially aren’t priorities but which have to be dealt with eventually. One of the stated objectives was to have a full history, meaning that all the edits made to the original content would need to be preserved, and for an authentic record, these edits would need to preserve both timestamp and author information. This introduced complications around the import of converted content – it being no longer sufficient to “replay” edits and have them assume the timestamp of the moment they were added to the new wiki – as well as the migration and management of user profiles. This latter area in particular posed a problem: the exported data from Confluence only contained page (and related) content, not user profile information.

Now, one might not have expected user details to be exportable anyway due to potential security issues with people having sufficient privileges to request a data dump directly from Confluence and then to be able to obtain potentially sensitive information about other users, but this presented another challenge where the migration of an entire site is concerned. On this matter, a very pragmatic approach was taken: any user profile pages (of which there were thankfully very few) were retrieved directly over the Web from the existing site; the existence of user profiles was deduced from the author metadata present in the actual exported wiki content. Since we would be asking existing users to re-enable their accounts on the new wiki once it became active, and since we would be avoiding the migration of spammer accounts, this approach seemed to be a reasonable compromise between convenience and completeness.
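
As a rough sketch of that pragmatic approach (with invented record fields and an invented profile URL pattern, since the actual scripts are not reproduced here), the set of authors can be collected from the edit metadata and the few genuine profile pages then fetched directly from the old site:

# Python 2, matching the rest of the tooling of the time.
import urllib2

def deduce_authors(edits):
    "Collect the distinct author names appearing in the exported edit records."
    return set(edit["author"] for edit in edits if edit.get("author"))

def fetch_profile(base_url, username):
    "Retrieve a user profile page from the existing wiki over the Web."
    return urllib2.urlopen("%s/display/~%s" % (base_url, username)).read()

Spammer accounts can then simply be left out of the resulting set before any profile pages are created on the new wiki.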

Getting There

By November 2013, the end was in sight, with coverage of various “actions” supported by Confluence also supported in the migrated wiki. Such actions are a good example of how things that are on the edges of a migration can demand significant amounts of time. For instance, Confluence supports a PDF export action, and although one might suggest that people just print a page to file from their browser, choosing PDF as the output format, there are reasonable arguments to be made that a direct export might also be desirable. Thus, after a brief survey of existing options for MoinMoin, I decided it would be useful to provide one myself. The conversion of Confluence content had also necessitated the use of more expressive table syntax. Had I not been sufficiently interested in implementing improved table facilities in MoinMoin prior to this work, I would have needed to invest quite a bit of effort in this seemingly peripheral activity.

Again, time passed. Much of the progress occurred off-list at this point. In fact, a degree of confusion, miscommunication and other factors conspired to delay the availability of the infrastructure on which the new wiki would be deployed. Already in October 2013 there had been agreement about hosting within the python.org infrastructure, but the matter seemed to stall despite Barry Warsaw trying to push it along in February and April 2014. Eventually, after a complaint from me on the PSF members’ mailing list at the end of May, some motion occurred on the matter and in July the task of provisioning the necessary resources began.

After returning from a long vacation in August, the task of performing the final migration and actually deploying the content could finally begin. Here, I was able to rely on expert help from Mark Sapiro who not only checked the results of the migration thoroughly, but also configured various aspects of the mail system functionality (one of the benefits of participating in a mail-oriented project, I guess), and even enhanced my code to provide features that I had overlooked. By September, we were already playing with the migrated content and discussing how the site would be administered and protected from spam and vandalism. By October, Barry was already confident enough to pre-announce the migrated site!

At Long Last

Alas, things stalled again for a while, perhaps due to other commitments of some of the volunteers needed to make the final transition occur, but in January the new Mailman Wiki was finally announced. But things didn’t stop there. One long-standing volunteer, Jim Tittsler, decided that the visual theme of the new wiki would be improved if it were made to match the other Mailman Web resources, and so he went and figured out how to make a MoinMoin theme to do the job! The new wiki just wouldn’t look as good, despite all the migrated content and the familiarity of MoinMoin, if it weren’t for the special theme that Jim put together.

The all-new Mailman Wiki with Jim Tittsler's theme

There have been a few things to deal with after deploying the new wiki. Spam and vandalism have not been a problem because we have implemented a very strict editing policy where people have to request editing access. However, this does not prevent people from registering accounts, even if they never get to use them to do anything. To deal with this, we enabled textcha support for new account registrations, and we also enabled e-mail verification of new accounts. As a result, the considerable volume of new user profiles that were being created (potentially hundreds every hour) has been more or less eliminated.
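
For anyone wondering what the textcha measure looks like, here is a minimal sketch of the relevant part of a MoinMoin 1.x wikiconfig.py; the question is made up for illustration, and the e-mail verification setting used on the real site is deliberately omitted rather than guessed at:

# -*- coding: utf-8 -*-
from MoinMoin.config import multiconfig

class Config(multiconfig.DefaultConfig):
    sitename = u'Mailman Wiki'  # illustrative only

    # Prospective users must answer a question before registering an account
    # (textchas also guard other actions); answers are regular expressions.
    textchas = {
        'en': {
            u"Which mailing list manager does this wiki document?": ur"(?i)\s*(GNU\s+)?Mailman\s*",
        },
    }

Because the questions are site-specific, they tend to defeat automated registrations that a generic, widely-deployed CAPTCHA might not.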

It has to be said that throughout the process, once it got started in earnest, the Mailman development community has been fantastic, with constructive feedback and encouragement throughout. I have had disappointing things to say about the experience of being a volunteer with regard to certain projects and initiatives, but the Mailman project is not that kind of project. Within the limits of their powers, the Mailman custodians have done their best to enable this work and to see it through to the end.

Some Lessons

I am sure that offers of “for free” usage of certain proprietary tools and services are made in a genuinely generous way by companies like Atlassian who presumably feel that they are helping to make Free Software developers more productive. And I can only say that those interactions I experienced with Contegix, who were responsible for hosting the Confluence instance through which the old Mailman Wiki was deployed, were both constructive and polite. Nevertheless, proprietary solutions are ultimately disempowering: they take away the control over the working environment that users and developers need to have; they direct improvement efforts towards themselves and away from Free Software solutions; they also serve as a means of dissuading people from adopting competing Free Software products by giving an indication that only they can meet the rigorous demands of the activity concerned.

I saw a position in the Norwegian public sector not so long ago for someone who would manage and enhance a Confluence installation. While it is not for me to dictate the tools people choose to do their work, it seems to me that such effort would be better spent enhancing Free Software products and infrastructure instead of remedying the deficiencies of a tool over which the customer ultimately has no control, to which the customer is bound, and where the expertise being cultivated is relevant only to a single product for as long as that product is kept viable by its vendor. Such strategic mistakes occur all too frequently in the Norwegian public sector, with its infatuation with proprietary products and services, but those of us not constrained by such habits can make better choices when choosing tools for our own endeavours.

I encourage everyone to support Free Software tools when choosing solutions for their projects. Such tools may not at first offer precisely the features you are looking for, and you might be tempted to accept an offer of a “for free” product or to use a no-cost “cloud” service; such things may appear to offer an easier path when you might otherwise be confronted with a choice of hosting solutions and deployment issues. But there are whole communities out there who can offer advice and will support their Free Software project, and there are Free Software organisations who will help you deploy your choice of tools, perhaps even having it ready to use as part of their existing infrastructure.

In the end, by embracing Free Software, you get the control you need over your content in order to manage it sustainably. Surely that is better than having some random company in charge, with the ever-present risk of them one day deciding to discontinue their service and/or, with barely enough notice, discard your data.

EOMA-68: The Return

Wednesday, April 8th, 2015

It is hard to believe that almost two years have passed since I criticised the Ubuntu Edge crowd-funding campaign for being a distraction from true open hardware initiatives (becoming one which also failed to reach its funding target, but was presumably good advertising for Ubuntu’s mobile efforts for a short while). Since then, the custodians of Ubuntu have pressed on with their publicity stunts, the most recent of which involved limited initial availability of an Ubuntu-branded smartphone that may very well have been shipping without the corresponding source code for the GPL-licensed software being available, even though it is now claimed that this issue has been remedied. Given the problems with the same chipset vendor in other products, I personally cannot help feeling that the matter might need more investigation, but then again, I personally do not have time to chase up licence compliance in other people’s products, either.

Meanwhile, some genuine open hardware initiatives were mentioned in that critique of Ubuntu’s mobile strategy: GTA04 is the continuing effort to produce a smartphone that continues the legacy of the Openmoko Neo FreeRunner, whose experiences are now helping to produce the Neo900 evolution of the Nokia N900 smartphone; Novena is an open hardware laptop that was eventually successfully crowd-funded and is in the process of shipping to backers; OpenPandora is a handheld games console, the experiences from which have since been harnessed to initiate the DragonBox Pyra product with a very similar physical profile and target audience. There is a degree of collaboration and continuity within some of these projects, too: the instigator of the GTA04 project is assisting with the Neo900 and the Pyra, for example, partly because these projects use largely the same hardware platform. And, of course, GNU/Linux is the foundation of the software for all this hardware.

But in general, open hardware projects remain fairly isolated entities, perhaps only clustering into groups around particular chipsets or hardware platforms. And when it comes to developing a physical device, the amount of re-use and sharing between projects is perhaps less than we might have come to expect from software, particularly Free Software. Not that this has necessarily slowed the deluge of boards, devices, products and crowd-funding campaigns: everywhere you look, there’s a new Arduino variant or something claiming to be the next big thing in the realm of the “Internet of Things” (IoT), but after a while one gets the impression that it is the same thing being funded and sold, over and over again, with the audience probably not realising that it has all mostly been done before.

The Case for Modularity

Against this backdrop, there is one interesting and somewhat unusual initiative that I have only briefly mentioned before: the development of the EOMA-68 (Embedded Open Modular Architecture 68) standard along with products to demonstrate it. Unlike the average single-board computer or system-on-module board, EOMA-68 attempts to define a widely-used modular computing unit which is also a complete computing device, delegating input (keyboards, mice, storage) and output (displays) to other devices. It has often been repeated that today phones are just general-purpose computers that happen to be able to make calls, and the same can be said for a lot of consumer electronics equipment that traditionally were either much simpler devices or which only employed special-purpose computing units to perform their work: televisions are a reasonably illustrative example of this.

And of course, computers as we know them come in all shapes and sizes now: phones, media players, handhelds, tablets, netbooks, laptops, desktops, workstations, and so on. But most of these devices are not built to be upgraded when the core computing part of them becomes obsolete or, at the very least, less attractive than the computing features of newer devices, nor can the purchaser mix and match the computing part of one device with the potentially more attractive parts of another: one kind of smart television may have a much better screen but a poorer user interface that one would want to replace, for example. There are workarounds – some people use USB-based “plug computers” to give their televisions “smart” capabilities – but when you buy a device, you typically have to settle for the bundled software and computing hardware (even if the software might eventually be modifiable thanks to the role of the GPL, subject to constraints imposed by manufacturers that might prevent modification).

With a modular computing unit, the element of choice is obviously enhanced, but it also helps those developing open hardware. First of all, the interface to the computing unit is well-defined, meaning that the designers of a device need not be overly concerned with the way the general-purpose computing functionality is to be provided beyond the physical demands of that particular module and the facilities provided by it. Beyond such constraints, being able to rely on a tested functional element, designers can focus on the elements of their device that differentiate it from other devices without having to master the integration of their own components of interest with those required for the computing functionality in one “make or break” hardware design that might prove too demanding to get right first time (or even second or third time). Prototyping complicated circuit designs can quickly incur considerable costs, and eliminating complexity from what might be described as the “peripheral board” – the part providing the input and output capabilities and the character of a particular device – not only reduces the risk of getting things wrong, but it could make the production of that board cheaper, too. And that might open up device design to a broader group of participants.

As Nico Rikken explains, EOMA-68 promises to offer benefits for hardware designers, software developers and customers. Modularity does make sense if properly considered, which is perhaps why other modularity initiatives like Phonebloks have plenty of critics even though they share the same worthy objectives of reducing waste and avoiding device obsolescence: with vague statements about modularity and the hint of everything being interchangeable and interoperating with everything, one cannot help but be sceptical about the potential complexity and interoperability problems that could result, not to mention the ergonomic issues that most people can easily relate to. By focusing on the general-purpose computing aspect of modularity, EOMA-68 addresses the most important part of the hardware for Free Software and delegates modularity elsewhere in the system to other initiatives that do not claim to do it all.

A Few False Starts

Unfortunately, not everything has gone precisely according to schedule with EOMA-68 so far. After originally surfacing as part of an initiative to make a competitive ARM-based netbook, the plan was to make computing modules and “engineering boards” on the way to delivering a complete product, and the progress of the first module can be followed on the Allwinner A10 news page on the Rhombus Tech Web site. From initial interest from various parties at the start of 2012, and through a considerable amount of activity, by May 2013, working A10 boards were demonstrated running Debian Wheezy. And a follow-up board employing the Allwinner A20 instead of the A10 was demonstrated running Debian at the end of October 2014 as part of a micro-desktop solution.

One might have thought that these devices would be more widely available by now, particularly as development began in 2012 on a tablet board to complement the computing modules, with apparently steady progress being made. Now, the development of this tablet was driven by the opportunity to collaborate with the Vivaldi tablet project, whose own product had been rendered unusable for Free Software usage by the usual product iteration performed behind the scenes by the contract manufacturer changing the components in use without notice (as is often experienced by those buying computers to run Free Software operating systems, only to discover that the wireless chipset, say, is no longer one that is supported by Free Software). With this increased collaboration with KDE-driven hardware initiatives (Improv and Vivaldi), efforts seemingly became directed towards satisfying potential customers within the framework of those other initiatives, so that to acquire the micro-engineering board one would seek to purchase an Improv board instead, and to obtain a complete tablet product one would place an advance order for the Vivaldi tablet instead of anything previously under development.

Somehow during 2014, the collaboration between the participants in this broader initiative appears to have broken down, with there undoubtedly being different perspectives on the sequence of events that led to the cancellation of Improv and Vivaldi. Trawling the mailing list archives gives more detail but not much more clarity, and it can perhaps only be said that mistakes may have been made and that everybody learned new things about certain aspects of doing business with other people. The effect, especially in light of the deluge of new and shiny products for casual observers to purchase instead of engaging in this community, and with many people presumably being told that their Vivaldi tablet would not be shipping after all, probably meant that many people lost interest and, indeed, hope that there would be anything worth holding out for.

The Show Goes On

One might have thought that such a setback would have brought about the end of the initiative, but its instigator shows no sign of quitting, probably because genuine hardware has been made, and other opportunities and collaborations have been created on the way. Initially, the focus was on an ARM-based netbook or tablet that would run Free Software without the vendor neglecting to provide the complete corresponding source for things like the Linux kernel and bootloader required to operate the device. This requirement for licence compliance has not disappeared or diminished, with continuing scrutiny placed on vendors to make sure that they are not just throwing binaries over the wall.

But as experience was gained in evaluating suitable CPUs, it was not only ARM CPUs that were found to have the necessary support characteristics for software freedom as well as for low power consumption. The Ingenic jz4775, a sibling of the rather less capable jz4720 used by the Ben NanoNote, uses the MIPS architecture and may well be fully supported by the mainline Linux kernel in the near future; the ICubeCorp IC1T is a more exotic CPU that can be supported by Free Software toolchains and should be able to run the Linux kernel in addition to Android. Alongside these, the A20 remains the most suitable of the options from Allwinner, whose products have always been competitively priced (which has also been a consideration), but there are other ARM derivatives that would be more interesting from a vendor cooperation perspective, notably the TI AM389x series of CPUs.

Meanwhile, after years of questions about whether a crowd-funding campaign would be started to attract customers and to get the different pieces of hardware built in quantity, plans for such a campaign are now underway. While initial calls for a campaign may have been premature, I now think that the time is right: people have been using the hardware already produced for some time, and considerable experience has been amassed along the way up to this point; the risks should be substantially lower than quite a few other crowd-funding campaigns that seem to be approved and funded these days. Not that anyone should seek to conceal the nature of crowd-funding and the in-built element of risk associated with such campaigns, of course: it is not the same as buying a product from a store.

Nevertheless, I would be very interested to see this hardware being made, and I am even on record as having said so. Part of this is selfishness: I could do with some newer, quieter, less power-consuming hardware. But I also think that a choice of different computing modules, supporting Free Software operating systems out of the box, with some of them candidates for FSF endorsement, and offering a diversity of architectures, would be beneficial to a sustainable computing infrastructure in the longer term. If you also think so, maybe you should follow the progress of EOMA-68 over the coming weeks and months, too.

Open Hardware and Free Software: Not Just For The Geeks

Saturday, April 4th, 2015

Having seen my previous article about the Fairphone initiative’s unfortunate choice of technologies mentioned in various discussions about the Fairphone, I feel a certain responsibility to follow up on some of the topics and views that tend to get aired in these discussions. In response to an article about an “open operating system” for the Fairphone, a rather solid comment was made about how the initiative still seems to be approaching the problem from the wrong angle.

Because the article comments have been delegated to a proprietary service that may at some point “garbage-collect” them from the public record, I reproduce the comment here (and I also expanded the link previously provided by a link-shortening service for similar and other reasons):

You are having it all upside down.
Just make your platform open instead of using proprietary chipsets with binary blobs! Then porting Firefox OS to the Fairphone would be easy as pie.

Not listening to the people who said that only free software running on open hardware would be really fair is exactly what brought you this mess: Our approach to software and ongoing support for the first Fairphones
It is also why I advised all of my friends and acquaintances not to order a Fairphone until it becomes a platform that respects user freedom. Turns out I was more than right.
If the Fairphone was an open platform that could run Firefox OS, Replicant or pure Debian, I would tell everybody in need of a cellphone to buy one.

I don’t know the person who wrote this comment, but it is very well-formulated, and one wouldn’t think that there would be much to add. Unfortunately, some people seem to carry around their own misconceptions about some of the concepts mentioned above, and unfortunately, they are quite happy to propagate those misconceptions as if they were indisputable facts. Below, I state the real facts in the headings and quote each one of the somewhat less truthful misconceptions for further scrutiny.

Open Hardware and Free Software is for Everyone

Fairphone should not make the mistake of producing a phone for geeks. Instead, it should become a phone for everyone.

Just because people have an opinion about technology and wish to see certain guarantees made about the nature of that technology does not mean that the result is “for geeks”. In fact, making the hardware open means that more people can figure things out about it, improve it, understand it, and improve the way it works and the software that uses it. Making the software truly open means that more people can change it, fix it, enhance it, and extend the usable life of the device. All of this benefits everyone, whereas closed hardware and proprietary software ultimately benefit only the small groups of people who respectively designed the device and wrote the software, both of whom are very likely to lose interest in sustaining the life of that product as soon as they have another one they want to sell you. (And often, in the case of the hardware, as soon as it leaves the factory.)

User Freedom Means Exactly User Freedom

‘User freedom’ is often used when actually ‘developers freedom’ is meant. It is more of an ideology.

Incorrect! Those of us who use the term Free Software know exactly what we mean: it is the freedom of the end-user to exercise precisely those privileges that have resulted in the work being produced and delivered to them. Now, there are people who advocate “permissive licences” that do favour developers in that they allow people to use the work of others and to then provide a piece of software under conditions that grant the end-user only limited privileges, taking away those privileges to see how the entire work is constructed, along with those that allow the entire work to be improved and shared. Whether one sees either of these as an ideology, presumably emphasising one’s own “pragmatism” in contrast, is largely irrelevant because the genuine pragmatism involved in Free Software and the propagation of a broader set of privileges actually delivers sustainability: users – genuine end-users, not middle-men – get the freedom to participate in how the product turns out, and crucially, how it lives on after the original producer has decided to go off and do something else.

Openness Does Not Preclude Fanciness (But Security Requires Openness)

What people want is: user friendly interface, security/privacy, good specs and ability to install apps and games. […] OpenSource is a nice idea, but has its disadvantages too: who is caring about quality?

It’s just too easy for people to believe claims about privacy and security, even after everybody found out that they were targets of widespread surveillance, and even after various large corporations who presumably care about their reputations have either lost the personal details of their users to criminals or have shared those details with others (who also have criminal or unethical intent). Believing the sales pitch about total privacy and robust security, those people will happily reassure themselves and others that no company would allow its reputation to be damaged by any breach of privacy or security! But there are no guarantees of security or privacy if you cannot trust the systems you use, and there is no way of trusting them without being able to inspect how they work. More than ever, people need genuine guarantees of security and privacy – not reassurances from salesmen and advertisers – and the best way to start off on the path towards such guarantees is to be able to deploy Free Software on a device that you fully control.

And as for quality, user-friendliness and all the desirable stuff: how many people use products like Firefox in its various forms every single day? Such Free Software solutions have not merely set the standard over the years, but they have kept technologies like the Web relevant and viable, in stark contrast to proprietary bundled programs like Internet Explorer that have actually impaired technological and social progress, with “IE” doing its bit by exhibiting a poor record of adherence to standards and a continuous parade of functionality and security bugs, not to mention constant usability frustrations endured by its unfortunate (and frequently involuntary) audience of users.

Your Priorities Make Free Software Important

I found the following comment to be instructive:

For me open source isn’t important. My priorities are longevity/updates, support, safety/privacy.

The problem is this: how can you guarantee longevity, updates, support, safety and privacy without openness? Safety and privacy would require you to have blind trust in someone whose claims you cannot verify. Longevity, updates and support require you to rely on the original producer’s continued interest in the product that you have just purchased from them, and should it become more profitable for them to focus on other products (that they might want you to buy instead of continuing to use the one you have), you might be able to rely on the goodwill of that producer to transfer their responsibilities to others to do the thankless tasks of maintenance and support. But it may well be the case that no amount of money will be able to keep that product viable for you: the producer may simply refuse to support it or to let others support it. Perhaps some people may step in and reverse-engineer the product and make an effort to keep it viable, but wouldn’t it be better to have an open product to start with, where people can choose how it is maintained – and thus sustained – for as long as people still want to use it?

Concepts like open hardware and Free Software sound like topics for the particularly-interested, but they provide the foundations for those topics of increasing interest and attention that people claim to care so much about. Everybody deserves things like choice, democracy, privacy, security, safety, control over their own lives and destinies, and so on. Closed hardware and proprietary software may be used on lots of devices, and people may be getting a lot of use out of those devices, but the users of those devices enjoy the benefits only as long as it remains in the interests of the producers of those devices and the accompanying software to allow them to do so. Furthermore, few or none of those users can be sure whether any of those important things – their rights – are being impaired by their use of those devices. Are their communications being intercepted, collected, analysed? Few people would ever know.

Free Software and open hardware empower their users with the control that proprietary technologies deny their users. But shouldn’t everybody be able to benefit from such control? That’s why a device that is open hardware and which runs Free Software really is for everyone, not just for “geeks”.

Lenovo: What Were They Thinking?

Wednesday, February 25th, 2015

In the past few days, there have been plenty of reports of Lenovo shipping products with a form of adware known as Superfish, originating from a company of the same name, that interferes with the normal operation of Web browser software to provide “shopping suggestions” in content displayed by the browser. This would be irritating enough all by itself, but what made the bundled software involved even more worrying was that it also manages to insert itself as an eavesdropper on the user’s supposedly secure communications, meaning that communications conducted between the user and Internet sites such as online banks, merchants, workplaces and private-and-confidential services are all effectively compromised.

Making things worse still, the mechanism employed to pursue this undesirable eavesdropping managed to prove highly insecure in itself, exposing Lenovo customers to attack from others. So, we start this sordid affair with a Lenovo “business decision” about bundling some company’s software and end up with Lenovo’s customers having their security compromised for the dubious “benefit” of being shown additional, unsolicited advertisements in Web pages that didn’t have them in the first place. One may well ask: what were Lenovo’s decision-makers thinking?

Symptoms of a Disease

Indeed, this affair gives us a fine opportunity to take a critical look at the way the bundling of software has corrupted the sale of personal computers for years, if not decades. First of all, most customers have never been given a choice of operating system, or the chance to buy a computer without one, considering the major channels and vendors to which most buyers are exposed: the most widely-available and widely-advertised computers only offer some Windows variant, and manufacturers typically insist that they cannot offer anything else – or even nothing at all – for a variety of feeble reasons. And when asked to provide a refund for this unwanted product that has been forced on the purchaser, some manufacturers even claim that it is free or that someone else has subsidised the cost, and that there is no refund to be had.

This subsidy – some random company acting like a kind of wealthy distant relative paying for the “benefit” of bundled proprietary software – obviously raises competition-related issues, but it also raises the issue of why anyone would want to pay for someone else to get something at no cost. Even in a consumer culture where getting more goodies is seen as surely being a good thing because it means more toys to play with, one cannot help but be a little suspicious: surely something is too good to be true if someone wants to give you things that they would otherwise make you pay for? And now we know that it is: the financial transaction that enriched Lenovo was meant to give Superfish access to its customers’ sensitive information.

Of course, Lenovo’s updated statement on the matter (expect more updates, particularly if people start to talk about class action lawsuits) tries to downplay the foul play: the somewhat incoherent language (example: “Superfish technology is purely based on contextual/image and not behavioral”) denies things like user profiling and uses terminology that is open to quite a degree of interpretation (example: “Users are not tracked nor re-targeted”). What the company lawyers clearly don’t want to talk about is what information was being collected and where it was being whisked off to, keeping the legal attack surface minimal and keeping those denials of negligence strenuous (“we did not know about this potential security vulnerability until yesterday”). Maybe some detail about those “server connections shut down in January” would shed some light on these matters, but the lawyers know that with that comes the risk of exposing a paper trail showing that everybody knew what they were getting into.

Your Money isn’t Good Enough

One might think that going to a retailer, giving them your money, and getting a product to take home would signal the start of a happy and productive experience with a purchase. But it seems that for some manufacturers, getting the customer’s money just isn’t enough: they just have to make a bit of money on the side, and perhaps keep making money from the product after the customer has taken it home, too. Consumer electronics and products from the “content industries” have in particular fallen victim to the introduction of advertising. Even though you thought you had bought something outright, advertisements and other annoyances sneak into the experience, often in the hope that you will pay extra to make them go away.

And so, you get the feeling that your money somehow isn’t good enough for these people. Maybe if you were richer or knew the right people, your money would be good enough and you wouldn’t need to suffer adverts or people spying on you, but you aren’t rich or well-connected and just have to go along with the indignity of it all. Naturally, the manufacturers would take offence at such assertions; they would claim that they have to take bribes (sorry, subsidies) to be able to keep their own prices competitive with the rest of the market, and of course everybody else is taking the money. That might be almost believable if it weren’t for the fact that the prices of things like bundled operating systems and “productivity software” – the stuff that you can’t get a refund for – are completely at the discretion of the organisations who make it. (It also doesn’t help these companies that they seem to be unable to deliver a quality product with a stable set of internal components, or that they introduce stupid hardware features that make their products excruciating to use.)

Everybody Hurts

For the most part, it probably is the case that if you are well-resourced and well-connected, you can buy the most expensive computer with the most expensive proprietary software for it, and maybe the likes of Lenovo won’t have tainted it with their adware-of-the-month. But naturally, proprietary software doesn’t provide you with any inherent assurances that it hasn’t been compromised: only Free Software can offer you that, and even then you must be able to insist on the right to be able to build and install that software on the hardware yourself. Coincidentally, I did once procure a Lenovo computer from a retailer that only supplied them with GNU/Linux preinstalled, with Lenovo being a common choice amongst such retailers because the distribution channel apparently made it possible for them to resell such products without Windows or other proprietary products ever becoming involved.

But sometimes the rich and well-connected become embroiled in surveillance and spying in situations of their own making. Having seen people become so infatuated with Microsoft Outlook that they seemingly need to have something bearing the name on every device they use, it is perhaps not surprising that members of the European Parliament had apparently installed Microsoft’s mobile application bearing the Outlook brand. Unfortunately for them, Microsoft’s “app” sends sensitive information including their authentication credentials off into the cloud, putting their communications (and the safety of their correspondents, in certain cases) at risk.

Some apologists may indeed claim that Microsoft and their friends and partners collecting everybody’s sensitive details for their own convenience is “not an issue for the average user”, but in fact it is a huge issue: when people become conditioned into thinking that surrendering their privacy, accepting the inconveniences of intrusive advertising, always being in debt to the companies from which they have bought things (even when those purchases have actually kept those companies in business), and giving up control of their own belongings are all “normal” things and that they do not deserve any better, then we all start to lose control over the ways in which we use technology as well as the technologies we are able to use. Notions of ownership and democracy quickly become attacked and eroded.

What Were They Thinking?

We ultimately risk some form of authority, accountable or otherwise, telling us that we no longer deserve to be able to enjoy things like privacy. Their reasons are always scary ones, but in practice it usually has something to do with them not wanting ordinary people doing unexpected or bothersome things that might question or undermine their own very comfortable (and often profitable) position telling everybody else what to do, what to worry about, what to buy, and so on. And it turns out that a piece of malware that just has to see everything in its rampant quest to monetise every last communication of the unwitting user now gives us a chance to think about how we really want our computers and their suppliers to behave.

So, what were they thinking at Lenovo? That Superfish was an easy way to make a few extra bucks? That their customers don’t deserve anything better than to have their private communications infused with advertising? That their customers don’t need to know that people are tampering with their Internet connection? That the private information of their customers was theirs to sell to anyone offering them some money? Did nobody consider the implications of any of this at all, or was there a complete breakdown in ethics amongst those responsible? Was it negligence or contempt for their own customers that facilitated this pursuit of greed?

Sadly, the evidence from past privacy scandals involving major companies indicates that regulatory or criminal proceedings are unlikely, merely fuelling suspicions that supposed corporate incompetence – the existence of conveniently unlocked backdoors – actually serves various authorities rather nicely. It is therefore up to us to remain vigilant and, of course, to exercise our own forms of reward for those who act in our interests, along with punishment for those whose behaviour is unacceptable in a fair and democratic society.

Maybe after a break from seeing any of it for a while, our business and our money will matter more to Lenovo than that of some shady “advertising” outfit with the dubious and slightly unbelievable objective of showing more adverts to people while they do their online banking. And by then, maybe Lenovo (and everyone else) will let us install whatever software we like on their products, because many people aren’t going to be trusting the bundled software for a long time to come after this. Not that they should ever have trusted it in the first place, of course.

Making the Best of a Bad Deal

Wednesday, January 7th, 2015

I had the opportunity over the holidays to browse the January 2015 issue of “Which?” – the magazine of the Consumers’ Association in Britain – which, amongst other things, covered the topic of “technology ecosystems”. Which? has a somewhat patchy record when technology matters are taken into consideration: on the one hand, reviews consider the practical and often mundane aspects of gadgets such as battery life, screen brightness, and so on, continuing their tradition of giving all sorts of items a once over; on the other hand, issues such as platform choice and interoperability are typically neglected.

Which? is very much pitched at the “empowered consumer” – someone who is looking for a “good deal” and reassurances about an impending purchase – and so the overriding attitude is one that is often in evidence in consumer societies like Britain: what’s in it for me? In other words, what goodies will the sellers give me to persuade me to choose them over their competitors? (And aren’t I lucky that these nice companies are throwing offers at me, trying to win my custom?) A treatment of ecosystems should therefore be somewhat interesting reading because through the mere use of the term “ecosystem” it acknowledges that alongside the usual incentives and benefits that the readership is so keen to hear about, there are choices and commitments to be made, with potentially negative consequences if one settles in the wrong ecosystem. (Especially if others are hell-bent on destroying competing ecosystems in a “war” as former Nokia CEO Stephen Elop – now back at Microsoft, having engineered the sale of a large chunk of Nokia to, of course, Microsoft – famously threatened in another poor choice of imagery as part of what must be one of the most insensitively-formulated corporate messages of recent years.)

Perhaps due to the formula behind such articles in Which? and similar arenas, some space is used to describe the benefits of committing to an ecosystem. Above the “expert view” describing the hassles of switching from a Windows phone to an Android one, the title tells us that “convenience counts for a lot”. But the article does cover problems with the availability of applications and services depending on the platform chosen, and even the matter of having to repeatedly buy access to content comes up, albeit with a disappointing lack of indignation for a topic that surely challenges basic consumer rights. The conclusion is that consumers should try and keep their options open when choosing which services to use. Sensible and uncontroversial enough, really.

The Consequences of Apathy

But sadly, Which? is once again caught in a position of reacting to technology industry change and the resulting symptoms of a deeper malaise. When reviewing computers over the years, the magazine (and presumably its sister publications) always treated the matter of platform choice with a focus on “PCs and Macs” exclusively, with the latter apparently being “the alternative” (presumably in a feeble attempt to demonstrate a coverage of choice that happens to exist in only two flavours). The editors would most likely protest that they can only cover the options most widely available to buy in a big-name store and that any lack of availability of a particular solution – say, GNU/Linux or one of the freely available BSDs – is the consequence of a lack of consumer interest, and thus their readership would also be uninterested.

Such an unwillingness to entertain genuine alternatives, and to act in the interests of members of their audience who might be best served by those solutions, demonstrates that Which? is less of a leader in consumer matters than its writers might have us believe. Refusing to acknowledge that Which? can and does drive demand for alternatives, only to then whine about the bundled products of supposed consumer interest, demonstrates a form of self-imposed impotence when faced with the coercion of proprietary product upgrade schedules. Not everyone – even amongst the Which? readership – welcomes the impending vulnerability of their computing environment as another excuse to go shopping for shiny new toys, nor should they be thankful that Which? has done little or nothing to prevent the situation from occurring in the first place.

Thus, Which? has done as much as the rest of the mainstream technology press to unquestioningly sustain the monopolistic practices of the anticompetitive corporate repeat offender, Microsoft, with only a cursory acknowledgement of other platforms in recent years, qualified by remarks that Free Software alternatives such as GNU/Linux and LibreOffice are difficult to get started with or not what people are used to. After years of ignoring such products and keeping them marginalised, this would be the equivalent of denying someone the chance to work and then criticising them for not having a long list of previous employers to vouch for them on their CV. Noting that 1.1 billion people supposedly use Microsoft Office (“one in seven people on the planet”) makes for a nice statistic in the sidebar of the print version of the article, but how many people have a choice in doing so or, for that matter, in using other Microsoft products bundled with computers (or foisted on office workers or students due to restrictive or corrupt workplace or institutional policies)? Which? has never been concerned with such topics, or indeed the central matter of anticompetitive software bundling, or its role in the continuation of such practices in the marketplace: strange indeed for a consumer advocacy publication.

At last, obliged to review a selection of fundamentally different ecosystem choices – as opposed to pretending that different vendor badges on anonymous laptops provide genuine choice – Which? has had to confront the practical problems brought about by an absence of interoperability: that consumers might end up stranded with a large, non-transferable investment in something they no longer wish to be a part of. Now, the involvement of a more diverse selection of powerful corporate interests has made such matters impossible to ignore. One gets the impression that for now at least, the publication cannot wish such things away and return to the lazy days of recommending that everyone line up and pay a single corporation their dues, refusing to believe that there could be anything else out there, let alone usable Free Software platforms.

Beyond Treating the Symptoms

Elsewhere in the January issue, the latest e-mail scam is revealed. Of course, a campaign to widen the adoption of digitally-signed mail would be a start, but that is probably too much to expect from Which?: just as space is dedicated to mobile security “apps” in this issue, countless assortments of antivirus programs have been peddled and reviewed in the past, but solving problems effectively requires a leader rather than a follower. Which? may do the tedious job of testing kettles, toasters, washing-up liquids, and much more to a level of thoroughness that would exhaust most people’s patience and interest. And to the publication’s credit, a certain degree of sensible advice is offered on topics such as online safety, albeit with the usual emphasis on proprietary software for the copy of Windows that members of its readership were all forced to accept. But in technology, Which? appears to be a mere follower, suggesting workarounds rather than working to build a fair market for safe, secure and interoperable products.

It is surely time for Which? to join the dots and to join other organisations in campaigning for fundamental change in the way technology is delivered by vendors and used throughout society. Then, by upholding a fair marketplace, interoperability, access to digital signature and encryption technologies, true ownership of devices and of purchased content, and all the things already familiar to Free Software and online rights advocates, they might be doing their readership a favour after all.

Python’s email Package and the “PGP/MIME” Question

Wednesday, January 7th, 2015

I vaguely follow the development of Mailpile – the Free Software, Web-based e-mail client – and back in November 2014, there was a blog post discussing problems that the developers had experienced while working with PGP/MIME (or OpenPGP as RFC 3156 calls it). A discussion also took place on the Gnupg-users mailing list, leading to the following observation:

Yes, Mailpile is written in Python and I've had to bend over backwards
in order to validate and generate signatures. I am pretty sure I still
have bugs to work out there, trying to compensate for the Python
library's flaws without rewriting the whole thing is, long term, a
losing game. It is tempting to blame the Python libraries, but the fact
is that they do generate valid MIME - after swearing at Python for
months, it dawned on me that it's probably the PGP/MIME standard that is
just being too picky.

Later, Bjarni notes…

Similarly, when generating messages I had to fork the Python lib's
generator and disable various "helpful" hacks that were randomly
mutating the behavior of the generator if it detected it was generating
an encrypted message!

Coincidentally, while working on PGP/MIME messaging in another context, I also experienced some problems with the Python email package, mentioning them on the Mailman-developers mailing list because I had been reading that list and was aware that a Google Summer of Code project had previously been completed in the realm of message signing, thus offering a potential source of expertise amongst the list members. Although I don’t think I heard anything from the GSoC participant directly, I had the benefit of advice from the same correspondent as Bjarni, even though we have been using different mailing lists!

Here’s what Bjarni learned about the “helpful” hacks:

This is supposed to be http://bugs.python.org/issue1670765, which is
claimed to be resolved.

Unfortunately, the special-case handling of headers for “multipart/signed” parts is presumably of limited “help”, and other issues remain. As I originally noted:

So, where the email module in Python 2.5 likes to wrap headers using tab
character indents, the module in Python 2.7 prefers to use a space for
indentation instead. This means that the module reformats data upon being
asked to provide a string representation of it rather than reporting exactly
what it received.

Why the special-casing wasn’t working for me remains unclear, and so my eventual strategy was to bypass the convenience method in the email API in order to assert some form of control over the serialisation of e-mail messages. It is interesting to note that the “fix” to the Python standard library involved changing the documentation to note the unsatisfactory behaviour and to leave the problem essentially unsolved. This may not have been unreasonable given the design goals of the email package, but it may have been better to bring the code into compliance with user expectations and to remedy what could arguably be labelled a design flaw of the software, even if it was an unintended one.
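
To illustrate the kind of round-trip problem described above, here is a minimal sketch (again Python 2) that parses a message whose subject header was folded with a tab and then checks whether the string representation matches the original text; the header content is invented purely to trigger re-folding, and the exact behaviour may vary between Python versions:

# Sketch: does the email package reproduce a parsed message verbatim?
import email

raw = ("Subject: a deliberately long subject line that the original sender\n"
       "\tfolded onto a second line using a tab character for indentation\n"
       "\n"
       "Body text.\n")

message = email.message_from_string(raw)

# With the default Generator settings, long headers may be re-folded
# using different continuation whitespace, so the output need not match
# the input byte for byte.
print message.as_string() == raw

If the comparison fails, the library has rewritten the message on the way out, which is exactly the situation the workaround below is intended to avoid.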

Contrary to the expectations of Python’s core development community, I still develop using Python 2 and probably won’t see any fixes to the standard library even if they do get made. So, here’s my workaround for message serialisation from my concluding message to the Mailman-developers list:

# Python 2 imports needed for the snippet below
from StringIO import StringIO
from email.generator import Generator

# given a message...
out = StringIO()
generator = Generator(out, False, 0) # disable reformatting: no From-mangling, no header re-wrapping
generator.flatten(message)
# out.getvalue() now provides the serialised message

It’s interesting to see such problems occur for different people a few months apart. Maybe I should have been following Mailpile development a bit more closely, but with it all happening at GitHub (with its supposedly amazing but, in my experience, rather sluggish and clumsy user interface), I suppose I haven’t been able to keep up.

Still, I hope that others experiencing similar difficulties become more aware of the issues by seeing this article. And I hope that Bjarni and the Mailpile developers haven’t completely given up on OpenPGP yet. We should all be working more closely together to get usable, Free, PGP-enabled, standards-compliant mail software to as many people as possible.

A Note on Kolab and Debian Packaging

Friday, December 19th, 2014

I thought it might be helpful if I wrote a quick note about my previous work on Debian packaging and Kolab. As my blog can attest, I have written a few articles about packaging and the effort required to make Kolab somewhat more amenable to a convenient and properly functioning installation on Debian systems. Unfortunately, perhaps due to a degree of overambition and perhaps due to my being unable to deliver a convincing and/or palatable set of modifications to achieve these goals, no progress was made over the year or so I spent looking at the situation. I personally do not feel that there was enough of a productive dialogue about aligning these goals with those of the core developers of Kolab, and despite a certain level of interest from others, I no longer have the motivation to keep working on this problem.

Occasionally, I receive mails from people who have read about my experiments with Debian packaging or certain elements of Kolab configuration that became part of the packaging work. This message is intended to communicate that I am no longer working on such things. Getting Kolab to work with other mail transport/delivery/storage systems or other directory systems is not particularly difficult for those with enough experience (and I am a good example of someone who has been able to gain that experience relatively quickly), but integrating this into setup-kolab in an acceptable fashion ultimately proved to be an unrealisable goal.

Other people will presumably continue their work packaging various Kolab libraries for Debian, and some of these lower-level packages may even arrive in the stable release of Debian fairly soon, perhaps even delivering a recent version of those libraries. I do not, however, see any progress on getting other packages into Debian proper. I continue to have the opinion that this unfortunate situation will limit wider adoption of the Kolab technologies and does nobody but the proprietary competition any good.

Since I do not believe in writing software that will never be used – having had that experience in a professional setting where I at least had the consolation of getting paid for such disappointing outcomes (and for the waste of my time) – my current focus is on developing a low-impact form of standards-based calendaring for existing mail systems, without imposing extensive infrastructure requirements when adopting such a solution, and I hope to have something useful to show in the fairly near future. This time last year, I was much more upbeat about the prospect of getting Kolab into Debian and into more people’s hands. Now, I only wish that I had changed course earlier and started on my current endeavour considerably sooner.

But as people like to say: better late than never. I look forward to sharing my non-Kolab groupware developments in the coming months.

People and their Walled Gardens

Monday, December 15th, 2014

I wasn’t the only one to notice a discussion about moving Python-related resources to GitHub recently. An article on LWN covered the matter and was followed by the usual assortment of comments, but, having expected the usual sentiments about how people should supposedly all migrate to Git (something I completely disagree with for a number of reasons), I was surprised instead by a remark indicating that some people use GitHub as their “one stop” source of code for any project they might wish to use. In other words, GitHub is their “App Store”, curated experience, or “walled garden”: why should they bother with the rest of the Internet?

Of course, one could characterise such an interpretation of their remark as being somewhat unfair: after all, the author of the comment is not pushing their comment to GitHub to be magically pulled by LWN and published in a comment thread; they must therefore be actively using other parts of the Internet, too. But the “why bother with anything else?” mentality is worrying: demanding that everybody use a particular Internet site for their work to be considered as being something of value undermines freedom of choice, marginalises those who happen to prefer other solutions, and, in this case, cultivates a dependency on a corporate entity whose activities may not always prove to be benign. Corporate gatekeepers frequently act in ways that provoke accusations of censorship or of holding their users to ransom.

Sweetening a Bitter Pill

Many people seem to be infatuated with GitHub, perhaps because it offers conveniences that might make Git more bearable to use. I personally find Git’s command line interface to be incoherent in comparison to other tools, and despite praise of tools like gitk by Git advocates (with their claims of superior Git tooling), I find that things like the graphlog feature in Mercurial give me, in that particular instance, a proper graphical history at the command line in an instant, without messing around with some clumsy Tk-based interface with a suboptimal presentation of the different types of information (and I could always use things like TortoiseHg if I really wanted a graphical user interface myself). So maybe I wouldn’t see the point of a proprietary Web-based interface to use with Mercurial, especially since the built-in Web interface is pretty good and is in many ways better than attempts to provide similar functionality in a tool-neutral way.

People do seem rather willing to discard many of the benefits of the distributed nature of Git and are quite happy to have a single point of failure in their projects and businesses: when GitHub becomes unreachable in such environments, everything grinds to a halt, despite the fact that they all sit there with the code and could interact with each other directly. Popularity and the “network effect” seem to be the loudest arguments in favour of dumping all their code on some distant servers, with the idea being that “social” project hosting will bring in the contributors. I accept that for potential contributors who have no convenient way of hosting their own code, such services provide an obvious solution – “fork” the project, pull, make changes, push, dispatch a “pull request” – and spare them from having to provide hosting for their own code or to coordinate their work in other ways. But everything now has to go through the same infrastructure, and everyone now has to sign up for the same service, adhere to the same terms and conditions, and risk interruptions to their work caused by anything from downtime and communications failures to the consequences of litigation or of such services being obvious targets for criminal or politically-motivated misbehaviour. Not only do the custodians of a project no longer control their own project, but they also put that project at considerable risk.

Perhaps all the risks would be worth it if “going social” attracted contributors. When looking at project-hosting sites, I tend to see numerous forks of project code that mostly seem to have been created enthusiastically at some point in time, only to now be dormant and have seen no actual changes committed at all: whether the user concerned created a fork in an aspirational moment not unlike making a New Year’s resolution, or whether they have done it to demonstrate their supposed credibility (“look at me: I forked the Linux kernel!”), neither kind of motivation is helping such projects get any contributions of value. Making things easier is not a bad idea in itself, nor is getting the attention of the right people. The issue here is whether one finds the right people on such project-hosting sites, or whether one merely finds a lot of people who aspire to do something but who will still do very little regardless of how easy it supposedly is.

Body of Convenience

Although the original discussion is mostly concerned with the core development of the CPython implementation and the direction of the Python language, it is hard to separate the issues involved from the activities of the Python Software Foundation. Past experience suggests that some people involved with both of these things do not seem to prioritise ethical concerns – it is no coincidence that the word “pragmatic” appears in the LWN coverage – and it disappoints me greatly that when ethical concerns about GitHub’s corporate culture were raised, few people seemed interested in taking them into consideration. Once again, the label of “ideology” is wielded, together with the idea that doing the right thing is too complicated and therefore not worth pursuing at all, leading to the absurd conclusion that it is better to favour the party shown to have done wrong over all other parties who, as far as we know, have not done any wrong but should be suspected of it, anyway.

With the PSF taking diversity and equal opportunities seriously, one might expect people to “join the dots” on this matter and to show that the organisation is unconditionally committed to such causes and is willing to use its public influence for the common good, but I suppose that since GitHub is not a “Python company”, anyone with a problem with the corporate culture in question is simply out of luck. Once again, ethics stand in the way of the toys, and “pragmatism” is a nice term to use to indicate that the core Python development community and the organisation that supports it should have as narrow a focus as possible, even if that means neglecting the social movements that brought about the environment that enabled Python to become the successful technology that it is today.

(The matter at the centre of those ethical concerns was wrapped up either definitively or inconclusively depending on who you wish to believe and how credible you regard the company’s own review of the matter to be. This article does not intend to express a view on that matter itself, but does stress that where ethical concerns have been raised, those concerns should be addressed and not ignored.)

Easy Way Out

Python is getting a lot of competition from other technologies these days. For example, Google’s Go stole some thunder from Python both within the company and elsewhere, and there are people who admit to switching to Go in order to remedy some of the long-standing issues they experienced with Python implementations (mostly related to performance and scalability). As I noted before, had Python implementations, libraries and the language definition been improved to alleviate concerns about Python’s suitability in various domains, Python would be in a stronger position than it is now: a position which arguably resembles that of Perl at the height of its popularity. Sure, choosing Python for your next project in certain domains is an easy decision (just as Perl devotees used to annoyingly insist that people just “use Perl”), but there are plenty of other areas where Python has more or less forfeited the contest: how is Python doing on Android, for instance? People who do Python or Django all-day-every-day may not see a problem, but that doesn’t mean that there isn’t a fundamental problem waiting for a remedy.

It is understandable that people want to play the popularity card: if there’s an easy way of clawing back the masses, why not play it? Unfortunately, there may not be an easy way. Catching up with the development backlog may actually be achieved more effectively by addressing other shortcomings of the development workflow. And there’s an assumption that a crowd of ready-to-start contributors are lurking on GitHub when, despite the aforementioned evidence of people only wanting to play inside the walled garden, anyone sufficiently motivated to improve Python would surely have moved beyond a consumer mindset and would have sought out the development community already.

Python development is a relatively well-resourced activity compared to many voluntary endeavours, and the PSF is responsible for much of the infrastructure that supports the developer communities around Python. Although many benefits are derived from things like the mailing lists hosted at python.org, there are always those who are enticed by other providers and technological platforms. In matters relating to the PSF, the occasional Google spreadsheet or form has been circulated, much to the dismay of some people who would rather not have to use online resources that may require a Google account (and if not that, then maybe a fast computer and cheap energy to power all the JavaScript). Bringing up additional PSF-driven services requires time and effort that may be in short supply (as I have experienced in recent months, despite the help of various stalwarts of the community, including one kind enough to drop my name into this particular debate!), and efforts to just procure help and bypass the community have arguably caused an even greater burden on volunteers, even leading to matters as severe as the temporary abandonment of valuable resources such as the Python Job Board (out of action since February 2014).

Concerns about the future relevance and popularity of Python itself may not be so easily addressed and overcome, but that is a topic for another time. The task of rebalancing relationships – between the PSF, the core developers, and the community volunteers who keep the wheels turning on the Python online assets – is one that cannot be ignored, either. And retreating to the comfort of today’s favourite walled gardens is too easy a way out: it ignores the lessons of the past (despite assertions to the contrary), leaves such relationships in their current, somewhat precarious state, undermines those trying to uphold the independence and viability of the initiative, potentially causes even more work and inconvenience for infrastructure volunteers, and presents an incoherent collection of project resources to contributors, all in the vague hope of grabbing the attention of people who cannot otherwise be relied upon to look further than the ends of their own noses.

Maybe people should be looking for more substantial remedies than quick fixes in walled gardens to address Python’s contribution, popularity and development issues.

The Unplanned Obsolescence of the First Fairphone Device

Friday, December 12th, 2014

About a year-and-a-half ago, I gave my impressions of the Fairphone, noting that the initiative was worthy in terms of its social and sustainability goals, but that it had neglected the “fairness” of the software to be provided with each device. Although the Fairphone organisation had made “root access” – or more correctly stated, “owner control” – of the device a priority and had decided to provide its user interface enhancements to Android as Free Software, it had chosen to use a set of hardware technologies with a poor record of support for Free Software.

It might be said that such an initiative cannot possibly hope to act in the most prudent manner in every respect. However, unlike expertise in minerals sourcing, complicated global supply chains, and proprietary manufacturing activities, expertise on matters of hardware support for Free Software is available almost in abundance to anyone who can be bothered to ask. Many people already struggle with poorly-supported hardware for which only binary firmware or driver releases are available from the manufacturer, often resulting in incorrectly-performing hardware with no chance of future fixes as the manufacturer discontinues support in order to focus on selling new products. Others struggle with continuing but inconvenient forms of support on the manufacturer’s own timescale and terms.

Consequently, there are increasing numbers of people with experience of reverse engineering, documenting, and reimplementing firmware and drivers for proprietary hardware, many of whom would only be too happy to share their experiences with others wishing to avoid the pitfalls of being tied to a proprietary hardware vendor with a proprietary software mentality. There are also communities developing open hardware who seek out enlightened hardware vendors that encourage Free Software drivers for their products and may even support firmware that is also Free Software on their devices. There are even people developing smartphones in the open whose experiences and opinions would surely be valuable to anyone needing advice on the more reliable, open and trustworthy hardware vendors.

One community that has remained active despite various setbacks is the one pursuing the development of the EOMA-68 modular computing platform. It was precisely this kind of “ODM versus chipset vendor versus Free Software community” circus that prompted the development of an open platform and attempts to reach out and cultivate constructive communications with various silicon vendors. Such vendors, notably Allwinner Technology (in the case of EOMA-68), but also other companies that have previously been open to dialogue, have had the realisation that Free Software is an asset, and that Free Software communities are their partners and not just a bunch of people whose work can be taken and used without paying attention to the terms under which that work was originally shared. Such dialogues are ongoing and are subject to setbacks as well as progress, but it is far better to cultivate good practices than to ignore bad practices and to dump the ugly result onto the end-user.

Now, the Fairphone organisation has started to reconsider the software issue in light of the real possibility that their device will not be upgradeable beyond an old release of Android:

“We are actively looking at ways to achieve this goal, but we’re trying to be realistic and face the fact that the first Fairphones will most likely not be upgraded beyond Android 4.2.”

Given that the viability of devices depends not only on the continued functioning of the hardware but also on the correct functioning of the software, and that one motivation many people have stated for upgrading their phone is to gain access to a supported operating system distribution and/or one that supports the applications they need or desire, the unfortunate neglect of software sustainability has undermined the general sustainability of the device. It may very well be the case that the Fairphone organisation’s initiatives around re-use and recycling can mitigate the problems caused by any abandonment of these devices, as people seek out replacements that do what they demand of them. But one of the most potent goals, that of reducing consumption by providing a long-lasting device, has been undermined by the very part of the product that should be the easiest to change, maintain and upgrade, and even to use to remedy shortcomings in the chosen physical components: the part whose lifespan is dictated far less by physical constraints than by the assembly of physical components making up the device itself.

It is quite possible that certain industry practices have remained unknown to the Fairphone organisation, despite bitter experiences elsewhere, and that they are only now catching up with what many other people have learned over the years:

“Our chipset vendor MediaTek is only publicly releasing what it is bound to by the obligatory terms of the GNU public license GPL (the Linux Kernel and a few userland programs) and has chosen not to release any of the Android source code.”

Once again, the GPL demonstrates its worth as a necessary tool to ensure that the end-user remains in control. Unfortunately, Google decided that the often shoddy practices of its hardware and industry partners should be indulged by allowing them to make proprietary products with Google’s permissively-licensed code. It could be worse: some hardware vendors violate the GPL and blame their suppliers, requiring anyone seeking recourse to traverse the supply chain as far as it goes, potentially to some obscure company in a faraway land whose management plead poverty while actually doing very nicely selling their services to anyone willing to pay; others just appear to brazenly violate the GPL and dare someone to sue them.

The Fairphone organisation could have valued the sustainability benefits of Free Software and cooperative hardware vendors; by merely asking for informed opinions, they would have avoided this mess entirely. Unfortunately, they may have focused too narrowly on certain worthy and necessary topics, maintaining an oversimplified view of software that, if mainstream media punditry is to be believed, is merely transient and interchangeable: something that can be made to run on any hardware as if by magic, with each upgrade replacing what was there before with something that is always better, only ever offering improvements and benefits. Those of us with more than a passing knowledge of systems development know that such beliefs are really delusions produced from a lack of experience or a wish to believe that unfamiliar things are easier than they actually are.

Since we cannot go back and change the way things were done before, I suppose that now is the time to deliver on the sustainability promise by fully and properly promoting and supporting Free Software on any future Fairphone device. Which means that the Fairphone organisation has to start listening to people with experience of reliably deploying and supporting Free Software on open or properly-documented hardware, instead of going along with whatever some supplier (and their potentially GPL-violating associates) would have them do just to get the contract in the bag and the device out of the door.

Tuning digiKam’s Previews

Tuesday, June 17th, 2014

It is entirely possible that I am doing something wrong as usual, but my way of using digiKam is as follows:

  1. Plug in camera or other media.
  2. Choose to download using digiKam when prompted by the notifier.
  3. Wait while digiKam loads.
  4. Download new images in the pop-up window that digiKam offers containing thumbnails.
  5. View the images in the album, clicking on them to see them at a decent size, zooming in to see detail.

It is also entirely possible that by not choosing to edit images in digiKam, I am missing out on vital functionality, although I rather adhere to the school of photography that involves as little postprocessing as possible (also known as “PP” amongst squabbling online photography forum participants). Anyone who has had to scan negatives with a flatbed scanner with a negative adapter and deal with constant dust contamination, never mind develop film, soon realises what a burden has been lifted from them by digital photography. (On the subject of developing using film, ignoring some work experience in a print shop, I have only ever done so in the distant past with a pinhole photography kit, which was actually fun. Doing so regularly would be somewhat more tedious, I can imagine.)

What I didn’t realise until today, and it was driving me crazy, was that another piece of vital functionality seemed to be reserved for the editing mode in digiKam unless you change the settings. I was seeing rather pixelated images at the “1:1” or 100% view and was rather surprised that a camera I had recently bought was performing so badly: was my camera really so badly configured? To keep this story moving at an acceptable pace, I’ll give you the briefest comprehensible summary of my investigation and its conclusion.

First, I configured my camera to take raw images, and then I took some raw images: digiKam imported them, which is a nice touch, but I really wanted to see what the sensor was actually producing (and actually I still have to figure that out). And so I launched the editing mode and, to my surprise, got to see my pictures in all their proper glory, no pixelation or anything of the kind: they were what I was expecting from the very beginning. And then I realised that perhaps digiKam treats the album view as some kind of preview, even when you view the images at full resolution. This led me into the “Settings” menu and into “Configure digiKam…” and here is what I found:

The digiKam settings of note in this particular case

Here, I’ve helped you out a little in finding the offending setting: it’s in the “Preview Options” part at the bottom and is called “Embedded preview loads full-sized images.” Selecting this has a dramatic effect on the “preview” as can be seen from an image with this setting in its apparently natural unselected state:

Pixelation visible in a full resolution crop in digiKam

And here is the result when selecting the setting:

A full resolution crop being displayed properly

So what do we learn from this? Well, it did occur to me that perhaps I’m just not using digiKam in the right way: “everybody else” goes into every image with the editor, or maybe the slideshow option displays images properly and they all use that to properly look at their photographic output. But I remain baffled as to why anyone would want to see some presumably optimised-for-performance preview when they are looking at their images at full resolution. For anyone suffering from this unfathomable behaviour, I hope this article floats into your Internet frame of view so that you can avoid the same frustration.

Update

It turns out that the low resolution preview feature is covered by a reported bug in digiKam.

Meanwhile, I have since realised that when digiKam is allowed to “download” pictures from an SD card, it wants to rotate them all even if no rotation is necessary, actually changing all the files. With this errant behaviour avoided by just copying the pictures from the card directly (and using Gwenview to view them), it turns out that digiKam is quite happy to show pictures in the correct orientation all by itself. Unfortunately, it seems completely unable to abandon its own erroneous understanding of the orientation of some pictures (“downloaded” by digiKam and subsequently replaced by the actual image files straight from the card), even after I moved those pictures out of digiKam, removed all image entries for the album concerned from its database, and removed all thumbnail entries for that album from its thumbnail database.

I am starting to feel that the simpler option – Gwenview – is looking better all the time, but I’ll miss the tagging features of digiKam and the relatively convenient EXIF perusal, especially since Gwenview’s tagging seems broken for me and its EXIF support seems somewhat perfunctory.