Paul Boddie's Free Software-related blog


Archive for the ‘FSFE’ Category

An Absence of Strategy?

Wednesday, January 16th, 2019

I keep starting articles but not finishing them. However, after responding to some correspondence recently, where I got into a minor rant about a particular topic, I thought about starting this article and more or less airing the rant for a wider audience. I don’t intend to be negative here, so even if this sounds like me having a moan about how things are, I really do want to see positive and constructive things happen to remedy what I see as deficiencies in the way people go about promoting and supporting Free Software.

The original topic of the correspondence was my brother’s article about submitting “apps” to F-Droid, the Free Software application repository for Android, which somehow got misattributed to me in the FSFE newsletter. As anyone who knows both of us can imagine, it is not particularly unusual that people mix us up, but it does still surprise me how people can be fluid about other people’s names and assume that two people with the same family name are the same person.

Eventually, the correction was made, for which I am grateful, and it must be said that I do also appreciate the effort that goes into writing the newsletter. Having previously had the task of doing some of the Fellowship interviews, I know that such things require more work than people might think, largely go either unnoticed or unremarked, and as a participant in the process it can be easy to wonder afterwards if it was worth the bother. I do actually follow the FSFE Planet and the discussion mailing list, so I’d like to think that I keep up with what other people do, but the newsletter must have some value to those who don’t want to follow a range of channels.

A Rant about Free Software on Mobile

Well, it wasn’t as much of a rant as it was a moan about how there doesn’t seem to be a coherent strategy about Free Software on mobile devices. The FSFE has had some kind of campaign about Android for quite some time. What it amounts to is promoting Free Software applications and Free Software distributions on phones.

This probably isn’t significantly different from the activities promoted by the FSF, whose Defective By Design campaign features a gift guide promoting phones that run Free Software. The FSF also promotes and funds the Replicant project, of which more below.

For all I know, the situation regarding getting Free Software applications onto a phone probably isn’t all that dire, assuming that Google and phone vendors don’t try to prevent users from installing software that isn’t delivered via Google Play or other officially-sanctioned channels. Or as the Android documentation puts it:

Of course, this is rather reminiscent of the “bad old days” where some people could copy things on and off their phone using Bluetooth (or for those with particularly long memories, infrared communication) whereas others had those features disabled by their provider. So, while some people get to enjoy the benefits of Free Software, others are denied them: another case of divide-and-rule in action, I suppose.

But it is the situation regarding Free Software distributions, more specifically having a Free Software operating system with Free Software device drivers and Free Software firmware, that is most worrying. The FSFE campaign points people to the two enduring initiatives for putting a different kind of Android distribution on phones: Replicant and LineageOS (previously CyanogenMod).

While LineageOS seems to try to support as many devices as possible, it relies on non-free software to support device hardware. Meanwhile, Replicant employs only Free Software and is therefore limited in which devices it may support and to what extent those devices’ features will function.

Although I can’t really muster much enthusiasm for Android and its derivatives, I don’t think there is anything wrong with trying to provide a completely Free Software distribution of that software. Certainly, there will always be challenges with the evolution of the upstream code, being steered this way and that by its corporate parent for maximum corporate benefit, but this isn’t really much different from keeping up with the pace of change (and churn) in an openly-developed project like the Linux kernel.

But ultimately, these initiatives will always be reacting to what other people, specifically those working for large companies, have decided to do. It will always be about chasing the latest release of the upstream software and making it acceptable for a Free Software audience. And it will always be about seeing whether the latest devices can be supported or not and then trying to figure out how. And this is where most people start to wonder why things always have to be like this, spurring my rant.

For the Long Term

To be fair to the FSFE’s Android-related campaign, the advice given does give people some concrete activities to consider: it isn’t simply the “go out and write code” battle cry that sometimes drifts through the air after an acrimonious episode where nobody can agree with each other. Helping F-Droid get more applications published, writing more Free Software applications, helping the operating system projects with their efforts: these are all worthwhile things for people to do.

But we only need look at the FSF’s ethical gift guide to see the severe challenges being faced over the longer term. For yet another year, the only offerings are older, refurbished Samsung smartphones, the most recent being the Galaxy S3 and Galaxy Note 2 from 2012. Now, there is nothing wrong with older hardware or refurbished devices. After all, I have written about hardware that is older and much less powerful than that, which I believe should still have a place in modern life.

Nor should people regard the price of such refurbished units as particularly expensive. Of course you can buy a new phone with better specifications for the same or even less, but that new phone won’t be running only Free Software. Yes, there is always the Fairphone, whose creators supposedly now have a rather better grip on Free Software, software longevity and other matters that weren’t confronted in the beginning, but the hardware drivers remain non-free, so it isn’t really comparable, either.

Here, it is illustrative to consider community-originating efforts to develop smartphones, particularly since there is a perception that such efforts eventually end up pitching “expensive” and “underpowered” devices that “aren’t competitive”. There is obviously a collection of such initiatives ongoing at any given point, but ignoring random crowdfunding campaigns and corporate publicity stunts, we might usefully focus on some more familiar projects: the GTA04 successor to the Openmoko FreeRunner and the Neo900 successor to the Nokia N900.

Both of these projects are in a not-easily-explained state. The GTA04 device was made in a number of incrementally refined versions and sold primarily to people who already had a FreeRunner into which they could install the new hardware. However, difficulties with the last hardware revision meant that it was no longer viable as a product, with the cost of overcoming production problems being rather likely to exceed any money otherwise made on the units.

Meanwhile, the Neo900 project is effectively stalled having experienced several setbacks, notably the freezing of funds by PayPal for no really good reason, and difficulties in finding and retaining qualified people to do things like board layout. Although there are aspirations to get to completion, perhaps with some modification of the original goals, the path to completion remains obscure and uncertain. It is certainly hard to sustain my initial enthusiasm for the project, even if I do sympathise deeply with the struggles and difficulties of those trying to deliver something that they want to see succeed perhaps more than anyone else.

The future is not necessarily entirely bleak for these projects, though. Experience from the GTA04 effort may have been beneficial to the development of the Pyra handheld computer, whose own genesis has been troubled at times and yet forthrightly communicated in an honourably transparent fashion by its initiator, and the CPU board for that device may end up as the basis for a new product known as the GTA15. Given the common architectural heritage of the GTA04, N900 and Neo900, it would not be completely inconceivable that, should some way forward be found for the Neo900, the GTA15 might contribute to it.

What these projects should illustrate, however, is that the foundations of a Free Software mobile device are difficult to prepare, subject to sudden and potentially catastrophic setbacks or outright failure, and they require persistence and plenty of resources, not least of the financial kind but also the dedication of people with the right competence and experience. Sadly, these projects never get the attention, the recognition, or the generosity they deserve.

Seeing the Bigger Picture

If we care about being able to support all the different elements of our phones with Free Software, instead of crossing out items in the specification list, sacrificing functionality because nobody knows how it works without signing a non-disclosure agreement and then only being allowed to release a binary blob for loading into specific Linux kernel versions, then we need to be there at the start, when the phone gets designed, and play a role in making sure that everything can be supported by Free Software. Spending time and effort on figuring out someone else’s hardware is not time and effort well spent.

Indeed, from the moment a proprietary product gets into the hands of developers, the clock is ticking. Already, given the pace of product development and shipment, the product is heading for obsolescence and its manufacturer will be tooling up to make its successor. Even if downstream developers work quickly and get as much of the hardware supported as they can, there will only be a certain period before the product becomes difficult to obtain. And then the process of catch-up starts all over again with the next product.

Of course, product variations always used to happen with desktop and laptop computers. One day you’d get a laptop with one chipset and the next day the “same” laptop would contain something else. The only thing that eased the pain involved was broad hardware support for these kinds of devices, and even then there would be chipsets notorious for their lack of support in things like the Linux kernel.

Such pitfalls cultivated demands for products that could run Free Software and be supplied with such software instead of the usual proprietary products bundled as a consequence of Microsoft’s anticompetitive and coercive business practices. It was no longer enough to accept that we might buy a computer with bundled software and “install over it”, as if doing so might emphasise our Free Software credentials. Credible advocates of Free Software have sought to identify vendors offering systems that are either already supplied with Free Software or that come without any preinstalled software at all, in both cases being fully capable of supporting a Free Software distribution.

But we now find ourselves in the absurd situation with mobile devices where remedial measures comparable to “install over it” are almost the only things people can suggest, that there really aren’t any mobile device vendors who can offer a bare, supportable device or one that is, say, already running Replicant and offering access to all of the device’s features. And although refurbished devices are sold that may run Replicant well enough, we lack another essential guarantee that may not have been so important in the past, that of continued availability, something that community-originating hardware projects might be able to help with.

In being involved with the design of these devices, we can seek to dictate how long they remain viable. Instead of having a large corporation decide that now is the time for you to buy their next device and that the product you bought and liked is now deliberately unavailable, we can seek to keep making devices as long as they have a role and have people wanting to keep using them.

If something runs a Free Software distribution well enough, and if that device can still be made, it becomes a safe choice and something we can recommend to others. At last, we get some kind of certainty in a world whose stream of continual change is often fad-driven, exploitative and needless.

The Strategic Vacuum

So it seems obvious to me that if people want Free Software on phones, they need to cultivate the efforts to make that Free Software viable, which means cultivating sustainable hardware design and actually promoting and supporting the projects pursuing it. Otherwise, it is like trying to plant an orchard without paying attention to the soil, cursing that the trees will not grow whilst being oblivious to the fact that the ground is concrete.

And this goes beyond this particular domain. Free Software advocacy is all well and good, but there needs to be practical action that goes beyond pitching in and nudging things towards success. It is wonderful that collective effort and collaboration can take small endeavours and grow them into solutions that are useful for many, but it can be too much to expect everything to just coalesce as if by magic, that people and projects will just discover each other, work together, strengthen each other and multiply the benefits.

There needs to be a strategy, for people to identify real-world problems that are not being addressed by Free Software, to identify the projects that might be applied to those problems, and to propose actual solutions. In the absence of Free Software, proprietary and exploitative solutions are able to stake out their territory, entrench themselves and thrive.

Here, I struggle to see which Free Software organisations have both the breadth of scope and the depth of focus to make a difference. Developer-driven organisations like Debian and KDE have a lot going on, and they deliver non-trivial software systems, and yet sustaining something like a viable mobile platform (encompassing hardware and software) is seemingly beyond their reach. Neither has a coherent answer to other significant challenges of our time, such as the dominance of proprietary (and destructive) communications platforms.

Meanwhile, advocacy-driven organisations like the FSFE merely give advice to people about how they might help Free Software. The FSF arguably combines this with actual development through the GNU project, and there are those lists of urgent activities, but one cannot help but have the impression that the urgency is reactive and not the product of strategic vision (beyond the goal of the proliferation of Free Software, of course). I would like to think that the Software Freedom Conservancy might combine things more effectively, but it perhaps remains more of an umbrella organisation with a similar emphasis on broad Free Software adoption.

I would like to think that the step of getting involved in Free Software is but the first step towards fairer, kinder, more transparent, more productive and more sustainable societies. Traces of such visions can be seen in the communications of various organisations and yet they largely hold back from suggesting what the next steps might be. And yet, I think, we will in future really need to take those steps in order to protect ourselves, our societies, and the things we care about.

In general, we notice the absence of strategy; in specific cases, we notice it keenly. Which organisations are willing and able to fill this strategic void coherently and decisively, to offer concrete solutions as opposed to suggestions, to have something definite to work towards, and to direct effort and resources into actually realising such goals? Surely, in this age of hoping for the best, those organisations will be the ones that deserve our support the most.

Sustainable Computing

Monday, September 3rd, 2018

Recent discussions about the purpose and functioning of the FSFE have led me to consider the broader picture of what I would expect Free Software and its developers and advocates to seek to achieve in wider society. It was noted, as one might expect, that as a central component of its work the FSFE seeks to uphold the legal conditions for the use of Free Software by making sure that laws and regulations do not discriminate against Free Software licensing.

This indeed keeps the activities of Free Software developers and advocates viable in the face of selfish and anticompetitive resistance to the notions of collaboration and sharing we hold dear. Advocacy for these notions is also important to let people know what is possible with technology and to be familiar with our rich technological heritage. But it turns out that these things, although rather necessary, are not sufficient for Free Software to thrive.

Upholding End-User Freedoms

Much is rightfully made of the four software freedoms: to use, share, study and modify, and to propagate modified works. But it seems likely that the particular enumeration of these four freedoms was inspired (consciously or otherwise) by those famously stated by President Franklin D. Roosevelt in his 1941 “State of the Union” address.

Although some of Roosevelt’s freedoms are general enough to be applicable in any number of contexts (freedom of speech and freedom from want, for instance), others arguably operate on a specific level appropriate for the political discourse of the era. His freedom from fear might well be generalised to go beyond national aggression and to address the general fears and insecurities that people face in their own societies. Indeed, his freedom of worship might be incorporated into a freedom from persecution or freedom from prejudice, these latter things being specialised but logically consequent forms of a universal freedom from fear.

But what might end-users have to fear? The list is long indeed, but here we might as well make a start. They might fear surveillance, the invasion of their privacy, and being manipulated to their disadvantage; the theft of their data, their identity and their belongings; the loss of their access to technology, be that through vandalism, technological failure or obsolescence; or the needless introduction of inaccessible or unintuitive technology in the service of fad and fashion.

Using technology has always entailed encountering risks, and the four software freedoms are a way of mitigating those risks, but as technology has proliferated it would seem that additional freedoms, or additional ways of asserting these freedoms, are now required. Let us look at some areas where advocacy and policy work fail to reach all by themselves.

Cultivating Free Software Development

Advocating for decent laws and the fair treatment of Free Software is an essential part of the work of organisations like the FSFE. But there also has to be Free Software out in the wider world to be treated fairly, and here we encounter another absent freedom. Proponents of the business-friendly interpretation of “open source” insist that Free Software happens all by itself, that somewhere someone will find the time to develop a solution that is ripe for wider application and commercialisation.

Of course, this neglects the personal experience of any person actually doing Free Software development. Even if people really are doing a lot of development work in their own time, playing out their roles precisely as cast in the “sharing economy” (which seems to be rather more about wringing the last drops of productivity out of the lower tiers of the economy than about anyone in the upper tiers actually participating in any “sharing”), it is rather likely that someone else is really paying their bills, maybe an employer who pays them to do something else during the day. These people squeeze their Free Software contributions in around the edges, hopefully not burning themselves out in the process.

Freedom from want, then, very much applies to Free Software development. For those who wish to do the right thing and even get paid for it, the task is to find a sympathetic employer. Some companies do indeed value Free Software enough to pay people to develop it, maybe because those companies provide such software themselves. Others may only pay people as a consequence of providing non-free software or services that neglect some of the other freedoms mentioned above. And still others may just treat Free Software as that magical resource that keeps on providing code for nothing.

Making sure that Free Software may actually be developed should be a priority for anyone seriously advocating Free Software adoption. Otherwise, it becomes a hypothetical quantity: something that could be used for serious things but might never actually be observed in such forms, easily dismissed as the work of “hobbyists” and not “professionals”, never mind that the same people can act in either capacity.

Unfortunately, there are no easy solutions to suggest for this need. It is fair to state that with a genuine “business case”, Free Software can get funded and find its audience, but that often entails a mixture of opportunism, the right connections, and an element of good fortune, as well as the mindset needed to hustle for business that many developers either do not have or do not wish to cultivate. It also assumes that all Free Software worth funding needs to have some kind of sales value, whereas much of the real value in Free Software is not to be found in things that deliver specific solutions: it is in the mundane infrastructure code that makes such solutions possible.

Respecting the User

Those of us who have persuaded others to use Free Software have not merely been doing so out of personal conviction that it is the ethically-correct thing for us and those others to use. There are often good practical reasons for using Free Software and asserting control over computing devices, even if it might make a little more work for us when things do not work as they should.

Maybe the risks of malware or experience of such unpleasantness modifies attitudes, combined with a realisation that not much help is actually to be had with supposedly familiar and convenient (and illegally bundled) proprietary software when such malevolence strikes. The very pragmatism that Free Software advocates supposedly do not have – at least if you ask an advocate for proprietary or permissively-licensed software – is, in fact, a powerful motivation for them to embrace Free Software in the first place. They realise that control is an illusion without the four software freedoms.

But the story cannot end with the user being able to theoretically exercise those freedoms. Maybe they do not have the time, skills or resources to do so. Maybe they cannot find someone to do so on their behalf, perhaps because nobody is able to make a living performing such services. And all the while, more software is written, deployed and pushed out globally. Anyone who has seen familiar user interfaces becoming distorted, degraded, unfamiliar, frustrating as time passes, shaped by some unfathomable agenda, knows that only a very well-resourced end-user would be able to swim against such an overpowering current.

Respecting the user must involve going beyond acknowledging their software freedoms to acknowledging their needs: for secure computing environments that remain familiar (even if that seems “boring”), that do not change abruptly (because someone had a brainwave in an airport lounge waiting to go to some “developer summit” or other), that allow sensible customisation that can be reconciled with upstream improvements (as opposed to imposing a “my way or the highway”, “delete your settings” ultimatum). It involves recognising their investment in the right thing, not telling them that they have to work harder, or to buy newer hardware, just to keep up.

And this also means that the Free Software movement has to provide answers beyond those concerning the nature of the software. Free Software licensing does not have enough to say about privacy and security, let alone how those things might be upheld in the real world. Yet such concerns do impact Free Software developers, meaning that some kinds of solutions do exist that might benefit a wider audience. Is it not time to deliver things like properly secure communications where people can trust the medium, verify who it is that sends them messages, ignore the time-wasters, opportunists and criminals, and instead focus on the interactions that are meaningful and important?

And is it not time that those with the knowledge and influence in the Free Software realm offered a more coherent path to achieving this, instead of all the many calls for people to “use encryption” only to be presented with a baffling array of options and a summary that combines “it’s complicated” with “you’re on your own”? To bring the users freedom from the kind of fear they experience through a lack of privacy and security? It requires the application of technical knowledge, certainly, but it also requires us to influence the way in which technology is being driven by forces in wider society.
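
To make the point concrete, the primitives themselves are not the obstacle. The following is only an illustrative sketch in Python, assuming the third-party “cryptography” library rather than any particular product’s API: a message is signed with a private key and verified with the corresponding public key, which is what “verifying who sent a message” amounts to technically. The genuinely hard problems – distributing keys, establishing trust in them, and wrapping such mechanisms in software that ordinary people can use – are precisely the ones left unaddressed by “use encryption” advice.

    # Illustrative sketch: message signing and verification with Ed25519,
    # using the third-party "cryptography" library. Key distribution and
    # trust, the genuinely hard problems, are not addressed here.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # the sender's secret key
    public_key = private_key.public_key()        # shared with recipients

    message = b"Meeting moved to 14:00."
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)    # raises if forged or altered
        print("Signature valid: the key holder sent this message.")
    except InvalidSignature:
        print("Signature invalid: do not trust this message.")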

Doing the Right Thing

Free Software, especially when labelled as “open source”, often has little to say about how the realm of technology should evolve. Indeed, Free Software has typically reacted to technological evolution, responding to the demands of various industries, but not making demands of its own. Of course, software in itself is generally but a mere instrument to achieve other things, and there are some who advocate a form of distinction between the specific ethics of software freedom and ethics applying elsewhere. For them, it seems to be acceptable to promote “open source” while undermining the rights and freedoms of others.

Our standards should be far higher than that! Although there is a logical argument to not combine other considerations with the clearly-stated four software freedoms, it should not stop us from complementing those freedoms with statements of our own values. Those who use or are subject to our software should be respected and their needs safeguarded. And we should seek to influence the development of technology to uphold our ideals.

Let us consider a mundane but useful example. The World Wide Web has had an enormous impact on society, providing people with access to information, knowledge, communication and services on a scale and with a convenience that would have been almost unimaginable only a few decades ago. In the beginning, it was slow (due to bandwidth limitations, even on academic networks), it was fairly primitive (compared to some kinds of desktop applications), and it lacked support for encryption and sophisticated interactions. More functionality was needed to make it more broadly useful for the kinds of things people wanted to do with it.

In the intervening years, a kind of “functional escalation” has turned it into something that is indeed powerful, with sophisticated document rendering and interaction mechanisms, perhaps achieving some of the ambitions of those who were there when the Web first gathered momentum. But it has caused a proliferation of needless complexity, as sites lazily call out to pull down megabytes of data to dress up significantly smaller amounts of content, as “trackers” and “analytics” are added to spy on the user, as absurd visual effects are employed (background videos, animated form fields), with the user’s computer now finding it difficult to bear the weight of all this bloat, and with that user struggling to minimise their exposure to privacy invasions and potential exploitation.
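
As a rough illustration, and only as a sketch – the URL is a placeholder, and a full accounting would also fetch every referenced script, stylesheet and image – one can compare the size of a page’s raw HTML with the size of its visible text, and count the external resources the page asks the browser to fetch, using nothing more than the Python standard library:

    # Rough sketch: compare a page's raw HTML size with its visible text
    # size, and count referenced external resources. Standard library only.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class BloatMeter(HTMLParser):
        def __init__(self):
            super().__init__()
            self.text_bytes = 0   # bytes of human-readable text
            self.resources = 0    # scripts, stylesheets, images, frames
            self.skip = 0         # depth inside <script> or <style>

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self.skip += 1
            attrs = dict(attrs)
            if tag in ("script", "img", "iframe") and attrs.get("src"):
                self.resources += 1
            elif tag == "link" and attrs.get("href"):
                self.resources += 1

        def handle_endtag(self, tag):
            if tag in ("script", "style") and self.skip:
                self.skip -= 1

        def handle_data(self, data):
            if not self.skip:
                self.text_bytes += len(data.strip().encode("utf-8"))

    html = urlopen("https://example.com/").read()  # placeholder URL
    meter = BloatMeter()
    meter.feed(html.decode("utf-8", errors="replace"))
    print(len(html), "bytes of HTML")
    print(meter.text_bytes, "bytes of visible text")
    print(meter.resources, "external resources referenced")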

For many years it was a given that people would, even should, upgrade their computers regularly. It was almost phrased as a public duty by those who profited from driving technological progress in such a selfish fashion. As is frequently the case with technology, it is only after people have realised what can be made possible that they start to consider whether it should have been made possible at all. Do we really want to run something resembling an operating system in a Web browser? Is it likely that this will be efficient or secure? Can we trust the people who bring us these things, their visions, their competence?

The unquestioning proliferation of technology poses serious challenges to the well-being of individuals and the ecology of our planet. As people who have some control over the way technology is shaped and deployed, is it not our responsibility to make sure that its use is not damaging to its users, that it does not mandate destructive consumer practices, that people can enjoy the benefits – modest as they often are when the background videos and animated widgets are stripped away – without being under continuous threat of being left behind, isolated, excluded because their phone or computer is not this season’s model?

Strengthening Freedoms

In rather too many words, I have described some of the challenges that need to be confronted by Free Software advocates. We need to augment the four software freedoms with some freedoms or statements of our own. They might say that the software and the solutions we want to develop and to encourage should be…

  • Sustainable to make: developers and their collaborators should be respected, their contributions fairly rewarded, their work acknowledged and sustained by those who use it
  • Sustainable to choose and to use: adopters should have their role recognised, with their choices validated and rewarded through respectful maintenance and evolution of the software on which they have come to depend
  • Encouraging of sustainable outcomes: the sustainability of the production and adoption of the software should encourage sustainability in other ways, promoting longevity, guarding against obsolescence, preventing needless and frivolous consumption, strengthening society and making it fairer and more resilient

It might be said that in order to have a fairer, kinder world there is no shortage of battles to be fought. With such sentiments, the discussion about what more might be done is usually brought to a swift conclusion. In this article, I hope to have made a case that what we can be doing is often not so different from what we are already doing.

And, of course, this brings us back to the awkward matter of why we, or the organisations we support, are not always so enthusiastic about these neglected areas of concern. Wouldn’t we all be better off by adding a dimension of sustainability to the freedoms we already recognise and enjoy?

Public Money, Public Code, Public Control

Thursday, September 14th, 2017

An interesting article published by the UK Government Digital Service was referenced in a response to the LWN.net coverage of the recently-launched “Public Money, Public Code” campaign. Arguably, the article focuses a little too much on “in the open” and perhaps not enough on the matter of control. Transparency is a good thing, collaboration is a good thing, no-one can really argue with spending less tax money and getting more out of it, but it is the matter of control that makes this campaign and similar initiatives so important.

In one of the comments on the referenced article you can already see the kind of resistance that this worthy and overdue initiative will meet. There is this idea that the public sector should just buy stuff from companies and not be in the business of writing software. Of course, this denies the reality of delivering solutions where you have to pay attention to customer needs and not just have some package thrown through the doorway of the customer as big bucks are exchanged for the privilege. And where the public sector ends up managing its vendors, you inevitably get an under-resourced customer paying consultants to manage those vendors, maybe even their own consultant colleagues. Guess how that works out!

There is a culture of proprietary software vendors touting their wares or skills to public sector departments, undoubtedly insisting that their products are a result of their own technological excellence and that they are doing their customers a favour by merely doing business with them. But at the same time, those vendors need a steady – perhaps generous – stream of revenue consisting largely of public money. Those vendors do not want their customers to have any real control: they want their customers to be obliged to come back year after year for updates, support, further sales, and so on; they want more revenue opportunities rather than their customers empowering themselves and collaborating with each other. So who really needs whom here?

Some of these vendors undoubtedly think that the public sector is some kind of vehicle to support and fund enterprises. (Small- and medium-sized enterprises are often mentioned, but the big winners are usually the corporate giants.) Some may even think that the public sector is a vehicle for “innovation” where publicly-funded work gets siphoned off for businesses to exploit. Neither of these things cultivate a sustainable public sector, nor do they even create wealth effectively in wider society: they lock organisations into awkward, even perilous technological dependencies, and they undermine competition while inhibiting the spread of high-quality solutions and the effective delivery of services.

Unfortunately, certain flavours of government hate the idea that the state might be in a role of actually doing anything itself, preferring that its role be limited to delegating everything to “the market” where private businesses will magically do everything better and cheaper. In practice, under such conditions, some people may benefit (usually the rich and well-represented) but many others often lose out. And it is not unknown for the taxpayer to have to pick up the bill to fix the resulting mess that gets produced, anyway.

We need sustainable public services and a sustainable software-producing economy. By insisting on Free Software – public code – we can build the foundations of sustainability by promoting interoperability and sharing, maximising the opportunities for those wishing to improve public services by upholding proper competition and establishing fair relationships between customers and vendors. But this also obliges us to be vigilant to ensure that where politicians claim to support this initiative, they do not try to limit its impact by directing money away from software development to the more easily subverted process of procurement, while claiming that procured systems need not be subject to the same demands.

Indeed, we should seek to expand our campaigning to cover public procurement in general. When public money is used to deliver any kind of system or service, it should not matter whether the code existed in some form already or not: it should be Free Software. Otherwise, we indulge those who put their own profits before the interests of a well-run public sector and a functioning society.

Where Now for the Free Software Desktop?

Friday, May 31st, 2013

It is a recurring but tiresome joke: is this the year of the Linux desktop? In fact, the year I started using GNU/Linux on the desktop was 1995 when my university department installed the operating system as a boot option alongside Windows 3.1 on the machines in the “PC laboratory”, presumably starting the migration of Unix functionality away from expensive workstations supplied by Sun, DEC, HP and SGI and towards commodity hardware based on the Intel x86 architecture and supplied by companies who may not be around today either (albeit for reasons of competing with each other on razor-thin margins until the slightest downturn made their businesses non-viable). But on my own hardware, my own year of the Linux desktop was 1999 when I wiped Windows NT 4 from the laptop made available for my use at work (and subsequently acquired for my use at home) and installed Red Hat Linux 6.0 over an ISDN link, later installing Red Hat Linux 6.1 from the media included in the official boxed product bought at a local bookstore.

Back then, some people had noticed that GNU/Linux was offering a lot better reliability than the Windows range of products, and people using X11 had traditionally been happy enough (or perhaps knew not to ask for more) with a window manager, perhaps some helper utilities to launch applications, and some pop-up menus to offer shortcuts for common tasks. It wasn’t as if the territory beyond the likes of fvwm had not already been explored in the Unix scene: the first Unix workstations I used in 1992 had a product called X.desktop which sought to offer basic desktop functionality and file management, and other workstations offered such products as CDE or elements of Sun’s Open Look portfolio. But people could see the need for something rather more than just application launchers and file managers. At the very least, desktops also needed applications to be useful and those applications needed to look, act like, and work with the rest of the desktop to be credible. And the underlying technology needed to be freely available and usable so that anyone could get involved and so that the result could be distributed with all the other software in a GNU/Linux distribution.

It was this insight – that by giving people the tools and graphical experience that they needed or were accustomed to, the audience for Free Software would be broadened – that resulted first in KDE and then in GNOME; the latter being a reaction to the lack of openness of some of the software provided in the former as well as to certain aspects of the technologies involved (because not all C programmers want to be confronted with C++). I remember a conversation around the year 2000 with a systems administrator at a small company that happened to be a customer of my employer, where the topic somehow shifted to the adoption of GNU/Linux – maybe I noticed that the administrator was using KDE or maybe someone said something about how I had installed Red Hat – and the administrator noted how KDE, even at version 1.0, was at that time good enough and close enough to what people were already using for it to be rolled out to the company’s desktops. KDE 1.0 wasn’t necessarily the nicest environment in every regard – I switched to GNOME just to have a nicer terminal application and a more flexible panel – but one could see how it delivered a complete experience on a common technological platform rather than just bundling some programs and throwing them onto the screen when commanded via some apparently hastily-assembled gadget.

Of course, it is easy to become nostalgic and to forget the shortcomings of the Free Software desktop in the year 2000. Back then, despite the initial efforts in the KDE project to support HTML rendering that would ultimately produce Konqueror and lead to the WebKit family of browsers, the most credible Web browsing solution available to most people was, if one wishes to maintain nostalgia for a moment, the legendary Netscape Communicator. Indeed, despite its limitations, this application did much to keep the different desktop environments viable in certain kinds of workplaces, where as long as people could still access the Web and read their mail, and not cause problems for any administrators, people could pretty much get away with using whatever they wanted. However, the technological foundations for Netscape Communicator were crumbling by the month, and it became increasingly unsupported and unmaintained. Relying on a mostly proprietary stack of software as well as being proprietary itself, it was unmaintainable by any community.

Fortunately, the Free Software communities produced Web browsers that were, and are, not merely viable but also at the leading edge of their field in many ways. One might not choose to regard the years of Netscape Communicator’s demise as those of crisis, but we have much to be thankful for in this respect, not least that the Mozilla browser (in the form of SeaMonkey and Firefox) became a stable and usable product that did not demand too much of the hardware available to run it. And although there has never been a shortage of e-mail clients, it can be considered fortunate that projects such as KMail, Kontact and Evolution were established and have been able to provide years of solid service.

The sudden end of a pathway

You Are Here

With such substantial investment made in the foundations of the Free Software desktop, and with its viability established many years ago (at least for certain groups of users), we might well expect to be celebrating now, fifteen years or so after people first started to see the need for such an endeavour, reaping the rewards of that investment and demonstrating a solution that is innovative and yet stable, usable and yet reliable: a safe choice that has few shortcomings and that offers more opportunities than the proprietary alternatives. It should therefore come as a shock that the position of the Free Software desktop is more troubled now than it has been for quite some time.

As time passed by, KDE 1 was followed by KDE 2 and then KDE 3. I still use KDE 3, clinging onto the functionality while it still runs on a distribution that is still just about supported. When KDE 2 came out, I switched back from GNOME because the KDE project had a more coherent experience on offer than bundling Mozilla and Evolution with what we would now call the GNOME “shell”, and for a long time I used Konqueror as my main Web browser. Although things didn’t always work properly in KDE 3, and there are things that will never be fixed and polished, it perhaps remains the pinnacle of the project’s achievements.

On KDE 3, the panel responsible for organising and launching applications, showing running applications and the status of various things, just works; I may have spent some time moving icons around a few years ago, but it starts up and everything uses the right amount of space. The “K” (or “start”) menu is just a menu, leaving itself open to the organisational whims of the distribution, but on my near-obsolete Kubuntu version there’s not much in the way of bizarre menu entry and application duplication. Kontact lets me read mail and apply spam filters that are still effective, filter and organise my mail into folders; it shows HTML mail only when I ask it to, and it shows inline images only when I tell it to, which are both things that I have seen Thunderbird struggle with (amongst certain other things). Konqueror may not be a viable default for Web browsing any more, but it does a reasonable job for files on both local disks and remote systems via WebDAV and ssh, the former being seemingly impossible with Nautilus against the widely-deployed Free Software Web server that serves my files. Digikam lets me download, view and tag my pictures, occasionally also being used to fix the metadata; it only occasionally refuses to access my camera, mostly because the underlying mechanisms seem to take an unhealthy interest in how many files there are in the camera’s memory; Amarok plays my music from my local playlists, which is all I ask of it.
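
Incidentally, the remote file access mechanism involved is no mystery: listing a WebDAV directory is just an HTTP request using the PROPFIND method with a “Depth: 1” header. A minimal sketch using only the Python standard library follows, with a hypothetical host and path standing in for real ones:

    # Minimal sketch of the WebDAV listing operation a file manager
    # performs: PROPFIND with Depth: 1, answered with 207 Multi-Status.
    import http.client
    import xml.etree.ElementTree as ET

    conn = http.client.HTTPSConnection("dav.example.com")  # hypothetical host
    body = ('<?xml version="1.0"?>'
            '<propfind xmlns="DAV:">'
            '<prop><displayname/><getcontentlength/></prop>'
            '</propfind>')
    conn.request("PROPFIND", "/files/", body=body,
                 headers={"Depth": "1", "Content-Type": "application/xml"})
    response = conn.getresponse()
    tree = ET.fromstring(response.read())
    for href in tree.iter("{DAV:}href"):
        print(href.text)  # one entry per file or subdirectory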

So when my distribution finally loses its support, or when I need to recommend a desktop environment to others, even though KDE 3 has served me very well, I cannot really recommend it because it has for the most part been abandoned. An attempt to continue development and maintenance does exist in the form of the Trinity Desktop Environment project, but the immensity of the task combined with the dissipation of the momentum behind KDE 3, and the effective discontinuation of the Qt 3 libraries that underpin the original software, means that its very viability must be questioned as its maintainers are stretched in every direction possible.

Why Can’t We All Get Along?

In an attempt to replicate what I would argue is the very successful environment that I enjoy, I tried to get Trinity working on a recent version of Ubuntu. The encouraging thing is that it managed to work, more or less, although there were some disturbing signs of instability: things would crash but then work when tried once more; the user administration panel wasn’t usable because it couldn’t find a shared library for the Python runtime environment that did, in fact, exist on the system. The “K” menu seemed to suffer from KDE 4 (or, rather, KDE Plasma Desktop) also being available on the same system because lots of duplicate menu entries were present. This may be an unfortunate coincidence, Trinity being overly helpful, or it may be a consequence of KDE 4 occupying the same configuration “namespace”. I sincerely hope that it is not the latter: any project that breaks compatibility and continuity in the fashion that KDE has done should not monopolise or corrupt a resource that actually contains the data of the user.

There probably isn’t any fundamental reason why Trinity could not return KDE 3 to some level of its former glory as a contemporary desktop environment, but getting there could certainly be made easier if everyone worked together a bit more. Having been obliged to do some user management tasks in another environment before returning to my exploration of Trinity, I attempted to set up a printer. Here, with the printer plugged in and me perusing the list of supported printers, there appeared to be no way of getting the system to recognise a three- or four-year-old printer that was just too recent for the list provided by Trinity. Returning to the other environment and trying there, a newer list was somehow obtained and the printer selected, although the exercise was still highly frustrating and didn’t really provide support for the exact model concerned.

But both environments presumably use the same print system, so why should one be better supplied than the other for printer definitions? Surely this is common infrastructure stuff that doesn’t specifically relate to any desktop environment or user interface. Is it easier to make such functionality for one environment than another, or just too convenient to take a short-cut and hack something up that covers the needs of one environment, or is it too much work to make a generic component to do the job and to package it correctly and to test it in more than a narrow configuration? Do the developers get worried about performance and start to consider complicated services that might propagate printer configuration details around the system in a high-performance manner before their manager or supervisor tells them to scale back their ambition?
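
Assuming that the common print system is CUPS, as it almost certainly is on a contemporary GNU/Linux system, the queue and driver information does live in shared infrastructure rather than in any desktop environment. As a small sketch, given the pycups bindings, any environment (or none at all) can ask the same questions through the same interface:

    # Small sketch using pycups: query the shared CUPS print system for
    # configured queues and known driver descriptions (PPDs). The answers
    # are the same whichever desktop environment does the asking.
    import cups

    conn = cups.Connection()
    for name, info in conn.getPrinters().items():
        print(name, "-", info["printer-info"], "-", info["device-uri"])

    ppds = conn.getPPDs()  # may take a moment; the list can be large
    print(len(ppds), "printer driver descriptions available")

Any difference between two environments’ printer dialogues is therefore a difference in their front-ends, or in whatever driver lists they choose to bundle, rather than in the underlying infrastructure itself.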

I remember having to configure network printers in the previous century on Solaris using special script files. I am also sure that doing things at the lowest levels now would probably be just as frustrating, especially since the pain of Solaris would be replaced by the need to deal with things other than firing plain PostScript across the network. But there seems to be a wide gulf between this and the point-and-click tools, one in which some level of stable, common infrastructure could exist to make sure that no matter how old and nostalgic a desktop environment may be perceived to be, it at least doesn’t need to dedicate effort towards tedious housekeeping or towards duplicating code that everyone needs and would otherwise have to write for themselves.

The Producers (and the Consumers)

One might be inclined to think that my complaints are largely formed through a bitter acceptance that things just don’t stay the same, although one can always turn this around and question why functioning software cannot go on functioning forever. Indeed, there is plenty of software on the planet that now goes about its business virtualised and still with the belief, if the operating assumptions of software can be considered in such a way, that it runs on the same range of hardware that it was written for, that hardware having been introduced (and even retired) decades ago. But for software that has to be run in a changing environment, handling new ways of doing things as well as repelling previously unimagined threats to its correct functioning and reliability, needing a community of people who are willing to maintain and develop the software further, one has to accept that there are practical obstacles to the sustained use of that software in the environments for which it was intended.

Particularly the matter of who is going to do all the hard work, and what incentives might exist to persuade them to do it if not for personal satisfaction or curiosity, is of crucial importance. This is where conflicts and misunderstandings emerge. If the informal contract between the users and developers were being made with no historical context whatsoever, with each side having expectations of the other, one might be more inclined to sympathise with the complaints of developers that they do all the hard work and yet the users merely complain. Yet in practice, the interactions take place in the context of the users having invested in the software, too. Certainly, even if the experience has not been one of complete, uninterrupted enjoyment, the users may not have invested as much energy as the developers, but they will have played their own role in the success of the endeavour. Any radical change to the contract involves writing off the users’ investment as well as that of the developers, and without the users providing incentives and being able to direct the work, they become exposed to unpredictable risk.

One fair response to the supposed disempowerment of the user is that the user should indeed “pay their way”. I see no conflict between this and the sustainable development of Free Software at all. If people want things done, one way that society has thoroughly established throughout the ages is that people pay for it. This shouldn’t stop people freely sharing software (or not, if they so choose), because people should ultimately realise that for something to continue on a sustainable basis, they or some part of society has to provide for the people continuing that effort. But do desktop developers want the user to pay up and have a say in the direction of the product? There is something liberating about not taking money directly from your customers and being able to treat the exercise almost like art, telling the audience that they don’t have to watch the performance if they don’t like it.

This is where we encounter the matter of reputation: the oldest desktop environments have been established for long enough that they are widely accepted by distributions, even though the KDE project that produces KDE 4 has quite a different composition and delivers substantially different code from the KDE project that produced KDE 3. By keeping the KDE brand around, the project of today is able to trade on its long-standing reputation, reassure distributions that the necessary volunteers will be able to keep up with packaging obligations, and indeed attract such volunteers through widespread familiarity with the brand. That is the good side of having a recognised brand; the bad side is that people’s expectations are higher and that they expect the quality and continuity that the brand has always offered them.

Globes against the light

What’s Wrong with Change?

In principle, nothing is essentially wrong with change. Change can complement the ways that things have always been done, and people can embrace new ways and come to realise that the old ways were just inferior. So what has changed on the Free Software desktop and how does it complement and improve on those trusted old ways of doing things?

It can be argued that KDE and GNOME started out with environments like CDE and Windows 95 acting as their inspiration at some level. As GNOME began to drift towards resembling the “classic” Mac OS environment in the GNOME 2 release series, having a menu bar at the top of the screen through which applications and system settings could be accessed, together with the current time and other status details, projects like XFCE gained momentum by appealing to the audience for whom the simple but configurable CDE paradigm was familiar and adequate. And now that Unity has reintroduced the Mac-style top menu bar as the place where application menus appear – a somewhat archaic interface even in the late 1980s – we can expect more users to discover other projects. In effect, the celebrated characteristics of the community around Free Software have let people go their own way in the company of developers who they felt shared the same vision or at least understood their needs best.

That people can indeed change their desktop environment and choose to run different software is a strength of Free Software and the platforms built on it, but the need for people to have to change – that those running GNOME, for example, feel that their needs are no longer being met and must therefore evaluate alternatives like XFCE – is a weakness brought about by projects that happily enjoy the popularity delivered by the reputation of their “brand” and the audience delivered by previous versions of that software, but which then feel that they can change the nature of their product in ways that no longer meet that audience’s needs while pretending to be delivering the same product. If a user can no longer do something in a new release of a product, that should be acknowledged as a failing in that product. The user should not be compelled to find another product to use and be told that since a choice of software exists, he or she should be prepared to exercise that choice at the first opportunity. Such disregard for the user’s own investment in the software that has now abandoned him or her, not to mention the waste of the user’s time and energy in having to install alternative software just to be able to keep doing the same things, is just unacceptable.

“They are the 90%”

Sometimes attempts at justifying or excusing change are made by referring to the potential audience reached by a significantly modified product. Having a satisfied group of users numbering in the thousands is not always as exciting as one that numbers in the millions, and developers can be jealous of the success of others in reaching such numbers. One still sees labels like “10x” or “x10” being used, and notions of ten-fold increases in audiences presented as a necessary strategy phrased in a new and innovative way, mostly by people who perhaps missed such terminology and the accompanying strategic doctrine the first time round (or second, or third time) many years ago. Such order-of-magnitude increases are often dictated by the assumption that the 90% not currently in the audience for a product would find the product too complicated or too technologically focused and that the product must therefore discard features, or change the way it exposes its features, in order to appeal to that 90%.

Unfortunately, such initiatives to reach larger audiences risk alienating the group of users that are best understood in order to reach groups of users that are largely under-researched and thus barely understood at all. (The 90% is not a single monolithic block of identically-minded people, of course, so there is more than one group of users.) Now, one tempting way of avoiding the need to understand the untapped mass of potential users is to imitate those who are successfully reaching those users already or who at least aspire to do so. Thus, KDE 4 and Windows Vista had a certain similarity, presumably because various visual characteristics of Vista, such as its usage of desktop gadgets, were perceived to be useful features applicable to the wider marketplace and thus “must have” features that provide a way to appeal to people who don’t already use KDE or Windows. (Having a tiny picture frame on the desktop background might be a nice way of replicating the classic picture of one’s closest family on one’s physical desktop, but I doubt that it makes or breaks the adoption of a technology. Many people are still using their computer at a desk or at home where they still have those pictures in plain view; most people aren’t working from the beach where they desperately need them as a thumbnail carousel obscured by their application windows.)

However, it should be remembered that the products being imitated may originate from organisations who do not really understand their potential audience, either. Windows Vista was perceived to be a decisive response by Microsoft to the threat of alternative platforms but was regarded as a flop despite being forced on computer purchasers. And even if new users adopt such products, it doesn’t mean that they welcome or unconditionally approve of the supposed innovations introduced in their name, especially if Microsoft and its business partners forced those users to adopt such products when buying their current machine.

The Linux Palmtop and the Linux Desktop

With the success of Android – or more accurately, Android/Linux – claims may be made that radical departures from the traditional desktop software stacks and paradigms are clearly the necessary ingredient for the success of GNU/Linux on the desktop, and it might be said that if only people had realised this earlier, the Free Software desktop would have become dominant. Moreover, it might also be argued that “desktop thinking” held back the adoption of Linux on mobile devices, too, opening the door for a single vendor to define the payload that delivered Linux to the mobile-device-consuming masses. Certainly, the perceived need for a desktop on a mobile or PDA (personal digital assistant) is entrenched: go back a few years and the GNOME Palmtop Environment (GPE) was seen as the counterweight to Windows CE on something like a Compaq iPAQ. Even on the Golden Delicious GTA04 device, LXDE – a lightweight desktop environment – has been the default environment, admittedly more for verification purposes than for practical use as a telephone.

Naturally, a desktop environment is fairly impractical on a small screen with limited navigational controls. Although early desktop systems had fewer pixels than today’s smartphones, their larger screens provided much greater navigational control and suited the desktop paradigm far better. It is interesting to note that the Xerox Star had a monochrome 1024×809 display – perhaps still more pixels than many smartphones offer – which (according to the Wikipedia entry) “…was meant to be able to display two 8.5×11 in pages side by side in actual size”. Even when we become able to show the equivalent amount of content on a smartphone screen at a resolution sufficient to permit its practical use, perhaps with very good eyesight or some additional assistance through magnification, it will remain a challenge to navigate that information precisely. Selecting text, for instance, will not be possible without a very precise pointing instrument.

Of course, one way of handling such challenges, already prevalent, is to zoom in and out of the content, with the focus in recent years on doing so through gestures. The ability to zoom in on documents at arbitrary levels has been around for many years in computer-aided design, illustration and desktop publishing, amongst other fields, and the notion of general user interfaces permitting fluid scrolling and zooming over a surface of content was already prominent before smartphones adopted and popularised it. New and different “form factors” – kinds of device with different characteristics – may offer improved methods of navigation, perhaps more natural than the traditional mouse and keyboard attached to a desktop computer. But those methods may not lend themselves to use on a desktop computer, with its proven ergonomics of sitting at a comfortable distance from a generously proportioned display and entering text on a dedicated device with an efficiency that is difficult to match using, say, a virtual keyboard on a touchscreen.

Proclamations are occasionally made that work at desks in offices will become obsolete in favour of the mobile workplace. But since that mobile workplace is apparently so often situated at the café table, the aircraft tray table or, once the lap or forearm tires of supporting a device, some other horizontal surface, the desktop paradigm with its supporting cast of input devices and sophisticated applications will remain in some form. It cannot be convincingly phased out in favour of content consumption paradigms that refuse to tackle the most demanding desktop functionality we enjoy today.

A Microsoft keyboard with buttons for bundled applications

The Real Obstacle

Up to this point, my pontification has limited itself to considering what has made the “Linux desktop” attractive in the past, what makes it attractive or unattractive today, and whether the directions currently being taken might make it more attractive to new audiences and to existing users, or instead fail to capture the interest of new audiences whilst alienating existing users. In an ideal world, with every option given equal attention, and with every individual able to exercise a completely free choice and adopt the product or solution that meets his or her own needs best, we could focus on the above topic and not have to worry about anything else. Unfortunately, we do not live in such an ideal world.

Most hardware sold in retail outlets visited by the majority of potential and current computer users is bundled with proprietary software in the form of Microsoft Windows. Indeed, in these outlets, accounting for the bulk of retail sales of computers to private customers, it is not possible to refuse the bundled software and to choose to buy only the hardware so that an alternative software system may be used with the computer instead (or at least not without needing to pursue vendors afterwards, possibly via legal avenues). In such an anticompetitive environment, many customers associate the bundled software with computer hardware in general and remain unaware of alternatives, leaving the struggle to educate those customers to motivated individuals, organisations like the FSF (and FSFE), and to independent retailers.

Unfortunately, individuals and organisations have to spend their own time and money (or their volunteers’ time and their donors’ money) righting the wrongs of the industry. Independent retailers offer hardware without bundled software, or offer Free Software pre-installed as a convenience but without cost, but the low margins on such “bare” or Free Software systems mean that they too are fixing the injustices of the system at their own expense. Once again, our considerations need to be broadened: we must weigh not merely the merits of Free Software delivered as a desktop environment for the end-user, but also the challenge of getting that software in front of the end-user in the first place. And further still: although we might (and should) encourage regulatory bodies to investigate the dubious practice of product bundling, we also need to consider how to support those who reach out and educate end-users without waiting for the regulators to act.

Putting the Pieces Together

One way we might support those getting Free Software onto hardware and into the hands of end-users – in this case, purchasers of new computer hardware – is to help them, the independent retailers, command a better price for their products. All other things being equal, people generally will not pay more for something that they can get cheaper elsewhere. If they can be persuaded that a product is better in certain ways, then paying a little more doesn’t seem so bad, and the extra cost can often be justified. Sadly, convincing people of the merits of Free Software can be a time-consuming process, and people may not see the significance of those merits straight away, so other selling points are also needed to help build up a sustainable margin that can be fed back into the Free Software ecosystem.

Some organisations advocate that bundling services is an acceptable way of appealing to end-users and making a bit of money that can fund Free Software development. However, such an approach risks bringing with it all that is undesirable about the illegitimately-bundled software that customers are obliged to accept on their new computer: advertising, compromised user experiences, and the feeling that they are not in control of their purchase. Moreover, seeking revenue from service providers or selling proprietary services to existing end-users fails to seriously tackle the market access issue that impacts Free Software the most.

A better and proven way of providing additional persuasion is, of course, to make a better product, which is why it is crucial to understand whether the different communities developing the “Linux desktop” are succeeding in doing so or not. If not, we need to understand what we need to do to help people offer viable products that people will buy and use, because the alternative is to continue our under-resourced campaign of only partly successful persuasion and education. And this may put our other activities at risk, too, affording us only desperate resistance to all the nasty anticompetitive measures, both political and technical, that well-resourced and industry-dominating corporations are able to initiate with relative ease.

In short, our activities need to fit together and to support each other so that the whole endeavour may be sustainable and be able to withstand the threats levelled against it and against us. We need to deliver a software experience that people will use and continue to use, we need to recognise that those who get that software in front of end-users need our support in running a viable business doing so (far more than large vendors who only court the Free Software communities when it suits them), and we need to acknowledge the threats to our communities and to be prepared to fight those threats or to support those who do so.

Once we have shown that we can work together and act on all of these simultaneously and continuously as a community, maybe then it will become clear that the year of the Linux desktop has at last arrived for good.

The FSFE Fellowship Events Calendar and GriCal

Monday, October 24th, 2011

Some time ago, I got involved with the FSFE Fellowship Wiki and implemented the theme code for MoinMoin that blends the Wiki together with the visual styling for the different Fellowship Web sites, along with an event calendar that shows what different Fellowship groups and individuals are doing – an activity which led to the release of the EventAggregator extension for MoinMoin.

Back then, a simple solution which let people add events by adding event pages to the Wiki, perhaps using the new event form, was sufficient and it seems that people did manage to publish their events after all. Having a solution already integrated with the Wiki was arguably preferable to installing yet more software and then thinking about theming and integration all over again, and nothing from the range of existing solutions – at least those known to the Wiki team – appeared to be particularly compelling. So, a bit of Wiki extension work got the job more or less done.

The form used to add new events to the Wiki

But now, some interesting work has become visible in the form of GriCal: a Free Software events service that not only carries details of numerous events related to Free Software, but is also a genuinely free solution whose code can be downloaded and deployed by those sufficiently motivated to do so. Some people prefer to publish their events on GriCal and would rather see them appear automatically in the Fellowship events calendar. So why not aggregate events not only from the content on the Wiki but also from other sites?

EventAggregator has virtually always supported the iCalendar format as a download format, so that the Fellowship events calendar can be viewed in your own calendar or organiser client. But could it not also support iCalendar as an input format? From the beginning, EventAggregator has defined events using the fundamental information familiar from iCalendar, albeit without all the BEGIN:VEVENT and DTSTART syntax, so surely only a few adjustments were necessary. Well, it required a bit more work than that – some refactoring and the revisiting of assumptions that tends to happen when software evolves to do more than originally intended – but now the distinction between events from remote sources and events from local pages is increasingly blurred: the calendar doesn’t care where events come from, although some caching within MoinMoin does try to prevent the sites acting as event sources from having their events downloaded too often.
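To give a flavour of what handling iCalendar input involves, here is a minimal, illustrative parser – not the actual EventAggregator code, and with hypothetical names throughout – that unfolds the format’s continuation lines and collects the properties of each VEVENT block:

    # Minimal, illustrative iCalendar VEVENT parsing (hypothetical names; the
    # real EventAggregator code is considerably more thorough).

    def unfold(lines):
        "Join folded iCalendar lines: continuations begin with a space or tab."
        current = ""
        for line in lines:
            if line[:1] in (" ", "\t"):
                current += line[1:]
            else:
                if current:
                    yield current
                current = line
        if current:
            yield current

    def parse_events(text):
        "Return a list of {property: value} dictionaries, one per VEVENT."
        events, event = [], None
        for line in unfold(text.splitlines()):
            name, _, value = line.partition(":")
            name = name.split(";")[0].upper()  # drop parameters like DTSTART;VALUE=DATE
            if name == "BEGIN" and value.upper() == "VEVENT":
                event = {}
            elif name == "END" and value.upper() == "VEVENT":
                if event is not None:
                    events.append(event)
                event = None
            elif event is not None:
                event[name] = value
        return events

    example = ("BEGIN:VCALENDAR\n"
               "BEGIN:VEVENT\n"
               "DTSTART;VALUE=DATE:20111024\n"
               "SUMMARY:Fellowship meeting\n"
               "END:VEVENT\n"
               "END:VCALENDAR\n")

    print(parse_events(example))  # [{'DTSTART': '20111024', 'SUMMARY': 'Fellowship meeting'}]

Once events are in such a dictionary form, it matters little whether they came from a Wiki page or a remote feed, which is exactly the blurring of local and remote described above.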

So what does this mean for Wiki users? Well, defining calendars and creating events is still as straightforward as it used to be, especially if you use the “New event” link either below each calendar or in the menu that appears when hovering over a day number in the calendar view. But if you know of useful sources of high-quality event information, you can also add an event source to the Wiki’s register of sources and then use those sources when showing your calendar.
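Incidentally, the caching of remote sources mentioned earlier can be sketched in a few lines; the real mechanism lives within MoinMoin itself, so the function and cache layout below are purely illustrative:

    # Illustrative time-based caching of remote event sources, so that sites
    # offering iCalendar feeds are not polled on every page view. Names and
    # cache layout are hypothetical.

    import time
    import urllib.request

    CACHE_PERIOD = 300  # seconds to wait before fetching the same source again
    _cache = {}         # url -> (time of fetch, iCalendar text)

    def get_source(url):
        "Return the iCalendar text for a source, fetching at most once per period."
        now = time.time()
        cached = _cache.get(url)
        if cached and now - cached[0] < CACHE_PERIOD:
            return cached[1]
        text = urllib.request.urlopen(url).read().decode("utf-8")
        _cache[url] = (now, text)
        return text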

The day view showing the timing of events within a particular day

Apart from all this new support for iCalendar aggregation, EventAggregator has also gained some other nice features over the past year or so. You can now view days individually and see the timing of events, which is useful when looking at things like conference schedules, where lots of individually scheduled events occur within a short time-frame. You can also view events on any suitably prepared map, with events shown on a map of Europe being perhaps the most obvious application of this feature in the Fellowship Wiki. Although the default map is not as fancy as OpenStreetMap or its proprietary competitors, the simple layering of active regions over a bitmap could be useful for customised maps, perhaps even for things like building plans or anything else that takes liberties with the concepts of latitude and longitude!

The map view using a map of Europe for Fellowship events

So I hope I’ve inspired you to at least take a look at the Fellowship events calendar. Remember that the Wiki is a collaborative resource: it doesn’t improve all by itself with only its increasing age to help it along. But with a bit of interest and even the most modest contributions, it has the potential to become a powerful tool and a medium to create greater and better things.

Finally, I’d like to thank Wolfgang for encouraging me to look into integrating EventAggregator with GriCal, opening up lots of new possibilities, and Ivan for telling me about GriCal and for working hard to make the service work well for those of us who want to use its iCalendar output. I’d like to think that through such mutual encouragement and help, Free Software projects, products and services can only get better and better. The experience has certainly been a satisfying one for me, and I hope that Wiki users can now share in some of the benefits of this rewarding work.

Hacking the Fellowship Wiki

Saturday, November 21st, 2009

(That’s hacking in the good sense of the word, of course.)

A while ago, Matthias Kirschner encouraged me to blog a bit about the FSFE Fellowship Wiki and some of the enhancements that have been introduced by the small group of volunteers and administrators, so in this post I’ll explain what I’ve been doing with the Wiki since I volunteered to help out a while back.

First of all, the Wiki needed to use the same visual style as all the other new FSFE pages. Since the Wiki is run by MoinMoin, and since I have some experience with making themes for MoinMoin, this was an opportunity to help out. Thus, my place as a volunteer was established: there didn’t seem to be anyone else amongst the existing volunteers familiar enough with Moin’s theming framework to do the job in a hurry.

Fortunately, the hard part of the visual styling had already been done, and all I really needed to do was write a theme module generating HTML that uses the various CSS classes and rules already provided by the visual designers. In this way, the visual elements are plugged into Moin’s content generation system: when a page is formatted, the right kind of HTML is generated, appropriately styled, so that the browser puts the visual elements in all the right places.
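For those unfamiliar with Moin’s theming, a skeletal plugin module along the following lines shows the general shape of the work; the class follows MoinMoin 1.x plugin conventions, but the CSS hooks and markup are placeholders rather than the actual FSFE theme:

    # A bare-bones MoinMoin 1.x theme plugin. The "fsfe-" class names are
    # placeholders standing in for the classes provided by the designers.

    from MoinMoin.theme import ThemeBase

    class Theme(ThemeBase):

        name = "fsfe"  # the identifier used to select the theme

        def header(self, d, **kw):
            # Wrap the page title in markup carrying the designers' CSS classes.
            return u'<div class="fsfe-header"><h1>%s</h1></div>' % d['page'].page_name

        def footer(self, d, **kw):
            # Close the content area using the site-wide footer styling.
            return u'<div class="fsfe-footer">Fellowship Wiki</div>'

    def execute(request):
        # Moin's plugin loader calls this function to obtain the theme instance.
        return Theme(request)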

Once the theme was done, other enhancements became desirable or essential. One aspect of the visual design was the presence of a green title bar at the top of the content area of each page – you can see this on all the FSFE sub-sites (home, blogs, planet) – and this didn’t fit in with the way Moin generates its pages. Ideally, one would dedicate all of the content area to Moin and allow the page author to put whichever title they like in such a prominent position, perhaps the first heading on the page. Ultimately, after numerous experiments, a compromise was reached: page authors would omit any initial heading, relying instead on the page name itself to appear in the title bar (which is what solutions like MediaWiki do), and the theme would make a “pretty” version of the page name for this uppermost title.
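The “pretty” treatment of page names might be imagined along these lines – a guess at the behaviour rather than the theme’s actual code – with spaces inserted at the transitions within a CamelCase page name:

    # Hypothetical reconstruction of the "pretty" page name treatment.

    import re

    def pretty_title(page_name):
        "Turn a WikiName like 'FellowshipEvents' into 'Fellowship Events'."
        spaced = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", page_name)
        return spaced.replace("/", " / ")  # separate subpage components, too

    print(pretty_title("FellowshipEvents"))  # Fellowship Events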

Another enhancement, given that FSFE is a European organisation where many languages may be in common use, was to support multiple languages and to allow users to find translations of pages. Although Moin already supports many languages, there isn’t a built-in feature to help users find the translation of the current page in their own language, preferably by providing a link to such a translation in a prominent place. Thus, it was decided that the theme would incorporate such a function, and the result is a sidebar menu which lists languages for which a page has been translated. More information on how this works can be found in the Wiki user guide, and the advocacy FAQ page should show this function in action.
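A sketch of such a lookup appears below, under the purely hypothetical assumption that translations live as subpages named after a language code (the actual convention is described in the Wiki user guide):

    # Hypothetical translation discovery: assume "Advocacy/De" holds the
    # German version of "Advocacy". The page naming convention is an
    # assumption for illustration only.

    LANGUAGES = {"de": "Deutsch", "fr": "Français", "en": "English"}

    def find_translations(page_name, existing_pages):
        "Return (code, label) pairs for languages with a translated subpage."
        found = []
        for code, label in sorted(LANGUAGES.items()):
            if "%s/%s" % (page_name, code.capitalize()) in existing_pages:
                found.append((code, label))
        return found

    pages = {"Advocacy", "Advocacy/De", "Advocacy/Fr"}
    print(find_translations("Advocacy", pages))  # [('de', 'Deutsch'), ('fr', 'Français')]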

One project that didn’t start as a Wiki project was the Fellowship calendar, although some Moin-based solutions did appear somewhat relevant, such as MonthCalendar and EventCalendar. I had previously written a Moin extension called CategoryMenu, which dynamically builds navigation menus presenting the Wiki’s categories, together with expanded lists of pages for those categories to which the current page belongs. It also seemed likely that calendar events would be written on separate pages in order to fully document each event, showing likely participants, schedules and so on (see the FSCONS page for an example). So I thought it would be interesting to write a macro that gathers all event pages belonging to a designated category and presents them on request in a calendar view.
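The page-gathering part of such a macro can be sketched as follows, again using Moin 1.x plugin conventions; the real EventAggregator goes much further, parsing event details and building calendar views, and the naive substring test for category membership here is purely for illustration:

    # Skeletal Moin macro: list the pages belonging to a designated category.

    from MoinMoin.Page import Page

    def execute(macro, args):
        request = macro.request
        category = args or u"CategoryEvents"
        items = []
        for name in request.rootpage.getPageList():
            # Keep pages whose text mentions the category (event pages
            # typically declare their category at the bottom of the page).
            if category in Page(request, name).get_raw_body():
                items.append(u"<li>%s</li>" % name)
        return u"<ul>%s</ul>" % u"".join(items)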

The result of this effort is the EventAggregator extension, which started out just showing a simple list of events in each month, but then evolved to support a calendar view (which is arguably a necessity and was always planned), RSS and iCalendar feeds, and a number of conveniences to encourage people to add events, such as a “new event” form which can be prefilled with information, making the task of editing an event page less intimidating. I imagine that more enhancements will be made to this extension, possibly to make it more useful for applications beyond the Fellowship Wiki, but it has been an interesting exercise in learning more about Moin’s features, benefits and shortcomings. The result can be seen on the Wiki in the Fellowship Events page, although calendar views can be added to any page using the macro.

The work doesn’t end here, of course. There are always plenty of things that could be better, but here are some of the simpler ones for anyone wanting to help out but not to take on something that might be too demanding:

  • Language translations for MoinMoin: the FSFE theme uses texts which do not have built-in translations provided by Moin, but it’s easy to add these texts in your own language; see the English dictionary for all that’s required to make the Wiki friendlier to native speakers of your language.
  • Templates for editing: templates for common types of pages, such as local Fellowship group pages, are required; such templates reduce the amount of boilerplate that authors have to reproduce and make editing easier, just as has been done for event pages with the Fellowship events template.
  • Just using the Wiki makes it better! Adding content gives people more reason to visit, and remembering to add events to the Fellowship Events calendar, adding the right metadata so that events show up in the calendar, or making more specific calendars (perhaps for your own local group) makes such resources all the more interesting to use.

Beyond the Wiki, there are many projects that can be tackled by anyone enthusiastic enough. It has been a rewarding experience to do all this work, and it has been a pleasure to work with the other volunteers. If you want to make the Fellowship infrastructure better, I can certainly recommend volunteering!