Paul Boddie's Free Software-related blog


Archive for the ‘desktop’ Category

One Step Forward, Two Steps Back

Thursday, August 10th, 2017

I have written about the state of “The Free Software Desktop” before, and how change apparently for change’s sake has made it difficult for those of us with a technology background to provide a stable and reliable computing experience to others who are less technically inclined. It surprises me slightly that I have not written about this topic more often, but the pattern of activity usually goes something like this:

  1. I am asked to upgrade or troubleshoot a computer running a software distribution I have indicated a willingness to support.
  2. I investigate confusing behaviour, offer advice on how to perform certain tasks using the available programs, perhaps install or upgrade things.
  3. I get exasperated by how baroque or unintuitive the experience must be to anyone not “living the dream” of developing software for one of the Free Software desktop environments.
  4. Despite getting very annoyed by the software’s apparent lack of usability, and promising myself that I should at least mention how frustrating and unintuitive it is, I then return home and leave things for a few days.
  5. I end up calming down sufficiently about the matter not to be so bothered about saying anything about it after all.

But it would appear that this doesn’t really serve my interests that well because the situation apparently gets no better as time progresses. Back at the end of 2013, it took some opining from a community management “expert” to provoke my own commentary. Now, with recent experience of upgrading a system between Kubuntu long-term support releases, I feel I should commit some remarks to writing just to communicate my frustration while the experience is still fresh in my memory.

Back in 2013, when I last wrote something on the topic, I suppose I was having to manage the transition of Kubuntu from KDE 3 to KDE 4 on another person’s computer, perhaps not having to encounter this yet on my own Debian system. This transition required me to confront the arguably dubious user interface design decisions made for KDE 4. I had to deal with things like the way the desktop background no longer behaved as it had done on most systems for many years, requiring things like the “folder view” widget to show desktop icons. Disappointingly, my most recent experience involved revisiting and replaying some of these annoyances.

Actual Users

It is worth stepping back for a moment and considering how people not “living the dream” actually use computers. Although a desktop cluttered with icons might be regarded as the product of an untidy or disorganised user, like the corporate user who doesn’t understand filesystems and folders and who saves everything to the desktop just to get through the working day, the ability to put arbitrary icons on the desktop background serves as a convenient tool to present a range of important tasks and operations to less confident and less technically-focused users.

Let us consider the perspective of such users for a moment. They may not be using the computer to fill their free time, hang out online, or whatever the kids do these days. Instead, they may have a set of specific activities that require the use of the computer: communicate via e-mail, manage their photographs, read and prepare documents, interact with businesses and organisations.

This may seem quaint to members of the “digital native” generation for whom the interaction experience is presumably a blur of cloud service interactions and social media posts. But beyond the “digital natives” who, if you read the inevitably laughable articles about how today’s children are just wizards at technology, probably just want to show their peers how great they are, there are also people who actually need to get productive work done.

So, might it not be nice to show a few essential programs and actions as desktop icons to direct the average user? I even had this set up, having figured out the “folder view” widget which, as if embarrassed at not having shown up for work in the first place, actually shows the contents of the “Desktop” directory when coaxed into appearing. Problem solved? Well, not forever. (Although there is a slim chance that the problem will solve itself in future.)

The Upgrade

So, Kubuntu had been moaning for a while about a new upgrade being available. And, as is the modern way with KDE and/or Ubuntu, the user is confronted with parade after parade of notifications about all things important, trivial, and everything in between. Admittedly, it was getting to the point where support might be ending for the distribution version concerned, and so the decision was taken to upgrade. In principle this should improve the situation: software should be better supported, more secure, and so on. Sadly, with Ubuntu being a distribution that particularly likes to rearrange the furniture on a continuous basis, the upgrade just created more “busy work” for no good reason.

To be fair, the upgrade script did actually succeed. I remember trying something similar in the distant past and it just failing without any obvious remedy. This time, there were some messages nagging about package configuration changes about which I wasn’t likely to have any opinion or useful input. And some lengthy advice about reconfiguring the PostgreSQL server, popping up in some kind of packaging notification, seemed redundant given what the script did to the packages afterwards. I accept that it can be pretty complicated to orchestrate this kind of thing, though.
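
For reference, the upgrade script concerned is Ubuntu’s release upgrader, which can also be invoked by hand. The following is a minimal sketch assuming the stock tooling and the LTS upgrade policy, not a record of the exact invocation used here:

  # Confirm that LTS-to-LTS upgrades are on offer (look for Prompt=lts).
  grep Prompt /etc/update-manager/release-upgrades

  # Check whether a new release is available, then perform the upgrade.
  do-release-upgrade -c
  sudo do-release-upgrade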

It was only afterwards that the problems began to surface, beginning with the login manager. Since we are describing an Ubuntu derivative, the default login manager was the Unity-styled one which plays some drum beats when it starts up. After the upgrade, however, the login manager became obsessed with connecting to the wireless network and wouldn’t be denied the chance to do so. Yet it wouldn’t actually connect, either, even when given the correct password. So, some work needed to be done to install a different login manager and to remove the now-malfunctioning software.
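
In case anyone faces the same problem, the swap looks something like the following. This is a sketch only, assuming that the stock LightDM/Unity greeter is the culprit and that SDDM is the chosen replacement; package names vary between releases:

  # Install an alternative login manager; Debian/Ubuntu then asks which one
  # should be the default (dpkg-reconfigure re-asks the question if needed).
  sudo apt update
  sudo apt install sddm
  sudo dpkg-reconfigure sddm

  # Remove the malfunctioning software and restart.
  sudo apt purge lightdm unity-greeter
  sudo reboot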

An Empty Desk

Although changing the login manager also changes the appearance of the software and thus the experience of using it, providing an unnecessary distraction from the normal use of the machine and requiring unnecessary familiarisation with the result of the upgrade, at least it solved a problem of functionality that had “gone rogue”. Meanwhile, the matter of configuring the desktop experience has perhaps not yet been completely and satisfactorily resolved.

When the machine in question was purchased, it was running stock Ubuntu. At some point, perhaps sooner rather than later, the Unity desktop became the favoured environment of the Ubuntu developers, and since it proved rather ill-suited to users familiar with more traditional desktop paradigms, a switch was made to the KDE environment instead. This is where a degree of peace ended up being made with the annoyingly disruptive changes introduced by KDE 4 and its Plasma environment.

One problem that KDE always seems to have had is that of respecting user preferences and customisations across upgrades. On this occasion, with KDE Plasma 5 now being offered, no exception was made: logging in yielded no “folder view” widgets with all those desktop icons; panels were bare; the desktop background was some stock unfathomable geometric form with blurry edges. I seem to remember someone associated with KDE – maybe even the aforementioned “expert” – saying how great it had been to blow away his preferences and experience the awesomeness of the raw experience, or something. Well, it really isn’t so awesome if you are a real user.

As noted above, persuading the “folder view” widgets to return was easy enough once I had managed to open the fancy-but-sluggish widget browser. This gave me a widget showing the old icons that was too small to show them all at once. So, how do you resize it? Since I don’t use such features myself, I had forgotten that it was previously done by pointing at the widget somehow. But because there wasn’t much in the way of interactive help, I had to search the Web for clues.

This yielded the details of how to resize and move a folder view widget. That’s right: why not emulate the impoverished tablet/phone interaction paradigm and introduce a dubious “long click” gesture instead? Clearly because a “mouseover” gesture is non-existent in the tablet/phone universe, it must be abolished elsewhere. What next? Support only one mouse button because that is how the Mac has always done it? And, given that context menus seem to be available on plenty of other things, it is baffling that one isn’t offered here.

Restoring the desktop icons was easy enough, but showing them all was more awkward because the techniques involved are like stepping back to an earlier era, where only a single control is available for resizing and where combinations of moves and resizes are required to get the widget into the right place at the right size. And then one assumes that the icons still do what they did before which, despite the same programs being available, was not the case: programs didn’t start, but they also gave no indication of why they didn’t start, a failure mode familiar to just about anyone who has used a desktop environment in the last twenty years. Maybe there is a log somewhere with all the errors in it. Who knows? Why is there never any way of troubleshooting this?
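
For what it is worth, the closest thing to troubleshooting that I know of is to bypass the desktop and launch things from a terminal, where the complaints at least land somewhere visible. A sketch, assuming a fairly ordinary session; the program name is just an example:

  # Run the program by hand so that its errors appear in the terminal.
  kmail 2>&1 | tee /tmp/launch-errors.log

  # Failing that, session-wide errors sometimes accumulate here...
  less ~/.xsession-errors

  # ...or, on systemd-based systems, in the user's journal.
  journalctl --user -b -p warning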

One behaviour that I had set up earlier was single-click activation of icons, where programs could be launched with a single click of the mouse. That no longer works, nor is it obvious how to change it. Clearly the usability police have declared the unergonomic double-click action the “winner”. However, some Qt widgets are still happy with single-click navigation. Try explaining such inconsistencies to anyone already having to remember how to distinguish between multiple programs, what each of them does and doesn’t do, and so on.
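
As it happens, the old behaviour can apparently still be restored from the command line. A sketch assuming Plasma 5’s kdeglobals mechanism, noting that both the key and the restart incantation may move between releases:

  # Re-enable single-click activation for KDE/Qt applications.
  kwriteconfig5 --file kdeglobals --group KDE --key SingleClick true

  # Restart the desktop shell so that the change takes effect.
  kquitapp5 plasmashell && kstart5 plasmashell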

The Developers Know Best

All of this was frustrating enough, but when trying to find out whether I could launch programs from the desktop or whether such actions had been forbidden by the usability police, I found that when programs were actually launching they took a long time to do so. Firing up a terminal showed the reason for this sluggishness: Tracker and Baloo were wanting to index everything.

Despite having previously switched off KDE’s indexing and searching features and having disabled, maybe even uninstalled, Tracker, the developers and maintainers clearly think that coercion is better than persuasion, that “everyone” wants all their content indexed for “desktop search” or “semantic search” (or whatever they call it now), the modern equivalent of saving everything to the desktop and then rifling through it all afterwards. Since we’re only the stupid users, what would we really have to say about it? So along came Tracker again, primed to waste computing time and storage space, together with another separate solution for doing the same thing, “just in case”, because the different desktop developers cannot work together.
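
For anyone wanting to reassert control, the indexers can at least be told to stand down again. A sketch, assuming Plasma 5’s balooctl tool and the Tracker of that era; command names and settings keys differ between versions, so treat these as pointers rather than a recipe:

  # Tell Baloo to stop indexing and remain disabled, then reclaim the space.
  balooctl disable
  rm -rf ~/.local/share/baloo

  # Discourage Tracker from crawling and monitoring the filesystem.
  gsettings set org.freedesktop.Tracker.Miner.Files crawling-interval -2
  gsettings set org.freedesktop.Tracker.Miner.Files enable-monitors false

  # Throw away whatever Tracker has already indexed.
  tracker reset --hard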

Amongst other frustrations, the process of booting to the login prompt is slower, and so perhaps switching from Upstart to systemd wasn’t such an entirely positive idea after all. Meanwhile, with reduced scrollbar and control affordances, it would seem that the tendency to mimic Microsoft’s usability disasters continues. I also observed spontaneous desktop crashes and consequently turned off all the fancy visual effects in order to diminish the chances of such crashes recurring in future. (Honestly, most people don’t want Project Looking Glass and similar “demoware” guff: they just want to use their computers.)

Of Entitlement and Sustainable Development

Some people might argue that I am just another “entitled” user who has never contributed anything to these projects and is just whining incorrectly about how bad things are. Well, I do not agree. I enthusiastically gave constructive feedback and filed bugs while I still believed that the developers genuinely wanted to know how they might improve the software. (Admittedly, my enthusiasm had largely faded by the time I had to migrate to KDE 4.) I even wrote software using some of the technologies discussed in this article. I always wanted things to be better and stuck with the software concerned.

And even if I had never done such things, I would, along with other users, still have invested a not inconsiderable amount of effort into familiarising people with the software, encouraging others to use it, and trying to establish it as a sustainable option – as opposed to proprietary software that we neither want to use, nor wish to support, nor are necessarily able to support. Being asked to support some Microsoft product is not only ethically dubious but also frustrating when we end up having to guess our way around the typically confusing and poorly-designed interfaces concerned. And we should definitely resent having to do free technical support for a multi-billion-dollar corporation, even if it is to help out other people we know.

I rather feel that the “entitlement” argument comes up when both the results of the development process and the way the development is done are scrutinised. There is this continually perpetuated myth that “open source” can only be done by people when those people have “enough time and interest to work on it”, as if there can be no other motivations or models to sustain the work. This cultivates the idea of the “talented artist” developer lifestyle: that the developers do their amazing thing and that its proliferation serves as some form of recognition of its greatness; that, like art, one should take it or leave it, and that the polite response is to applaud it or to remain silent and not betray a supposed ignorance of what has been achieved.

I do think that the production of Free Software is worthy of respect: after all, I am a developer of Free Software myself and know what has to go into making potentially useful systems. But those producing it should understand that people depend on it, too, and that the respect its users have for the software’s development is just as easily lost as it is earned, indeed perhaps more easily lost. Developers have power over their users, and like anyone in any other position of power, we expect them to behave responsibly. They should also recognise that any legitimate authority they have over their users can only exist when they acknowledge the role of those users in legitimising and validating the software.

In a recent argument about the behaviour of systemd, its principal developer apparently noted that as Free Software, it may be forked and developed as anyone might wish. Although true, this neglects the matter of sustainable software development. If I disagree with the behaviour of some software or of the direction of a software project, and if there is no reasonable way to accommodate this disagreement within the framework of the project, then I must maintain my own fork of that software indefinitely if I am to continue using it.

If others cannot be convinced to participate in this fork, and if other software must be changed to work with the forked code, then I must also maintain forks of other software packages. Suddenly, I might be looking at having to maintain an entire parallel software distribution, all because the developers of one piece of software are too precious to accept other perspectives as being valid and are unwilling to work with others in case it “compromises their vision”, or whatever.

Keeping Up on the Treadmill

Most people feel that they have no choice but to accept the upgrade treadmill, the constant churn of functionality, the shiny new stuff that the developers “living the dream” have convinced their employers or their peers is the best and most efficient way forward. It just isn’t practical for most users to deal with the consequences of this in a technical fashion, trying to do all those other people’s jobs over again so that they may be done “properly”. So that “most efficient way” ends up incurring inefficiencies and costs amongst everybody else as they struggle to find new ways of doing the things that just worked before.

How frustrating it is that perhaps the only way to cope might be to stop using the software concerned altogether! And how unfortunate it is that for those who do not value Free Software in its own right or who feel that the protections of Free Software are unaffordable luxuries, it probably means that they go and use proprietary software instead and just find a way of rationalising the decision and its inconvenient consequences as part of being a modern consumer engaging in yet another compromised transaction.

So, unhindered by rants like these and by more constructive feedback, the Free Software desktop seems to continue on its way, seemingly taking two steps backward for every one step forward, testing the tolerance even of its most patient users to disruptive change. I dread having to deal with such things again in a few years’ time or even sooner. Maybe CDE will once again seem like an attractive option and bring us full circle for “Unix on the desktop”, saving many people a lot of unnecessary bother. And then the tortoise really will have beaten the hare.

Tuning digiKam’s Previews

Tuesday, June 17th, 2014

It is entirely possible that I am doing something wrong as usual, but my way of using digiKam is as follows:

  1. Plug in camera or other media.
  2. Choose to download using digiKam when prompted by the notifier.
  3. Wait while digiKam loads.
  4. Download new images in the pop-up window that digiKam offers containing thumbnails.
  5. View the images in the album, clicking on them to see them at a decent size, zooming in to see detail.

It is also entirely possible that by not choosing to edit images in digiKam, I am missing out on vital functionality, although I rather adhere to the school of photography that involves as little postprocessing as possible (also known as “PP” amongst squabbling online photography forum participants). Anyone who has had to scan negatives using a flatbed scanner with a negative adapter and deal with constant dust contamination, never mind develop film, soon realises what a burden has been lifted from them by digital photography. (On the subject of developing film, and ignoring some work experience in a print shop, I have only ever done so in the distant past with a pinhole photography kit, which was actually fun. Doing so regularly would be somewhat more tedious, I can imagine.)

What I didn’t realise until today, and it was driving me crazy, was that another piece of vital functionality seems to be reserved for the editing mode in digiKam unless you change the settings. I was seeing rather pixelated images at the “1:1” or 100% view and was surprised that a camera I had recently bought was performing so badly: was it really so badly configured? To keep this story moving at an acceptable pace, I’ll give you the briefest comprehensible summary of my investigation and its conclusion.

First, I configured my camera to take raw images and then took some: digiKam imported them, which is a nice touch, but I really wanted to see what the sensor was actually producing (and I still have to figure that out). And so I launched the editing mode and, to my surprise, got to see my pictures in all their proper glory, no pixelation or anything of the kind: they were what I was expecting from the very beginning. And then I realised that perhaps digiKam treats the album view as some kind of preview, even when you view the images at full resolution. This led me into the “Settings” menu and into “Configure digiKam…”, and here is what I found:

The digiKam settings of note in this particular case

Here, I’ve helped you out a little in finding the offending setting: it’s in the “Preview Options” part at the bottom and is called “Embedded preview loads full-sized images.” Selecting this has a dramatic effect on the “preview” as can be seen from an image with this setting in its apparently natural unselected state:

Pixelation visible in a full resolution crop in digiKam

And here is the result when selecting the setting:

A full resolution crop being displayed properly

So what do we learn from this? Well, it did occur to me that perhaps I’m just not using digiKam in the right way: “everybody else” goes into every image with the editor, or maybe the slideshow option displays images properly and they all use that to look at their photographic output. But I remain baffled as to why anyone would want to see some presumably optimised-for-performance preview when they are looking at their images at full resolution. For anyone suffering from this unfathomable behaviour, I hope this article floats into your Internet frame of view so that you can avoid the same frustration.

Update

It turns out that the low-resolution preview feature is covered by a reported bug in digiKam.

Meanwhile, I have since realised that when digiKam is allowed to “download” pictures from an SD card, it wants to rotate them all even when no rotation is necessary, actually changing all the files. With this errant behaviour avoided by just copying the pictures from the card directly (and using Gwenview to view them), it turns out that digiKam is quite happy to show pictures in the correct orientation all by itself. Unfortunately, it seems completely unable to abandon its own erroneous understanding of the orientation of some pictures (“downloaded” by digiKam and subsequently replaced by the actual image files straight from the card), even after I moved those pictures out of digiKam, removed all of that album’s image entries from its database, and removed all of that album’s thumbnail entries from its thumbnail database.
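
For the record, the database surgery mentioned above amounted to something like the following. This is a speculative sketch assuming digiKam 4’s default SQLite files (digikam4.db and thumbnails-digikam.db) and my best recollection of their table names, which may well differ in other versions; the album name and identifier are placeholders, and everything should be backed up first:

  # Find the identifier of the album concerned (table and column names assumed).
  sqlite3 ~/Pictures/digikam4.db \
    "SELECT id, relativePath FROM Albums WHERE relativePath LIKE '%Holiday%';"

  # Remove the image entries for that album, substituting the id found above.
  sqlite3 ~/Pictures/digikam4.db "DELETE FROM Images WHERE album = 42;"

  # Purge the cached thumbnail references by path (again, schema assumed).
  sqlite3 ~/Pictures/thumbnails-digikam.db \
    "DELETE FROM FilePaths WHERE path LIKE '%/Holiday/%';"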

I am starting to feel that the simpler option – Gwenview – is looking better all the time, but I’ll miss the tagging features of digiKam and the relatively convenient EXIF perusal, especially since Gwenview’s tagging seems broken for me and its EXIF support seems somewhat perfunctory.

Notions of Progress on the Free Software Desktop

Thursday, November 7th, 2013

Once again, discussion about Free Software communities is somewhat derailed by reflections on the state of the Free Software desktop. To be fair to participants in the discussion, the original observations about communities were so unspecific that people would naturally wonder which communities were being referenced.

Usability and Accessibility

As always, frustrating elements of recent Free Software desktop environments were brought up for criticism and evaluation. One of them concerned the “plasmoid” enhancements of KDE 4 (or KDE Plasma Desktop as it is known according to the rebranding of KDE assets) which are often regarded as superfluous distractions from the work of perfecting the classic desktop environment. Amidst all this, the “folder view” plasmoid (or desktop widget) in particular came under scrutiny. As I understand it, the “folder view” is just a panel or window that groups icons in a region on the desktop background, and I acknowledged that it certainly represents an improvement over managing icons on a normal desktop, but that it can also confuse people when they accidentally close the folder view – easy to do with a stray mouse click – leaving them to wonder where their icons went.

Such matters of usability make me wonder how well tested some of the concepts employed in these environments really are, despite insistences that usability experts have been involved and that non-experts in the field of usability are unable to see the forest for the trees. From my own experiences, I feel that the developers would really benefit from doing phone support for their wares, especially with users who haven’t learned all the fancy terminology and so must describe what they see from first principles and be told what to do at a similar level. Even better: such support should be undertaken from memory and not sitting in front of a similarly configured computer.

Although it is a somewhat separate discipline with different constraints, I also suspect that such “over the phone” exercises might help accessibility as well. An inexperienced user provides different information to that provided by something like a screen reader: the former may struggle to articulate concepts where the latter merely describes the environment according to prescribed terms, and the former can draw on flexible powers of description where the latter can only rely on the cooperation of other programs to populate a simplistic description of the state of the environment. Either way, the exercise of being a person cut off from the rich graphical scenery and from familiar interaction mechanisms might put the usability and accessibility of the software into perspective for the developers.

The Measure of Progress

But back to the Free Software desktop in general, if only to contemplate notions of progress and to consider whether lessons really have been learned, or whether people would rather not think about the things which went wrong, labelling them as “finished business” or “water under the bridge” and urging people not to bring such matters up again. One participant remarked about how it took six years from 2005 to 2011 for KDE 4 to become as usable as its predecessor. A response to this indicated that this was actually “fantastic” progress given that Google used as much time to make Android “decent”.

Fantastic it may be, but we should not consider the endeavour as a software development project in isolation, with the measure of success being that something was created from nothing in six short years. Indeed, we must consider what was already there – absolutely not nothing – and how the result of the development measures up against that earlier system. As far as getting Free Software in front of people and building on earlier achievements are concerned, those six years can almost be considered six lost years. Nobody should be patting themselves on the back upon hearing that someone in 2013 can move from KDE 3 to KDE 4 and feel that at least they didn’t lose much functionality.

The Role of Applications

It was also noted that KDE development now focuses more on application development than on the environment itself. One must therefore ask where we are with regard to parity with the suite of applications running under KDE 3. Here, I can only describe my own experiences, but this should be flattering to any constrained selection of updated applications because of my own rather conservative application choices.

Kontact is usable because I imagine various companies needed it to be usable to stay in business (and even then I don’t know the story of the diversions via Nepomuk and other PIM initiatives that could have endangered that application’s viability); Digikam is usable because the developers remained interested in improving the software and even maintained the KDE 3 version for a while; Okular has picked up where KPDF left off; K3B still works much the same as before. There are presumably regressions and improvements in all these: Kontact, for instance, is much slower in certain areas such as message sorting, but it probably has more robust and coherent PGP and S/MIME support than its predecessor (which may have been suffering from lack of maintenance at both the project and distribution level).

Meanwhile, Amarok has become a disaster with an incoherent interface involving lots of “in the know” controls, and after it stopped playback mid-track for the nth time and needed a complete restart to get sound back, I switched to Minirok out of desperation. Other applications took a permanent holiday, such as Kopete which I don’t miss because my IRC needs are covered by Konversation.

Stuff like Konqueror is still around, despite being under threat of complete replacement by Dolphin, although it has picked up the little “+” and “-” controls that pervade KDE now. Such controls confuse various classes of user through poor visual contrast (a tiny symbol in red or green superimposed on a multicolour icon!) while demanding from such users better than average motor skills (“to open the document aim at the tiny area but not the tiny area within the tiny area”).

Change You Can Believe In?

You wouldn’t think that I appreciate the work done on the Free Software desktop, but I do. What frustrates me and a lot of other people, however, is the way that things that should have been “behind the scenes” infrastructure improvements (Qt 3 being superseded by Qt 4, for instance) that could have been undertaken whilst preserving continuity for users have instead been thrust at those users in the form of unnecessary decisions about which functionality they can afford to lose in order to have a supported and secure system that will not gradually fall apart over time. (Not that KDE is unique in this respect, consider the Python 2 to Python 3 transition and the disruption even such a leisurely transition can cause.)

Exposing change to a group of people creates costs for those people, and when those people have other things than computing as the principal focus in their lives, such change can have damaging effects on their continued use of the software and on the quality of their lives. Following the latest trends, discovering the newest software, or just discovering how their existing software functions since the last vendor-initiated update are all distractions for people who just want to sit down, do some things on the computer, and then go back to their lives. In today’s gadget-pushing society, the productivity benefits of personal computing are being eroded by a fanaticism for showing off new and different things mostly for the sake of them being, well, new and different. Bored children may enjoy the fire-hose of new “apps”, tricks and gadgets, but that shouldn’t mean that everybody else has to be made to enjoy it as well or be considered backward “technophobes” who “don’t understand” or “won’t embrace” new technology.

One can argue that by failing to shield users from the cost of change, especially when the level of functionality remains largely similar, Free Software desktop developers have imperilled their own mission with the result that they now have to make up lost ground in the struggle to get people to use their software. But even to those developers who don’t care about such things, the other criticism that could be levelled against them might be a more delicate matter and more difficult to reconcile with their technical reputation: churning up change and making others deal with it can arguably be regarded as bad software project management and, indeed, bad project management in general.

Maybe such considerations also have something to say about the direction any given community might choose to follow, and whether bold new ideas should be embraced without a thorough consideration of the consequences.

Where Now for the Free Software Desktop?

Friday, May 31st, 2013

It is a recurring but tiresome joke: is this the year of the Linux desktop? In fact, the year I started using GNU/Linux on the desktop was 1995 when my university department installed the operating system as a boot option alongside Windows 3.1 on the machines in the “PC laboratory”, presumably starting the migration of Unix functionality away from expensive workstations supplied by Sun, DEC, HP and SGI and towards commodity hardware based on the Intel x86 architecture and supplied by companies who may not be around today either (albeit for reasons of competing with each other on razor-thin margins until the slightest downturn made their businesses non-viable). But on my own hardware, my own year of the Linux desktop was 1999 when I wiped Windows NT 4 from the laptop made available for my use at work (and subsequently acquired for my use at home) and installed Red Hat Linux 6.0 over an ISDN link, later installing Red Hat Linux 6.1 from the media included in the official boxed product bought at a local bookstore.

Back then, some people had noticed that GNU/Linux was offering a lot better reliability than the Windows range of products, and people using X11 had traditionally been happy enough (or perhaps knew not to ask for more) with a window manager, perhaps some helper utilities to launch applications, and some pop-up menus to offer shortcuts for common tasks. It wasn’t as if the territory beyond the likes of fvwm had not already been explored in the Unix scene: the first Unix workstations I used in 1992 had a product called X.desktop which sought to offer basic desktop functionality and file management, and other workstations offered such products as CDE or elements of Sun’s Open Look portfolio. But people could see the need for something rather more than just application launchers and file managers. At the very least, desktops also needed applications to be useful, and those applications needed to look like, act like, and work with the rest of the desktop to be credible. And the underlying technology needed to be freely available and usable so that anyone could get involved and so that the result could be distributed with all the other software in a GNU/Linux distribution.

It was this insight – that by giving people the tools and graphical experience that they needed or were accustomed to, the audience for Free Software would be broadened – that resulted first in KDE and then in GNOME; the latter being a reaction to the lack of openness of some of the software provided in the former as well as to certain aspects of the technologies involved (because not all C programmers want to be confronted with C++). I remember a conversation around the year 2000 with a systems administrator at a small company that happened to be a customer of my employer, where the topic somehow shifted to the adoption of GNU/Linux – maybe I noticed that the administrator was using KDE or maybe someone said something about how I had installed Red Hat – and the administrator noted how KDE, even at version 1.0, was at that time good enough and close enough to what people were already using for it to be rolled out to the company’s desktops. KDE 1.0 wasn’t necessarily the nicest environment in every regard – I switched to GNOME just to have a nicer terminal application and a more flexible panel – but one could see how it delivered a complete experience on a common technological platform rather than just bundling some programs and throwing them onto the screen when commanded via some apparently hastily-assembled gadget.

Of course, it is easy to become nostalgic and to forget the shortcomings of the Free Software desktop in the year 2000. Back then, despite the initial efforts in the KDE project to support HTML rendering that would ultimately produce Konqueror and lead to the WebKit family of browsers, the most credible Web browsing solution available to most people was, if one wishes to maintain nostalgia for a moment, the legendary Netscape Communicator. Indeed, despite its limitations, this application did much to keep the different desktop environments viable in certain kinds of workplaces, where as long as people could still access the Web and read their mail, and not cause problems for any administrators, people could pretty much get away with using whatever they wanted. However, the technological foundations for Netscape Communicator were crumbling by the month, and it became increasingly unsupported and unmaintained. Relying on a mostly proprietary stack of software as well as being proprietary itself, it was unmaintainable by any community.

Fortunately, the Free Software communities produced Web browsers that were, and are, not merely viable but also at the leading edge of their field in many ways. One might not choose to regard the years of Netscape Communicator’s demise as those of crisis, but we have much to be thankful for in this respect, not least that the Mozilla browser (in the form of SeaMonkey and Firefox) became a stable and usable product that did not demand too much of the hardware available to run it. And although there has never been a shortage of e-mail clients, it can be considered fortunate that projects such as KMail, Kontact and Evolution were established and have been able to provide years of solid service.

The sudden end of a pathway

You Are Here

With such substantial investment made in the foundations of the Free Software desktop, and with its viability established many years ago (at least for certain groups of users), we might well expect to be celebrating now, fifteen years or so after people first started to see the need for such an endeavour, reaping the rewards of that investment and demonstrating a solution that is innovative and yet stable, usable and yet reliable: a safe choice that has few shortcomings and that offers more opportunities than the proprietary alternatives. It should therefore come as a shock that the position of the Free Software desktop is more troubled now than it has been for quite some time.

As time passed by, KDE 1 was followed by KDE 2 and then KDE 3. I still use KDE 3, clinging onto the functionality while it still runs on a distribution that is still just about supported. When KDE 2 came out, I switched back from GNOME because the KDE project had a more coherent experience on offer than bundling Mozilla and Evolution with what we would now call the GNOME “shell”, and for a long time I used Konqueror as my main Web browser. Although things didn’t always work properly in KDE 3, and there are things that will never be fixed and polished, it perhaps remains the pinnacle of the project’s achievements.

On KDE 3, the panel responsible for organising and launching applications, showing running applications and the status of various things, just works; I may have spent some time moving icons around a few years ago, but it starts up and everything uses the right amount of space. The “K” (or “start”) menu is just a menu, leaving itself open to the organisational whims of the distribution, but on my near-obsolete Kubuntu version there’s not much in the way of bizarre menu entry and application duplication. Kontact lets me read mail and apply spam filters that are still effective, filter and organise my mail into folders; it shows HTML mail only when I ask it to, and it shows inline images only when I tell it to, which are both things that I have seen Thunderbird struggle with (amongst certain other things). Konqueror may not be a viable default for Web browsing any more, but it does a reasonable job for files on both local disks and remote systems via WebDAV and ssh, the former being seemingly impossible with Nautilus against the widely-deployed Free Software Web server that serves my files. Digikam lets me download, view and tag my pictures, occasionally also being used to fix the metadata; it only occasionally refuses to access my camera, mostly because the underlying mechanisms seem to take an unhealthy interest in how many files there are in the camera’s memory; Amarok plays my music from my local playlists, which is all I ask of it.
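
For anyone unfamiliar with that remote file access capability, it amounts to typing KIO URLs into Konqueror’s location bar, along these lines (the host and paths are hypothetical):

  fish://user@example.com/home/user    (a remote home directory over ssh)
  webdav://example.com/files/          (a WebDAV share on a Web server)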

So when my distribution finally loses its support, or when I need to recommend a desktop environment to others, even though KDE 3 has served me very well, I cannot really recommend it because it has for the most part been abandoned. An attempt to continue development and maintenance does exist in the form of the Trinity Desktop Environment project, but the immensity of the task combined with the dissipation of the momentum behind KDE 3, and the effective discontinuation of the Qt 3 libraries that underpin the original software, means that its very viability must be questioned as its maintainers are stretched in every direction possible.

Why Can’t We All Get Along?

In an attempt to replicate what I would argue is the very successful environment that I enjoy, I tried to get Trinity working on a recent version of Ubuntu. The encouraging thing is that it managed to work, more or less, although there were some disturbing signs of instability: things would crash but then work when tried once more; the user administration panel wasn’t usable because it couldn’t find a shared library for the Python runtime environment that did, in fact, exist on the system. The “K” menu seemed to suffer from KDE 4 (or, rather, KDE Plasma Desktop) also being available on the same system because lots of duplicate menu entries were present. This may be an unfortunate coincidence, Trinity being overly helpful, or it may be a consequence of KDE 4 occupying the same configuration “namespace”. I sincerely hope that it is not the latter: any project that breaks compatibility and continuity in the fashion that KDE has done should not monopolise or corrupt a resource that actually contains the data of the user.

There probably isn’t any fundamental reason why Trinity could not return KDE 3 to some level of its former glory as a contemporary desktop environment, but getting there could certainly be made easier if everyone worked together a bit more. Having been obliged to do some user management tasks in another environment before returning to my exploration of Trinity, I attempted to set up a printer. Here, with the printer plugged in and me perusing the list of supported printers, there appeared to be no way of getting the system to recognise a three- or four-year-old printer that was just too recent for the list provided by Trinity. Returning to the other environment and trying there, a newer list was somehow obtained and the printer selected, although the exercise was still highly frustrating and didn’t really provide support for the exact model concerned.

But both environments presumably use the same print system, so why should one be better supplied than the other for printer definitions? Surely this is common infrastructure stuff that doesn’t specifically relate to any desktop environment or user interface. Is it easier to make such functionality for one environment than another, or just too convenient to take a short-cut and hack something up that covers the needs of one environment, or is it too much work to make a generic component to do the job and to package it correctly and to test it in more than a narrow configuration? Do the developers get worried about performance and start to consider complicated services that might propagate printer configuration details around the system in a high-performance manner before their manager or supervisor tells them to scale back their ambition?

I remember having to configure network printers in the previous century on Solaris using special script files. I am also sure that doing things at the lowest levels now would probably be just as frustrating, especially since the pain of Solaris would be replaced by the need to deal with things other than firing plain PostScript across the network. But there seems to be a wide gulf between this level and the point-and-click tools, a gulf in which some stable common layer could exist and make sure that, no matter how old and nostalgic a desktop environment may be perceived to be, at least it doesn’t need to dedicate effort towards tedious housekeeping or towards duplicating code that everyone needs and would otherwise have to write for themselves.
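
Indeed, both environments ultimately sit on top of the same CUPS print system, which can be driven directly without any desktop tools at all. A sketch using CUPS’s standard command-line interface, where the device URI, driver and queue name are purely illustrative:

  # Ask CUPS which devices it can see and which drivers it knows about.
  lpinfo -v
  lpinfo -m | grep -i laserjet

  # Define and enable a queue directly, then confirm that it exists.
  sudo lpadmin -p office -E -v usb://HP/LaserJet%20P1005 -m drv:///sample.drv/generic.ppd
  lpstat -p office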

The Producers (and the Consumers)

One might be inclined to think that my complaints are largely formed through a bitter acceptance that things just don’t stay the same, although one can always turn this around and question why functioning software cannot go on functioning forever. Indeed, there is plenty of software on the planet that now goes about its business virtualised and still with the belief, if the operating assumptions of software can be considered in such a way, that it runs on the same range of hardware that it was written for, that hardware having been introduced (and even retired) decades ago. But for software that has to run in a changing environment – handling new ways of doing things, repelling previously unimagined threats to its correct functioning and reliability, and needing a community of people willing to maintain and develop it further – one has to accept that there are practical obstacles to the sustained use of that software in the environments for which it was intended.

The matter of who is going to do all the hard work, and of what incentives might persuade them to do it beyond personal satisfaction or curiosity, is of particular importance. This is where conflicts and misunderstandings emerge. If the informal contract between the users and developers were taking place with no historical context whatsoever, with each side having expectations of the other, one might be more inclined to sympathise with the complaints of developers that they do all the hard work and yet the users merely complain. Yet in practice, the interactions take place in the context of the users having invested in the software, too. Certainly, even if the experience has not been one of complete, uninterrupted enjoyment, the users may not have invested as much energy as the developers, but they will have played their own role in the success of the endeavour. Any radical change to the contract involves writing off the users’ investment as well as that of the developers, and without the users providing incentives and being able to direct the work, they become exposed to unpredictable risk.

One fair response to the supposed disempowerment of the user is that the user should indeed “pay their way”. I see no conflict between this and the sustainable development of Free Software at all. If people want things done, one way that society has thoroughly established throughout the ages is that people pay for it. This shouldn’t stop people freely sharing software (or not, if they so choose), because people should ultimately realise that for something to continue on a sustainable basis, they or some part of society has to provide for the people continuing that effort. But do desktop developers want the user to pay up and have a say in the direction of the product? There is something liberating about not taking money directly from your customers and being able to treat the exercise almost like art, telling the audience that they don’t have to watch the performance if they don’t like it.

This is where we encounter the matter of reputation: the oldest desktop environments have been established for long enough that they are widely accepted by distributions, even though the KDE project that produces KDE 4 has quite a different composition and delivers substantially different code from the KDE project that produced KDE 3. By keeping the KDE brand around, the project of today is able to trade on its long-standing reputation, reassure distributions that the necessary volunteers will be able to keep up with packaging obligations, and indeed attract such volunteers through widespread familiarity with the brand. That is the good side of having a recognised brand; the bad side is that people’s expectations are higher and that they expect the quality and continuity that the brand has always offered them.

Globes against the light

What’s Wrong with Change?

In principle, nothing is essentially wrong with change. Change can complement the ways that things have always been done, and people can embrace new ways and come to realise that the old ways were just inferior. So what has changed on the Free Software desktop and how does it complement and improve on those trusted old ways of doing things?

It can be argued that KDE and GNOME started out with environments like CDE and Windows 95 acting as their inspiration at some level. As GNOME began to drift towards resembling the “classic” Mac OS environment in the GNOME 2 release series, having a menu bar at the top of the screen through which applications and system settings could be accessed, together with the current time and other status details, projects like XFCE gained momentum by appealing to the audience for whom the simple but configurable CDE paradigm was familiar and adequate. And now that Unity has reintroduced the Mac-style top menu bar as the place where application menus appear – a somewhat archaic interface even in the late 1980s – we can expect more users to discover other projects. In effect, the celebrated characteristics of the community around Free Software have let people go their own way in the company of developers who they felt shared the same vision or at least understood their needs best.

That people can indeed change their desktop environment and choose to run different software is a strength of Free Software and the platforms built on it, but the need for people to have to change – that those running GNOME, for example, feel that their needs are no longer being met and must therefore evaluate alternatives like XFCE – is a weakness brought about by projects that will happily enjoy the popularity delivered by the reputation of their “brand”, and who will happily enjoy having an audience delivered by previous versions of that software, but who then feel that they can change the nature of their product in ways that no longer meet that audience’s needs while pretending to be delivering the same product. If a user can no longer do something in a new release of a product, that should be acknowledged as a failing in that product. The user should not be compelled to find another product to use and be told that since a choice of software exists, he or she should be prepared to exercise that choice at the first opportunity. Such disregard for the user’s own investment in the software that has now abandoned him or her, not to mention the waste of the user’s time and energy in having to install alternative software just to be able to keep doing the same things, is just unacceptable.

“They are the 90%”

Sometimes attempts at justifying or excusing change are made by referring to the potential audience reached by a significantly modified product. Having a satisfied group of users numbering in the thousands is not always as exciting as one that numbers in the millions, and developers can be jealous of the success of others in reaching such numbers. One still sees labels like “10x” or “x10” being used, with ten-fold increases in audiences presented as a necessary strategy phrased in a new and innovative way, mostly by people who perhaps missed such terminology and the accompanying strategic doctrine the first time round (or the second, or the third) many years ago. Such order-of-magnitude increases are often dictated by the assumption that the 90% not currently in the audience for a product would find the product too complicated or too technologically focused, and that the product must therefore discard features, or change the way it exposes its features, in order to appeal to that 90%.

Unfortunately, such initiatives to reach larger audiences risk alienating the group of users that are best understood in order to reach groups of users that are largely under-researched and thus barely understood at all. (The 90% is not a single monolithic block of identically-minded people, of course, so there is more than one group of users.) Now, one tempting way of avoiding the need to understand the untapped mass of potential users is to imitate those who are successfully reaching those users already or who at least aspire to do so. Thus, KDE 4 and Windows Vista had a certain similarity, presumably because various visual characteristics of Vista, such as its usage of desktop gadgets, were perceived to be useful features applicable to the wider marketplace and thus “must have” features that provide a way to appeal to people who don’t already use KDE or Windows. (Having a tiny picture frame on the desktop background might be a nice way of replicating the classic picture of one’s closest family on one’s physical desktop, but I doubt that it makes or breaks the adoption of a technology. Many people are still using their computer at a desk or at home where they still have those pictures in plain view; most people aren’t working from the beach where they desperately need them as a thumbnail carousel obscured by their application windows.)

However, it should be remembered that the products being imitated may originate from organisations that do not really understand their potential audience, either. Windows Vista was perceived to be a decisive response by Microsoft to the threat of alternative platforms but was regarded as a flop despite being forced on computer purchasers. And even if new users adopt such products, it doesn’t mean that they welcome or unconditionally approve of the supposed innovations introduced in their name, especially if Microsoft and its business partners forced those users to adopt such products when buying their current machine.

The Linux Palmtop and the Linux Desktop

With the success of Android – or more accurately, Android/Linux – claims may be made that radical departures from the traditional desktop software stacks and paradigms are clearly the necessary ingredient for success for GNU/Linux on the desktop, and it might be said that if only people had realised this earlier, the Free Software desktop would have become dominant. Moreover, it might also be argued that “desktop thinking” held back adoption of Linux on mobile devices, too, opening the door for a single vendor to define the payload that delivered Linux to the mobile-device-consuming masses. Certainly, the perceived need for a desktop on a mobile or PDA (personal digital assistant) is entrenched: go back a few years and the GNOME Palmtop Environment (GPE) was seen as the counterweight to Windows CE on something like a Compaq iPAQ. Even on the Golden Delicious GTA04 device, LXDE – a lightweight desktop environment – has been the default environment, admittedly more for verification purposes than for practical use as a telephone.

Naturally, a desktop environment is fairly impractical on a small screen with limited navigational controls. Although early desktop systems had fewer pixels than today’s smartphones, the increased screen size provided much greater navigational control and matched the desktop paradigm much better. It is interesting to note that the Xerox Star had a monochrome 1024×809 display, which is perhaps still larger than many smartphones, that (according to the Wikipedia entry) “…was meant to be able to display two 8.5×11 in pages side by side in actual size”. Even when we become able to show the equivalent amount of content on a smartphone screen at a resolution sufficient to permit its practical use, perhaps with very good eyesight or some additional assistance through magnification, it will remain a challenge to navigate that information precisely. Selecting text, for instance, will not be possible without a very precise pointing instrument.

Of course, one prevalent way of handling such challenges is to zoom in and out of the content, with the focus in recent years on doing so through gestures. The ability to zoom in on documents at arbitrary levels has been around for many years in the computer-aided design, illustration and desktop publishing fields amongst others, and the notion of general user interfaces permitting fluid scrolling and zooming over a surface showing content was already prominent before smartphones adopted and popularised it further. On the one hand, new and different “form factors” – kinds of device with different characteristics – offer improved methods of navigation, perhaps more natural than the traditional mouse and keyboard attached to a desktop computer. But those methods may not lend themselves to use on a desktop computer, with its proven ergonomics of sitting at a comfortable distance from a generously proportioned display and entering textual input using a dedicated device with an efficiency that is difficult to match using, say, a virtual keyboard on a touchscreen.

Proclamations may occasionally be made that work at desks in offices will become obsolete in favour of the mobile workplace. But as that mobile workplace is apparently so often situated at the café table, the aircraft tray table or, once the lap or forearm tires of supporting a device, some other horizontal surface, the desktop paradigm with its supporting cast of input devices and sophisticated applications will still be around in some form. It cannot be convincingly phased out in favour of content consumption paradigms that refuse to tackle the most demanding of the desktop functionality we enjoy today.

A Microsoft keyboard with buttons for bundled applications

The Real Obstacle

Up to this point, my pontification has limited itself to considering what has made the “Linux desktop” attractive in the past, what makes it attractive or unattractive today, and whether the directions currently being taken might make it more attractive to new audiences and to existing users, or instead fail to capture the interest of new audiences whilst alienating existing users. In an ideal world, with every option given equal attention, and with every individual able to exercise a completely free choice and adopt the product or solution that meets his or her own needs best, we could focus on the above topic and not have to worry about anything else. Unfortunately, we do not live in such an ideal world.

Most hardware sold in retail outlets visited by the majority of potential and current computer users is bundled with proprietary software in the form of Microsoft Windows. Indeed, in these outlets, accounting for the bulk of retail sales of computers to private customers, it is not possible to refuse the bundled software and to choose to buy only the hardware so that an alternative software system may be used with the computer instead (or at least not without needing to pursue vendors afterwards, possibly via legal avenues). In such an anticompetitive environment, many customers associate the bundled software with computer hardware in general and remain unaware of alternatives, leaving the struggle to educate those customers to motivated individuals, organisations like the FSF (and FSFE), and to independent retailers.

Unfortunately, individuals and organisations have to spend their own time and money (or their volunteers’ time and their donors’ money) righting the wrongs of the industry. Independent retailers offer hardware without bundled software, or offer Free Software pre-installed as a convenience but without cost, but the low margins on such “bare” or Free Software systems mean that they too are fixing the injustices of the system at their own expense. Once again, our considerations need to be broadened so as to not merely consider the merits of Free Software delivered as a desktop environment for the end-user, but also to the challenge of getting that software in front of the end-user in the first place. And further still: although we might (and should) consider encouraging regulatory bodies to investigate the dubious practices of product bundling, we also need to consider how we might support those who attempt to reach out and educate end-users without waiting for the regulators to act.

Putting the Pieces Together

One way we might support those getting Free Software onto hardware and into the hands of end-users – in this case, purchasers of new computer hardware – is to help them, the independent retailers, command a better price for their products. If all other things are equal, people generally will not pay more for something that they can get cheaper elsewhere. If they can be persuaded to believe that a product is better in certain ways, then paying a little more doesn’t seem so bad, and the extra cost can often be justified. Sadly, convincing people about the merits of Free Software can be a time-consuming process, and people may not see the significance of those merits straight away, so other merits are also required to help build up a sustainable margin that can be fed into the Free Software ecosystem.

Some organisations advocate that bundling services is an acceptable way of appealing to end-users and making a bit of money that can fund Free Software development. However, such an approach risks bringing with it all that is undesirable about the illegitimately-bundled software that customers are obliged to accept on their new computer: advertising, compromised user experiences, and the feeling that they are not in control of their purchase. Moreover, seeking revenue from service providers or selling proprietary services to existing end-users fails to seriously tackle the market access issue that impacts Free Software the most.

A better and proven way of providing additional persuasion is, of course, to make a better product, which is why it is crucial to understand whether the different communities developing the “Linux desktop” are succeeding in doing so or not. If not, we need to understand what we need to do to help people offer viable products that people will buy and use, because the alternative is to continue our under-resourced campaign of only partly successful persuasion and education. And this may put our other activities at risk, too, affording us only desperate resistance to all the nasty anticompetitive measures, both political and technical, that well-resourced and industry-dominating corporations are able to initiate with relative ease.

In short, our activities need to fit together and to support each other so that the whole endeavour may be sustainable and be able to withstand the threats levelled against it and against us. We need to deliver a software experience that people will use and continue to use, we need to recognise that those who get that software in front of end-users need our support in running a viable business doing so (far more than large vendors who only court the Free Software communities when it suits them), and we need to acknowledge the threats to our communities and to be prepared to fight those threats or to support those who do so.

Once we have shown that we can work together and act on all of these simultaneously and continuously as a community, maybe then it will become clear that the year of the Linux desktop has at last arrived for good.