Paul Boddie's Free Software-related blog


Tuning digiKam’s Previews

Tuesday, June 17th, 2014

It is entirely possible that I am doing something wrong as usual, but my way of using digiKam is as follows:

  1. Plug in camera or other media.
  2. Choose to download using digiKam when prompted by the notifier.
  3. Wait while digiKam loads.
  4. Download new images in the pop-up window that digiKam offers containing thumbnails.
  5. View the images in the album, clicking on them to see them at a decent size, zooming in to see detail.

It is also entirely possible that by not choosing to edit images in digiKam, I am missing out on vital functionality, although I rather adhere to the school of photography that involves as little postprocessing as possible (also known as “PP” amongst squabbling online photography forum participants). Anyone who has had to scan negatives with a flatbed scanner and a negative adapter, dealing with constant dust contamination, never mind develop film, soon realises what a burden digital photography has lifted from them. (On the subject of developing film, and ignoring some work experience in a print shop, I have only ever done so in the distant past with a pinhole photography kit, which was actually fun. Doing so regularly would be somewhat more tedious, I imagine.)

What I didn’t realise until today, and it was driving me crazy, was that another piece of vital functionality seems to be reserved for the editing mode in digiKam unless you change the settings. I was seeing rather pixelated images at the “1:1” or 100% view and was rather surprised that a camera I had recently bought was performing so badly: was it really so badly configured? To keep this story moving at an acceptable pace, I’ll give you the briefest comprehensible summary of my investigation and its conclusion.

First, I configured my camera to take raw images, and then I took some raw images: digiKam imported them, which is a nice touch, but I really wanted to see what the sensor was actually producing (and actually I still have to figure that out). And so I launched the editing mode and, to my surprise, got to see my pictures in all their proper glory, no pixelation or anything of the kind: they were what I was expecting from the very beginning. And then I realised that perhaps digiKam treats the album view as some kind of preview, even when you view the images at full resolution. This led me into the “Settings” menu and into “Configure digiKam…” and here is what I found:

The digiKam settings of note in this particular case

Here, I’ve helped you out a little in finding the offending setting: it’s in the “Preview Options” part at the bottom and is called “Embedded preview loads full-sized images.” Selecting this has a dramatic effect on the “preview” as can be seen from an image with this setting in its apparently natural unselected state:

Pixelation visible in a full resolution crop in digiKam

And here is the result when selecting the setting:

A full resolution crop being displayed properly
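
For those who prefer to adjust such things outside the GUI, digiKam keeps its settings in a plain configuration file, so the option can presumably be toggled there too. The following is only a sketch: the file location is typical of KDE 4 era setups, and the group and key names are assumptions on my part rather than values checked against the digiKam sources.

  # ~/.kde/share/config/digikamrc -- location typical of KDE 4 setups;
  # the group and key names below are assumptions, not verified.
  [Album Settings]
  Preview Load Full Image Size=true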

So what do we learn from this? Well, it did occur to me that perhaps I’m just not using digiKam in the right way: “everybody else” goes into every image with the editor, or maybe the slideshow option displays images properly and they all use that to properly look at their photographic output. But I remain baffled as to why anyone would want to see some presumably optimised-for-performance preview when they are looking at their images at full resolution. For anyone suffering from this unfathomable behaviour, I hope this article floats into your Internet frame of view so that you can avoid the same frustration.

Update

It turns out that the low resolution preview feature is covered by a reported bug in digiKam.

Meanwhile, I have since realised that when digiKam is allowed to “download” pictures from an SD card, it wants to rotate them all even if no rotation is necessary, actually changing all the files. With this errant behaviour avoided by just copying the pictures from the card directly (and using Gwenview to view them), it turns out that digiKam is quite happy to show pictures in the correct orientation all by itself. Unfortunately, it seems completely unable to abandon its own erroneous understanding of the orientation of some pictures (“downloaded” by digiKam and subsequently replaced by the actual image files straight from the card), even after I moved those pictures out of digiKam, removed all of the affected album’s image entries from its database, and removed all of that album’s thumbnail entries from its thumbnail database.

I am starting to feel that the simpler option – Gwenview – is looking better all the time, but I’ll miss the tagging features of digiKam and the relatively convenient EXIF perusal, especially since Gwenview’s tagging seems broken for me and its EXIF support seems somewhat perfunctory.

Mobile Tethering, Privacy and Predatory Practices

Saturday, April 26th, 2014

Daniel Pocock provides some fairly solid analogies regarding arbitrary restrictions on mobile network usage, but his blog system seems to reject my comment, so here it is, mostly responding to Adam Skutt’s remark on “gas-guzzlers” (cars that needlessly consume more petrol/gasoline than they really need to).

The second example or analogy mentions “exotic” cars, not necessarily gas-guzzling ones. The point being highlighted is that when producers can ascertain or merely speculate that customers can afford higher prices, they may decide to exploit those customers; things like “tourist prices” are another example of such predatory practices.

I think the first example is a fairly solid rebuttal of the claim that this is all about likely bandwidth consumption (and that tethered devices would demand more than mobile devices). Just as the details of the occupants of a vehicle should be of no concern to a petrol station owner, so should the details of network-using programs be of no concern to a mobile network operator.

Operators are relying on the assumption that phones are so restricted that their network usage will be constrained accordingly, but this won’t be the case forever and may not even be the case now. They should stop pining for the days when phones were totally under their own control, with every little feature being a paid upgrade to unlock capabilities that the device had from the moment it left the factory.

I know someone whose carrier-locked phone wouldn’t share pictures over Bluetooth whereas unlocked phones of the same type would happily do so. A smartphone is said to be a computer in one’s pocket: this means that we should also fight to uphold the same general purpose computing rights that, over the years, various organisations have sought to deny us on our desktop and laptop computers.

Minimal Kolab: Unbundling the LDAP and IMAP Components

Saturday, March 8th, 2014

In my last post about Kolab I hinted about unbundling the LDAP server and using a remote server for LDAP service. Since then, I’ve been looking at making the IMAP server an “unbundled” component – IMAP service still being required, of course – and even at supporting Dovecot in setup-kolab as well. After all, choice is an important motivation for adopting Free Software, and we should at least try and make that choice convenient to exercise where possible. Supporting a choice of IMAP servers gives everyone a bit more flexibility and should make Kolab a bit easier to adopt. The Dovecot work is still very much in progress, however.

As you may recall, I wanted to deploy an XMPP server – ejabberd – on one host and the rest of the Kolab stack on another host, and yet retain the possibility of configuring these different components using setup-kolab. To support this specific situation, and to eventually move beyond it to support other architectural configurations, I have chosen to introduce a new metapackage called kolab-minimal: the glue components of Kolab plus the Web components minus the “infrastructure” components, those being the components providing the LDAP, IMAP and mail transport services. Here’s where this takes us:

A dependency diagram for Kolab featuring a kolab-minimal package and the different setup-kolab invocations

If you want to install the complete stack, the kolab metapackage will bring everything you need into the installation and hopefully provide you with a self-contained solution. But now, the kolab metapackage is formulated in terms of the “infrastructure” components plus kolab-minimal. And if you want to install something that gets its services from other computers, kolab-minimal should be all you need.
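
To make the arrangement concrete, here is a sketch of how the relevant metapackage stanzas in debian/control might look. Only kolab and kolab-minimal are names taken from the discussion above; the dependency package names are illustrative stand-ins rather than the exact names used in my packaging.

  Package: kolab
  Architecture: all
  Depends: kolab-minimal, kolab-ldap, kolab-imap, kolab-mta
  Description: Kolab groupware solution (complete stack)
   Metapackage pulling in the infrastructure components together
   with the glue and Web components provided by kolab-minimal.

  Package: kolab-minimal
  Architecture: all
  Depends: kolab-conf, kolab-webadmin, roundcubemail-plugins-kolab
  Description: Kolab groupware solution (glue and Web components)
   Metapackage for installations obtaining LDAP, IMAP and mail
   transport services from other computers.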

Obviously, this work deviates from the “official” Kolab packages, but since it appears that those packages aren’t being refreshed with upstream fixes as often as might be desirable, I’ve decided to put some source packages online. If you should be tempted to build them and try them out, please remember to do so in a test environment and that you do so at your own peril! And if you want to track my packaging changes, I’ve put a collection of repositories online, too.

Note that the XMPP dependencies (Converse and ejabberd) are integrated into the above, but more testing is really needed to make sure that they become robust additions. Indeed, as part of the Converse plugin package there’s a patch that fixes an annoying bug that makes deploying the plugin almost like an “all or nothing” affair. So I’ve put them under a new metapackage – kolab-extra – for now.

And the way the packaging invokes the configuration program – setup-kolab – probably needs some review, particularly the way setup-kolab wants to edit or replace configuration files of other packages. Some insights into the proper Debian way of doing this would be very useful indeed. I can imagine Kolab using configuration files in slightly different locations and changing references to where the configuration lives in only one key place (or a few key places), leaving the originals intact, but I haven’t been able to look at this in depth.

And with that, I’ll give my modestly equipped desktop computer a rest from running two User Mode Linux instances and pdebuild in 1GB RAM (of the semiconductor variety, plus some swap).

Kolab, Debian, LDAP and XMPP

Friday, February 21st, 2014

I had another chance to look at Kolab and the dependency graph recently. Having been inspired by the prospect of chat integration within Roundcube, I set out to install a suitable XMPP server, and it seemed that ejabberd was the most likely choice on Debian systems such as my own. Here is what the configuration would ideally look like:

Kolab, Roundcube and ejabberd running in a User Mode Linux environment

But I then discovered that ejabberd did not seem to work with the User Mode Linux environment in which I test my packages. This then gave me an excuse to contemplate the relationship between the different components. LDAP is central to the way Kolab manages credentials and is employed by ejabberd to authenticate users, but the LDAP service does not need to be located on the same server. Indeed, it is likely that in a larger organisation the services would reside on a number of different computers.

Repositioning LDAP

Since I was interested in writing a component to configure ejabberd for integration with the other Kolab components, and since this service would have to be installed outside my own User Mode Linux environment (within which Roundcube happens to reside), I needed to find a way of teaching setup-kolab (Kolab’s setup script) about remote LDAP services as an alternative to any such service running on the same machine. And from this perspective I realised that the dependency on LDAP is a “soft” one: it is entirely possible to want to install Kolab without also installing an LDAP server suite, but the need for LDAP service remains. It thus falls to other computers to provide LDAP services to the computer running the chat service (and to the other Kolab services, too).

A bit of adjustment to the setup_ldap module in pykolab and it became possible to choose a local directory or a remote one accessible via LDAP. At this point, running ejabberd outside User Mode Linux (UML) and connecting to the LDAP service running inside UML looked feasible, and I developed a setup-kolab component to propagate Kolab settings to ejabberd’s configuration file, but my desktop environment’s chat program didn’t seem interested in joining the testing effort. That meant that I really had to get the Converse plugin working within Roundcube, thus enabling chat within the webmail environment.
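
To illustrate the kind of choice this involves, here is a minimal sketch in the spirit of that adjustment; the function, prompts and defaults are my own invention and do not reflect pykolab’s actual internals.

  # A minimal sketch of offering a local directory or a remote LDAP
  # service; names and prompts are illustrative, not pykolab's own.

  def choose_ldap_uri(ask):
      """Return an LDAP URI for a local or remote directory service."""
      answer = ask("Use an existing LDAP service on another host? [y/N] ")
      if answer.strip().lower() == "y":
          host = ask("LDAP server host: ").strip()
          return "ldap://%s:389" % host
      # Otherwise, a directory server on this machine is set up and used.
      return "ldap://localhost:389"

  # Example usage: uri = choose_ldap_uri(raw_input)  # Python 2, as used by pykolab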

Enlisting Converse

Naturally, this meant figuring out a reliable way of configuring Converse, and thus another setup-kolab component was created for this purpose. So far, so straightforward: get Converse to talk to an XMPP service and the job is done. But in my arrangement, the XMPP service – ejabberd – is situated in a location remote from Converse and separate from Roundcube, and is thus not accessible without some additional measures. Converse runs JavaScript in the browser, but that code needs to “bind” to the XMPP service in order to be able to use it, and a general security measure enforced in browsers – the same-origin policy – means that scripts aren’t allowed to talk to any location on the Internet just because they want to: instead, they may be restricted to communicating only with the server that delivered them to the browser in the first place. Here is a diagram illustrating the problem:

Kolab, Roundcube and ejabberd, but with Converse wanting to communicate with two different hosts

Since Converse wants to talk to an XMPP service that is not located in the same place as the Web server which delivered Converse to the browser, a proxy must be deployed to listen to Converse within the Web environment and relay the communications to ejabberd. This involves configuring Apache to receive requests while pretending to be the “connection manager”, forwarding such requests to the real connection manager provided by ejabberd. Thus, the following diagram illustrates the solution to this distribution-of-services problem:

Kolab, Roundcube and ejabberd, with the latter being reached via a proxy from Roundcube
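
In Apache terms, the proxy can be as simple as forwarding the HTTP binding (BOSH) endpoint to ejabberd. The sketch below uses ejabberd’s conventional defaults of port 5280 and the /http-bind path, together with a placeholder hostname; an actual deployment may well differ.

  # Relay Converse's chat requests to the real connection manager;
  # the hostname, port and path reflect ejabberd's conventional
  # defaults and are assumptions, not values from a real deployment.
  <IfModule mod_proxy.c>
      ProxyRequests Off
      ProxyPass /http-bind http://xmpp.example.com:5280/http-bind
      ProxyPassReverse /http-bind http://xmpp.example.com:5280/http-bind
  </IfModule>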

Thus, the task of setting up chat in Roundcube, integrated with Kolab, involves the following:

  • The configuration of ejabberd to authenticate users using Kolab account details stored in the LDAP directory (see the sketch just after this list)
  • The configuration of Roundcube to enable the Converse plugin and…
  • The deployment of a proxy site in Apache to forward Converse’s chat requests to ejabberd
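
For the first of these tasks, ejabberd’s LDAP authentication settings look roughly like the following. This sketch uses the Erlang-term ejabberd.cfg format of the ejabberd 2.x series, and the host, base DN and login attribute mapping are placeholders rather than values taken from a real Kolab directory.

  %% Authenticate ejabberd users against the LDAP directory; all
  %% values shown are placeholders.
  {auth_method, ldap}.
  {ldap_servers, ["ldap.example.com"]}.
  {ldap_base, "ou=People,dc=example,dc=com"}.
  {ldap_uids, [{"mail", "%u@example.com"}]}.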

The State of Play

There seems to be plenty of integration work still to be done. Although Converse can obtain contact details supplied by ejabberd from the LDAP service and thus provide immediate access to other users in the same organisation, the level of integration with the rest of the interface is still fairly loose: you cannot find a chat button in the address book for each contact, for example. Even so, the level of convenience probably already matches various other groupware solutions.

I can’t wait to see what kind of communication or collaboration technology will be next, even if there will be a degree of work to make it a bit easier to set up with Kolab. And that reminds me to get the configuration nuts and bolts packed off and sent upstream so that everybody else can try it out.

The Current Kolab Package Dependency Graph

Sunday, January 26th, 2014

A few weeks ago, I published a suggested dependency arrangement for Kolab in Debian. For completeness, here is a reasonably nice diagram showing the dependency arrangement used by the existing “vendor” packages:

The dependencies used by the current "vendor" packages for Kolab

As noted before, the experiment I have been performing reorganises the packages so that they can be configured one at a time, and I even managed to get the debconf system to ask the user any necessary installation-related questions.

(Once again, the slightly fancier than normal output is thanks to the notugly.xsl stylesheet – described here – and updated further by myself to fix a few things that seemed to go wrong. At some point, I’ll try and send my changes to vidarh and see if he likes them.)

Python 3: The Feast after the Enforced Python 2 Feature Diet?

Thursday, January 9th, 2014

Victor Stinner makes the case for OpenStack moving to Python 3 “right now”. Since he is a member of the Python core development community, it should not be a surprise that he holds such a view. However, any large Python project would arguably be better off just waiting for the core developers to acknowledge the obvious: that deliberately breaking compatibility and making needless work for people was a bad idea, and that bringing people over to the supposed future of the Python language will involve building a bridge for them, not just telling them to jump some difficult-to-measure divide while teasing them with goodies, many of which have been deliberately withheld from Python 2.x for the sake of making Python 3.x look better than it inherently is.

I notice that the referenced article claims that “Python 3 is fast”. First of all, PyPy offers much more convincing performance benefits for many programs, and PyPy did not support Python 3 last time I checked. Secondly, CPython 3 had a lot of scope for being faster than it was initially, and I seem to recall that there were many Python 3 migration measures in Python 2.6 that led to performance regressions there as well, so everyone had to suffer from the break in the roadmap while an illusion was preserved to suggest that CPython 3’s relative performance was not that bad in comparison to the 2.x series. Thirdly, stuff like the “computed goto” work could have been delivered for CPython 2.x, but there has been a deliberate policy of denying such new features to CPython 2 over the past few years. (For example, repeated attempts to diminish CPython’s perverse resistance to being cross-built, with patches against CPython 2 supplied and largely ignored or shuffled about over many years, presumably languish now as tickets targeted against CPython 3.)

I can totally understand that the core developers don’t really want to look at a code base that is effectively an archaic version of the one they now work on, but as a consequence of deciding that things must be broken, people need to understand that many improvements which could have reached them in CPython 2 never made it that far, partly as a policy decision and partly because the deliberate break probably makes backporting much harder than it might otherwise have been.

OpenStack may well end up being better on Python 3 because of all those new things mentioned in the article, but getting there may well be harder than it needed to be. If the project does end up benefiting, the exercise can hardly be considered an endorsement of the Python 3 strategy. And putting Python 2 users on a feature diet in order to make Python 3 look like a feature feast is no way to justify the strategy, either.

Python 3: I Told You So?

Saturday, January 4th, 2014

Five or so years ago I managed to achieve a small amount of notoriety by being quoted in a Linux.com article about Python 3 where I stated that Python 3 was all very well, but the efforts of the core development community would have been better spent on fixing the major perceived flaws with Python, specifically those affecting the “reference implementation” known as CPython that were giving Python a bad name. Now, in an article discussing the current state of Python 3 adoption, Alex Gaynor has belatedly arrived at the same conclusion. What is also interesting, however, is the response to his remarks about bridging the compatibility gap between Python 2 and Python 3.

Some people evidently seem to think that Python’s “benevolent dictator for life” Guido van Rossum should try harder to coerce people to use Python 3 instead of Python 2. Back in 2008, I pointed out what was rather obvious from the very start of the introduction of Python 3: it represents a considerable inconvenience for relatively little reward; people effectively need to put in lots of work just to arrive at what they already have with their Python 2 systems, albeit with some kind of future-proofing as a bonus (and even then, Python 3 has had its own share of compatibility discontinuities and false starts).

When people choose to adopt Free Software technologies, which is what most Python implementations are and what Python the language effectively is, they tend to want more control over those technologies, their roadmaps, and the pace at which they need to update and migrate their systems to benefit from the continued development of those technologies. Indeed, people resent being told to do thankless work at the command of other people, since in many cases this is precisely why they have abandoned or rejected proprietary software whose vendors have acted in such a dictatorial fashion towards their users and customers. And coercion can never really work in a viable Free Software community, anyway: as some people have pointed out, if the majority reject the direction of the project, they will happily see the leaders of the project recede into the distance leading a minority group that may even be diminishing, which is arguably what has happened here.

Free Software project leaders and developers have to remember that even if they have an abundance of time, energy and resources to bring to a project, the people who use and depend on that project’s software may not be able to match such investments themselves. Stamping one’s foot at a community’s reluctance to adapt to exciting new ways of doing things shows a disregard for the investments the members of that community have already made in the project, and it demonstrates an arrogance which suggests that the supposed needs of the project surpass any and all needs of the project’s users, even when those of the project may be frivolous whereas those of its users may actually determine the continued viability of those users’ businesses or operations.

Opening Up the Road Ahead

When Python core developers by their own admission often do not use Python 3 as their primary platform, members of the wider community should hardly feel bad about sticking to Python 2. Pushing others in front to suffer the trials of migration is just one of the unfortunate traits exhibited by the whole Python 3 affair. The other one that comes to mind is the pandering to people who insisted that breaking backwards compatibility was essential to making Python a better language for their particular domain. I am reminded of various delegations of Lisp users supposedly entertaining a switch to Python (as their own language languishes in popularity) who insisted that certain improvements be made to make Python a worthy enough candidate for their use. Such people are best ignored for the most part (after taking on board any genuinely useful suggestions) because they never tend to stick around anyway, and if they really are serious about adopting a technology then they will eventually make the effort to migrate all by themselves.

Certainly, one should never inconvenience the existing community to pander to a bunch of fickle potential users. And one should always lead by example, demonstrate the viability of the project’s direction, and show that as the project leadership one is willing to share the resulting inconvenience and thereby seek to reduce it for others through hard-won experience with the problems one is advocating that others should choose to face. Back in 2008, only a small part of my response to the author of the Linux.com article was quoted, but I originally wrote the following:

There is this argument, especially with open source projects but it applies more generally, that no-one is forced to adopt backward incompatible change, yet by indicating that the favoured version of the future is a backward incompatible one, people are confronted with what could be called a roadblock on their roadmap. Who will be responsible for maintenance of the language system if they choose not to take the favoured path? If any security fixes are required in the future, where will they come from?

I sincerely hope that the Python core developers do not continue to impose this roadblock on the community in the hope that people will be encouraged through such coercion to adopt Python 3. Yet it appears that hopes are still high that GNU/Linux distributions will still be able to switch to Python 3 and deprecate Python 2 in the relatively near future, provoked by a refusal of the core developers to continue to support Python 2 (either by maintaining the CPython implementation of it or by acknowledging it as the more widespread form of the language) and an apparent reluctance to allow others to do so under the Python community umbrella.

People will continue to use Python 2 for some time to come, and if they might one day consider switching to Python 3 it will only happen if they have not already been alienated by seemingly petty acts of the Python core developers who have sought to punish them for not enthusiastically embracing the project’s latest vision by some arbitrary deadline. Because, as has also been noted, it may be easier for those users to take another path and thus choose another technology entirely than to deal with this needless obstruction on the road ahead. And then, not even the reluctant embrace of actively maintained Python 2 implementations such as PyPy by the core developers will be able to mend the damage.

Integrating setup-kolab with Debian Packaging

Wednesday, December 18th, 2013

My recent diversion via pykolab may appear to have rather little to do with the matter of improving the Debian packaging situation for Kolab, but after initial tidying exercises with the pykolab code, I started to consider how the setup-kolab program should behave in the context of Debian packaging operations: what it should do when a package is installed, if anything, and whether it can rely on anything having been configured in advance. Until now, setup-kolab has been regarded as a tool that is run once the Kolab software is mostly or completely present on a system, but this is a somewhat different approach to the way many service-providing Debian packages are set up.

Take MySQL as an example. Upon installing the appropriate Debian package providing the MySQL server, the user is prompted to set up the administrative credentials for the server. After this brief interaction, MySQL should be available for general use without further configuration work, although it may be the case that some tuning might be beneficial. It seems to me that Kolab could be delivered with the same convenience in Debian.

The Different Ways of Configuring Kolab

Until now, many of the individual Kolab meta-packages – those grouping together individual components as functional units providing mail transport (MTA), storage (IMAP) or directory (LDAP) functionality – have only indirectly relied on the presence of the kolab-conf package providing the setup-kolab program. Indeed, if the top-level kolab meta-package is installed, kolab-conf will be pulled in as a dependency and setup-kolab will be available, and as noted above, since setup-kolab has been the program that configures a complete system, this makes some sense. However, this causes the work of configuring Kolab to be put off until the very end and thus to pile up so that a user may have to undertake a lengthy question-and-answer process to get the software working, and this can lead to mistakes and a non-working installation.

But setup-kolab is familiar with the notion of configuring individual components. Although the Kolab documentation has emphasised a “one shot” configuration of everything at once, setup-kolab can be asked to configure different individual things, such as the LDAP integration or the integration with Roundcube (the webmail software supported by Kolab). Thus, it becomes interesting to consider whether individual packages can be configured one by one using setup-kolab until they have all been configured, and whether this produces the same desired result of everything having been set up (and hopefully without the same intensity of questioning).
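
In other words, instead of one monolithic question-and-answer session, configuration might proceed component by component, along these lines; the component names shown are indicative of the kind of arguments setup-kolab accepts rather than an exhaustive or verified list.

  # Configuring components one at a time instead of everything at once;
  # the component names are indicative, not an exhaustive list.
  setup-kolab ldap
  setup-kolab mta
  setup-kolab imap
  setup-kolab roundcube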

To make setup-kolab available to packages during their configuration, the kolab-conf package providing the program must be a dependency of each of the packages concerned. Here is a nice little diagram illustrating the result of a few adjustments to the dependency graph:

A revised dependency graph for Kolab (produced using Graphviz and tidied up using a modified version of vidarh's notugly.xsl stylesheet)

(Here, the pykolab packages are coloured in green and have been allowed to float freely to make the layout cleaner: pykolab contributes in a number of ways and the introduction of kolab-conf as a dependency of a few packages means that it is very much “in demand” by all parts of the graph.)

Now, with kolab-conf available when, for example, kolab-imap is being installed, the post-installation script of kolab-imap will be able to invoke setup-kolab and to make sure that a usable configuration is available by the time the Debian packaging system is done with the installation activity. But one thing might be bothering the attentive reader: why is kolab-ldap not dependent on kolab-conf? In fact, the configuration process has to start somewhere, and since information related to the LDAP directory is rather central to further configuration, this initial configuration takes place when the kolab-conf package is itself installed. And since this initial configuration needs access to the LDAP system, the dependency relationship is “inverted” for this single case, and kolab-conf populates the basic configuration so that subsequent package configuration can take advantage of this initial work.
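
A post-installation script along these lines is what I have in mind; this is a sketch, and the precise setup-kolab invocation is an assumption on my part rather than the packaging’s actual content:

  #!/bin/sh
  # Hypothetical postinst for kolab-imap: configure the component once
  # its files are in place. The setup-kolab argument is an assumption.
  set -e

  case "$1" in
      configure)
          setup-kolab imap
          ;;
  esac

  #DEBHELPER#
  exit 0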

Communicating with the User

Command line usage of setup-kolab involves some fairly straightforward console-style prompting and input, but during Debian packaging activities, it is widely regarded as a bad thing to have packaging scripts just ask for things from standard input and just write out messages to standard output or standard error: people may be using graphical package management tools that offer alternative facilities for user interaction, and it is possible that console-style operations go completely unnoticed by both the user and such tools. Thus, there is a need to make setup-kolab aware of the facility known as debconf (which is not to be confused with the DebConf series of conferences that may well appear in any searches made to try and discover documentation about the debconf system).

It would appear that debconf is used primarily within shell scripts, as the packaging scripts most commonly use the shell command language, but a goal of this work is surely to avoid replicating the activities of setup-kolab in other programs: it is not particularly desirable to have to rewrite the component-specific parts of pykolab in shell command language just to be able to use debconf to talk to users. Fortunately, the debconf package provides a Python wrapper for the debconf system, and although the documentation and examples are not particularly substantial or enlightening, a bit of experimentation and some consultation of the debconf specification has led to a workable level of interaction between setup-kolab and debconf so that the latter can prompt the user and supply the user’s input to the former without too much complaint.
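
For anyone wanting to attempt the same, the essential interaction looks something like the following sketch. The debconf module and its methods come with the python-debconf package; the question name is hypothetical, and a corresponding debconf templates file would also be needed.

  # Sketch of prompting via debconf from Python; assumes a debconf
  # frontend is already running, as it is in a maintainer script.
  # The question name kolab-conf/ldap-host is hypothetical.
  import debconf

  def ask_ldap_host():
      db = debconf.Debconf()
      db.input('high', 'kolab-conf/ldap-host')  # queue the question
      db.go()                                   # let the frontend ask it
      return db.get('kolab-conf/ldap-host')     # retrieve the answer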

What Next?

As previously stated, the aim now is to try and get the pykolab changes upstream – regardless of the Debian-related modifications, pykolab has been enhanced in other ways – and to try and reach consensus about whether this way of structuring the Debian dependencies is sensible or not. A substantial number of iterations involving pykolab adjustments, packaging improvements, pbuilder sessions and eventual deployment of the new packages in a virtual machine seem to indicate that the approach is not completely impractical, but it is possible that there are better ways of doing some of this work.

But certainly, my confidence in the Debian packaging situation for Kolab is significantly higher than it was before. I can now see a time when the matter of installing and configuring Kolab (at least for common deployment situations) will be seen as entirely uninteresting and a largely solved problem. Then, people will be able to concentrate on more interesting matters instead.

Adventures in Kolab Packaging and pykolab

Friday, December 13th, 2013

After my previous efforts at building Kolab packages using more traditional Debian methods, my attention then turned to the nature of the software itself, particularly since the Debian Kolab packages (as currently available from the Kolab project’s own repository) do not configure the software so that it is immediately usable: that job is assigned to a program called setup-kolab, which is provided in the kolab-conf Debian package but really lives in the pykolab software distribution. Thus, my focus shifted to what pykolab is and does, and to any ways I might be able to improve or adjust that software.

A quick examination of the Debian packages produced by the pykolab source package gives an indication of the different things pykolab provides: a command framework for configuration (kolab-conf), a command framework for administration (kolab-cli), and an underlying library that supports these and other packages (pykolab). It surprised me slightly that there was so much Python involved with Kolab – my perception had been that the project was mostly KDE-related C++ libraries combined with PHP Web applications (sitting on top of some fairly well-known and established programs and libraries) – and since my primary programming language has been Python for quite some time, I took the opportunity to go through the code and see if there were any trivial but worthwhile adjustments that could be made to make more substantial further improvements more convenient.

Upon emerging from some verbose tidying efforts, which will hopefully be merged into the upstream code in the near future, I turned my attention to the way setup-kolab behaves. Although the tool mostly does the job already, there are things about it that are perhaps not quite as convenient as they could be. For example, until now, setup-kolab has made the user specify database credentials to access the MySQL database system (unless an explicit component other than MySQL or Roundcube has been indicated as an argument to the program), and in many cases this is not really necessary: the required database may already exist, and on a Debian system there are established ways to query the database system about existing databases without asking the user for the login details. Another desirable feature is that of being able to run setup-kolab and not have it try to perform configuration tasks that have already been done: although prior LDAP directory instances are detected, and setup-kolab refuses to go any further unless a special option is given as a command argument, it would be nice if setup-kolab just mentioned that no work needs to be done, at least for most of the components.
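
As an example of the kind of non-interactive enquiry meant here, a Debian system’s MySQL maintenance credentials in /etc/mysql/debian.cnf can be used to discover whether a database already exists without troubling the user at all. This sketch merely illustrates the idea and is not setup-kolab’s actual code:

  # Check for an existing database using Debian's MySQL maintenance
  # credentials instead of prompting for login details; this needs to
  # run as root, since /etc/mysql/debian.cnf is root-readable only.
  import subprocess

  def database_exists(name):
      output = subprocess.check_output([
          "mysql", "--defaults-file=/etc/mysql/debian.cnf",
          "--batch", "--skip-column-names", "-e", "SHOW DATABASES"],
          universal_newlines=True)
      return name in output.split()

  if database_exists("kolab"):
      print("Database already present; no need to ask for credentials.")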

Ultimately, setup-kolab might itself be called from packaging scripts invoked during the installation of various different packages, perhaps bringing in the debconf framework to present a standardised interface on Debian systems, but hopefully not demanding the user’s attention at all. Until then, I hope that it can offer a convenient way of setting Kolab up and/or checking that the different components are ready to support Kolab in use.

More Fun with Packaging

Changing the actual pykolab source code also involved building the different binary packages from the pykolab source package, but other packaging-related matters were largely ignored until someone asked a question on the #kolab IRC channel about the free/busy component and how it should be configured to function. I had not looked at this component very much at all, and I discovered that I did not even have it installed. The kolab-freebusy package sits on top of the other Kolab-related packages, so even though I had installed the kolab package and watched various dependencies being pulled in, kolab-freebusy managed to avoid being installed. (This is understandable from a packaging perspective, as the free/busy functionality is an extension to the basic services provided by Kolab.)

Upon inspecting the configuration of the kolab-freebusy package, I noticed that it had not been set up correctly, either in its default configuration or by setup-kolab. Some further investigation revealed that setup-kolab was trying to configure a file that the kolab-freebusy package had renamed, making the configuration somewhat ineffective. Fortunately, although not before I had duplicated some of the work, this particular bug had already been fixed in a later version of the package. Still, one or two other things also needed changing, and thus a trip via pbuilder was in order.

One of the things that needed changing, or at least refining, concerns the relationship between kolab-freebusy and kolab-utils. The latter package provides the “free/busy daemon”, and one may indeed wonder what this is when the free/busy functionality must surely be provided by the package with the more obvious name. In fact, this is where a picture can be very descriptive:

The basic free/busy architecture in Kolab (made with Graphviz and made nicer using vidarh's notugly.xsl template)

So, since Kolab stores calendar information in IMAP but wants to be able to publish it in a relatively efficient way, the daemon runs periodically and exports free/busy resources for calendar users. Such information is then provided upon request when clients access the free/busy Web service provided by the kolab-freebusy package.

Although one can install only the Web service (in the kolab-freebusy package) and ask for free/busy information, the exercise is rather pointless: there will be no information to retrieve, at least in the default configuration of the system and with the default functionality being offered; only after obtaining and running the daemon (in the kolab-utils package) will the Web service have any use. I am inclined to think that if someone wants to install and use kolab-freebusy, they should also have kolab-utils pulled in for their convenience (or perhaps a package that only provides the daemon).

In Conclusion

All in all, I think Kolab offers plenty of things to explore, review, and – on occasion – fix or improve, but it has to be mentioned that once the packages provided the right directories for logging and free/busy resource storage, and once the Web service started to use the correct configuration, everything seemed to work as intended. I hope that with some more polishing, we will see something that more people will be able to try out and perhaps feel comfortable using in their own organisation.

Some things I would like to see, as I noted on the mailing list, include…

  • Getting some of my changes upstream and into the packaging. On the latter front, the Kolab packaging effort in Debian itself is apparently being revived: this will be highly beneficial in a number of ways, not least that of using Debian-native packaging tools instead of the less-than-optimal Open Build Service.
  • A revival of the Kolab Wiki. It is nice to be able to write things up somewhere, evolve a consensus about how things might be done, and collaboratively describe how things are already done: it is rather hard to do this on mailing lists or using reference documentation tools like Sphinx, and blogging about it only achieves so much.
  • Actually improving the functionality and not just the nuts and bolts of getting Kolab installed and set up.

Obviously, I aim to continue doing these things and not just ask for things and expect to see others doing them, and I am quite sure that there are others in the community who are similarly motivated. If your organisation could benefit from Kolab or does so already, or if it benefits from any Free Software groupware solution, please consider supporting the developers of those solutions in their work.

Without actively developed Free Software groupware, there is a risk that, over time, other Free Software deployed in the average organisation may be under threat as proprietary software vendors and uninformed decision-makers use excuse after excuse to flush out diversity and replace it with their own favourite monoculture of proprietary products and solutions. By supporting Free Software groupware and establishing basic infrastructure founded on Free Software and open standards in your own organisation, you establish an environment of fair competition, nurture a healthy and often necessary technological diversity, and you give your organisation the strategic freedom that would otherwise be denied to you.

To have the choice of Free Software in any particular application or field, we need to sustain those giving us that choice and not just say that we would use it “if only it were better” and then pay a proprietary vendor for something nicer or shinier instead. Please keep such things in mind when evaluating solutions for your organisation’s immediate and future needs.

Neo900: Turning the corner

Monday, December 2nd, 2013

Back when I last wrote about the status of the Neo900 initiative, the fundraising had just begun and the target was a relatively modest €25000 by “crowdfunding” standards. That target was soon reached, but it was only ever the initial target: the sum of money required to prototype the device and to demonstrate that the device really could be made and could eventually be sold to interested customers. Thus, to communicate the further objectives of the project, the Neo900 site updated their funding status bar to show further funding objectives that go beyond mere demonstrations of feasibility and that also cover different levels of production.

So what happened here? Well, one slightly confusing thing was that even though people were donating towards the project’s goals, it was not really possible to consider all of them as potential customers: if 200 people had donated something (anything from, say, €10 right up to €5000), one could not rely on them all coming back later to buy a finished device. People committing €100 or more might be considered likely purchasers, especially since donations of that size are effectively treated as pledges to buy and qualify for a rebate on a finished device, but people donating less might just be doing so to support the project. Indeed, people donating €100 or more might also only be doing so to support the project, but it is probably reasonable to expect that the more people have given, the more likely they are to want to buy something in the end. And, of course, if someone donates the entire likely cost of a device, a purchase has effectively been made already.

So even though the initiative was able to gauge a certain level of interest, it could not do so precisely just by considering the total amount of financial support received. Consequently, by counting donations of €100 or more, a more realistic impression of the scale of eventual production could be obtained. As most people are aware, producing things in sufficient quantity may be the only way that a product can get made: setup costs, minimum orders of components, and other factors mean that small runs of production are prohibitively expensive. With 200 effective pledges to buy, the initiative can move beyond the prototyping phase and at least consider the production phase – when they are ready, of course – without worrying too much that there will be a lack of customers.

Since my last report, media coverage has extended into the technology mainstream, with Wired even doing a news article about it. Meanwhile, the project itself demonstrated mechanically compatible hardware and the modem hardware it intends to use, also summarising component availability and potential problems with the sourcing of certain components. For the most part, things are looking good indeed, with perhaps the only cloud on the horizon being a component with a 1000-unit minimum order quantity. That is why the project will not be stopping at 200 potential customers: the more people that jump on board, the greater the chances that everyone will be able to get a better configuration for the device.

If this were a mainstream “crowdfunding” effort, they might call that a “stretch goal”, but it is really a consequence of the way manufacturing is done these days, giving us economies of scale on the one hand, but raising the threshold for new entrants and independent efforts on the other. Perhaps we will eventually see innovations in small-scale manufacturing, not just in the widely-hyped 3D printing field, but for everything from electronic circuits to screens and cases, that may help eliminate some of the huge fixed costs and make it possible to design and make complicated devices relatively cheaply.

It will certainly be interesting to see how many more people choose to extend the lifespan of their N900 by signing up, or how many embrace the kind of smartphone that the “fickle market” supposedly does not want any more. Maybe as more people join in, more will be encouraged to join in as well, and so some kind of snowball effect might occur. Certainly, with the transparency shown in the project so far, people will at least be able to make an informed decision about whether to join in or not. And hopefully, we will eventually see some satisfied customers with open hardware running Free Software, good to go for another few years, emphasising once again that the combination is an essential ingredient in a sustainable technological society.