Paul Boddie's Free Software-related blog


Archive for December, 2013

Integrating setup-kolab with Debian Packaging

Wednesday, December 18th, 2013

My recent diversion via pykolab may appear to have rather little to do with the matter of improving the Debian packaging situation for Kolab, but after initial tidying exercises with the pykolab code, I started to consider how the setup-kolab program should behave in the context of Debian packaging operations: what it should do when a package is installed, if anything, and whether it can rely on anything having been configured in advance. Until now, setup-kolab has been regarded as a tool that is run once the Kolab software is mostly or completely present on a system, but this is a somewhat different approach to the way many service-providing Debian packages are set up.

Take MySQL as an example. Upon installing the appropriate Debian package providing the MySQL server, the user is prompted to set up the administrative credentials for the server. After this brief interaction, MySQL should be available for general use without further configuration work, although it may be the case that some tuning might be beneficial. It seems to me that Kolab could be delivered with the same convenience in Debian.

The Different Ways of Configuring Kolab

Until now, many of the individual Kolab meta-packages – those grouping together individual components as functional units providing mail transport (MTA), storage (IMAP) or directory (LDAP) functionality – have only indirectly relied on the presence of the kolab-conf package providing the setup-kolab program. Indeed, if the top-level kolab meta-package is installed, kolab-conf will be pulled in as a dependency and setup-kolab will be available, and as noted above, since setup-kolab has been the program that configures a complete system, this makes some sense. However, this defers the work of configuring Kolab until the very end, where it piles up: the user may then face a lengthy question-and-answer session to get the software working, which can lead to mistakes and a non-working installation.

But setup-kolab is familiar with the notion of configuring individual components. Although the Kolab documentation has emphasised a “one shot” configuration of everything at once, setup-kolab can be asked to configure different individual things, such as the LDAP integration or the integration with Roundcube (the webmail software supported by Kolab). Thus, it becomes interesting to consider whether individual packages can be configured one by one using setup-kolab until they have all been configured, and whether this produces the same desired result of everything having been set up (and hopefully without the same intensity of questioning).

To make setup-kolab available to packages during their configuration, the kolab-conf package providing the program must be a dependency of each of the packages concerned. Here is a nice little diagram illustrating the result of a few adjustments to the dependency graph:


A revised dependency graph for Kolab (produced using Graphviz and tidied up using a modified version of vidarh's notugly.xsl stylesheet)

(Here, the pykolab packages are coloured in green and have been allowed to float freely to make the layout cleaner: pykolab contributes in a number of ways and the introduction of kolab-conf as a dependency of a few packages means that it is very much “in demand” by all parts of the graph.)
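For anyone wanting to reproduce or adapt such a graph, a Graphviz input fragment along the following lines captures the essential relationships; the package set shown here is simplified and the edges are illustrative, not the exact packaging metadata:

```dot
/* Illustrative fragment only: an arrow points from a package to one of
   its dependencies. The real graph involves many more packages. */
digraph kolab_dependencies {
    rankdir=LR;
    "kolab" -> "kolab-imap";
    "kolab" -> "kolab-mta";
    "kolab-imap" -> "kolab-conf";
    "kolab-mta" -> "kolab-conf";
    "kolab-conf" -> "kolab-ldap";  /* the single "inverted" dependency */
}
```

Rendering this with `dot -Tpng deps.dot -o deps.png` produces a rough approximation of the diagram above.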

Now, with kolab-conf available when, for example, kolab-imap is being installed, the post-installation script of kolab-imap will be able to invoke setup-kolab and to make sure that a usable configuration is available by the time the Debian packaging system is done with the installation activity. But one thing might be bothering the attentive reader: why is kolab-ldap not dependent on kolab-conf? In fact, the configuration process has to start somewhere, and since information related to the LDAP directory is rather central to further configuration, this initial configuration takes place when the kolab-conf package is itself installed. And since this initial configuration needs access to the LDAP system, the dependency relationship is “inverted” for this single case, and kolab-conf populates the basic configuration so that subsequent package configuration can take advantage of this initial work.
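As a concrete sketch of what such a post-installation script might do, consider the following shell fragment in the style of a Debian postinst; the package and component names and the exact setup-kolab invocation are assumptions for illustration, not the actual maintainer script:

```shell
#!/bin/sh
# Illustrative postinst sketch for a hypothetical kolab-imap package:
# on "configure", hand the component over to setup-kolab.
set -e

configure_component() {
    component="$1"
    if command -v setup-kolab >/dev/null 2>&1; then
        # Ask setup-kolab to configure just this one component.
        setup-kolab "$component"
    else
        echo "setup-kolab not available; skipping $component"
    fi
}

case "${1:-configure}" in
    configure)
        configure_component imap
        ;;
esac
```

In real packaging such a script would also need to handle upgrades and reconfiguration, but the essential point stands: each component can be configured as its own package is installed, rather than all at once at the end.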

Communicating with the User

Command line usage of setup-kolab involves some fairly straightforward console-style prompting and input, but during Debian packaging activities, it is widely regarded as a bad thing to have packaging scripts just ask for things from standard input and just write out messages to standard output or standard error: people may be using graphical package management tools that offer alternative facilities for user interaction, and it is possible that console-style operations go completely unnoticed by both the user and such tools. Thus, there is a need to make setup-kolab aware of the facility known as debconf (which is not to be confused with the DebConf series of conferences that may well appear in any searches made to try and discover documentation about the debconf system).

It would appear that debconf is used primarily within shell scripts, as the packaging scripts most commonly use the shell command language, but a goal of this work is surely to avoid replicating the activities of setup-kolab in other programs: it is not particularly desirable to have to rewrite the component-specific parts of pykolab in shell command language just to be able to use debconf to talk to users. Fortunately, the debconf package provides a Python wrapper for the debconf system, and although the documentation and examples are not particularly substantial or enlightening, a bit of experimentation and some consultation of the debconf specification have led to a workable level of interaction between setup-kolab and debconf so that the latter can prompt the user and supply the user’s input to the former without too much complaint.
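A minimal sketch of what this looks like from Python, assuming Debian’s Python debconf bindings (the debconf module) and a hypothetical question name, might be the following:

```python
# Sketch of driving debconf from Python, roughly as setup-kolab might.
# The question name "kolab-conf/admin-password" is hypothetical.
try:
    import debconf  # provided by Debian's python3-debconf package
except ImportError:
    debconf = None  # not a Debian system, or the bindings are missing

def ask_admin_password(db):
    # Queue the question, let the frontend display anything pending,
    # then read the stored answer back from the debconf database.
    db.input('high', 'kolab-conf/admin-password')
    db.go()
    return db.get('kolab-conf/admin-password')

# Inside a maintainer script running under a debconf frontend, one
# would then do something like:
#     debconf.runFrontend()   # re-exec under a frontend if necessary
#     db = debconf.Debconf()
#     password = ask_admin_password(db)
```

The appeal of this arrangement is that the component-specific logic stays in Python, while debconf decides how (or whether) to present the question to the user.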

What Next?

As previously stated, the aim now is to try and get the pykolab changes upstream – regardless of the Debian-related modifications, pykolab has been enhanced in other ways – and to try and reach consensus about whether this way of structuring the Debian dependencies is sensible or not. A substantial number of iterations involving pykolab adjustments, packaging improvements, pbuilder sessions and eventual deployment of the new packages in a virtual machine seem to indicate that the approach is not completely impractical, but it is possible that there are better ways of doing some of this work.

But certainly, my confidence in the Debian packaging situation for Kolab is significantly higher than it was before. I can now see a time when the matter of installing and configuring Kolab (at least for common deployment situations) will be seen as entirely uninteresting and a largely solved problem. Then, people will be able to concentrate on more interesting matters instead.

Adventures in Kolab Packaging and pykolab

Friday, December 13th, 2013

After my previous efforts at building Kolab packages using more traditional Debian methods, my attention then turned to the nature of the software itself, particularly since the Debian Kolab packages (as currently available from the Kolab project’s own repository) do not configure the software so that it is immediately usable: that job is assigned to a program called setup-kolab, which is provided in the kolab-conf Debian package but really lives in the pykolab software distribution. Thus, my focus shifted to what pykolab is and does, and to any ways I might be able to improve or adjust that software.

A quick examination of the Debian packages produced by the pykolab source package gives an indication of the different things pykolab provides: a command framework for configuration (kolab-conf), a command framework for administration (kolab-cli), and an underlying library that supports these and other packages (pykolab). It surprised me slightly that there was so much Python involved with Kolab – my perception had been that the project was mostly KDE-related C++ libraries combined with PHP Web applications (sitting on top of some fairly well-known and established programs and libraries) – and since my primary programming language has been Python for quite some time, I took the opportunity to go through the code and see if there were any trivial but worthwhile adjustments that could be made to make more substantial further improvements more convenient.

Upon emerging from some verbose tidying efforts, hopefully being merged into the upstream code in the near future, I turned my attention to the way setup-kolab behaves. Although the tool mostly does the job already, there are things about it that are perhaps not quite as convenient as they could be. For example, until now, setup-kolab has made the user specify database credentials to access the MySQL database system (unless an explicit component other than MySQL or Roundcube has been indicated as an argument to the program), and in many cases this is not really necessary: the required database may already exist, and on a Debian system there are established ways to query the database system about existing databases without asking the user for the login details. Another desirable feature is that of being able to run setup-kolab and not have it try and perform configuration tasks that have already been done: although prior LDAP directory instances are detected and setup-kolab refuses to go any further unless a special option is given as a command argument, it would be nice if setup-kolab just mentioned that no work needs to be done, at least for most of the components.
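To illustrate the kind of check involved, the sketch below asks the local MySQL server whether a database already exists, using the Debian maintenance credentials in /etc/mysql/debian.cnf instead of prompting the user; the function name is my own and the error handling is deliberately simple:

```python
# Sketch: detect an existing MySQL database on a Debian system without
# asking the user for credentials, by using the packaging system's
# maintenance account defined in /etc/mysql/debian.cnf.
import subprocess

def database_exists(name, defaults_file='/etc/mysql/debian.cnf'):
    try:
        result = subprocess.run(
            ['mysql', '--defaults-file=' + defaults_file,
             '--batch', '--skip-column-names',
             '-e', "SHOW DATABASES LIKE '%s'" % name],
            capture_output=True, text=True, check=True)
    except (OSError, subprocess.CalledProcessError):
        # No mysql client, or the maintenance credentials are unusable:
        # fall back to assuming the database is not there.
        return False
    # Note: a name containing LIKE wildcards (% or _) would need escaping.
    return name in result.stdout.splitlines()
```

With a check like this, setup-kolab could skip the credential questions entirely whenever the expected database is already in place.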

Ultimately, setup-kolab might itself be called from packaging scripts invoked during the installation of various different packages, perhaps bringing in the debconf framework to present a standardised interface on Debian systems, but hopefully not demanding the user’s attention at all. Until then, I hope that it can offer a convenient way of setting Kolab up and/or checking that the different components are ready to support Kolab in use.

More Fun with Packaging

Changing the actual pykolab source code did also involve building the different binary packages from the pykolab source package, but this meant that other packaging-related tasks were largely ignored until someone asked a question on the #kolab IRC channel about the free/busy component and how it should be configured to function. I had not looked very much at this component at all, and I discovered that I did not even have it installed. The kolab-freebusy package sits on top of the other Kolab-related packages, so as I had been installing the kolab package and watching various dependencies being pulled in, kolab-freebusy managed to avoid being installed. (This is understandable from a packaging perspective as the free/busy functionality is an extension to the basic services provided by Kolab.)

Upon inspecting the configuration of the kolab-freebusy package, I noticed that it had not been set up right, either in its default configuration or by setup-kolab. Some further investigation revealed that setup-kolab was trying to configure a file that the kolab-freebusy package had renamed, thus making configuration somewhat ineffective. Fortunately, this particular bug had already been fixed in a later version of the package, although not before I had duplicated some of the work in tracking it down. Still, one or two other things also needed changing and thus a trip via pbuilder was in order.

One of the things that needed changing, or at least refining, concerns the relationship between kolab-freebusy and kolab-utils. The latter package provides the “free/busy daemon”, and one may indeed wonder what this is when the free/busy functionality must surely be provided by the package with the more obvious name. In fact, this is where a picture can be very descriptive:


The basic free/busy architecture in Kolab (made with Graphviz and made nicer using vidarh's notugly.xsl template)

So, since Kolab stores calendar information in IMAP but wants to be able to publish it in a relatively efficient way, the daemon runs periodically and exports free/busy resources for calendar users. Such information is then provided upon request when clients access the free/busy Web service provided by the kolab-freebusy package.

Although one can install only the Web service (in the kolab-freebusy package) and ask for free/busy information, the exercise is rather pointless: there will be no information to retrieve, at least in the default configuration of the system and with the default functionality being offered; only after obtaining and running the daemon (in the kolab-utils package) will the Web service have any use. I am inclined to think that if someone wants to install and use kolab-freebusy, they should also have kolab-utils pulled in for their convenience (or perhaps a package that only provides the daemon).
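To show the client side of this arrangement, fetching a user’s published free/busy data from the Web service might look like the following; the hostname and the URL layout are assumptions for illustration and depend entirely on the local kolab-freebusy configuration:

```shell
# Illustrative only: the host and the URL scheme for addressing a
# user's free/busy data depend on how kolab-freebusy is configured.
FREEBUSY_URL="https://kolab.example.org/freebusy/john.doe@example.org.ifb"
if curl --silent --fail "$FREEBUSY_URL"; then
    echo "free/busy data retrieved"
else
    echo "no free/busy data available at $FREEBUSY_URL"
fi
```

If the daemon from kolab-utils has never run, a request like this yields nothing useful, which is exactly the pointlessness described above.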

In Conclusion

All in all, I think Kolab offers plenty of things to explore, review, and – on occasion – fix or improve, but it has to be mentioned that once the packages provided the right directories for logging and free/busy resource storage, and once the Web service started to use the correct configuration, everything seemed to work as intended. I hope that with some more polishing, we will see something that more people will be able to try out and perhaps feel comfortable using in their own organisation.

Some things I would like to see, as I noted on the mailing list, include…

  • Getting some of my changes upstream and into the packaging. On the latter front, the Kolab packaging effort in Debian itself is apparently being revived: this will be highly beneficial in a number of ways, not least that of using Debian-native packaging tools instead of the less-than-optimal Open Build Service.
  • A revival of the Kolab Wiki. It is nice to be able to write things up somewhere, evolve a consensus about how things might be done, and collaboratively describe how things are already done: it is rather hard to do this on mailing lists or using reference documentation tools like Sphinx, and blogging about it only achieves so much.
  • Actually improving the functionality and not just the nuts and bolts of getting Kolab installed and set up.

Obviously, I aim to continue doing these things and not just ask for things and expect to see others doing them, and I am quite sure that there are others in the community who are similarly motivated. If your organisation could benefit from Kolab or does so already, or if it benefits from any Free Software groupware solution, please consider supporting the developers of those solutions in their work.

Without actively developed Free Software groupware, there is a risk that, over time, other Free Software deployed in the average organisation may be under threat as proprietary software vendors and uninformed decision-makers use excuse after excuse to flush out diversity and replace it with their own favourite monoculture of proprietary products and solutions. By supporting Free Software groupware and establishing basic infrastructure founded on Free Software and open standards in your own organisation, you establish an environment of fair competition, nurture a healthy and often necessary technological diversity, and you give your organisation the strategic freedom that would otherwise be denied to you.

To have the choice of Free Software in any particular application or field, we need to sustain those giving us that choice and not just say that we would use it “if only it were better” and then pay a proprietary vendor for something nicer or shinier instead. Please keep such things in mind when evaluating solutions for your organisation’s immediate and future needs.

Neo900: Turning the corner

Monday, December 2nd, 2013

Back when I last wrote about the status of the Neo900 initiative, the fundraising had just begun and the target was a relatively modest €25000 by “crowdfunding” standards. That target was soon reached, but it was only ever the initial target: the sum of money required to prototype the device and to demonstrate that the device really could be made and could eventually be sold to interested customers. Thus, to communicate the further objectives of the project, the Neo900 site updated their funding status bar to show further funding objectives that go beyond mere demonstrations of feasibility and that also cover different levels of production.

So what happened here? Well, one of the slightly confusing things was that even though people were donating towards the project’s goals, it was not really possible to consider all of them as potential customers, so if 200 people had donated something (anything from, say, €10 right up to €5000), one could not really rely on them all coming back later to buy a finished device. People committing €100 or more might be considered likely purchasers, especially since donations of that size are effectively treated as pledges to buy and qualify for a rebate on a finished device, but people donating less might just be doing so to support the project. Indeed, people donating €100 or more might also only be doing so to support the project, but it is probably reasonable to expect that the more people have given, the more likely they are to want to buy something in the end. And, of course, if someone donates the entire likely cost of a device, a purchase has effectively been made already.

So even though the initiative was able to gauge a certain level of interest, it was not able to do so precisely merely by considering the amount of financial support it had been receiving. Consequently, by measuring donations of €100 or more, a more realistic impression of the scale of eventual production could be obtained. As most people are aware, producing things in sufficient quantity may be the only way that a product can get made: setup costs, minimum orders of components, and other factors mean that small runs of production are prohibitively expensive. With 200 effective pledges to buy, the initiative can move beyond the prototyping phase and at least consider the production phase – when they are ready, of course – without worrying too much that there will be a lack of customers.

Since my last report, media coverage has even extended into the technology mainstream, with Wired doing a news article about it. Meanwhile, the project itself demonstrated mechanically compatible hardware and the modem hardware they intend to use, also summarising component availability and potential problems with the sourcing of certain components. For the most part, things are looking good indeed, with perhaps the only cloud on the horizon being a component with a 1000-unit minimum order quantity. That is why the project will not be stopping with 200 potential customers: the more people that jump on board, the greater the chances that everyone will be able to get a better configuration for the device.

If this were a mainstream “crowdfunding” effort, they might call that a “stretch goal”, but it is really a consequence of the way manufacturing is done these days, giving us economies of scale on the one hand, but raising the threshold for new entrants and independent efforts on the other. Perhaps we will eventually see innovations in small-scale manufacturing, not just in the widely-hyped 3D printing field, but for everything from electronic circuits to screens and cases, that may help eliminate some of the huge fixed costs and make it possible to design and make complicated devices relatively cheaply.

It will certainly be interesting to see how many more people choose to extend the lifespan of their N900 by signing up, or how many embrace the kind of smartphone that the “fickle market” supposedly does not want any more. Maybe as more people join in, more will be encouraged to join in as well, and so some kind of snowball effect might occur. Certainly, with the transparency shown in the project so far, people will at least be able to make an informed decision about whether they join in or not. And hopefully, we will eventually see some satisfied customers with open hardware running Free Software, good to go for another few years, emphasising once again that the combination is an essential ingredient in a sustainable technological society.