Paul Boddie's Free Software-related blog


Archive for the ‘mail’ Category

Some Ideas for 2019

Wednesday, January 23rd, 2019

Well, after my last article moaning about having wishes and goals while ignoring the preconditions for, and contributing factors in, the realisation of such wishes and goals, I thought I might as well be constructive and post some ideas I could imagine working on this year. It would be a bonus to get paid to work on such things, but I don’t hold out too much hope in that regard.

In a way, this is to make up for not writing an article summarising what I managed to look at in 2018. But then again, it can be a bit wearing to have to read through people’s catalogues of work even if I do try and make my own more approachable and not just list tons of work items, which is what one tends to see on a monthly basis in other channels.

In any case, 2018 saw a fair amount of personal focus on the L4Re ecosystem, as one can tell from looking at my article history. Having dabbled with L4Re and Fiasco.OC a bit in 2017 with the MIPS Creator CI20, I finally confronted certain aspects of the software and got it working on various devices, which had been something of an ambition for at least a couple of years. I also got back into looking at PIC32 hardware and software experiments, tidying up and building on earlier work, and I keep nudging along my Python-like language and toolchain, Lichen.

Anyway, here are a few ideas I have been having for supporting a general strategy of building flexible, sustainable and secure computing environments that respect the end-user. Such respect is not limited to software freedom: it also extends to things like privacy, affordability and longevity, which are often disregarded in the narrow focus on only one set of end-user rights.

Building a General-Purpose System with L4Re

Apart from writing unfinished articles about supporting hardware devices on the Ben NanoNote and Letux 400, I also spent some time last year considering the mechanisms supporting filesystems in L4Re. For an outsider like myself, the picture isn’t particularly clear: the mechanisms don’t seem especially well documented, and I am not convinced that the filesystem support is what people might expect from a microkernel-based system.

Like L4Re’s device support, the way filesystems are made available to tasks appears to use libraries extensively, whereas I would expect more use of individual programs, with interprocess messages and shared memory being employed to move the data around. To evaluate my expectations, I have been writing programs that operate in such a way, employing a “toy” filesystem to test the viability of such an approach. The plan is to make use of libext2fs to expose ext2/3/4 filesystems to L4Re components, then to try and replace the existing file access mechanisms with ones that access these file-serving components.
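
To illustrate the architectural idea being tested, here is a small sketch written as ordinary Python for a POSIX system rather than for L4Re: a separate “file server” process receives read requests as messages and places the requested data in a shared memory region, so that only small control messages travel over the channel. The region size, file path and names are purely illustrative and not taken from the actual experiments.

  # Not L4Re code: a toy POSIX/Python illustration of the architecture only.
  from multiprocessing import Pipe, Process, shared_memory

  REGION_SIZE = 4096  # size of the shared buffer used for data transfer

  def file_server(conn, region_name):
      """Serve (path, offset, amount) read requests using shared memory."""
      region = shared_memory.SharedMemory(name=region_name)
      while True:
          request = conn.recv()
          if request is None:                  # shutdown message
              break
          path, offset, amount = request
          with open(path, "rb") as f:
              f.seek(offset)
              data = f.read(min(amount, REGION_SIZE))
          region.buf[:len(data)] = data        # the data travels via shared memory
          conn.send(len(data))                 # only the length travels as a message
      region.close()

  if __name__ == "__main__":
      region = shared_memory.SharedMemory(create=True, size=REGION_SIZE)
      parent, child = Pipe()
      server = Process(target=file_server, args=(child, region.name))
      server.start()

      # Client side: ask for the first 64 bytes of a file (any readable file will do).
      parent.send(("/etc/hostname", 0, 64))
      length = parent.recv()
      print(bytes(region.buf[:length]))

      parent.send(None)                        # tell the server to finish
      server.join()
      region.close()
      region.unlink()

In L4Re itself, the counterparts would be things like IPC gates and dataspaces rather than pipes and POSIX shared memory, but the division of labour between a file-serving component and its clients is the same.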

It is unfortunate that systems like these no longer seem to get much attention from the wider Free Software community. There was once a project to port GNU Hurd to L4-family microkernels, but with the state of the art having seemingly been regarded as insufficient or inappropriate, the focus drifted off onto other things, and now there doesn’t seem to be much appetite to resume such work. Even the existing Hurd implementation doesn’t get that much interest these days, either. And yet there are plenty of technical, social and practical reasons for not just settling for using the Linux kernel and systems based on it for every last application and device.

Extending Hardware Support within L4Re

It is all very well developing filesystem support, but there also has to be support for the things that provide the storage on which those filesystems reside. This is something I didn’t look at when getting L4Re working on various devices because the priority was to have something to show: the display had to work, along with other well-understood peripherals that could be tested and demonstrated, and the keyboard or keypad could be supported with relative ease and then used to demonstrate other aspects of the solution.

It seems perverse that one must implement support for SD or microSD card storage all over again when the software being run is already being loaded from such storage, but this is rather like the way that “live CD” versions of GNU/Linux would load an environment directly from a CD, yet an installed version of such an environment might not have the capability to access the CD drive. Still, this is an unavoidable step on the path towards a practical system.

It might also be nice to make the CI20 support a bit better. Although the CI20 is notionally supported by L4Re, various missing pieces of hardware support needed to be added, and the HDMI output capability remains unavailable. Here, the mystery hardware left undocumented by the datasheet happens to be used in other chipsets and has been supported in the Linux kernel for many of them for quite some time. Hopefully, the exercise will not be too frustrating.

Another device that might be a good candidate for L4Re is the Efika MX Smartbook. Although it has a modest specification by today’s bloated-Web and pointless-eye-candy standards, it has a nice keyboard (with a more sensible layout than the irritating HP Elitebook, as I recently discovered) and is several times more powerful than the Letux 400. My brother, David, has already investigated certain aspects of the hardware, and it might form the basis of a nice portable system. And since support in Linux and other more commonly-used technologies has been left to rot, why not look into developing a more lasting alternative?

Reviving Mail-Based Communication

It is tiresome to see the continuing neglect of the health of e-mail, despite it being used as the bedrock of the Internet’s activities, while proprietary and predatory social media platforms enjoy undeserved attention and promotion in mass media and in society at large. Governmental and corporate negligence means that the average person is subjected to an array of undesirable, disturbing and generally unwanted communications from chancers and criminals through their e-mail accounts which, if it had ever happened to the same degree with postal mail, would have seen people routinely arrested and convicted for their activities.

It is tempting to think that “they” do not want “us” to have a more reliable form of mail-based communication because that would involve things like encryption and digital signatures. Even when these things are deemed necessary, they always seem to be rolled out as part of a special service that hosts “our” encryption and signing keys for us, through which we must go to access our messages. It is, I suppose, yet another example of the abandonment of common infrastructure: that when something better is needed, effort and attention are siphoned off from the “commons” and ploughed into something separate that might make someone a bit of money.

There are certainly challenges involved in making e-mail better, with any discussion of managing identities, vouching for and recognising correspondents, and managing keys most likely to lead to dispute about the “best” way of doing things. But in the end, we probably find ourselves pursuing perfect solutions that do everything whilst proprietary competitors just focus on doing a handful of things effectively. So I envisage turning this around and evaluating whether a more limited form of mail-based communication can be done in the way that most people would need.

I did look fairly recently at a selection of different projects seeking to advise and support people on providing their own e-mail infrastructure. That is perhaps worth an article in its own right. And on the subject of mail-based communication, I hope to look a bit more at imip-agent again after neglecting it for so long.

Sustaining a Python Alternative

One motivation for developing my Python-like language and toolchain, Lichen, was to explore ways in which Python might have been developed to better suit my own needs and preferences. Although I still use Python extensively, I remain mindful of the need to write conservative, uncomplicated code without the kind of needless tricks and flourishes that Python’s expanding feature set can tempt the developer with, and thus I almost always consider the possibility of being able to use the Lichen toolchain with my projects one day.

Lichen may still be a proof of concept, but there has been work done on gradually pushing it towards being genuinely usable. I spent some time considering the way floating point numbers might be represented, and this raised some interesting issues around how they might be stored within instances. As with the tuple performance optimisations, I hope to introduce floating point support into Lichen’s established feature set and to offer decent-enough performance, with the latter aspect being yet another path of investigation this year.

Documenting and Publishing

A continuing worry I have is whether I should still be running MoinMoin sites or even sites derived from MoinMoin “export dumps” for published information that is merely static. I have spent some time developing a parsing and formatting tool to generate static content from Moin content, thus avoiding running Moin altogether and also avoiding having to run a script acting as a URL-preserving front-end to exported Moin content (which is unfortunately required because of how the “export dump” seems to work).

Currently, this tool supports HTML and Moin output, the latter to test the parsing activity, with Graphviz content rendered as inline SVG or in other supported formats (although inline SVG is really what you want). Some macros are supported, but I need to add support for others, like the search macros which are useful for generating page listings. Themes are also supported, but I need to make sure that things like tables of contents – already supported with a macro – can be styled appropriately.
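
For the Graphviz handling mentioned above, the essence of the inline SVG approach can be shown with a small sketch (not the actual tool): the Graphviz source is piped through the dot program with SVG output selected, and the XML declaration and document type that precede the svg element are stripped so that only the element itself is embedded in the generated HTML.

  import subprocess

  def graphviz_to_inline_svg(dot_source):
      """Render Graphviz source and return an <svg> fragment for embedding."""
      result = subprocess.run(["dot", "-Tsvg"],
                              input=dot_source.encode("utf-8"),
                              capture_output=True, check=True)
      svg = result.stdout.decode("utf-8")
      # Drop the XML declaration and DOCTYPE that precede the <svg> element.
      return svg[svg.index("<svg"):]

  if __name__ == "__main__":
      print(graphviz_to_inline_svg("digraph { moin -> html; moin -> moin; }"))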

Already, I can generate the bulk of my existing project documentation, and the aim here is to be able to focus on improving that documentation, particularly for things like Lichen that really need explanations to be written before I need to start reviewing the code from scratch as if I were a total newcomer to the work. I have also considered using this tool as the basis for a decentralised wiki solution, but that can probably wait for a while given how many other things I have said I want to do!

Anything More?

There are probably other things that are worth looking at, but these are perhaps the ones I feel are most pressing. It could be said that pursuing all these at once would spread me and my efforts very thin, but I tend to cycle through projects periodically and hope that I can catch up with what I was previously doing, hence the mention above of documenting my work.

I wonder how much other people think about the year ahead, and whether it is a productive and ultimately rewarding exercise to state aspirations and goals in this kind of way. New Year’s resolutions are a familiar practice, of course, but here I make no promises!

Nevertheless, a belated Happy New Year to anyone still reading! I hope we can all pursue our objectives enthusiastically over the year ahead and make a real and positive difference to computing, its users and to our societies.

In Defence of Mail

Monday, November 6th, 2017

A recent LWN.net article, “The trouble with text-only email”, gives us an insight, through an initially narrow perspective, into a broader problem: how the use of e-mail by organisations and its handling as it traverses the Internet can undermine the viability of the medium, and how organisations supposedly defending the Internet as a platform can easily find themselves abandoning technologies that do not sit well with their “core mission”, not to mention betraying that mission by employing dubious technological workarounds.

To summarise, the Mozilla organisation wants its community to correspond via mailing lists but, being the origin of the mails propagated to list recipients when someone communicates with one of their mailing lists, it finds itself under the threat of being blacklisted as a spammer. This might sound counterintuitive: surely everyone on such lists signed up for mails originating from Mozilla in order to be on the list.

Unfortunately, the elevation of Mozilla to being a potential spammer says more about the stack of workaround upon workaround, second- and third-guessing, and the “secret handshakes” that define the handling of e-mail today than it does about anything else. Not that factions in the Mozilla organisation have necessarily covered themselves in glory in exploring ways of dealing with their current problem.

The Elimination Problem

Let us first identify the immediate problem here. No, it is not spamming as such, but the existence of dubious “reputation” services that cause mail to be blocked on opaque and undemocratic grounds. I encountered one of these a few years ago when trying to send a mail to a competition and finding that such a service had decided that my mail hosting provider’s Internet address was somehow “bad”.

What can one do when placed in such a situation? Appealing to the blacklisting service will not do an individual any good. Instead, one has to ask one’s mail provider to try and fix the issue, which in my case they had actually been trying to do for some time. My mail never got through in the end. Who knows how long it took to persuade the blacklisting service to rectify what might have been a mistake?

Yes, we all know that the Internet is awash with spam. And yes, mechanisms need to be in place to deal with it. But such mechanisms need to be transparent and accountable. Without these things, all sorts of bad things can take place: censorship, harassment, and forms of economic crime spring readily to mind. It should be a general rule of thumb in society that when someone exercises power over others, such power must be controlled through transparency (so that it is not arbitrary and so that everyone knows what the rules are) and through accountability (so that decisions can be explained and judged to have been properly taken and acted upon).

We actually need better ways of eliminating spam and other misuse of common communications mechanisms. But for now we should at least insist that whatever flawed mechanisms exist today uphold the democratic principles described above.

The Marketing Problem

Although Mozilla may have distribution lists for marketing purposes, its problem with mailing lists is something of a different creature. The latter are intended to be collaborative and involve multiple senders of the original messages: a many-to-many communications medium. Meanwhile, the former is all about one-to-many messaging, and in this regard we stumble across the root of the spam problem.

Obviously, compulsive spammers are people who harvest mail addresses from wherever they can be found, trawling public data or buying up lists of addresses sourced during potentially unethical activities. Such spammers create a huge burden on society’s common infrastructure, but they are hardly the only ones cultivating that burden. Reputable businesses, even when following the law in communicating with their own customers, often employ what can be regarded as a “clueless” use of mail as a marketing channel, without any thought to the consequences.

Businesses might want to remind you of their products and encourage you to receive their mails. The next thing you know, you get messages three times a week telling you about products that are barely of interest to you. This may be a “win” for the marketing department – it is like advertising on television but cheaper because you don’t have to bid for “eyeballs” against addiction-exploiting money launderers (sorry, gambling companies), debt sharks (sorry, consumer credit companies) or environment-trashing cure peddlers (sorry, nutritional supplement companies) – but it cheapens and worsens the medium for everybody who uses it for genuine interpersonal communication and not just for viewing advertisements.

People view e-mail and mail software as a lost cause in the face of wave after wave of illegal spam and opportunistic “spammy” marketing. “Why bother with it at all?” they might ask, asserting that it is just a wastebin that one needs to empty once a week as some kind of chore, before returning to one’s favourite “social” tools (also plagued with spam and surveillance, but consistency is not exactly everybody’s strong suit).

The Authenticity Problem

Perhaps to escape problems with the overly-zealous blacklisting services, it is not unusual, as a customer of some company, to get messages ostensibly from that company but actually originating from some kind of marketing communications service. The use of such a service may be excusable depending on how much information is shared, what kinds of safeguards are in place, and so on. What is less excusable is the way the communication is performed.

I actually experience this with financial institutions, which should be a significant area of concern for individuals, the industry and its regulators alike. First of all, the messages are not encrypted, which is what one might expect given that the sender would need some kind of public key information that I haven’t provided. But provided that the message details are not sensitive (although sometimes they have been, which is another story), we might not set our expectations so high for these communications.

However, of more substantial concern is the way that when receiving such mails, we have no way of verifying that they really originated from the company they claim to have come from. And when the mail inevitably contains links to things, we might be suspicious about where those links, even if they are URLs in plain text messages, might want to lead us.

The recipient is now confronted with a collection of Internet domain names that may or may not correspond to the identities of reputable organisations: some they might know as a customer, others they might merely be aware of, and the recipient must exercise the correct judgement about the relationship between the companies they do use and these other organisations with which they have no relationship. Even with a great deal of peripheral knowledge, the recipient needs to exercise caution that they do not go off to random places on the Internet and start filling out their details on the say-so of some message or other.

Indeed, I have a recent example of this. One financial institution I use wants me to take a survey conducted by a company I actually have heard of in that line of business. So far, so plausible. But then, the site being used to solicit responses is one I have no prior knowledge of: it could be a reputable technology business or it could be some kind of “honeypot”; that one of the domains mentioned contains “cloud” also does not instil confidence in the management of the data. To top it all, the mail is not cryptographically signed and so I would have to make a judgement on its authenticity based on some kind of “tea-leaf-reading” activity using the message headers or assume that the institution is likely to want to ask my opinion about something.

The Identity Problem

With the possibly-authentic financial institution survey message situation, we can perhaps put our finger on the malaise in the use of mail by companies wanting our business. I already have a heavily-regulated relationship with the company concerned. They seemingly emphasise issues like security when I present myself to their Web sites. Why can they not at least identify themselves correctly when communicating with me?

Some banks only want electronic communications to take place within their hopefully-secure Web site mechanisms, offering “secure messaging” and similar things. Others also offer such things, either two-way or maybe only customer-to-company messaging, but then spew e-mails at customers anyway, perhaps under the direction of the sales and marketing branches of the organisation.

But if they really must send mails, why can they not leverage their “secure” assets to allow me to obtain identifying information about them, so that their mails can be cryptographically signed and so that I can install a certificate and verify their authenticity? After all, if you cannot trust a bank to do these things, which other common institutions can you trust? Such things have to start somewhere, and what better place to start than in the banking industry? These people are supposed to be good at keeping things under lock and key.

The Responsibility Problem

This actually returns us to the role of Mozilla. Being a major provider of software for accessing the Internet, the organisation maintains a definitive list of trusted parties through whom the identity of Web sites can be guaranteed (to various degrees) when one visits them with a browser. Mozilla’s own sites employ certificates so that people browsing them can have their privacy upheld, so it should hardly be inconceivable for the sources of Mozilla’s mail-based communications to do something similar.

Maybe S/MIME would be the easiest technology to adopt given the similarities between its use of certificates and certificate authorities and the way such things are managed for Web sites. Certainly, there are challenges with message signing and things like mailing lists, this being a recurring project for GNU Mailman if I remember correctly (and was paying enough attention), but nothing solves a longstanding but largely underprioritised problem like a concrete need and the will to get things done. Mozilla has certainly tried to do identity management in the past, recalling initiatives like Mozilla Persona, and the organisation is surely reasonably competent in that domain.
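
To make the suggestion more concrete, here is a hedged sketch of what signing an outgoing message with S/MIME might look like, using a recent version of the Python cryptography library rather than anything Mozilla or Mailman actually employs; the certificate and key file names are assumptions, standing in for material issued under an organisation’s own certificate authority.

  from cryptography import x509
  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.serialization import pkcs7

  # Assumed file names: a signing certificate and its private key.
  with open("cert.pem", "rb") as f:
      certificate = x509.load_pem_x509_certificate(f.read())
  with open("key.pem", "rb") as f:
      private_key = serialization.load_pem_private_key(f.read(), password=None)

  body = b"Please complete our survey at https://example.com/survey\r\n"

  # Produce a multipart/signed S/MIME message with a detached signature.
  signed_message = (
      pkcs7.PKCS7SignatureBuilder()
      .set_data(body)
      .add_signer(certificate, private_key, hashes.SHA256())
      .sign(serialization.Encoding.SMIME,
            [pkcs7.PKCS7Options.DetachedSignature])
  )

  print(signed_message.decode("ascii", "replace"))

A recipient’s mail program can then verify the signature against the organisation’s certificate, much as a browser verifies a site certificate when establishing a secure connection.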

In the referenced article, Mozilla was described as facing an awkward technical problem: their messages were perceived as being delivered indiscriminately to an audience of which large portions may not have been receiving or taking receipt of the messages, this perception of indiscriminate, spam-like activity apparently being a metric employed by blacklisting services. The proposed remedy for potential blacklisting involved the elimination of plain text e-mail from Mozilla’s repertoire and the deployment of HTML-only mail, with the latter employing links to images that would load upon the recipient opening the message. (Never mind that many mail programs prevent this.)

The rationale for this approach was that Mozilla would then know that people were getting the mail and that by pruning away those who didn’t reveal their receipt of the message, the organisation could then be more certain of not sending mail to large numbers of “inactive” recipients, thus placating the blacklisting services. Now, let us consider principle #4 of the Mozilla manifesto:

Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional.

Given such a principle, why then is the focus on tracking users and violating their privacy, not on deploying a proper solution and just sending properly-signed mail? Is it because the mail is supposedly not part of the Web or something?

The Proprietary Service Problem

Mozilla can be regarded as having a Web-first organisational mentality which, given its origins, should not be too surprising. Although the Netscape browser was extended to include mail facilities and thus Navigator became Communicator, and although the original Mozilla browser attempted to preserve a range of capabilities not directly related to hypertext browsing, Firefox became the organisation’s focus and peripheral products such as Thunderbird have long struggled for their place in the organisation’s portfolio.

One might think that the decision-makers at Mozilla believe that mundane things like mail should be done through a Web site as webmail and that everyone might as well use an established big provider for their webmail needs. After all, the vision of the Web as a platform in its own right, once formulated as Netscape Constellation in more innocent times, can be used to justify pushing everything onto the Web.

The problem here is that as soon as almost everyone has been herded into proprietary service “holding pens”, expecting a free mail service while having their private communications mined for potential commercial value, things like standards compliance and interoperability suffer. Big webmail providers don’t need to care about small mail providers. Too bad if the big provider blacklists the smaller one: most people won’t even notice, and why don’t the users of the smaller provider “get with it” and use what everybody else is using, anyway?

If almost everyone ends up on the same server or cluster of servers, or on one of a handful of such clusters, why should the big providers bother to do anything by the book any more? They can make all sorts of claims about it being more efficient to do things their own way. And then, mail is no longer a decentralised, democratic tool any more: its users end up being trapped in a potentially exploitative environment with their access to communications at risk of being taken away at a moment’s notice, should the provider be persuaded that some kind of wrong has been committed.

The Empowerment Problem

Ideally, everyone would be able to assert their own identity and be able to verify the identity of those with whom they communicate. With this comes the challenge in empowering users to manage their own identities in a way which is resistant to “identity theft”, impersonation, and accidental loss of credentials that could have a severe impact on a person’s interactions with necessary services and thus on their life in general.

Here, we see the failure of banks and other established, trusted organisations to make this happen. One might argue that certain interests, political and commercial, do not want individuals controlling their own identity or their own use of cryptographic technologies. Even when such technologies have been deployed so that people can be regarded as having signed for something, it usually happens via a normal secured Web connection with a button on a Web form, everything happening at arm’s length. Such signatures may not be personal signatures at all: they may just be transactions surrounded by assumptions that it really was “that person” because they logged in with their credentials and there are logs to “prove” it.

Leaving the safeguarding of cryptographic information to the average Internet user seems like a scary thing to do. People’s computers are not particularly secure thanks to the general neglect of security by the technology industry, nor are they particularly usable or understandable, especially when things that must be done right – like cryptography – are concerned. It also doesn’t help that when trying to figure out best practices for key management, it almost seems like every expert has their own advice, leaving the impression of a cacophony of voices, even for people with a particular interest in the topic and an above-average comprehension of the issues.

Most individuals in society might well struggle if left to figure out a technical solution all by themselves. But institutions exist that are capable of operating infrastructure with a certain level of robustness and resilience. And those institutions seem quite happy with the credentials I provide to identify myself with them, some of which are provided by bits of hardware they have issued to me.

So, it seems to me that maybe they could lead individuals towards some kind of solution whereby such institutions could vouch for a person’s digital identity, provide that person with tools (possibly hardware) to manage it, and could help that person restore their identity in cases of loss or theft. This kind of thing is probably happening already, given that smartcard solutions have been around for a while and can be a component in such solutions, but here the difference would be that each of us would want help to manage our own identity, not merely retain and present a bank-issued identity for the benefit of the bank’s own activities.

The Real Problem

The LWN.net article ends with a remark mentioning that “the email system is broken”. Given how much people complain about it, and yet how the mail still keeps getting through, it appears that the brokenness is not in the system as such but in the way it has been misused and undermined by those with the power to do something about it.

That the metric of being able to get “pull requests through to Linus Torvalds’s Gmail account” is mentioned as some kind of evidence perhaps shows that people’s conceptions of e-mail are themselves broken. One is left with an impression that electronic mail is like various other common resources that are systematically and deliberately neglected by vested interests so that they may eventually fail, leaving those vested interests to blatantly profit from the resulting situation while making remarks about the supposed weaknesses of those things they have wilfully destroyed.

Still, this is a topic that cannot be ignored forever, at least if we are to preserve things like genuinely open and democratic channels of communication whose functioning may depend on decent guarantees of people’s identities. Without a proper identity or trust infrastructure, we risk delegating every aspect of our online lives to unaccountable and potentially hostile entities. If it all ends up with everyone having to do their banking inside their Facebook account, it would be well for the likes of Mozilla to remember that at such a point there is no consolation to be had any more that at least everything is being done in a Web browser.

imip-agent: Integrating Calendaring with E-Mail

Tuesday, November 17th, 2015

Longer ago than I had realised until now, I wrote an article about my ongoing exploration of groupware and, specifically, calendaring. As I noted in that article, I felt that a broader range of options may be needed for those wishing to expand their use of communications technologies beyond plain e-mail and into the structured exchange of other kinds of information, whilst retaining and building upon that e-mail infrastructure.

And I noted that, more often than not, people wanting to increase their ambitions in this regard are confronted with the prospect of abandoning what they already use successfully, instead being obliged to adopt a complete package of technologies, some of which they may not even need. While proprietary software and service vendors might pursue such strategies of persuasion – getting the big sale or the big contract – it is baffling that Free Software projects might also put potential users on the spot in the same way. After all, Free Software is very much about choice and control.

So, as I spelled out in that previous article, there may be some mileage in trying to offer extensions to existing infrastructure so that people can increase their communications capabilities whilst retaining the technologies they already know. And in some depth (and at some length), I described what a mail-centred calendaring solution might need to provide in order to address most people’s needs. Finally, I promised to make my own efforts available in this area so that anyone remotely interested in the topic might get some benefit from it.

Last month, I started a very brief exchange on a Debian- and groupware-related mailing list about such matters, just to see what people interested in groupware projects might think, also attempting to find out what they use for calendaring themselves. (Unfortunately, there don’t seem to be many non-product-specific, public and open places to discuss matters such as this one. Search mail software lists for calendaring discussions and you may even get to see hostility towards anyone mentioning groupware.) Ultimately, to keep the discussion concrete, I decided to announce informally what I have been working on.

Introducing imip-agent

Calendaring and distributed scheduling can be achieved over e-mail using the iMIP standard. My work relies on this standard to function, providing programs that are integrated in mail transfer agents (MTAs) acting as calendaring agents. Thus, I decided to call the project imip-agent.
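
To give a flavour of what this involves, the following sketch (not imip-agent’s own code) constructs the kind of message that iMIP describes: an ordinary e-mail carrying a text/calendar part whose METHOD property is REQUEST, inviting an attendee to an event. The addresses, times and identifiers are purely illustrative.

  from email.message import EmailMessage

  # An illustrative iCalendar payload: METHOD:REQUEST makes this an invitation.
  ICALENDAR_REQUEST = "\r\n".join([
      "BEGIN:VCALENDAR",
      "PRODID:-//example//imip-sketch//EN",
      "VERSION:2.0",
      "METHOD:REQUEST",
      "BEGIN:VEVENT",
      "UID:20151117T120000Z-1234@example.com",
      "DTSTAMP:20151117T120000Z",
      "DTSTART:20151120T100000Z",
      "DTEND:20151120T110000Z",
      "SUMMARY:Project meeting",
      "ORGANIZER:mailto:organiser@example.com",
      "ATTENDEE;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:attendee@example.com",
      "END:VEVENT",
      "END:VCALENDAR",
      "",
  ])

  message = EmailMessage()
  message["From"] = "organiser@example.com"
  message["To"] = "attendee@example.com"
  message["Subject"] = "Invitation: Project meeting"
  message.set_content("You have been invited to a meeting.")
  message.add_attachment(ICALENDAR_REQUEST, subtype="calendar",
                         params={"method": "REQUEST"})

  print(message.as_string())

A conventional calendar client receiving such a message would offer to add the event and could send back a corresponding METHOD:REPLY message indicating the attendee’s participation status, and it is exchanges of this kind that a calendaring agent can observe and act upon.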

Initially, and as noted previously, my interest in such matters started with the mail handling functionality of Kolab and the component called Wallace that is responsible for responding to requests sent to certain e-mail addresses. Meanwhile, Kolab provided (and maybe still provides) a rather inelegant way of preparing “free/busy” information describing the availability of calendar system participants: a daemon program would run periodically, scanning mailboxes for events stored in special folders, and generate completely new manifests of each user’s schedule. (This may have changed since I last looked at Kolab in any serious manner.)

It occurred to me that the exchange of messages between participants in a scheduling transaction should be sufficient to maintain a live record of each participant’s availability, and that some experimentation would demonstrate the feasibility or infeasibility of such an approach. I had already looked into how existing architectures prepare and consume free/busy information, and felt that I had enumerated the relevant essentials for a viable calendaring architecture based on e-mail exchanges alone.

And so I set about learning about mail handling programs and expanding my existing knowledge of calendar-related standards. Fortunately, my work trying to get Kolab configured in a nice way didn’t go entirely to waste after all, although I also wanted to support different MTAs and not use convoluted Postfix-specific integration mechanisms, and so had to read up about more convenient and approachable mechanisms that other systems use to integrate with mail pipelines without trying hard to be all “high performance” about it. And I also wanted to make it possible for people to adopt a solution that didn’t force them to roll out LDAP in a scary “cross your fingers and run this script” fashion, even if many organisations already rely on LDAP and are comfortable with it.
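
As an illustration of the kind of integration this allows, and emphatically not imip-agent’s actual code or configuration, a handler can simply be the target of a pipe in an alias or .forward entry (for example, an /etc/aliases entry along the lines of calendar: "|/usr/local/bin/imip-handler", with the path being an assumption), reading each delivered message from standard input and acting on any text/calendar part it finds:

  #!/usr/bin/env python3
  # A sketch, not imip-agent itself: read a message piped in by the MTA,
  # find the text/calendar part, and append the event periods to a flat
  # file standing in for proper free/busy maintenance. Paths are assumptions.

  import sys
  from email import message_from_binary_file
  from email.policy import default

  def busy_periods(calendar_text):
      """Yield (DTSTART, DTEND) value pairs for each event in the payload."""
      start = None
      for line in calendar_text.splitlines():
          if line.startswith("DTSTART"):
              start = line.split(":", 1)[1]
          elif line.startswith("DTEND") and start:
              yield start, line.split(":", 1)[1]
              start = None

  def main():
      message = message_from_binary_file(sys.stdin.buffer, policy=default)
      sender = message.get("From", "?")
      for part in message.walk():
          if part.get_content_type() == "text/calendar":
              with open("/var/tmp/freebusy.log", "a") as log:
                  for start, end in busy_periods(part.get_content()):
                      log.write("%s %s %s\n" % (sender, start, end))

  if __name__ == "__main__":
      main()

Postfix, Exim and other MTAs all support this style of delivery to a command, which is what makes it appealing as a lowest common denominator for integration.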

The resulting description of this work is now available on the Web, and an attempt has been made to document the many different aspects of development, deployment and integration. Naturally, it is a work in progress and not a finished product: one step on the road to hopefully becoming a dependable solution involves packaging for Free Software distributions, which would result in the effort currently required to configure the software being minimised for the person setting it up. But at the same time, the mechanisms for integration with other systems (such as mail, mailboxes and Web servers) still need to be documented so that such work may have a chance to proceed.

Why Bother?

For various reasons unrelated to the work itself, it has taken a bit longer to get to this point than previously anticipated. But the act of making it available is, for me, a very necessary part of what I regard as a contribution to a kind of conversation about what kinds of software and solutions might work for certain groups of people, touching upon topics like how such solutions might be developed and realised. For instance, the handling of calendar data, although already supported by various Python libraries, hasn’t really led to similar Python-based solutions being developed as far as I can tell. Perhaps my contribution can act as an encouragement there.

There are, of course, various Python-based CalDAV servers, but I regard the projects around them to be somewhat opaque, and I perceive a common tendency amongst them to provide something resembling a product that covers some specific needs but then leaves those people deploying that product with numerous open-ended questions about how they might address related needs. I also wonder whether there should be more library sharing between these projects for more than basic data interpretation, but I know that this is quite difficult to achieve in practice, even if these projects should be largely functionally identical.

With such things forming the background of Free Software groupware, I can understand why some organisations are pitching complete solutions that aim to do many things. But here, in certain regards, I perceive a lack of opportunity for that conversation I mentioned above: there’s either a monologue with the insinuation that some parties know better than others (or worse, that they have the magic formula to total market domination) or there’s a dialogue with one side not really extending the courtesy of taking the other side’s views or contributions seriously.

And it is clear that those wanting to use such solutions should also be part of a conversation about what, in the end, should work best for them. Now, it is possible that organisations might see the benefit in the incremental approach to improving their systems and services that imip-agent offers. But it is also possible that there are organisations that will contrast imip-agent with a selection of all-in-one solutions, possibly being dangled in front of them on special terms by vendors who just want to “close the deal”, and in the comparison shopping exercise that ensues, they will buy into the sales pitch of one of those vendors.

Without a concerted education exercise, that latter group of potential users are never likely to be serious participants in our conversations (although I would hope that they might ultimately see sense), but the former group of potential users should be most welcome to participate in our conversations and thus enrich the wealth of choices and options that we should be offering. They would, I hope, realise that it is not about what they can get out of other people for nothing (or next to nothing), but instead what expertise and guidance they can contribute so that they and others can benefit from a sustainable and durable solution that, above all else, serves them and their needs and interests.

What Next?

Some people might point out that calendaring is only a small portion of what groupware is, if the latter term can even be somewhat accurately defined. This is indeed true. I would like to think that Free Software projects in other domains might enter the picture here to offer a compelling, broader groupware alternative. For instance, despite the apparent focus on chat and real-time communications, one doesn’t hear too much about one of the most popular groupware technologies on the Web today: the wiki. When used effectively, and when the dated rhetoric about wikis being equivalent to anarchy has been silenced by demonstrating effective collaborative editing and content management techniques, a wiki can be a potent tool for collaboration and collective information management.

It also turns out that Free Software calendar clients could do with some improvement. Their deficiencies may be a product of an unfortunate but fashionable fascination with proprietary mail, scheduling and social networking services amongst the community of people who use and develop Free Software. Once again, even though imip-agent seeks to provide only basic functionality as a calendar client, I hope that such functionality may inform or, at the very least, inspire developers to improve existing programs and bring them up to the expected levels of functionality.

Alongside this work, I have other things I want (and need) to be looking at, but I will happily entertain enquiries about how it might be developed further or deployed. It is, after all, Free Software, and given sufficient interest, it should be developed and improved in a collaborative fashion. There are some future plans for it that I take rather seriously, but with the privileges or freedoms granted in the licence, there is nothing stopping it from having a life of its own from now on.

So, if you are interested in this kind of solution and want to know more about it, take a look at the imip-agent site. If nothing else, I hope that it reminds you of the importance of independently-developed solutions for communication and the value in retaining control of the software and systems you rely on for that communication.

When will they stop pretending and just rename Mozilla to Firefox?

Monday, October 19th, 2015

It’s an odd-enough question. After all, the Firefox browser is surely called “Mozilla Firefox” if you use its full name, and the organisation behind it is called “Mozilla Corporation”. Mozilla has been responsible for various products and projects over the years, but if you actually go to the Mozilla site and look around now, it’s all Firefox, Firefox and, digging deeper, Firefox. Well, there’s also a mention of something called Webmaker, “apps”, and some developer-related links, presented within a gallery of pictures of the cool people working for Mozilla.

Now, I use Iceweasel, which is Debian’s version of Firefox, and it’s a good browser. But what concerns me is what has happened to certain other products produced by Mozilla that people also happen to be using. In the buzz that Mozilla are trying to create around their Firefox-centred strategy, with Firefox-the-browser, Firefox-the-mobile-OS, and whatever else the Firefox name will soon be glued onto, what treatment do things like Thunderbird get? Go to the Mozilla site and try and find the page for it: it’s easier to just use a search engine instead.

And once you’ve navigated to the product page for Thunderbird, the challenge of finding useful, concrete information continues. It may very well be the case that most people just want a download button and to be in and out of the site as fast as possible, on their way to getting the software installed and running. (If so, one really hopes that they did use a search engine and didn’t go in via Mozilla’s front page.) But what if you want to find out more about the code, the community, the development processes? Dig too deep in the support section – a reasonable path to take – and you’ll be back in Firefox country where there are no Thunderbirds to be found.

Now, I don’t use Thunderbird for my daily e-mail needs: given that I’ve used KDE for a decade and a half, I’ve been happy with Kontact for my modest e-mail retrieving, reading, writing and sending activities. But Thunderbird is used by quite a few other people, and I did indeed use it for a few years in a former workplace. I didn’t always like how it worked, especially compared to Kontact, but then again Kontact needed quite a bit of tuning to work to my tastes, especially when I moved over to KDE 4 (or Plasma, if you insist) and had to turn off all sorts of things that were bolted on but didn’t really work. Generally, however, both products do their job well enough.

When Mozilla announced that Thunderbird would take a back seat to other activities (which looks more like being pushed off the desk now, but anyway), people complained a lot about it. One would have thought that leveraging the common Mozilla codebase to keep delivering a cross-platform, Free Software e-mail client would help uphold the kind of freedom and interoperability in messaging that the organisation seeks to uphold on the Web generally. But I suppose the influencers think that webmail is enough, not least because the browser remains central in such a strategy. Unfortunately, webmail doesn’t exactly empower end-users with things like encryption and control over their own data, at least in the traditional sense. (Projects like Mailpile which deliver a Web-based interface locally via the browser are different, of course.)

So, given any need to remedy deficiencies with Thunderbird, where should one go? Fortunately, I did some research earlier in the year – maybe Mozilla’s site was easier to navigate back then – and found the Thunderbird page on the Mozilla Wiki. Looking again, I was rather surprised to see recent activity at such a level that it apparently necessitates weekly status meetings. Such meetings aren’t really my kind of thing, but the fact that they are happening does give me a bit more confidence about a product that one might think is completely abandoned if one were only paying attention to the official Mozilla channels. My own interests are more focused on the Lightning calendar plugin, and its official page is more informative than that of Thunderbird, but there’s also a collection of wiki pages related to it as well.

Once upon a time, there was a company called Mosaic Communications Corporation that became Netscape Communications Corporation, both of these names effectively trading on the familiarity of the Mosaic and Netscape product names. Given Mozilla’s apparent focus on “Firefox”, it wouldn’t surprise me if they soon went the other way and called themselves Firefox Corporation. But I would rather they sought to deliver a coherent message through a broad range of freedom-upholding and genuinely useful products than narrowing everything to a single brand and one-and-a-bit products that – in case those things don’t work for you – leave you wondering what your options are, especially in this day and age of proprietary, cloud-based services and platforms that are increasingly hostile to interoperability.

Without even a peripheral Mozilla Messaging organisation to block the tidal flow towards “convenient” but exploitative cloud services, one has to question Mozilla’s commitment in this regard. But those responsible could at least fix up the incoherent Web site design that would leave many wondering whether Thunderbird and other actively-supported Mozilla products were just products of their own vivid and idealistic imagination.

Upholding Freedoms of Communication

Friday, September 18th, 2015

Recently, I was alerted to a blog post by Bradley Kuhn of the Software Freedom Conservancy where he describes the way in which proprietary e-mail infrastructure not only withholds the freedoms end-users should expect from their software, but where the operators of such infrastructure also stifle the free exchange of information by refusing to deliver mail, apparently denying delivery according to seemingly arbitrary criteria (if we are to be charitable about why, for example, Microsoft might block the mails sent by an organisation that safeguards the rights of Free Software users and developers).

The article acknowledges that preventing spam and antisocial activities is difficult, but it rightfully raises the possibility that if things continue in the same way they have been going, one day the only way to use e-mail will be through subscribing to an opaque service that, for all we as customers would know, censors our messages, gathers and exploits personal information about us, and prevents people from contacting each other based on the whims of the operator and whatever agenda they might have.

Solved and Boring

Sadly, things like e-mail don’t seem to get the glory amongst software and solutions developers that other methods of online communication have experienced in recent years: apparently, it’s all been about real-time chat, instant messaging, and social networking. I had a conversation a few years ago on the topic of social networking with an agreeable fellow who was developing a solution, but I rather felt that when I mentioned e-mail as the original social networking service and suggested that it could be tamed for a decent “social” experience, he regarded me as being somewhat insane to even consider e-mail for anything serious any more.

But anyway, e-mail is one of those things that has been regarded as a “solved problem”, meaning that the bulk of the work needed to support it is regarded as having been done a long time ago, and the work needed to keep it usable and up-to-date with evolving standards is probably now regarded as a chore (if anyone really thinks about that work at all, because some clearly do not). Instead of being an exciting thing bringing us new capabilities, it is at best seen as a demanding legacy that takes time away from other, more rewarding challenges. Still, we tell ourselves, there are solid Free Software solutions available to provide e-mail infrastructure, and so the need is addressed, a box is ticked, and we have nothing to worry about.

Getting it Right

Now, mail infrastructure can be an intimidating thing. As people will undoubtedly tell you, you don’t want to be putting a mail server straight onto the Internet unless you know what you are doing. And so begins the exercise of discovering what you should be doing, which either entails reading up about the matter or paying someone else to do the thinking on your behalf; the latter either takes the form of bringing in outside expertise to get you set up or simply paying someone else to provide you with a “mail solution”. In this day and age, that mail solution is quite likely to be a service – not some software that you have to install somewhere – and with the convenience of not having to manage anything, you rely completely on your service provider to do the right thing.

So to keep the software under your own control, as Bradley points out, Free Software needs to be well-documented and easy to adopt in a foolproof way. One might regard “foolproof” as an unkind term, but nobody should need to be an expert in everything, and everybody should be able to start out on the path to understanding without being flamed for being ignorant of the technical details. Indeed, I would go further and say that Free Software should lend itself to secure-by-default deployment which should hold up when integrating different components, as opposed to finger-pointing and admonishments when people try and figure out the best practices themselves. It is not enough to point people at “how to” documents and tell them to not only master a particular domain but also to master the nuances of specific technologies to which they have only just been introduced.

Thankfully, some people understand. The FreedomBox initiative is ostensibly about letting people host their own network services at home, which one might think is mostly a matter of making a computer small and efficient enough to sit around switched on all the time, combined with finding ways to let people operate such services behind potentially restrictive ISP barriers. But the work required to achieve this in a defensible and sustainable way involves providing software that is easily and correctly configured and which works properly from the moment the system is deployed. It is no big surprise that such work is being done in close association with Debian.

Signs of the Times

With regard to software that end-users see, the situation could be a lot worse. KDE’s Kontact and KMail have kept up a reasonably good experience over the years, despite signs of neglect and some fairly incoherent aspects of the user interface (at least as I see it on Debian); I guess Evolution is still out there and still gets some development attention, as is presumably the case with numerous other, less well-known mail programs; Thunderbird is still around despite signs that Mozilla presumably thought that people should have been using webmail solutions instead.

Indeed, the position of Mozilla’s leadership on Thunderbird says a lot about the times we live in. Web and mobile things – particularly mobile things – are the new cool, and if people aren’t excited by e-mail and don’t see the value in it, then developers don’t see the value in developing solutions for it, either. But those trying to focus on current fashions sometimes fail to see the value in the unfashionable, and in this case a backlash ensued: after all, what would people end up using at work and in “the enterprise” if Thunderbird were no longer properly supported? At the same time, those relying on Thunderbird’s viability, particularly those supplying it for use in organisations, were perhaps not quite as forthcoming with support for the project as they could have been.

Ultimately, Thunderbird has kept going, which is just as well given that the Free Software cross-platform alternatives are not always obvious or necessarily as well-maintained as they could be. Again, a lesson was given (if not necessarily learned) about how neglect of one kind of Free Software can endanger the viability of Free Software in an entire area of activity.

Webmail is perhaps a slightly happier story in some ways. Roundcube remains a viable and popular Web-hosted mail client, and the project is pursuing an initiative called Roundcube Next that seeks to refactor the code in order to better support new interfaces and changing user expectations. Mailpile, although not a traditional webmail client – being more a personal mail client that happens to be delivered through a Web interface – continues to be developed at a steady pace by some very committed developers. And long-established solutions like SquirrelMail and Horde IMP still keep doing good service in many places.

Attitude Adjustment

In his article, Bradley points out that as people forsake Free Software solutions for their e-mail needs, whether deciding to use an opaque and proprietary webmail service for personal mail, or whether deciding that their organisation’s mail can entirely be delegated to a service provider, it becomes more difficult to make the case for Free Software. It may be convenient to “just get a Gmail account” and if your budget (of time and/or money) doesn’t stretch to using a provider that may be friendlier towards things like freedom and interoperability, you cannot really be blamed for taking the easiest path. But otherwise, people should be wary of what they are feeding with their reliance on such services.

And those advocating such services for others should be aware that the damage they are causing extends far beyond the impact on competing solutions. Even when everybody was told that there is a systematic programme of spying on individuals, that industrial and competitive espionage is being performed to benefit the industries of certain nations, and that sensitive data could end up on a foreign server being mined by random governmental and corporate agencies, decision-makers will gladly exhibit symptoms of denial dressed up in a theatrical display of level-headedness: they make a point of showing that they are not at all bothered by such stories, which they doubt are true anyway, and with proud ignorance will more or less say so. At risk are the details of other people’s lives.

Indicating that privacy, control and sustainability are crucial to any organisation will, in the face of such fact-denial, indeed invite notions that anyone bringing such topics up is one of the “random crazy people” for doing so. And by establishing such a climate of denial and marginalisation, the forces that would control our communications are able to control the debate about such matters, belittling concerns and appealing to the myth of the benign corporation that wants nothing in return for its “free” or “great value” products and who would never do anything to hurt its customers.

We cannot at a stroke solve such matters of ignorance, but we can make it easier for people to do the right thing, and make it more obvious and blatant when people have chosen to do the wrong thing even though doing the right thing would have been more convenient and appropriate. We can evolve our own attitudes more easily, making Free Software easier to deploy and use, and in the case of e-mail not perpetuate the myth that nothing more needs to be done.

We will always have work to do to keep our communications free and unimpeded, but the investment we need to make is insignificant in comparison to the value to society of the result.

A Long Voyage into Groupware

Wednesday, August 26th, 2015

A while ago, I noted that I had moved on from attempting to contribute to Kolab and had started to explore other ways of providing groupware through existing infrastructure options. Once upon a time, I had hoped that I could contribute to Kolab on the basis of things I mostly knew about, whilst being able to rely on the solution itself (and those who made it) to take care of the things I didn’t really know very much about.

But that just didn’t work out: I ultimately had to confront issues of reliably configuring Postfix, Cyrus, 389 Directory Server, and a variety of other things. Of course, I would have preferred it if they had just worked so that I could have got on with doing more interesting things.

Now, I understand that in order to pitch a total solution for someone’s groupware needs, one has to integrate various things, and to simplify that integration and to lower the accompanying support burden, it can help to make various choices on behalf of potential users. After all, if they don’t have a strong opinion about what kind of mail storage solution they should be using, or how their user database should be managed, it can save them from having to think about such things.

One certainly doesn’t want to tell potential users or customers that they first have to go off, read some “how to” documents, get some things working, and then come back and try and figure out how to integrate everything. If they were comfortable with all that, maybe they would have done it all already.

And one can also argue about whether Kolab augments and re-uses existing infrastructure or merely replaces it. If the recommendation is that upon adopting Kolab, one replaces an existing Postfix installation with one that Kolab provides in one form or another, then maybe it is still possible to claim that the infrastructure already there is being re-used, at least in terms of the software involved.

It is harder to make that case if one is already using something else like Exim, however, because Kolab doesn’t support Exim. Then, there is the matter of how those components are used in a comprehensive groupware solution. Despite people’s existing experiences with those components, it can quickly become a case of replacing the known with the unknown: configuring them to identify users of the system in a different way, or to store messages in a different fashion, and so on.

Incremental Investments

I don’t have such prior infrastructure investments, of course. And setting up an environment to experiment with such things didn’t involve replacing anything. But it is still worthwhile considering what kind of incremental changes are required to provide groupware capabilities to an existing e-mail infrastructure. After all, many of the concerns involved are orthogonal:

  • Where the mail gets stored has little to do with how recipients are identified
  • How recipients are identified has little to do with how the mail is sent and received
  • How recipients actually view their messages and calendars has little to do with any of the above

Where components must talk to one another, we have the benefit of standards and standardised protocols and interfaces. And we have a choice amongst these things as well.

So, what if someone has a mail server delivering messages to local users with traditional Unix mailboxes? Does it make sense for them to schedule events and appointments via e-mail? Must they migrate to another mail storage solution? Do they have to start using LDAP to identify each other?

Ideally, such potential users should be able to retain most of their configuration investments, adding the minimum necessary to gain the new functionality, which in this case would merely be the ability to communicate and coordinate event information. Never mind the idea that potential users would be “better off” adopting LDAP to do so, or whichever other peripheral technology seems attractive for some kinds of users, because it is “good practice” or “good experience for the enterprise world” and that they “might as well do it now”.

The difference between an easily-approachable solution and one where people give you a long list of chores to do first (involving things that are supposedly good for you) is more or less equivalent to the difference between trying out a groupware solution and just not bothering with groupware features at all. So, it really does make sense for someone providing a solution to make things as easy as possible for people to get started, instead of effectively turning them away at the door.

Some Distractions

But first, let us address some of the distractions that usually enter the picture. A while ago, I had the displeasure of being confronted with the notion of “integrated e-mail and calendar” solutions, and it turned out that such terminology is coined as a form of euphemism for adopting proprietary, vendor-controlled products that offer some kind of lifestyle validation for people with relatively limited imagination or experience: another aspirational possession to acquire, and with it the gradual corruption of organisations with products that shun interoperability and ultimately limit flexibility and choice.

Given that standards-based calendaring has always involved e-mail, such talk of “integrated calendars” can most charitably be regarded as a clumsy way of asking for something else, namely an e-mail client that also shows calendars, and in the above case, the one that various people already happened to be using and wanted to impose on everyone else as well. But the historical reality of the integration of calendars and e-mail has always involved e-mails inviting people to events, and those people receiving and responding to those invitation e-mails. That is all there is to it!

But in recent times, the way in which people’s calendars are managed and the way in which notifications about events are produced has come to involve “a server”. Now, some people believe that using a calendar must involve “a server” and that organising events must also involve doing so “on the server”, and that if one is going to start inviting people to things then they must also be present “on the same server”, but it is clear from the standards documents that “a server” was never a prerequisite for anything: they define mechanisms for scheduling based on peer-to-peer interactions through some unspecified medium, with e-mail as a specific medium defined in a standards document of its own.

Having “a server” is, of course, a convenient way for the big proprietary software vendors to sell their “big server” software, particularly if it encourages the customer to corrupt various other organisations with which they do business, but let us ignore that particular area of misbehaviour and consider the technical and organisational justifications for “the server”. And here, “server” does not mean a mail server, with all the asynchronous exchanges of information that the mail system brings with it: it is the Web server, at least in the standards-adhering realm, that is usually the kind of server being proposed.

Computer components

The Superfluous Server

Given that you can send and receive messages containing events and other calendar-related things, and given that you can organise attendance of events over e-mail, what would the benefit of another kind of server be, exactly? Well, given that you might store your messages on a server supporting POP or IMAP (in the standards-employing realm, that is), one argument is that you might need somewhere to store your calendar in a similar way.

But aside from the need for messages to be queued somewhere while they await delivery, there is no requirement for messages to stay on the server. Indeed, POP server usage typically involves downloading messages rather than leaving them on the server. Similarly, one could store and access calendar information locally rather than having to go and ask a server about event details all the time. Various programs have supported such things for a long time.

Another argument for a server involves it having the job of maintaining a central repository of calendar and event details, where the “global knowledge” it has about everybody’s schedules can be used for maximum effect. So, if someone is planning a meeting and wants to know when the potential participants are available, they can instantly look such availability information up and decide on a time that is likely to be acceptable to everyone.

Now, in principle, this latter idea of being able to get everybody’s availability information quickly is rather compelling. But although a central repository of calendars could provide such information, it does not necessarily mean that a centralised calendar server is a prerequisite for convenient access to availability data. Indeed, the exchange of such data – referred to as “free/busy” data in the various standards – was defined for e-mail (and in general) at the end of the last century, although e-mail clients that can handle calendar data typically neglect to support such data exchanges, perhaps because it can be argued that e-mail might not obtain availability information quickly enough for someone impatiently scheduling a meeting.

But then again, the routine sharing of such information over e-mail, plus the caching of such data once received, would eliminate most legitimate concerns about being able to get at it quickly enough. And at least this mechanism would facilitate the sharing of such data between organisations, whereas people relying on different servers for such services might not be able to ask each other’s servers about such things (unless they have first implemented exotic and demanding mechanisms to do so). Even if a quick-to-answer service provided by, say, a Web server is desirable, there is nothing to stop e-mail programs from publishing availability details directly to the server over the Web and downloading them over the Web. Indeed, this has been done in certain calendar-capable e-mail clients for some time, too, and we will return to this later.
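
To make that last possibility a little more concrete, here is a minimal sketch (not taken from any existing client, and written for Python 2 to match the code appearing later in these articles) of availability details being published as an iCalendar VFREEBUSY object via an HTTP PUT request. The host, path and addresses are invented, and a real deployment would obviously also need some form of authentication.

import httplib

freebusy = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//Example//FreeBusy Sketch//EN",
    "METHOD:PUBLISH",
    "BEGIN:VFREEBUSY",
    "UID:freebusy-20150826-alice@example.com",
    "DTSTAMP:20150826T120000Z",
    "ORGANIZER:mailto:alice@example.com",
    "DTSTART:20150824T000000Z",
    "DTEND:20150831T000000Z",
    "FREEBUSY:20150826T090000Z/20150826T100000Z,20150826T130000Z/20150826T143000Z",
    "END:VFREEBUSY",
    "END:VCALENDAR",
    "",
])

# Publish the details to a (hypothetical) location served over the Web.
connection = httplib.HTTPSConnection("www.example.com")
connection.request("PUT", "/freebusy/alice.ifb", freebusy,
                   {"Content-Type": "text/calendar"})
print(connection.getresponse().status)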

And so, this brings us to perhaps the real reason why some people regard a server as attractive: to have all the data residing in one place where it can potentially be inspected by people in an organisation who feel that they need to know what everyone is doing. Of course, there might be other benefits: backing up the data would involve accessing one location in the system architecture instead of potentially many different places, and thus it might avoid the need for a more thorough backup regime (that organisations might actually have, anyway). But the temptation to look at, and even change, people’s schedules directly – invite them all to a mandatory meeting without asking, for example – is too great for some kinds of leadership.

With few truly-compelling reasons for a centralised server approach, it is interesting to see that many Free Software calendar solutions do actually use the server-centric CalDAV standard. Maybe it is just the way of the world that Web-centric solutions proliferate, requiring additional standardisation to cover situations already standardised in general and for e-mail specifically. There are also solutions, Free Software and otherwise, that may or may not provide CalDAV support but which depend on calendar details being stored in IMAP-accessible mail stores: Kolab does this, but also provides a CalDAV front-end, largely for the benefit of mobile and third-party clients.

Decentralisation on Demand

Ignoring, then, the supposed requirement of a central all-knowing server, and just going along with the e-mail infrastructure we already have, we do actually have the basis for a usable calendar environment already, more or less:

  • People can share events with each other (using iCalendar)
  • People can schedule events amongst themselves (using iTIP, specifically iMIP; a rough sketch of such an invitation message follows this list)
  • People can find out each other’s availability to make the scheduling more efficient (preferably using iTIP but also in other ways)
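
As a rough illustration of the second point, the sketch below (hypothetical, with invented addresses, times and identifiers) prepares an iMIP invitation using Python’s standard email modules: the event travels as a text/calendar part whose Content-Type carries the iTIP method, and the result can then be sent like any other e-mail.

from email.mime.text import MIMEText

ical = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//Example//iMIP Sketch//EN",
    "METHOD:REQUEST",
    "BEGIN:VEVENT",
    "UID:planning-20150901@example.com",
    "DTSTAMP:20150826T120000Z",
    "DTSTART:20150901T100000Z",
    "DTEND:20150901T110000Z",
    "SUMMARY:Planning meeting",
    "ORGANIZER:mailto:alice@example.com",
    "ATTENDEE;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:bob@example.com",
    "END:VEVENT",
    "END:VCALENDAR",
    "",
])

invitation = MIMEText(ical, "calendar")
invitation.set_param("method", "REQUEST")  # indicate the iTIP method on the Content-Type
invitation["From"] = "alice@example.com"
invitation["To"] = "bob@example.com"
invitation["Subject"] = "Planning meeting"

# invitation.as_string() can now be handed to an SMTP connection or a local MTA.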

Doing it this way also allows users to opt out of any centralised mechanisms – even if only provided for convenience – that are coordinating calendar-related activities in any given organisation. If someone wants to manage their own calendar locally and not have anything in the infrastructure trying to help them, they should be able to take that route. Of course, this requires them to have capable-enough software to handle calendar data, which can be something of an issue.

That Availability Problem Mentioned Earlier

For instance, finding an e-mail program that knows how to send requests for free/busy information is a challenge, even though there are programs (possibly augmented with add-ons) that can send and understand events and other kinds of objects. In such circumstances, workarounds are required: one that I have implemented for the Lightning add-on for Thunderbird (or the Iceowl add-on for Icedove, if you use Debian) fetches free/busy details from a Web site, and it is also able to look up the necessary location of those details using LDAP. So, the resulting workflow looks like this:

  1. Create or open an event.
  2. Add someone to the list of people invited to that event.
  3. View that person’s availability.
  4. Lightning now uses the LDAP database to discover the free/busy URL.
  5. It then visits the free/busy URL and obtains the data.
  6. Finally, it presents the data in the availability panel.

Without LDAP, the free/busy URL could be obtained from a vCard property instead. In case you’re wondering, all of this is actually standardised or at least formalised to the level of a standard (for LDAP and for vCard).
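
As a rough sketch of the vCard route (the FBURL property is the standardised part; the contact details and URL are invented, and a real client would use a proper vCard parser rather than this naive line matching):

import urllib2  # Python 2, as elsewhere in these articles

vcard = "\r\n".join([
    "BEGIN:VCARD",
    "VERSION:3.0",
    "FN:Bob Example",
    "EMAIL:bob@example.com",
    "FBURL:https://www.example.com/freebusy/bob.ifb",
    "END:VCARD",
])

# Naively pick out the free/busy URL from the vCard.
fburl = None
for line in vcard.splitlines():
    if line.upper().startswith("FBURL"):
        fburl = line.split(":", 1)[1]

# Fetch the published free/busy data over the Web.
freebusy_data = urllib2.urlopen(fburl).read() if fburl else None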

If only I had the patience, I would add support for free/busy message exchange to Thunderbird, just as RFC 6047 would have had us do all along, and then the workflow would look like this:

  1. Create or open an event.
  2. Add someone to the list of people invited to an event.
  3. View that person’s availability.
  4. Lightning now uses the cached free/busy data already received via e-mail for the person, or it could send an e-mail to request it (a rough sketch of such a cache follows this list).
  5. It now presents any cached data. If it had to send a request, maybe a response is returned while the dialogue is still open.
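
A rough sketch of such caching, under the assumption of a simple local store (the file location and helper names here are entirely hypothetical and not part of any existing add-on):

import json
import os

CACHE_PATH = os.path.expanduser("~/.cache/freebusy.json")  # hypothetical location

def load_cache():
    "Return a mapping from e-mail address to a list of start/end period strings."
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f)
    return {}

def record_periods(address, periods):
    "Record busy periods received via e-mail, for example from a VFREEBUSY reply."
    cache = load_cache()
    cache[address] = sorted(set(cache.get(address, [])) | set(periods))
    with open(CACHE_PATH, "w") as f:
        json.dump(cache, f)

def busy_periods(address):
    "Return any cached periods; an empty result would prompt an e-mail request."
    return load_cache().get(address, [])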

Some people might argue that this is simply inadequate for “real-world needs”, but they forget that various calendar clients are likely to ask for availability data from some nominated server in an asynchronous fashion, anyway. That’s how a lot of this software is designed these days – Thunderbird uses callbacks everywhere – and there is no guarantee that a response will be instant.

Moreover, a request over e-mail to a recipient within the same organisation, which is where one might expect to get someone’s free/busy details almost instantly, could be serviced relatively quickly by an automated mechanism providing such information for those who are comfortable with it doing so. We will look at such automated mechanisms in a moment.

So, there are plenty of acceptable solutions that use different grades of decentralisation without needing to resort to that “big server” approach, if only to help out clients which don’t have the features one would ideally want to use. And there are ways of making the mail infrastructure smarter as well, not just to provide workarounds for clients, but also to provide genuinely useful functionality.

Public holidays

Agents and Automation

Groupware solutions offer more than just a simple means for people to organise things with each other: they also offer the means to coordinate access to organisational resources. Traditionally, resources such as meeting rooms, but potentially anything that could be borrowed or allocated, would have access administered using sign-up sheets and other simple mechanisms, possibly overseen by someone in a secretarial role. Such work can now be done automatically, and if we’re going to do everything via e-mail, the natural point of integrating such work is also within the mail system.

This is, in fact, one of the things that got me interested in Kolab to begin with. Once upon a time, back at the end of my degree studies, my final project concerned mobile software agents: code that was transmitted by e-mail to run once received (in a safe fashion) in order to perform some task. Although we aren’t dealing with mobile code here, the notion still applies that an e-mail is sent to an address in order for a task to be performed by the recipient. Instead of some code sent in the message performing the task, it is the code already deployed and acting as the recipient that determines the effect of the transaction by using the event information given in the message and combining it with the schedule information it already has.

Such agents, acting in response to messages sent to particular e-mail addresses, need knowledge about schedules and policies, but once again it is interesting to consider how much information and how many infrastructure dependencies they really need within a particular environment:

  • Agents can be recipients of e-mail, waiting for booking requests
  • Agents will respond to requests over e-mail
  • Agents can manage their own schedule and availability
  • Other aspects of their operation might require some integration with systems having some organisational knowledge

In other words, we could implement such agents as message handlers operating within the mail infrastructure. Can this be done conveniently? Of course: things like mail filtering happen routinely these days, and many kinds of systems take advantage of such mechanisms so that they can be notified by incoming messages over e-mail. Can this be done in a fairly general way? Certainly: despite the existence of fancy mechanisms involving daemons and sockets, it appears that mail transport agents (MTAs) like Postfix and Exim tend to support the invocation of programs as the least demanding way of handling incoming mail.
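
As a small, hypothetical illustration of that last point: if the MTA is arranged to pipe messages for a given address into a program (for example through an alias entry of the familiar “|/path/to/handler” form), the handler itself can be as simple as something that reads the message from standard input and picks out any calendar payload for further processing. Everything below is a sketch, not a description of any particular deployment.

#!/usr/bin/env python
# A sketch of a mail handler invoked by the MTA with the message on standard input.
import sys
from email import message_from_file

message = message_from_file(sys.stdin)

# Look for any iCalendar payload carried by the message.
for part in message.walk():
    if part.get_content_type() == "text/calendar":
        calendar_data = part.get_payload(decode=True)
        # ... hand calendar_data over to the scheduling logic here ...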

The Missing Pieces

So what specifically is needed to provide calendaring features for e-mail users in an incremental and relatively non-invasive way? If everyone is using e-mail software that understands calendar information and can perform scheduling, the only remaining obstacles are the provision of free/busy data and, for those who need it, the provision of agents for automated scheduling of resources and other typically-inanimate things.

Since those agents are interesting (at least to me), and since they may be implemented as e-mail handler programs, let us first look at what they would do. A conversation with an agent listening to mail on an address representing a resource would work like this (ignoring sanity checks and the potential for mischief; a rough sketch of the decision step follows the list):

  1. Someone sends a request to an address to book a resource, whereupon the agent provided by a handler program examines the incoming message.
  2. The agent figures out which periods of time are involved.
  3. The agent then checks its schedule to see if the requested periods are free for the resource.
  4. Where the periods can be booked, the agent replies indicating the “attendance” of the resource (that it reserves the resource). Otherwise, the agent replies “declining” the invitation (and does not reserve the resource).
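
Here is a rough sketch of steps 3 and 4, assuming that the requested periods and the resource’s existing bookings have already been extracted from the calendar data as pairs of UTC timestamps (the function names and example data are invented); the resulting participation status would then be packaged up in an iTIP reply and sent back over e-mail:

def overlaps(a, b):
    "Return whether two (start, end) periods overlap."
    return a[0] < b[1] and b[0] < a[1]

def consider_booking(requested, schedule):
    "Decide whether the requested periods can be booked; return the status to send back."
    for period in requested:
        if any(overlaps(period, booked) for booked in schedule):
            return "DECLINED"        # clash: do not reserve the resource
    schedule.extend(requested)       # no clashes: reserve the resource
    return "ACCEPTED"

# Example: one existing booking, plus a new request that clashes with it.
schedule = [("20150901T100000Z", "20150901T110000Z")]
print(consider_booking([("20150901T103000Z", "20150901T113000Z")], schedule))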

With the agent needing to maintain a schedule for a resource, it is relatively straightforward for that information to be published in another form as free/busy data. It could be done through the sending of e-mail messages, but it could also be done by putting the data in a location served by a Web server. And so, details of the resource’s availability could be retrieved over the Web by e-mail programs that elect to retrieve such information in that way.

But what about the people who are invited to events? If their mail software cannot prepare free/busy descriptions and send such information to other people, how might their availability be determined? Well, using similar techniques to those employed during the above conversation, we can seek to deduce the necessary information by providing a handler program that examines outgoing messages:

  1. Someone sends a request to schedule an event.
  2. The request is sent to its recipients. Meanwhile, it is inspected by a handler program that determines the periods of time involved and the sender’s involvement in those periods.
  3. If the sender is attending the event, the program notes the sender’s “busy” status for the affected periods.

Similarly, when a person responds to a request, they will indicate their attendance and thus “busy” status for the affected periods. And by inspecting the outgoing response, the handler will get to know whether the person is going to be busy or not during those periods. And as a result, the handler is in a position to publish this data, either over e-mail or on the Web.
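
To sketch what such inspection might involve (hypothetically, and very crudely: a real handler would use a proper iCalendar parser and check the sender’s participation status rather than just collecting dates), a program given an outgoing message could note the periods being committed to:

from email import message_from_string

def busy_periods_from_outgoing(message_text):
    "Crudely extract (start, end) periods from any calendar parts of an outgoing message."
    message = message_from_string(message_text)
    periods = []
    for part in message.walk():
        if part.get_content_type() != "text/calendar":
            continue
        start = end = None
        for line in part.get_payload(decode=True).decode("utf-8", "replace").splitlines():
            if line.startswith("DTSTART"):
                start = line.split(":", 1)[1]
            elif line.startswith("DTEND"):
                end = line.split(":", 1)[1]
        if start and end:
            periods.append((start, end))
    return periods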

Mail handler programs can be used to act upon messages received by individuals, too, just as is done for resources, and so a handler could automatically respond to e-mail requests for a person’s free/busy details (if that person chose to allow this). Such programs could even help support separate calendar management interfaces for those people whose mail software doesn’t understand anything at all about calendars and events.

Lifting materials for rooftop building activities

Building on Top

So, by just adding a few handler programs to a mail system, we can support the following:

  • Free/busy publishing and sharing for people whose software doesn’t support it already
  • Autonomous agents managing resource availability
  • Calendar interfaces for people without calendar-aware mail programs

Beyond some existing part of the mail system deciding who can receive mail and telling these programs about it, they do not need to care about how an organisation’s user information is managed. And through standardised interfaces, these programs can send messages off for storage without knowing what kind of system is involved in performing that task.

With such an approach, one can dip one’s toe into the ocean of possibilities and gradually paddle into deeper waters, instead of having to commit to the triathlon that multi-system configuration can often turn out to be. There will always be configuration issues, and help will inevitably be required to deal with them, but they will hopefully not be bound up in one big package that leads to the cry for help of “my groupware server doesn’t work any more, what has changed since I last configured it?” that one risks with solutions that try to solve every problem – real or imagined – all at the same time.

I don’t think things like calendaring should have to be introduced with a big fanfare, lots of change, a new “big box” product introducing different system components, and a stiff dose of upheaval for administrators and end-users. So, I intend to make my work available around this approach to groupware, not because I necessarily think it is superior to anything else, but because Free Software development should involve having a conversation about the different kinds of solutions that might meet user needs: a conversation that I suspect hasn’t really been had, or which ended in jeering about e-mail being a dead technology, replaced by more fashionable “social” or “responsive” technologies; a bizarre conclusion indeed when you aren’t even able to get an account with most fancy social networking services without an e-mail address.

It is possible that no-one but myself sees any merit in the strategy I have described, but maybe certain elements might prove useful or educational to others interested in similar things. And perhaps groupware will be less mysterious and more mundane as a result: not something that is exclusive to fancy cloud services, but something that is just another tiny part of the software available through each user’s and each organisation’s chosen operating system distribution, doing the hard work and not making a fuss about it. Which, like e-mail, is what it probably should have been all along.

Python’s email Package and the “PGP/MIME” Question

Wednesday, January 7th, 2015

I vaguely follow the development of Mailpile – the Free Software, Web-based e-mail client – and back in November 2014, there was a blog post discussing problems that the developers had experienced while working with PGP/MIME (or OpenPGP as RFC 3156 calls it). A discussion also took place on the Gnupg-users mailing list, leading to the following observation:

Yes, Mailpile is written in Python and I've had to bend over backwards
in order to validate and generate signatures. I am pretty sure I still
have bugs to work out there, trying to compensate for the Python
library's flaws without rewriting the whole thing is, long term, a
losing game. It is tempting to blame the Python libraries, but the fact
is that they do generate valid MIME - after swearing at Python for
months, it dawned on me that it's probably the PGP/MIME standard that is
just being too picky.

Later, Bjarni notes…

Similarly, when generating messages I had to fork the Python lib's
generator and disable various "helpful" hacks that were randomly
mutating the behavior of the generator if it detected it was generating
an encrypted message!

Coincidentally, while working on PGP/MIME messaging in another context, I also experienced some problems with the Python email package, mentioning them on the Mailman-developers mailing list because I had been reading that list and was aware that a Google Summer of Code project had previously been completed in the realm of message signing, thus offering a potential source of expertise amongst the list members. Although I don’t think I heard anything from the GSoC participant directly, I had the benefit of advice from the same correspondent as Bjarni, even though we have been using different mailing lists!

Here’s what Bjarni learned about the “helpful” hacks:

This is supposed to be http://bugs.python.org/issue1670765, which is
claimed to be resolved.

Unfortunately, the special-case handling of headers for “multipart/signed” parts is presumably of limited “help”, and other issues remain. As I originally noted:

So, where the email module in Python 2.5 likes to wrap headers using tab
character indents, the module in Python 2.7 prefers to use a space for
indentation instead. This means that the module reformats data upon being
asked to provide a string representation of it rather than reporting exactly
what it received.

Why the special-casing wasn’t working for me remains unclear, and so my eventual strategy was to bypass the convenience method in the email API in order to assert some form of control over the serialisation of e-mail messages. It is interesting to note that the “fix” to the Python standard library involved changing the documentation to note the unsatisfactory behaviour and to leave the problem essentially unsolved. This may not have been unreasonable given the design goals of the email package, but it may have been better to bring the code into compliance with user expectations and to remedy what could arguably be labelled a design flaw of the software, even if it was an unintended one.

Contrary to the expectations of Python’s core development community, I still develop using Python 2 and probably won’t see any fixes to the standard library even if they do get made. So, here’s my workaround for message serialisation from my concluding message to the Mailman-developers list:

# given a message...
from StringIO import StringIO            # Python 2, matching the discussion above
from email.generator import Generator

out = StringIO()
generator = Generator(out, False, 0) # disable "From " mangling and header rewrapping
generator.flatten(message)
# out.getvalue() now provides the serialised message

It’s interesting to see such problems occur for different people a few months apart. Maybe I should have been following Mailpile development a bit more closely, but with it all happening at GitHub (with its supposedly amazing but, in my experience, rather sluggish and clumsy user interface), I suppose I haven’t been able to keep up.

Still, I hope that others experiencing similar difficulties become more aware of the issues by seeing this article. And I hope that Bjarni and the Mailpile developers haven’t completely given up on OpenPGP yet. We should all be working more closely together to get usable, Free, PGP-enabled, standards-compliant mail software to as many people as possible.