The Post-PRISM Society: Totalitarian Clouds

After a brief overview of the world we find ourselves in, the question becomes: what does this mean for us as a society?

As highlighted in the previous article, governments have no realistic option not to engage in some form of activity to protect their people from threats that originate on-line or have an on-line component. These were the grounds on which German chancellor Angela Merkel made statements of support for PRISM. The problem is that I doubt it is effective and proportionate to the threat. The side effects seem out of proportion to the gain. That this gain is only claimed, never proven, allegedly for security reasons, does nothing to help the case.

It has become public knowledge that these technologies exist, that mass surveillance can be and is being implemented, and that it works efficiently. Calling for a general ban is unrealistic and naive. Of course these technologies will be used against people, businesses and governments by someone – be they states or organisations. So the actual question is: under which circumstances is the use of such technology acceptable?

Looking at the initial reactions, a great number of people – consciously or not – base their reaction on Article 12 of the Universal Declaration of Human Rights. And considering the consequences, that does not seem very far-fetched.

Consequences of the Surveillance Society

There are some stories floating around of people who have suffered repercussions, such as being denied entry to the United States. But that is probably the extent of it for most of us, unless you are a public figure or ever find yourself in a job where you would influence a decision of major consequence to the United States.

But that’s only the fairly superficial perspective.

Consider for a moment the Arab Spring, where governments desperately tried to remain in power. In several cases, the governments overthrown were the same ones that had received strategic and practical support from the United States – including military and secret service activities – as part of its plans for the region. In their desperate attempts to retain control, these governments knew which activists to imprison and sometimes torture, and often confronted them with their own private messages from Facebook and Twitter.

Could those governments have obtained that data themselves? Possibly. But there is another option.

FISA makes it legal for the United States to obtain and use that data in the strategic interests of the United States. And PRISM would have made it almost trivial. So the simpler way for those governments to learn what was planned would have been to receive dossiers from their US contacts. Does this prove it happened that way? Certainly not. But it demonstrates the level of influence this combination of legal authority and technical capability gives the United States and other countries.

“Still,” most people will think, “I am living in a safe country and have no plans to overthrow my government.”

“Nothing to hide, nothing to fear” has been used to justify surveillance for a long time. It is a simple and wrong answer, because everyone has areas they would prefer to remain private. If someone has the ability to threaten you with exposing something you do not wish to see exposed, they have power over you. But what’s more: people who have to assume they are being watched at all times, even in their most intimate moments and inner thoughts, behave differently.

A culture of surveillance leads to self-regulation, with a fundamental impact on how people behave at all times. Will you still speak up against things you perceive as wrong when you fear there might be repercussions? Or will you ask yourself whether this particular issue is important enough to risk so much, and hope that it won’t be as bad, or that someone else will take action?

Also, consider the situation of people who absolutely rely upon a certain level of privacy for their professional lives, such as lawyers, journalists and others. That no-one in these professions should be using these services should be self-evident. But if a society adopts the “nothing to hide, nothing to fear” dogma, those who communicate with such professionals for good reason will stand out like dark shadows in an otherwise fully lit room, and will raise suspicion.

If privacy becomes the exception those who require privacy will easily be singled out. The only way to avoid this is to make privacy the norm: If everyone has privacy, no-one will be suspicious for it.

And there are good reasons to do your part to live in such a society, because the functioning of democracy as a whole depends on a set of factors, including working media and the ability to form political opinions and become politically active to achieve change for the better – even if you yourself have no such ambitions at this point in time.

Privacy is one of the essential building blocks of a free society.

You might find yourself activated by misspent tax money, a new highway being planned through your back yard, or the plans to re-purpose your favourite city park for a shopping mall. And if it isn’t yourself, perhaps something will make your parents, siblings, spouse, kids, best friend want to take action and then require a society that grants privacy in order not to be intimidated into silence.

So there are good reasons why people worry about this level of surveillance.

Why, then, are they choosing to voluntarily support it?

Feudal Agents of the Totalitarian State

It has been a subject of discussion in the software freedom community for some time, but only now appears to be hitting the radar of a larger subset of forward-thinking IT literates: the large US service providers own users and their data in ways that led security guru Bruce Schneier to compare them to feudal lords, leaving their users as hapless peasants in a global Game of Thrones power struggle.

Some time ago, Geek &amp; Poke summarized it even more pointedly:


One aspect of using these services is that users place themselves under surveillance as part of their payment for the service. The wealth of knowledge Facebook keeps on everyone who uses it, and everyone who does not, has been disclosed time and again, most recently during the shadow profile exposure – and that was not the first time. Nor can anyone reasonably expect Google, Microsoft or Apple to behave any differently.

What is important to understand is that the centralisation of these services, and the turning of devices into increasingly dumb data gathering and supply terminals, is neither accidental nor technologically necessary. We all carry around far more computing power at all times than was readily available just a few years ago.

So these devices and services could operate in a de-centralized and meshed fashion.

But then the companies would not get to profile their users in such detail, potentially gathering every intimate detail about them, such as whether they were aroused when they last used voice search to find the nearest hotel. Or did you think that command was analysed on your smart phone, rather than by the (almost) infinite processing power in the data centres of your service provider?

Data is the new gold, and these companies are mining it as best they can.

Naturally these companies are always downplaying the amount of data collected, or the impact that use of this data might have on individuals. PRISM exposed this carefully crafted fallacy to some extent.

It also raised the question: which is worse? That the government, which can be held accountable to a larger degree, gets access to some of the data gathered by a company? Or that a company that is responsible to no-one but its shareholders gathered all of it in the first place?

In fact, cynically speaking, one might even think these companies are mostly unhappy that the US government wants free, unlimited access to the raw data rather than the paid-for, refined access they offer as part of their business model.

But the root cause is the centralised gathering of such data under terms that do not make these companies your service providers, but make you their peasant. Such a treasure trove will always attract interest, and countries have ways to get access because they have ways to impact profits. Now that the PRISM disclosure has taught them what is possible, countries such as Turkey are quickly catching on, demanding access to details of Gezi protesters.

So while these companies often wrap themselves in liberty, the internet and all that is good for humankind, by their very existence and business model they contribute to a totalitarian society.

Whether that contribution is decisive, or outweighs the instances where they do good, I cannot judge.

But using these providers for your services and getting all up in arms about PRISM is somewhat hypocritical, I’m afraid. It’s a bit like complaining about losing your foot when you’ve voluntarily and without need amputated your entire leg before to be able to make use of the special “one-legged all you can eat buffet.”

Choices for Free Citizens

So, assuming you want to break free of this surveillance and the tendencies towards a totalitarian society, what are your options?

Firstly, choose Open Source / Free Software and Open Standards. There is a plethora of such applications out there, and the transparent, public way in which their internal workings and control structures are developed makes it much less likely that they contain back doors to your data. Following the PRISM leaks, sites such as http://prism-break.org have sprung up to help you do just that.

Secondly, start making use of encryption, which is easier and more effective than you might think.
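For a first taste, here is a minimal sketch using GnuPG's symmetric mode, where a single passphrase both encrypts and decrypts a file. The file name and passphrase are made up for illustration; for e-mail you would normally use public-key encryption instead, exchanging keys with your correspondents.

```shell
# Encrypt a note with a passphrase using GnuPG (symmetric mode).
# --batch and --pinentry-mode loopback let us pass the passphrase
# non-interactively here; in everyday use gpg would prompt you for it.
echo "meet me at the park at noon" > note.txt
gpg --batch --yes --pinentry-mode loopback --passphrase "a long passphrase" \
    --symmetric --cipher-algo AES256 note.txt   # writes note.txt.gpg

# Decrypt it again; the plaintext is printed to standard output.
gpg --batch --quiet --pinentry-mode loopback --passphrase "a long passphrase" \
    --decrypt note.txt.gpg
```

Note that `--pinentry-mode loopback` assumes GnuPG 2.1 or later; older versions accept `--passphrase` directly.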

Chances are that someone in your circle of friends or family is already using some or even many of these applications. Get them to help you get started yourself.

But assuming you are not a technical person, which is most of society, the most important choice you can make is with your feet and wallet: choose services that work for you and put themselves at your service, rather than services that process you and put you at their service.

The first place to look is the terms of the services you are using.

I know this is tedious, and these terms are often deliberately written to make eyes glaze over when trying to understand what they actually say.

But there is a web site that can help you with it: Terms of Service; Didn’t Read. Check out the services you are currently using, and get yourself the browser extension so you at least start getting an idea of what rights you are surrendering by using these services.

As for providers that offer the same convenience, but without the mandatory cavity search: there are still quite a few. Naturally it makes sense to look at their terms of service carefully, ensure they are based in a jurisdiction of your choice, and check that they use technologies you can trust. If you are not sure, ask them to explain what standards they observe with regard to your data. And ensure you can switch providers, even switch to self-hosting if you want to, without necessarily changing technologies.

And once you’ve looked through all those criteria and done your homework on which solutions can deliver all of this without compromise, take control of your data and software.

Disclaimer

I’m not a party without interest in this debate. You can easily inform yourself about what I’ve done in this area in the past. And I have dedicated my past years to building a technology that allows people to own their data and software, while providing all the features users have grown accustomed to.

That technology is called Kolab, and of course I’d be delighted if you got in touch with us, installed Kolab.org on your own, or even made use of the http://MyKolab.com service. Because all of this will help us continue working towards the goal of allowing people secure, powerful collaboration across platforms while owning their own data and software.

But it’s this work that has followed from my analysis, not the other way around.

So make up your own mind.




Welcome to the Post-PRISM Society: A primer.

Questions of privacy, security and control have occupied me for a long time, both personally and professionally. In fact it was a significant aspect of my decision to switch focus from the Free Software Foundation Europe to Kolab Systems: I wanted to reduce the barriers to actually putting the principles into practice. That required a professional solution which would offer all the benefits and features people have grown accustomed to, but would provide it as high quality Open Source / Free Software with a strong focus on Open Standards.

What surprised me at the time was the number of discussions I had with other business people and potential customers about whether there was really a point in investing so much in such a business and technology, since Google Apps and similar services were already so strong, so convenient, and so deceptively cheap.

I remember similar conversations about Free Software in the 90s, where people were questioning whether the convenience of the proprietary world could ever be challenged. Now the issues of control over your software strategy and the ability to innovate are increasingly becoming commonplace.

Data control has not really been a topic for many so far, although the two are clearly inseparable. Somehow too much of it sounded like science fiction or bad conspiracy theories.

There have of course been discussions among people who paid attention.

Following the concerns about the United States’ capability to monitor most of the world’s transmitted information through ECHELON, many people were alarmed by the Foreign Intelligence Surveillance Act (FISA). It gave rise to many conspiracy theories about how the United States have access to virtually all information hosted with US technology companies anywhere in the world and would be able to use that information to their military, political and economic advantage. But no-one wanted to believe them, as the United States feel so familiar thanks to Hollywood and other cultural exports, and, in Europe, thanks to the gratitude many people still hold for the US contribution to liberating Europe.

Only, the stories about US surveillance weren’t conspiracy theories, it seems.

There has been a flurry of public reports around a large number of security and privacy relevant issues in the past weeks. But due to the complexity of the issue, most articles only deal with a tiny piece of the puzzle, and often miss the bigger picture that I am seeing right now.

Trying to provide that picture has quickly left me with an article much too long for general reading, so I’ve decided to try and break it up into four articles, of which this is the first. Its goal is to get you up to speed with some of today’s realities, in case you hadn’t been paying attention.

Part I: What We Know

The recent disclosures about the NSA PRISM program have made it quite clear that what is written in black and white in US law is also being put into action. As Caspar Bowden summarized clearly in his presentation at ORGCon2013, FISA provides agents of the United States with access to “information with respect to a foreign based political organization or foreign territory that relates to the conduct of the foreign affairs of the United States.” Its limiting factor is the 4th Amendment, which does not apply to people who are not located in the United States. Which is most of us.

In other words: The United States have granted themselves unlimited access to all information they deem relevant to their interests, provided at least one party to that information is not located in the United States.

And they have installed a very effective and largely automated system to get access to that kind of information. Michael Arrington has done a good job of speculating how this system likely works, and his explanation is certainly consistent with the known facts as well as with how one would design such a system. If true, mining all this information would be as easy as, and not much slower than, a regular Google search query.

What’s more, there is no functioning legal oversight of this system, as the US allows warrantless wiretapping and access to information. Most queries most likely never saw a judge while simultaneously being labelled secret. And according to what one has to interpret from the statements of Edward Snowden, only the smallest number of queries ever make it to the secret Foreign Intelligence Surveillance Court (FISC) – a court which is itself secret and has been described in many reports as a rubber-stamping court.

And we know the United States is far from the only country involved in such activities.

It turns out the United Kingdom has been just as active, and may even have gone to further extremes in its storing, analysis and access of personal information as part of its “Mastering the Internet” activities. It would be naive to assume that is where it stops. We know that other countries have well-trained IT specialists working on similar activities, or even on offensive measures.

China has been a major target. But it also successfully read the internal documents of German ministries for years, and even managed to breach Google’s internal infrastructure. Israel is known to have some of the best IT security specialists in the world, and countries such as India and Brazil are certainly large enough and possess major IT expertise.

Naturally there is not a whole lot of publicly documented evidence. But given that this subject has been discussed for over a decade, one would have to assume total ineptitude and incompetence in the rest of the world to believe that the US and UK programs are the only ones of their kind.

The most reasonable working assumption under these circumstances is:

Surveillance is omnipresent and commonly employed by everyone with sufficient ability.

But it’s not just surveillance of readily available data with support from companies that are required by law to comply with such requests.

Offensive Measures

Another way in which countries engage in the digital world is through active intrusion. In Germany there was a large debate around the ‘Federal Trojan’, which in some ways goes a good step further than PRISM. Such active intrusion compromises the integrity of systems, can leave them damaged, and can open them up to easier additional break-ins. How easy it is to use this kind of technology became clear during the public FinFisher debate.

The price tag of this kind of tool is easily within reach of any government worldwide, and it would be naïve to assume that countries and their secret agencies do not make use of it.

But in the flurry of disclosures another interesting aspect was revealed: at least some software vendors are complicit with a number of governments in facilitating break-ins into customer systems. The company highlighted for this behaviour is Microsoft, source of the world’s dominant desktop platform.

Rumours about a back door in Microsoft Windows allowing the US government access have been floating around for a long time, but have always been denied. And rightly so, apparently. It is not that Microsoft deliberately weakened their software in a specific place. They didn’t have to. Instead, they manipulated the process of addressing vulnerabilities in ways that allowed the NSA and others to break into 95% of the world’s desktop systems.

But Microsoft is not the only party with knowledge about vulnerabilities in their systems.

So the situation of users would arguably have been better if Microsoft had installed a back door, as that would have limited the exploit to the parties given access through SSL or other mechanisms. That would have been imperfect, but still better than the current situation: there is no way to know who has knowledge of these vulnerabilities, and what use they have made of it.

How that kind of information can be used in addition to the FinFisher type of software has been demonstrated by Stuxnet, the computer worm that was apparently targeted at the Iranian uranium centrifuges and was in fact capable of killing people.

We now live in a world where cyber-weapons can kill.

Just a couple of days ago, the death of Michael Hastings in a car crash in Los Angeles was identified as a possible cyber-weapon assassination. I have no knowledge of whether that is the case, but what I know is that it has become possible. And of course anyone sufficiently capable and motivated is generally capable of creating such a weapon – no manufacturing plants or special materials required.

All of this of course is also known to all the security agencies around the world. So they are trying to increase their detection and defence. But since this is an asymmetrical threat scenario, it is hard to defend against.

PRISM wasn’t motivated by an anti-democratic conspiracy

Too many comments following the PRISM disclosures sounded as if there were a worldwide conspiracy involving hundreds of thousands of people, including many heads of state, to undo democracy. And it seems that some people, such as US president Barack Obama, only became part of the conspiracy when they came into power.

To me it seems more likely that they received more information and became deeply concerned about what would happen if we started seeing, for instance, large-scale attacks on cars in a country. To them, PRISM probably looked like an appropriate, measured response. That is not to say I believe it is an effective countermeasure against such threats. And if Edward Snowden is to be believed, it has likely been subverted for other purposes. Considering he threw away his previous life and took substantial personal risk, and reading up on what people such as Caspar Bowden have to say, I have little reason to doubt his credibility.

Given the physical and other security implications of all of the above, I guess only very few people would argue that the state has no role in digital technologies. So I think governments should in fact be competent in these matters and ensure that people are safe from harm. That is part of their responsibility, after all. Simply banning all the tools would put a country at a severe disadvantage in fulfilling that role for its people.

At the same time, these tools are extremely powerful and intrusive. So what should governments be allowed to do in this pursuit, and how should they do it? How do we maintain sufficient control to uphold the principles and liberties of our democratic societies? And what does all of this mean for international business and politics?

These will be some questions for the upcoming articles, so stay tuned.




Warming up to sprint: Kolab 3.0 Pre Sprint Release, Talk Schedule and next round of drafts for Web Client

If you follow our streams at Identi.ca, Twitter, Google+ or Facebook, you’ll have noticed that we pushed out the first installable Kolab 3.0 release yesterday. I lack the words to adequately describe how excited I am about this, to be honest.

This is not a full release yet, mind you; as we are just getting ready to sprint, it is not yet feature complete – a pre-alpha, if you will. It is however nearing feature equivalence with the 2.3 series, so we are quite confident about the upcoming Kolab 3.0 release. Whatever we manage to complete during the sprint, including whatever the community comes up with, will then end up in the Kolab 3.0 release.

In order to make that even easier, we’ll also have a series of talks during the sprint, starting with a Kolab 3.0 walk-through by Jeroen van Meeuwen, our Systems Architect, who will give you the grand tour of what has been done for Kolab 3.0. On Tuesday, Christian Mollekopf will talk about libkolab, which provides an API for any technology that wishes to integrate with Kolab 3.0. Never has it been easier to hook other technologies up with Kolab.

Wednesday and Thursday will then be covered by the Kolab Systems web powerhouse, Thomas Brüderli and Aleksander Machniak – respectively the original architect and author, and the most active contributor, of the Roundcube web client and the new Kolab web client, which incorporates Roundcube and adds more groupware functionality. Their talks will cover the new ActiveSync stack we have been experimenting with, and the next generation of the web client, including the new skin.

With regard to the new skin, Michael Krautwasser, Roundcube’s designer for many years, has provided a new set of designs for the Kolab Community web client and is seeking comments. You can find them at

and your comments are best sent to kolab-devel@kolab.org.

So if you want to take a look, help iron out the last glitches, or participate in the sprint remotely or on site, here are the installation instructions for the pre-sprint pre-alpha Kolab 3.0 release. More information on the sprint can be found here and of course on the Kolab Community Wiki.

Hope to see you next week in Berlin!


Kolab 3.0: Update, overview and release plans

Almost half a year ago I had the pleasure of writing the Kolab 3.0 primer, including ways of getting involved. Optimistically, I scheduled the release for May/June 2012 in that posting. Attentive readers may have noticed that it is no longer June and Kolab 3.0 has not been released yet.

So perhaps it is time to provide an update and overview.

The main culprits for delays in this first release by the new team are pretty much the usual suspects: everything is more work than expected, you end up having to do more than you initially planned, there are unforeseeable interruptions, and there is less help than you hoped for.

The good news is: We’re almost there.

Much work has gone into the invisible underbelly of the technology, starting with the Kolab XML format itself. Christian Mollekopf has done an unbelievable amount of work on libkolabxml and libkolab – the refactored Kolab XML format and its API, with wrappers in multiple programming languages – to make Kolab integration as easy as calling the API to manipulate Kolab objects.

Christian also put Kolab XML v2 format support into libkolab so that clients using libkolab can work against either version of the format, and largely finished a migration tool from version 2 to version 3 to provide users with a data upgrade path. And finally, he rebased the Kolab support in the KDE Kontact client – the basis for our desktop client – onto libkolab, already for the KDE PIM 4.9 release. In fact, thanks to some supersonic packaging in the Fedora community, I am already using libkolab with KDE PIM 4.9 RC 1 against all my Kolab 2.3 servers.

We also had to shed the dependency on the outdated Horde 3 framework for Kolab 3.0, which meant a good deal of conceptual work, such as coming up with a new Free/Busy system and dealing with conflicts in ways far superior to anything Kolab has ever done, while maintaining full off-line capability – one of Kolab’s great advantages over other solutions.

When looking at these pages it should become obvious how much time has gone into truly understanding the problems at hand and resolving them solidly in a way that is publicly documented and allows participation from anyone in the community.

Enabling participation is in fact what we have spent a lot of time on throughout the past months: from the Kolab Community web site relaunch, through the IRC meetings for Kolab 3.0 planning and development, to the hiring of Torsten Grote as Kolab Evangelist – who went to work on the community resources straight away and is your dedicated community go-to guy for all things Kolab – all the way to the intermediate release of Kolab 2.4, which makes it easy for people to get Kolab servers up and running and to tap into and participate in the ongoing development.

That release also featured quite a bit of work by Jeroen van Meeuwen, our Systems Architect and specialist for the most complex set-ups – those that scale to hundreds of thousands of users or do things widely considered impossible. Again, much of that work has happened in the background, but it will be fundamental to a lot of things you’re about to see Kolab doing in the years to come.

Among these things is the trimming back of LDAP schema extensions to ensure that Kolab integrates more easily into existing directory services, be they in pre-existing corporate infrastructures, in products that wish to integrate Kolab, in cloud offerings, or in proprietary directory services where Kolab provides the first bridgehead for migration towards more freedom of choice and Open Standards.

Jeroen and Christoph have also been giving a lot of thought to how resource management should work, because our experience all too often was that many things were not done right to enable the kinds of work flows and scenarios people wanted to implement – not just in Kolab, but pretty much anywhere. So we gave this one quite a bit of thought, which Jeroen shared on his blog.

Other parts are configuration management, including the REST-inspired API for configuring the server and the underlying configuration management of the server, which will allow using any kind of configuration management in the future. And of course Jeroen was the key person in getting the 2.4 release out of the door, along with many other things.

The first application to make use of that API is the new web administration front-end developed by Aleksander Machniak, one of the main Roundcube developers on staff at Kolab Systems. Already available in Kolab 2.4, this web admin interface is independent of the kind of directory service or configuration mechanism used in the background, and extensible to virtually any scenario. If you wonder how it looks, Jeroen put some screenshots up on his blog. And last but not least, he has spent much time getting our documentation up to speed.

But of course it wouldn’t be a proper release without something falling victim to triage. In this case the victim is server-side Akonadi. While it will add truly magical capabilities to the Kolab server, we designed the Kolab 3.0 release such that it remains an optional component, to make sure we preserve the ability to scale all the way down to small embedded devices. Because it is optional, and because we did not want to delay the release further, we have put it at the back of the priority chain and removed it from the list of blockers. But you should expect to find it in one of the next series 3 releases.

And of course we haven’t stopped at giving Roundcube its push to the 0.7 release and developing our new web interface on top of it. We are now also thinking about what the next web client should look like and how to bring things together with the desktop clients.

For this we are working together with the professional designer who is also responsible for the current and future Roundcube skins, and you can find some of his designs for the next generation web client of Kolab online here. If you have comments, we’d be happy to hear your input and receive your help.

If you want to get involved in any of the areas we’re working on, the upcoming Kolab 3.0 Technology Sprint in Berlin is the perfect place for that.

This is also where we will be working on finishing some of the more exciting things we’ve been playing with, such as ownCloud integration for the web interface. We have already sketched this in our webmail.klab.cc demo instance, and several people have found it close to usable. So we’re overjoyed that Frank, Arthur and Georg of ownCloud will join us for the sprint, and we invite others who have technology or projects that would work with Kolab in interesting ways to join us during that week as well.

And we particularly invite packagers for all the various distributions out there to join us for the sprint. Because we would love to have Kolab 3.0 be natively available on all platforms just weeks after it is released, and make its way into the upstream distributions. Doing this ourselves for all distributions is more than we can reliably ensure, especially since we also have to take enterprise distributions such as Univention Corporate Server (UCS) into account that add substantial work on that front.

Also, we’re not just developing the next generation server: we have also just enabled Mozilla Thunderbird & Lightning for professional usage with Kolab, through plenty of work that has gone into SyncKolab by Niko Berger, who has joined Kolab Systems to also provide a professional maintenance path for supported users.

And naturally there are still customers who want support as we do all of the above.

So even though I still have plenty of things to feature, I guess it should have become obvious by now that we have been far from inactive, and I truly feel honoured and somewhat humbled to be working with such a great team of dedicated professionals and great minds.

A lesser group could not have achieved that much in such a short time, and Kolab 3.0, when it comes out this summer, is going to be one exciting piece of technology. I hope you’ll give it a try and check out our company web site for how we can assist you in your professional needs.

But now, the magic incantation: Go forth and make Kolab 3.0!

 

Posted in Collaborate in Confidence, Free Software Business, KDE, Kontact, Updates | Tagged , , , , , , , , | Leave a comment

On the Kolab Server 2.4 Release

So a while back I gave a primer and insight into what would happen with Kolab 3.0, and now we’ve released an out-of-schedule Kolab Server 2.4 – what’s that?

There are a couple of reasons for this. Firstly, the Kolab 3.0 development cycle is well under way, and progressing nicely for the most part, even if we may have to do some feature triaging for the 3.0 release depending on how many contributors come to the task in the next month or two.

But even so it is going to be some time before that release is out and tested, and meanwhile the OpenPKG set of packages of the Kolab Server is ageing. Quickly. Security updates would of course be provided in an ideal world, but it takes around two weeks to wrap a release, as updating even an individual component easily means the entire stack needs to be rebuilt.

That’s a lot of effort for something that’s been discontinued.

From a business perspective it is also completely wasted, as there are zero customers of Kolab Systems on that particular technology base. None. Some other service providers may have paying customers on that basis, which is fine. But in the way they have chosen to maintain those customers on that basis without upstream support, they have themselves chosen to become the upstream for the solution their customers are on. So we gladly give them everything they need to provide such updates for their customers, but they’ll have to do the work themselves, I am afraid.

Naturally they could also hire us to do this for them. But I’d prefer they didn’t, because this packaging base and some of the technology contained within is fundamentally unmaintainable, while the new basis is much leaner and more modular, and each component can be updated as required without affecting the entire stack. In other words: up-to-date (release) engineering.

In any case, even if an employee of ours were hired for another OpenPKG release, that person would be missing from other activities, such as the native packages available through our software subscription for customers with upstream support. So I’d much prefer to have the employee work on that, to be honest.

At the same time, we do not want to leave our community without an update for too long, and we want to lower the barrier to becoming active in the Kolab 3.0 development cycle. The answer to all of this was the intermediate Kolab 2.4 release. That release already gets so many things right that we really encourage anyone with an interest in Kolab, Roundcube or Free Software Groupware to take a look for themselves.

The fastest way to a running virtual machine is to be on Fedora 16 or 17 with the virtualisation packages installed & the libvirtd service running.

Simply run the script below kindly provided by Jeroen van Meeuwen, our Systems Architect.

Or take a look at the quick installation instructions on the kolab.org web site.

 

Fastest way to Kolab, courtesy of Jeroen van Meeuwen

Assumptions to the script:

  1. It is purely demonstrative;
  2. it assumes a libvirtd-managed ‘virbr0’ network;
  3. it assumes no kolab-demo system already exists;
  4. it is to be executed with something like the following:
    sudo TMPDIR=/path/to/my/tmp/dir /path/to/setup-el6-k24.sh

Save the following as setup-el6-k24.sh and make it executable (e.g. chmod 755 setup-el6-k24.sh):

#!/bin/bash

# Tear down any previous kolab-demo guest; these commands simply
# fail harmlessly if no such guest exists yet.
virsh destroy kolab-demo
virsh undefine kolab-demo

# Remove the old disk image and create a fresh 8 GB one.
rm -rf "${TMPDIR:-/tmp}/kolab-demo.img"
qemu-img create "${TMPDIR:-/tmp}/kolab-demo.img" 8G

# Install a CentOS 6 guest, fully unattended via the hosted
# kickstart file, attached to the libvirt default bridge.
virt-install \
--name=kolab-demo \
--ram=2048 \
--vcpus=2 \
--disk="path=${TMPDIR:-/tmp}/kolab-demo.img" \
--location=http://mirror.switch.ch/ftp/mirror/centos/6/os/x86_64/ \
--extra-args='ks=http://hosted.kolabsys.com/~vanmeeuwen/ks.cfg' \
--network='bridge=virbr0' \
--hvm \
--virt-type=kvm
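A side note on the `${TMPDIR:-/tmp}` expansion the script relies on: the `:-` parameter expansion falls back to `/tmp` whenever `TMPDIR` is unset or empty, which is presumably why the suggested invocation sets `TMPDIR` explicitly on the `sudo` command line. A quick sketch of how it behaves:

```shell
# With TMPDIR unset, the :- expansion substitutes the default.
unset TMPDIR
echo "${TMPDIR:-/tmp}/kolab-demo.img"   # prints /tmp/kolab-demo.img

# With TMPDIR set, its value wins.
TMPDIR=/var/tmp
echo "${TMPDIR:-/tmp}/kolab-demo.img"   # prints /var/tmp/kolab-demo.img
```

Passing `TMPDIR` through `sudo` explicitly matters because `sudo` typically resets the environment and would otherwise drop it.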

 

Posted in Collaborate in Confidence, Free Software Business, Updates | Tagged , , , , | 3 Comments

A bridge leading nowhere: Outlook-centric groupware

I have a confession to make.

I do not believe that Windows is the future of the Free Software desktop.

Perhaps you wonder why I feel it necessary to make this point?

A surprising number of Free Software (or Open Source, take your pick) companies, evangelists and journalists these days advocate some Open Core groupware solutions that focus on Microsoft Outlook as their primary client as “consequential” and “the best approach.” The term “pragmatic” is also quite popular among such comments.

Although some things could and should be said about this, let’s ignore the fact that not everything that calls itself Open Source actually is. That is a case of deception and deceit, of misleading advertising where the users only notice they’ve been locked in at the time they try to make use of the freedoms they thought they had gained. It is not specific to the area of groupware, though, and not the focus for this article.

There is a set of technical and strategic issues that make this approach a dead end.

That is not to negate the strength of Microsoft Windows on the desktop, or to try and ignore it. We always need to take the prevalence of Microsoft on the desktop into account. But there are paths of action that reduce dependency, and there are paths that increase it. Samba, Mozilla Firefox, LibreOffice/OpenOffice.org are all excellent examples of solutions that create more degrees of freedom. These are bridge-building applications. But where do these bridges lead?

Their approach is to interoperate by basing themselves on Open Standards that are equally available on all platforms, and then do their utmost to ensure they also support the Microsoft specific formats and the deviations from Open Standards that were often deliberately introduced to create incompatibility in order to facilitate lock-in. So they bridge towards empowering the user with Free Software applications that can now interoperate, thus enabling multiple platforms and reducing dependency upon Microsoft.

A groupware application that focuses primarily around Microsoft Outlook may seem related, but where does this particular bridge lead?

For one, interoperability is often achieved at tremendous cost, such as storing Outlook’s binary blobs, which are based on the application-specific in-memory data structure, in SQL databases. A somewhat better approach is MAPI as the transport layer for Microsoft compatibility, as long as there is a truly open and interoperable communication and storage layer and mechanism underneath. The inherent danger is that MAPI becomes the primary and most important protocol in such an application, genuinely turning things back into an “every platform, as long as it is Microsoft Windows” situation.

But even more importantly: such a solution builds an ever deeper habitual and technological dependency on Outlook, which only runs properly on Windows.

So that bridge leads towards where users already are: An ever increasing dependency on Microsoft Windows, which is the opposite effect of applications such as Samba, Firefox or OpenOffice.org/LibreOffice.

Worse even, they block the office applications in particular, due to a quirk in Microsoft’s licensing strategy that bundles Microsoft Outlook with Office. As a result, where one is already deployed, the other is already fully paid for. For the office suites that means LibreOffice/OpenOffice.org would effectively have to pay users to adopt them; anything else would be an added expense. Try getting that past the accounting department of a company that is struggling to stay within budget.

With groupware being a critical core functionality of any business, as long as MS Outlook stays firmly entrenched, the Free Software offices continue to have a much harder time catching up. So if your concern is to provide companies and users with more choice, investing into an Open Core groupware on the server can in fact strengthen the dependency on Microsoft Office if the deployment is predicated on Outlook as the client.

To make it worse, the customer has now invested in good faith into something that promised openness, and finds themselves deeper in the hole. Good luck getting that customer to trust another solution that promises more degrees of freedom in a similar way and requires migration and further investment.

So while these Exchange competitors provide temporary relief in terms of cash flow, they do nothing to resolve the underlying problems, and companies that provide these kinds of solutions to their customers would be well advised not to oversell them as “Open Source Solutions with all the great advantages of Open Source” because they’d be misleading their customers.

Chances are the customer will anyhow harbour unjustified expectations even without the overselling, but overselling definitely increases the chance of leaving permanently scorched earth for Open Source / Free Software.

So what would be a sustainable approach?

Firstly, the solution should be based upon Open Standards as much as possible.

Secondly, it should be fully Free Software that is deserving of the name.

Thirdly, the solution should not predicate itself primarily upon Microsoft Outlook support. Support for Microsoft Outlook can clearly be a plus, but it should not permeate the design of the solution, nor should Outlook be the only or even the primary client of choice. The solution should instead be focused on a truly heterogeneous client ecosystem, and ideally one that also assumes a multi-platform world.

Then it should come with an up-to-date web client, mobile phone support and all the technical aspects users require, but it should not require a huge data centre to run. In other words, it should be able to scale up as well as down: installable on a single machine in an office as well as in a distributed cloud setup serving hundreds of thousands of users.

Why would you care about that level of scalability? Because it provides the grounds for ubiquity. And Microsoft has done a pretty good job at demonstrating how powerful ubiquity really can be. But that ubiquity depends upon a couple more factors. Such as the development process.

Does the solution you’re looking at actually have public development mailing lists, issue trackers, wikis and such, where the actual developers of the company driving it participate? Can the community participate in the steering of the solution on all levels? Is there transparency of the development process, and is there a development process to speak of?

But most importantly: You don’t know your business requirements for the next ten years in advance.

What you do know, however, is that the domain of groupware is going to be a central part of that, because exchanging messages, planning your days and keeping track of the people you interact with is not going to become less important. Neither are the extended functionalities that are often associated, such as instant communication, telephony, video conferencing, collaborating on documents and so on and so forth. In all likelihood, its importance is going to increase as we move towards a more interconnected and cooperative world.

What does this mean for your decision right now?

You want technology that you can innovate upon and integrate into other technologies easily. That is partially covered by the Free Software & Open Standards points above. But there are also architectural aspects to consider here, and conceptual questions as to whether the solution is flexible enough to evolve with your needs.

Your groupware solution especially merits such in-depth analysis before you make a call.

Because lock-in starts at the application level, this choice is an essential part of what you will be able to decide in the future. So next time you’re thinking about your groupware strategy you might want to ask yourself: Do you think that Windows is the future of the Free Software desktop? Do you believe it is the only desktop you should ever be able to choose?

If you don’t think so, I would, unsurprisingly, suggest you take a look at Kolab. Good starting points might be the Kolab Story, the Kolab 3.0 Primer, and of course the Kolab Systems web page.

But perhaps even more importantly, I believe this shows we need to address groupware & office jointly if we want to displace Microsoft Outlook & Office.

So I invite everyone working on promoting the Free Software office solutions to get in touch and work together.

Posted in Collaborate in Confidence, Free Software Business, Kontact | Tagged , , , , , , , , , | 3 Comments

When you go to FOSDEM…

…don’t forget to pack your résumé.

If you’re looking for a job working on Free Software/Open Source, or want a change of positions, that is.

At the current point in time I am aware of existing openings for all sorts of profiles, including, but not limited to:

  • Red Hat/RHEL Systems Engineer
  • Debian Systems Engineer
  • Developers for Python, C, C++, Qt, KDE, PHP and Java, with experience in solutions such as KDE PIM, Akonadi, Roundcube, Cyrus IMAP, OpenERP, TYPO3
  • Support Engineer
  • Technical Sales & Support
  • Marketing & Sales (of Free Software, mind you)

Some of these are positions we’ll be looking to fill at our own company, Kolab Systems, some time this year. Some are in our company group, some in befriended companies that keep asking me for viable candidates in various areas.

Naturally for Kolab Systems candidates with community experience, connection and participation will be preferred. For some of our partner companies it’s not that important. Some of these jobs would offer the opportunity to relocate to Switzerland, some of them would offer the opportunity to work from home, most of them are located in Europe.

And while I cannot promise that I’ll find jobs for everyone, or that I’ll have your dream job for you, I may just know an interesting place for you and will be happy to pass your résumé along.

So don’t hesitate to track me down for a chat!

Posted in Collaborate in Confidence, Conference, FOSDEM, Free Software Business, KDE, Kontact | Tagged , , , , , , | Leave a comment

A primer for Kolab 3.0 – and ways of getting involved

After several months of a development sprint, the new Kolab web frontend has been unveiled for RHEL and UCS. We’re in fact quite proud of what our team has achieved this year, and hope you will agree:

Kolab Webmail

The main email view

Kolab Calendar

The calendar week view

 

This new web client builds upon the Roundcube Webmailer, considered by many the best Free Software web mail application, and all changes made have been provided to the respective upstreams. The Kolab-specific modules are being hosted by Kolab Systems.

In case you would like to see for yourself how this new client has turned out, we have set up a test & demo instance. You can request an account by sending email with your name, email & affiliation to sysadmin-main+kolab@klab.cc. If you want, you can also request several accounts in the same way to test calendar sharing and such. But please be aware that this instance is running on a fairly small virtual machine, so speed won’t be what you see in a full fledged installation. Also this is a test bed for some experiments of ours, which means there may be occasional breakage. If you find something that is broken and remains broken, please file an issue at https://bugzilla.kolab.org.

This web client is now available to customers as part of our standard supported offering, and for those currently using the Version 2.3 Community Release we have a KVM image that you can hook up against an existing instance to give you the interface right away. We would have liked to make it even easier, and will probably do so in the future, but for the moment we felt that speed was more important than perfection and wanted to let you have a look at this immediately.

Because OpenPKG has been on the deprecation path for two years now and no future release will use it, there won’t be the same smooth upgrade possibility. So we felt that one clean break is better than two successive ones over a few years and already did a lot of the cleanup of LDAP idiosyncrasies we had on our radar for some time. This has happened in the 2.4 experimental branch already, but as a result the old web admin interface which was hard-coded against the LDAP schema no longer works. Now of course one could try to hard-code it against a new schema. But then that would be a lot of effort for very little gain.

Knowing that we had reached the end of the line for incremental updates, it was time to jump.

That is why our next community release will be Kolab Server 3.0 as announced last week on our development list. Allow me to give you a little bit of an overview.

Towards new horizons

There will be a couple of under-the-hood changes for Kolab 3.0, and some very visible ones. A lot of work under the hood has already been prepared or begun on the grounds of the Kolab Enhancement Process (KEP) which has produced some pretty good output so far. These address capabilities in the format, as well as updates to match a technological world that has been evolving fast.

Under the hood

When Kolab started using IMAP as a NoSQL storage database, this concept was not all that well understood by many people, and IMAP itself had only just begun lending itself to this kind of approach through the ANNOTATEMORE draft RFC. This is what Kolab has been using up until version 2.3, but since this draft has long expired and become RFC 5464 – The IMAP METADATA Extension, it is time to finally lay ANNOTATEMORE to rest. With KEP 9 we also introduce per-message metadata based on RFC 5257 – Internet Message Access Protocol – ANNOTATE Extension, for which we have some plans that will hopefully become clear after the 3.0 release.
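For a flavour of what that change means on the wire, here is a hand-written illustrative exchange (not captured from a real server) setting and reading the folder-type annotation Kolab uses to mark a calendar folder, using the RFC 5464 commands:

```
C: a1 SETMETADATA Calendar (/shared/vendor/kolab/folder-type "event")
S: a1 OK SETMETADATA complete
C: a2 GETMETADATA "Calendar" /shared/vendor/kolab/folder-type
S: * METADATA "Calendar" (/shared/vendor/kolab/folder-type "event")
S: a2 OK GETMETADATA complete
```

The same information under the old ANNOTATEMORE draft lived in the differently-shaped SETANNOTATION/GETANNOTATION commands, which is exactly the incompatibility that makes a clean switch worthwhile.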

More importantly, we are giving the Kolab XML Format & Specification a fairly comprehensive overhaul based on a wide range of customer experience and also because the RFC process has completed two fairly important RFCs for us this year: RFC 6321 xCal: The XML Format for iCalendar and RFC 6351 xCard: vCard XML Representation. These will be the basis of our new Event, Task & Address book objects.
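To give a flavour of that basis, a minimal single-event calendar in xCal as defined by RFC 6321 might look as follows (all values are purely illustrative, and the actual Kolab 3.0 objects add Kolab-specific properties on top of this structure):

```xml
<icalendar xmlns="urn:ietf:params:xml:ns:icalendar-2.0">
  <vcalendar>
    <properties>
      <prodid><text>-//Example//Kolab 3.0//EN</text></prodid>
      <version><text>2.0</text></version>
    </properties>
    <components>
      <vevent>
        <properties>
          <uid><text>4cc4c577-demo-event@example.org</text></uid>
          <dtstart>
            <parameters>
              <tzid><text>Europe/Berlin</text></tzid>
            </parameters>
            <date-time>2012-07-01T09:00:00</date-time>
          </dtstart>
          <summary><text>Kolab 3.0 release review</text></summary>
        </properties>
      </vevent>
    </components>
  </vcalendar>
</icalendar>
```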

The entire format will be described in normative XSD, the code generated & provided through an API with language bindings for a wide variety of programming languages, making it easier than ever to write a Kolab client. This effort is led by Christian Mollekopf, who has prepared a KEP for the specification, and provided a good summary on the why’s and how’s of this approach, which came out of a community consultation process that took place on the kolab-format mailing list.

Kolab Server: Each box can be clustered individually

We also wanted to further emphasize one of the great strengths of the Kolab Groupware Solution: scalability. It is possible to set up the Kolab Server in ways that allow for natural high availability, load balancing & site reliability, with a granularity of performance monitoring and adjustment that allows each individual component to be scaled up or down as required.

(And yes, we have implemented this kind of setup before. In two separate geographical locations. With all optional components. Built so it can scale up to 100s of thousands of users. Any machine can fail at any point without even disturbing the individual session of the user. It is a thing of beauty of which we are proud. We really wish we could talk about it.)

Naturally we like this aspect very much, but believe it may be possible to do this one better through our client-side technology developed in the recent re-factoring of what to us and our customers is the Kolab Client, and which you might simply know as KDE Kontact. We think this technology has potential beyond the desktop that we would like to explore. To us, it is called Server Side Akonadi.

This should be an interesting experiment, and will hopefully also contribute towards the overall speed, quality and flexibility of Akonadi on all platforms, including the desktop & mobile phone.

This will then be rounded off by the LDAP cleanups which will make Kolab near-fully agnostic towards existing LDAP setups, and of course configuration management updates, of which the most important and most visible will be the new Kolab Configuration API.

What you’ll see

Because we need to re-do the web admin in any case, we decided to do it right and make it a RESTful configuration API. This process is already in full swing with a Python backend and the new PHP based web admin being scoped out by Jeroen van Meeuwen and Aleksander Machniak (a.k.a. Alec) based on a draft by Thomas Brüderli. There is even some documentation already. Once we have a version that does at least what the old web admin did, we plan to wrap this into a 3.0-development release including the new web front end. Please note that this will be the starting point for the public 3.0 development cycle, and not a release you should use productively. Because things will break badly in the process of making all the under-the-hood changes described above.

In any case, the new web client will be the other major visible change in Kolab 3.0. But of course we remain strongly committed to keeping the interchangeable-components approach of the server intact. So we also hope that people will help to make Horde 4 an option for the Kolab 3.0 server.

Meanwhile we’re getting on with the work, and we hope that some of you will join us. If you’re looking for something fun and interesting to do, what about any of these ideas?

  • Create a GTD module for the web client to complement Zanshin
  • Create a web client notes module compatible with the newer versions of KDE Kontact
  • Integrate a web-based XMPP client into the web client
  • Integrate ownCloud with Kolab on the server
  • [... please insert your idea here ...]

There is in fact a “formalized” approach in which you can throw your own ideas into the mix. You can find information about it here.

According to schedule, Kolab 3.0 will then see the light of the net in May/June 2012, and your favorite feature could be part of that.

So don’t just watch. Get involved! :)

Posted in Collaborate in Confidence, Free Software Business, KDE, Updates | Tagged , , , , , , | 10 Comments

So what might Digital Sustainability be?

There is a group of Swiss parliamentarians organized in a group for “Digital Sustainability”, in which I’ve been asked to participate as part of an expert group consisting of practitioners in a variety of fields, including Free Software and Open Standards. But while the German Wikipedia at least has an article about Digital Sustainability, most people simply seem to apply the “I know it when I see it” test, which is somewhat less than satisfactory. What can be said is that most people intuitively seem to agree that Digital Sustainability includes aspects such as Free Software, Open Standards, Open Governmental Data, Privacy and a couple of others. But how to define or describe it in a simple and transferable way?

So I recently found myself in a room with several other people trying to understand what we expect from Digital Sustainability and how to express it. In this discussion, after several other attempts, we narrowed it down to three aspects:

Trying to sketch Digital Sustainability

Digital Sustainability: Your digital relationship to society

So what you want for Digital Sustainability is:

  • Transparency: Access to know and understand the world around you, its power structures, and to the data & information to form your own opinion;
  • Participation: You are not limited to watching events unfold, can participate in the political process, shape opinions and provide processed information on the grounds of the data that is available to you and others;
  • Self-Determination: You define your own privacy, including for your digital environment, and determine how much of your information you are providing, and to whom.

In order for something to be digitally sustainable, none of the above three principles may be violated.

Another way to think about it might be to see self-determination as the natural limitation towards how transparent your person should be to others and how much they should participate in your life, based on a principle of reciprocity since this is valid for every individual in society. The agglomeration of all of this then forms a consensus within and throughout society as to what things shall be governed jointly, and with equal participation of all.

So all three aspects need some form of balance, as your right to request influence is linked to the limits you set for your own self-determination. But pushing the limits of your own self-determination eventually causes friction once it comes in conflict with the self-determination of another person. That is when transparency and participation need to help to find a workable balance.

Or, as Richard coined it for the reciprocity principle behind the GNU General Public License: “Your freedom to swing your fist ends at my nose.”

Naturally this still represents work in progress, so I am not sure it is the answer to all questions in this area. But it seems to meet some of the criteria I’d set for such a conceptual definition. Most importantly, it is simple, understandable without technical knowledge, and allows one to check existing services or situations for violations of these principles, and the results come out on the right side of what I’d consider digitally sustainable.

So for me, this seems workable for the moment.

And if you like it, the next time someone asks you what Digital Sustainability is, you can draw them a picture.

 

Posted in Political Commentary | Tagged , , , | 3 Comments

OFE Summit 2011: Creating an Open climate for entrepreneurs in Europe

During the 2011 OpenForum Europe Summit I had the pleasure and privilege to chair the session on “Creating an Open climate for entrepreneurs in Europe” and the videos of the opening presentations are now online on YouTube, and included below in chronological order:

Fabien Pinckaers, CEO OpenERP is talking about his experience in setting up a Free Software/Open Source business and how his business works not despite, but because of software freedom:

Laura Creighton, VC is talking about some of the systematic issues of promoting innovation and entrepreneurship, and gives some insight as to why current EU funding is so ineffective due to addressing the wrong sector:

Chris Taggart, CEO, OpenCorporates is talking about his approach to increasing transparency in the corporate world for all, and the potential this holds for entrepreneurial activity:

The discussion that followed these presentations was interesting, lively and with good controversial points, which brought out some very valuable insights, in my opinion. Among others:

  • The European Commission is currently targeting less than 1% of European businesses with its research & development programmes, which look at heavy, centralized, old-school industrial development, and fail to target the knowledge economy ecosystem of small, agile, intelligent players that characterizes IT innovation;
  • “Silicon Valley” is a social phenomenon more than a technical one, unique in time and space, and cannot be recreated. Its tradition of sharing best and worst practices between entrepreneurs has allowed it to overcome the obstacles facing new businesses, which are hardly ever technical;
  • Advertising or technical development are not where most businesses fail. It’s getting the first 100 customers that causes the greatest issues, because European businesses, as users of IT, are not innovation-seeking and are afraid to stand out from their competition by trying something new that may, or may not, give them a competitive edge. While companies in the U.S. love to try new technologies, the demand in Europe for new and innovative technologies is much smaller;
  • The Commission could aim to tackle this issue by allocating some of its R&D funding towards helping adoption of new technologies, e.g. through tax breaks for companies that adopt new technologies early and seek innovative edge;
  • Software patents remain the single greatest threat to a competitive European IT industry and are likely to destroy the beneficial impact of all R&D funding to date and in the future.

But these are of course only some points that stuck with me, there were many more.

For more excellent insights from the summit, the OpenForum Europe YouTube channel has the other presentations; I recommend in particular the ones on Open Data, which are highly pertinent and interesting.

Posted in Collaborate in Confidence, Conference, European Union, Free Software Business, Free Software Foundation Europe, Political Commentary | Leave a comment