Evaluating Free Software for procurement

When you’re a public body, how do you evaluate Free Software solutions, and how do you procure them? Recently I’ve been getting this question fairly regularly. Here are the main resources I point people to.

First stop: a guideline from the European Commission on "public procurement of open source software". This answers most of the fundamental questions, such as "Is it OK for us to just download a program?" (yes) and "How can we specify that we really want Free Software and Open Standards?". When it comes to evaluating potential solutions, though, the EC guideline is pretty curt.

The UK government has produced an "Open Source procurement toolkit". This is a very useful resource. It highlights Open Standards and the need to avoid lock-in. The documents in the toolkit are clearly structured and well written.

They sometimes make the common mistake of describing “open source” and “commercial software” as opposites. Lots of commercial companies that have successfully built their business around Free Software would beg to differ. But this is a minor quibble with a generally very useful resource.

So let’s say you’re putting out a call for tender for a Free Software-based solution. How do you evaluate and compare the different bidders? Here, the Swedes have some helpful advice for you. In early 2011, one of Sweden’s two national procurement agencies launched a number of Free Software framework contracts. They specified some pretty detailed criteria for evaluating suppliers. Some of them are pretty nifty: In one example, bidders can score the highest number of points if they have committed code to a project, and the project has accepted and integrated it.

The original documents are in Swedish (of course), and have been translated into German. If you read neither language, this presentation by Daniel Melin has a pretty good overview. You’ll also want to check out this write-up of Sweden’s and other countries’ public sector approaches to procuring Free Software.

Thankfully, there are many other useful resources on this topic. If you want to see your favourite one included here, please get in touch!


Free Software in the Church: From principles to practice

Several interesting conversations resulted from my visit to the European Christian Internet Conference. One of them was with a pastor (and computer scientist) who works in the church administration of the Rhineland in Germany. He shared with me a draft strategy (pdf, in German) to move the churches in his region towards Free Software. He asked me to comment, and I’m happy to do so.

The pastor will also be joining one of FSFE’s upcoming Fellowship meetings in Düsseldorf, Germany, to get more input from the Free Software supporters there. I love seeing connections like these come together.

The first problem he’s encountering with the strategy is getting his superiors to understand that IT is actually part and parcel of what the church does in this world, and should follow the organisation’s guidelines.

An IT strategy, as an "ordinance" of the church, serves to support the church's mission of proclamation. It is a human ordinance that takes account of the presbyterial-synodal order of the EKiR and itself reflects the church's mission as far as possible.

This is perhaps the most difficult challenge to overcome, and he’ll need to do some more work to explain this. Explaining ethics-based software to an ethics-based institution shouldn’t be too hard; but it still requires effort.

Next, the strategy document brings up cost as a key point in favour of Free Software:

Since the use of IT serves to fulfil the church's mission and is not an end in itself, it is subject to the requirement of economic efficiency, so that the church, as a good steward of the gifts entrusted to it, can devote them as far as possible to proclamation, pastoral care, diaconal work, and advocacy for peace, justice and the preservation of creation.

Free Software is very often cheaper to deploy and use than non-free programs. But for us at FSFE, that's never the primary argument. Instead, we always first talk about how Free Software empowers users and puts them in control of their computing. If you say "this is cheaper than that", that's both easy and pretty convincing – until the competitor decides to dramatically lower its prices, or presents an alternative calculation with a different methodology.

We’ve learned that Total Cost of Ownership in particular is a pretty insidious concept. It sounds perfectly reasonable, but since there are many different ways to calculate it, the competitor with the bigger PR budget will eventually win out.

So instead of cost, the strategy should highlight how Free Software and the Church’s ideas fit together, and why Free Software is a fundamentally better choice. This is also why the strategy document would be more effective if it used the term Free Software rather than open source.

So now we’ve dealt with the principles. Off we go with the practical stuff:

IT requirements shall be covered by standard applications, adapted where necessary. Proprietary software and infrastructure will only be used where standard solutions cannot meet the necessary requirements. As a rule, only specific church requirements justify the need for in-house development.

Three points deserve improvement here. First, what are "standard applications" and "standard solutions"? Better to talk about "Free Software applications / solutions", with a footnote pointing to the Free Software definition. Many similar documents also talk about programs and solutions under "OSI-certified licenses".

Rather than saying that “proprietary software will only be used if standard solutions (i.e. Free Software solutions) cannot meet the necessary requirements”, those who want to deploy non-free software should be asked (or, if possible, required) to first conduct a thorough review of potential Free Software solutions, and document why none of them are suitable.

Sometimes organisations have requirements that aren’t met by existing software. In cases where the Church is paying for new programs to be developed, it should make sure those programs are published as Free Software, and should get hold of the copyright on the code.

Open source solutions are to be preferred; in case of doubt, open source code makes it possible to understand how the software works. Solutions developed within the EKiR in turn benefit the general public (Acts 20:35). In procurement, the goals of sustainability and fair economic practice are to be considered and included in the assessment of economic efficiency.

Preferring Free Software solutions in principle is a statement of intent, not a strategy. If the idea here is to provide more reasons for using Free Software, then there are many good arguments. The ability to review the source code is just one of them, and for this particular organisation, it’s probably not the strongest one. In particular, it would be wise to follow the practice recently adopted by the UK government, and figure future exit costs into the price of any new solution.
To enable open and accessible data exchange, the "Open Document Format" (ODF) and open standards (XML) shall be used in future.

A clear preference for ODF as an Open Standard is great. However, there are other document types out there. There’s nothing on how the organisation will get from its current dependence on proprietary file formats to the future Open Standards practice. The UK government’s proposal on “Sharing or collaborating with government documents” is an excellent reference here.
Document management and archiving system: the establishment of a public, central document information and archiving system (see the Open Data concept) is to be pursued in coordination with the relevant departments, [...]

If you’re talking about long-term archiving, formats based on Open Standards aren’t only the sensible choice – they’re the only choice. Say what you will about the Church, but unlike the IT industry, at least it doesn’t define “long term” as “the next five years”.

While this strategy draft still needs quite a bit of work, it’s certainly going in the right direction. I very much welcome the commitment of this pastor to make a real difference in his organisation, and I’m grateful for the chance to support this effort.

Talking to the Church about Free Software

Telling lots of different kinds of people about Free Software is one of the parts I like best about my work with FSFE. Recently I was invited to deliver a keynote at the European Christian Internet Conference. It’s a small event that has been running for a long time. A core group of about 50 people from (mostly Protestant) churches around Europe meets to discuss Internet-related aspects of their work. Maybe a third of the participants are priests; others are laypersons working with their local congregations.

I broke my talk down into two parts. The first was a general introduction to Free Software: the idea, its history, what Free Software is doing today, and how the licenses work. In the second part, I focused on power, technology and surveillance, and the role that churches might play in righting the wrongs that governments are perpetrating against their people.

What stood out with this group was the strong focus on ethical questions. I highlighted that Free Software and the ideas which the Church espouses go together very well: sharing, helping each other out, and making sure that everyone can participate in society. If churches can agree that they should only buy Fair Trade coffee that respects the integrity of the producers, then they should also be able to agree to use only Free Software, which respects the integrity of its users. (If you read German, have a look at LuKi’s pages.)

The issue of surveillance resonated strongly with the participants. Priests often discuss highly intimate matters with people in their congregation, and these days, they find themselves doing so by electronic means. How is this confidentiality supposed to work if the priest knows that one or more governments are recording the conversation? And on a larger scale, how can we have a just society if those in power help themselves to near-total insight into the lives of everyone else?

The most useful role I can see for the church in this debate is to leverage its position as a moral institution. Church leaders need to step up and speak out against mass surveillance, again and again, until we have ended the practice.


Four social rules for a “No Asshole Zone”

Free Software needs a strong community. If we fail to attract everyone willing to work for Free Software, we’re shooting ourselves in the foot. Also, we’re probably not being as friendly and open towards people as we should be, morally speaking. That’s a serious failure for a community where morals matter.

The low share of women participating in the community (oh, where to start with the links? I’ll just pick this one by Jodi Biddle) is an especially egregious problem. I’m sorry to say that FSFE is doing no better in this regard than the overall community, much as we want to.

One easy step that we’ve already taken is to update our internship page to say that when choosing between two or more similarly qualified candidates, we’ll prefer female applicants. Yes, I know, hardly revolutionary – but you have to start somewhere.

I just came across Sumana Harihareswara’s keynote at WikiCon 2014, titled "Hospitality, Jerks, and What I Learned". The whole text is interesting, and I recommend you read it in full.

In one section, Sumana talks about constructing a “No Asshole Zone”, and specifically focuses on four social rules that help people to work in public – something that sometimes includes failing and showing ignorance. The rules are:

No feigned surprise. No well-actuallys. No back-seat driving. And no sexism, racism, homophobia, and so on.

It’s important to realise that these rules aren’t specifically about being more welcoming towards women. They’re about being more welcoming towards everyone. They’re about making our work more productive and satisfying.

A more detailed explanation

Feigning surprise. When someone says “I don’t know what X is”, you don’t say “You don’t know what X is?!” or “I can’t believe you don’t know what X is!” Because that’s just a dominance display. That’s grandstanding. That makes the other person feel a little bit bad and makes them less likely to show you vulnerability in the future. It makes them more likely to go off and surround themselves in a protective shell of seeming knowledge before ever contacting you again.

Well-actuallys. That’s the pedantic corrections that don’t make a difference to the conversation that’s happening. Sometimes it’s better to err on the side of clarity rather than precision. Well-actuallys are breaking that. You sometimes see, when people actually start trying to take this rule in, that in a conversation, if they have a correction, they struggle and think about it. Is it worth making? Is this actually important enough to break the flow of what other people are learning and getting out of this conversation? Kind of like I think we in Wikimedia world will say “This might be bikeshedding but -”. It’s a way of seeing that this rule actually has soaked in.

So far, so good. But what happens when someone fails to stick to these rules?

I think it’s also important to note, well, how do these rules get enforced? Well, all of us felt empowered to say to anyone else, quickly and a bit nonchalantly, “Hey, that was a well-actually,” or “That’s kind of feigned surprise, don’t you think?” And the other person said sorry, and moved on. I can’t tell you how freeing it felt that first week, to say “I don’t know” a million times. Because I had been trained not to display ignorance for fear of being told I didn’t belong.  [...]

If you don’t understand why something you did broke the rules, you don’t ask the person who corrected you. You ask a facilitator. You ask someone who’s paid to do that emotional labor, and you don’t bring everyone else’s work to a screeching halt. This might sound a little bit foreign to some of us right now. Being able to ask someone to stop doing the thing that’s harming everyone else’s work and knowing that it will actually stop and that there’s someone else who’s paid to do that emotional labor who will take care of any conversation that needs to happen.

Within FSFE, we put a lot of importance on keeping conversations polite and productive. We already rely quite a bit on FSFE’s staffers and other experienced list moderators to sort out conflicts. However, a lot of our rules are implicit. Sumana’s talk makes some of them explicit, and suggests some useful new approaches we should try.


We’re all Gmail users now – Pt. 2

The other day I wrote about how even if you don’t use Gmail, Google still ends up with access to a lot of your personal conversations. My own analysis was a pretty poor imitation of the interesting work done by Benjamin Mako Hill. Where he used Python and R, I just fumbled around with Mutt’s limit patterns. Due to the different methodology, our figures weren’t really comparable.

Now I took the time to actually run Mako’s scripts. It turned out to be easier than I thought. The archives I analysed contain data starting in the first half of 2009, but anything before 2010 is patchy. I changed my mail setup at the start of 2010, and most of the mail from before then isn’t included in this analysis.
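Mako's actual scripts do the heavy lifting in Python and R; as a rough illustration of the kind of counting involved (my own approximation, not his code), here's a minimal sketch that computes the per-year Google share of an mbox archive. The archive path and the Received-header heuristic are assumptions of mine:

```python
import mailbox
from collections import Counter
from email.utils import parsedate_to_datetime

def yearly_google_share(path):
    """Per-year share of messages in an mbox that passed through Google.

    "Passed through Google" here means that some Received header
    mentions google.com -- a rough heuristic, not Mako's methodology.
    """
    totals, hits = Counter(), Counter()
    for msg in mailbox.mbox(path):
        try:
            year = parsedate_to_datetime(msg["Date"]).year
        except (TypeError, ValueError):
            continue  # skip messages with missing or malformed Date headers
        totals[year] += 1
        if any("google.com" in h.lower() for h in msg.get_all("Received", [])):
            hits[year] += 1
    return {y: hits[y] / totals[y] for y in sorted(totals)}

if __name__ == "__main__":
    # "archive.mbox" is a made-up path; point this at your own archive.
    for year, share in yearly_google_share("archive.mbox").items():
        print(f"{year}: {share:.1%} via Google")
```

Plotted over time, per-year (or per-week) shares like these are what the graphs below show.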

[Chart: Email since 2010 – number of mails, overall and handled by Google]

This shows us the absolute number of mails I’ve received and replied to. It tells me that my mail volume is fairly constant over the long term, but that my mail load can oscillate wildly on a weekly basis. And it tells you when I was on vacation these past years.

The share of mail that goes through Google’s servers is pretty low. But how low?

[Chart: Share of mail going through Google's servers]

Between 10% and 15%, that’s how low. As Mako (and I) would expect, Google is somewhat more involved in conversations that I carry on actively (Email with Replies) than in the overall set of email I’ve received. This is because there’s lots of spam and auto-generated mail in the “All Mail” category, and most of it doesn’t go through Google’s servers.

I have no idea what causes the slight uptick in Google’s share among mails I’ve replied to after mid-2013.

Hugo Roy has run Mako’s scripts, and his Google share moves between 25% and 50%. The results that I obtained from Mutt’s limit patterns match the output of Mako’s scripts pretty closely, by the way.


What can we learn from this? A large share of my contacts doesn’t rely on Google for email service. That’s good news.

On the other hand, Edward Snowden telling us that the NSA and its buddies are after our mail apparently hasn’t dissuaded people from using a provider they know is being tapped. Or at least, it hasn’t really increased the number of my contacts who avoid using Google for mail.

Finally, the figures from Mako and Hugo, taken alongside mine, suggest that your privacy against the large web companies (and against the spies who hoover up the data they store) largely depends on your environment. If you work in a place where lots of people rely on Google for mail service, your data will end up on the company’s servers. If, on the other hand, your employer and your friends rely on their own servers, or on smaller providers, you have a much better shot at protecting your privacy.

Taking control of your own systems is the easy bit. Persuading everyone around you to do the same is harder, but has a bigger impact.



We’re all Gmail users now

Is your privacy important to you? And if so, are you running your own mail server? Good. But if your concern is to keep Google’s tentacles out of your personal conversations, that’s not enough.

Benjamin Mako Hill published a fine project he undertook over the weekend. He wrote a bunch of scripts to check how much of the mail in his archives had gone through Google’s servers.

The answer: About 57% of the mails in his inbox had been delivered by Google. That’s still a conservative calculation, and it’s pretty depressing for someone who goes to the length he does to keep his data private.

Mako’s work inspired me to do the same. It was late and I was tired, so instead of futzing around with Python and R, I decided to simply use the tool I had available anyway, and rely on Mutt’s limit patterns. The archives I analysed go back to September 2009 – not quite as comprehensive as Mako’s own, but still significant.

I used a pretty simple limit pattern:

Limit to messages matching: ~h google.com

which translates to “show me all messages that have the string ‘google.com’ somewhere in the header”.
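If you’d rather script the count than page through Mutt, the same ~h-style match can be approximated with Python’s mailbox module. This is a sketch under the assumption that your archive is a single mbox file; “archive.mbox” is a made-up path:

```python
import mailbox

def matches_google(msg):
    # Mutt's "~h google.com" matches the string anywhere in the headers;
    # this checks every header name and value for the same string.
    return any("google.com" in f"{name}: {value}".lower()
               for name, value in msg.items())

def google_count(path):
    """Return (matching, total) message counts for the mbox at `path`."""
    total = matching = 0
    for msg in mailbox.mbox(path):
        total += 1
        if matches_google(msg):
            matching += 1
    return matching, total

if __name__ == "__main__":
    # Placeholder path; substitute your own archive.
    matching, total = google_count("archive.mbox")
    if total:
        print(f"{matching} of {total} messages "
              f"({100 * matching / total:.2f}%) mention google.com")
```

Like the limit pattern, this matches “google.com” anywhere in the headers, so it may over-count slightly (a References header pointing at a Google Groups thread would match, for instance).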

Out of 140,819 messages, 15,746 matched the pattern. That’s 11.18% – much lower than Mako’s share. Why is this?

Besides the fact that I run my own mail server, the reason is probably that most of my email concerns my work as FSFE’s president. I exchange a lot of mail with FSFE’s staff and volunteers, most of whom use @fsfe.org addresses. These addresses are just a redirect that people can point anywhere they like (hey, if you want one, you can become a Fellow, and support FSFE’s work!).

A few people use them to point to Gmail, but most apparently don’t. A lot of the people I routinely exchange mail with run their own mail server, or host their mail with a small provider. (I assure you that there aren’t a lot of Hotmail users in FSFE.)

The figures above don’t include most public mailing lists that I subscribe to. So I took a look at those, too. Here, I was expecting the share of mail that passed through Google’s servers to be higher. It turns out that the opposite is true: From January 2012 to today, I received 46,163 messages in this folder. Of these, 2,547 have the string “google.com” somewhere in their headers – that’s just 5.52%.

I’m happy to admit that I’m not entirely sure about the methodology. Feel free to criticise and suggest improvements in the comments!

The upshot is that yes, hosting your own server – or keeping your mail somewhere other than the big web service companies – is an important component in reducing your data exhaust. But the size of that reduction depends on the providers used by the people you usually talk to. Privacy, as Eben Moglen highlights, is an ecological issue.

What share of your mail goes through Google’s servers? Post your figures in the comments.

W3C: Who’s working on DRM in HTML5?

Our intern Michele Marrali just had a look at the companies who are participating in the W3C discussion about making DRM a part of the HTML5 standard – something that’s a horrible idea if you care about security and freedom. Here’s what he found:

Google – US
Netflix – US
Sony – JP
Adobe – US
Microsoft – US
Pierre Sandflow Consulting – US
Apple – US
NICTA – AU
Verimatrix – US
PacketVideo – US
Huawei – CN
Telecom ParisTech – FR (academic institution)
Irdeto – NL
Comcast (NBC, Universal, etc.) – US
Yandex – RU

So that means there’s at least one company from Europe involved in the discussion – right?

Oh, wait. Irdeto is a subsidiary of Naspers, a global publishing company based in South Africa.

That brings the number of European businesses involved in the discussion to zero. Zilch. Nada.

Interesting times: Speaking about Free Software in Istanbul

On March 29, I had the pleasure of giving a talk at the annual conference of the Turkish GNU/Linux Users Association in Istanbul, Turkey. This was a pretty interesting time to speak about freedom and technology.

Local elections were scheduled across the country for the following day. The government had blocked both YouTube and Twitter. This largely had the effect of teaching Turkish Internet users about VPN, DNS and Tor. Tor usage numbers in Turkey doubled during the week leading up to my talk.

In my talk, I discussed the relationship between technology and power. The same technology that we are using to liberate ourselves is being used by states and corporations to monitor and control us.

In a nutshell, the spies are largely doing what they were trained to do. It’s the politicians – and therefore, in democracies, us as citizens – who have failed to limit the spies’ invasion of our privacy.

In combination with what Bruce Schneier has appropriately labelled "the public-private surveillance partnership", this means that our privacy is under threat. Having privacy is essential to our ability to freely decide how we want to live our lives.

[Chart: Number of Tor users in Turkey doubles during Twitter block]

I concluded that in order to live in freedom, we need to recover ways of reading without being watched, and decentralise the locations where we store our data. We need to make laws that strengthen our privacy, and further develop the technical tools such as encryption that make those laws effective. These same tools will also give us cover while we undertake all these efforts.

(All this is something I’m planning to write up as an essay as soon as I get the time. Hah.)

The audience reaction was pretty strong, as you would expect from a young, educated, technically minded group where people are clearly fearing for their country’s future. Many of the people in the room had probably been protesting on Taksim square in 2013, and were acutely aware that that future depends on them.

One of the details from my talk even made it into the national press the next day. I had mentioned that Turkey is probably using a border control IT system that it received as a gift from the United States – on condition that Turkey shares the data on who enters and leaves the country with the US government. Seeing this plastered across the front page on election day was an interesting experience.

And finally, thanks to the very inspiring Nermin Canik for being a great host!

Comments on UK government’s consultation on document standards

The UK is currently inviting comments on the standards it should use for “sharing or collaborating with government documents”. Among other things, the government proposes to make ODF the sole standard for office-type documents.

FSFE has submitted its comments on this proposal, which we consider a very positive one. Just now, in the final hours of the process, Microsoft has submitted a lengthy comment, urging the government to include OOXML in its list of standards.

We have filed a short response to Microsoft’s submission. While it should appear on the consultation page shortly, I’m publishing it here right now.

If you, too, believe that the UK government should in future rely on Open Standards alone, please hurry up and file comments of your own.

The lengthy discussion Microsoft offers here essentially boils down to a single demand: That the UK government should in future rely on OOXML simply because it’s what Microsoft’s products support.

This claim is diametrically opposed to the significant efforts that the UK government has recently made to break free from vendor lock-in and stop the IT procurement gravy train, and to the progress it has made in this direction. Microsoft’s claim also ignores the extensive preparation that has gone into this proposal, and the thorough analysis of user needs which the government has conducted, and on which the present proposal is based.

Competition takes place on top of standards, not between them. OOXML fails the UK government’s Open Standards definition, in that it is clearly dependent on a single supplier: Microsoft itself.

Whenever a government breaks out of the status quo, and takes bold action to improve matters for the long term, it is easy to manufacture fear, uncertainty, and doubt. We would hope that Microsoft will instead embrace competition, and ensure that all its office products work well with ODF. The company could then rely on the strengths of its product portfolio, rather than on the lock-in strategies that have made it the target of competition regulators around the world.

We are confident that when assessing Microsoft’s response, the UK government will keep the question of “cui bono?” firmly in mind.

UK government sets “red lines” on wasteful IT contracts

While working on FSFE’s response to the UK government’s consultation on using Open Standards by default for government documents, I noticed something that I had apparently overlooked during the busy days ahead of FOSDEM. On Jan 24, the UK government published a few principles for future government IT contracts.
They’re quite clear, quite brief, and quite powerful:

  •     no IT contract will be allowed over £100 million in value, unless there is an exceptional reason to do so; smaller contracts mean competition from the widest possible range of suppliers
  •     companies with a contract for service provision will not be allowed to provide system integration in the same part of government
  •     there will be no automatic contract extensions; the government won’t extend existing contracts unless there is a compelling case
  •     new hosting contracts will not last for more than 2 years

Regarding the first one: £100 million may seem like quite a lot. On the other hand, the UK government apparently has several IT contracts worth over a billion pounds each, so this is a significant improvement.

If other governments – and especially the European Commission – followed this approach, that would mean a lot of progress.