Techno-activism: Why tools matter

Will technology make us freer? Cory Doctorow has a nice little article about the role of Free Software as a tool for social activism. In particular, he’s writing about why it’s important that the tools we use for activism should be free:

Herein lies the difference between a ‘‘technology activist’’ and ‘‘an activist who uses technology’’ – the former prioritizes tools that are safe for their users; the latter prioritizes tools that accomplish some activist goal. The trick for technology activists is to help activists who use technology to appreciate the hidden risks and help them find or make better tools. That is, to be pessimists and optimists: without expert collaboration, activists might put themselves at risk with poor technology choices; with collaboration, activists can use technology to outmaneuver autocrats, totalitarians, and thugs.

The text makes it pretty clear why Facebook is horribly ill-suited to social activism. It also reminds me of Malcolm Gladwell’s argument from October 2010 that the revolution will not be tweeted.

Art and fun at Mediamatic, Amsterdam

Yesterday, I hopped over to Amsterdam to speak at Mediamatic about Free Software and FSFE at one of their Ignite events. The format was interesting: A strict time limit of five minutes per speaker, with slides that auto-advance every 15 seconds.

The Mediamatic Bank is an art and exhibition space in central Amsterdam. Most or all of the other presentations were artists telling the audience about their own work. There were some projects that I liked a lot:

  • Miktor and Molf divide their projects into “good work” and “bad work”. Good work is when they get paid, bad work is when they don’t. They’re currently building an empty swimming pool in Amsterdam to skate in. Like a number of the other projects presented that evening, it’s crowdfunded.
  • Niels van Koevorden and Sabine Lubbe Bakker are making a movie about “The End of Belgium”. The country hasn’t had a proper government for about a year now. So these two are now setting off on a month-long tour in a converted army truck, interviewing people around the country and trying to work out what holds Belgium together (or not). Given that I live about 200 metres from the Belgian border, I’ll invite them to come round to my place and look at Belgium from the outside.
  • Journalist Mariette Hummel is kicking off her DigiMe project. For six months, she will try to document all the traces she is leaving in the digital world, whether actively (e.g. Tweets) or passively (e.g. mobile phone location records).

There was much more goodness, such as Alex Fischer carving trees around the world (my favourite was a tree with a power socket in it), Arthur de Vries turning photos of faceless mercenaries into statues, and Erik de Graaf with his graphic novel “Scherven” (Fragments) about a Dutch couple being torn apart in WWII.

The current Piece de Resistance exhibition at Mediamatic Bank (Vijzelstraat 68) looks very interesting, though I didn’t get time to really see it. In case you’re in the area on May 21, Mediamatic is hosting a fingerprint forgery workshop courtesy of the Chaos Computer Club, along with an RFID Zapper workshop. Should be a treat.

Bundeskartellamt: Free Software protects competition

(Article originally published on netzpolitik.org)
Yesterday, the Bundeskartellamt (Germany’s Federal Cartel Office) and the US Department of Justice approved the sale of 882 Novell patents. These software monopoly rights go to a group of companies called CPTN, which consists of Microsoft, Oracle, Apple and EMC. The good news: apparently under massive pressure from the Free Software world, the CPTN companies had to accept conditions intended to protect competition from Free Software. In their announcements, the authorities not only highlight the role of Free Software for competition; they also make clear that patent lawsuits can be just as anticompetitive as “Fear, Uncertainty and Doubt” strategies.

After the takeover: a new business model for Novell?
When suing your customers is good for business

At FSFE, we’re closely watching how the public sector goes about buying software. A lot of money changes hands here, so it’s worth paying attention, especially since a couple of studies have shown that public authorities frequently get the process wrong.

Fortunately, there are legal remedies available. If your company bids for a contract, and the offer is rejected, you can go to court if you feel treated unfairly. You can do the same if you think the call for tender is designed in a way that prevents your company from bidding for the contract in the first place.

There’s certainly a lot of unfairness out there. One of the most common mistakes that public bodies make is to use a brand name in a call for tender, such as saying “we want software from company X”. Under European rules (see Directive 2004/18/EC, Article 23(8)), that’s illegal unless there’s no other way to describe what you’re looking for. In software, there almost always is such a way. If you’re looking for a word processor, you should say so, rather than asking for “Microsoft Word” or similar – otherwise you’re excluding competitors from bidding. And that’s illegal.

So there are clear rules, and there are lots of public bodies breaking them. Yet when I ask companies that offer Free Software or related services why they don’t go to court more frequently to appeal against such unlawful calls for tender, they often tell me “we don’t want to annoy our potential customers”. They’re afraid that if they sue a public body (for example, the European Commission) over a bad procurement action, they’ll never get any business from the Commission again.

But is that fear justified?

Our intern Natalia was looking at the EC’s own procurement practices the other day. The Commission’s desktop computers run on Windows XP. The necessary licenses are sourced through a contract with Fujitsu-Siemens from 2008, with a total final value just short of 49 million Euro. When the EC awarded that contract, an unsuccessful bidder sued (Case T-121/08, pdf). The bidder’s name? PC-Ware. That sounded familiar, and indeed: It’s the very same company that won the 189 million Euro SACHA II contract from the European Commission in December 2010. That’s almost four times the total of the Fujitsu-Siemens contract.

Which goes to show that suing your customers can be, to paraphrase Neelie Kroes, a very smart business decision indeed.

UK finally moves on Open Standards

When it comes to Free Software and Open Standards, the UK has long lagged way behind other countries. There were a few policies that sounded good on paper, but that’s exactly where they stayed.

This may be finally changing. The UK Cabinet Office has issued a “procurement policy notice” (.pdf) that is, well, surprising. In a good way. It tells public bodies in the UK how they should go about buying software. It says the right things:

When purchasing software, ICT infrastructure, ICT security and other ICT goods and
services, Cabinet Office recommends that Government departments should wherever
possible deploy open standards in their procurement specifications.

and for the right reasons:

Government assets should be interoperable and open for re-use in order to maximise
return on investment, avoid technological lock-in, reduce operational risk in ICT
projects and provide responsive services for citizens and businesses.

But if you’ve followed the epic EIFv2 debate and its outcome, you’ll know that the key to it all is the definition of what an Open Standard is, exactly. This is where this little unassuming procurement notice really shines:

Government defines “open standards” as standards which:

  • result from and are maintained through an open, independent process;
  • are approved by a recognised specification or standardisation organisation, for example W3C or ISO or equivalent. […]
  • are thoroughly documented and publicly available at zero or low cost;
  • have intellectual property made irrevocably available on a royalty free basis;
    and
  • as a whole can be implemented and shared under different development approaches and on a number of platforms.

So, what does this amount to?

This is one of the stronger policies that we’ve seen from European governments. It certainly is a leap ahead for the UK, which until now has lagged behind many other European countries in terms of Free Software adoption in the public sector. We’d like to see similarly well-considered steps from more European governments.

The policy note is refreshingly clear on what constitutes an Open Standard. The requirement that patents which are included in Open Standards should be made available royalty-free is a welcome improvement over the fudged compromise in the new European Interoperability Framework. It’s good to see the UK government take leadership on this important issue, in its own interest and that of its citizens.

As the lamentable OOXML charade has shown, it’s important that standards are developed in a process that’s independent of any particular vendor, and open to all competitors and third parties. We commend the UK government for making this an explicit requirement. The definition of Open Standards could have been even further improved by demanding a reference implementation in Free Software.

Procurement based on this policy will bring the UK public sector strategic independence in its IT choices, freedom from vendor lock-in, and financial savings. It will also make it easier for UK citizens to communicate with their authorities using Free Software and Open Standards.

What next?

While this is an excellent document, we’re not quite there yet. The UK government has just opened a public consultation on Open Standards in government ICT that needs your input. Notably, the survey includes a section on what the definition of an Open Standard should be.

As always, the proof of the pudding will be in the eating. Implementation is what counts in the end, and the UK public sector has some credit left to earn in that respect. But this policy note takes the UK’s public sector one large step closer to software freedom, and into Europe’s fast lane of Free Software policy.

FOSDEM talk on “Power, Software, Freedom”

On the invitation of the GNU hackers, I spoke today in the GNU DevRoom at FOSDEM. The talk was on “Power, Software, Freedom — Why we need to divide and re-conquer our systems”. Here are the slides.

What makes a free service? If we do our computation on machines that we don’t control, how can we make sure we get the same freedoms that Free Software gives us on our own computers?

This is a discussion that urgently needs your brainpower and your coding magic. As a starting point, do you know a project in this area that should be added to this list?

Why we’re concerned about the sale of Novell’s patents

On November 22, 2010, Novell agreed to be bought by Attachmate. While this move wasn’t particularly controversial, a detail raised eyebrows. At the same time, Novell announced that it was selling 882 of its patents to a consortium made up of Microsoft, Oracle, Apple and EMC.

This is worrying. Given that Novell has been involved in Free Software development for a long time, it’s likely that a number of the company’s patents cover important Free Software technologies. In many markets such as operating systems, desktop productivity, or web servers, Free Software programs are the key competitors to Microsoft’s offerings.

Allowing a consortium of Microsoft, Oracle, Apple and EMC to acquire patents that are likely to read on key Free Software technologies would do huge damage to competition in the software market. This is yet another reason to continue FSFE’s ongoing work against software patents.

Microsoft has used patent lawsuits to stifle competition from Free Software (e.g. TomTom), and has long used unsubstantiated patent claims for a continued campaign of fear, uncertainty and doubt against Free Software. Oracle also has used its patents aggressively against Google.

CPTN might also decide to sell the patents on to third parties. These could be patent trolls (“non-practicing entities”), or members of the consortium itself. In September 2009, Microsoft sold 22 patents related to GNU/Linux during an auction where only non-practicing entities were invited.

All these cases would be bad for competition in the software market. Microsoft in particular would be holding a stash of patents that everybody believes to relate to Free Software. At the very least, this would make their FUD campaign much more powerful. The company could also move into patent litigation much more aggressively, suing competitors out of the market. (Both the legal costs and the potential damages in a patent lawsuit are so large that they represent a serious threat to any company that’s not really, really large.) Or CPTN could sell the patents to a patent troll and let that organisation do all the dirty work.

As a consequence, if the sale of Novell’s patents to CPTN is allowed to go ahead, this will significantly increase the legal threat level for Free Software.

This is why FSFE is extremely concerned about the sale of Novell’s patents to CPTN. We have shared our concerns with the German competition authorities on December 22, 2010.

CPTN apparently withdrew its filing with the German authorities on December 30. This could mean that the companies behind CPTN are changing their strategy, or that they’re merely reformulating their application. It definitely doesn’t mean that the danger is over.

The competition authorities should only allow this deal if there are effective measures in place to prevent the patents in question from being used against Free Software in an attempt to restrict competition. As an effective measure, CPTN Holdings should be required to make the patents in question available under conditions which allow their use in Free Software, including in programs distributed under GNU General Public License (GPL) and other copyleft licenses.

Assessing the new European Interoperability Framework

Yesterday, the European Commission finally published the new version of the European Interoperability Framework [pdf] [link updated]. We at FSFE have been working on this document for a long time. When it was published yesterday, we gave it a welcome despite some reservations.

Glyn Moody points out a number of weak spots in the new document. Actually, I’m concerned about many of the same points as he is. Still, I don’t agree with his judgement that EIFv2 is a “great defeat”. The document would certainly have been a lot worse without the hard work of FSFE and others. Even though it leaves some key issues open, it represents some progress.

Whether to welcome EIFv2 or not is a question of what you take as a baseline for comparison, and if you view the document isolated or in context. A lot will also depend on how the EIF is implemented.

But let’s take the issues in turn.

The baseline

What do we compare EIFv2 to? Compared to the original EIF from 2004, which we’ve now uploaded to FSFE’s website for reference, it’s certainly a letdown. However, compared to the drafts we had to raise the alarm about during the process that created EIFv2, it’s quite a lot better. A year ago we saw a draft that said

“it is also true that interoperability can be obtained without openness, for example via homogeneity of the ICT systems”

and many other nonsensical formulations that would have left the document entirely meaningless.

So the content of EIFv2 is not as strong as that of EIFv1. There are also various loopholes (or rather barn doors) that would allow governments and public bodies to pretty much ignore the EIF, if this document were standing on its own.

EIFv2 in context

That’s where it becomes important to view the EIF versions in context.

EIFv1, good as it was, was just a recommendation by an expert group somewhere in the innards of the Commission. It didn’t have any official status. That’s why it was possible for this document to contain such strong statements.

EIFv2, on the other hand, is an official communication by the Commission. That makes it a binding policy document, rather than something that member states and the EC itself can ignore at their leisure. That status also explains why there was so much fighting about it, both outside and within the Commission.

Then there’s the fact that EIFv2 needs to be read together with other documents. There’s the eGovernment Action Plan [pdf], which defines some concrete actions that the EC and member states will take by certain deadlines. The plan says that “Member States are fully committed to the political priorities of the Malmö Declaration” (p. 15).

This declaration from 2009 makes it a political priority for EU members to

“[p]ay particular attention to the benefits resulting from the use of open specifications”, and “ensure that open specifications are promoted in our national interoperability frameworks in order to lower barriers to the market” (paragraph 21). It goes on to state that “the Open Source model could be promoted for use in eGovernment projects”

and that

“it is important to create a level playing field where competition can take place in order to ensure best value for money.”

The eGovernment Action plan also states that by 2013, member states

“should have aligned their national interoperability framework to the EIF” (p. 13).

So now member states actually have to do something. That wasn’t the case with EIFv1.

Open Standards, FRAND, and implementation

Then there’s the issue of what, exactly, the EIF says an Open Standard is. The definition in EIFv1 was pretty good. The definition in EIFv2 is less good, but again it’s better than it was in the intermediate drafts, where it sometimes was simply missing.

Most importantly, EIFv2 clarifies that Open Standards (or “open specifications”, as the EC has decided to name them) must be implementable in Free Software, while explicitly allowing for FRAND licensing — as long as it’s compatible with Free Software.

The question is what, exactly, the Commission and the member states will do with this clause. As Glyn Moody argues, they can drink the BSA’s Kool-Aid and use standards that are impossible to implement under copyleft licenses like the GPL, due to their attached FRAND conditions.

They can also take this to mean that even where a standard comes with FRAND conditions, it must be implementable in copyleft Free Software. There may be some ways to achieve this. Some people argue that royalty-free is simply FRAND with zero licensing fees. That’s a pretty roundabout way of saying “royalty-free”, but it could work here. Or one could discard running royalties (i.e. patent licensing fees paid for every copy that’s made of a program) in favour of a one-time payment.

The EIF’s formulation here is not a nicely balanced compromise. It’s a smoking crater in the middle of a battlefield where many more shots will yet be fired. We will observe very closely and take the appropriate action.

Conclusion

So what we have now is a strategy statement, without the level of detail that made EIFv1 such a useful document. But this strategy generally goes in the right direction, and it’s much more powerful than before, thanks to its official status.

I’m guessing that the change we’ll see across Europe will be slow, but that it will be continuous and very broad. EIFv1 provided a rallying point for those member states and public bodies that were interested in Free Software and Open Standards. EIFv2 is a general push for everyone to use more Open Standards, even though it contains generous get-out clauses.

On the whole, we welcome EIFv2. It’s not everything we wished for, but it’s far better than we feared. We’ll watch its implementation very carefully, and will nudge it along where necessary.

Novell: After sale, new business model?

After the sale to Attachmate, has Novell changed its business model?

Spotted near Puerta de Sol in Madrid, Spain. Dec 3, 2010.

WIPO CDIP/6: Moving the glacier

Progress sometimes comes very gently. Last week’s session of the WIPO committee in charge of implementing the Development Agenda (CDIP) was a case in point.

As in previous sessions, a lot of the discussion still revolved around procedural issues. Member states are battling over the question of how much power the committee should have, and again failed to agree during this round of negotiations.

Yet there was also some substantial work done, with a few new studies commissioned and progress in some studies that are already underway. Believe it or not, the mere fact that these studies exist is already progress. Where WIPO used to single-mindedly drive the rightsholder perspective, the organisation is now making an effort to look outward and get an idea of the reality it is legislating upon.

That’s not to say that those studies are perfect. During last week’s meeting, member states adopted a project to study “open collaborative projects”. The proposal focuses on activities that are essentially just R&D outsourcing, all but ignoring Free Software. We pointed this out in a written submission to the committee and highlighted the issue with several member states, who took a keen interest. Unfortunately, our comments and suggestions eventually took a back seat to the procedural wrangling that still dominated much of the sessions.

In conversations with some WIPO staff, there was a palpable interest in making innovative approaches to knowledge management, such as Free Software, a greater part of WIPO’s regular activities. There are a number of people who realise that to stay relevant and truly serve its member states, the organisation will have to learn some new tricks.

While WIPO has some very good in-house expertise on the traditional way of handling these monopolies, they will need advice on more recent ideas and practices. We’ll definitely be there to help, and seize the opportunity to move WIPO a bit closer to becoming the World Intellectual Wealth Organisation we described years ago.

A study (see p.79) on “Intellectual Property, Information and Communication Technologies (ICTs), the Digital Divide and Access to Knowledge” that’s currently underway may turn out to be interesting, though results aren’t due before early 2012.

Yet WIPO clearly still has some way to go. A seminar organised as part of a project on “IP and competition policy” at the end of October not only lacked the perspective of users and consumers. WIPO had also invited none other than Microsoft to present the industry perspective on the interactions of copyright, patents and competition policy. While I won’t deny that the company’s convictions for anticompetitive behaviour in high-profile lawsuits on two continents have given it some relevant experience, inviting them as experts on competition policy is like asking Jack the Ripper for advice on public safety. We pointed this out to member states and the secretariat, and will make sure that the seminar’s results are taken with a rather large pinch of salt.

Overall, this was a good week at WIPO, where the road is always long and progress happens at a glacial pace. Sure, we’re impatient to see more movement on the organisation’s part. But this is a big wheel, and it turns slowly. Yet some of the open, friendly and constructive conversations we had this week would plainly not have happened a few years ago. We’ll stay on the case.