Tonnerre Lombard

FFII’s coordinator for Switzerland

EU commission takes another shot at software patents

May 13th, 2009

After its failure to introduce software patents in Europe directly through two directives, then through the community patent, and finally through the «European Patent Litigation Agreement» (EPLA), the European Commission has come up with a new way to legalize software patents: the «United Patent Litigation System» (UPLS).

The proposal closely resembles the EPLA, except that the highest instance is moved to a specialized patent court. Instead of judges, this court is staffed by «patent judges» who, just as under the EPLA, hold no legal degree and are trained only by the European Patent Office. The European Court of Justice (ECJ) plays no role in this system and has no right to review the decisions of the patent court.

This is another attempt by the participants of the patent system to move all control over patents and their applicability into their own hands.

Why software should not be patentable

The big problem with software patents is the question of invested effort. The whole debate about software patents usually revolves around the question whether or not copyright is sufficient protection for software. In my opinion it is, which can be shown very easily:

  1. First you have an idea. This costs you nothing.
  2. Then you sit down and invest work in an implementation of your idea. This implementation is fully covered by copyright, and is your first real investment into the idea.

Surely, anybody could look at your product and clone it, but that requires the cloner to start at step 2 and re-do your entire investment in implementing the idea. Thus, this person gains no competitive advantage from taking your idea. The investment that software patents protect is essentially zero. This is a large difference from developing, say, a machine, where a lot of material is usually invested into prototypes.

At the same time, the impact is anything but zero: software patents would forbid a competitor from implementing his own variant of your idea. The idea is essentially monopolized, and the cost is carried by the community.


Federal government grants 42 million franc contract to Microsoft — without tender

May 6th, 2009

The Swiss federal government announced in the Swiss Official Gazette of Commerce (SHAB) that it has granted a maintenance contract worth CHF 42 million to Microsoft — however, without a prior tender. The monopolist was apparently granted the contract to the exclusion of any potential competition.

The Federal Office of Construction and Logistics (BBL) apparently signed the maintenance contract covering Windows and Office licenses, SharePoint et cetera as early as February. A tender was never held, so competitors never had a chance to demonstrate their own products. This, however, clearly violates the official regulations for the acquisition of resources. A spokesperson of the Open Source association /ch/open announced that the decision would be contested before the Federal Court, which, incidentally, is itself a known user of the suite.

In a television interview on the popular Swiss talk show «10vor10», the responsible official defended the decision with the rather bogus words: «We cannot be expected to migrate everything to Open Source software overnight.»

In the meantime, the decision has caused a considerable press echo. Not only have IT news outlets such as ProLinux, Inside-IT and IT Reseller Online published articles detailing the deal; there were also articles in the Neue Zürcher Zeitung (NZZ), 20 Minuten (print version only) and Infoweek, as well as the aforementioned episode of the popular talk show «10vor10».

Not to be outdone, some parliamentarians announced shortly after the SHAB article that they have created the «working group digital sustainability», which is pushing for more use of Open Source software in the federal government. Enough precedents exist already, with the canton of Solothurn using the Linux operating system on the desktop and other cantons introducing a variety of Open Source tools. But surely, it won’t happen overnight.

An attempt at forbidding «hacker tools» in Switzerland

May 6th, 2009

The Federal Department of Justice and Police recently proposed legislation outlawing so-called «hacker tools» in Switzerland as well. However, the proposed paragraph deviates massively from the European cybercrime convention it attempts to implement. Consequently, the legislation would outlaw not only «hacker tools» usable solely by evildoers breaking into other people’s machines without permission, but in fact any type of tool used to test or ensure system security (such as Nessus, Metasploit, or even simple administrative tools used for network debugging, such as tcpdump, snoop or Wireshark).

The currently proposed version introduces an article simply stating that «Whoever publishes programs or other data, or makes them available, despite having to assume that they will be used for any purpose mentioned in article 1 [i.e. breaking into systems], shall be punished with imprisonment of up to three years or with a fine.» This article appears to rest on the false assumption that software which can be used to break into systems is evil per se, and that no dual use exists. However, with the possible exception of combined attack-and-spam software (e.g. botnet software), every system and network security tool is essentially a dual-use tool. This lies in the very nature of network security: IT security companies are basically hackers who are paid to break into their customers’ systems in order to discover and verify existing security problems. Obviously, a tool used in such a so-called «penetration test» could be used in exactly the same way without the target’s prior consent. An IT security tool cannot determine whether the target has consented; the difference is purely administrative.

Moreover, for companies such as Internet service providers, network traffic monitoring tools are a crucial element in diagnosing connectivity problems. Of course, the same tools could be used to read passwords transmitted over the line, making them usable in a «hacking» attack. Without network traffic monitoring tools, however, debugging network problems becomes an insurmountable task for network administrators.

The current proposal must thus be considered wholly inappropriate, and it will need a complete makeover. In order to convince the Federal Council and the EJPD of this, everybody is invited to submit a response to the EJPD’s currently running consultation on the proposal.


German petition against Internet censorship attracts attention

May 6th, 2009

A petition against Internet censorship launched on the petition web site of the German parliament has recently gained a lot of attention, and consequently, a lot of signatures.

The subject of the petition is a proposal of the German federal police which aims to introduce an infrastructure through which the government can block arbitrary sites at all ISPs in Germany. The basic idea is that if cases of child pornography or similar are brought to the attention of the federal police, the sites are added to a blacklist. This blacklist is then distributed to all ISPs in Germany, which consequently have to redirect users to a server of the federal government by means of DNS spoofing. This server then records the IP address of every person visiting the site as a suspected consumer of pornographic material involving minors.
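The blocking scheme just described can be sketched in a few lines. This is a purely illustrative model, not any real ISP software; all host names and addresses below are made up (RFC 5737 documentation ranges):

```python
# Illustrative model of the proposed DNS-based blocking: the ISP's
# resolver consults a blacklist and, for listed domains, answers with
# the address of a government logging server instead of the real one.

BLACKLIST = {"blocked.example"}      # hypothetical federal blacklist
LOGGING_SERVER = "192.0.2.1"         # hypothetical government server
REAL_RECORDS = {
    "blocked.example": "203.0.113.5",
    "innocent.example": "203.0.113.6",
}

def resolve(hostname: str) -> str:
    """Return the (possibly spoofed) A record for a hostname."""
    if hostname in BLACKLIST:
        # The resolver lies: the visitor lands on the logging server,
        # which records their IP address as a suspect.
        return LOGGING_SERVER
    return REAL_RECORDS[hostname]

print(resolve("blocked.example"))    # spoofed answer
print(resolve("innocent.example"))   # genuine answer
```

Note that the lie happens entirely at the resolver; the real site and its records remain untouched, which is exactly why the measure is so easy to sidestep.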

Ineffective measures

The Chaos Computer Club, as well as many other organizations and computer magazines such as c’t, have already protested against the proposal, calling it ineffective — which is indeed the case. Any potential consumer of child pornography can simply run their own name server, or use one hosted by a friend or located outside Germany, thus escaping the measure entirely. Moreover, the material itself remains on the Internet for everybody outside Germany to see. The only effective way to stop the abuse of the children in question would be to ask the content provider, i.e. the company providing hosting or housing to the web site owner, to take the site down. Experience shows that in the vast majority of cases, this happens immediately.

Moreover, the proposal simply will not work, for a very simple reason. What the German government wants to impose here is plain DNS spoofing, just like the DNS spoofing attack presented by Dan Kaminsky. Since susceptibility to DNS spoofing is a serious security issue, countermeasures have been devised and are by now built into the major DNS servers and clients. The principle, known as DNSSEC, is a simple public key infrastructure by means of which every DNS zone owner (i.e. every person hosting host name records for a domain) digitally signs their zone using a so-called zone key. The public part of this key is published through a cryptographically secured chain of trust, from which clients can subsequently retrieve it. If the DNS Security Extensions are detected on a domain, the client host will request the public key and verify the signature of the queried data.
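The signature principle behind DNSSEC can be illustrated with a toy model. This is textbook RSA built from two well-known Mersenne primes, purely for illustration; real DNSSEC uses DNSKEY and RRSIG records with far larger keys and a different signing scheme:

```python
# Toy sketch of the DNSSEC idea: the zone owner signs a record with a
# private zone key; any client can verify it against the public key.
# A spoofed record fails verification.
import hashlib

p, q = 2**61 - 1, 2**31 - 1           # two well-known Mersenne primes
n, e = p * q, 65537                   # public key (published by the zone)
d = pow(e, -1, (p - 1) * (q - 1))     # private zone key

def digest(record: str) -> int:
    return int.from_bytes(hashlib.sha256(record.encode()).digest(), "big") % n

def sign(record: str) -> int:
    return pow(digest(record), d, n)  # zone owner signs the record

def verify(record: str, signature: int) -> bool:
    return pow(signature, e, n) == digest(record)  # any client can check

record = "www.example. IN A 203.0.113.5"
sig = sign(record)
assert verify(record, sig)                 # genuine record validates
spoofed = "www.example. IN A 192.0.2.1"    # answer redirected by an ISP
assert not verify(spoofed, sig)            # the spoofed answer is rejected
```

Since the federal police do not hold the private zone key, any record they substitute produces exactly the verification failure shown for the spoofed answer.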

Since there is no way the federal police could forge such a signature, the modified DNS data would be noticed immediately and cause an error to be displayed to the user. Not only does this ruin the use case of finding people who visit child pornography sites; it also potentially affects other data in the same zone, with serious consequences for the end user experience.

Creating terrorists

Another case to be made against these measures is that they enable an arbitrary attacker to manufacture suspects. The procedure is very easy to implement, hard to notice, and usable by any random home page owner: simply include a small iframe or image on one’s home page which points at a server on the child pornography block list. This puts every visitor of the web site onto the list of suspected consumers of child pornographic material.

If this appears too conspicuous, one can use a server-side include or CGI script which embeds the iframe or image only every once in a while. This makes the mechanism very hard to detect.
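The occasional-include trick just described could be sketched like this. The URLs are hypothetical, and the 1% rate is an arbitrary choice for illustration:

```python
# Sketch of a page that only occasionally embeds a reference to a
# blacklisted site, making the trap hard to spot for an observer.
import random

TRAP = '<img src="http://blocked.example/pixel.png" width="1" height="1">'

def render_page(rng: random.Random) -> str:
    """Render a page that embeds the trap for roughly 1% of visitors."""
    body = "<html><body><p>Perfectly ordinary page.</p>"
    if rng.random() < 0.01:
        body += TRAP
    return body + "</body></html>"

rng = random.Random(42)
pages = [render_page(rng) for _ in range(1000)]
trapped = sum(TRAP in page for page in pages)
print(trapped, "of 1000 visitors were silently flagged")
```

Only a small fraction of visitors ever see the tainted markup, so casual inspection of the page reveals nothing, yet the government list steadily fills with innocent people.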

Another method would be to submit a URL pointing at such a site to a banner exchange facility. This would mark a small fraction of the visitors of every web site participating in the banner exchange as suspected consumers of child pornographic material.

In summary, the mechanisms are very easy to circumvent and carry a massive inherent potential for abuse. (The government could, for example, automatically block the web sites of political activists, and nobody would be able to tell.) The fact that the governmental agencies have threatened to sue everybody who receives, owns or publishes a copy of the list does not exactly help establish trust that the list will not be abused for somebody’s agenda.


If you want to help fight this, here are some links:

New «OSS Jam» with a lecture from my part

May 2nd, 2009

On May 7th, 2009, a new OSS Jam is going to take place at the Google Zurich office. While this seems like nothing unusual, as OSS Jams tend to take place about once per month, it is slightly special for me: I’m going to give a small lecture there.

Monitoring Systems lecture

The topic is going to be monitoring systems, as Nagios, the most popular monitoring system, recently added a PHP dependency for its web interface. A monitoring server is supposed to be a hardened setup: it needs to work reliably rather than send out SPAM into the wide world. For everyone even half a bit into security matters, Nagios has thus just turned into a no-go.

The Hobbit Monitor provides a nice alternative to Nagios: its notification system is far more configurable, its plain-text checks are easy to define, and it performs a great deal better than Nagios. Unfortunately, it lacks some features which might be necessary for larger setups. Also, better performance doesn’t mean good performance; with a few thousand hosts, the Hobbit Monitor also puts a rather large load on the monitoring server.

Finally, as always, the conclusion tends towards a Kästnerian «Do it yourself». Thus, I’m introducing a new monitoring system which I’m going to develop in the near future: written entirely in C, with a nice templating system and decent performance.

Binary patches

As I’m currently implementing a binary patch management system for the NetBSD Foundation, I’m also going to talk a bit about the requirements of binary patch systems and how my system meets them. Since I only have a working prototype with basic functionality so far, people are welcome to join this effort.

Petition against biometric passport a success

October 19th, 2008

The Federal Chancellery recently announced that the freedom campaign against biometric passports and ID cards was a success. The Freedom Campaign itself has a press release on the subject.

Out of 64’064 collected signatures, 63’733 were considered valid. This greatly exceeds the 55’000 signatures submitted by the Freedom Campaign itself; apparently, signatures were also submitted by other campaigns.

Congratulations on such a great success! But the petition is only the first step. Should a referendum take place, it must be won; and should it be won, alternative legislation should be proposed so that we don’t end up in the same situation again in a couple of years.


Debian OpenSSH key weakness FAQ

May 16th, 2008

A lot of confusion has arisen around the OpenSSL insecure PRNG vulnerability in Debian and related systems. This is an attempt to clear things up.

Which distributions were affected?

All distributions which pulled their OpenSSL changes directly from Debian. Those are namely:

Debian Etch and Lenny; Ubuntu, Kubuntu, Xubuntu and related distributions; grml; Knoppix and all living customizations; and Univention UCS 2.0. Other Linux distributions may also be affected.

Known not to be affected are: Fedora, Debian Sarge, NetBSD, OpenBSD, FreeBSD, DragonFlyBSD, MirBSD, Gentoo Linux, Univention UCS 1.x, Red Hat Enterprise Linux, OpenSuSE, SuSE Linux Enterprise, CentOS, pfSense, m0n0wall, Sun Solaris 10 and prior and OpenSolaris.

What exactly is the problem?

Due to a slightly misguided valgrind warning patch, the only “random” element used by Debian in key generation and other random number generation processes was the process ID. Since typical process IDs under Linux range from 0 to 65’535, only 65’536 different keys could be generated by the OpenSSL toolchain, which also covers SSH.

This specifically means that an attacker needs only 65’536 attempts to brute-force a key generated by any Debian tool during this period. The impact depends on the usage of the key: for SSH user keys, an attacker can impersonate the affected user and log in as that user on any system where the key appears in an authorized_keys file. For keys used for certification and encryption, such as SSH host keys and SSL certificates, an attacker can impersonate the affected SSH or web server, and can potentially read current and recorded sessions, depending on the procedure used for session key establishment.
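The 65’536-attempt brute force can be illustrated with a toy model. The key derivation below is a deterministic stand-in, not OpenSSL’s actual algorithm; the point is only that a PRNG seeded solely by the PID can be exhausted trivially:

```python
# Sketch of why the flaw reduces the key space to 65'536: if the only
# entropy is the process ID, an attacker simply re-runs the same
# deterministic generation for every possible PID.
import hashlib

def generate_key(pid: int) -> bytes:
    # Deterministic stand-in for "PRNG seeded only with the PID".
    return hashlib.sha256(pid.to_bytes(2, "big")).digest()

# The victim generated a key with some PID unknown to the attacker...
victim_key = generate_key(31337)

# ...and the attacker recovers it by trying all 65'536 possibilities.
recovered_pid = next(pid for pid in range(65536)
                     if generate_key(pid) == victim_key)
assert recovered_pid == 31337
```

With a properly seeded PRNG the search space would be the full key space (e.g. 2^128 or more), which is what makes this reduction so catastrophic.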

How can I figure out if my key was affected?

Debian and Ubuntu have released key analysis tools which scan for the patterns of the vulnerable keys, connecting to named hosts and looking through users’ home directories for authorized_keys files containing such patterns. An updated version of OpenSSH for Debian and Ubuntu now also ships with a tool to automatically discover and refuse the vulnerable keys.

My key is affected – what should I do?

The first step is of course to immediately update the affected packages if you use a Debian-derived system. Then generate new SSH keys and replace your old keys on all systems where they are installed. Replace them as well on the servers of that nasty customer who left for the competition — imagine what would happen if he found out that you left a vulnerable SSH key on his host and that his host was compromised through your negligence.

All affected OpenSSL certificates should also be revoked immediately. Generate new certificates and have them signed and re-issued through your CA. Commercial CAs should let you reissue a certificate with the same Subject until the end of the certification period you paid for. Please note that revocation is a critical step here; otherwise, someone might still use your old, and after all still valid, certificate to impersonate you.

Then make sure your infrastructure has not been taken over by botnets through an insecure SSH key. Check for rootkits as well while you’re at it. If your log host is affected: tough luck.

How urgent is this? Will I have to act immediately?

Yes, this item requires your immediate attention, as there are already botnets out there searching for accounts with vulnerable SSH keys. The question is not “does anyone care about me, a little Internet user?” — these bots are out to compromise hosts and to send SPAM and malware to other hosts. They don’t care whether you are an attractive target; they attack anything they can find and try to send SPAM with it.

I have put my securely generated private SSH user key onto a Debian system. Should I replace it?

Yes. On a Debian system, your private key was not safe during the last two years. The system may have been compromised during that time, or someone may merely have been eavesdropping on your communication and thereby gained knowledge of your SSH key. You should definitely consider it compromised.

I have put my securely generated public SSH user key onto a Debian system. Should I replace it?

This depends. If your key is an RSA key, it is not compromised simply by putting the public key onto a server and authenticating against it. In the SSH 2.0 protocol, as described in RFCs 4252 and 4253, part of the token signed as a challenge by the user is the “session identifier”, a hash over the key exchange. This effectively prevents replay attacks against authentications performed with a non-vulnerable SSH key, because the random material used as the challenge is controlled not only by the vulnerable SSH host, but also by the non-vulnerable client. Thus, the data your SSH key signs as a challenge is not at the mercy of the server’s weak PRNG, and cannot compromise your key.
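The role of the session identifier can be sketched as follows. This is a simplification of the RFC 4252/4253 construction, not the actual exchange-hash computation or wire format:

```python
# Sketch: the challenge the client signs is a hash over the key
# exchange, which mixes randomness from BOTH sides. A server with a
# weak PRNG therefore cannot force a challenge of its choosing.
import hashlib

def session_identifier(client_random: bytes, server_random: bytes) -> bytes:
    """Simplified stand-in for the SSH exchange hash."""
    return hashlib.sha256(client_random + server_random).digest()

weak_server_random = b"\x00" * 16   # everything the weak host controls
sid1 = session_identifier(b"client nonce 1", weak_server_random)
sid2 = session_identifier(b"client nonce 2", weak_server_random)
assert sid1 != sid2   # the client's randomness still varies the challenge
```

Even with the server contributing a constant, predictable value, each connection from a healthy client signs a fresh, unpredictable challenge, which is why RSA user keys survive the server-side flaw.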

This is, however, not true for DSA keys. DSA has a weakness when used in the Diffie-Hellman key exchange process, rendering it basically ineffective: if an attacker gets hold of the random nonce used by the Debian SSH server in the key exchange process, it can be used to calculate the private DSA key from the public key and a captured signature with a complexity of 2^16, i.e. 65’536 attempts.
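The known-nonce attack on DSA can be demonstrated with toy parameters. The numbers below are deliberately tiny and purely illustrative; real DSA uses at least a 160-bit q, and the 2^16 complexity mentioned above comes from enumerating the possible nonces of the weak PRNG:

```python
# Toy DSA illustrating the attack: if the nonce k used in a signature
# is known, the private key x falls out of the signature equation
#     s = k^{-1} (H(m) + x*r) mod q
#  =>  x = (s*k - H(m)) * r^{-1} mod q

p, q, g = 607, 101, 64   # toy DSA group: q divides p-1, g has order q
x = 57                   # private key (what the attacker wants)
y = pow(g, x, p)         # public key

h_m = 77                 # hash of the signed message, reduced mod q
k = 29                   # nonce -- the weak PRNG makes this guessable

# Signing (what the vulnerable host does):
r = pow(g, k, p) % q
s = (pow(k, -1, q) * (h_m + x * r)) % q

# Attack: with (r, s), h_m and the known nonce k, recover the key.
x_recovered = ((s * k - h_m) * pow(r, -1, q)) % q
assert x_recovered == x
print(x_recovered)  # → 57
```

The same algebra applies at real key sizes: the modular arithmetic is instant, so the only work left for the attacker is guessing the nonce, which the weak PRNG reduces to a 65’536-entry search.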



  • Change any key pair generated using an affected version of the pseudo-random number generator. This applies both to the user and host SSH keys, and is of course also valid for certificates.
  • If you have used a DSA key or certificate on a host affected by the vulnerability, it must be regenerated.
  • Assume that all data read from and written to a vulnerable machine may have been intercepted and/or tampered with, as if no crypto layer had been applied in the first place.
  • RSA keys used to authenticate to vulnerable hosts are secure.


Special thanks for this goes to Steven M. Bellovin, who took the time to go through an analysis of this entire process with me and to clear up my misunderstandings about the OpenSSH challenge-response procedure.

(Original source)

Blind trust in valgrind – the Debian OpenSSL vulnerability

May 13th, 2008

The big run on valgrind back in 2005 and 2006 has claimed its first prominent victim: the OpenSSL implementation shipped with Debian.

Back in May 2006, one of the Debian developers ran valgrind on OpenSSL in an attempt to make it more secure. Among the findings of valgrind was an uninitialized buffer named buf in the ssleay_rand_add function in openssl/crypto/rand/md_rand.c. The developer simply commented out the MD_Update call which added that data to the pool, in order to fix the presumed flaw.

This blind patch was not exactly the correct thing to do: the data contained in buf was precisely the data meant to seed the random pool, and it was now no longer being added.
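The effect of the removed MD_Update call can be modelled in a few lines. This is a sketch of the principle only, not OpenSSL’s actual md_rand.c logic:

```python
# Model of an entropy pool that hashes whatever is mixed into it. With
# the seeding step "commented out", two runs that differ only in their
# seed buffer produce identical pool states: the PID becomes the sole
# source of variation.
import hashlib

def pool_state(seed_buf: bytes, pid: int, mix_seed: bool) -> bytes:
    h = hashlib.sha256()
    if mix_seed:                  # the MD_Update(&m, buf, j) call...
        h.update(seed_buf)        # ...that the patch commented out
    h.update(pid.to_bytes(2, "big"))
    return h.digest()

a = pool_state(b"strong OS entropy A", pid=1234, mix_seed=False)
b = pool_state(b"strong OS entropy B", pid=1234, mix_seed=False)
assert a == b     # patched: the OS entropy makes no difference at all

c = pool_state(b"strong OS entropy A", pid=1234, mix_seed=True)
d = pool_state(b"strong OS entropy B", pid=1234, mix_seed=True)
assert c != d     # unpatched: the OS entropy actually counts
```

This is exactly why the flaw stayed invisible for so long: the pool still produced nicely random-looking output, it just no longer depended on anything except the process ID.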

The OpenSSL team apparently played its part in this game, too. The Debian developer sent the patch upstream, and the OpenSSL team approved it for debugging purposes. This was apparently misunderstood by the Debian developer, who then committed the now-defunct MD-based PRNG into the Debian codebase.

According to the audit trail of the corresponding Debian bug, the Debian SSL team approved the patch and released a “fixed” package in May 2006.

The impact

As soon as the new OpenSSL release was deployed, Debian users were creating keys with a message digest as pseudo-random number generator and hardly any modifications to the random pool. As a short explanation for non-cryptographers: it was not really random.

The Debian Security team then discovered certain patterns which would emerge magically in most of their SSH and SSL keys, as well as in keys from all other products based on OpenSSL. After several days, if not weeks, of analysis, the culprit was tracked down to precisely that valgrind-triggered change.

The effect of this could be observed over the past couple of days by close followers of the Debian community. All of a sudden, the web certificates changed, all authorized_keys files were removed from the project servers, and some SSH host keys changed, even though none of them had expired. This confused the Debian community considerably and was perceived as the sign of a large security incident immediately ahead.

With the release of the Debian Security Advisory today, this expectation was finally fulfilled, and the incident was indeed a major one: users were asked to regenerate all cryptographic keys generated with OpenSSL since May 2006. A script was released to detect and warn about common patterns(!) in the various key files.

Lessons learned

There are certainly various lessons to be learned from this, both on the cryptographic, the programming and the practical side.

  1. Don’t blindly trust valgrind’s output.
    This has been repeated over and over again: if valgrind finds a presumed flaw in your code, that does not necessarily mean it really is a flaw. It must be investigated very thoroughly by the programmer, and not patched away lightly just because it’s there.
  2. Cryptography may be counter intuitive to a programmer.
    I personally can’t stop repeating this. What might appear as a runtime optimization to a programmer can indeed be a timing based information disclosure on the cryptographic level, and what might look like an uninitialized variable might actually not want to be zeroed out.
    This is also an argument I keep repeating against GnuTLS. Cryptography is not something which can be handled just like that by any good programmer: one needs at least a diploma in mathematics and programming, and has to be a very focused computer geek and a close follower of the cryptographic community, to even be able to touch cryptographic products successfully. This is the reason why I have major concerns about the GNU community rewriting an SSL implementation from scratch just because they do not like the OpenSSL license.
  3. A diversification of infrastructures may be useful at times.
    This might be a bit counter-intuitive to those who followed the argument from the last paragraph, but the sole reason why the chain of trust did not break for the Debian team was that besides their compromised OpenSSL PKI, they also had a working, trusted and distributed GnuPG PKI. Thus, even though all OpenSSL keys were compromised, the GnuPG keys could still be used to verify the origin of various security credentials and to confirm that the new key material et cetera did indeed originate from the Debian project.

That said, I would like to proudly add that neither the NetBSD base nor the pkgsrc version of OpenSSL are affected by this bug.

Audit trail

  • 22:20: Added more precise information on what keys and certificates changed
  • 23:25: Added reference to what exactly happened to get the patch approved

(Original source)

OSS Jam Reloaded in Zurich

February 17th, 2008

After the success of the first OSS Jam, Google invites people to a second round in its new Zurich office on February 28th, 2008. Once again, participants are invited to present their projects in a short five-minute time frame, hoping to find future project contributors.

This time, a topic has been set for submissions: «The desktop in the past, present and future». People seeking participants for their desktop projects are invited to present them at the jam.

OSS Jam and Google

The question is of course how this fits into Google’s search engine business. To this question, there are two different, unrelated answers.

Firstly, Google extended its scope beyond search engines quite some time ago. Just as Yahoo delivered widgets for web applications, Google delivered the Google Web Toolkit for Java web applications and similar products, and is expanding its scope towards Open Source and the community.

Secondly, Google’s OSS Jam provides the Open Source community with ways to find new participants for their projects, and as such, there is a philosophical relation to the search business.

Either way, it is going to be interesting to watch the future development of this tradition.

For more information, please visit Google’s Open Source Jam information site.

European Union and the Lisbon Treaty: the birth of a new country

December 17th, 2007

On December 13th, 2007, exactly 26 years after Poland declared martial law in 1981 in order to regain control over its opposition, the member states of the European Union signed a treaty which has become known as the Lisbon Treaty of 2007. This treaty practically establishes the European Union as a state in its own right, along with a new constitution.

Most of the flaws which were pointed out in the EU Constitution are also present in the Lisbon Treaty and have still not been addressed. As an example, the Lisbon Treaty contains provisions under which the EU may go to war while individual member states may merely «constructively abstain», thus being practically incapable of preventing the war.

The Brussels Journal has an analysis of the treaty by Professor Anthony Coughlan which enumerates ten major changes it makes (while surely going too far in the Eurosceptic direction by suggesting that the harmonization effort is wrong in itself).

  1. It establishes a legally new European Union in the constitutional form of a supranational European State.
  2. It empowers this new European Union to act as a State vis-a-vis other States and its own citizens.
  3. It makes all citizens of European member states also citizens of this new European Union.
  4. The same name «European Union» will be kept while the Lisbon Treaty changes fundamentally the legal and constitutional nature of the Union.
  5. It creates a Union Parliament for the Union’s new citizens.
  6. It creates a Cabinet Government of the new Union.
  7. It creates a new Union political President.
  8. It creates a civil rights code for the new Union’s citizens.
  9. It makes national Parliaments subordinate to the new Union.
  10. It gives the new Union self-empowerment powers.

While the establishment of a European state is certainly a long-term goal to aim for, some elements of this treaty are still not acceptable. The current treaty contains provisions which are not adequate for a constitution, and it should be refined to meet the high democratic standards set by the member states.

The military cooperation chapter is a rather unfortunate one, remembering the controversy over the war against Iraq, in which Germany and France chose not to participate. Should such a situation arise in the future, Germany and France might be forced to take part in the war. This is of course one of the consequences of the harmonization process, but there should be a provision requiring a unanimous decision in order to go to war, which is the only way to really justify one. An exception would of course be the case where an aggression against a member state has to be countered.

This looks like yet another treaty which was not balanced properly beforehand and needs a lot of further work before it is adequate for the purpose it was intended to serve.