Web page on informational self-determination revised

On my web page about informational self-determination on the Internet, I have made major changes to two non-technical sections. On the one hand, I restructured the discussion of the lazy argument that privacy on the Internet is impossible anyway in view of data krakens and mass surveillance. On the other hand, the section on the nothing-to-hide argument has been restructured and extended as well. Among the Kafkaesque problems caused by the loss of our privacy, I now present more concrete examples, such as the detection of pregnancies based on analyses of purchasing behavior, the creation of personality profiles (1) from Facebook likes and (2) from speech analyses, and the manipulation of decisions through nudging.

I would be happy to receive suggestions for further extensions.

You need Tor, and Tor is asking for your support

For decades, privacy and freedom of thought and expression have been valued as human rights. Please take a moment to read articles 12, 18, and 19 of ↑The Universal Declaration of Human Rights dating back to 1948.

In free, democratic countries we took those rights for granted. Even without knowing or caring about human rights, among family and friends we chatted without being overheard, we shopped anonymously and paid with cash, we walked the streets anonymously without being afraid of unknown strangers crossing our paths. In libraries we read books and newspapers while nobody spied on our interests.

That situation has changed dramatically in recent years, with two noteworthy twists. First, in an article worth your time, Eben Moglen explained in 2013 (predating the Snowden disclosures!) how we have ↑entangled ourselves in the Net. We surround ourselves with all kinds of “smart” devices such as phones, smart watches, smart meters, smart TVs, e-book readers, glasses, and cars, all of which have eyes and ears of their own and monitor our location, our communication, and our behavior most—if not all—of the time. Apps in those devices transmit what they observe into the Net, under incomprehensible privacy policies (piracy policies, really). Once in the Net, that data is traded, stolen, re-purposed—beyond our control, consent, or knowledge.

Second, for reasons that I cannot comprehend, (even) democratic governments seem to assume that our human rights need not be respected on the Net. They openly demand and covertly enforce access to our communication and to our data, and thus, the more we entangle ourselves in the Net, effectively to our lives.

If you believe that the human rights to privacy as well as freedom of thought and expression should also be your rights, even if you move on the Net, you must learn and practice digital self-defense. Essentially, you need to choose apps and tools that respect your freedom and privacy, which requires ↑free software based on strong cryptography. For example, to protect your e-mails you may want to use ↑GnuPG as explained in ↑this guide to e-mail self-defense. As messenger you may want to switch to ↑Signal. For Web surfing you may want to use the ↑Tor Browser.

Tor is cutting-edge anonymity research turned into easily usable free software. Briefly, Tor re-routes your network data through randomly selected relays on the Net, thus hiding who is communicating what with whom. More information on how Tor works can be found on the ↑Tor overview page and in the ↑Tor FAQ. Tor is built by a non-profit organization that is currently asking for ↑donations.

Make sure to read the ↑download warning and try the ↑Tor Browser today. It’s easy to install and run. Be warned, however, that surfing will be slower in general and that some misguided sites may refuse to load unless you solve a ↑captcha (which may or may not go away if you reload the page via “New Circuit for this site” from Tor Browser’s menu underneath the onion icon).

If you like Tor, the project is asking for your ↑donations, and you are in good company: ↑Edward Snowden and Laura Poitras use Tor. Security guru ↑Bruce Schneier recommends that you use Tor.

If you don’t like Tor, you can help to ↑improve it.

And, as a final remark: if you believe that Tor is being misused by “bad guys,” then, of course, you are right. Tor, like just about any other tool created by mankind, can be used by law-abiding citizens as well as by the scum of the earth.

Once Again Against Data Retention

Support the constitutional complaint at Digitalcourage e.V. (CC BY-SA 3.0 Digitalcourage e.V.)

Germany has data retention (Vorratsdatenspeicherung, VDS) again, and that is bad. VDS means blanket, suspicionless surveillance of the communication behavior of the entire population, which suits totalitarian regimes well, but not democratic societies.

For those who have not followed the story: Europe already had data retention once, mandated by an EU directive. Its German implementation was declared unconstitutional by the Federal Constitutional Court in 2010. The entire directive was declared invalid by the European Court of Justice in 2014 because it violates our fundamental rights to respect for private life and to the protection of personal data. Nonetheless, we now have data retention by law once again, passed by the Bundestag in October and approved by the Bundesrat on November 6.

The organization Digitalcourage is preparing a constitutional complaint and asks for support. Join in!

Besides, it is of course necessary to learn digital self-defense. How to do that with free software is something I describe elsewhere.

Firefox with Tor/Orbot on Android

In my previous post, I explained three steps for more privacy on the Net, namely (1) opt out from the cloud, (2) encrypt your communication, and (3) anonymize your surfing behavior. If you attempt (3) via Tor on Android devices, you need to be careful.

I was surprised how complicated anonymized browsing is on Android with Firefox and Tor. Be warned! Some believe that Android is simply a dead end for anonymity and privacy, as phones are powerful surveillance devices, easily exploitable by third parties. An excellent post by Mike Perry explains how to harden Android devices.

Anyway, I’m using an Android phone (without Google services, as explained elsewhere), and I want to use Tor for occasional surfing while resisting mass surveillance. Note that this post is unrelated to targeted attacks and espionage.

The Tor port to Android is Orbot, which can potentially be combined with different browsers. In any case, the browser needs to be configured to use Tor/Orbot as proxy. Some browsers need to be configured manually, while others are pre-configured. At the moment, nothing works out of the box, though, as you can see in this thread on the Tor Talk mailing list.

Firefox on Android mostly works with Orbot, but downloads favicons without respecting proxy preferences, which is a known bug. In combination with Tor, this bug is critical, as the download of favicons reveals the real IP address, defeating anonymization.

Some guides for Orbot recommend Orweb, which has too many open issues to be usable. Lightning Browser is also unusable for me. Currently, Orfox, a port of the Tor Browser to Android, is under development. Like plain Firefox, though, Orfox deanonymizes Tor users by downloading favicons without respecting proxy preferences, revealing the real IP address.

The only way of which I’m aware to use Firefox or Orfox with Tor requires the following manual proxy settings, which only work over Wi-Fi.

  1. Connect to your Wi-Fi and configure the connection to use Tor as system proxy: Under the Wi-Fi settings, long-press on your connection, choose “Modify network” → “Show advanced options”. Select “Manual” proxy settings and enter localhost and port 8118 as HTTP proxy. (When you start Orbot, it provides proxy services into the Tor network at port 8118.)

  2. Configure Firefox or Orfox to use the system proxy and avoid DNS requests: Type about:config into the address bar and verify that network.proxy.type is set to 5, which should be the default and lets the browser use the system proxy (the system proxy is also used to fetch favicons). Furthermore, you must set network.proxy.socks_remote_dns to true, which is not the default. Otherwise, the browser leaks DNS requests that reveal your real IP address.

  3. Start Orbot, connect to the Tor network.

  4. Surf anonymized. At the moment you need to configure the browser’s privacy settings to clear private data on exit. Maybe you want to wait for an official Orfox release.

Three steps towards more privacy on the Net

Initially, I wanted to summarize my findings concerning Tor with Firefox on Android. Then I decided to start with an explanation of why I care about Tor at all. The summary that I initially had in mind follows in a subsequent post.

I belong to a species that appears to be on the verge of extinction. My species believes in the value of privacy, also on the Net. We have not yet despaired or resigned in view of mass surveillance and ubiquitous, surreptitious, nontransparent data brokering. Instead, we have made a deliberate decision to resist.

People around us seem to be indifferent to mass surveillance and data brokerage. Recent empirical research indicates that they have resigned. In consequence, they submit to the destruction of our privacy (theirs and, what they don’t realize, also mine). I may be an optimist in believing that my species can spread through the proliferation of simple ideas. This is an infection attempt.

Step 1. Opt out of the cloud and its piracy policies.

In this post, I use the term “cloud” as a placeholder for convenient, centralized services provided by data brokers from remote data centers. Such services are available for calendar synchronization, file sharing, e-mail, and messaging, and I recommend avoiding those services that gain access to “your” data, turn it into their data, and then generously grant access rights back to you (next to their business partners as well as intelligence agencies and other criminals with access to their infrastructure).

My main advice is simple, if you are interested in privacy: Opt out of the cloud. Do not entrust your private data (e-mails, messages, photos, calendar events, browser history) to untrustworthy parties with incomprehensible terms of service and “privacy” policies. The typical goal of a “privacy” policy is to make you renounce your right to privacy and to allow companies to collect and sell data treasures derived from your data. Thus, you should really think “piracy policy” whenever you agree to such terms. (By the way, in German I prefer “Datenschatzbedingungen” over “Datenschutzbedingungen” towards the same end.)

Opting out of the cloud may be inconvenient, but it is necessary and possible. Building on a metaphor that I borrow from Eben Moglen, privacy is an ecological phenomenon. All of us can work jointly towards the improvement of our privacy, or we can pollute our environment, pretending that we don’t know better or that each individual has little or no influence anyway.

While your influence may be small, you are free to choose. You may choose to send e-mails via some data broker. If you make that choice, you force your friends to send replies intended for your eyes to your data broker, reducing their privacy. Alternatively, you may choose a local, more trustworthy provider. Most likely, good alternatives are available in your country; in Germany there certainly are some, such as Posteo (tested positively in February 2015 by Stiftung Warentest; in addition, I’m paying 1 € per month for my account). Messaging is just the same. You are free to contribute to a world-wide, centralized communication monopoly, sustaining the opposite of private communication, or to choose tools and services that allow direct communication with your friends, without data brokers in between. (Or you could use e-mail instead.) Besides, you are free to use alternative search engines such as Startpage (which shows Google results in a privacy-friendly manner) or meta search engines such as MetaGer or ixquick.

Step 2. Encrypt your communication.

I don’t think that there is a reason to send unencrypted communication through the Net. Clearly, encryption hinders mass surveillance and data brokering. Learn about e-mail self-defense. Learn about off-the-record (OTR) communication (sample tools at PRISM Break).

Step 3. Anonymize your surfing behavior.

I recommend Tor for anonymized Web surfing to resist mass surveillance by intelligence agencies as well as profiling by data brokers. Mass surveillance and profiling are based on bulk data collection, where it’s easy to see who communicates where and when with whom, and potentially about what. It’s probably safe to say that with Tor it is no longer “easy” to see who communicates where and when with whom. Tor users do not offer this information voluntarily; they resist actively.

On desktop PCs, you can just use the Tor Browser, which includes the Tor software itself and a modified version of the Firefox browser, specifically designed to protect your privacy, in particular in view of basic and sophisticated identification techniques (such as cookies and various forms of fingerprinting).

On Android, the Tor Browser does not exist, and alternatives need to be configured carefully, which is the topic of the next post.

I Love Free Software

I love Free Software!
Today is Valentine’s Day, a popular occasion to celebrate love. I love free software. In case you don’t know: Free software is software that respects our freedom, and I suggest that you take a close look.

Today I’d like to recommend a pair of nifty, lovely Android apps that I use on a regular basis to improve my vocabulary, namely AnkiDroid with QuickDic. (Needless to say, both are available via F-Droid, an alternative app store that provides nothing but free software.)

AnkiDroid is a tool to memorize things based on flashcards, organized in decks. In a nutshell, you create cards with different contents on front and back; AnkiDroid presents one side of a card, and you try to recall the other, telling AnkiDroid how easy it was to recall the matching content. How frequently a card is presented is determined by a so-called spaced-repetition algorithm. Essentially, the better you know a card, the less frequently it is presented. Lots of card decks are available on the Web and can be imported into AnkiDroid. I don’t use that feature, however.
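The scheduling idea can be illustrated with a toy model (a simplification of my own for illustration; AnkiDroid’s actual algorithm, which derives from SuperMemo’s SM-2, is more elaborate): every successful recall stretches the review interval, while a failed recall resets it.

```shell
#!/bin/sh
# Toy spaced-repetition schedule: double the interval after a successful
# recall, reset it to one day after a failure.
interval=1
schedule=""
for grade in good good fail good; do
  if [ "$grade" = "good" ]; then
    interval=$((interval * 2))
  else
    interval=1
  fi
  schedule="$schedule$interval "
  echo "grade=$grade -> next review in $interval day(s)"
done
```

The longer the run of successful recalls, the rarer a card shows up, which is exactly the effect described above.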

Instead, I use AnkiDroid with the offline dictionary app QuickDic, which offers dictionaries for lots of (pairs of) languages. Whenever I look up an intriguing word or phrase in QuickDic, I long-press that dictionary entry to invoke a share dialog. Selecting AnkiDroid in that dialog creates a pre-filled flashcard in AnkiDroid, which just needs minor tweaking to create a new card. Learning vocabulary has never been simpler.

I love free software.


The Misleading Rhetoric for More Surveillance

Our right to privacy as a defensive right against the state, guaranteed by the German constitution (Grundgesetz) and the European Convention on Human Rights, has been restricted further and further for years under the pretext of fighting terrorism. The justifications for additional restrictions are presented with more or less skillful, but in any case misleading, rhetoric, which is exposed as such far too rarely. Judge for yourself.

The Federal Minister of the Interior, Dr. Thomas de Maizière, argued in a speech on January 20, 2015, that encryption is necessary so that we, the population, can move safely on the Internet. Nevertheless, security agencies should be able to decrypt encrypted communication. He tried to make this demand for ineffective or circumventable encryption appear reasonable through an analogy from the physical world: We all lock our houses, yet the police may enter them under the conditions of the rule of law.

This analogy is misleading for several reasons:

  • Intrusion into our houses happens in justified individual cases. In contrast, our communication on the Internet is intercepted, stored, and analyzed by intelligence and secret services, essentially completely, without restraint, and without oversight, as we have known at the latest since the revelations of Edward Snowden.
  • Intrusion into our houses requires personnel, which enforces a measured, sensible approach. (This changes the more smartphones, smart TVs, smart watches, smart glasses, etc. we deploy as additional eyes and ears beyond our control.) In contrast, espionage and surveillance on the Internet run largely automatically, which enables suspicionless mass surveillance in disregard of the presumption of innocence.
  • Intrusion into our houses is (usually) noticeable to us and therefore contestable. In contrast, espionage and surveillance on the Internet happen behind our backs. Which data is collected by whom and for what purposes remains hidden, leaving us no possibility of legal recourse.

These differences between surveillance in the physical world and surveillance on the Internet require that we encrypt our communication if we are interested in privacy. If “trustworthy” state agencies can circumvent this encryption, then untrustworthy state agencies and other criminals will manage to do so as well. That is unacceptable.

Unfortunately, our Federal Minister of the Interior is not alone with his confused analogy. In a similar vein, Troels Oerting, head of the European Cybercrime Centre, claimed that encrypted communication is somewhat like the trunk of a car that cannot be opened during a police check. Apparently Mr. Oerting, too, misses the fundamental differences between (a) justified action in individual cases, involving personnel, with legal remedies available, and (b) suspicionless, automated, and imperceptible mass surveillance without any possibility of defense. Presumably following this hair-raising logic, the Anti-Terrorism Coordinator of the Council of the European Union, Gilles de Kerchove, demanded the escrow of cryptographic keys in January 2015. Highly alarming.

Our Federal Minister of the Interior is also too quick in other places to appreciate the subtleties of reality appropriately. In his speech mentioned above, he claims with reference to the terrorist attack on Charlie Hebdo:

The events in Paris demonstrate once more that we must act jointly, and not only in the realm of the so-called “real” world. The actions of criminal and terrorist endeavors take place just as much in the “virtual” world […]

The events in Paris may demonstrate many things, but they had precious little to do with the virtual world. The attackers were known to various state agencies beforehand, but their surveillance was ended too early. There is no mention anywhere that the perpetrators communicated in encrypted form, not even in the minister’s speech. That he nevertheless uses this attack to justify the circumvention of encryption, our only weapon against suspicionless mass surveillance and other crime on the Internet, is outrageous.

The situation abroad is no better. In view of the attack in Paris, Prime Minister David Cameron promised his compatriots that, if re-elected, he would leave terrorists no safe spaces to communicate. The Prime Minister, just like our Minister of the Interior, apparently failed to notice that the attack had nothing to do with secure terrorist communication. Moreover, his statement makes clear where the journey is supposed to go: Whoever wants to deny unknown terrorists secure communication must deny secure communication to everyone. It cannot be ruled out that you and I are unknown terrorists; therefore we must be surveilled, everywhere this is technically feasible.

For the moment, ephemeral, unrecorded, private conversations still exist. Within families, with complete strangers, between perfectly ordinary and between crazy people. The Kouachi brothers, too, will have talked about their plans before their attack on Charlie Hebdo. Did they have a right to private conversations? Do we, who unlike them were not trained in terror camps, have this right, or do we want to let it be taken from us, following Cameron?

Before you judge, recall that the risk of becoming a victim of terrorism is vanishingly small. According to figures published by the New York Times in July 2013, 23 Americans per year have died from terrorism since 2005. Twenty-three. About twice as many died from bee and wasp stings, 15 times as many from falls from ladders. In Germany, according to Tagesschau reports from January 2015, there has so far been only a single Islamist attack, in March 2011, with two fatalities. By contrast, there are more than 3,000 traffic deaths per year in this country, and some 74,000 people in Germany are said to die annually from the consequences of alcohol abuse.

Before you judge, recall also that in recent years terrorists in Europe and in the USA had regularly attracted attention beforehand and were known to security agencies prior to their crimes. Evidently, what was lacking was targeted surveillance to prevent the attacks. Whoever nevertheless pretends to be able to improve the situation through suspicionless mass surveillance or through the weakening of encryption techniques should be forced to justify themselves, or be laughed at.

So do we have a right to ephemeral, unrecorded, private conversations? In essence, the answer to this question is irrelevant, at least for communication on the Internet: If you think you should have this right, you must and can take it for yourself. Do not entrust your communication to commercially oriented data krakens, and do encrypt your communication.

There is no alternative to encryption. In January 2015, reports by high-ranking European bodies appeared that underscore this emphatically. For one, the Committee on Legal Affairs of the Parliamentary Assembly of the Council of Europe recommends pervasive encryption to protect our privacy. For another, the European Parliament’s panel for technology assessment likewise recommends the use of end-to-end encryption and anonymization services to protect privacy.

I advise you not to rely on the services of well-known data krakens for private communication, but to use free software to defend your fundamental rights, in particular GnuPG for e-mail self-defense and Tor or JonDo for anonymization on the Internet.

Do not let yourself be misled; defend your fundamental rights!

Certificate Pinning for GNU/Linux and Android

Previously, I described the dismal state of SSL/TLS security and explained how certificate pinning protects against man-in-the-middle (MITM) attacks; in particular, I recommended GnuTLS with its command line tool gnutls-cli for do-it-yourself certificate pinning based on trust-on-first-use (TOFU). In this post, I explain how I apply those ideas on my Android phone. In a nutshell, I use gnutls-cli in combination with socat and shell scripting to create separate, certificate-pinned TLS tunnels for every target server; then, I configure my apps to connect into such a tunnel instead of to the target server, which protects the apps against MITM attacks with “trusted” and other certificates. Note that little in this post is specific to Android: as a prerequisite for the following, I installed the app Lil’ Debi, which provides a Debian GNU/Linux system on the phone.

Prepare Debian Environment

Lil’ Debi by default uses the DNS servers of a US-based search engine company, configured in /etc/resolv.conf. I don’t want that company to learn when I access my e-mail (and more). Instead, I’d like to use the “normal” DNS servers of the network to which I’m connected, which get configured automatically via DHCP on Android. However, I don’t know how to inject that information reliably into Debian (automatically, upon connectivity changes). Hence, I’m currently running something like dhclient -d wlan0 manually, which updates the network configuration. I’d love to hear about better solutions.

Next, the stable version of gnutls-bin does not support the option --strict-tofu. More recent versions are available among “experimental” packages. To install those, I switched to Debian Testing (replace stable with testing in /etc/apt/sources.list; do apt-get update, apt-get dist-upgrade, apt-get autoremove). Then, I installed gnutls-cli:
apt-get -t experimental install gnutls-bin

Afterwards, I created a non-root user gnutls with directories to be used in shell scripts below:
useradd -s /bin/false -r -d /var/lib/gnutls gnutls
mkdir /var/{lib,log}/gnutls
chown gnutls /var/{lib,log}/gnutls
chmod 755 /var/{lib,log}/gnutls

For network access on Android, I also needed to assign gnutls to a special group as follows. (Before that, I got “net.c:142: socket() failed: Permission denied” or “socket: Permission denied” for network commands.)
groupadd -g 3003 aid_inet
usermod -G aid_inet gnutls

Finally, certificates need to be pinned with GnuTLS. I did that on my PC as described previously and copied the resulting file ~/.gnutls/known_hosts to /var/lib/gnutls/.gnutls/known_hosts.
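That copy step can be sketched as follows. To keep the example self-contained, it stages both sides in a scratch directory; on the real devices, the source is ~/.gnutls/known_hosts on the PC, the target is /var/lib/gnutls/.gnutls/known_hosts in the Debian chroot, and the file content below is a mere placeholder (real pin entries are written by gnutls-cli):

```shell
#!/bin/sh
# Stage a pretend PC home directory and phone file system (demo only).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/pc-home/.gnutls"
echo "placeholder pin entry" > "$ROOT/pc-home/.gnutls/known_hosts"
# install -D creates missing target directories and sets safe permissions
# in a single step, which is just as handy on the phone.
install -D -m 600 "$ROOT/pc-home/.gnutls/known_hosts" \
  "$ROOT/phone/var/lib/gnutls/.gnutls/known_hosts"
cat "$ROOT/phone/var/lib/gnutls/.gnutls/known_hosts"
```

On the phone, remember to chown the result to the gnutls user created above, so that the tunnel scripts can read it.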

Certificate Pinning via Tunnels/Proxies

I use socat to create (encrypting) proxies, i.e., local servers that relay received data towards the real destinations. In my case, socat relays received data via a shell script into GnuTLS, which establishes the TLS connection to the real destination and performs certificate checking with option --strict-tofu. Thus, the combination of socat, shell script, and gnutls-cli creates a TLS tunnel with certificate pinning against MITM attacks. Clearly, none of this is necessary for apps that pin certificates themselves. (On my phone, ChatSecure, a chat app implementing end-to-end encryption with Off-The-Record (OTR) Messaging, pins certificates, but other apps such as K-9 Mail, CalDAV Sync Adapter, and DAVdroid do not.) For the sake of an example, suppose that I send e-mail via the server (a placeholder for your provider’s mail server) at port 25, which I would normally enter in my e-mail app along with the setting to use SSL/TLS for every connection, leaving me vulnerable to MITM attacks with “trusted” certificates. Let’s see how to replace that setting with a secure tunnel. First, the following command starts a socat proxy that listens on port 1125 for incoming network connections from my phone. For every connection, it executes the script and relays all network traffic into that script:
$ socat TCP4-LISTEN:1125,bind=,reuseaddr,fork \
    EXEC:"/usr/local/bin/ -s -t 25"

Second, the script is invoked with the options -s -t 25. Thus, the script invokes gnutls-cli to open an SMTP (e-mail delivery) connection with TLS protection (some details of are explained below, details of gnutls-cli in my previous article). If certificate verification succeeds, this establishes a tunnel from the phone’s local port 1125 to the mail server. (There is nothing special about the number 1125; I prefer numbers ending in “25” for SMTP.)

Third, I configure my e-mail app to use the server localhost at port 1125 (without encryption). The app then sends e-mails into socat, which forwards them into the script, which in turn relays them via a GnuTLS-secured connection to the real mail server.

Shell Scripting

To setup my GnuTLS tunnels, I use three scripts, which are contained in this tar archive. (One of those scripts contains the following text: “I don’t like shell scripts. I don’t know much about shell scripts. This is a shell script. Use at your own risk. Read the license.”)

First, instead of the plain invocation of socat shown above, I’m using the following wrapper script, whose first argument needs to be the local port for socat to listen on, while the remaining arguments are passed on to Moreover, the script redirects log messages to a file.
#!/bin/sh
LPORT=$1; shift   # local port for socat; remaining arguments go to
LOG=/var/log/gnutls/tunnel-$LPORT.log   # log file under the directory created above
umask 0022
socat TCP4-LISTEN:$LPORT,bind=,reuseaddr,fork EXEC:"/usr/local/bin/ $*" >> "$LOG" 2>&1 &

Second, embeds the invocation of gnutls-cli --strict-tofu, parses its output, and writes log messages. That script is too long to reproduce here, but I’d like to point out that it sends data through a background process as described by Marco Maggi. Moreover, it uses a “delayed encrypted bridge.” Currently, the script knows the following essential options:

  • -t: Use option --starttls for gnutls-cli; this starts with a protocol-specific plaintext connection, which switches to TLS later on.
  • -h: Try to talk HTTP in case of errors.
  • -i: Try to talk IMAP in case of errors.
  • -s: Try to talk SMTP, possibly before STARTTLS and in case of errors.
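To see what -s has to do before STARTTLS: with --starttls, the connection begins in plaintext, and the client must conduct the SMTP dialogue up to the STARTTLS command itself before gnutls-cli can negotiate TLS (RFC 3207). The following simulation prints such a prelude; the server lines and host names are invented for illustration:

```shell
#!/bin/sh
# Simulated SMTP prelude that precedes the TLS handshake of STARTTLS.
dialogue=$(printf '%s\n' \
  'S: 220 ESMTP ready' \
  'C: EHLO phone.example' \
  'S: 250-' \
  'S: 250 STARTTLS' \
  'C: STARTTLS' \
  'S: 220 Go ahead')
printf '%s\n' "$dialogue"
echo '--- only now does the TLS handshake start; everything above is plaintext ---'
```

Everything up to the final “220 Go ahead” crosses the wire unencrypted, which is why the script must speak a bit of SMTP itself.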

Third, I use a start script to bring up my TLS tunnels, essentially as follows:
run_as () {
    su -s $TLSSHELL -c "$1" $TLSUSER
}
# TLSSHELL=/bin/sh and TLSUSER=gnutls are set at the top of the script.
# The socat wrapper from above is assumed to be installed as
# /usr/local/bin/; host names are placeholders for your servers.
# SMTP (-s) with STARTTLS (-t) if SMTPS is not supported, typically to
# port 25 or 587:
run_as "/usr/local/bin/ 1125 -t -s 587"
# Without -t if the server supports SMTPS at port 465:
run_as "/usr/local/bin/ 1225 -s 465"
# IMAPS (-i) at port 993:
run_as "/usr/local/bin/ 1193 -i 993"
run_as "/usr/local/bin/ 1293 -i 993"
# HTTPS (-h) at port 443:
run_as "/usr/local/bin/ 1143 -h 443"

Once the Debian system is running (via Lil’ Debi), I invoke the start script in the Debian shell. (This could be automated in the app’s startup script.) Then, I configure K-9 Mail to use localhost with the local ports defined in the script (without encryption).

(You may want to remove the log files under /var/log/gnutls from time to time.)

Certificate Expiry, MITM Attacks

Whenever certificate verification fails because the presented certificate does not match the pinned one, logs an error message and reports an error condition to the invoking app. Clearly, it is up to the app whether and how to inform the user. For example, K-9 Mail fails silently for e-mail retrieval via IMAP (which is an old bug) but triggers a notification when sending e-mail via SMTP. The following screenshot displays notifications of K-9 Mail and CalDAV Sync Adapter.

MITM Notifications of K-9 Mail and CalDAV Sync Adapter

The screenshot shows that in case of certificate failures for HTTPS connections, I’m using error code 418. That number was specified in RFC 2324 (updated a couple of days ago in RFC 7168). If you see error code 418, you know that you are in deep trouble without coffee.
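For illustration, a pin-mismatch response of that flavor can be generated as follows (a sketch of my own; the exact wording my script emits differs):

```shell
#!/bin/sh
# Build a minimal HTTP 418 error response signaling a failed pin check.
http_418=$(printf "HTTP/1.0 418 I'm a teapot\r\nContent-Type: text/plain\r\n\r\nCertificate verification failed: pinned certificate mismatch\n")
printf '%s\n' "$http_418"
```

Since no well-behaved web server ever sends 418, the code is an unmistakable marker that the tunnel, not the destination, aborted the connection.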

In any case, the user needs to decide whether the server was equipped with a new certificate, which needs to be pinned, or whether a MITM attack takes place.

What’s Next

SSL/TLS is a mess, and the above is far more complicated than I’d like it to be. I hope to see more apps pinning certificates themselves. Clearly, users of such apps need some guidance how to identify the correct certificates that should be pinned.

If you develop apps, please implement certificate pinning. The papers I recommended previously are good starting points.

You may also want to think about the consequences if “trust” in some CA is discontinued as just happened for CAcert for Debian and its derivatives. Recall that CAcert was a “trusted” CA in Debian, which implied that lots of software “trusted” any certificate issued by that CA without needing to ask users any complicated question at all. Now, as that “trust” has been revoked (see this bug report for details), users will see warnings concerning those same, unchanged (!), previously “trusted” certificates; depending on the client software, they may even experience an unrecoverable error, rendering them unable to access the server at all. Clearly, this is far from desirable.

However, if your app supports certificate pinning, then such a revocation of “trust” does not matter at all. The app will simply continue to be usable. It is high time to distinguish trust from “trust.”

Certificate Pinning for GNU Emacs

GNU Emacs is where I spend most of my computer time. Using Emacs, I’m writing texts in general and this post in particular, I’m programming, I’m reading RSS feeds and news articles, and I’m reading and writing e-mails. Emacs is highly customizable and extensible, which is great, in general. However, in the past Emacs valued convenience over security, and by default it does not protect against man-in-the-middle (MITM) attacks. This is about to change with upcoming releases.

In a previous post, I explained how certificate pinning protects against MITM attacks, and I recommended GnuTLS with its command line tool gnutls-cli for do-it-yourself certificate pinning based on trust-on-first-use (TOFU). In this post, I explain my Emacs configuration for certificate pinning with GnuTLS. The Lisp code shown in the following is executed from ~/.emacs, as usual.

Emacs comes with at least three different libraries to establish SSL/TLS connections (net/gnutls.el, net/tls.el, gnus/starttls.el), and some libraries implement their own approach (e.g., net/imap.el). Which approach is used in which situation does not seem to be documented; in general, it depends on what software is installed on your machine. By default, various Emacs libraries will use net/gnutls.el, which is vulnerable to MITM attacks. So I disable that library if it is available:

(if (fboundp 'gnutls-available-p)
    (fmakunbound 'gnutls-available-p))

Without net/gnutls.el, lots of code falls back to net/tls.el, which uses gnutls-cli, but with the switch --insecure. Clearly, that option is called “insecure” for a reason, and by replacing that option certificate pinning based on TOFU can be activated:

(setq tls-program '("gnutls-cli --strict-tofu -p %p %h"))

(Recall from my previous post that you need a recent version of GnuTLS for this to work.)

In particular, the above change enables certificate pinning for news servers via NNTPS. E.g., I’m using the following configuration:

(setq gnus-select-method
      '(nntp ""
	     (nntp-open-connection-function nntp-open-tls-stream)
	     (nntp-port-number 563)
	     (nntp-address "")))

(Recall from my previous post that you need to pin the server’s certificate first, e.g., via gnutls-cli --tofu -p 563. The same holds for every server.)

I’m sending e-mails via mail/smtpmail.el, which also defaults to net/gnutls.el but falls back to gnus/starttls.el. In my case, that library uses gnutls-cli, and option --strict-tofu can be added via a variable, starttls-extra-arguments:

(setq starttls-extra-arguments '("--strict-tofu"))

I’m reading e-mails via net/imap.el, which does not use net/gnutls.el but its own approach based on OpenSSL’s s_client. While s_client is great to debug SSL/TLS connections, it is useless for everyday encryption as it prints an error message if certificates cannot be verified but opens the connection anyway. Moreover, those errors are not shown in Emacs. So, switch to gnutls-cli with certificate pinning:

(setq imap-ssl-program '("gnutls-cli --strict-tofu -p %p %s"))

To summarize, this is what I’m currently using in and recommending for Emacs:

(if (fboundp 'gnutls-available-p)
    (fmakunbound 'gnutls-available-p))
(setq tls-program '("gnutls-cli --strict-tofu -p %p %h")
      imap-ssl-program '("gnutls-cli --strict-tofu -p %p %s")
      smtpmail-stream-type 'starttls
      starttls-extra-arguments '("--strict-tofu"))

Certificate Pinning with GnuTLS in the Mess of SSL/TLS

Lots of modern communication is “protected” from spying eyes and other criminals via an Internet standard called Transport Layer Security (TLS) or its outdated predecessor Secure Sockets Layer (SSL). In the following, I’m using the term “SSL/TLS” to refer to both of them. In a nutshell, SSL/TLS is a mess. Its security has been, can be, and is being broken on several layers. In this post, I’m trying to clarify my understanding and recommend the use of certificate pinning by default. In particular, I start to describe how I’m using GnuTLS for certificate pinning in the form of trust-on-first-use. In subsequent posts, I’ll explain certificate pinning in real use cases.

I assume a basic understanding of TLS. Please read the Wikipedia entry on TLS first, if necessary.

The Mess

SSL/TLS is a mess for at least three major reasons.

First, SSL/TLS requires certificates issued by “trusted” certificate authorities (CAs). Previously, I wrote on trust vs. “trust” in the context of e-mail encryption, and that reasoning applies to SSL/TLS as well: Our software (browsers, e-mail clients, apps) “trusts” all certificates issued by “trusted” CAs. However, I do not trust (without quotes, in the original meaning of the term) a single CA. How could I? I don’t know anything about them, except for the recurring horror reports where someone was able to pay, bribe, trick, compel, force, or operate a “trusted” CA to issue “trusted” certificates. See my previous post for more details. In essence, if you “trust” without reason, you are vulnerable to so-called man-in-the-middle attacks, where third parties steal your secrets, your passwords, and your credit card details. And you will be blissfully ignorant, unable to see that anything bad is going on.

Second, lots of software is simply broken when it comes to certificate validation. SSL/TLS APIs and libraries are too complicated for the average software developer to get right. And some developers just don’t care. If you are a software developer, you must read the following two peer-reviewed, highly accessible publications. Really, please read them.

As a software developer you are going to implement certificate pinning, right? Otherwise, all users of your software will be defenseless against man-in-the-middle attacks.

Third, client and server need to negotiate a cipher suite, and there are lots of insecure choices. If you operate a server, please have a look at the Internet-Draft Recommendations for Secure Use of TLS and DTLS and the companion document Summarizing Current Attacks on TLS and DTLS, which states concerning the BREACH attack:

“We are not aware of mitigations at the protocol level to the latter attack, and so application-level mitigations are needed (see [BREACH]).”

Anyway, configure your server to use AES and Diffie-Hellman key exchange for perfect forward secrecy as recommended.

Certificate or Public Key Pinning

Several times I mentioned certificate pinning already. The core idea is to make a trusted (without quotation marks) certificate for a specific purpose directly accessible to the client software. Suppose you develop an app that needs to communicate with your own server. Then you can embed the server’s certificate directly into the app’s source code. Whenever the app contacts the server, it checks the certificate presented by the server (or a man-in-the-middle) against the one embedded in the source code. If they do not match, your app stops working and displays a big warning to the user. (Of course, you need to update the source code before the embedded certificate expires.) See the paper Rethinking SSL Development in an Appified World mentioned above for details. Also note that certificate pinning works perfectly well with self-signed certificates, without the need for unwarranted “trust.”
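Stripped of the TLS machinery, the pin check in such an app boils down to a fingerprint comparison. Here is a minimal Python sketch; the certificate bytes and the pin are made up for illustration, and a real app would hash the DER-encoded certificate received during the TLS handshake:

```python
import hashlib

# Hypothetical pin, embedded in the app's source code at build time.
PINNED_SHA256 = hashlib.sha256(b"the-servers-der-certificate").hexdigest()

def connection_allowed(presented_der: bytes) -> bool:
    """Accept the connection only if the certificate presented by the
    server (or by a man in the middle) hashes to the embedded pin."""
    return hashlib.sha256(presented_der).hexdigest() == PINNED_SHA256
```

If the check fails, the app aborts the connection and warns the user, as described above.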

[Added on 2014-04-22] I’d like to clarify one fact which I overlooked when writing this post: Although “certificate pinning” is a popular term, from a security perspective “public key pinning” is typically sufficient. In fact, a certificate is a digitally signed document containing a public key, where the digital signature is meant to provide some assurance that the public key belongs to a certain organization, user, or machine. Now, if pinning is used to embed key material within source code, the digital signature of a “trusted” certificate does not offer added value. Instead, it is sufficient to embed the public key. (Of course, the app itself should be digitally signed to provide assurance of its authenticity.)

Certificate or public key pinning is not restricted to cases where certificates or keys are embedded within the source code. Instead, every client software can consult some storage for pinned certificates or keys to verify whether keys are authentic. Then, the challenge is to populate that storage in a trustworthy fashion. A common approach is to rely on trust-on-first-use (TOFU), which is well-known to users of OpenSSH: With ssh, the pinning storage is implemented as file known_hosts, which contains public keys for servers that have been accepted as trusted by the user previously. With TOFU in general, when the client connects for the first time, it does not know the server’s certificate or public key yet. So the client presents the server’s (or man-in-the-middle’s) public key (or its fingerprint) to the user, who needs to decide whether that key really belongs to the server or not. If the user recognizes that key as authentic, the server’s key is stored as pinned key; otherwise, the connection is aborted.
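The TOFU decision just described can be sketched in a few lines of Python. The dict stands in for a pin store such as ~/.gnutls/known_hosts, and all names are illustrative:

```python
import hashlib

def tofu_check(store, host, service, der):
    """Trust-on-first-use against a pin store: pin unseen keys (after
    asking the user, elided here), accept matching ones, and flag
    mismatches as a possible MITM attack or key rollover."""
    fingerprint = hashlib.sha256(der).hexdigest()
    pinned = store.get((host, service))
    if pinned is None:
        store[(host, service)] = fingerprint  # first use: record the key
        return "pinned"
    return "ok" if pinned == fingerprint else "MISMATCH"
```

On "MISMATCH", the user has to decide between a legitimate key change and an attack, exactly as with ssh.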

If you run your own server, e.g., an ownCloud, then you know the correct certificate (containing the public key) and its fingerprint. Otherwise, things may get complicated.

The canonical way to verify a public key or certificate as authentic is to compare the fingerprint of the presented certificate with the “real” fingerprint, which needs to be obtained via some out-of-band method. E.g., for e-mail with GnuPG, fingerprints are compared offline at key signing parties or over the phone, some companies distribute fingerprints of CAs and servers in print, and some banks provide fingerprints of their online banking servers to customers via snail mail. Quite likely, none of this will be available to you if you try to verify the certificate of a remote server, say, of some news server such as


My current tool of choice to perform certificate pinning for insecure applications is gnutls-cli, a command line tool provided by GnuTLS. For example, to open a TLS connection to a server on the NNTPS port 563, you can invoke:

$ gnutls-cli -p 563

The response looks as follows:

Processed 141 CA certificate(s).
Resolving ''...
Connecting to ''...
- Certificate type: X.509
- Got a certificate list of 1 certificates.
- Certificate[0] info:
 - subject `C=NO,ST=Some-State,O=Gmane,', issuer `C=NO,ST=Some-State,O=Gmane,', RSA key 1024 bits, signed using RSA-SHA1, activated `2011-12-04 06:38:42 UTC', expires `2014-12-03 06:38:42 UTC', SHA-1 fingerprint `c0ec2f016cff4a43c1a7c7834b480b3ac54e90f9'
	Public Key ID:
	Public key's random art:
		+--[ RSA 1024]----+
		|                 |
		|+*o+ . .         |
		|= + + o          |
		| . + = o         |
		|  . + + S        |
		|   . . =         |
		|    . +          |
		|     E .         |
		|      .          |
		+-----------------+

- Status: The certificate is NOT trusted. The certificate issuer is unknown.
*** Verifying server certificate failed...
*** Fatal error: Error in the certificate.
*** Handshake has failed
GnuTLS error: Error in the certificate.

Apparently, GnuTLS refuses to connect because the CA (C=NO,ST=Some-State,O=Gmane) issuing the certificate is unknown. As I’m interested in certificate pinning, that “trust” issue is not really important to me. However, I’d like to gain evidence that the displayed fingerprint c0ec2f016cff4a43c1a7c7834b480b3ac54e90f9 is actually the correct one (and not one belonging to a forged certificate under an active MITM attack). First, I assume that it is indeed correct and start gnutls-cli with the option --tofu to activate trust-on-first-use:

$ gnutls-cli --tofu -p 563

This time, gnutls-cli displays the same information as before but asks whether I trust the certificate. I answer with “y,” and the public key contained in the certificate is recorded in the file ~/.gnutls/known_hosts. (Yes, GnuTLS really pins public keys, not certificates). Now, subsequent connections with option --tofu succeed.

Update on 2014-04-11: If you pin a public key by answering “y,” that key is recorded at the end of ~/.gnutls/known_hosts. If you need to replace a key (which you should expect to happen frequently these days due to the Heartbleed bug in OpenSSL), you must remove the old entry manually from ~/.gnutls/known_hosts: Search for lines containing the server’s name and service and delete all (probably just one) but the last one.
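That manual cleanup can also be scripted. The Python helper below is a sketch that assumes only that ~/.gnutls/known_hosts is line-oriented and that each entry mentions the server’s name and service; it keeps just the last (most recently pinned) matching entry:

```python
def keep_last_entry(lines, host, service):
    """Drop all but the LAST known_hosts line mentioning host and service.

    After a key replacement, only the final (most recently pinned)
    entry for a server should survive.
    """
    matching = [i for i, line in enumerate(lines)
                if host in line and service in line]
    stale = set(matching[:-1])  # every match except the last is stale
    return [line for i, line in enumerate(lines) if i not in stale]
```

Apply it to the file’s lines and write the result back (after making a backup).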

Before dealing with the question of whether I actually pinned the correct public key, I’d like to point out something else. The output of a successfully established TLS session indicates which cipher suite was negotiated between gnutls-cli and the server. Here, the output contains:

- Description: (TLS1.0)-(RSA)-(AES-128-CBC)-(SHA1)

This line implies that no Diffie-Hellman key exchange was performed (in favor of compatibility with broken servers), which can be changed by adding the option --priority=PFS:

- Description: (TLS1.0)-(DHE-RSA-1024)-(AES-128-CBC)-(SHA1)

To gain some evidence that GnuTLS recorded the correct public key, I use the Tor network and connect to the server from different locations:

$ torify gnutls-cli --tofu --priority=PFS -p 563

Tor is an anonymity network, where Internet traffic is re-routed over randomly chosen Tor servers so that it appears to originate from one of these servers. With torify, the subsequent command performs its network connection via the Tor network. Thus, I essentially connect from a different place, and it is less likely that a MITM is able to compromise the paths from different places to the target server. (Of course, this method does not offer any guarantees: E.g., an attacker located in the target server’s LAN might be able to hijack all traffic directed to the server.)

With Tor, you can control from what country you’d like to connect: Tor offers a so-called control port, by default the port 9051 on the local host, to change configuration options. E.g., to check the certificate from a Tor server located in Norway, instruct Tor to use only exit nodes with country code “no”:

$ telnet localhost 9051
AUTHENTICATE ""
SETCONF ExitNodes={no}

Then, connect with torify again:

$ torify gnutls-cli --tofu --priority=PFS -p 563
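The cross-check from several vantage points amounts to a consistency test over the observed fingerprints. A Python sketch (vantage-point labels and fingerprints are illustrative):

```python
def vantage_points_agree(observed):
    """observed maps a vantage point (e.g., a Tor exit country) to the
    fingerprint seen from there; any disagreement hints at a MITM
    somewhere on one of the paths."""
    return len(set(observed.values())) == 1
```

The more diverse the vantage points that agree, the more confidence you gain in the pinned key.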

In case of a MITM attack (or a new certificate, e.g., to replace an expired one), the following warning is displayed:

Warning: host is known and it is associated with a different key.
It might be that the server has multiple keys, or an attacker replaced the key to eavesdrop this connection.
Its certificate is valid for
Do you trust the received key? (y/N):

If you plan to use GnuTLS for certificate pinning you probably want to record the expiry date displayed by gnutls-cli when you accept a public key. Then you know when to expect a regular certificate change.

Also, you may prefer gnutls-cli to fail when a presented public key does not match the pinned one (instead of asking the above question). For that purpose, I added option --strict-tofu, which is present in GnuTLS since version 3.2.12.

For certificate pinning in situations where the connection starts out in plaintext and switches to TLS via STARTTLS (e.g., SMTP or XMPP), a command like the following can be used:

gnutls-cli --tofu --crlf --starttls -p 25

What you need to type then depends on the protocol. E.g., for an encrypted SMTP connection you can type the following commands in the gnutls-cli session; afterwards, you’ll be asked whether you trust the certificate:

EHLO localhost
STARTTLS
<Then press Ctrl-D to enter TLS mode.>

In subsequent posts I’ll explain how I’m really using the above to do something useful.


I motivated this post with the mess of SSL/TLS on three different levels, namely (1) “trust” issues, (2) broken applications that do not check certificates properly, and (3) the choice of insecure cipher suites.

Certificate pinning solves the first two issues. Consequently, certificate pinning is a Good Thing.

The third issue is unrelated to certificate pinning, but at least GnuTLS allows us to choose whatever cipher suite is recommended by the experts (if the server supports it).