Fellowship interview with Simon Josefsson

Simon Josefsson is a Fellow and GNU hacker with a special interest in security. His contributions to the Free Software world include such ubiquitous projects as GnuTLS and Libssh2, and he was recently presented with the Nordic Free Software Award[1]. I sat down for a Jabber session with Simon, asking him about his projects and other security matters.

Stian Rødven Eide: While proprietary software vendors often tout security by obscurity as an advantage, you are involved in several Free Software projects that are regarded as among the most secure software there is. Can you explain how Free Software can provide better security?

Simon Josefsson: To answer that, one should study the history of security incidents in software. Once you do, it becomes evident that no matter how much effort is put into an implementation or specification, or how much effort you put into analyzing it, sooner or later someone will figure out a way around it. This means that security really is a process rather than anything absolute. And here Free Software has many advantages; some are technical, but the more important ones are social. For example, Free Software is open for people to scrutinize, and people help each other by scrutinizing the software they use, so widely used software ends up better analyzed. In comparison, security by obscurity does not invite people to review the system, so there are far fewer improvements to the system, and only those inclined to attack it will analyze it. And, as we’ve seen, no software security is absolute.

SRE: One point that you have stressed in several talks is that security should be treated as a process. This affects both how the community should be involved and how businesses should treat potential security issues. Can you tell us a bit about the background for this notion and how it would work in practice?

SJ: The background is witnessing really complicated designs by smart people get cracked relatively quickly. This reflects older software design principles, where you spend a lot of time on the design stage, whereas Free Software is typically engineered in an iterative process: you add one small feature, release it quickly, people start to use it and think about it, and some may realize that there is something wrong with the feature and report it back. The small feature can then be re-designed, or even removed because it was a bad idea. The point is that if every addition is done in this somewhat modular and piecemeal way, you are less likely to make major design mistakes. Free Software is good at making frequent releases that correct minor things, and users have adapted to that habit. If you only do one major release every 5 years, you are more likely to break things heavily in ways that require a lot of work from people. So I tend to recommend that businesses work in an iterative way and involve the users early on to avoid embarrassment.

SRE: You maintain quite a few security libraries, such as GnuTLS, GNU SASL, GSS and more. Which ones do you find yourself spending the most time on improving, and which ones receive the most attention and/or help from other people?

SJ: I have spent quite a lot of time during the development cycle of my own projects, but after that it becomes more of a maintainer’s job. The most development time I’ve spent is probably on Shishi, which is my Kerberos V5 implementation. But as a maintainer, my time is directed more by what people use, and right now that tends to be GnuTLS. There is also a factor of maturity; the Libidn project is used in critical places (including glibc), but I rarely spend any time on it these days because it is mostly feature-complete. On some projects, like Libssh2, I also get paid for doing certain things, which naturally makes me spend more time on them. Lately I have found myself working a lot on Gnulib, because it contains re-usable components used by all my other projects.
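
What a library like Libidn does is easy to illustrate concretely. The sketch below uses Python’s built-in idna codec, which implements the same IDNA2003 standard (RFC 3490) that Libidn provides to C programs; the domain name is a made-up example.

    # Python's built-in "idna" codec performs the same IDNA2003
    # conversion (RFC 3490) that Libidn offers to C programs.
    domain = "www.münchen.example"

    # Convert to the ASCII-compatible ("punycode") form that actually
    # goes on the wire in DNS queries.
    ascii_form = domain.encode("idna")
    print(ascii_form)  # b'www.xn--mnchen-3ya.example'

    # The conversion is reversible for display purposes.
    print(ascii_form.decode("idna"))  # www.münchen.example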

SRE: You have provided security services for a range of clients, including hospitals, wireless providers and web applications. Are their concerns very different, or should more or less the same security standards be applied in all cases?

SJ: There are some places where my contributions haven’t been as successful as in others, which could be due to many reasons, but I think that where I’ve failed to get my point across is generally where people don’t understand (or agree) that security is a process; they want something that is Absolutely Secure, and then to never touch that component again. It then becomes difficult for me to have any effective discussion. Also, some organizations have established traditions about how to deal with security incidents, favoring obscurity over openness; this includes the banking world, some parts of government, and so on. I think a process-like view of security would help in many places, but I also understand that some companies have business reasons why they cannot use an open community process. The Free Software world has been learning from this too, and we now follow something called responsible disclosure, which I think is one example of Free Software being improved by learning from the “old” way of handling security.

SRE: Your Master’s thesis dealt with the concept of storing personal encryption certificates in DNS. While this is still not common practice, you wrote in a recent blog post that some work has begun to happen in the area. How do you currently regard the promise of this way of distributing keys? Have keyservers in general improved since your thesis was written?

SJ: The problem here is not so much about technology as about social matters. The person responsible for managing DNS for an organization is typically not the same person responsible for managing its user certificates, and people have been reluctant to change their habits here. After all, DNS is a pretty critical piece of any company’s infrastructure. So I haven’t seen much uptake, even if it continues to be an interesting possibility, especially for the OpenPGP world. One part of my thesis was about the privacy issues around the then-current DNSSEC standard, the so-called NXT record. I identified and explained the problems that arise when people can enumerate entire DNS zones, and even wrote an IETF draft on how to solve the problem by hashing the names instead of storing them directly. People in the IETF felt that the threat didn’t exist, and thought they were ready to roll out DNSSEC quite soon anyway (this was in 2001/2002!), so they didn’t want to change DNSSEC. I gave up on the draft, but years later the people who were actually deploying DNSSEC identified the same problem, and ended up re-inventing my solution, which is now standardized as the NSEC3 record. So at least some of it ended up being used, although not in the form or way I anticipated.
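
The hashing idea Simon describes is simple enough to sketch. The snippet below is a minimal Python illustration of the scheme that was eventually standardized as NSEC3 (RFC 5155): the owner name is hashed together with a salt, re-hashed a configurable number of times, and published in Base32 “extended hex” form, so a signed zone can prove that a name does not exist without listing its real names. The salt and iteration count here are arbitrary example values.

    import base64
    import hashlib

    def wire_format(name):
        """Encode a domain name in canonical (lowercase) DNS wire format."""
        out = b""
        for label in name.rstrip(".").lower().split("."):
            out += bytes([len(label)]) + label.encode("ascii")
        return out + b"\x00"  # the empty root label terminates the name

    def nsec3_hash(name, salt, iterations):
        """Iterated, salted SHA-1 of a name, as in NSEC3 (RFC 5155)."""
        digest = hashlib.sha1(wire_format(name) + salt).digest()
        for _ in range(iterations):
            digest = hashlib.sha1(digest + salt).digest()
        # NSEC3 owner names use Base32 with the "extended hex" alphabet.
        return base64.b32hexencode(digest).decode("ascii").lower()

    # The zone publishes hashes like this one instead of the real names,
    # defeating the enumeration attack while still allowing signed
    # denial-of-existence answers.
    print(nsec3_hash("example.com", bytes.fromhex("aabbccdd"), 12))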

SRE: Another project you have worked on is the YubiKey, a physical USB device that aims to make secure communication simpler. Has the YubiKey been successful so far? Do you think that this approach could end up being adopted by computer manufacturers as well?

SJ: The YubiKey’s popularity is growing, and given the amazing number of community contributions we’ve received, I’d say it has been a success. Technically we are now adding support for new standards like OATH HOTP, which will make it even more relevant. The difference between the YubiKey and other authentication devices like smart cards is that it is based on a process-oriented and cost-efficient way of working with security. Rather than purchasing smart cards and readers, and spending a fortune on device-driver installation and user education, we focused on something that offered good enough security (one-time passwords based on AES) while pushing strongly on ease of use (no device drivers or software!), and on supporting the kinds of compromises people actually make. For example, it also supports a mode where it outputs a static password, which is not a good idea in general, but many people were asking for it and are now using it. We are open to it being used by anyone, including manufacturers, but as no integration is required on the computer manufacturer’s side (in contrast to smart-card readers or fingerprint readers), the solution doesn’t depend on their support.
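
OATH HOTP, the standard Simon mentions, is openly specified in RFC 4226, so the algorithm itself is easy to show. The sketch below is a minimal Python illustration of HOTP, not of the YubiKey’s original AES-based OTP mode, which works differently; the last line reproduces the first test vector from the RFC.

    import hashlib
    import hmac
    import struct

    def hotp(secret, counter, digits=6):
        """One-time password per RFC 4226: HMAC-SHA-1 over a counter,
        dynamically truncated to a short decimal code."""
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                      # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Token and server share the secret and keep the counter in sync;
    # each generated code is accepted only once.
    print(hotp(b"12345678901234567890", 0))  # RFC 4226 test vector: 755224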

SRE: During the GNU Hackers Meeting in Göteborg, you gave a presentation on Code Quality Assurance. What is, in your opinion, the best way of acquiring quality assurance, and how will this be implemented in the GNU project?

SJ: I believe it is important that quality assurance isn’t something done by a separate set of people after the product is otherwise finished, but rather that it is integrated into how hackers work daily. So my goal is to set up a GNU QA site where people can help a project by setting up a build server, either building from version-controlled sources (to produce daily snapshots) or building a daily snapshot to see if it works on their favorite architecture. It has to be an opt-in system, so that people don’t feel it is a burden. The goal is to be able to present code-coverage reports (based on GCOV/LCOV), cyclomatic code complexity charts, Git/CVS statistics, and so on. All of it should be done in a distributed way, so that people feel involved in the effort, but also to reduce the workload on me and the other people who run the servers.
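
For readers unfamiliar with the GCOV/LCOV toolchain mentioned above, here is a rough sketch of the steps such a build-server job might automate. The source file and output paths are hypothetical, as is the trivial Python driver; a real QA job would run something like this per project and per architecture.

    # Hypothetical sketch of a GCOV/LCOV coverage job; hello.c and the
    # output paths are made-up examples.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Build with coverage instrumentation, run the tests to collect
    # execution counts, then turn the raw GCOV data into an HTML report.
    run(["gcc", "--coverage", "-o", "hello", "hello.c"])
    run(["./hello"])
    run(["lcov", "--capture", "--directory", ".",
         "--output-file", "coverage.info"])
    run(["genhtml", "coverage.info", "--output-directory", "coverage-html"])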

A big thanks to Simon for sharing his valuable insight into these matters. You can learn more about him and his projects at josefsson.org.

[1] The award was split between Simon Josefsson and Daniel Stenberg.