Nico Rikken is a Fellow of the FSFE from The Netherlands with a background in electrical engineering and interests in open hardware, fabrication, digital preservation, photography and education policy, amongst other things.
Paul Boddie: It seems that we both read each other’s blogs and write about similar topics, particularly things related to open hardware, and it looks like we both follow some of the same projects. For others interested in open hardware and, more generally, hardware that supports Free Software, which projects would you suggest they keep an eye on? Which ones are perhaps the most interesting or exciting ones?
Nico Rikken: There is a strong synergy between free hardware designs and free software: free software allows modifications corresponding to changes in free hardware designs, and free hardware designs provide the transparency needed for free software to be modified and run. Above all, the freer the stack of hardware and software, the better your freedoms are respected, as the ‘Respects Your Freedom’ certification by the FSF recognizes. The number of free hardware designs available is actually immense, covering many different applications. For my personal interests I’ve seen energy monitors (OpenEnergyMonitor), attempts at solar power inverters (Open Source Solar Inverter, Open Source Ecology), 3D printers (Aleph Objects and RepRap), a video camera (Apertus), VJ tools (M-Labs), and an OpenPGP token with a true random number generator (NeuG). But these projects work on task-specific hardware and software, and can remain in operation unchanged for many years.
The next frontier in free hardware development seems to me to be twofold: developing free processor designs like lowRISC, and developing a modular free hardware design for generic computing like EOMA-68. In recent years there have been noteworthy projects like Novena and the OLinuXino which provide a free hardware design but fail to provide free firmware or a modular approach to hardware. In that regard these projects, including the recent Librem laptop, are to some extent wasted effort: they certainly provide much-needed additional freedoms, but they lack an outlook towards the future for further improving performance and freedom. As microchips, and processors in particular, are only available for a limited duration before the next model comes into production, hardware designs and the corresponding firmware will have to be updated continuously. Free processor designs will allow control over the pinout and feature set of the processors, avoiding unnecessary design revisions at the lowest level. A modular hardware structure will avoid having to modify and produce all components in each iteration, and allows higher production counts, making production more viable. So taking this into account, I’ve only observed two projects which are important for the long-term development of free hardware designs for generic computing platforms: EOMA-68 and lowRISC. Of course I’m very interested in finding out about other efforts, as in a distributed community it is hard to know everything that is going on.
Paul Boddie: Your background appears to be much more hardware-oriented than mine, meaning that your perspective on hardware is perhaps quite different from mine, too. You have written that engineering students need to be taught Free Software. Did you establish your interest in Free Software as a consequence of your electrical engineering education, or did it happen over time through a general exposure to technology and the benefits of Free Software?
Nico Rikken: There has been quite a lot of synergy between my formal education and my own hacker attitude. As long as I can remember I’ve been creative with technology, spanning hardware (wood, paper, fabric), electronics, and software, probably because my dad is a power systems engineer and there were plenty of tools and hardware around in my youth. Part of the creative attitude is figuring out how to achieve a goal, figuring out how stuff works, and using readily available products and methods to speed up the process. When you are creative with digital technology, free software and free hardware designs are like oxygen. Quite notably, we had a presentation on the Creative Commons licenses in primary school by some expert, although I only recognized the importance of that moment many years later, after I had become aware of free software.
My technical development accelerated when I started my high school education, which offered theoretical and practical education, including the labs and the people. During my high school years a friend and I worked daily alongside the school’s technical assistants to help other students with their physics experiments, and to do our own in the process. On the software side I did get an informatics education covering the workings of computers, the MS Office suite, SQL and basic web development, but I was never taught about free software. I had a friend whose dad was an electronics engineer, and they used GNU/Linux at home. He showed it to me briefly, but I only considered the look of the desktop, even though he tried to communicate the difference in the underlying technology. All this time I was an MS Windows user, using any software as long as it satisfied my feature requirements and was free of cost.
It wasn’t until I was at university for my electrical engineering education that I became aware of GNU/Linux as something relevant. It was used in the embedded systems department, where it was more visible, and some students were experimenting with it. When I started investigating what Linux actually was, I was struck by the technical superiority and the overall better user interface. I started dual-booting GNU/Linux Mint and was pleased with it. Switching between GNU/Linux Mint and MS Windows daily did introduce some issues, so I was in need of a solution. A friend at the time, who was quite involved in the Dutch hacking community, was using Ubuntu as his daily driver. He convinced me to switch to Ubuntu and ditch MS Windows, and was a helping hand in getting past all the tiny problems. From that moment on I’ve only used a Windows VM to do some magazine design work in Adobe InDesign, as porting the template to Scribus wasn’t worth the effort.
More importantly, that friend, being a hacker, briefly introduced me to the concept of free software and why it was relevant. It didn’t take long before I found Stallman’s speeches and became aware of the vastness of the free software community. That was the moment I realized how much I had been restricted in the past, and how my own creative work had been taken away from me in proprietary data formats. I had falsely assumed that freeware was the software equivalent of sharing hardware plans, which followed from how little consideration I had given to accepting software licenses, or to considering alternatives because of the license. Becoming aware of free software changed my world view, reinforcing itself with every issue that arose. I unwillingly accepted the fact that I needed proprietary software to finish my studies, and sticking to free software certainly brought inconveniences. I have two illustrative examples from this struggle. I failed an exam partly because I had missed out on about half the formulas during the course revision, as LibreOffice wasn’t able to parse the PowerPoint file correctly. And I wasn’t allowed to use an alternative to Matlab, like Scilab, as a numerical computation suite, because the examiners during the test hadn’t been instructed about other software tools. In retrospect I believe my education would have been better if I had been introduced to free software and the community more explicitly.
Paul Boddie: Those of us with a software background sometimes look at electrical and hardware engineers and feel that we can perhaps get away with faults in our work that people making physical infrastructure cannot. At the same time, efforts to make software engineering a proper engineering discipline have been decades-long struggles, and now we see some of the consequences in terms of reliability, privacy and security. What do you think software and hardware practitioners can learn from each other? Has any progress in the software domain, or perhaps the broader adoption of technology in society, brought new lessons for those in the hardware domain?
Nico Rikken: Software, especially software running on a general-purpose processor, can be changed more easily. This especially holds true regarding scale: I might as easily modify the hardware of my computer as I might switch my software, but hardware changes don’t really scale. Although my view is limited, I believe hardware design can learn from software by having a more rapid and distributed development cycle, relying on common building blocks (like libraries) as much as possible, and adopting automated tests based on specifications. From a development standpoint this requires diff-based hardware development with automated testing simulations. From a production standpoint it requires small batches to be produced cost-effectively for test runs, and generic testing connectivity on the boards themselves. This encourages the use of common components, avoiding forced redesigns or high component sourcing costs. Or to put the latter statement differently: I believe hardware development can learn from software development that a certain microchip can be good enough, and that it is worthwhile to have fewer models covering a similar feature set, more like the UNIX Philosophy. The 741 operational amplifier is a great example of such a default building block.
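To make the idea of automated, specification-based testing concrete, here is a minimal sketch in Python. A simple behavioural model of a 741-style non-inverting amplifier stands in for a real circuit simulation (in practice the harness would drive a simulator such as ngspice); all names and figures are illustrative assumptions, not part of any existing project.

```python
# Minimal sketch: specification-driven automated testing of a hardware
# building block. A behavioural Python model stands in for a real
# circuit simulation; names and figures are illustrative only.

def non_inverting_gain(r_feedback: float, r_ground: float,
                       open_loop_gain: float = 2e5) -> float:
    """Closed-loop gain of a non-inverting op-amp stage (741-style),
    accounting for the finite open-loop gain of the device."""
    ideal = 1.0 + r_feedback / r_ground
    return ideal / (1.0 + ideal / open_loop_gain)

# The 'specification' is plain data, so one test run covers every variant.
SPEC = [
    # (R_feedback, R_ground, expected gain, relative tolerance)
    (9_000.0, 1_000.0, 10.0, 0.01),
    (99_000.0, 1_000.0, 100.0, 0.01),
]

def test_gain_meets_spec():
    for r_f, r_g, expected, tol in SPEC:
        gain = non_inverting_gain(r_f, r_g)
        assert abs(gain - expected) / expected < tol, (r_f, r_g, gain)

if __name__ == "__main__":
    test_gain_meets_spec()
    print("all gain stages within specification")
```

Because the specification is data rather than code, extending coverage to a new design variant means adding a row, not writing a new test, which is exactly the kind of reuse of common building blocks described above.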
I don’t see that much that software can learn from electronics development. I do, however, see points of improvement based on industrial design principles. These have to do with the way in which a product is meant to target a large audience, as a single design is produced numerous times. I personally view Dieter Rams’ principles of good design as the pinnacle of industrial design. They recognize the way in which a product is meant to target a wide audience and improve their lives. I consider them analogous to the UNIX Philosophy, and I especially believe that user interfaces should be developed with these principles in mind. Too often, interfaces seem to be an afterthought, or the levels of abstraction aren’t consistent throughout the program. I recognise there are projects highlighting the importance of usability, like GNOME, elementary OS, and LibreOffice. However, too often I encounter user interfaces I consider overly complex and badly structured.
Paul Boddie: In your article about smart electrical grids you talk about fifty year timescales and planning for the longer term. And yet at the same time, with many aspects of our lives changing so fast, we now have the risk that our own devices and data might become ephemeral: that we might be throwing devices away and losing data to obsolescence. How do you think anyone can sensibly make the case for more sustainable evolution in technology when it might mean not being able to sell a more exciting product to people who just want newer and better things? And can we learn things from other industries about looking after our data and the ways in which we access it?
Nico Rikken: The power distribution infrastructure is highly stable, with hardly any moving parts and a minimal level of wear. The systems are generally over-dimensioned, but this initial investment proves beneficial in the long run. This is very different from a computer, which is nearly irrelevant within five years as a result of evolving needs. Regarding the sustainability of our technology, I’d again look at industrial design. Mark Adams, the CEO of Vitsoe, the company based around designs by Dieter Rams, has given me great insights in this regard. He considers recycling a defeat, because it means a product wasn’t suitable for reuse. This originates from the original ethos of the company, requiring a mutual commitment between company and user to allow the company to sell fewer products to more people. Taking this coherent point of view, we have to make hardware modular and easy to repair or repurpose. I think we are heading in the wrong direction as a result of miniaturization, especially if we consider the downward trend in iFixit’s repairability scores.
I guess the other way of going about this is the way 3D printing and IKEA are taking on the issue of sustainability. Desktop 3D printing allows a filling factor to be defined to reduce the amount of material used. Of course this reduces the physical strength, but it allows material usage to be optimized; this is why 3D printed cars can be strong, light, and low on resources. And a plain 3D print can easily be recycled by shredding and melting, closing the material loop and requiring only tools and energy. IKEA offers modular furniture enabling reuse, but from experience I can say that it certainly shows if you’ve moved the furniture a couple of times. The counterargument is that the production process is continuously being optimized to be low on resources. IKEA’s BESTÅ seems to be the latest and greatest on this issue, being highly modular and made of hybrid materials: particleboard, fiberboard, honeycomb-structured recycled paper filling, a foil wrap and tiny plastic shelf supports. It is optimized for recycling at the cost of reusability, but I guess that better suits the way in which the majority buys and deals with furniture.
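As a rough back-of-the-envelope illustration of what a filling factor buys you (the volumes here are invented for the example, not measured data):

```python
# Back-of-the-envelope sketch: material used by a printed part at a
# given filling (infill) factor. All figures are illustrative.

def filament_volume(part_volume_cm3: float, shell_volume_cm3: float,
                    infill: float) -> float:
    """Solid shell plus a partially filled interior."""
    interior = part_volume_cm3 - shell_volume_cm3
    return shell_volume_cm3 + infill * interior

solid = filament_volume(100.0, 20.0, 1.0)   # 100 cm3 part, 20 cm3 shell
sparse = filament_volume(100.0, 20.0, 0.2)  # the same part at 20% infill
print(f"solid: {solid:.0f} cm3, 20% infill: {sparse:.0f} cm3 "
      f"({100 * (1 - sparse / solid):.0f}% material saved)")
```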
Taking this argument of sustainability towards electronics: being able to freely replace software is a prerequisite for making electronics long-lasting. This has dogged the Fairphone, despite its best intentions. We will have to protest anti-features as consumers, demanding formal legislation to protect our rights and the well-being of our society. Ideally we would go so far as to declare all patents and copyright regarding interfaces unlawful, to enable the use and (re)implementation of such interfaces even where they weren’t part of a formal standardization effort. The Phonebloks concept is also great in that it allows parts with different lifetimes to be combined, and components to be exchanged when requirements change, rather than having to replace the complete device.
Considering the specific question around data, or information in general, I have come to find my digital notes far less fleeting than my paper-based notes, because I can keep them at hand all the time and because I can query them. Keeping your own archives available requires the use of common open standards, as I’ve come to find. Some of my earlier creative work is still locked in proprietary formats I have no way of opening. Some of my work in office suite formats I can only open with some loss of detail, although this is getting better as projects like LibreOffice improve compatibility with proprietary formats. Thanks to libwpd, currently part of the Document Liberation Project, I was able to settle a dispute as secretary of a student climbing association, as the details of the agreement were only available in the WordPerfect format. In that regard I understand why printed documents are preferred for archival, and why most of the communication in the energy metering industry is still ASCII-based.
I do recognise how shallow the store of the digital commons is, especially regarding websites. Given the vastness of the digital media we all consume, I guess it is hard to store all data other than in a shared resource like the Wayback Machine, which fortunately offers a service for organizations. I also recently discovered the MHTML format for storing a website in a single file in an open format. I would think the digital dark age is somewhat exaggerated, given that most information produced throughout history was discarded anyway. However, for the information which is actually subject to archival, retrieving it from obsolete media or proprietary formats is a challenge which increases in complexity over time.
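As a minimal sketch of what MHTML looks like in practice, the following Python uses only the standard library to wrap a fetched page in the multipart/related container that the format (RFC 2557) is built on. A faithful archiver would also add each image and stylesheet as a further MIME part; the URL here is illustrative.

```python
# Minimal sketch: wrapping a fetched page in an MHTML (RFC 2557)
# multipart/related container, using only the Python standard library.
# A complete archiver would add images and stylesheets as further parts.

from email.message import EmailMessage
from urllib.request import urlopen

URL = "https://example.org/"  # illustrative target

with urlopen(URL) as response:
    html = response.read().decode("utf-8")

archive = EmailMessage()
archive["Subject"] = "Archived page"
# Header commonly used by MHTML writers to record the original location.
archive["Snapshot-Content-Location"] = URL
# add_related() turns the message into multipart/related, the container
# on which MHTML is based, and attaches the HTML as its root part.
archive.add_related(html, subtype="html")

with open("page.mht", "w") as f:
    f.write(archive.as_string())
```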
Paul Boddie: Another one of your hardware interests that appears to overlap with one of mine is that of photography, and you describe the industry standard of Micro Four Thirds for interchangeable lens cameras. Have you been able to take a closer look at Olympus’ Open Platform Camera initiative and the “OPC Hack & Make Project” or is it as unscrutinisable for you as it is for me?
Nico Rikken: Coming from an advanced compact camera, it took me quite a while to select the system camera I desired, because I was very aware I was going to buy into a lock-in. The number of technical differences related to the various lens mounts was quite interesting, and I came to the conclusion that I wanted to have as many technical solutions available as possible when using manual lenses. In a way, the best option for compatibility would have been the approach of the Ricoh GXR: making the interface between body and lens purely electronic. In this way the optical requirements are separated, and all components can be updated in case the interfacing standard changes.
Ultimately I believe the optical side of the system will be kept to a minimum, because digital information can more easily be manipulated, even after the fact. I realized this with regard to focusing: contrast-based focusing can now be faster than phase-based focusing, with the benefit of various focus-assisting technologies, which can be displayed both on the rear display and in the viewfinder. A DSLR cannot offer focus-assisting technologies through its optical viewfinder, and the contrast-based focusing it needs in live-view mode is significantly slower, if only due to the different lens drive. On the more innovative side, the Lytro is about more than correcting focus afterwards: it opens up new ways for creative expression by changing perspective in a frozen shot. It is another innovative way of doing cinematography, like putting cameras on cranes or drones, or the famous ‘bullet time’.
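As a minimal sketch of the principle behind contrast-based focusing (the idea only, not any camera’s actual implementation): step the lens through focus positions, score each live-view frame with a contrast metric, and keep the position with the highest score. The capture_frame function below is a hypothetical stand-in that simulates defocus blur.

```python
# Minimal sketch of contrast-based focusing: sweep the focus position,
# score each frame by a contrast metric, keep the sharpest.

import numpy as np

def contrast_score(frame: np.ndarray) -> float:
    """Mean gradient magnitude: sharper frames have stronger local edges."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def focus_by_contrast(capture_frame, positions):
    return max(positions, key=lambda pos: contrast_score(capture_frame(pos)))

# Toy stand-in for a live-view feed: horizontal box blur that grows with
# the distance from the 'true' focus position of 42.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))

def capture_frame(pos, true_focus=42):
    width = abs(pos - true_focus) + 1
    kernel = np.ones(width) / width
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, scene)

print(focus_by_contrast(capture_frame, range(30, 55)))  # prints 42
```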
So, regarding the Open Platform Camera initiative based around the Olympus Air, I believe it is a step forward for digital interoperability. Having an API available, rather than just image files, opens up new capabilities, but I would think a physical connector with the option of a power adapter would have been better, as it allows more direct control and avoids having to recharge the batteries all the time. In that regard I believe enabling the API on current cameras would be more beneficial, because I don’t believe the form factor is actually holding people back from adopting it in their projects, considering the creations from the OPC Hack & Make Project Party in March. I assume the main drivers for the open approach are media attention, image building, testing potential niche markets, and probably selling more lenses. According to Wikipedia, 11 companies have formally committed to Micro Four Thirds (MFT), and judging from the available lenses, even more companies offer products for the system. In that regard it seems to be the most universal lens mount standard available.
If I understand correctly, Olympus is one of the major patent holders regarding digital photography, so I’m curious about the extent to which they exercise their patents through licensing. As a standard, MFT is said to be an extension of the original Four Thirds specification, which is described as highly mobile, 100% digital, and an open standard, but apparently they have a different standard of openness, as the same page mentions: “Details of the Four Thirds System standard are available to camera equipment manufacturers and industry organizations on an NDA basis. Full specifications cannot be provided to individuals or other educational/research entities.” Whether or not this includes license agreements regarding the standard we don’t know, but either way you’d have to start or join an imaging company to find out. Maybe the AXIOM Gamma camera will provide the needed information in its MFT module, although I doubt that will happen, as a result of the NDA. Considering the number of companies working with MFT, I guess the standard is effectively open, except for individuals and educational or research entities. Luckily, Lasse Beyer and Marcus Wolschon have done work to reverse-engineer the electronic protocol.
Paul Boddie: Do you think established manufacturers can be encouraged to produce truly open products and initiatives or do you think they are genuinely prevented from doing so by legitimate concerns about current or potential competitors and entities like patent trolls?
Nico Rikken: I hardly think so. They have a vested interest in keeping a strong grip on the market for targeting consumers, and losing the NDA means losing that grip. The Open Platform Camera initiative by Olympus seems to be a step in the right direction; now let’s hope they see the benefit of truly opening up the standard. That would benefit niche applications like astrophotography, book scanning, photographing old negatives, lomography or IR photography. All these types of photography have specific requirements for filters, sensors, focusing or software, and opening up the specification would lower the barrier to adopting these features.
Paul Boddie: Could you imagine Micro Four Thirds becoming a genuine open standard?
Nico Rikken: A motive for opening up the standard can be created using both a carrot and a stick. The carrot approach would be to complete the reverse engineering of the protocol and show which applications could benefit from an open standard. The stick approach would be to introduce an open pseudo-standard covering mechanical and electronic connectivity. Ideally such a standard would sit between a mirrorless interchangeable-lens camera (MILC) and larger lenses, to allow multiple lenses to be connected to multiple bodies. As adapters start popping up for such a standard, MFT’s reputation as the universal lens mount would be threatened. I haven’t looked into the serial protocols of the various lens standards, so I’m not aware of how easy it would be to pull off a universal lens mount. To me, a sensor-based stabilized telescope would be a great test case for reverse engineering the standard and enhancing the camera body for the benefit of the user.
Paul Boddie: You have written about privacy and education a few times, occasionally combining the two topics. I was interested to see that you covered the Microsoft Outlook app credentials-leakage fiasco that also affected users at my former (university) workplace, and you also mentioned how people are being compelled to either use proprietary software and services or be excluded from activities as important as their own education. How do you see individuals being able to maintain their own privacy and choice in such adverse circumstances? As organisations seek to push their users into “the cloud” (sometimes in contravention of applicable laws), what strategies can you envisage for Free Software to pursue to counter such threats?
Nico Rikken: I assume these solutions are introduced with the best intentions, but they bring negative side-effects regarding user freedom. Being required to accept licences from organizations other than the educational organization itself should be considered unacceptable, even implicitly via a school policy. Likewise, third parties having access to personal information, including communication, should be unacceptable. Luckily some universities are deploying their own solutions: for example, universities in Nordrhein-Westfalen and the University of Saskatchewan deploy solutions based on ownCloud, which is one of the ways external dependencies can be avoided. Schools should offer suitable tools with open interfacing standards for collaboration, preventing teams from adopting non-free solutions under social pressure. Using open standards and defaulting to free software is obvious. To avoid unnecessary usage information being generated, all information resources should be available for download, ideally exposed via an API or a web standard like RSS for inclusion in common client applications.
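As a minimal sketch of the kind of consumption this enables, the following Python pulls items from a course-announcements RSS feed using only the standard library, so the material lands in whatever client the student prefers. The feed URL is invented for the example.

```python
# Minimal sketch: reading course announcements from an RSS 2.0 feed
# with the Python standard library. The feed URL is hypothetical.

from urllib.request import urlopen
import xml.etree.ElementTree as ET

FEED = "https://courses.example.edu/announcements.rss"  # hypothetical

with urlopen(FEED) as response:
    tree = ET.parse(response)

# RSS 2.0 places each entry in an <item> element under <channel>.
for item in tree.iterfind(".//item"):
    title = item.findtext("title", default="(untitled)")
    link = item.findtext("link", default="")
    print(f"{title}\n  {link}")
```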
But this is wishful thinking: I’m aware that current policies are weak, and that even those policies aren’t adhered to. Simply put, if you want a formal education you have to accept that your freedoms are violated. The impact can be minimized by continuously protesting the use of non-free software and of Service as a Software Substitute (SaaSS). I’ve come to find that most of the time teachers don’t care that much about the software used; they just know the common proprietary solution. Having some friends to pass along information or convert documents can further reduce observability. Things get particularly difficult if no alternatives exist, or if non-free formats or templates are required.
An alternative way of getting educated is by taking part in Massive Open Online Courses (MOOCs). It seems to be the most promising way out, as content is offered according to open standards. The availability and reusability of the content is limited depending on the licenses, but the same holds for most educational institutions. Then there is the amount of monitoring involved, but most MOOCs allow pseudonymity unless you desire an official certificate. Assuming you use a VPN service, or even Tor, this offers an unprecedented level of anonymity. Just compare this to the IT systems of educational organizations, dominated by non-free software and combined with the vast number of registered personal details and campus cameras. Whether or not MOOCs can replace a formal education in the coming years I don’t know, nor do I know how corporate organizations will judge MOOC-taught students.
Many thanks to Nico for answering our questions and for his continuing involvement in the Fellowship of the FSFE.