Communicating freely


Archive for the ‘Uncategorized’ Category

Doing the European cafe thing

Thursday, October 12th, 2006

I’m sitting in a cafe in central Zurich.  I’m about two minutes from the new FSFE office.  It’s quite a change of scene from Ireland.

I moved to Switzerland to begin working as a project coordinator for the Free Software Foundation Europe at the end of September.  We’re doing some exciting new things to protect and promote Free Software in the European arena and the Zurich office will be in the middle of all the adventures.

Free Software has come a long way in the last few years.  The Linux kernel is powering a heck of a lot of computers and GNU/Linux operating system distributions are being deployed with increasing frequency.  Perhaps the most important change is political; Free Software is entering the mainstream.

I feel that 2006 is something of a turning-point for Free Software.  A new license is on the way (GPLv3).  Cities are transitioning to our technology.  Big vendors like IBM, Sun and Novell are increasing their commitments to our success.

Sure, not everything is rosy.  There is still bickering on the sidelines about things like ‘Free Software’ vs ‘open source’ and GPLv2 vs GPLv3.  Flame wars still appear on sites like Slashdot.  But I think if we look at the bigger picture things are going very well.

One thing I’ve been talking about for a while is the increasing maturity of the Free Software ecosystem.  That is, the extension of Free Software into realms far removed from the hackers who often created it.  I believe this extension is particularly important in the political and business spheres.

We are seeing the emergence of a ‘professional’ layer in the Free Software world: managers, salespeople and marketing experts.  I think this is an excellent and necessary step towards ensuring the long-term adoption of Free Software by society as a whole.  The concept is permeating beyond its creators and entering social consciousness.

Yes, these are exciting times.

I’m delighted that I am participating in these times through the Free Software Foundation Europe.  We have a great deal of work to do.  It’s going to be challenging and it’s going to be fun.

I will try to document my adventures through my blog as often as possible (and my apologies for the relative neglect in the last few weeks).  If you need to contact me you can do so through email (coughlan@fsfeurope.org, shane@shaneland.co.uk).  In the meantime – if you have a moment – I’d love to hear your comments on where you think Free Software will go in the next twelve months.

A day with the boys from GLLUG

Wednesday, May 3rd, 2006

Recently I was lucky enough to speak at the Greater London Linux User Group (GLLUG), a vibrant LUG right in the centre of London.  The meetings are held at Westminster University in a well-equipped lecture theatre (no less than two projector screens lurk over the heads of the speakers), and the ‘amphitheatre’ seating allows everyone to get a good view of the proceedings.  An additional benefit is that it’s possible to pelt bad speakers with fruit, paper and Microsoft CDs without undue effort.  Gravity does most of the work.

I was talking about Free Software, the Free Software Foundation Europe, and why things like GPLv3 are important.  That might sound like preaching to the converted – I was addressing a LUG after all – but we have lots of controversial topics on our collective community plates.  DRM is hovering around like an annoying bee.  Patents loom and loom again, threatening innovation in development.  Fights over terms like ‘Open Source’ and ‘Free Software’ cause bickering among friends.  If ever there was a time to reassess where we are, and where we are going, it’s now.  This is the moment to pull together and share a common agenda.

The GLLUG crowd was both sizable and enthusiastic.  I had been a tad worried that people wouldn’t contribute to the conversation, but the opposite was true.  Intelligent, piercing questions appeared from all directions.  People wanted to know about business models, development models, challenges facing us, and the need to innovate.  It’s a genuine pleasure to engage with an audience who have things on their minds, and are not afraid to talk about them.

I’m loath to label anything as a ‘favourite’ topic, but one strand of our conversation really grabbed me.  A gentleman from the BBC introduced an issue regarding the deployment of GNU/Linux and other Free systems in corporate environments.  Take-up is often sluggish at best, and often a non-starter.  Is it because upper-management don’t know about Free Software?  Is it because of institutional inertia?

My own opinion is that cracking corporate environments is a substantial and multifaceted challenge.  Many factors contribute to reluctance on the part of managers to adopt our system.  One such factor  is the lack of technical education on the part of decision makers.  Another factor is institutional policy.  However, I believe the larger issue runs deeper than this, and actually goes right to the core of what we do.

Free Software development models do not always inspire corporate confidence.  That’s not unnatural.  We are hardly corporate in our approach!  Free Software is often developed in non-hierarchical, casual groups, with few nominated managers, vague schedules, and even vaguer accountability.  That’s a pretty scary proposition for a business thinking of using our tools: they are often afraid that projects will not deliver, will not continue to evolve, or will not be supported long enough for a deployment lifespan to be completed.

Sure.  Red Hat.  I hear you.  There are companies that support GNU/Linux solutions, and they are not going to vanish overnight.  IBM is a fairly heavyweight asset to have on our side.  The problem is that the presentation of GNU/Linux solutions is not as mature as that of the proprietary solutions that have had time to prove themselves in the eyes of non-technical business managers.

Things are changing.  I love the IBM site about GNU/Linux solutions, because I think it’s really beating Microsoft at their own game.  Year by year GNU/Linux is gaining respectability and sophistication.  It even steamrolled Sun into releasing Solaris under an open source license.  We are getting to the point where our Free business models are matching the closed business models of the old guard.  However, we must not be complacent.  No matter how good our technology is, it counts for little if we are not competitive with the market leader.  Look at what happened to BeOS.

It’s an interesting discussion thread.  It might not keep the conversation going at frat parties, but it does generate a buzz at LUGs!

Later in the day another gentleman in the audience took me aside.  He also used to work for the BBC, and he described how the community environment of the corporation was ruined by the introduction of aggressive management policies.  In the old days the BBC departments used to pool ideas and resources.  However, now departments have been encouraged to stop helping each other, and to instead compete for increasingly limited budgets.  This set me thinking.  I mentioned earlier how issues like ‘Open Source’ vs ‘Free Software’ have led to bickering in our community.  Is there a danger that as Free Software gets more corporate we’ll become like the modern BBC?  Competitive, uncooperative nuclear groups fighting to prove our worth in comparison to each other?

I don’t think so.

In Free Software these days I see more cooperation than ever.  Look at GNOME and KDE.  In the late nineties there was a lot of tension and (dare I say it?) FUD regarding our two main desktop environments.  These days the dragon and the foot are not only buddy-buddy, but they are working together to develop shared paradigms and innovative ideas about how user productivity can be increased.

Programmers and software architects may be annoying, eccentric, and occasionally unwashed, but they are as smart as heck.  We all have our reasons for spending ridiculous hours in front of computer screens, and these reasons have little to do with shallow egotistical goals.  

GLLUG is pretty much proof of this.  People asked me questions like “what do you think can be done to make Free Software more competitive?” That type of question sees me giving answers that are perhaps too business orientated for some tastes.  I like productivity analysis, quantitative research, and the application of structured frameworks.  However, instead of ending up in flame wars, we all shared different ideas, and mulled over problems with one goal in mind: finding ways forward that will work.  I think that’s why our software works, while Microsoft end up patching a sinking ship (XP) while the mythical Vista slips further away.

One of my favourite moments at the GLLUG meeting was the talk after mine.  Simon Morris was presenting a rather nice PBX system called Asterisk, and things got technical.  It was fun to get down to the real hardware and software after spending such a long time talking about politics and ideology.  I love to learn about technologies, and on this occasion I had the opportunity to uncover a completely new patch of the ICT wonderland.  I had never realised that PBX systems were so easy to set up, or so flexible when it comes to configuration.  It’s a shame I only have one telephone in my house.

After the talks were done, GLLUG as a whole drifted out into the sunlight, and we promptly vanished into the dark basement of a pub to drown our sorrows and talk about geek stuff.  There was a gent with a beard (I apologise if you are reading this, but I don’t think we were ever properly introduced), and we ended up chatting about quantum encryption.  The photon encryption we are playing with now is interesting, but the hardcore geeks among us want to know when the quantum entanglement toys will be ready.

What a day.

If you are ever in London, do drop by the Greater London Linux User Group.  It’s a great example of how rich and diverse a LUG can be.  I had a wonderful time speaking, listening and sharing a pint.  Thank you to everyone for making my all-too-brief visit such a pleasure.  I’m already looking forward to going back.  

The Mobile Office

Wednesday, April 19th, 2006

There is a gross inequality in the distribution of empowering Information and Communication Technology (ICT).  Access to productivity and communication solutions is currently the domain of the richest one sixth of the world, with the remaining five sixths resolutely disenfranchised with regard to personal computing, mobile communication, and instant processing of information.  This is the ‘Digital Divide,’ an unnecessarily damaging situation where the people who most need productivity solutions are unable to obtain them.  In effect, the vast majority of the human race is condemned to prolonged poverty and inefficient economic, political and social solutions due to neglect and a lack of effort with regard to sharing technology.

There are some positive initiatives underway to introduce modern ICT into developing nations.  Perhaps the most successful are the campaigns to recycle mobile phones for use in under-privileged nations, thus giving disparate populations access to rudimentary voice communication.  More ambitious projects exist too, ranging from serious efforts to recycle old personal computers through to the $100 laptop being advocated by Nicholas Negroponte of the MIT Media Lab.

This paper wishes to make a further contribution to positive suggestions for ICT empowerment and a reduction in the Digital Divide.  The basic premise is that it might be possible to utilise and extend existing technologies to provide functional systems for ICT in developing nations.  The method for incremental empowerment utilises a technology that is already deployed in developing nations: the mobile phone.

Mobile phones provide two essential elements of ICT infrastructure.  They are connected to a digital network, and they have processing capacity.  Newer telephones are capable of running applications, and have rudimentary support for email and Internet communication.  Apart from three inherent design limitations (small input devices, limited screen size and limited memory), they provide an excellent ICT front-end.

We will address the limitations of mobile devices, firstly examining the critical issue of hardware.  Mobile phones don’t have keyboards that are suitable for typing, and they don’t have screens suitable for running productivity applications.

There is a preexisting solution for enabling the connection of keyboards to mobile phones.  It is called a USB port, and many mobile phones already have these ports for things like PictBridge (a way to print directly from phone to printer).  A simple USB connection and a little bit of tweaking of the internal hardware design of the mobile phone would allow a user to connect a standard PC keyboard to the device.

The second hardware issue is a little bit more difficult to resolve.  The limited screen size of mobile phones is an engineering reality and cannot be avoided.  To increase screen real estate requires either increasing the physical size of the mobile phones, or implementing an extension to the mobile phones that will allow images to be output to external devices such as television sets.

The first option is not viable, as increasing the size of the mobile phones would reduce their utility as portable telephones, and the larger physical screen size would dramatically reduce battery life.  In effect, they would no longer be mobile phones.  They would be small notebook computers with ring tones.

The second option is viable, but requires a certain level of commitment from manufacturers to introduce an output port  on all new mobile phones.  The port would be designed to work in conjunction with a special cable to allow connection from the mobile phone to the aerial input on a television set.  This would provide the mobile phone with a display area of 720×480 on NTSC televisions and  720×576 on PAL televisions.

This mobile phone output port would  provide little utility in developed nations outside of slide-shows and video screenings, but would act as an essential hardware extension for turning modern mobiles into fully-fledged ICT solutions in developing nations.  People in developing nations already have access to television sets, and increasingly they have access to mobile phones.

With a standard PC keyboard, a mobile phone, a television set and cable to connect the phone to the TV set, a complete physical networked ICT system would exist and depend only on the software to power it.

We must now address the second key limitation of mobile phones.  Mobile phones are low power devices designed for relatively limited computational tasks.  They have limited memory, limited ability to process complex tasks, and a finite battery life.  Any attempt to extend a mobile phone’s computing remit needs to take these factors into account.

This paper’s key suggestion is that a carefully designed series of computational tasks can be accomplished by the intelligent use of preexisting mobile phone technology and dynamically loaded software modules.  It sounds complex, but it is really just about intelligent use of things that already exist, or can be easily created.

In traditional computing we think of applications as big programs that do something.  Microsoft Word is a big application that runs and allows you to create a document.  It can spell check the document, it can print the document.  It can even help you put a movie in your document.  It does a lot, but it’s big.  It uses a lot of space on your computer and it takes up a lot of memory and processing power.  Precisely because Microsoft Word has a lot of capabilities, it requires a lot of resources.

Mobile phones simply don’t have a lot of memory or processing power, and therefore they cannot run an application like Microsoft Word.  To get a mobile phone to run an application means rethinking how we make an application.  Instead of thinking about an application as a completed concept (a big tool that allows people to edit documents), we need to think of an application as lots of tiny little ‘services’ that are called to do something when they are needed.  When the services are not needed, they just switch themselves off.

If we are going to be technical about it, we would say that services need to be incrementally loaded modular components designed to accomplish certain tasks without overloading the processing or memory capacity of the mobile phone.  

Mobile phones already do this to an extent, especially in their support for lots of different types of communication.  Modern mobile phones can send email (we call this POP and SMTP support), they can access the Internet (with what is called a network stack) and they can save information to little memory cards.  Mobile phones don’t do all of these things at once.  In fact, to preserve memory, processing power and battery life, mobile phones usually do one thing at a time.  Therefore mobile phones already have a lot of services that can be called when they are needed.

We are going to extend that concept to office productivity applications.

A productivity application is actually a lot of different services that are used by the person trying to be productive.  Some of these services are critical, but most of them are what might be termed optional.  You don’t need these services to be productive, but you might want to be able to call them.  For example, you don’t need a spell checker to type a letter, but you might want to use one after you have drafted the letter.  The spell checker is a service, and we don’t need to load it into memory until you specifically say “check my spelling now.”

If you reconceptualise an application as different services you can quickly create a map of how resource usage can be minimised.  Certain services, such as drawing text (text rendering), understanding user input (keyboard driver) and outputting images to the screen (display driver), must be loaded at all times, but other services for saving documents, sending emails or checking grammar can load dynamically according to task requirements.  An example of how this works is below:

Word Processor/Email client
           |
Text and HTML render engine
Dictionary service (inactive)
Saving/sending service (inactive)
           |
Java Virtual Machine
           |
Mobile OS
Mobile SMTP/Network stack (inactive)

This looks awfully complex, but it’s not really.  

At the top of the pile there is a ‘Word Processor/Email client,’ which is a hypothetical application front-end.  That’s what the user of the mobile phone sees.  It looks like a word processor that can also send emails.  Below that there is a ‘Text and HTML render engine,’ which draws the text on the screen.  Beside this there are other things like a ‘Dictionary service’ for spell checking and a ‘Saving/sending service’ that can help the end user save or email a document.  These all run on a ‘Java Virtual Machine,’ which is an engine to run applications.  This is already on most mobile phones, and it can be understood as a really powerful way to run programs on any phone or computer.  At the bottom there is the ‘Mobile OS,’ (Operating System) which is the heart of the phone.  It controls the basic functions of the phone, and stops and starts things like the Java Virtual Machine.  The ‘Mobile OS’ works with the ‘Mobile SMTP/Network stack’ to allow communication on the Internet or through telephone calls.
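The load-on-demand idea at the heart of this design can be sketched in code.  The following is a minimal, hypothetical Java sketch (the `Service` and `ServiceRegistry` names are my own invention, not an existing phone API) of a registry that creates a service only when it is first called, and can unload it again to free the phone's limited memory:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch: any modular service takes some input and does one task.
interface Service {
    String perform(String input);
}

// A registry that creates a service only when it is first requested,
// and can release it again so it stops consuming the phone's memory.
class ServiceRegistry {
    private final Map<String, Supplier<Service>> factories = new HashMap<>();
    private final Map<String, Service> active = new HashMap<>();

    // Register a factory for a named service; nothing is loaded yet.
    void register(String name, Supplier<Service> factory) {
        factories.put(name, factory);
    }

    // Load the service on first use; later calls reuse the loaded instance.
    Service call(String name) {
        return active.computeIfAbsent(name, n -> factories.get(n).get());
    }

    // Unload the service so it no longer occupies memory.
    void release(String name) {
        active.remove(name);
    }

    boolean isLoaded(String name) {
        return active.containsKey(name);
    }
}
```

A front-end would register factories for things like the dictionary or the saving/sending service at start-up, call one when the user asks for it, and release it as soon as the task is done.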

Let’s reverse direction, and go through that again.

At the very bottom of the example there is a preexisting Mobile OS running a simple Java Virtual Machine (Java VM).  The mobile has a SMTP/Network stack which is inactive, but can be called by the Mobile OS when required.  This means that the network stack is consuming no resources when it’s not needed.

The Java VM is running a lightweight Text and HTML render engine.  This can render (show) text and standards-compliant (X)HTML.  Given current standards, perhaps ‘UTF-8’ (text standard) support and ‘XHTML 1.0 Transitional’ (web language standard) support would make the most sense.  Because resources are limited, even this render engine might benefit from being modular.  This would mean loading support for common rendering tasks into memory by default, but leaving out special sub-services that support unusual XHTML tags or the like.  The render engine could accomplish this by loading the remaining rendering sections only when it comes across an unknown XHTML tag.
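One way to picture that modular render engine is the sketch below.  It is purely illustrative (the tag lists and class name are assumptions of mine, not a real engine): the engine starts with a core set of tags and pays the memory cost of full tag support only when it meets a tag it does not know.

```java
import java.util.Set;

// Hypothetical sketch of a modular render engine: core tags are loaded by
// default, and the full tag table is loaded only on demand.
class RenderEngine {
    private Set<String> knownTags = Set.of("p", "br", "em", "strong", "a");
    private boolean fullTagsLoaded = false;

    String render(String tag, String text) {
        if (!knownTags.contains(tag) && !fullTagsLoaded) {
            loadFullTagSupport(); // pay the memory cost only when needed
        }
        return "<" + tag + ">" + text + "</" + tag + ">";
    }

    private void loadFullTagSupport() {
        // A real engine would pull extra rendering modules off storage here.
        knownTags = Set.of("p", "br", "em", "strong", "a",
                           "table", "object", "abbr");
        fullTagsLoaded = true;
    }

    boolean fullSupportLoaded() {
        return fullTagsLoaded;
    }
}
```

Rendering a paragraph never touches the extra modules; the first table or other unusual tag triggers the one-off load.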

Beside the Text and HTML rendering engine there are inactive services.  These could include a dictionary service and a lightweight service for saving or sending the completed text and HTML files.  These services would be wholly inactive until called, and when called might assume temporary control of the environment (pausing the user front end) to accomplish their task without memory or processor overload.  This would have the advantage of minimising memory usage with the disadvantage of only allowing one task to be accomplished at a time.

The dictionary service is a fairly straightforward concept.  A dictionary file would exist on the mobile phone, and so would a service to take the user input (what is written on the screen) and compare it to the dictionary file.  The saving/sending service is more abstract.  This would be a service designed to save the user input from the screen (the Random Access Memory; RAM) to the main memory (like a MMC card), or to take the user input from the screen and send it as an email through the SMTP/Network stack.  
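The dictionary service really could be as simple as a word-list lookup.  Here is a minimal sketch under stated assumptions (the class name is hypothetical, and the word list is hard-wired for illustration; a real phone would read it from the memory card):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the dictionary service: split the user's input into
// words and report any word that does not appear in the dictionary file.
class DictionaryService {
    private final Set<String> words;

    DictionaryService(Set<String> words) {
        this.words = words;
    }

    List<String> misspelled(String input) {
        List<String> unknown = new ArrayList<>();
        // Lower-case the input and split on anything that is not a word character.
        for (String w : input.toLowerCase().split("\\W+")) {
            if (!w.isEmpty() && !words.contains(w)) {
                unknown.add(w);
            }
        }
        return unknown;
    }
}
```

The service would only be loaded when the user asks for a spelling check, run once over the text on screen, and then be unloaded again.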

The top of the modular application framework is the application front-end.  This would be a simple window with basic menu items (used to call the other modular services), and a canvas for user input.  In most cases input would be textual (word processing, email, spreadsheet, database), but there is room for graphical input as well.  It all depends on what services are running or can be called by the application front-end.  

The application front-end would actually be pretty light, as all application ‘actions’ would be separate modules loaded dynamically as needed.  The text and HTML render engine and any (optional) graphic rendering engines would also exist below the front-end, providing essential services to all aspects of the user interface without requiring a large overhead.

By having a well designed modular framework such as this it should be possible to give the appearance of a complete network aware office suite using the limited resources available in a mobile phone.  Best of all, because almost all services would be shared easily and openly, application development time would be dramatically reduced.  Once primary services like the text and HTML render engine existed, creating a new application would largely consist of making a front-end and linking it to the services underneath.  Adding a new service (like video decoding or a random number generator) would provide new functionality for all front-ends with a minor update.

It’s all about cooperation, and sharing.  The mobile office would actually not be an application or an application suite running on a telephone.  It would be loads of little services calling each other when needed, and disappearing into the background when not required.  At least in theory this could allow for quite complex ICT tasks to be accomplished with a very modest computational device.

All we need to build are mobile phones with two ports, some simple cables, and some smart software services.  We are talking about a complete networked productivity solution.  It’s not a solution for the developed world, but in Africa, South Asia and Latin America it could make a massive difference.  New phones for the rich, recycled phones for the poor.  The devices would spread along existing channels, and supplement existing technology without requiring massive investment, training or logistics.  Existing networks would carry the signals.  Existing knowledge would empower the users.

$100 laptops are a great idea for the future, but people have mobile phones and TVs now.  With a little effort we could really make that count for something, and take another step towards closing the Digital Divide.

UK ‘Intellectual Property’ consultation process

Monday, April 17th, 2006

The UK government has called for public comment on certain copyright and patent issues.

http://www.hm-treasury.gov.uk/media/978/9B/gowers_callforevidence230206.pdf

The deadline for submitting your comment is the 21st of April.  I have just sent in my comment, and reproduce it below for reference:

=== Shane’s comment ===

As a fellow of the Free Software Foundation Europe, an associate of the Free Software Foundation, a member of the Open Source Academy and a member of numerous Free Software development teams, I hereby wish to register my concern regarding the current review of what you term “Intellectual Property Rights.”

Firstly, the term ‘Intellectual Property’ is in itself problematic.  To quote from an article on the Free Software Foundation website: “The term "intellectual property" operates as a catch-all to lump together disparate laws.  Non-lawyers who hear the term "intellectual property" applied to these various laws tend to assume they are instances of a common principle, and that they function similarly.  Nothing could be further from the case.  These laws originated separately, evolved differently, cover different activities, have different rules, and raise different public policy issues.  Copyright law was designed to promote authorship and art, and covers the details of a work of authorship or art.  Patent law was intended to encourage publication of ideas, at the price of finite monopolies over these ideas, a price that may be worth paying in some fields and not in others.  Trademark law was not intended to promote any business activity, but simply to enable buyers to know what they are buying; however, legislators under the influence of "intellectual property" have turned it into a scheme that provides incentives for advertising (without asking the public if we want more advertising).”  (http://www.gnu.org/philosophy/not-ipr.xhtml)

I am uncomfortable with the use of a blanket term to cover issues that are inherently different, and I wish to have that noted.

On page 1 of your call for evidence there is also a problematic sentence.  It suggests that the state “must ensure that IP owners can enforce their rights through both technical and legal means.”  The state’s remit is that of legal jurisdiction.  The adoption of technological enforcement of copyright, patent or trademark claims as an extension or supplement of legal jurisdiction is untenable.  It is not the state’s place to enforce or sanctify technological limitations on hardware or software.  It is the state’s role to provide a legal framework for just regulation.  By allowing for the confusion of legal and technological jurisdiction, the state permits the existence of limitations on end user experiences that go far beyond the enforcement of just rights.  The most worrying example of this is termed Digital Rights Management (DRM), and falls within the remit of the call for evidence under the section entitled ‘COPYRIGHT – DIGITAL RIGHTS MANAGEMENT.’

Digital Rights Management is an unfair extension of legitimate copyright terms to cover all aspects of use.   It is about restricting how and when people can use copyrighted files.  It may restrict how many times people can play something.  It may restrict people’s ability to share something.  It may restrict the method people employ to consume something.  It is about allowing companies to determine how the end user will experience the copyrighted material that the end user purchased.

Ultimately DRM means that when a person buys a copyrighted file, they don’t actually have unrestricted permission to use it.  The file containing the DRM-protected information won’t necessarily work on another computer, or on a mobile phone, or on a PDA.  It’s a bit like selling a book designed to be read in the living room, but with a limitation preventing it being read in the bath.  Instead of finding a way to stop the end user giving illegitimate copies of a work to other people, DRM is about controlling the right to copy the work for any purpose, and in the process it determines the end user’s consumption methods and options as well.

DRM creates a completely new way of controlling how people access information and sanctions corporate control in what was previously a very private sphere.  DRM will allow the companies who create DRM, and the companies that own the content, to control digital networks.  It is an unappealing thought, and governments will be disenfranchised along with citizens.  At best DRM is a misguided attempt to solve a legal concern through a technological arena.  At worst it is a wholly unfair attempt to control how people can access or use copyrighted material, regardless of historical precedent or fair access rights.

DRM is unnecessary.  Governments already stop people sharing copyrighted material through copyright law.  Existing copyright law is applicable to books that you can hold, and books on your computer.  It applies to music, movies and software.  There is no place nor fair justification for any extension of this law through technological limitations and controls.

On a final note, I wish to add that your call for evidence does raise valid points.  In particular, on page 2 it is suggested that while “patents provide a vital incentive for innovation, the granting of overly broad patent protection, together with restrictive or restricted licensing of IP, can impede the development of the next generation of products and reduce competition.”  This is undoubtedly true, especially in an area critical to the sustainability of the European economic sphere: software development.  The United States of America currently sanctifies broad software patents and innovation is distorted because of this.  Examples of obvious, overly broad or misguided patents abound, with perhaps the most famous being Amazon.com’s patent on its “One-Click” purchasing system.  Such a patent is obvious, and dependent only on an authentication system and a user log-in identity.  While an argument exists for the ability to patent an innovative invention, the use of standard technology to facilitate an obvious and abstract service is unacceptable.  Software patents, covering an area without tangible products and where standardised tools are used to create new applications, are unjust for both innovators and end users.  Software should properly be covered by copyright law, not patents.  There is an excellent article on this matter on The Guardian website (http://technology.guardian.co.uk/online/comment/story/0,12449,1510566,00.html).

Thank you for reading my thoughts on this matter.  I look forward to following this consultation process closely.

Yours

Shane M.  Coughlan

(Detached Digital Signature enclosed with original email)

DRM, ‘Trusted Computing’, and the future of our children

Tuesday, March 28th, 2006

There is a war being fought over the future of digital technology and there is a serious reason for conflict. You see, computers allow people to copy things, and digital copies are always perfect. You can make as many copies as you want. In the past you could buy a high quality version of something like a DVD. Only one person could use this DVD at a time. You could lend the DVD to another person and get it back a week later, but that meant you would not have it in the meantime. Copying, by necessity, was limited. But this has changed. With digital technology you can give a perfect digital copy of a movie to ten million people at zero cost, and still keep the original for yourself.

If people can share an unlimited number of digital products at zero cost, there is no reason for people to pay for such products. This has the potential to turn the economic foundation of publishing on its head. Sharing information takes on a whole new meaning in the digital realm.

The traditional publishers want to prevent people using technology to copy things and are promoting a restrictive system called Digital Rights Management (DRM). These people believe that the Internet and the PC are a threat to the economic foundation of publishing media. They believe that if they don’t find a way to regulate how people use computers, their revenue streams will cease to exist. They say that they need to be able to prevent the copying of copyrighted work.

Other groups of people insist that sharing is important, and reject DRM because it means allowing private companies to have control over personal computers. These people believe that networked and free communication is empowering the disenfranchised. They believe that people will pay for things legitimately most of the time and that the traditional publishers want to encroach on an area of personal freedom in a way that goes against both the precedent of law and copyright history. They say that people have always had the choice to obey or disobey copyright law and the onus is on the publisher to prove breaches of copyright.

Microsoft, Intel, Apple and the entire music and movie industry of the USA stand on one side and thousands of programmers, technology organisations and academics on the other. Both sides have something important to say, and both sides are refusing to listen to the other.

Over 500 million people use the Internet, and over a billion computers are in use around the world. It has become impossible to ignore the issue of content management and access. Call it Digital Rights Management (DRM), if you will, or call it working out how to manage copying in the digital realm. We need to solve the problem of how digital information will be shared and equally importantly, we need to set open and wide reaching standards. It has been more than ten years since computers and the Internet really started to take off, and there is still no coherent approach to restricted (or unrestricted) information sharing. This is a serious problem.

Let’s have a look at the history.

In 1994 there was a new buzzword in computer magazines. It was “multimedia.” Users were told a revolution was coming.  We would have pictures, video and sound on every desktop. Microsoft heralded this revolution with the introduction of their advanced new operating system called Windows 95, IBM pointed out that they already had a great 32-bit system that could do multimedia with OS/2, and Apple cheerfully reminded everyone that they had been shipping multimedia machines since the mid-eighties.

By 1995 the Internet was beginning to take off. Modems were falling to price levels that made sense and more websites started to appear. By 1996 the tech crowd were pretty advanced, and by 1997 the web was finally entering the lives of everyone. Companies like Amazon.com appeared, and everything changed.

No one expected the Internet to become so big so quickly. No one could have predicted the extent to which digital technology would enter the fabric of everyday life, especially in the economic sphere. Far from proving to be an extension of the existing market space, the digital world became an economy in its own right. Computers became an inescapable and essential part of everyday life. Costs plummeted and power soared. Multimedia changed from being a novelty to being a normal product.

By around 1998 there were millions of people with the ability to view and share many different forms of digital content. The only major problem was a lack of content to actually share. DVDs were too big to squeeze down modem lines, and music files were still substantial in size. Pictures were about the only thing that most people could share.

Technology advanced. Real Networks developed a good way to make small sound files with high quality, and then MP3 appeared and gave everyone the ability to share as much music as they wanted. Even over a modem it was possible to download an entire album in a couple of hours. File sharing networks appeared, and new social networking groups formed around them. Napster created software that literally allowed anyone to find the music they liked, and download it with one click.

A floodgate of community sharing was opened. New networks appeared, new methodologies were introduced, but the essential idea remained the same: create a simple network that allowed lots of people to share lots of files. This innovation apparently caught the publishing industry completely by surprise. No major publishing house had invested in the development of a commercial digital distribution network, and the first such service with a usable interface was Apple’s iTunes Music Store, which did not arrive until 2003. It is ironic that when millions of people wanted to share information, the only way they could do so easily was through free file sharing networks.

By the time the music, movie and text publishing industry began to actively engage with the emerging technology, file sharing was already an accepted method of providing multimedia information. It might have been a free method, and it might have been a method that largely ignored copyright, but file sharing was the method that most people used to get most of their files. Millions of people shared billions of files. Even today file sharing accounts for over 60% of the traffic on the Internet. A significant amount of this traffic is illegitimate.

Technology allows sharing. People are sharing files. These files include copyrighted music, films, books and software. Companies say revenue is being lost because of file sharing, and that something needs to be done. The question is “what needs to be done?”

Should governments stop people sharing? They already do. It’s called copyright law. According to this, only the original creator of a work has the right to determine who makes a copy of this work, and what is done with the copy. This law is applicable to books that you can hold, and books on your computer. It applies to music, movies and software. But companies say that this law is not enough. Because of the power of technology, and the ability of the user to share copies so easily and so extensively, companies say that stronger measures are required to protect their copyrighted material.

This is where we get DRM.  Digital Rights Management is about restricting how and when people can use copyrighted files. It may restrict how many times you can play something. It may restrict your ability to share something. It may restrict the method you employ to consume something. It is about allowing companies to determine how the end user will experience the copyrighted material that the end user purchased.

The biggest buzzword in DRM right now is ‘Trusted Computing’. This is about creating a way for companies to trust your computer with their copyrighted material (rather than anything to do with consumers trusting their computers). For several years Intel has been including a ‘Trusted Computing’ encryption chip on motherboards, and when Windows Vista is released it will support this framework. For the first time there will be a very solid and secure way for a publisher to determine if your machine has the right to play their copyrighted material. When you try to play a movie, it will check with their server to see if your Trusted Computer has the correct permissions. If you attempt to run an illegitimate file, your computer will refuse. No matter what you do, a chip on your motherboard will refuse to cooperate.
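To make the flow above concrete, here is a loose conceptual sketch in Python. It is not any real DRM protocol or Trusted Computing API; every name in it (the work IDs, machine IDs and functions) is hypothetical. It only illustrates the shape of the check: before playing, the client asks a licensing server whether this particular machine’s hardware fingerprint is authorised for this particular work, and refuses otherwise.

```python
# Hypothetical sketch of a machine-bound permission check.
# Not a real protocol; names and data are invented for illustration.

import hashlib

# Pretend server-side database: fingerprints of machines licensed per work.
LICENSED_MACHINES = {
    "movie-123": {hashlib.sha256(b"machine-A").hexdigest()},
}

def server_check(work_id: str, machine_id: str) -> bool:
    """Server side: is this machine authorised to play this work?"""
    fingerprint = hashlib.sha256(machine_id.encode()).hexdigest()
    return fingerprint in LICENSED_MACHINES.get(work_id, set())

def play(work_id: str, machine_id: str) -> str:
    """Client side: refuse to play unless the server grants permission."""
    if server_check(work_id, machine_id):
        return "playing " + work_id
    return "refused"

print(play("movie-123", "machine-A"))  # the one licensed machine may play
print(play("movie-123", "machine-B"))  # any other machine is refused
```

Note what this toy model makes obvious: the permission is attached to the machine, not to the person who paid, which is exactly the objection raised below.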

This is a very strange approach to copyright enforcement. It means that when you buy a copyrighted file, you don’t actually have the permission to use it. Your computer has the permission. The file won’t work on another computer, or your mobile phone, or your PDA, or your heavily modified XBOX. It’s a bit like selling a person a book, but designing it so that people cannot read it in the bath (or in the garden or in the kitchen). Instead of finding a way to stop the end user giving illegitimate copies of a work to other people, this type of DRM is about controlling the right to copy work for any purpose, and in the process it determines the end user consumption method and options as well.

Objections abound. That’s where all the thousands of programmers, technology organisations and academics come in. They are saying that ‘Trusted Computing’ and the entire methodological approach currently being suggested by industry heavyweights is a grossly unfair attempt to extend copyright law. The proposed DRM system would control every aspect of a person’s private digital life. This is unprecedented. There has never been an occasion when producers could control how people use their possessions.

There are so many voices. Some people say industries will collapse if we don’t have DRM. Some people say industries are too greedy and asking for too much. Some people say sharing data will help advance and promote culture. Some people say that finally we are getting a world where everyone can have access to everything that is beautiful, useful or interesting. There is a cacophony of comment.  Ideas are swirling around and mixing together. The result is an enormous and confusing mess.

The truth is that we are in an unusual situation. A completely new technology has appeared and this technology makes obsolete previously established technologies and the industries associated with them. Music is better digitally, and suddenly CDs and tapes and records belong to yesteryear. Films are the same. Purely digital copies on purely digital media are simply more flexible, useful and effective than those tied to traditional media. Books, magazines and newspapers are in the same boat. People with a lot of money and influence are feeling threatened, but their perception of the problem is perhaps too selfish to allow for a true understanding of what’s happening.

The network for communicating certain media has shifted because of the introduction of revolutionary technology. The paradigm of media consumption has changed, and we are lacking a new discourse to explain it. Our copyright laws and processes, our market definitions and our consumer expectations were designed before this technology was conceived. To make matters worse, a formal approach to understanding the new paradigm has been effectively impossible due to the rapid changes and evolution of digital desktops and the Internet. Thus instead of a sustained worldwide academic understanding of the emerging new digital world, we have ended up with a discourse primarily conducted by interest groups, while the slower moving world of journals and research has lagged behind. The danger is that instead of seeking and finding an understanding of these new technologies, end users will find themselves permanently disenfranchised by those who wish to profit or control the digital sphere. This situation is not helped by the level of confusion that has been created and sustained on personal, governmental and international levels.

Even the legal process has become a problem regarding the issue of DRM and intellectual property. On one hand laws are meant to protect people and companies, and on the other hand law is a restrictive influence. In countries like the USA, the law regarding the new communicative paradigms has been abused to provide virtual monopolies over the ideas needed to utilise technology. Wide-ranging patents have been granted on abstract ideas. Purchases are being removed from ownership (you don’t own the software you buy, you just license it). Copyright infringement is being pushed from a civil infringement to a criminal action.

Fear abounds. Fear breeds compliance. Compliance breeds acceptance. Acceptance means control.  The companies who create DRM in the form of ‘Trusted Computing’ (Intel, Microsoft, Apple), and the companies that own the content (EMI, Sony), will control the digital revolution.  Imagine a world where our computers actually belong to these multinationals. It is an unappealing thought.

Worse still, governments will be disenfranchised along with citizens. Networks will be needed but not controlled by the nations that host them, and content will be monitored by proxy, by companies working for private shareholders. That’s not just an unappealing thought. It’s untenable.

We are back to the question of “what needs to be done?” It’s a tough one. DRM is not really about making sure you don’t steal music. DRM is about attempting to create a completely new way of controlling how people use information and in doing so it is about attempting to force corporate control into what was previously a very private sphere. This is the solution that companies wish to impose.  It is not necessarily the solution your grandchildren will be glad you accepted.

We need to carefully examine how digital technology and the Internet has changed the way that people consume different types of media. We need to examine how current copyright law controls access and sharing across different nations. We need a long consultation process to discuss how we could form international standards of maintaining copyright in a reasonable way without infringing on the rights of end users or content producers. In short, we need to produce a new discourse to explore this paradigm, and we need to do so from first principles.

That is going to be a long and difficult process and to be successful, we’re all going to have to talk to each other. We’re going to have to share all our problems, concerns and aspirations. Businesses, legal experts, users, governments and technologists will have to work together to find a new way to approach publishing, information access and information control. No one group can be allowed to hijack this consultation process. Democracy demands nothing less and rightly so. The generations of the future will suffer or benefit from the end result of the legislation we are now just beginning to develop.

One small step for the FSFE, many steps for Shane

Monday, March 27th, 2006

Tonight was a rather long one for little old me.  I decided to get all proactive, guerrilla, and viral.  I took lots of FSFE leaflets, put them together into nice little packs, and walked two kilometers to the student residences of the University of Birmingham in the UK.  My goal?  To put information about the FSFE and the fellowship through as many doors near the schools of politics and computer science as I could.

This strange little mission was motivated by conversations I have had with people as I tour local LUGs.  Many people don’t really know what the FSFE does, or what we want to accomplish.  I figured that it’s important to meet that ignorance head-on, and to get our message out to one demographic that is relatively sure to be interested.  Students…you have to love them!

There was another motive.  I had loads of little fellowship leaflets talking about the laptops we are giving away during April, and I wanted to get that message out to people before the deadline passed.  Success.  Almost all the leaflets are now gone.  A handful have been retained for the Birmingham Perl Mongers meeting on Wednesday where I will wax lyrical about FSFE stuff.

Ah.  My feet hurt.  My back hurts.  It was raining.  But in my tiny, tiny, tiny way I did something that might mean more people will learn about Free Software.  That is a good thing.  Sometimes all the discourse and planning in the world is not as important as getting out there, and pro-actively spreading the important word.

Oh.  Hey.  FSFE Office guys.  I need more leaflets.  Lots more.

Visiting South Birmingham LUG

Sunday, March 19th, 2006

The other day I attended a meeting of the South Birmingham Linux User Group.  The meeting was held in Birmingham University in the school of computer science, and I found it a pretty comfortable setting.  After all, this is the university where I did my MA.

On the day I attended there was a special talk by Professor Aaron Sloman on Artificial Intelligence.  More specifically, Aaron was speaking about POPLOG and POP11, an AI development environment and the language it uses.  From a technological perspective, it was a fascinating lecture.

POPLOG allows you to develop AI applications using an incremental compiler.  This means it has some of the speed advantages of compiled code like C, but the flexibility of instant redevelopment and redeployment we would normally associate with interpreted languages like Java.  The coolest part was watching Aaron write changes to the code, see them compile almost instantly, and execute cleanly.

For AI development, the instant compile and execute model is particularly useful.  It allows researchers to pro-actively adapt their applications to reach their overarching design goals.  Because the end product is compiled, it’s faster than interpreted languages, but it’s not as inflexible as traditionally compiled code.
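For readers who haven’t seen this workflow, here is a very loose analogy in Python rather than POP11 (which I won’t attempt to reproduce from memory).  In POPLOG each redefined procedure is recompiled to machine code inside the running session; Python merely rebinds the name, but the edit-and-immediately-rerun feel is similar: the session keeps going while one function is replaced, with no rebuild or restart.

```python
# A loose analogy (in Python, not POP11) of incremental redefinition:
# change one procedure mid-session and re-run it immediately.

def classify(x):
    """First rough attempt at a classifier."""
    return "positive" if x > 0 else "other"

print(classify(-3))  # → other

# Mid-session, redefine just this one procedure with finer behaviour;
# nothing else in the session needs rebuilding or restarting.
def classify(x):
    """Refined version, swapped in without restarting anything."""
    if x > 0:
        return "positive"
    if x < 0:
        return "negative"
    return "zero"

print(classify(-3))  # → negative
```

The real attraction of POPLOG, as I understood the talk, is that the refined version runs at compiled speed, not interpreter speed.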

POPLOG is not a new product.  It’s been around for decades, and was even a commercial product in the nineties.  It provided the original engine for a data-mining product called Clementine (the product is now ported to C++ and Java instead of using the original POP11 code).  It’s a powerful system, especially for its target field, and it’s released under an open source license (XFree86 1.0).

Hey, let’s not beat around the bush.  Not everything is lovely in POPLOG land.  The development environment of POPLOG looks dated.  Being based around the command line instead of providing a graphical IDE makes it a bit scary for programming idiots like me.  There are also problems getting POPLOG to run on all the Linux distributions.  But I saw it in action, and I was impressed.

You can get instructions to help you install and run POPLOG on Debian and Ubuntu at the POPLOG main page.  Best of all, there is an effort on SourceForge to develop OpenPOPLOG.

After the big talk, we wandered to Staff House and let down our hair.  There was a really interesting mix of people attending, including some school students.  I was impressed with those guys, and rather chuffed to hear that their school is running Red Hat!  It looks like Windows is not everywhere, after all.

Hopefully I’ll make it along to future meetings.  This one was certainly educational!  Meanwhile, if you are anywhere near Birmingham (UK), do drop by one of the SBLUG meetings.  The website is here.

The Shane Roadshow (me? a copycat?)

Thursday, March 16th, 2006

SCO has (had?) one. Ciarán from FSFE has one. Now I have one too! Yes, it’s time to lock your children inside. It’s time to hide behind the sofa. Shane has a roadshow.

I’m on a one-man mission to spread the FSFE word across the Midlands of the UK, and last night my first victim…um…I mean audience…was the Wolverhampton Linux Users Group. I used to study at Wolverhampton Uni (way back before electricity), and it was great fun to return to the city. It was even more fun to meet the people behind the LUG.

When I first turned up at the Spice Avenue balti restaurant I didn’t know what to expect. Old people, young people? Happy people? Sad people? A bunch of people who would set fire to me and dump the body because I dual-boot Ubuntu and Windows XP? As it happens, the Wolves LUG is made up of a diverse range of people with a whole variety of commercial and non-commercial interests. One thing unifying the group is that everyone is really friendly.

We ate before we did any serious talking, and it was only after stuffing myself with a nice chicken dish that I stumbled upstairs to a conference room to chat about the FSFE. I didn’t want to present too much information and bore everyone to tears, so we had a quick run-through about Free Software (as opposed to open source), the role of the FSFE, and how people can help out. Then we had a much longer section for Q&A.

During the question time a lot of interesting points were raised about DRM, trusted computing, and getting people to adopt GNU/Linux. There is some common ground about issues like DRM, but there is also a lot of worry about things like Linus’s rejection of GPLv3 draft 1. From a practical perspective, a very good point was made about GNU/Linux as a whole: until the average person walks into PC World (a large chain in the UK) and sees Linux machines there, Linux will have a lot of trouble getting to the desktop.

My favourite comment of the evening had to do with “is Linux ready for the desktop?” I suggested that for many people it’s not, and I was rightly corrected by a wise LUG member. They pointed out that if someone has never used a computer before, something like GNOME is easy to pick up. The problem is transitional users. For people used to Windows, Linux systems have a learning curve, and that is where a lot of people don’t want to put in the effort. Excellent point!

We need to engage with each other closely in the coming years. GNU/Linux is more popular than ever before, and it would be wonderful if we could keep expanding our reach. However, we need to work together on this one, drawing LUGs and Free Software Foundations and governments and businesses closer. The purely hobby years of Linux are gone, but that’s no reason to take the fun out of it. If we can all find common ground, and aim towards aspirational goals like technological provision, I believe we can make a massive difference not just to existing computer users but also to disadvantaged people around the world.

Personally, I would like to see more dialog. I’d like to see more people talking to more groups, and to make full use of all this communication technology we are sitting on. That way we can really deal with some of the issues (DRM, sustainable business models in FOSS) that are floating around.

Thank you Wolves LUG! I had great fun, and I’ll try to make it to the next meeting!

One of our own has fallen

Tuesday, March 14th, 2006

Today there was an email that I didn’t want to read. The subject said “To friends of Richard Rauch.” I sighed, and opened the message. Rob, Richard’s brother, had written “I have some very bad news: Richard was riding his bicycle when he was hit by a drunk driver; he eventually died from his injuries.”

What a sad moment. Not just for Richard’s family, but for the Free Software community. Richard Rauch was a contributor to NetBSD, and gave a great deal to many people who will never even know his name. He had worked for years to make NetBSD better, and in doing so I believe he has left an important legacy. He is part of the larger Free Software world.

Free Software is a big community. In every corner of the globe people start computers every day, and they use our work. Maybe it’s GPL software, maybe it’s BSD software (like what Richard worked on). Maybe it’s MPL. Maybe it’s something else. But it’s out there. What a wonderful accomplishment this is. It’s something quite unheard of. People like Richard have united in a fantastic mission to bring technology to everyone.

The very thought that someone can have a computer with software as good (or better) than commercial products without any restrictions is breath-taking.

Yes, we are doing so well. But Richard is dead. As many of us as there are, the loss of Richard has upset me. He was a pillar in his particular FOSS community. He will surely be missed.

A reply to a comment on DRM

Saturday, March 11th, 2006

Marcos posted a very interesting article about DRM.  I couldn’t help commenting.  As I said:

== 

I’m glad that many discussions are appearing around DRM. I think there is a lot that needs to be said on this subject, and it’s our duty to be at the forefront of this process. One way or another, DRM will have a huge effect on the entire digital sphere in the coming months and years. This blog entry is a very interesting read. I don’t agree with all the points, but I think you raise some legitimate concerns. One of the clearest messages in your article is that DRM – as currently appearing in things like trusted computing chips – has the real and scary potential to take away end user freedom. In this sense you are absolutely right that DRM is a weird and unacceptable thing, because it basically involves allowing companies to decide how we live our lives.
The one thing I want to raise is that you equate the "trusted chip" (trusted computing) with DRM. Trusted computing in this form is not DRM itself; it is a method of applying DRM. Digital Rights Management covers any technique for managing rights over digital content, and trusted chips, important and increasingly widespread as they are, are just one such technology. I think your article is in danger of confusing trusted computing chips with DRM itself (a confusion of concept and implementation).
Linus Torvalds does not claim that you would want to use exactly the same technology to protect your diary. He is saying that fundamentally the same technology is used for both personal protection and DRM. To quote, he said the "basic technology is about encryption, public and private keys and so on." I guess that if we can fault him for anything, it’s for being too general. Perhaps this is because he made it pretty clear he wants to keep clear of dictating the implementation of DRM. Again, to quote him: "I don’t want to make my software be "activist." I try to make it technically as good as possible and let that part speak for itself. I don’t want it to make politics."
Gosh, this is a heated topic. There is a good reason for it being heated: if DRM is indeed allowed in the form that many companies want, we’ll lose freedoms we have enjoyed for generations. On the other hand, if we don’t work out some way for businesses to sell digital goods without losing their market after shipping a couple of copies, economic problems will appear. We have to discuss this issue again and again, and look at it from all sides.

==

I think DRM is really going to be the big word in the coming nine months.  Web 2.0, Windows Vista, UMPC…these are small things compared to the confusion around how we are going to control (or free) our digital futures.  Without the painful process of creating coherent policy around this subject we will not advance as digital societies. 

Some day my fridge will be connected to my laptop, and that’ll speak to my car.  This is great because it means no more surprises regarding running out of milk or gas, but I don’t want anyone hacking in (pro digital rights management).  That does not mean I want EMI to tell me what music I will listen to over breakfast (against digital rights management).  It’s a difficult topic.

For the record, let’s not forget about software patents.  I’m pretty worried about them too.