Communicating freely

Thoughts on how we can all talk a little easier, and how that can make life better.

Quantum encryption for real people

July 11th, 2006

I’m going to be attending a security round-table at Birmingham University this Thursday and I’ve been trying to create a short, simple introduction to quantum encryption for real people.  That’s more difficult than it sounds…

The work in progress is below…

Quantum encryption is a very young field.  The first public research into quantum encryption was conducted by Stephen Wiesner at Columbia University in New York during the early 1970s.  His paper ‘Conjugate Coding’ was eventually published in 1983 in SIGACT News, having previously been rejected by the IEEE Transactions on Information Theory.  This is indicative of the unusual nature of the field; Einstein referred to quantum entanglement – a principle used in quantum encryption – as "spooky action at a distance."  The familiar laws of classical physics do not apply in quantum relationships.

Quantum encryption is focused on solving the key distribution problem.  This is the problem of ensuring that two users who wish to communicate secretly share a genuinely secret key for their communication.  In many communication situations it is impossible to arrange this in advance, which means users have to agree a secret key at the time of communication.  The difficulty lies in agreeing this key without revealing it to eavesdroppers.

At the moment secret keys are shared using systems like Diffie-Hellman key exchange.  Diffie-Hellman uses arithmetic with very large numbers to agree a secret key, and its security rests on the assumption that reversing the exchange (the discrete logarithm problem) is computationally very difficult.  While this is true of today’s computers it may not be true of those deployed tomorrow.  It will certainly not be true when quantum computers enter production: they will be able to solve the underlying problems, such as factoring large integers and computing discrete logarithms, efficiently.
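For the curious, here is a toy sketch of the Diffie-Hellman idea in Java.  The numbers are far too small for real use and the parameters (a random prime with g = 2) are illustrative assumptions only:

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class ToyDiffieHellman {
        public static void main(String[] args) {
            SecureRandom rng = new SecureRandom();

            // Public parameters: a prime p and a base g.  Real systems use
            // carefully vetted values; g = 2 here is only an assumption.
            BigInteger p = BigInteger.probablePrime(512, rng);
            BigInteger g = BigInteger.valueOf(2);

            // Each party keeps a secret exponent and publishes g^x mod p.
            BigInteger a = new BigInteger(256, rng);      // Alice's secret
            BigInteger b = new BigInteger(256, rng);      // Bob's secret
            BigInteger A = g.modPow(a, p);                // Alice sends this
            BigInteger B = g.modPow(b, p);                // Bob sends this

            // Both compute the same shared key, which never crosses the wire.
            BigInteger aliceKey = B.modPow(a, p);         // (g^b)^a mod p
            BigInteger bobKey   = A.modPow(b, p);         // (g^a)^b mod p
            System.out.println(aliceKey.equals(bobKey));  // prints: true
        }
    }

An eavesdropper sees p, g, A and B, but recovering the shared key from them is exactly the discrete logarithm problem mentioned above.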

Quantum encryption uses Quantum Key Distribution (QKD).  This is a method of generating a verifiable secret key that can be transmitted between two people but cannot be altered in transit without the alterations being detected.  Two different aspects of quantum physics can be employed to accomplish this; one is the Heisenberg uncertainty principle and the other is quantum entanglement.  Both methods are generally accomplished through the transmission of photons.

The uncertainty principle is applied to quantum encryption through the polarisation of photons.  By observing the state of a photon a secret key can be obtained.  For example, vertical photon polarisation can represent the binary "0" and horizontal polarisation the binary "1".  The strength of photon polarisation is that it is possible to observe photons in different ways: rectilinear, circular, and diagonal.  When you observe a photon in one way you alter the conjugate values that could be obtained by observing it in another way.  Unless you know how you should be looking at the photon you cannot obtain useful information about it.  This also makes it effectively impossible to intercept a polarised stream of photons unnoticed: the stream cannot be read without degrading it to a detectable extent.
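The scheme described above is essentially the BB84 protocol.  Below is a toy Java simulation of its key-sifting step, assuming ideal hardware and no eavesdropper; it shows why Alice and Bob end up keeping roughly half the transmitted bits:

    import java.security.SecureRandom;

    public class ToyBB84 {
        public static void main(String[] args) {
            SecureRandom rng = new SecureRandom();
            int n = 16;  // photons sent
            int[] aliceBit = new int[n], aliceBase = new int[n];
            int[] bobBase = new int[n], bobBit = new int[n];

            for (int i = 0; i < n; i++) {
                aliceBit[i]  = rng.nextInt(2);  // the key bit Alice encodes
                aliceBase[i] = rng.nextInt(2);  // 0 = rectilinear, 1 = diagonal
                bobBase[i]   = rng.nextInt(2);  // Bob guesses a basis each time
                // Matching basis: Bob reads the bit faithfully.
                // Wrong basis: the measurement gives him a coin flip.
                bobBit[i] = (bobBase[i] == aliceBase[i]) ? aliceBit[i]
                                                         : rng.nextInt(2);
            }

            // Sifting: Alice and Bob publicly compare bases (never bits) and
            // keep only the positions where the bases matched.
            StringBuilder key = new StringBuilder();
            for (int i = 0; i < n; i++) {
                if (aliceBase[i] == bobBase[i]) {
                    key.append(aliceBit[i]);
                }
            }
            System.out.println("Sifted key: " + key);
        }
    }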

Quantum entanglement is applied to quantum encryption through the entanglement of pairs of photons.  This is a genuinely “spooky action” that gives the two photons a mutual relationship independent of time and space.  If one photon is measured then the other will also change state.  The results of measurements of photon states are random but shared.  It is virtually impossible to either predict or intercept this form of communication.  Some discrepancy is possible between Alice and Bob’s measurements of the changed states, but an attempt at eavesdropping would noticeably degrade the data stream.

As those already familiar with encryption will have guessed, both the uncertainty principle and quantum entanglement offer methods of exchanging secret keys that are highly resistant to eavesdropping.  It is very difficult to intercept photon communication streams.  The observer effect is one of the primary reasons for this: the very act of observing the photons alters their states.  This both reduces the coherency of the message being transmitted and ensures that Alice and Bob will know their stream is being intercepted.  The difficulty of interception is compounded with quantum entanglement.  The only way to reliably intercept an entangled stream would be to introduce a third entangled photon.  However, this would weaken each photon to such a degree that the tampering would be easily detectable.

There are two possible ways to attack quantum encrypted communication streams.  One is where an attacker (Eve) manages to pretend to be Bob when talking to Alice and to be Alice when talking to Bob.  By assuming both identities Eve could act as a silent observer of the data stream.  The second method involves sending large pulses of light towards Alice or Bob’s transmission equipment between the legitimate communication pulses.  The reflection of the massive light pulse could reveal the polarisation settings of Alice or Bob’s equipment.  This attack is potentially useful against encryption relying on the uncertainty principle.

A limit to quantum encryption based on the uncertainty principle is deniability.  The act of intercepting a polarised photon stream will place some data in Eve’s hands.  Even if Alice and Bob detect the interception and switch keys during their conversation, they cannot ensure deniability: Eve will hold partial data from the conversation.  If the data Alice and Bob exchanged around the key switch is already partially known to Eve, Eve has proof that the conversation took place.

One method of strengthening quantum encryption is privacy amplification.  Privacy amplification is where Alice and Bob use the initial strength of quantum encryption to establish a secret key, and then distil from it further secret keys that Eve will have essentially no information about.  Privacy amplification provides additional protection but does not reduce the probability of eavesdropping to zero.  It is important to bear in mind that there is no such thing as a ‘completely secret’ communication method.
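As a minimal sketch of the idea, assuming an ordinary hash in place of the universal hash families real systems use: compressing a long, partially leaked key into a short one dilutes whatever fragments Eve holds to almost nothing.

    import java.security.MessageDigest;

    public class ToyPrivacyAmplification {
        public static void main(String[] args) throws Exception {
            // A sifted key of which Eve may know a few bits.
            String siftedKey = "0110100110010110011010010110100101101001";

            // Real QKD systems use universal hashing; SHA-256 stands in here.
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            byte[] digest = sha.digest(siftedKey.getBytes("US-ASCII"));

            // Keep only the first 16 bytes: shrinking the key is what
            // squeezes out Eve's partial information.
            StringBuilder hex = new StringBuilder();
            for (int i = 0; i < 16; i++) {
                hex.append(String.format("%02x", digest[i]));
            }
            System.out.println("Amplified key: " + hex);
        }
    }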

Sources:

Quantum cryptography, http://en.wikipedia.org/wiki/Quantum_key_distribution

Quantum Cryptography Tutorial, http://www.cs.dartmouth.edu/~jford/crypto.html

Quantum Encryption progresses, http://tonytalkstech.com/2004/05/04/quantum-encryption-progresses/

Child labour

July 9th, 2006

Ah, child labour.  That forgotten resource; small, easily intimidated and full of energy.  I’ve been doing my bit for European economic competitiveness, and I have enticed innocent children to become test subjects for a usability study.  The little mites have been deceptively handed a Ubuntu laptop and left alone.  The purpose of this study?  To find out if Dapper Drake can be used by real (small) people.

I am astounded by the results of my research.  They are better than my fan-boy GNU/Linux DNA dared to expect.  

I powered up a laptop for Test Subject #1 (TB#1 hereafter) and waited for his initial reaction.  Blankness.  He didn’t know when Ubuntu had finished loading and was ready for use due to the unfamiliar desktop.  I showed TB#1 the desktop menu and left him to play.  After half an hour I returned and asked “how’s it going?”

TB#1 kept his chubby eyes glued to the screen.  “It’s OK.  I’m just finding it hard to beat the computer.” I was curious.  What was TB#1 trying to beat the computer at?  What had TB#1 done to my nice default install of Ubuntu?  I leaned over and looked at the screen.  TB#1 was playing four-in-a-row.

“The computer moves so fast.  Sometimes it moves too fast for me and I don’t have time to think,” said TB#1.  I nodded my head.  The test subject had discovered software of direct interest to his age-group.  His youthful mind was being trashed by the game but that’s the price you pay for science.

I left the room.  TB#1 was using my old laptop.  It was not a valuable resource so I confidently wandered away.  Three hours later I wandered back.  TB#1 was gone, the laptop was powered down and unplugged.  Hm.  I started it.  Clean mount.  No problems.  On the desktop was one new folder (default name) and one new file (default name).  TB#1 had obviously been experimenting with the abilities of the computer.  Excellent.

The next day TB#1 was back.  He hovered around until I innocently suggested he might like to play with the laptop.  He pretended to consider this suggestion and agreed.  Shortly thereafter TB#1 was sitting on a sofa doing whatever nine-year-old humans do.  I left him again for two hours.  When I returned TB#1 was gone and the laptop was again neatly powered off.

There are some variables to take into account with my testing.  TB#1 had a protector.  Some kind of mother figure.  However, the mother figure has no idea about computers.  The last time she was sighted with a computer she was waving a mouse in the air and saying it didn’t seem to be working.  Therefore her impact on the actual results of TB#1’s actions was minimal.

The second variable is that TB#1 has used an old computer before.  He has a Windows 95 machine at home (I kid you not).  Therefore TB#1 has some awareness of the desktop paradigm.

Conclusion?  Kids can use Ubuntu Dapper Drake.  TB#1 had no objections except that four-in-a-row is too difficult.  What a result.  Well done Mark Shuttleworth and associates.

Freedom is important (so is evolution)

July 8th, 2006

I just read Paul Cooper’s blog post about the FSF and FSFE.  One line really struck me: “[FSF(E)] could become irrelevant because certain elements are living in the past confined by the way things were rather than the way things are.”

I don’t disagree but I don’t quite agree either.  It’s a very broad statement.  There is a lot of leeway for individual interpretations of what constitutes the ‘way things were’ and the ‘way things are now.’

What I think is true is that there is some tension between the traditional hacker community and the wider free software developer-base.  Free Software is no longer exclusively a hacker arena.  We are seeing Free Software deployed in very serious places and we’re seeing professional levels of support and development.  This includes the NSA with SELinux and Novell with SuSE.  It’s been happening for years but a certain critical mass is being reached.

The field is evolving.  The organisations that support it must evolve too.  

I think I was in London (speaking to GLLUG) when I started talking about how we’re seeing the rise of management layers in Free Software projects.  We’re seeing stuff like usability engineers, project managers and public relations people.  In short, we’re seeing the ‘businessification’ of Free Software projects that are regarded as mission critical.

There is a reason that we’re seeing this.  Programmers are programmers.  They are not managers or public relations people or usability engineers.  When you get to a certain size you really need to find experts.

Strap on a beard and wrinkled brow “but then you’re becoming the MAN…you’re like a company.”

Ah.  Herein lies the rub.  The traditional hacker culture was very much about being an independent person.  About hacking away and discovering stuff and playing with code because it was fun.  That’s fine.  That’s good.  It’s a noble aim.  However, when Free Software grew up and went into the real world it needed to fulfil the requirements of usability and reliability.  The Freedom bit of Free Software remains but the edgy technical hobby stuff falls away.  The government of Bhutan are not interested in cool new features; they want an empowering digital infrastructure for the future of their people.

Because Free Software is Free Software the guys who want to hack can do so.  But the big mainstream projects need to produce big mainstream products.  They need to be “the man.”  Well, the man without shareholders making technology to empower everyone.  A nice man.  The kind of man you would be happy to see your daughter marry.

I recently saw a flame-war appear on a mailing list because two people were having an argument about who was more free.  When there are five billion people in this world who lack basic infrastructure I find these arguments tiring and pointless.  

I think the four freedoms of software are important.  I think people should be able to use, modify, redistribute and improve their tools.  I think this is especially important in terms of getting this stuff out to developing nations.  It’s also important for me.  When I was fifteen I had no money for a new computer.  I got a DOS 3.3 computer from someone’s garage.  I wish I had known about GNU/Linux.  I wish I could have learned about the freedom of software at that juncture.

I think the FSFE is important.  Its mission is to protect and promote the four freedoms of software in the European arena.  Patents and DRM challenge Free Software and the FSFE really engages on these issues.  It also educates people and has some great initiatives coming up to further strengthen Free Software in our area.

I think engagement with everyone is important.  I have nothing but admiration for the companies that are embracing openness and freedom (well done Sun and Novell).  I’m looking forward to the day that Microsoft start considering software freedom.  I’m looking forward to the day that Java is released under the GPL (hint hint).  I’m looking forward to the day that children learn OpenOffice.org at school.

I think that software freedom is relevant.  I think that promoting freedom is essential.  That’s why I’m out here talking to people, putting forward initiatives, and generally trying to do my little bit to provide digital infrastructure for everyone.

Useful freedom

July 5th, 2006

"You’re not the REAL Farmer McShane!"

My friend (let’s call him Dave) was right.  I’m not.  I don’t even know who the real Farmer McShane is.  I mean, one minute we’re talking about computers and the next he comes out with this.  I suspect it’s a distraction technique.

Our conversations usually drift around Windows (which I use for web design), GNU/Linux (which is now my main desktop) and MacOS X (Dave’s baby).  More often than not I say something slightly negative about Macs, Dave denies it, and we end up bickering like two children.

I like the Mac.  I don’t agree with all of Apple’s aims but I do think the Mac looks and feels great to work with.  It’s certainly a sweeter deal than the clunky Windows machines.

However, lately I’ve been banging a Ubuntu drum.  I installed Dapper Drake and finally GNU/Linux was working like a proper grown-up operating system.  No more messing around with configuration files.  No more heartache.  Just click, click, click.  It’s beautiful.

Several Mac geeks have defected to the Ubuntu cause.  This annoys Dave.  He’s not happy.  Arguments centred around open formats, anti-DRM and the cult of the penguin tend to get him vibrating and cause steam to drift out of his ears.

I don’t blame Dave.  He’s got this beautiful computer with this lovely interface, and then some jerks come along and say it’s evil.  Worse than that, the jerks are waving a similar product that won’t play his music and are cheeky enough to say their way is the true path to enlightenment.

But…it’s true.

My iPod is dead.  Apple won’t fix it.  My music is in AAC format in the iTunes library.  I can’t listen to it on my Nokia phone (which plays MP3s), and it won’t play in my Ubuntu install.  Bang.  All the shine, gloss and wonder is gone, and I’m left with 600 songs I can’t listen to while I work or take my evening walk.

I used to have the same problems with my mailbox in Outlook Express.  I could not shift my mail to another program easily.  I used to have the same problem with Office documents.  Ditto for instant messenger profiles spread across different clients.  Closed, locked formats tying me to applications and working methods I either didn’t want or needed to escape.

Now my life is easier.  ODF means my documents just move, and when I need to give them to someone in Windows land I can send a PDF or (if they really want it) a DOC.  My mail and instant messenger profiles are in MBOX and XML formats.  I know that my data is safe.

Right now I am in control, not the guys who made my software.  I can change application or computer at will.  Isn’t that how it should be?

Whether or not I’m the real Farmer McShane (and I’m not), I do think openness and freedom are becoming very relevant.  It’s about allowing us to make our own choices about the tools we use and the data we own.  It’s sure as heck a lot better than losing 600 songs because the Apple iPod breaks one month out of warranty.

The next phase of Internet evolution

July 3rd, 2006

A friend of mine is working on a conference in Taiwan about Web 2.0 and I was asked to send in some thoughts.  After initially flinching at the use of the Web 2.0 term I found there was a lot I wanted to say.  Because I’m cruel and unprincipled I’ve decided to share my verbosity with everyone else as well.

From the top…

The Internet plays a crucial role in the development of information and knowledge societies.  This is a network that has reduced the transaction time of digital information to practically zero.  It has reduced the relevance of borders, eroded the meaning of time-zones, and facilitated a massive increase in the sharing of data.  Nations increasingly depend on the Internet as an empowerment tool for accomplishing a wide range of communicative tasks.

The Internet first entered most people’s minds in the late nineties.  A bunch of firms appeared with venture capital backing, promised the world and then failed to deliver.  Masses of money went into the Internet and were lost.  That was the first noticeable phase of the Internet for the general public.  Perhaps Tim O’Reilly would call it Web 1.0.  Most people used to call it the dot.com economy.

We’re now in the second really noticeable public phase of the Internet.  New companies are rising up and old companies are reborn.  The post-dot.com market space is called Web 2.0, a signifier of the shift from information transmission on the Internet (static one-way communication) towards information interaction (web services that build a two-way communicative relationship).  The old Internet didn’t provide enough to deliver on the promise of communication nirvana.  Now the Internet companies have found a way to provide more.

As it stands Web 2.0 means services and products that interact with users and encourage the building of communities instead of customers.  Flickr, MySpace and Meebo are all Web 2.0 brands.  They all attempt to build a relationship with their users and to create an information repository that can be leveraged to generate revenue.  A cynic might suggest that Web 2.0 companies largely try to get a lot of people into one place and to generate data that can be resold to information mining services like Google.  Google, in turn, uses information to place targeted advertising.  

Web 2.0 has largely been about single-service provision to customers based around the idea of information-sharing.  This is the concept of providing simple powerful tools to share and access a certain type of data.  Successful Web 2.0 brands also include those that provide information convergence (bringing two or more data sources together).  An example of a data-convergence brand is Flock, a version of the Mozilla web browser that links into blogging and tagging portals.

Web 3.0 (the newest buzzword on the block) is about the coherent utilisation of data to provide increased productivity.  Where Web 2.0 has been about community building, Web 3.0 is about increasing productivity.  The next ‘hot’ brands on the Internet are likely to be on-demand applications and tools.  Google is already entering this field with its Writely and Google Calendar beta products.  The third generation services will be centred around communication and productivity empowerment, the breakthrough of web services rather than installed software, and what Simon Phipps referred to as substitutability.

Substitutability might actually be the key determiner of Web 3.0.  It’s about the freedom to enter and leave a product or service.  It is the embrace of open standards to ensure that customer data is completely portable.  In short, it’s the opposite of lock-in, and it is the missing link in the current computer paradigm.  It ensures that users ‘own’ their data and that any product or service can access the data.  No more import and export filters to try and get Microsoft Word documents into OpenOffice.org.  No more different profiles and conversation standards for Yahoo! Messenger and Google Talk.  Substitutability will mark the point in Information and Communication Technology (ICT) evolution when the vendors no longer have the power to use information containment and transmission formats as a product in themselves.

Substitutability also allows for consumer confidence in the trial and adoption of new products and services.  People can use a service or product until they grow out of it and then transition to another tool.  That sounds like a terrifying concept for companies.  It means that if people don’t like a product or service they will leave.  They might even leave on a whim, just to try out something new.

Beyond the bluster of ‘unique’ service discourse and ‘killer apps’ the ultimate emerging paradigm of ICT is that of commodity tools.  When we move beyond the gloss and excitement of humanity’s discovery of digital communication we see people wishing to use ICT to accomplish real tasks.  That means talking to people, buying things, selling things and keeping track of information.  The real killer apps will be those that allow the user to accomplish these goals with the minimum of fuss.

In other words, a real Web 3.0 application will probably be something you don’t even notice.  You’ll use it every day, it’ll get stuff done for you, but you won’t pay attention to it.  It won’t be based around the thrill of discovery like Flickr.  It won’t build a ‘community’ of disparate and ultimately shallow connections between people like MySpace.  A Web 3.0 application will deliver tangibles.  It will be when the Internet grows up.

Now, this is where I’m going to break with the marketing discourse.  I don’t particularly like the terms Web 2.0 or Web 3.0.  I think they create artificial categories that don’t necessarily reflect the true nature of the ICT evolution that we are experiencing.  It’s not like there really was a Web 1.0 that failed, that a ‘new’ Web 2.0 was truly released, or that we’ll see a Web 3.0 actually appear.  The web was not ‘released.’  It’s not a product.  The Internet evolves.  Hundreds of millions of people use it.  Tens of thousands of companies develop for it.

The changes, the alterations, everything we have seen are part of the process of developing the Internet.  People use stuff, they give feedback (either in discourse or in moving on to new services), and Internet companies reply by generating new tools, products and services.  Web communities existed in 1998 (Geocities is a prime example) and they just became more sophisticated over time.  Right now we have MySpace.  In five years we’ll have a better service, one more integrated into people’s real lives.

We should not talk about the changes occurring right now as if they were exclusively part of the Web.  This is a far bigger paradigm shift than that.  ICT is entering the very heart of people’s lives.  It’s becoming the key determiner in our communication transactions.  To understand this we need to look at the entire digital sphere.  We need to look at the Internet, we need to look at the machines on people’s desks, the boxes in people’s living rooms and the phones in their hands.  All this stuff is converging to create a unified information gateway.

In the end the Internet will cease to be something we notice.  Existing information streams will converge and collapse to form a unified information sphere.  ICT will provide a coherent information framework that will form the backbone of knowledge societies.  This is not Web 3.0.  It’s about rethinking how humanity speaks.

Marketing shrewdly

July 2nd, 2006

I’ve been thinking about marketing a lot lately.  I’m involved in the Irish Free Software market at the moment and there is a real need to get increased publicity in the field.  Most people in Ireland have little awareness of Free Software, and the penetration of the technology into business is miserable.  My mind has been filled with questions like “how do we engage our audiences?” and “how do we raise our profile in this market?”

I was downloading my email one day and a message appeared with the subject "I am Ling  Chinese Human Female UK Car Expert."  I assumed it was spam until I noticed that underneath the lunacy there was a serious business ticking.  It seems that Ling helps businesses and individuals lease cars.  She’s also the only Chinese person doing this in the UK.

I have to admit that I was curious.  Ling’s marketing approach was quite unusual.  She embraced stereotypes applied to the Chinese and turned them into a wry marketing tool.  To call her approach over-the-top might be an understatement.  She certainly has the ability to cause controversy.

I decided to investigate further.  It was time to contact Ling and find out more about her business, her philosophy, and the nuclear missile truck she parked on the A1.

Q: Ling, who are you and why did you email me?

A: That WAS a kind of spam email! Just directed at UK businesses. You must be a successful business to be on my list! I am Ling, as in “clever”, in Chinese. I am a (the only?) UK new-car sales female Chinese whirlwind expert. I live in Gateshead, I run www.LINGsCARS.com and I contract-hire or long-term rent new cars online. I sold over £10m of cars in 2005, but this year is running at an 80% growth so far. You want new car? You talk to me online. You have got to pass finance, so no cockle-pickers, please.

Q: You studied in Finland in 1997, got married and came to England. How on Earth did you end up running a business?

A: Well, hehe, Helsinki University was free, and it was an escape route from the trap of China. Chinese ARE good at running businesses. It’s a shame that most are bloody terrible take-aways. I often ask my customers if they want boiled or fried rice with their car… they always laugh. If they moan at me, I send them Chinese Heinz baby food. I learnt from two people, my husband Jon (who I met online in 1997) and a guy called Mike Porritt who runs CarShock. Both are white, middle-aged EBIs (English Born Idiots).

Q: I confess that the slight insanity of your email encouraged me to visit your website. Once there I discovered a series of videos you have made with your ‘red guard’ friend to compete with the Top Gear TV show. You guys test a bunch of cars and give your opinions. How did this come about?

A: Only slight insanity? That’s my sister Shan in the movies! She really WAS a red guard back in the early 1970’s – kicking teachers, stoning doctors, denouncing capitalist-roaders and ruining innocent peoples’ lives, that sort of fun stuff. She’s in the UK at the moment on holiday from Chengdu, so I roped her in. She brought a Chinese PLA uniform and her red-guard armband from 1970. I think she does a fantastic job of imitating Jeremy Clarkson in my ChopGear series. In one, she even sees how many Chinese takeaways you can load into a BMW 7-series! There’s more to come. Good job she has grown out of kicking people to death, huh?

Q: Do people ever suggest that you are playing to British stereotypes of the Chinese? I mean, do you ever run into problems with the BBC (British Born Chinese) regarding your light-hearted and rather self-deprecating approach to your culture and country?

A: I generally find that Chinese do this stereotype thing to themselves, very well. It is as if they are “owned” by a Chinese-ness and they subjugate themselves to it. In my view, the UK Chinese community is very backward, quiet and timid even. They cluster in communities, almost ghettos. A bit like the Amish, or the Orthodox Jews. Is there inbreeding? Why not assimilate? I prefer KFC (Kentucky Fried Chinese) to BBC. It’s still yellow, but tastier. What is all this BBC rubbish, anyway? Do we have British Born French? Or British Born South Africans? What a bloody chip (or prawn cracker) on the Chinese communities shoulder to think they need to classify themselves. Why not break free of this shit. I am just human, born in China with slanty eyes and a shit Government. Bollocks to what others think. Many Chinese send me stupid “loss of face” emails when I pull a Chinese-piss-take stunt. All I can say to them is… bollocks. Crawl out of your Chinatown ghetto and face the UK on equal terms!

Q: On your website it says you bring a free lunch to people. Can you tell me more about that? What’s this choice I see with the dessert?

A: Yes! I thought I would take the piss out of Chinese takeaway stereotypes (they deserve it), so I send out fast noodles, chopsticks, dessert and (real Chinese branded) Nescafe. I supply dried plum dessert for constipated British people. I specially chose FUKU brand noodles (geddit?), and I tell the people who apply for them that the MSG and Chinese “C” numbers will poison them. Probably true, eh? It is a very cheap and unique marketing stunt. Customers seem to like to be tortured. Not one person has ever complained (except some Chinese).

Q: Do you really send polo mints to people who ask you for a quote?

A: Chinese Polo mints, and Cola flavoured Polos too. My sis buys them in China and posts them over. My customers just LOVE these individually wrapped Polos, as they think they are a British sweet. People pass them around, in their Chinese wrappings. I also send out RMB notes, everyone adores them, as they are not available here. They have a picture of Mao with the big spot on his chin. Not many current banknotes have an image of a killer in the league of Stalin, Hitler, Idi Amin or (lately) Saddam or Mugabe. Very popular!

Q: In your email to me there is a picture of you with what appears to be a mobile missile launching truck. Can you tell me a little bit more about this rather bizarre image?

A: Ah, yes. I own a Chinese Peoples Liberation Army nuclear missile truck, complete with missile. It is always pointed west, towards America! It does not have the range to reach Taiwan, you see (like most other Chinese missiles, hehe). I painted my website and face on it. For fun, I parked it next to the A1 in Tony Blair’s constituency of Sedgefield. Bloody 2-shags Prescott made me move it in the end, after 1-year of wrangling. I made the truck from a real 1970 nuclear decontamination truck. It has been in lots of newspapers including the Financial Times and Daily Telegraph, and on the BBC. It is fantastic. My baby. I also have a 1966 London bus and a 1976 Beijing Jeep. Oh, and a Land Rover and a BMW RT bike.

Q: You’re clearly a driven person (no pun intended) and you have goals. Where do you want to be in five years?

A: Oh, retired on a beach! My business is cumulative – most people come back after 2 years for another car, so my growth is fantastic. Everyone else selling brand new cars in the UK is so boring and conventional. I fight them like mad. Mazda banned me from selling their cars, but they gave in, in the end! No one, especially not a trumped-up employee Managing Director of a Japanese company dictates my actions. Like I said at first, I am Ling! And frankly, if the Chinese “community” don’t like me, or if some traditional old Chinese crow moans at me, I don’t care! I’ll sell my cheap new cars to some other race.

She’s a little bit like a tornado, don’t you think?  But she does get her message across, she does make money, and she does a lot of repeat business.  In short, Ling is an excellent marketeer.  She’s engaging her audience with enough controversy to garner attention and enough likeability to have a growing customer-base.  That strikes me as very shrewd.  

What Ling does is not an accident.  She’s fostering a very well-constructed brand.  This brand is built on a careful analysis of her market and a careful analysis of her own abilities, aims and aspirations.  It sounds like she also has fun along the way.

Perhaps Free Software advocates can learn from Ling.

Sometimes we may be too busy thinking about what we want instead of what is actually happening.  Sometimes we may be preaching when we should be marketing.  Perhaps we need to look at our brand in a more objective way, and to construct our own marketing techniques based on an analysis of our market and of ourselves.  We have this fantastic technology and this fantastic method of ensuring that everyone can always access it.  I believe we should ‘sell’ it more.

On a final note, I want to make it very clear that I’m not suggesting that Free Software advocates should start investing in nuclear missile trucks.  I don’t think we have the budget for that and I’m not sure that it would be the right move politically.

Security is a process

June 26th, 2006

Security is a process.  That means it’s not a tool, it’s not something that comes out of a box, and it’s not easy to get right.  Security depends on a chain of different things moving in tandem to counter dangers and weaknesses.  Security does not exist; it’s a way of anticipating or reacting to problems and ensuring that certain goals are met.

Digital security is about countering digital threats.  In my case that usually means considering the threat to communications, more specifically the information contained inside emails.

The security threat with emails sounds something like this: people want to send private messages through a public network where random computers can intercept the messages.  We have to work out how to ensure the messages remain private even if they are intercepted.

The easiest way to do this is to encrypt the messages.  This means that even if someone intercepts the message they won’t be able to read a thing.  The only person who can decrypt the private message is the person who should be reading it.  Perfect.  The overarching security threat is solved by an overarching solution.

The problem returns when we look at the details.

How will we encrypt the private message?  If we use straightforward strong encryption (symmetric encryption) we have to get the password to the recipient somehow, and transmitting a password is a terrible security risk.  If we use hybrid encryption (the approach PGP takes) we lose a certain amount of cryptographic robustness in exchange for practicality.
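To make the trade-off concrete, here is a sketch of the hybrid pattern using standard Java cryptography classes.  It is a simplified illustration rather than a hardened implementation; a real system would choose explicit cipher modes, padding and authentication:

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class ToyHybridEncryption {
        public static void main(String[] args) throws Exception {
            // The recipient's long-term key pair; only the public half travels.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair recipient = kpg.generateKeyPair();

            // A fresh symmetric session key encrypts the actual message
            // (default cipher settings, for brevity only).
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey sessionKey = kg.generateKey();
            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.ENCRYPT_MODE, sessionKey);
            byte[] ciphertext = aes.doFinal("meet me at noon".getBytes());

            // The slow public-key cipher encrypts only the small session key.
            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.ENCRYPT_MODE, recipient.getPublic());
            byte[] wrappedKey = rsa.doFinal(sessionKey.getEncoded());

            // Send ciphertext + wrappedKey; no password crosses the wire.
            System.out.println(ciphertext.length + "-byte message, "
                    + wrappedKey.length + "-byte wrapped key");
        }
    }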

It becomes evident that security processes inevitably end up being about trading off different percentages of security and practicality.  We need to balance our requirements (sending a private message privately) with the reality of the situation (the only way to send a completely private message is not to send it at all).

A good security process is a careful analysis of the security threat and the security requirements.  It is a balance between theory and practicality that will ensure the main goals of the enterprise are (more or less) met.  Sometimes the security process will fail.  That’s just a mathematical certainty.  There is no such thing as perfect security.

If you think about it there isn’t even any such thing as really good security.  What’s a secure workstation?  One you don’t use.  What’s a secure communication network?  See above.  If you actually deploy something you set in motion variables that ensure that at some point or other the security process will fail.  There will be an error along the way and one link in the chain will open.

It is very fortunate that most of the time security is not really quite as important as people may think.  Most private emails are private from a select few individuals like your boss or your wife.  The level of security required to keep them out of your affairs (literally or otherwise) is far less than that required to prevent the NSA checking to see what socks you are wearing.

Even critical security is often time-sensitive.  This means that if a security process offers a good chance of maintaining itself for XYZ time-scale it will often accomplish the required goal.

To put it more bluntly: security processes often translate into either confusing people who have no chance of breaking through them or buying time before professionals break through.  If you’re dealing with professional security life becomes all about buying time.  It’s just insane to think any security process will make a wall that people can’t dig holes in.

Encrypted email is used to send private messages.  It’s pretty tough to crack a PGP encrypted email by brute force.  It takes a lot of computing resources to do that sort of thing.  Of course, the NSA, GCHQ and Chinese intelligence have a lot of computing resources.  If you’re hiding from these guys you’re going to need a lot more than encryption.

Holistic is the word we’re looking for.  A real security process is going to be holistic.  If we’re talking email that means looking way beyond encryption.  Of course we’ll include encryption, but we’ll also be including things like geographical movements (where are the messages coming from?) and time-based analysis (when are these messages being sent and in what order?).  We’ll combine thousands of factors to try to work out how to make a process that will buy enough time to accomplish a goal.

From another perspective, you might bump into a security process and try to work out how to break it before the people using it accomplish their aim.

Maybe the most important thing about any security process is the people using it.  Social engineering has got to be the primary way security processes are cracked.  You meet a guy, get him drunk and get the information you need.  That leads you to another bit of information and so on.  This is how intelligence agencies get a lot of breakthroughs.

For a moment there I drifted into the big picture.  You’re not so interested in how the NSA will discover what XYZ said on ABC trade mission.  But let’s apply the holistic thought to normal everyday encryption.  Holistic processes still apply if you want to go about things properly.

If you want to send a private message to someone (and that message is to be truly private) simply downloading something like the Enigmail OpenPGP plugin will not be enough.  You need to think about locations, and you need to think about what will happen to the private keys on both ends of the communication chain.  You need to think about stored emails and the possibility that someone will find them in six months.

You need to think about the fact that even if the emails are encrypted they still exist.  Someone browsing the computer can see that you sent an email to a certain address or name.

There are holes everywhere and we’re just talking about sending a private message to one person in a normal environment.

If you want security then think process.  Look at yourself, look at your objectives, and make a call.  Balance your end-goal against the idea of true security (not doing anything at all), and find a way forward. Remember to plan for the inevitable failure of the security process as well.  Perhaps not today, perhaps not tomorrow, but inevitably it will fail.  If you need something that will work in the long-term you’ll have to keep changing and evolving the security process, replacing each potential access point as probability throws it out of favour.

What’s the most likely way a two-way private message conversation will be compromised?  Any guesses?

Yes.

One of the parties will tell a friend.

That’s what I mean about security being a process.  And people are the least reliable part of the process.  We have to take this into account when we try to secure a communication channel.

A day with the boys from GLLUG

May 3rd, 2006

Recently I was lucky enough to speak at the Greater London Linux User Group, a vibrant LUG right in the centre of London.  The meetings are held at Westminster University in a well-equipped lecture theatre (no less than two projector screens lurk over the heads of the speakers), and the ‘amphitheatre’ seating allows everyone to get a good view of the proceedings.  An additional benefit is that it’s possible to pelt bad speakers with fruit, paper and Microsoft CDs without undue effort.  Gravity does most of the work.

I was talking about Free Software, the Free Software Foundation Europe, and why things like GPLv3 are important.  That might sound like preaching to the converted – I was addressing a LUG after all – but we have lots of controversial topics on our collective community plates.  DRM is hovering around like an annoying bee.  Patents loom and loom again, threatening innovation in development.  Fights over terms like ‘Open Source’ and ‘Free Software’ cause bickering among friends.  If ever there was a time to reassess where we are, and where we are going, it’s now.  This is the moment to pull together and share a common agenda.

The GLLUG crowd was both sizable and enthusiastic.  I had been a tad worried that people wouldn’t contribute to the conversation, but the opposite was true.  Intelligent, piercing questions appeared from all directions.  People wanted to know about business models, development models, challenges facing us, and the need to innovate.  It’s a genuine pleasure to engage with an audience who have things on their minds, and are not afraid to talk about them.

I’m loath to label anything as a ‘favourite’ topic, but one strand of our conversation really grabbed me.  A gentleman from the BBC introduced an issue regarding the deployment of GNU/Linux and other Free systems in corporate environments.  Take-up is often sluggish at best, and often a non-starter.  Is it because upper-management don’t know about Free Software?  Is it because of institutional inertia?

My own opinion is that cracking corporate environments is a substantial and multifaceted challenge.  Many factors contribute to reluctance on the part of managers to adopt our system.  One such factor is the lack of technical education on the part of decision makers.  Another factor is institutional policy.  However, I believe the larger issue runs deeper than this, and actually goes right to the core of what we do.

Free Software development models do not always inspire corporate confidence.  That’s not unnatural.  We are hardly corporate in our approach!  Free Software is often developed in non-hierarchical casual groups, with few nominated managers, vague schedules, and even vaguer accountability.  That’s a pretty scary proposition for a business thinking of using our tools: they are often afraid that projects will not deliver, will not continue to evolve, or will not be supported long enough for a deployment lifespan to be completed.

Sure.  Red Hat.  I hear you.  There are companies that support GNU/Linux solutions, and they are not going to vanish overnight.  IBM is a fairly heavyweight asset to have on our side.  The problem is that the presentation of GNU/Linux solutions is not as mature as the proprietary solutions that have had time to prove themselves in the eyes of non-technical business managers.

Things are changing.  I love the IBM site about GNU/Linux solutions, because I think it’s really beating Microsoft at their own game.  Year by year GNU/Linux is gaining respectability and sophistication.  It even steamrolled Sun into releasing Solaris under an open source license.  We are getting to the point where our Free business models are matching the closed business models of the old guard.  However, we must not be complacent.  No matter how good our technology is, it counts for little if we are not competitive with the market leader.  Look at what happened to BeOS.

It’s an interesting discussion thread.  It might not keep the conversation going at frat parties, but it does generate a buzz at LUGs!

Later in the day another gentleman in the audience took me aside.  He also used to work for the BBC, and he described how the community environment of the corporation was ruined by the introduction of aggressive management policies.  In the old days the BBC departments used to pool ideas and resources.  However, now departments have been encouraged to stop helping each other, and to instead compete for increasingly limited budgets.  This set me thinking.  I mentioned earlier how issues like ‘Open Source’ vs ‘Free Software’ have led to bickering in our community.  Is there a danger that as Free Software gets more corporate we’ll become like the modern BBC?  Competitive, uncooperative nuclear groups fighting to prove our worth in comparison to each other?

I don’t think so.

In Free Software these days I see more cooperation than ever.  Look at GNOME and KDE.  In the late nineties there was a lot of tension and (dare I say it?) FUD regarding our two main desktop environments.  These days the dragon and the foot are not only buddy-buddy, but they are working together to develop shared paradigms and innovative ideas about how user productivity can be increased.

Programmers and software architects may be annoying, eccentric, and occasionally unwashed, but they are as smart as heck.  We all have our reasons for spending ridiculous hours in front of computer screens, and these reasons have little to do with shallow egotistical goals.  

GLLUG is pretty much proof of this.  People asked me questions like “what do you think can be done to make Free Software more competitive?” That type of question sees me giving answers that are perhaps too business orientated for some tastes.  I like productivity analysis, quantitative research, and the application of structured frameworks.  However, instead of ending up in flame wars, we all shared different ideas, and mulled over problems with one goal in mind: finding ways forward that will work.  I think that’s why our software works, while Microsoft end up patching a sinking ship (XP) while the mythical Vista slips further away.

One of my favourite moments at the GLLUG meeting was the talk after mine.  Simon Morris was presenting a rather nice PBX system called Asterisk, and things got technical.  It was fun to get down to the real hardware and software after spending such a long time talking about politics and ideology.  I love to learn about technologies, and on this occasion I had the opportunity to uncover a completely new patch of the ICT wonderland.  I had never realised that PBX systems were so easy to set up, or so flexible when it comes to configuration.  It’s a shame I only have one telephone in my house.

After the talks were done, GLLUG as a whole drifted out into the sunlight, and we promptly vanished into the dark basement of a pub to drown our sorrows and talk about geek stuff.  There was a gent with a beard (I apologise if you are reading this, but I don’t think we were ever properly introduced), and we ended up chatting about quantum encryption.  The photon encryption we are playing with now is interesting, but the hardcore geeks among us want to know when the quantum entanglement toys will be ready.

What a day.

If you are ever in London, do drop by the Greater London Linux User Group.  It’s a great example of how rich and diverse a LUG can be.  I had a wonderful time speaking, listening and sharing a pint.  Thank you to everyone for making my all-too-brief visit such a pleasure.  I’m already looking forward to going back.  

The Mobile Office

April 19th, 2006

There is a gross inequality in the distribution of empowering Information and Communication Technology (ICT).  Access to productivity and communication solutions is currently the domain of the richest one sixth of the world, while the remaining five sixths are resolutely disenfranchised with regard to personal computing, mobile communication, and instant processing of information.  This is the ‘Digital Divide,’ an unnecessarily damaging situation where the people who most need productivity solutions are unable to obtain them.  In effect, the vast majority of the human race is condemned to prolonged poverty and inefficient economic, political and social solutions due to neglect and a lack of effort with regard to sharing technology.

There are some positive initiatives underway to introduce modern ICT into developing nations.  Perhaps the most successful are the campaigns to recycle mobile phones into under-privileged nations, thus giving dispersed populations access to rudimentary voice communication.  More ambitious projects exist too, ranging from serious efforts to recycle old personal computers through to the $100 laptop being advocated by Nicholas Negroponte of MIT’s Media Lab.

This paper wishes to make a further contribution to positive suggestions for ICT empowerment and a reduction in the Digital Divide.  The basic premise is that it might be possible to utilise and extend existing technologies to provide functional systems for ICT in developing nations.  The method for incremental empowerment utilises a technology that is already deployed in developing nations: the mobile phone.

Mobile phones provide two essential elements of ICT infrastructure.  They are connected to a digital network, and they have processing capacity.  Newer telephones are capable of running applications, and have rudimentary support for email and Internet communication.  Apart from three inherent design limitations (small input devices, limited screen size and limited memory), they provide an excellent ICT front-end.

We will address the limitations of mobile devices, firstly examining the critical issue of hardware.  Mobile phones don’t have keyboards that are suitable for typing, and they don’t have screens suitable for running productivity applications.

There is a preexisting solution for enabling the connection of keyboards to mobile phones.  It is called a USB port, and many mobile phones already have these ports for things like PictBridge (a way to print directly from phone to printer).  A simple USB connection and a little bit of tweaking of the internal hardware design of the mobile phone would allow a user to connect a standard PC keyboard to the device.

The second hardware issue is a little bit more difficult to resolve.  The limited screen size of mobile phones is an engineering reality and cannot be avoided.  To increase screen real estate requires either increasing the physical size of the mobile phones, or implementing an extension to the mobile phones that will allow images to be output to external devices such as television sets.

The first option is not viable, as increasing the size of the mobile phones would reduce their utility as portable telephones, and the larger physical screen would drastically reduce battery life.  In effect, they would no longer be mobile phones.  They would be small notebook computers with ring tones.

The second option is viable, but requires a certain level of commitment from manufacturers to introduce an output port on all new mobile phones.  The port would be designed to work in conjunction with a special cable to allow connection from the mobile phone to the aerial input on a television set.  This would provide the mobile phone with a display area of 720×480 on NTSC televisions and 720×576 on PAL televisions.

This mobile phone output port would provide little utility in developed nations outside of slide-shows and video screenings, but would act as an essential hardware extension for turning modern mobiles into fully-fledged ICT solutions in developing nations.  People in developing nations already have access to television sets, and increasingly they have access to mobile phones.

With a standard PC keyboard, a mobile phone, a television set and cable to connect the phone to the TV set, a complete physical networked ICT system would exist and depend only on the software to power it.

We must now address the second key limitation of mobile phones.  Mobile phones are low power devices designed for relatively limited computational tasks.  They have limited memory, limited ability to process complex tasks, and a finite battery life.  Any attempt to extend a mobile phone’s computing remit needs to take these factors into account.

This paper’s key suggestion is that a carefully designed series of computational tasks can be accomplished by the intelligent use of preexisting mobile phone technology and dynamically loaded software modules.  It sounds complex, but it is really just about intelligent use of things that already exist, or can be easily created.

In traditional computing we think of applications as big programs that do something.  Microsoft Word is a big application that runs and allows you to create a document.  It can spell check the document, it can print the document.  It can even help you put a movie in your document.  It does a lot, but it’s big.  It uses a lot of space on your computer and it takes up a lot of memory and processing power.  Precisely because Microsoft Word has a lot of capabilities, it requires a lot of resources.

Mobile phones simply don’t have a lot of memory or processing power, and therefore they cannot run an application like Microsoft Word.  To get a mobile phone to run an application means rethinking how we make an application.  Instead of thinking about an application as a completed concept (a big tool that allows people to edit documents), we need to think of an application as lots of tiny little ‘services’ that are called to do something when they are needed.  When the services are not needed, they just switch themselves off.

If we are going to be technical about it, we would say that services need to be incrementally loaded modular components designed to accomplish certain tasks without overloading the processing or memory capacity of the mobile phone.  

Mobile phones already do this to an extent, especially in their support for lots of different types of communication.  Modern mobile phones can send and receive email (we call this SMTP and POP support), they can access the Internet (with what is called a network stack) and they can save information to little memory cards.  Mobile phones don’t do all of these things at once.  In fact, to preserve memory, processing power and battery life, mobile phones usually do one thing at a time.  Therefore mobile phones already have a lot of services that can be called when they are needed.

We are going to extend that concept to office productivity applications.

A productivity application is actually a lot of different services that are used by the person trying to be productive.  Some of these services are critical, but most of them are what might be termed optional.  You don’t need these services to be productive, but you might want to be able to call them.  For example, you don’t need a spell checker to type a letter, but you might want to use one after you have drafted the letter.  The spell checker is a service, and we don’t need to load it into memory until you specifically say “check my spelling now.”

If you reconceptualise an application as different services you can quickly create a map of how resource usage can be minimised.  Certain services such as drawing text (text rendering), understanding user input (keyboard driver) and outputting images to the screen (display driver) must be loaded at all times, but other services for saving documents, sending emails or checking grammar can load dynamically according to task requirements.  An example of how this works is below:

Word Processor/Email client
           |
Text and HTML render engine
Dictionary service (inactive)
Saving/sending service (inactive)
           |
Java Virtual Machine
           |
Mobile OS
Mobile SMTP/Network stack (inactive)

This looks awfully complex, but it’s not really.  

At the top of the pile there is a ‘Word Processor/Email client,’ which is a hypothetical application front-end.  That’s what the user of the mobile phone sees.  It looks like a word processor that can also send emails.  Below that there is a ‘Text and HTML render engine,’ which draws the text on the screen.  Beside this there are other things like a ‘Dictionary service’ for spell checking and a ‘Saving/sending service’ that can help the end user save or email a document.  These all run on a ‘Java Virtual Machine,’ which is an engine to run applications.  This is already on most mobile phones, and it can be understood as a really powerful way to run programs on any phone or computer.  At the bottom there is the ‘Mobile OS,’ (Operating System) which is the heart of the phone.  It controls the basic functions of the phone, and stops and starts things like the Java Virtual Machine.  The ‘Mobile OS’ works with the ‘Mobile SMTP/Network stack’ to allow communication on the Internet or through telephone calls.
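To make the layering concrete, here is a minimal sketch of on-demand service loading in Java.  Every name in it (ServiceRegistry, DictionaryService and so on) is a hypothetical illustration, not part of any real phone platform:

    import java.util.HashMap;
    import java.util.Map;

    interface Service {
        String run(String input);
    }

    // A stand-in service; a real one would consult a dictionary file.
    class DictionaryService implements Service {
        public String run(String input) {
            return "0 spelling errors in: " + input;
        }
    }

    class ServiceRegistry {
        private final Map<String, Service> loaded = new HashMap<String, Service>();

        // Load a service only at the moment it is first called.
        public String call(String name, String input) throws Exception {
            Service s = loaded.get(name);
            if (s == null) {
                s = (Service) Class.forName(name).newInstance();
                loaded.put(name, s);
            }
            return s.run(input);
        }

        // Release a service so its memory can be reclaimed afterwards.
        public void unload(String name) {
            loaded.remove(name);
        }
    }

    public class MobileOfficeDemo {
        public static void main(String[] args) throws Exception {
            ServiceRegistry registry = new ServiceRegistry();
            // Nothing loads until the user asks to "check my spelling now".
            System.out.println(registry.call("DictionaryService", "Dear sir"));
            registry.unload("DictionaryService");  // free the memory again
        }
    }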

Let’s reverse direction, and go through that again.

At the very bottom of the example there is a preexisting Mobile OS running a simple Java Virtual Machine (Java VM).  The mobile has an SMTP/Network stack which is inactive, but can be called by the Mobile OS when required.  This means that the network stack consumes no resources when it’s not needed.
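A lazy initialisation pattern captures this nicely.  The sketch below is purely illustrative (no real mobile OS exposes these names): the stack object simply doesn’t exist until something needs the network.

// Hypothetical lazy initialisation of the network stack.
class NetworkStack {
    // would wrap the phone's SMTP and TCP/IP facilities
}

public class MobileOS {
    private NetworkStack stack; // null means inactive, consuming nothing

    public NetworkStack getNetworkStack() {
        if (stack == null) {
            stack = new NetworkStack(); // brought up only when required
        }
        return stack;
    }

    public void releaseNetworkStack() {
        stack = null; // the VM reclaims the stack's memory
    }
}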

The Java VM is running a lightweight Text and HTML render engine.  This can render (show) text and standards-compliant (X)HTML.  Given current standards, perhaps ‘UTF-8’ (text encoding standard) support and ‘XHTML 1.0 Transitional’ (web language standard) support would make the most sense.  Because resources are limited, even this render engine might benefit from being modular.  This would mean loading support for common rendering tasks into memory by default, but leaving out special sub-services that support unusual XHTML tags or the like.  The render engine could accomplish this by loading the extra rendering sections only when it comes across an unknown XHTML tag.
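In plain Java that lazy loading might look something like this (a sketch only; a real render engine would be considerably more involved):

import java.util.Hashtable;

// Handlers for common tags are registered up front; the first unknown
// tag triggers a one-time load of the extended tag module.
public class RenderEngine {
    private final Hashtable handlers = new Hashtable(); // tag name -> handler
    private boolean extendedTagsLoaded = false;

    public void renderTag(String tag) {
        if (!handlers.containsKey(tag) && !extendedTagsLoaded) {
            loadExtendedTags(); // pay the memory cost only when needed
        }
        Object handler = handlers.get(tag);
        if (handler == null) {
            // still unknown: fall back to showing the content as plain text
        }
        // ... otherwise invoke the handler ...
    }

    private void loadExtendedTags() {
        extendedTagsLoaded = true;
        // would register handlers for the unusual XHTML tags here
    }
}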

Beside the Text and HTML rendering engine there are inactive services.  These could include a dictionary service and a lightweight service for saving or sending the completed text and HTML files.  These services would be wholly inactive until called, and when called might assume temporary control of the environment (pausing the user front end) to accomplish their task without memory or processor overload.  This would have the advantage of minimising memory usage with the disadvantage of only allowing one task to be accomplished at a time.
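One way to picture that hand-over, assuming a simple service contract like the one sketched earlier (FrontEnd is equally hypothetical):

interface Service { void start(); void stop(); }
interface FrontEnd { void pause(); void resume(); }

// The front end is paused while the service runs, so only one
// memory-hungry task occupies the phone at a time.
public class ServiceRunner {
    public static void run(Service service, FrontEnd frontEnd) {
        frontEnd.pause();      // suspend the user interface
        try {
            service.start();   // the service has the phone to itself
        } finally {
            service.stop();    // release the service's memory
            frontEnd.resume(); // hand control back to the user
        }
    }
}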

The dictionary service is a fairly straightforward concept.  A dictionary file would exist on the mobile phone, and so would a service to take the user input (what is written on the screen) and compare it to the dictionary file.  The saving/sending service is more abstract.  This would be a service designed to save the user input from the screen (held in Random Access Memory, or RAM) to permanent storage (like an MMC card), or to take the user input from the screen and send it as an email through the SMTP/Network stack.
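For illustration, a saving service might be as simple as the following (this uses standard Java file classes for clarity; a real phone would use its own storage API):

import java.io.FileWriter;
import java.io.IOException;

// A hypothetical saving service: takes the text currently held in RAM
// and writes it out to permanent storage, such as a file on an MMC card.
public class SaveService {
    public void save(String text, String path) throws IOException {
        FileWriter out = new FileWriter(path); // open the storage file
        try {
            out.write(text); // move the document from RAM to the card
        } finally {
            out.close(); // free the file handle immediately afterwards
        }
    }
}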

The top of the modular application framework is the application front-end.  This would be a simple window with basic menu items (used to call the other modular services), and a canvas for user input.  In most cases input would be textual (word processing, email, spreadsheet, database), but there is room for graphical input as well.  It all depends on what services are running or can be called by the application front-end.  

The application front-end would actually be pretty light, as all application ‘actions’ would be separate modules loaded dynamically as needed.  The text and HTML render engine and any (optional) graphics rendering engines would also exist below the front-end, providing essential services to all aspects of the user interface without requiring a large overhead.
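A sketch of just how thin the front-end could be (every name below is hypothetical):

interface Module { void run(String text); }

class SpellCheckModule implements Module {
    public void run(String text) { /* would check 'text' against the dictionary */ }
}

class SaveModule implements Module {
    public void run(String text) { /* would write 'text' to the memory card */ }
}

// The front-end itself does almost nothing: each menu item just creates
// the module it names, runs it once, and lets it be reclaimed.
public class ApplicationFrontEnd {
    public void onMenuSelect(String menuItem, String text) {
        Module module = null;
        if (menuItem.equals("Check spelling")) {
            module = new SpellCheckModule();
        } else if (menuItem.equals("Save")) {
            module = new SaveModule();
        }
        if (module != null) {
            module.run(text); // loaded only for the duration of the task
        }
        // the module goes out of scope here and its memory can be reclaimed
    }
}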

By having a well-designed modular framework such as this it should be possible to give the appearance of a complete network-aware office suite using the limited resources available in a mobile phone.  Best of all, because almost all services would be shared easily and openly, application development time would be dramatically reduced.  Once primary services like the text and HTML render engine existed, creating a new application would largely consist of making a front-end and linking it to the services underneath.  Adding a new service (like video decoding or a random number generator) would provide new functionality for all front-ends with a minor update.

It’s all about cooperation and sharing.  The mobile office would not actually be an application or an application suite running on a telephone.  It would be loads of little services calling each other when needed, and disappearing into the background when not required.  At least in theory this could allow quite complex ICT tasks to be accomplished with a very modest computational device.

All we need to build are mobile phones with two ports, some simple cables, and some smart software services.  We are talking about a complete networked productivity solution.  It’s not a solution for the developed world, but in Africa, South Asia and Latin America it could make a massive difference.  New phones for the rich, recycled phones for the poor.  The devices would spread along existing channels, and supplement existing technology without requiring massive investment, training or logistics.  Existing networks would carry the signals.  Existing knowledge would empower the users.

$100 laptops are a great idea for the future, but people have mobile phones and TVs now.  With a little effort we could really make that count for something, and take another step towards closing the Digital Divide.

UK ‘Intellectual Property’ consultation process

April 17th, 2006

The UK government has called for public comment on certain copyright and patent issues.

http://www.hm-treasury.gov.uk/media/978/9B/gowers_callforevidence230206.pdf

The deadline for submitting your comment is the 21st of April.  I have just sent in my comment, and reproduce it below for reference:

=== Shane’s comment ===

As a fellow of the Free Software Foundation Europe, an associate of the Free Software Foundation, a member of the Open Source Academy and a member of numerous Free Software development teams, I hereby wish to register my concern regarding the current review of what you term “Intellectual Property Rights.”

Firstly, the term ‘Intellectual Property’ is in itself problematic.  To quote from an article on the Free Software Foundation website: “The term "intellectual property" operates as a catch-all to lump together disparate laws.  Non-lawyers who hear the term "intellectual property" applied to these various laws tend to assume they are instances of a common principle, and that they function similarly.  Nothing could be further from the case.  These laws originated separately, evolved differently, cover different activities, have different rules, and raise different public policy issues.  Copyright law was designed to promote authorship and art, and covers the details of a work of authorship or art.  Patent law was intended to encourage publication of ideas, at the price of finite monopolies over these ideas – a price that may be worth paying in some fields and not in others.  Trademark law was not intended to promote any business activity, but simply to enable buyers to know what they are buying; however, legislators under the influence of "intellectual property" have turned it into a scheme that provides incentives for advertising (without asking the public if we want more advertising).”  (http://www.gnu.org/philosophy/not-ipr.xhtml)

I am uncomfortable with the use of a blanket term to cover issues that are inherently different, and I wish to have that noted.

On page 1 of your call for evidence there is also a problematic sentence.  It suggests that the state “must ensure that IP owners can enforce their rights through both technical and legal means.”  The state’s remit is that of legal jurisdiction.  The adoption of technological enforcement of copyright, patent or trademark claims as an extension or supplement of legal jurisdiction is untenable.  It is not the state’s place to enforce or sanctify technological limitations on hardware or software.  It is the state’s role to provide a legal framework for just regulation.  By allowing the confusion of legal and technological jurisdiction, the state permits the existence of limitations on end user experiences that go far beyond the enforcement of just rights.  The most worrying example of this is termed Digital Rights Management (DRM), and falls within the remit of the call for evidence under the section entitled ‘COPYRIGHT – DIGITAL RIGHTS MANAGEMENT.’

Digital Rights Management is an unfair extension of legitimate copyright terms to cover all aspects of use.  It is about restricting how and when people can use copyrighted files.  It may restrict how many times people can play something.  It may restrict people’s ability to share something.  It may restrict the method people employ to consume something.  It is about allowing companies to determine how the end user will experience the copyrighted material that the end user purchased.

Ultimately DRM means that when a person buys a copyrighted file, they don’t actually have full permission to use it.  The file containing the DRM-protected information won’t necessarily work on another computer, a mobile phone, or a PDA.  It’s a bit like selling a book designed to be read in the living room, but with a limitation preventing it being read in the bath.  Instead of finding a way to stop the end user giving illegitimate copies of a work to other people, DRM is about controlling the right to copy a work for any purpose, and in the process it determines the end user’s consumption method and options as well.

DRM creates a completely new way of controlling how people access information and sanctions corporate control in what was previously a very private sphere.  DRM will allow the companies who create DRM, and the companies that own the content, to control digital networks.  It is an unappealing thought, and governments will be disenfranchised along with citizens.  At best DRM is a misguided attempt to solve a legal concern in a technological arena.  At worst it is a wholly unfair attempt to control how people can access or use copyrighted material, regardless of historical precedent or fair access rights.

DRM is unnecessary.  Governments already stop people sharing copyrighted material through copyright law.  Existing copyright law is applicable to books that you can hold and books on your computer.  It applies to music, movies and software.  There is neither place nor fair justification for any extension of this law through technological limitations and controls.

On a final note, your call for evidence does raise valid points.  In particular, on page 2 it is suggested that while “patents provide a vital incentive for innovation, the granting of overly broad patent protection, together with restrictive or restricted licensing of IP, can impede the development of the next generation of products and reduce competition.”  This is undoubtedly true, especially in an area critical to the sustainability of the European economic sphere: software development.  The United States of America currently sanctifies broad software patents, and innovation is distorted because of this.  Examples of obvious, overly broad or misguided patents abound, with perhaps the most famous being Amazon.com’s patent on its “One-Click” purchasing system.  Such a patent is obvious, and dependent only on an authentication system and a user log-in identity.  While an argument exists for the ability to patent an innovative invention, the use of standard technology to facilitate an obvious and abstract service is unacceptable.  Software patents, covering an area without tangible products and where standardised tools are used to create new applications, are unjust for both innovators and end users.  Software should properly be covered by copyright law, not patents.  There is an excellent article on this matter on The Guardian website
(http://technology.guardian.co.uk/online/comment/story/0,12449,1510566,00.html).

Thank you for reading my thoughts on this matter.  I look forward to following this consultation process closely.

Yours

Shane M. Coughlan

(Detached Digital Signature enclosed with original email)