Paul Boddie's Free Software-related blog


Archive for the ‘public sector’ Category

Site Licences and Volume Licensing: Locking You Both In… and Out

Sunday, June 9th, 2013

Once upon a time, back in the microcomputer era, if you were a reputable institution and were looking to acquire software it was likely that you would end up buying proprietary software, mostly because Free Software was not a particularly widely-known concept, and partly because any “public domain” or “freeware” programs probably didn’t give you or your superiors much confidence about the quality or maintenance of those programs, although there were notable exceptions on some platforms (some of which are now Free Software). As computers became more numerous, programs would be used on more and more computers, and producers would take exception to their customers buying a single “copy” and then using it on many computers simultaneously.

In order to avoid arguments about common expectations of reasonable use – if you could copy a program onto many floppy disks and run that program on many computers at once, there was obviously no physical restriction on the use of copies and thus no apparent need to buy “official” copies when your computer could make them for you – and in order to avoid needing to engage in protracted explanations of copyright law to people for whom such law might seem counter-intuitive or nonsensical, the concept of the “site licence” was born: instead of having to pay for one hundred official copies of a product, presumably consisting of one hundred disks in one hundred boxes with one hundred manuals, at one hundred times the list price of the product, an institution would buy a site licence for up to one hundred computers (or perhaps as many as the institution has, betting on the improbability that the institution will grow tenfold, say) and pay somewhat less than one hundred times the original price, although perhaps still a multiple of ten of that price.
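
To make that arithmetic concrete, here is a minimal sketch in Python: the list price, the number of computers and the site licence multiplier are purely illustrative assumptions, not taken from any real vendor’s pricing.

    # Illustrative only: all figures are hypothetical, not any vendor's real pricing.
    list_price = 500        # price of one "official" copy, in some currency
    computers = 100         # number of computers the institution wants to cover

    # Buying individual copies: one boxed copy per computer.
    per_copy_total = list_price * computers

    # A site licence covering up to 100 computers: somewhat less than one
    # hundred times the list price, but still a multiple of ten of it.
    site_licence_total = list_price * 30

    print("Individual copies:", per_copy_total)
    print("Site licence:     ", site_licence_total)
    print("Saving:           ", per_copy_total - site_licence_total)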

Thus, the customer got the vendor off their back, the vendor still got more or less what they thought was a fair price, and everyone was happy. At least that is how it all seemed.

The Physical Discount Store

Now, because of the apparent compromise made by the vendor – that the customer might be paying somewhat less per copy – the notion of the “volume licence” or “bulk discount” arose: suddenly, software licences started to superficially resemble commodities and people started to think of them just as they do when they buy other things in bulk. Indeed, in the retail sector the average person became aware of the concept of bulk purchasing with the introduction of cash and carry stores, discount stores, and so on: the larger the volume of goods passing through those channels, the bigger the discounts on those goods.

Now, economies of scale exist throughout modern commerce and often for good reason: any fixed costs (or costs largely insensitive to the scale of output) in production and distribution can be diluted by an increased number of units produced and shipped, making the total per-unit cost less; commitments to larger purchases, potentially over a longer period of time, can also provide stability to producers and suppliers and encourage mutually-beneficial and lasting relationships throughout the supply chain. A thorough treatment of this topic is clearly beyond a blog post, but it is worthwhile to briefly explore how savings arise and how discounts are made.

Let us consider a producer whose factory can produce at most a million units of a product every year. It may not seek to utilise this capacity if it cannot be sure that all units will be sold: excess inventory may incur warehouse costs and also result in an uncompetitive product going unsold or needing to be heavily discounted in order to empty those warehouses and make room for more competitive stock. Moreover, the producer may need to reconsider their employment levels if the demand varies significantly, which in some places incurs significant costs both in reduction and expansion. Adding manufacturing capability might not be as easy as finding a spare factory, either. All this additional flexibility is expensive for producers.

However, if a large, well-known retailer like Wal-Mart or Tesco (to name but two that come to mind immediately) comes along and commits to buying most or all of the production, a producer now has more certainty that the inventory will be sold and that it will not be paying people to do nothing or to suddenly have to change production lines to make new products, and so on. Even things like product variations can be minimised by having a single customer or few customers, and this reduces costs for the producer still further. Naturally, Wal-Mart would expect some of the savings to be passed on to them, and so this relationship benefits both parties. (It also produces a potential discount to be passed on to retail customers who may not be buying in bulk after all, but that is another matter.)

The Software Discount Store?

For software, even though the costs of replication have been driven close to nothing, the production of software certainly has a significant fixed cost: the work required to develop a viable product in the first place. Let us say that an organisation wishes to make and sell a non-niche product but needs to employ fifty people for two years to do so (although this would have been an almost biblical level of manpower for some successful software companies in the era of the microcomputer); thus one hundred person-years are invested in development. Just to remain in business while selling “copies” of the software, one might need to sell one hundred thousand individual copies. That is, if the company wants to just sell “licences” and not do things like services, consulting, paid support, and so on.
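
As a rough sketch of that break-even arithmetic, again in Python: the fully-loaded cost per person-year and the per-copy price below are assumptions, chosen only so that the result matches the one hundred thousand copies mentioned above.

    # Break-even sketch: cost and price figures are assumptions for illustration.
    developers = 50
    years = 2
    person_years = developers * years      # one hundred person-years of development

    cost_per_person_year = 100_000         # hypothetical fully-loaded cost per person-year
    price_per_copy = 100                   # hypothetical price of a single copy

    development_cost = person_years * cost_per_person_year
    break_even_copies = development_cost // price_per_copy

    print("Development cost:    ", development_cost)
    print("Copies to break even:", break_even_copies)   # 100000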

Now, the cost of each copy can be adjusted according to the number of sales. If things go better than expected, the prices could be lowered because the company will cover its costs more quickly than anticipated, but they may also raise the prices to take advantage of the desirability of the product. If things go worse than expected, the prices might be raised to bring in more revenue per sale, but such pricing decisions also have to consider the customer reaction where an increased price turns away customers who can no longer justify the expense. In some cases, however, raising the price might make the product seem more valuable and make it more attractive to potential customers, despite the initial lack of interest from such customers.

So, can one talk about economies of scale with regard to software as if it were a physical product or commodity? Not really. The days of needing to get more disks from the duplicator, more manuals from the printer, and to send more boxes to distributors are over, leaving the bulk of the expense in employing people to get the software written. And all those people developing the product are not producing more units by writing more code or spending more time in the office. One can argue that by adding more features they are generating more sales, but it is doubtful that the relationship between features and sales is so well defined: after a while, a lot of the new features will be superfluous for all but “power users”. One can also argue that by adding more features they are making the product seem more valuable, and so a higher price can be justified. To an extent this may be the case, but the relationship between price and sales is not always so well defined, either (despite attempts to define it). But certainly, you do not need to increase your “production capacity” to fulfil a sales need: whether you make one hundred or one million sales (or generate a tenth of or ten times the anticipated revenue) is probably largely independent of how many people were hired to write the code.

But does it make sense to consider bulk purchasing of software as a way of achieving savings? Not really. Unlike physical production, there is no real limit to how many units are sold to customers, and so beyond a certain threshold demanded by profitability, there is no need for anyone to commit to purchasing a certain number of units. Especially now that a physical component of a software product is unlikely to be provided in any transaction – the software is downloaded, the manual is downloaded, there is no “retail box”, no truck arriving at the customer, no fork-lift offloading pallets of the product – there is also no inventory sitting in a warehouse going unsold. It might be nice if someone paid a large sum of money so that the developers could keep working on the product and not have to be moved to some other project, but the constraints of physical products do not apply so readily here.

Who Benefits from Volume Licensing?

It might be said, then, that the “economies of scale” argument starts to break down when software is considered. Producers can more or less increase supply at will and at a relatively low cost, and they need only consider demand in order to break even. Beyond that point, everything is more or less profit and they deliver units at no risk to themselves. Certainly, a producer could use this to price their products aggressively and to pass on considerable savings to customers, but they have no obligation and arguably little inclination to do so for profitability reasons alone. Indeed, they probably want to finance new products and therefore need the money.

When purchasers of physical goods choose to buy in bulk, they do so to get access to savings passed on by the producer, and for some categories of products the practice of committing larger sums of money to larger purchases carries little risk. For example, an organisation might buy a larger quantity of toilet paper than it normally would – even to the point of some administrator complaining that “this must be more than we really need!” – and as long as the organisation had space to store it, it would surely be used over time with very little money wasted as a result.

But for software, any savings passed on by the producer are more discretionary than genuine products of commerce, and there is a real risk of buying “more than we really need”: a licence for an office application will not get “used up” when someone has “reached the end” of another licence; overspending on such capacity is just throwing money away. It is simply not in the purchaser’s interest to buy too many licences.

Now, software producers have realised that their customers are sensitive to this issue. Presumably, the notion of the site licence or “volume licensing” arose fairly quickly: some customers may have indicated that their needs were not so well-defined that they could say that they needed precisely one hundred copies of a product, and besides, their computer users might not have all been using the software at the same time, and so it might not make sense to provide everyone with a copy of a program when they could pass the disks around (or in later times use “floating licences”). So, producers want customers to feel that they are getting value for money and not spending too much, and thus the site licence was presumably offered as a way of stopping them from just buying exactly what they need, instead getting them to spend a bit more than they might like, but perhaps a bit less than they would need to if money were no object and per-unit pricing was the only thing on offer. (The other way of influencing the customer is, of course, the threat of audits by aggressive proprietary software organisations, but that is another matter.)

Regardless of the theory and the mechanisms involved, do customers benefit from site licences? Well, if they spend less on a site licence than they do on the list price of a product multiplied by the number of active users of that product, then they at least benefit from savings on the licensing fees, certainly. However, there are other factors involved, introducing other broader costs, that we will return to in a moment.

Do producers benefit from site licences? Almost certainly. They allow companies to opportunistically increase revenue by inviting customers to spend a bit more for “peace of mind” and convenience of administration (no more having to track all by yourself who is using which product and whether too many people are doing so because a “helpful” company will take care of it for you). If such a thing did not exist, customers would probably choose to act conservatively and more closely review their purchases. (Or they might just choose to embrace Free Software instead, of course.)

All You Won’t Eat

But it is the matter of what the customer needs that should interest us here. If customers did need to review their purchases more closely, they might find it hard to justify spending large sums on volume licences. After all, not everyone might be in need of some product that can theoretically be rolled out to everyone. Indeed, some people might prefer another product instead: it might be much more appropriate for their kind of work, or it might work better on their platform (or even actually work on their platform where the already-bought product does not).

And where the organisation’s purse strings are loosened when buying a site licence for a product in the first instance, the organisation may not be so forthcoming with finance to acquire other products in the same domain, even if there are genuine reasons for doing so. “You already have an office program you can use; why do you want us to buy another?” Suddenly, instead of creating opportunities, volume licensing eliminates them: if the realm of physical products worked like this, Tesco would offer only one brand of toilet paper and perhaps not even a particularly pleasant one at that!

But it doesn’t stop there. Some vendors bundle products together in volume licensing deals. “Why not indulge yourself with a package of products featuring the ones you want together with some you might like?” This is what customers are made to ask themselves. Suddenly, the justification for acquiring a superior product from a competitor of the volume licensing provider is subject to scrutiny. “You already have access to an intranet solution; why do you want us to spend time and money on another?” And so the supposedly generous site licence becomes a mechanism to rein in spending and even the mere usage of alternatives (which may be Free Software acquired at no cost), all because the acquisition cost of things that people are not already actively using is wrongly perceived as being “free”. “Just take advantage of the site licence!” is what people are told, and even if the alternatives are zero cost, the pressure will still be brought to bear because “we paid for things we could use, so let’s use them!”

And the Winner is…

With such blinkered thinking the customer can no longer sensibly exercise choice: it becomes too easy to constrain an organisation’s strategy based on what else is in the lucky dip of products included in the multiple product volume licensing programme. Once one has bought into such a scheme, there is a disincentive to look elsewhere for other solutions, and soon every need to be satisfied becomes phrased in terms of the solutions an organisation has already “bought”. Need an e-mail system? The solution now has to be phrased in terms of a single vendor’s product that “we already have”. And when such extra purchases merely add to proprietary infrastructure with proprietary dependencies, that supposedly generous site licence is nothing but bait on the end of the vendor’s fishing line.

We know who the real winner is here. The real loser is anyone having to compete with such schemes, especially anyone using open standards in their products, particularly anyone delivering Free Software using open standards. Because once people have paid good money for something, they will defend that “investment” even when it makes no real sense: this is basic human psychology at work. But the customer is the loser, too: what once seemed like a good deal will just result in them throwing good money after bad, telling themselves that it’s the volume of usage – the chance to sample everything at the “all you can eat” buffet – that makes it a “good investment”, never mind that some of the food at the buffet is unhealthy, poor quality, or may even make people ill.

The customer becomes increasingly “locked in”, unable to consider alternatives. The competition becomes “locked out”, unable to persuade the customer to migrate to open-standards-based solutions or indeed anything else, because even if the customer recognised their dependency on their existing vendor, the cost of undoing the mess might well be less predictable and less palatable than a subscription fee to that “preferred” vendor, appearing as an uncomfortably noticeable entry in the accounts that might indicate strategic ineptitude or wrongdoing – that a mistake has been made – which would be difficult to acknowledge and tempting to conceal. But when the outcome of taking such uncomfortable remedial measures would be lower costs, truly interoperable systems and vastly increased choice, it would be the right thing to do.

One might be tempted to just sit back and watch all this unfold, especially if one has no connection with any of the organisations involved and if the competition consists only of a bunch of proprietary software vendors. But remember this: when the customer is spending your tax money, you are the loser, too. And then you have to wonder who apart from the “preferred” vendor benefits from making you part of the losing team.

Horseplay in Public Procurement? “Standards!”

Thursday, June 6th, 2013

There is a classic XKCD comic strip where the programmer, “slacking off” in the office and taking a break from doing work, clearly engaging in horseplay, issues the retort “Compiling!” to get his supervisor or peers off his back. It is seen as the ultimate excuse for not doing one’s work, immediately curtailing any further investigation of what really is going on in the corridor. Having recently been investigating some strategic public sector purchasing decisions, it occurred to me that something similar is going on in that area as well.

There’s an interesting case that came up a few years ago: Oslo municipality sought to acquire infrastructure for e-mail and related functionality. The scope of the tender covered “at least 30000 accounts” for client and server software, services and assistance, which is a pretty big tender but not unexpected given that the municipality is one of the largest single employers in Norway with almost 50000 employees (more statistics available here). Unfortunately, the additional documents are no longer available (and are generally not publicly available at the state procurement portal – you have to register as an interested party), but they are quoted in various places. Translating one particular requirement…

“Oslo municipality has standardised on Microsoft Office as office productivity software. It is therefore expected that solutions use MS Outlook 2003 and later as client.”

Two places where the offending requirements are reproduced are in complaints to the state procurement panel: 2009/124 and 2009/153. In these very similar complaints, it is pointed out that alternatives to Outlook can be offered as options (this is in the original tender), but that the municipality would only test proposed solutions with Outlook. As justification for insisting on Outlook compatibility, the municipality claimed that they had found “six different large companies providing relevant software in connection with the drafting of the requirements… all of which can be used together with Outlook”, and thus there was a basis for real competition. As a result, both complaints were rejected.

The Illusion of Compatibility

Now, one might claim that it is perfectly reasonable to want to acquire systems that work with the ones you already have. It is a bit like saying, “I’ve bought all this riding equipment: of course I want a horse!” The deeper issue here is whether anyone should be allowed to specify product compatibility to limit competition. In other words, when you just need transport to get around, why have you made your requirements so specific that you will only ever be getting a horse?

It is all very well demanding compatibility with a specific product, but when the means by which compatibility can be achieved are controlled by the vendor of that product, it is never going to be a fair competition for anyone trying to provide compatibility for their own separate products and solutions, especially when the vendor of the specified product is known to have used compatibility breakage to deliberately undermine the viability of competitors’ products. One response to this pitfall is to insist that those writing procurement tenders specify standards instead of products and that these standards must be genuinely open and not de-facto proprietary standards.

Unfortunately, the regulators of procurement do not seem to go even this far. The Norwegian government states that public sector institutions must support various standards, although the directorate concerned appears to have changed these obligations from the original directive and now insists that the dubious, forcibly- and incompletely-standardised Office Open XML document format must be accepted by the public sector in communications; they have also weakened the Internet publishing requirements for public sector institutions by permitting the use of various encumbered, cartel-controlled audio and video formats. For these changes, entertained in a review process, we can thank the likes of Statistics Norway who wanted “Word format” as well as OOXML to be permitted in the list of acceptable “standards”.

In any case, such directives only cover the surface of public sector activity, and the list of standards does not in general cover anything more than storage and interchange formats plus basic communications standards. This leaves quite a gap where established Internet standards exist but are not mandated, thus allowing proprietary protocols and technologies to insert themselves into infrastructure and pervert the processes of procurement and systems integration.

The Pretence of “Standards!”

But even if open standards were mandated in the public sector – a worthy and necessary measure – that wouldn’t mean that our work to ensure a level playing field – fairness in procurement – would be done. Because vendors can always advertise compliance with standards, they can still insist that their products be considered in any procurement contest, and even if those products do notionally support standards it does not mean that they will end up using them when deployed. For example, from the case of the Oslo municipality e-mail system, the councillor with responsibility for finance and development indicated the following:

“Oslo municipality is a complicated and comprehensive organisation and must take existing integration with specialist/bespoke systems into account. A procurement of other [non-Microsoft] end-user software will therefore result in unnecessary increases in costs for the municipality.”

In other words, even if existing software was acquired under the pretence that it supported standards, in deployment it may actually only function with other software using proprietary mechanisms, and the result of this is that newly-acquired software must also support these proprietary mechanisms. And so, a proprietary infrastructure grows, actively repelling components that employ open standards, with its custodians insisting that it is the fault of standards-compliant software that such an infrastructure would need to be dismantled almost in its entirety and replaced if even one standards-compliant component were to be admitted.

Who benefits the most from this? The vendor peddling the proprietary platforms and technologies that enable this morass of interdependency, of course. Make no mistake: any initial convenience promised by such a vendor fades away when the task of having to pursue an infrastructure strategy not dictated by outside interests is brought to bear on the purchaser. But such tasks are work, of course, and if there’s a way of avoiding it and insisting it doesn’t need attending to, a distraction can always be found.

And so, the horseplay continues under the excuse of “Standards!” when there is no real intent to uphold them or engage in the real work of maintaining a sustainable infrastructure that does not exclude open competition or channel public money to preferred vendors. Unlike the character in the comic strip whose code probably is still compiling, certain public sector institutions would have experienced a compilation error and been found out. It appears, unfortunately, that it is our job to peer around the cubicle partition and see what is happening on screen and perhaps to investigate the noises coming from the corridor. After all, our institutions don’t seem to be particularly concerned about doing so.

The Academic Challenge: Ideas, Patents, Openness and Knowledge

Thursday, April 18th, 2013

I recently had reason to respond to an article posted by the head of my former employer, the Rector of the University of Oslo, about an initiative to persuade students to come up with ideas for commercialisation to solve the urban challenges of the city of Oslo. In the article, the Rector brought up an “inspiring example” of such academic commercialisation: a company selling a security solution to the finance industry, possibly based on “an idea” originating in a student project and patented as part of the subsequent commercialisation strategy leading to the founding of that company.

My response made the following points:

  • Patents stand counter to the academic principle of the dissemination of unencumbered knowledge, where people may come and learn, then make use of their new knowledge, skills and expertise. Universities are there to teach people and to undertake research without restricting how people in their own organisations and in other organisations may use knowledge and thus perform those activities themselves. Patents also act against the independent discovery and use of knowledge in a startlingly unethical fashion: people can be prevented from taking advantage of their own discoveries by completely unknown and inscrutable “rights holders”.
  • Where patents get baked into attempts at commercialisation, not only does the existence of such patents have a “chilling effect” on others working in a particular field, but even with such patents starting life in the custody of the most responsible and benign custodians, financial adversity or other circumstances could lead to those patents being used aggressively to stifle competition and to intimidate others working in the same field.
  • It is all very well claiming to support Open Access (particularly when snobbery persists about which journals one publishes in, and when a single paper in a “big name” journal will change people’s attitudes to the very same work whose aspects were already exposed without such recognition in other less well-known publications), but encouraging people to patent research at the same time is like giving with one hand while taking with the other.
  • Research, development and “innovation” happens more efficiently when people don’t have to negotiate to be able to access and make use of knowledge. For those of us in the Free Software community who have seen how real progress can be made when resources – in our case, software – are freely usable by others through explicit and generous licensing, this is not news. But for others, this is a complete change of perspective that requires them to question their assumptions about the way society currently rewards the production of new work and to question the optimality of the system that grants such rewards.

On the one hand, I am grateful for the Rector’s response; on the other, I feel somewhat disappointed with its substance. I must admit that the Rector is not the first person to favour the term “innovation”, but by now the term has surely lost all meaning and is used by every party to mean what they want it to mean, to be as broad or as narrow as they wish it to be, to be equivalent to work associated with various incentives such as patents, or to be a softer and more photogenic term than “invention” whose own usage may be contentious and even more intertwined with patents and specific kinds of legal instruments.

But looking beyond my terminological grumble, I remain unsatisfied:

  • The Rector insists that openness should be the basis of the university’s activities. I agree, but what about freedoms? Just as the term “open source” is widely misunderstood and misused, being taken to mean that “you can look inside the box if you want (but don’t touch)” or “we will tell you what we do (but don’t request the details or attempt to do the same yourself)”, there is a gulf between openly accessible knowledge and freely usable knowledge. Should we not commit to upholding freedoms as well?
  • The assertion is made that in some cases, commercialisation may be the way innovations are made available to society. I don’t dispute that sometimes you need to find an interested and motivated organisation to drive adoption of new technology or new solutions, but I do dispute that society should grant monopolies on entire fields of endeavour to organisations wishing to invest in such opportunities. Monopolies, whether state-granted or produced by the market, can have a very high cost to society. Is it not right to acknowledge such costs and to seek more equitable ways of delivering research to a wider audience?
  • Even if the Rector’s mention of an “inspiring example” had upheld the openness he espouses and had explicitly mentioned the existence of patents, is it ethical to erect a fence around a piece of research and to appoint someone as the gatekeeper even if you do bother to mention that this has been done?

Commercialisation in academia is nothing new. The university where I took my degree had a research park even when I started my studies there, and that was quite a few years ago, and the general topic has been under scrutiny for quite some time. When I applied for a university place, the politics of the era in question were dominated by notions of competition, competitiveness, market-driven reform, league tables and rankings, with schools and hospitals being rated and ranked in a misguided and/or divisive exercise to improve and/or demolish the supposedly worst-performing instances of each kind.

Metrics of “research excellence” are nothing new, either. It seemed to me that some university departments were obsessed with the idea of research rankings. I recall at least one occasion, during the various tours of university departments, of being asked which other universities we potential applicants were considering, only to have the appointed tour leaders consult the rankings and make an on-the-spot comparison, although I also recall that when the top-ranked universities were named such comparison exercises drew to a swift close. Naturally, the best research departments didn’t need to indulge in such exercises of arguable inadequacy.

The Ole Johan Dahl building, University of Oslo, seen through the mist

The Real Challenge for Students

But does the quality of research have anything to do with the quality of an institution for the average student? Furthermore, does the scale of commercialisation of research in a teaching institution have anything to do with the quality of research? And anyway, why should students care about commercialisation at all?

My own experiences tell me that prospective students would do better to pay attention to reliable indicators of teaching quality than to research ratings. Many of them will end up having relatively little exposure to the research activities of an institution, and even if researchers actively attempt to engage students with “real world” examples from their own work, one can argue that this may not be completely desirable if such examples incorporate the kind of encumbered knowledge featured in the “inspiring example” provided by the Rector. It is, however, more likely that researchers would rather be doing research than teaching and will be less engaging, less available for consultation, and just less suited to providing high quality tuition than the teaching staff in a decent teaching institution. Who cares if a department is doing “cutting edge” research if all you see as a student is a bored and distracted lecturer having to be dragged out of the lab for an hour once or twice a week?

Even the idea that students will go on to do research after their undergraduate degree in the same institution, presumably by forging contacts with researchers in teaching positions, should be questioned. People are encouraged to move around in academia, arguably to an extent that most well-qualified people would find intolerable even in today’s celebrated/infamous “global economy”. That undergraduates would need to relate to the research of their current institution, let alone any commercialisation activity, is in many respects rather wishful thinking. In my entire undergraduate era I never once had any dealings or even awareness of what went on in the university research park: it was just a block of the campus on the map without any relevance and might as well have been a large, empty car park for all the influence it had on my university education.

My advice to undergraduates is to seek out the institutions that care about high-quality teaching, whose educators are motivated and whose courses are recognised for providing the right kind of education for the career you want to pursue. Not having been a postgraduate as such, I don’t feel comfortable giving advice about which criteria might be more important than others, although I will say that you should seek out the institutions who provide a safe, supportive, properly-run and properly-supervised working environment for their researchers and all their employees.

The Circus of Commercialisation

Lots of money is being made in licensing and litigation around commercial and commercialised research, and with large sums being transferred as the result of legal rulings and settlements, it is not particularly difficult to see why universities want in on “the action”. In some countries, with private money and operational revenue ostensibly being the primary source of income for universities, one can almost understand the temptation of such institutions to nail down every piece of work and aggressively squeeze every last revenue-earning drop of potential each work may have, if only because a few bad years of ordinary revenue might lead to the demise of an institution or a substantial curtailment of its reputation and influence. For such institutions, perhaps the only barrier being broken voluntarily is an ethical one: whether they should be appointing themselves as the gatekeepers to knowledge and still calling themselves places of learning.

In other countries, public money props up the education sector, in some nations to the extent that students pay nominal fees and experience as close to a free higher education as one can reasonably expect. Although one might argue that this also puts universities at the mercy of an ungenerous public purse and that other sources of income should be secured to allow such institutions to enhance their offerings and maintain their facilities, such commercial activities deservedly attract accusations of a gradual privatisation of higher education (together with the threat of the introduction of significant fees for students and thus an increased inequality between rich and poor), of neglecting non-applied research and commercially unattractive areas of research, and of taking money from taxpayers whilst denying them the benefit of how it was spent.

Commercialisation is undoubtedly used to help universities appear “relevant” to the general public and to industry, especially if large numbers can be made to appear next to items such as “patents” and “spin-offs” in reports made available to the press and to policy makers and if everyone unquestioningly accepts those things and the large numbers of them as being good things (which is far from being a widely-accepted truth, despite the best efforts of self-serving, high-profile, semi-celebrity advocates of patent proliferation). But the influence of such exercises can be damaging to things like Free Software, not merely creating obstacles for the sharing of knowledge but also creating a culture that opposes the principles of sharing and genuine knowledge exchange that Free Software facilitates and encourages.

Indeed, the Free Software movement and its peers provide a fairer and more sustainable model for the widespread distribution and further development of research than the continuing drive for the commercialisation and monetisation of academia. Free Software developers give each other explicit rights to their work and do not demand that others constantly have to ask permission to do the most elementary things with it. In contrast, commercialisation imposes barriers between researchers and their natural collaborators in the form of obligations to an institution’s “intellectual property” or “technology transfer” office, demanding that every work be considered for licensing and revenue generation (by a group of people who may well be neither qualified nor legitimately entitled to decide). Where Free Software emphasises generosity, commercialisation emphasises control.

Universities, whose role it is to provide universal access to usable knowledge, particularly when funded with public money, should be looking to support and even emulate Free Software practitioners. Instead, by pursuing an agenda of pervasive commercialisation, they risk at the very least a stifling of collaboration and the sharing of knowledge; at worst, such an agenda may corrupt the academic activity completely.

Can universities resist the temptations and distractions of commercialisation and focus on delivering a high-quality experience for students and researchers? That is the real ethical challenge.

Sculpture and reflection detail, Ole Johan Dahl building, University of Oslo

Why the Raspberry Pi isn’t the new BBC Micro (and perhaps shouldn’t be, either)

Wednesday, January 23rd, 2013

Having read a commentary on “rivals” to the increasingly well-known Raspberry Pi, and having previously read a commentary that criticised the product and the project for not upholding the claimed ideals of encouraging top-to-bottom experimentation in computing and recreating the environment of studying systems at every level from the hardware through the operating system to the applications, I find myself scrutinising both the advocacy and the criticism of the project to see how well it measures up to those ideals, and whether the project’s objectives and the way they are to be achieved can still be seen as appropriate thirty years on from the introduction of microcomputers to the masses.

The latter, critical commentary is provocatively titled “Why Raspberry Pi Is Unsuitable for Education” because the Raspberry Pi product, or at least the hardware being sold, is supposedly aimed at education just as the BBC Microcomputer was in the early 1980s. A significant objective of the Computer Literacy Project in that era was to introduce microcomputers in the educational system (at many levels, not just in primary and secondary schools, although that is clearly by far the largest area in the educational sector) and to encourage learning using educational software tools as well as learning about computing itself. Indeed, the “folklore” around the Raspberry Pi is meant to evoke fond memories of that era, with Model A and B variants of the device, and with various well-known personalities being involved in the initiative in one way or another.

Now, if I were to criticise the Raspberry Pi initiative and to tread on toes in doing so, I would rather state something more specific than “education” because I don’t think that providing low-cost hardware to education is a bad thing at all, even if it does leave various things to be desired with regard to the openness of the hardware. The former commentary mentioned above makes the point that cheap computers mean fewer angry people when or if they get broken, and this is hard to disagree with, although I would caution people against thinking that it means we can treat these devices as disposable items to be handled carelessly. In fact, I would be as specific as to state that the Raspberry Pi is not the equivalent of the BBC Micro.

In the debate about openness, one focus is on the hardware and whether the users can understand and experiment with it. The BBC Micro used a number of commodity components, many of which are still available in some form today, with only perhaps one or two proprietary integrated circuits in the form of uncommitted logic arrays (ULAs), and the circuit diagram was published in various manuals. (My own experience with such matters is actually related to the Acorn Electron which is derived from the BBC Micro and which uses fewer components by merging the tasks of some of those omitted components into a more complicated ULA which also sacrifices some functionality.) In principle, apart from the ULAs for which only block diagrams and pin-outs were published, it was possible to understand the functioning of the hardware and thus make your own peripheral hardware without signing non-disclosure agreements (NDAs) or being best friends with the manufacturer.

Meanwhile, although things are known about the components used by the Raspberry Pi, most obviously the controversial choice of system-on-a-chip (SoC) solution, the information needed to make your own is not readily available. Would it be possible to make a BBC Micro from the published information? In fact, there was a version of the BBC Micro made and sold under licence in India where the intention was to source only the ULAs from Acorn (the manufacturer of the BBC Micro) and to make everything else in India. Would it be desirable to replicate the Raspberry Pi exactly? For a number of reasons it would be neither necessary nor desirable to focus so narrowly on one specific device, and I will return to this shortly.

But first, on the most controversial aspect of the Raspberry Pi, it has been criticised by a number of people for using a SoC that incorporates the CPU core alongside proprietary functionality including the display/graphics hardware. Indeed, the system can only boot through the use of proprietary firmware that runs on a not-publicly-documented processing core for which the source code may never be made available. This does raise concern about the sustainability of the device in terms of continued support from the manufacturer of the SoC – it is doubtful that Broadcom will stick with the component in question for very long given the competitive pressures in the market for such hardware – as well as the more general issues of transparency (what does that firmware really do?) and maintainability (can I fix bad hardware behaviour myself?). Many people play down these latter issues, but it is clear that many people also experience problems with proprietary graphics hardware, with its sudden unexplainable crashes, and proprietary BIOS firmware, with its weird behaviour (why does my BIOS sometimes not boot the machine and just sit there with a stupid Intel machine version message?) and lack of functionality.

One can always argue that the operating system on the BBC Micro was proprietary and the source code never officially published – books did apparently appear in print with disassembled code listings, clearly testing then-imprecisely-defined boundaries of copyright – and that the Raspberry Pi can run GNU/Linux (and the proprietary operating system, RISC OS, that is perhaps best left as a historical relic), and if anything I would argue that the exposure that Free Software gets from the Raspberry Pi is one of the initiative’s most welcome outcomes. Back in the microcomputer era, proprietary things were often regarded as being good things in the misguided sense that since they are only offered by one company to customers of that company, they would presumably offer exclusive features that not only act as selling points for that company’s products but also give customers some kind of “edge” over people buying the products of the competitors, if this mattered to you, of course, which is arguably most celebrated in recollections of playground/schoolyard arguments over who had the best computer.

The Computer Literacy Project, even though it did offer funding to buy hardware from many vendors, sadly favoured one vendor in particular. This might seem odd as a product of a government and an ideology that in most aspects of public life in the United Kingdom emphasised and enforced competition, even in areas where competition between private companies was a poor solution for a problem best solved by state governance, and so de-facto standards as opposed to genuine standards ruled the education sector (just as de-facto standards set by corporations, facilitated by dubious business practices, ruled other sectors from that era onwards). Thus, a substantial investment was made in equipment and knowledge tied to one vendor, and it would be that vendor the customers would need to return to if they wanted more of the same, either to continue providing education on the range of supported topics or related ones, or to leverage the knowledge gained for other purposes.

The first commentary mentioned above uses the term “the new Raspberry Pi” as if the choice is between holding firm to a specific device with its expanding but specific ecosystem of products and other offerings or discarding it and choosing something that offers more “bang for the buck”. Admittedly, the commentary also notes that there are other choices for other purposes. But just as the BBC Micro enjoyed a proliferation of peripheral hardware and of software, both commissioned for the platform and written as the market expanded, so too does the Raspberry Pi, and even though this does mean that people will be able to do things that they never considered doing before, particularly with hardware and electronics, there is a huge risk that all of this will be done in a way that emphasises a specific device – a specific solution involving specific investments – that serves to fragment the educational community and reproduce the confusion and frustration of the microcomputer era, where a program or device required a specific machine to work.

Although it appeals to people’s nostalgia, the educational materials that should be (and presumably are) the real deliverable of the Raspberry Pi initiative should not seek to recreate the Tower of Babel feeling brought about by opening a 1980s book like Computer Spacegames and having to make repeated corrections to programs so that they may have a chance of running on a particular system (even though this may in itself have inspired a curiosity in myself for the diversity seen in systems and both machine and natural languages). Nothing should be “the old Raspberry Pi” or “the new Raspberry Pi” or “the even newer Raspberry Pi” because dividing things up like this will mean that people will end up using the wrong instructions for the wrong thing and being frustrated and giving up. Just looking at the chaos in the periphery around the Arduino is surely enough of a warning.

In short, we should encourage diversity in the devices and solutions offered for people to learn about computing, and we should seek to support genuine standards and genuine openness so that everyone can learn from each other, work with each other’s kit, and work together on materials that support them all as easily and as well as possible. Otherwise, we will have learned nothing from the past and will repeat the major mistakes of the 1980s. That is why the Raspberry Pi should not be the new BBC Micro.

Norway and the Euro 2012 Final

Sunday, July 1st, 2012

Here, I’m obviously not referring to the football championship but to the outcome of an interesting thought experiment: which countries are making the best progress in adopting and supporting Free Software? As a resident of Norway, I’d like to think that I’m keeping my finger on the pulse here in a country that has achieved a lot in recent years: mandatory use of open standards, funding of important Free Software projects in education, and the encouragement of responsible procurement practices in the public sector.

Norway’s Own Goals

However, things don’t always go in the right direction. Recently, it has become known that the government will withdraw all financial support for the Norwegian Open Source Competence Center, founded to encourage and promote Free Software adoption in government and the public sector. One may, of course, question the achievements of the centre, especially when considering how much funding it has had and whether “value for money” has been delivered, or as everyone knows where any measure of politics is present, whether the impression of “value for money” has been delivered. Certainly, the centre has rolled out a number of services which do not seem to have gained massive popularity: kunnskapsbazaren (the knowledge bazaar) and delingsbazaren (the sharing bazaar) do not seem to have amassed much activity and appear, at least on the surface, as mostly static collections of links to other places, some of which are where the real activity is taking place.

But aside from such “knowledge repository” services, the centre does seem to have managed to provide the foundations for the wider use of Free Software, in particular arguing for and justifying the collaborative nature of Free Software within the legal framework of public sector procurement, as well as organising a yearly conference where interested parties can discuss such matters, present their own activities, and presumably establish the basis for wider collaboration. My own experience tells me that even if one isn’t involved with the more practical aspects of setting up such an event, such as the details of getting a venue in order, organising catering and so on, the other aspects can consume a lot of organising time and could quite easily take over the schedule of a number of full-time employees. (People going to volunteer conferences possibly don’t realise how many extra full-time positions, or the effective equivalent of such positions, have been conjured up from hours scraped together from people’s spare time.)

And it has also been noted that the centre has worked hard to spread the message on the ground, touring, giving presentations, producing materials – all important and underrated activities. So, in contrast to those who think that the centre is merely a way of creating jobs for the sake of it, I’m certainly willing to give those who have invested their energy in the centre the benefit of the doubt, even if I cannot honestly say that my awareness of their work has been particularly great. (Those who are so eager to criticise the centre should really take a long hard look at some other, well-established institutions for evidence of waste, lack of “value for money”, and failure to act in the public interest. I’m sure the list is pretty long if one really starts to dig.)

In Politics the Short Term Always Wins

In many ways – too many to get into here – Norwegian society offers plenty of contradictions. A recent addition appears regularly on the front pages of the tabloids, asserting a different angle at different times: there may be a financial crisis, but it apparently doesn’t affect Norway… or maybe it does, but here’s how you, the consumer, can profit from it (your mortgage is even cheaper and you can expect your house to increase even further in value!). I was told by someone many years my junior the other day as he tried to sell me a gym membership that “we’re in a recession” and so the chain is offering a discount to new recruits. I somehow doubt that the chain is really suffering or that the seller really knows what a recession is like.

Nevertheless, the R-word is a great way to tighten the purse strings and move money around to support different priorities, not all of them noble or sensible. There are plenty of people who will claim that public IT spending should be frugal and that it is cheaper to buy solutions off the shelf than it is to develop sustainable solutions collaboratively. But when so many organisations need to operate very similar solutions, and when everybody knows that such off-the-shelf solutions will probably need to be customised, frequently at considerable expense, then to buy something that is ostensibly ready-made and inexpensive is a demonstration of short-term thinking, only outdone by the head of IT at the nation’s parliament boasting about only needing to rely on a single vendor. Since he appears to have presided over the introduction of the iPad in the parliament – a matter of concern to those skeptical about the security implications – one wonders how many vendors are really involved and how this somehow automatically precludes the use of Free Software, anyway.

(Another great example of public IT spending has to be the story of a local council buying iPads for councillors while local schools have 60-year-old maps featuring the Soviet Union, with the spending being justified on the basis that people will print and copy less. It will be interesting to see whether the predicted savings materialise after people figure out how to print from their iPads, I’m sure.)

The Role of the Referee

The public sector always attracts vested interests who make very large sums of money from selling licences and services and who will gladly perpetrate myths and untruths about Free Software and open standards in order to maintain their position, leaving it to others to correct the resulting misconceptions of the impressionable observer. There needs to be someone to remind public institutions that they are obliged to act in the public interest and conduct sustainable operations, and that the taxpayer should not have to cover every expense of those operations just because they have delegated control to a vendor who decides which technologies they may or may not use and which roadmaps are available to them, burdening them with arbitrary migration exercises, extra and needless expenditure, and so on. Moreover, such institutions should protect those who have to interact with them from interference: taxpayers should not suddenly be required to buy a particular vendor’s products in order to discharge their public obligations.

As we have already seen, there is a need for education around the issues of sustainable public sector computing as well as a need to hold those responsible for public expenditure to account. Holding them to account should be more considered than the knee-jerk response of splashing their organisation across the media when something goes wrong, even if this does provide opportunities for parody; it should involve discussion of matters such as whether such organisations have enough resources, whether they are sharing the burden with others who have similar goals instead of needlessly duplicating effort, and whether they are appropriately resourced so that they may operate sustainably.

I don’t expect a competence centre to perform the task of referee in ensuring a properly functioning, sustainable, interoperable, transparent public sector, but just as a referee cannot do without his linesmen, it is clear that public institutions and the society that pays for them and gives them their role cannot do without an agent that helps and informs those institutions, ensuring that they interact fairly with technology providers both small and large, and operate in a manner that benefits both society and themselves, through the genuine empowerment that Free Software has to offer.