Paul Boddie's Free Software-related blog


Archive for the ‘procurement’ Category

Public Money, Public Code, Public Control

Thursday, September 14th, 2017

An interesting article published by the UK Government Digital Service was referenced in a response to the LWN.net coverage of the recently-launched “Public Money, Public Code” campaign. Arguably, the article focuses a little too much on “in the open” and perhaps not enough on the matter of control. Transparency is a good thing, collaboration is a good thing, no-one can really argue about spending less tax money and getting more out of it, but it is the matter of control that makes this campaign and similar initiatives so important.

In one of the comments on the referenced article you can already see the kind of resistance that this worthy and overdue initiative will meet. There is this idea that the public sector should just buy stuff from companies and not be in the business of writing software. Of course, this denies the reality of delivering solutions where you have to pay attention to customer needs and not just have some package thrown through the doorway of the customer as big bucks are exchanged for the privilege. And where the public sector ends up managing its vendors, you inevitably get an under-resourced customer paying consultants to manage those vendors, maybe even their own consultant colleagues. Guess how that works out!

There is a culture of proprietary software vendors touting their wares or skills to public sector departments, undoubtedly insisting that their products are a result of their own technological excellence and that they are doing their customers a favour by merely doing business with them. But at the same time, those vendors need a steady – perhaps generous – stream of revenue consisting largely of public money. Those vendors do not want their customers to have any real control: they want their customers to be obliged to come back year after year for updates, support, further sales, and so on; they want more revenue opportunities rather than their customers empowering themselves and collaborating with each other. So who really needs whom here?

Some of these vendors undoubtedly think that the public sector is some kind of vehicle to support and fund enterprises. (Small- and medium-sized enterprises are often mentioned, but the big winners are usually the corporate giants.) Some may even think that the public sector is a vehicle for “innovation” where publicly-funded work gets siphoned off for businesses to exploit. Neither of these things cultivates a sustainable public sector, nor do they even create wealth effectively in wider society: they lock organisations into awkward, even perilous technological dependencies, and they undermine competition while inhibiting the spread of high-quality solutions and the effective delivery of services.

Unfortunately, certain flavours of government hate the idea that the state might be in a role of actually doing anything itself, preferring that its role be limited to delegating everything to “the market” where private businesses will magically do everything better and cheaper. In practice, under such conditions, some people may benefit (usually the rich and well-represented) but many others often lose out. And it is not unknown for the taxpayer to have to pick up the bill to fix the resulting mess that gets produced, anyway.

We need sustainable public services and a sustainable software-producing economy. By insisting on Free Software – public code – we can build the foundations of sustainability by promoting interoperability and sharing, maximising the opportunities for those wishing to improve public services by upholding proper competition and establishing fair relationships between customers and vendors. But this also obliges us to be vigilant to ensure that where politicians claim to support this initiative, they do not try to limit its impact by directing money away from software development towards the more easily subverted process of procurement, while claiming that procured systems need not be subject to the same demands.

Indeed, we should seek to expand our campaigning to cover public procurement in general. When public money is used to deliver any kind of system or service, it should not matter whether the code existed in some form already or not: it should be Free Software. Otherwise, we indulge those who put their own profits before the interests of a well-run public sector and a functioning society.

The BBC Micro and the BBC Micro Bit

Sunday, March 22nd, 2015

At least on certain parts of the Internet as well as in other channels, there has been a degree of excitement about the announcement by the BBC of a computing device called the “Micro Bit”, with the BBC’s plan to give one of these devices to each child starting secondary school, presumably in September 2015, attracting particular attention amongst technology observers and television licence fee-payers alike. Details of the device are a little vague at the moment, but the announcement along with discussions of the role of the corporation and previous initiatives of this nature provides me with an opportunity to look back at the original BBC Microcomputer, evaluate some of the criticisms (and myths) around the associated Computer Literacy Project, and to consider the things that were done right and wrong, with the latter hopefully not about to be repeated in this latest endeavour.

As the public record reveals, at the start of the 1980s, the BBC wanted to engage its audience beyond television programmes describing the growing microcomputer revolution, and it was decided that to do this and to increase computer literacy generally, it would need to be able to demonstrate various concepts and technologies on a platform that would be able to support the range of activities to which computers were being put to use. Naturally, a demanding specification was constructed – clearly, the scope of microcomputing was increasing rapidly, and there was a lot to demonstrate – and various manufacturers were invited to deliver products that could be used as this reference platform. History indicates that a certain amount of acrimony followed – a complete description of which could fill an entire article of its own – but ultimately only Acorn Computers managed to deliver a machine that could do what the corporation was asking for.

An Ambitious Specification

It is worth considering what the BBC Micro was offering in 1981, especially in the light of ill-informed criticism of the machine’s specifications by people who either prefer other systems or who felt that participating in the development of such a machine was none of the corporation’s business. The technologies to be showcased by the BBC’s programme-makers and supported by the materials and software developed for the machine included full-colour graphics, multi-channel sound, 80-column text, Viewdata/Teletext, cassette and diskette storage, local area networking, interfacing to printers, joysticks and other input/output devices, as well as to things like robots and user-developed devices. Although it is easy to pick out one or two of these requirements, move forward a year or two, increase the budget two- or three-fold, or any combination of these things, and then nominate various other computers, there really were few existing systems that could deliver all of the above, at least at an affordable price at the time.

Some microcomputers of the early 1980s

Computer | RAM | Text | Graphics | Year | Price
Apple II Plus | Up to 64K | 40 x 25 (upper case only) | 280 x 192 (6 colours), 40 x 48 (16 colours) | 1979 | £1500 or more
Commodore PET 4032/8032 | 32K | 40/80 x 25 | Graphics characters (2 colours) | 1980 | £800 (4032), £1030 (8032) (including monochrome monitor)
Commodore VIC-20 | 5K | 22 x 23 | 176 x 184 (8 colours) | 1980 (1981 outside Japan) | £199
IBM PC (Model 5150) | 16K up to 256K | 40/80 x 25 | 640 x 200 (2 colours), 320 x 200 (4 colours) | 1981 | £1736 (including monochrome monitor, presumably with 16K or 64K)
BBC Micro (Model B) | 32K | 80/40/20 x 32/24, Teletext | 640 x 256 (2 colours), 320 x 256 (2/4 colours), 160 x 256 (4/8 colours) | 1981 | £399 (originally £335)
Research Machines LINK 480Z | 64K (expandable to 256K) | 40 x 24 (optional 80 x 24) | 160 x 72, 80 x 72 (2 colours); expandable to 640 x 192 (2 colours), 320 x 192 (4 colours), 190 x 96 (8 colours or 16 shades) | 1981 | £818
ZX Spectrum | 16K or 48K | 32 x 24 | 256 x 192 (16 colours applied using attributes) | 1982 | £125 (16K), £175 (48K)
Commodore 64 | 64K | 40 x 25 | 320 x 200 (16 colours applied using attributes) | 1982 | £399

Perhaps the closest competitor, already being used in a fairly limited fashion in educational establishments in the UK, was the Commodore PET. However, it is clear that despite the adaptability of that system, its display capabilities were becoming increasingly uncompetitive, and Commodore had chosen to focus on the chipsets that would power the VIC-20 and Commodore 64 instead. (The designer of the PET went on to make the very capable, and understandably more expensive, Victor 9000/Sirius 1.) Apple products were notoriously expensive and, indeed, the target of Commodore’s aggressive advertising, yet this did not seem to prevent them from capturing the US education market from the PET; in the UK, however, they remained severely uncompetitive, as commentators of the time noted.

Later, the ZX Spectrum and Commodore 64 were released. Technology was progressing rapidly, and in hindsight one might have advocated waiting until more capable and cheaper products came to market. However, it can be argued that a suitable alternative fulfilling virtually all aspects of the ambitious specification at a comparable price would not have become available until the release of the Amstrad CPC series in 1984. Even then, these Amstrad computers actually benefited from the experience accumulated in the UK computing industry following the introduction of the BBC Micro: they were, if anything, an iteration within the same generation of microcomputers and would even have used the same 6502 CPU as the BBC Micro had it not been for time-to-market pressures and the readily-available expertise with the Zilog Z80 CPU amongst those in the development team. And yet specific aspects of the specification would still have gone unfulfilled: the BBC Micro had hardware support for Teletext displays, something the CPC machines lacked, although it would have been possible to emulate such displays with a bitmapped screen mode and suitable software.

Arise Sir Clive

Much has been made of the disappointment of Sir Clive Sinclair that his computers were not adopted by the BBC as products to be endorsed and targeted at schools. Sinclair made his name developing products that were competitive on price, often seeking cost-reduction measures to reach attractive pricing levels, but such measures also served to make his products less desirable. If one reads reviews of microcomputers from the early 1980s, many reviewers explicitly mention the quality of the keyboard provided by the computers being reviewed: a “typewriter” keyboard with keys that “travel” appears to be much preferred over the “calculator” keyboards provided by computers like the ZX Spectrum, Oric 1 or Newbury NewBrain, and vastly preferred over the “membrane” keyboards employed by the ZX80, ZX81 and Atari 400.

For target audiences in education, business and the home, it would have been inconceivable to promote a product with anything less than a “proper” keyboard. Ultimately, the world had to wait until the release of the ZX Spectrum +2 in 1986 for a Spectrum with such a keyboard, and that occurred only after the product line had been acquired by Amstrad. (One might also consider the ZX Spectrum+ of 1984, but its keyboard was more of a hybrid of the calculator keyboard that had been used before and the “full-travel” keyboards provided by its competitors.)

Some people claim that they owe nothing to the BBC Micro and everything to the ZX Spectrum (or, indeed, the computer they happened to own) for their careers in computing. Certainly, the BBC Micro was an expensive purchase for many people, although contrary to popular assertion it was not any more expensive than the Commodore 64 upon that computer’s introduction in the UK, and for those of us who wanted BBC compatibility at home on a more reasonable budget, the Acorn Electron was really the only other choice. But it would be as childish as the playground tribalism that had everyone insist that their computer was “the best” to insist that the BBC Micro had no influence on computer literacy in general, or on the expectations of what computer systems should provide. Many people who owned a ZX Spectrum freely admit that the BBC Micro coloured their experiences, some even subsequently buying one, or one of its successors, and going on to build successful software development careers.

The Costly IBM PC

Some commentators seem to consider the BBC Micro as having been an unnecessary diversion from the widespread adoption of the IBM PC throughout British society. As was the case everywhere else, the de-facto “industry standard” of the PC architecture and DOS captured much of the business market and gradually invaded the education sector from the top down, although significantly better products existed both before and after its introduction. It is tempting with hindsight to believe that by placing an early bet on the success of the then-new PC architecture, business and education could have somehow benefited from standardising on the PC and DOS applications. And there has always been the persistent misguided belief amongst some people that schools should be training their pupils/students for a narrow version of “the world of work”, as opposed to educating them to be able to deal with all aspects of their lives once their school days are over.

What many people forget or fail to realise is that the early 1980s witnessed rapid technological improvement in microcomputing, that there were many different systems and platforms, some already successful and established (such as CP/M), and others arriving to disrupt ideas of what computing should be like (the Xerox Alto and Star having paved the way for the Apple Lisa and Macintosh, the Atari ST, and so on). It was not clear that the IBM PC would be successful at all: IBM had largely avoided embracing personal computing, and although the system was favourably reviewed and seen as having the potential for success, thanks to IBM’s extensive sales organisation, other giants of the mainframe and minicomputing era such as DEC and HP were pursuing their own personal computing strategies. Moreover, existing personal computers were becoming entrenched in certain markets, and early adopters were building a familiarity with those existing machines that was reflected in publications and materials available at the time.

Despite the technical advantages of the IBM PC over much of the competition at the beginning of the 1980s, it was also substantially more expensive than the mass-market products arriving in significant numbers, aimed at homes, schools and small businesses. With many people remaining intrigued but unconvinced by the need for a personal computer, it would have been impossible for a school to justify spending almost £2000 (probably around £8000 today) on something without proven educational value. Software would also need to be purchased, and the procurement of expensive and potentially non-localised products would have created even more controversy.

Ultimately, the Computer Literacy Project stimulated the production of a wide range of locally-produced products at relatively inexpensive prices, and while there may have been a few years of children learning BBC BASIC instead of one of the variants of BASIC for the IBM PC (before BASIC became a deprecated aspect of DOS-based computing), it is hard to argue that those children missed out on any valuable experience using DOS commands or specific DOS-based products, especially since DOS became a largely forgotten environment itself as technological progress introduced new paradigms and products, making “hard-wired”, product-specific experience obsolete.

The Good and the Bad

Not everything about the BBC Micro and its introduction can be considered unconditionally good. Choices needed to be made to deliver a product that could fulfil the desired specification within certain technological constraints. Some people like to criticise BBC BASIC as being “non-standard”, for example, which neglects the diversity of BASIC dialects that existed at the dawn of the 1980s. Typically, for such people “standard” equates to “Microsoft”, but back then Microsoft BASIC was a number of different things. Commodore famously paid a one-off licence fee to use Microsoft BASIC in its products, but the version for the Commodore 64 was regarded as lacking user-friendly support for graphics primitives and other interesting hardware features. Meanwhile, the MSX range of microcomputers featured Microsoft Extended BASIC, which did provide convenient access to hardware features, although the range was not the success at the low end of the market that Microsoft had probably desired to complement its increasing influence at the higher end through the IBM PC. And it is informative in this regard to see just how many variants of Microsoft BASIC were produced, thanks to Microsoft’s widespread licensing of its software.

Nevertheless, the availability of one company’s products does not make a standard, particularly if interoperability between those products is limited. Neither BBC BASIC nor Microsoft BASIC can be regarded as anything other than de-facto standards in their own territories, and it is nonsensical to regard one as non-standard when the other has largely the same characteristics as a proprietary product in widespread use, even if it was licensed to others, as indeed both Microsoft BASIC and BBC BASIC were. Genuine attempts to standardise BASIC did indeed exist, notably BASICODE, which was used in the distribution of programs via public radio broadcasts. One suspects that people making casual remarks about standard and non-standard things remain unaware of such initiatives. Meanwhile, Acorn did deliver implementations of other standards-driven programming languages such as COMAL, Pascal, Logo, Lisp and Forth, largely adhering to the relevant standards subject to the limitations of the hardware.

However, what undermined the BBC Micro and Acorn’s related initiatives over time was the control that they as a single vendor had over the platform and its technologies. At the time, a “winner takes all” mentality prevailed: Commodore under Jack Tramiel had declared a “price war” on other vendors and had caused difficulties for new and established manufacturers alike, with Atari eventually being sold to Tramiel (who had resigned from Commodore) by Warner Communications, but many companies disappeared or were absorbed by others before half of the decade had passed. Indeed, Acorn, who had released the Electron to compete with Sinclair Research at the lower end of the market, and who had been developing product lines to compete in the business sector, experienced financial difficulties and was ultimately taken over by Olivetti; Sinclair, meanwhile, experienced similar difficulties and was acquired by Amstrad. In such a climate, ideas of collaboration seemed far from everybody’s minds.

Since then, the protagonists of the era have been able to reflect on such matters, Acorn co-founder Hermann Hauser admitting that it may have been better to license Acorn’s Econet local area networking technology to interested competitors like Commodore. Although the sentiments might have something to do with revenues and influence – it was at Acorn that the ARM processor was developed, sowing the seeds of a successful licensing business today – the rest of us may well ask what might have happened had the market’s participants of the era cooperated on things like standards and interoperability, helping their customers to protect their investments in technology, and building a bigger “common” market for third-party products. What if they had competed on bringing technological improvements to market without demanding that people abandon their existing purchases (and cause confusion amongst their users) just because those people happened to already be using products from a different vendor? It is interesting to see the range of available BBC BASIC implementations and to consider a world where such building blocks could have been adopted by different manufacturers, providing a common, heterogeneous platform built on cooperation and standards, not the imposition of a single hardware or software platform.

But That Was Then

Back then, as Richard Stallman points out, proprietary software was the norm. It would have been even more interesting had the operating systems and the available software for microcomputers been Free Software, but that may have been asking too much at the time. And although computer designs were often shared and published, a tendency to prevent copying of commercial computer designs prevailed, with Acorn and Sinclair both employing proprietary integrated circuits mostly to reduce complexity and increase performance, but partly to obfuscate their hardware designs, too. Thus, it may have been too much to expect something like the BBC Micro to have been open hardware to any degree “back in the day”, although circuit diagrams were published in publicly-available technical documentation.

But we have different expectations now. We expect software to be freely available for inspection, modification and redistribution, knowing that this empowers the end-users and reassures them that the software does what they want it to do, and that they remain in control of their computing environment. Increasingly, we also expect hardware to exhibit the same characteristics, perhaps only accepting that some components are particularly difficult to manufacture and that there are physical and economic restrictions on how readily we may practise the modification and redistribution of a particular device. Crucially, we demand control over the software and hardware we use, and we reject attempts to prevent us from exercising that control.

The big lesson to be learned from the early 1980s, to be considered now in the mid-2010s, is not how to avoid upsetting a popular (but ultimately doomed) participant in the computing industry, as some commentators might have everybody believe. It is to avoid developing proprietary solutions that favour specific organisations and that, despite the general benefits of increased access to technology, ultimately disempower the end-user. And in this era of readily available Free Software and open hardware platforms, the lesson to be learned is to strengthen such existing platforms and to work with them, letting those products and solutions participate and interoperate with the newly-introduced initiative in question.

The BBC Micro was a product of its time and its development was very necessary to fill an educational need. Contrary to the laziest of reports, the Micro Bit plays a different role as an accessory rather than as a complete home computer, at least if we may interpret the apparent intentions of its creators. But as a product of this era, our expectations for the Micro Bit are greater: we expect transparency and interoperability, the ability to make our own (just as one can with the Arduino, as long as one does not seek to call it an Arduino without asking permission from the trademark owner), and the ability to control exactly how it works. Whether there is a need to develop a completely new hardware solution remains an unanswered question, but we may assert that should it be necessary, such a solution should be made available as open hardware to the maximum possible extent. And of course, the software controlling it should be Free Software.

As we edge gradually closer to September and the big deployment, it will be interesting to assess how the device and the associated initiative measure up to our expectations. Let us hope that the right lessons from the days of the BBC Micro have indeed been learned!

When the Truth Comes Out

Monday, April 28th, 2014

A few months ago I wrote about the decision taken by the management of the University of Oslo to introduce Microsoft Exchange as its groupware solution. In that article I referred to comments concerning Roundcube and why that solution supposedly could not be used at UiO in the future. Now that the university’s IT department has published a news item mentioning SquirrelMail, I have the opportunity to dig up and share some details from my conversations with the spokesperson for the project that evaluated Exchange and other solutions.

Looking around the Web, it does not seem unusual for organisations to have introduced Roundcube alongside SquirrelMail because of concerns about accessibility, or “universal design” (in Norwegian, “universell utforming”, UU). But in conversations with the project I was told that various shortcomings in accessibility support were a relevant reason why Roundcube could not become part of a future webmail solution at UiO. Now the truth comes out:

“When Roundcube was introduced as UiO’s primary webmail service, SquirrelMail was allowed to live on in parallel because Roundcube had some shortcomings related to universal design. By the time these had been remedied, the management had decided that e-mail and calendaring should be brought together in a new system.”

Two things catch our attention here:

  1. That Roundcube had shortcomings “when it was introduced”, whereas Roundcube had already been in use for some years before the dubious process to evaluate e-mail and calendar solutions was set in motion.
  2. That the improvements conveniently arrived too late to influence the management’s decision to introduce Exchange.

Last summer, without sharing the project group’s claims about shortcomings in Roundcube publicly, I asked around to find out whether there were known shortcomings and room for improvement in Roundcube in this area. Was accessibility really something that hindered the adoption of Roundcube? I familiarised myself with accessibility technologies and tried some of them with Roundcube to see whether the situation could be improved through my own efforts. It may be that there were serious shortcomings in Roundcube back in 2011 when the project group began its work – I choose not to make any such claim – but since no such shortcomings surfaced in the project’s final report in 2012 (where Roundcube was in fact recommended as part of the open candidate solution), we must conclude that any such concerns were long gone and that the university’s own webmail service, even though it is adapted to the organisation’s own visual profile (which may have something to say about maintenance), was and still is accessible to all users.

If we dare to believe that the excerpt above tells the truth, we must now conclude that the management’s decision was made long before the very process that was supposed to underpin that decision had been concluded. And here we must see the words “the management had decided” in a different light from what is otherwise usual – where the management first draws on the expertise within the organisation and then makes an informed decision – and instead assume that, as previously written, someone “had to have” something they liked, made the decision they were going to make anyway, and then got others to come up with excuses and a justification that appears reasonable enough to outsiders.

It is one thing that an adequate and functioning IT solution that also happens to be Free Software gets painted black while seemingly unpopular proprietary solutions such as Fronter fall short, so that problems reported in 2004 were being discussed again in 2012 without the vendor doing much more than promising to “work on accessibility” going forward. It is another thing that sustainable investment in Free Software seems so alien to decision-makers at UiO that people would rather turn the working day of ordinary employees upside down, as the replacement of the e-mail infrastructure has done for some, than investigate the possibility of undertaking relatively small development tasks to upgrade existing systems, thereby sparing people from having to “[work] day and night” during “the last few months” (and here, of course, we are not referring to people in the management).

That employees in the IT department were gagged about the process and had to demand that parts of the basis for the decision not reach the public. That rank-and-file IT staff have to run back and forth to make sure that nobody becomes too dissatisfied with the outcome of a decision that neither they nor other ordinary employees had any real influence over. That others have to adapt to the preferences of people who really should have had nothing to say about how others carry out their daily work. All of this says something about the management culture and the state of democracy within the University of Oslo.

But the truth comes out, slowly.

I Must Have… Therefore We Must Have…

Monday, February 17th, 2014

A couple of years ago, the University of Oslo started a process to evaluate groupware solutions (systems that receive, store and send the organisation’s e-mail messages while also allowing the storage and sharing of calendar information such as appointments and meetings). The process considered an unknown number of solutions, dropped all but four main candidates, and declined to evaluate one of those candidates because it was the calendar system already in use at the university. The outcome of the process was a summary of the three remaining solutions, with advantages and disadvantages described in a 23-page report.

The report’s conclusion was that one of the solutions did not live up to expectations around usability or openness, one had adequate usability and was based on Free Software and open standards, and one was considered the most user-friendly even though it was based on a proprietary product and proprietary technologies. It was written that the university could “potentially” operate both of the two latter solutions, but that the open one should be preferred unless particular strategic considerations steered the choice towards the proprietary one, and that should that be the case, the implementation would involve a considerable amount of work for the institution’s IT organisation.

One can certainly criticise the way the process itself was carried out, but having read the report, one would have thought that in a public organisation that often struggles to cover all its needs with sufficient funds, the solution chosen would be the one that continues the “tradition” of open solutions and does not burden the organisation with a “resource-demanding introduction” and other disadvantages. The project’s conclusion, however, was that the university should introduce Microsoft Exchange – the proprietary solution – and thereby replace substantial parts of the institution’s e-mail infrastructure, migrate stored messages to the proprietary Exchange solution, and move users to new software and systems.

One begins to get the feeling that somewhere in the management hierarchy someone said “I must have”, and since nobody has ever denied them anything they “had to have” earlier in life, they got what they “had to have” in this situation too. One also gets the feeling that the final draft of the report was worded so that the decision-makers could quickly brush aside the disadvantages and risks and thereby get the “green light” for a choice they wanted to make at the very beginning of the process: when it says in “black and white” that introducing the pre-selected solution is feasible, they feel they have everything they need to just go ahead, whatever it may cost.

For Novelty’s Sake

But back to the process. The first thing that surprised me about it was that implementing entirely new systems was being considered at all. The process had its origins in an apparent need for an “integrated” e-mail and calendar solution: something many regard as fairly unimportant, but one cannot deny that some people would experience such an integration of services as useful. However, the task of integrating services need not involve the introduction of a single large system that covers every functional area and replaces all existing systems that had anything to do with the services being integrated: such a simplified notion of how technological systems work would lead one to insist that the entire Internet be run from a single mainframe or on a single technological platform; anyone who has looked into how the Internet works knows that this is not how it is (even if some organisations would prefer that everyone could be monitored in one place).

It may be that the decision-makers actually believe in inadequate and oversimplified models of how technology is used, or that they simply choose to brush reality aside with a “get it done” mentality that is easily coupled with the delusion that one product and one vendor is, as a rule, “the solution”. But the infrastructure in most organisations will always consist of different systems, often of different origins, which frequently work together to deliver what appears to outsiders to be a single solution or service. This has in fact been precisely the situation within the university’s e-mail infrastructure until now. If the current infrastructure lacks a connection between e-mail messages and calendar data, why can one not add a new component or service to provide the missing integration that people want?

It then becomes interesting to look more closely at solutions that resemble the existing infrastructure the university uses, and at products and projects that deliver the missing part of that infrastructure. If similar solutions and corresponding infrastructure descriptions exist, especially if they are available as Free Software just like the software the university already uses, and if the distance between what is run today and a future “complete” solution consists only of a few additional components and a little work, would it not be interesting to look a little more closely at such things first?

Groupware: A Perspective

I have been interested in groupware and calendar solutions for quite a long time. Some years ago I developed a browser-based application for handling personal communications and appointments, and it used existing technologies and standards to exchange, display and edit the information that was stored and processed. Even though I eventually became less convinced by the particular way I had chosen to implement the application, I had nevertheless gained an insight into the technologies in use and the standards that exist for exchanging groupware data. After all, regardless of how messages and calendar information are stored and handled, one still has to relate to other programs and systems. And after I became more involved with wiki systems and had the opportunity to implement a calendar feature for one of the most widely used wiki solutions, I became aware once again of the directions standardisation was taking, the practicalities of implementing groupware systems, and not least what kinds of existing solutions were out there.
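
The standards alluded to above are worth making concrete. The best known of them is the iCalendar format (RFC 5545), which most calendar clients and groupware servers can import and export. As a minimal illustrative sketch – not a description of the application mentioned above, whose details are not given here, and with all names and values invented – the following Python fragment assembles a single event in that format by hand:

```python
# Minimal sketch: construct an iCalendar (RFC 5545) event by hand.
# All identifiers and values here are illustrative, not taken from any real system.
from datetime import datetime, timedelta, timezone

def make_event(uid, summary, start, duration):
    """Return a single-event iCalendar document as a string."""
    fmt = "%Y%m%dT%H%M%SZ"  # UTC timestamps in the basic ISO 8601 form used by iCalendar
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//groupware-sketch//EN",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTAMP:{datetime.now(timezone.utc).strftime(fmt)}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{(start + duration).strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines) + "\r\n"

print(make_event("meeting-1@example.org", "Project meeting",
                 datetime(2014, 3, 3, 13, 0, tzinfo=timezone.utc),
                 timedelta(hours=1)))
```

Because clients such as Thunderbird, Evolution and Kontact all understand this format, a component added to an existing mail infrastructure to handle calendar data does not dictate anyone’s choice of client.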

A groupware system I had heard about many years earlier was Kolab, which consists of a number of well-established components and programs (which are also Free Software), since the product and the project behind it were founded to strengthen the server-side offering so that client programs such as Evolution and KMail could communicate with complete groupware services that would also be Free Software. Such a need was identified when people and organisations associated with Kolab delivered an e-mail solution built on KMail (with encryption as an important element) to a part of the German state. Why use Free Software only on the client side and thereby have to put up with shifting conditions and varying support for open standards and interoperability on the server side?

Almost There

Looking at Kolab, it is striking how much the solution has in common with the university’s current infrastructure: both use LDAP as the basis for authentication, both use common antispam and antivirus technologies, and Roundcube occupies a fairly central position as the webmail solution. Even though some functions are provided by different pieces of software, comparing Kolab with UiO’s infrastructure one can still argue that the distance between the components used on each side is not so great that one could not either swap out a component in favour of the one Kolab prefers, or adapt Kolab so that it uses the existing component instead of its own preferred one. Either way of handling such differences could draw on the documentation and expertise found in a multitude of places on the Web and in other forms such as books and – not surprisingly – the organisation’s own specialists.
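
To give a feel for what shared building blocks mean in practice, here is a minimal sketch using only Python’s standard library: any added component – a calendar connector, a webmail front end, a migration script – would talk to the message store over plain IMAP in much the same way, whether that store sits behind Kolab or behind the university’s existing setup. The host name, account and mailbox below are invented for illustration.

```python
# Minimal sketch: talking to an existing IMAP message store.
# Host, credentials and mailbox names below are purely illustrative.
import imaplib

HOST = "imap.example.org"      # hypothetical server name
USER = "someuser"              # hypothetical account
PASSWORD = "secret"            # in practice, obtained securely

def count_messages(mailbox="INBOX"):
    """Log in over IMAPS and report how many messages a mailbox holds."""
    with imaplib.IMAP4_SSL(HOST) as conn:
        conn.login(USER, PASSWORD)
        status, data = conn.select(mailbox, readonly=True)
        if status != "OK":
            raise RuntimeError(f"Could not select {mailbox}: {data}")
        return int(data[0])

if __name__ == "__main__":
    print("Messages in INBOX:", count_messages())
```

The same reasoning applies to the LDAP directory: whichever groupware components are chosen, they can authenticate against the directory the organisation already operates, which is precisely why a wholesale replacement of the infrastructure is not a technical necessity.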

One does not actually need to “switch” to Kolab at all: one could adopt the parts that cover the gaps in the current infrastructure, or one could adopt other components able to do the same jobs. The point is that alternatives exist that lie fairly close to the infrastructure in use today, and when the choice is between a grandiose reimplementation of that infrastructure and a smaller upgrade that will probably deliver roughly the same result, one ought to justify and document the decision not to look more closely at such nearby solutions, instead of just quietly dropping them from the evaluation and hoping that nobody would notice that they disappeared without being mentioned at all.

A document summarising the project’s work states precisely that SOGo – the preferred open solution – acts as a link between systems already in operation. One might well think that an evaluation of Kolab would have informed the evaluation of SOGo, and vice versa, so that the understanding of a possible upgrade solution could have become more thorough and better, perhaps even leading to a blend of the most interesting elements being adopted so that the organisation gets the best out of both (and even more) solutions.

Through the Eye of the Needle

As written above, I am interested in groupware and decided to find out why a solution such as Kolab (and similar solutions, of course) could be absent from the final report. Even though it would not be right to publish the conversations between the project’s spokesperson and me, I can nevertheless summarise what I learned from the exchange of messages: a simple overview of which solutions were considered (apart from those discussed in the report) probably does not exist, and the project participants could not immediately remember why Kolab was not carried through to the final report. Thus, when the report dismisses everything not described in its text, as if a thorough process had existed to filter out everything that did not meet the project’s needs, this can almost be regarded as pure bluff: there is no evidence that the evaluation of available solutions was in any way comprehensive or balanced.

One might well argue that it harms nobody to publish a report on the Web that picks out worthy systems for an organisation’s groupware needs – the conclusions will naturally depend quite a lot on the kind of organisation being discussed, and one can always read through the text and judge the authors’ competence – but it is also the case that others in the same situation will often use materials resembling what they themselves have to produce as a starting point for their own evaluation work. When well-known systems are left out, readers may conclude that such systems must have had fundamental shortcomings or been completely unsuitable: why else would they not have been included? It is great that organisations carry out such work in an open manner, but one should not underestimate the propaganda value of a report that drops some systems without justification, so that the systems remaining can be interpreted as the very best or the only relevant ones, and other systems as simply not measuring up and therefore having no place in such evaluations at other institutions.

After a little back and forth I was finally told that Kolab had not shown sufficient signs of life in the developer and user community around the project, and that this was the reason the software was not considered further; the examples given concerned the project’s documentation, which was not well enough maintained. Even though the project has room for improvement in some areas – I have myself suggested that something be done about the project’s wiki service, in the period after I concluded my conversations with the university’s project participants and began looking into the state of Free Software and groupware – it must be said that the feedback seemed somewhat hasty and arbitrary: the Kolab project’s mailing lists show a relatively high level of activity, and there is no sign that anyone from the university contacted the project’s developer community to look more deeply into possible improvements to the documentation, future plans, or opportunities to reuse expertise and materials for components and systems that Kolab has in common with other solutions, including those used at the university.

Even stranger were the project group’s comments about Roundcube. The report described Roundcube as an adequate solution that is not only in use at the university to this day, but would also have been used as a replacement for the webmail component of SOGo. Suddenly, according to the project’s spokesperson, Roundcube was not good enough in one named area, yet the report devoted no space whatsoever to such alleged shortcomings: odd, when one considers that it would surely have been a very good opportunity to describe those shortcomings and thereby simplify the basis for the decision considerably. It may be that such things were discovered after the report was published, but one gets the impression – justified or not – that such things can also be invented after the fact to justify a decision that does not necessarily need to be anchored in the facts.

(I was asked not to share the project group’s views on Roundcube publicly, and even though I suggested that they raise the alleged shortcomings directly with the Roundcube project, I am not convinced that they had any intention whatsoever of doing so. How convenient that one can criticise something or someone without them having the opportunity to explain or defend themselves!)

Finally, the report spent a good deal of space describing proprietary Exchange technologies and how other systems could be made to use them, while open and free solutions had to be adapted to Outlook and that product’s dependence on proprietary communication in order to work with all features enabled. Relatively little space and priority were given to alternative clients. Despite concerns about Thunderbird – the preferred e-mail client until now – and how it is to be extended with calendar functionality, I have never seen Kontact or KMail mentioned even once, not even in the context of Linux, despite the fact that Kontact has been available for years in the university’s mandated Linux distribution – Red Hat Enterprise Linux – and works perfectly well with the e-mail systems that are now to be scrapped. It may be that people regard Kontact as old-fashioned – an attitude I encountered on IRC a few weeks ago without it being justified any further or with any more substance – but it is a mature e-mail client with calendar functionality that has worked for many people over a long period. Curious that nobody involved in the evaluation work will mention this software.

Just Have to Have It

It begins with an organisation that has well-functioning systems that could be built upon so that new needs are satisfied. However, a process was run that appears to have started from the assumption that nothing can be done with these well-functioning systems, and that they must be replaced, with a “resource-demanding introduction”, in favour of a proprietary system that will create a deeper dependence on a notorious vendor (and monopolist). And to strengthen this dubious assumption, well-known solutions were left out of the evaluations that were made, apparently so that a simpler and more “suitable” decision could be taken.

The impression left behind suggests that the process was not necessarily used to inform decisions, but that decisions informed or directed the process. Why this may have happened is perhaps a story for another time: a story about differing perspectives on ethics, democracy, and investment in knowledge and expertise. And as one might expect from what is written above, it is not necessarily a story that puts the organisation’s “conductors” in a particularly good light.

When the Focus Is on Brands and Not Standards

Friday, January 24th, 2014

It was interesting to read a letter in Uniforum entitled “Digitale samarbeidsverktøy i undervisning” (“Digital collaboration tools in teaching”): something that presses all the right buttons in today’s academic landscape, with its growing focus on online learning, broader access to courses, and many other things that are perhaps motivated more by “prestige” and “capturing customers” than by increasing general access to an institution’s knowledge and expertise. The letter’s authors describe widespread use of videoconferencing solutions, praise a proprietary solution that they apparently like quite a lot, and refer to the university’s recommendations for technical solutions. Digging a little to find those recommendations, one quickly sees that they revolve around three proprietary (or partly proprietary) systems: Adobe Connect, Skype, and the specialised videoconferencing equipment found in some meeting rooms.

In all likelihood this is a consequence of the consumer society we live in: people think in terms of products and brands rather than standards, and once they discover an exciting product they are keen to raise awareness of it among everyone with similar needs. But when one becomes product- and brand-focused, it is easy to lose sight of the actual problem and the actual solution. That so many people continue to insist on Ibux (a branded ibuprofen) when they could buy generic medicines for a fraction of the brand’s price is just one example of people no longer being used to assessing the real situation, preferring instead to reach for brands as a quick and easy solution they do not need to think too much about.

Perhaps one should not place too much blame on ordinary computer users when they make such mistakes, especially when large parts of the public sector in this country focus on proprietary products which, if they make use of genuine standards at all, tend to mix them with proprietary technologies in order to steer the customer towards a commitment to a handful of vendors for an almost unlimited time to come. But it is a little disappointing that “green” representatives do not consider sustainable technological solutions – those made available as Free Software and using documented, open standards – when one would expect such representatives to propose sustainable solutions in other areas.

The Organisational Panic Button and the Magic Single Vendor Delusion

Wednesday, November 27th, 2013

I have had reason to consider the way organisations make technology choices in recent months, particularly where the public sector is concerned, and although my conclusions may not come as a surprise to some people, I think they sum up fairly well how bad decisions get made even if the intentions behind them are supposedly good ones. Approaching such matters from a technological point of view, being informed about things like interoperability, systems diversity, the way people adopt and use technology, and the details of how various technologies work, it can be easy to forget that decisions around acquisitions and strategies are often taken by people who have no appreciation of such things and no time or inclination to consider them either: as far as decision makers are concerned, such things are mere details that obscure the dramatic solution that shows them off as dynamic leaders getting things done.

Assuming the Position

So, assume for a moment that you are a decision-maker with decisions to make about technology, that you have in your organisation some problems that may or may not have technology as their root cause, and that because you claim to listen to what people in your organisation have to say about their workplace, you feel that clear and decisive measures are required to solve some of those problems. First of all, it is important to make sure that when people complain about something, they are not mixing that thing up with something else that really makes their life awkward, but let us assume that you and your advisers are aware of that issue and are good at getting to the heart of the real problem, whatever that may be. Next, people may ask for all sorts of things that they want but do not actually need – “an iPad in every meeting room, elevator and lavatory cubicle!” – and even if you also like the sound of such wild ideas, you also need to be able to restrain yourself and to acknowledge that it would simply be imprudent to indulge every whim of the workforce (or your own). After all, neither they nor you are royalty!

With distractions out of the way, you can now focus on the real problems. But remember: as an executive with no time for detail, the nuances of a supposedly technological problem – things like why people really struggle with some task in their workplace and what technical issues might be contributing to this discomfort – these things are distractions, too. As someone who has to decide a lot of things, you want short and simple summaries and to give short and simple remedies, delegating to other people to flesh out the details and to make things happen. People might try and get you to understand the detail, but you can always delegate the task of entertaining such explanations and representations to other people, naturally telling them not to waste too much time on executing the plan.

Architectural ornamentation in central Oslo

On the Wrong Foot

So, let us just consider what we now know (or at least suspect) about the behaviour of someone in an executive position who has an organisation-wide problem to solve. They need to demonstrate leadership, vision and intent, certainly: it is worth remembering that such positions are inherently political, and if there is anything we should all know about politics by now, it is that it is often far more attractive to make one’s mark, define one’s legacy, fulfil one’s vision, reserve one’s place in the history books than it is to just keep things running efficiently and smoothly and to keep people generally satisfied with their lot in life; this principle alone explains why the city of Oslo is so infatuated with prestige projects and wants to host the Winter Olympics in a few years’ time (presumably things like functioning public transport, education, healthcare, even an electoral process that does not almost deliberately disenfranchise entire groups of voters, will all be faultless by then). It is far more exciting being a politician if you can continually announce exciting things, leaving the non-visionary stuff to your staff.

Executives also like to keep things as uncluttered as possible, even if the very nature of a problem is complicated, and at their level in the organisation they want the explanations and the directives to be as simple as possible. Again, this probably explains the “rip it up and start over” mentality that one sees in government, especially after changes in government even if consecutive governments have ideological similarities: it is far better to be seen to be different and bold than to be associated with your discredited predecessors.

But what do these traits lead to? Well, let us return to an organisational problem with complicated technical underpinnings. Naturally, decision-makers at the highest levels will not want to be bored with the complications – at the classic “10000 foot” view, nothing should be allowed to encroach on the elegant clarity of the decision – and even the consideration of those complications may be discouraged amongst those tasked to implement the solution. Such complications may be regarded as a legacy of an untidy and unruly past that was not properly governed or supervised (and are thus mere symptoms of an underlying malaise that must be dealt with), and the need to consider them may draw time and resources away from an “urgently needed” solution that deals with the issue no matter what it takes.

How many times have we been told “not to spend too much time” on something? And yet, that thing may need to be treated thoroughly so that it does not recur over and over again. And as too many people have come to realise or experience, blame very often travels through delegation: people given a task to see through are often deprived of resources to do it adequately, but this will not shield them from recriminations and reprisals afterwards.

It should not demand too much imagination to realise that certain important things will be sacrificed or ignored within such a decision-making framework. Executives will seek simplistic solutions that almost favour an ignorance of the actual problem at hand. Meanwhile, the minions or underlings doing the work may seek to stay as close as possible to the exact word of the directive handed down to them from on high, abandoning any objective assessment of the problem domain, so as to be able to say if or when things go wrong that they were only following the instructions given to them, and that as everything falls to pieces it was the very nature of the vision that led to its demise rather than the work they did, or that they took the initiative to do something “unsanctioned” themselves.

The Magic Single Vendor Temptation

We can already see that an appreciation of the finer points of a problem will be an early casualty in the flawed framework described above, but when pressure also exists to “just do something” and when possible tendencies to “make one’s mark” lie just below the surface, decision-makers also do things like ignore the best advice available to them, choosing instead to just go over the heads of the people they employ to have opinions about matters of technology. Such antics are not uncommon: there must be thousands or even millions of people with the experience of seeing consultants breeze into their workplace and impart opinions about the work being done that are supposedly more accurate, insightful and valuable than the actual experiences of the people paid to do that very work. But sometimes hubris can get the better of the decision-maker to the extent that they regard their own experiences as somehow more valid than those of the supposed experts on the payroll who cannot seem to make up their minds about something as mundane as which technology to use.

And so, the executive may be tempted to take a page from their own playbook: maybe they used a product in one of their previous organisations that had something to do with the problem area; maybe they know someone in their peer group who has an opinion on the topic; maybe they can also show that they “know about these things” by choosing such a product. And with so many areas of life now effectively remedied by going and buying a product that instantly eradicates any deficiency, need, shortcoming or desire, why would this not work for some organisational problem? “What do you mean ‘network provisioning problems’? I can get the Internet on my phone! Just tell everybody to do that!”

When the tendency to avoid complexity meets the apparent simplicity of consumerism (and of solutions encountered in their final form in the executive’s previous endeavours), the temptation to solve a problem at a single stroke or a single click of the “buy” button becomes great indeed. So what if everyone affected by the decision has different needs? The product will surely meet all those needs: the vendor will make sure of that. And if the vendor cannot deliver, then perhaps those people should reconsider their needs. “I’ve seen this product work perfectly elsewhere. Why do you people have to be so awkward?” After all, the vendor can work magic: the salespeople practically told us so!

Nothing wrong here: a public transport "real time" system failure; all the trains are arriving "now"

The Threat to Diversity

In those courses in my computer science degree that dealt with the implementation of solutions at the organisational level, as opposed to the actual implementation of software, attempts were made to impress upon us students the need to consider the requirements of any given problem domain because any solution that neglects the realities of the problem domain will struggle with acceptance and flirt with failure. Thus, the impatient executive approach involving the single vendor and their magic product that “does it all” and “solves the problem” flirts openly and readily with failure.

Technological diversity within an organisation frequently exists for good reason, not to irritate decision-makers and their helpers, and the larger the organisation the larger the potential diversity to be found. Extrapolating from narrow experiences – insisting that a solution must be good enough for everyone because “it is good enough for my people” – risks neglecting the needs of large sections of an organisation and denying the benefits of diversity within the organisation. In turn, this risks the health of those parts of an organisation whose needs have now been ignored.

But diversity goes beyond what people happen to be using to do their job right now. By maintaining the basis for diversity within an organisation, it remains possible to retain the freedom for people to choose the most appropriate systems and platforms for their work. Conversely, undermining diversity by imposing a single vendor solution on everyone, especially when such solutions also neglect open standards and interoperability, threatens the ability for people to make choices central to their own work, and thus threatens the vitality of that work itself.

Stories abound of people in technical disciplines who “also had to have a Windows computer” to do administrative chores like filling out their expenses, hours and travel claims, along with all the other peripheral tasks in a workplace, even though they used a functioning workstation or other computer that would have been adequate to perform the same chores within a framework that might actually have upheld interoperability and choice. Who pays for all these extra computers, and who benefits from such redundancy? And when some bright spark in the administration suggests throwing away the “special” workstation, putting administrative chores above the real work, what damage does this do to the working environment, to productivity, and to the capabilities of the organisation?

Moreover, the threat to diversity is more serious than many people presumably understand. Any single vendor solution imposed across an organisation also threatens the independence of the institution when that solution informs and dictates the terms under which other solutions are acquired and introduced. Any decision-maker who regards their “one product for everybody” solution as adequate in one area may find themselves supporting a “one vendor for everything” policy that infects every aspect of the organisation’s existence, especially if they are deluded enough to think that they are getting a “good deal” by buying all their things from that one vendor and thus unquestioningly going along with it all for “economic reasons”. At that point, one has to wonder whether the organisation itself is in control of its own acquisitions, systems or strategies any longer.

Somebody Else’s Problem

People may find it hard to get worked up about the tools and systems their employer uses. Surely, they think, what people have chosen to run a part of the organisation is a matter only for those who work with that specific thing from one day to the next. When other people complain about such matters, it is easy to marginalise them and to accuse them of making trouble for the sake of doing so. But such reactions are short-sighted: when other people’s tools are being torn out and replaced by something less than desirable, bystanders may not feel any urgency to react or even think about showing any sympathy at all, but when tendencies exist to tackle other parts of an organisation with simplistic rationalisation exercises, who knows whose tools might be the next ones to be tampered with?

And from what we know from unfriendly solutions that shun interoperability and that prefer other solutions from the same vendor (or that vendor’s special partners), when one person’s tool or system gets the single vendor treatment, it is not necessarily only that person who will be affected: suddenly, other people who need to exchange information with that person may find themselves having to “upgrade” to a different set of tools that are now required for them just to be able to continue that exchange. One person’s loss of control may mean that many people lose control of their working environment, too. The domino effect that follows may result in an organisation transformed for the worse based only on the uninformed gut instincts of someone with the power to demand that something be done the apparently easy way.

Inconvenience: a crane operating over one pavement while sitting on the other, with a sign reading "please use the pavement on the other side"

Getting the Message Across

For those of us who want to see Free Software and open standards in organisations, the dangers of the top-down single vendor strategy are obvious, but other people may find it difficult to relate to the issues. There are, however, analogies that can be illustrative, and as I perused a publication related to my former employer I came across an interesting complaint that happens to nicely complement an analogy I had been considering for a while. The complaint in question is about some supplier management software that insists that bank account numbers can only have 18 digits at most, but this fails to consider the situation where payments to Russian and Chinese accounts might need account numbers with more than 18 digits, and the complainant vents his frustration at “the new super-elite of decision makers” who have decided that they know better than the people actually doing the work.

If that “super-elite” were to call all the shots, their solution would surely involve making everyone get an account with an account number that could only ever have 18 digits. “Not supported by your bank? Change bank! Not supported in your country? Change your banking country!” They might not stop there, either: why not just insist on everyone having an account at just one organisation-mandated bank? “Who cares if you don’t want a customer relationship with another bank? You want to get paid, don’t you?”
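To make the problem concrete, here is a minimal Python sketch – not taken from the complaint itself, and using an invented account number – of how a hard-coded 18-digit rule rejects identifiers that are perfectly valid elsewhere, whereas allowing for something like the 34-character maximum of the IBAN standard would accommodate them:

  # A naive check mimicking the flawed rule: digits only, at most 18 of them.
  MAX_DIGITS_HARDCODED = 18
  # ISO 13616 allows IBANs of up to 34 alphanumeric characters.
  MAX_LENGTH_IBAN = 34

  def accepts_account_naive(account: str) -> bool:
      return account.isdigit() and len(account) <= MAX_DIGITS_HARDCODED

  def accepts_account_generous(account: str) -> bool:
      return account.isalnum() and len(account) <= MAX_LENGTH_IBAN

  # A 20-digit number of the kind used for Russian domestic accounts
  # (the digits themselves are made up for illustration).
  account = "40702810900000005555"

  print(accepts_account_naive(account))     # False: a valid account is rejected
  print(accepts_account_generous(account))  # True: the same account is accepted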

At one former employer of mine, setting up a special account at a particular bank was actually how things were done, but peculiarities related to the nature of certain kinds of institutions aside, making everyone needlessly conform through some dubiously justified, executive-imposed initiative – whether it be requiring them to have an account with the organisation’s bank, or requiring them to use only certain vendor-sanctioned software (and, as a consequence, to buy certain vendor-sanctioned products so that they have a chance of using that software at work or of interacting with their workplace from home) – is an imposition too far. Rationalisation is a powerful argument for shaking things up, but it is often wielded by those who do not care how much inconvenience it transfers from the organisation to the individual and to other parties.

Bearing the Costs

We have seen how the organisational cost of short-sighted, buy-and-forget decision-making can end up being borne by those whose interests have been ignored or marginalised “for the good of the organisation”, and we can see how this can very easily impose costs across the whole organisation, too. But another aspect of this way of deciding things can also be costly: in the hurry to demonstrate the banishment of an organisational problem with a flourish, incremental solutions that might have dealt with the problem more effectively can become as marginalised as the influence of the people tasked with the job of seeing any eventual solution through. When people are loudly demanding improvements and solutions, an equally dramatic response probably does not involve reviewing the existing infrastructure, identifying areas that can provide significant improvement without significant inconvenience or significant additional costs, and committing to improve the existing solutions quietly and effectively.

Thus, when faced with disillusionment – that people may have decided for themselves that whatever it was that they did not like is now beyond redemption – decision-makers are apt to pander to such disillusionment by replacing any existing thing with something completely new. Especially if it reinforces their own blinkered view of an organisational problem or “confirms” what they “already know”, decision-makers may gladly embrace such dramatic acts as a demonstration of the resolve expected of a decisive leader as they stand to look good by visibly banishing the source of disillusionment. But when such pandering neglects relatively inexpensive, incremental improvements and instead incurs significant costs and disruptions for the organisation, one can justifiably question the motivations behind such dramatic acts and the level of competence brought to bear on resolving the original source of discomfort.

The electrical waste collection

Mission Accomplished?

Thinking that putting down money with a single vendor will solve everybody’s problems, purging diversity from an organisation and stipulating the uniformity encouraged by that vendor, is an overly simplistic and even deluded approach to organisational change. Change in any organisation can be very expensive and must therefore be managed carefully. Change for the sake of change is therefore incredibly irresponsible. And change imposed to gratify the perception of change or progress, made on a superficial basis and incurring unnecessary and avoidable burdens within an organisation whilst risking that organisation’s independence and viability, is nothing other than indefensible.

Be wary of the “single vendor fixes it all” delusion, especially if all the signs point to a decision made at the highest levels of your organisation: it is the sign of the organisational panic button being pressed while someone declares “Mission Accomplished!” Because at the same time they will be thinking “We will have progress whatever the cost!” And you, not them, will be the one bearing the cost.

Students: Beware of the Academic Cloud!

Sunday, July 21st, 2013

Things were certainly different when I started my university degree many years ago. For a start, institutions of higher education provided for many people their first encounter with the Internet, although for those of us in the United Kingdom, the very first encounter with wide area networking may well have involved X.25 and PAD terminals, and instead of getting a “proper” Internet e-mail address, it may have been the case that an address only worked within a particular institution. (My fellow students quickly became aware of an Internet mail gateway in our department and the possibility, at least, of sending Internet mail, however.)

These days, students beginning their university studies have probably already been using the Internet for most of their lives, will have had at least one e-mail address as well as accounts for other online services, may be publishing blog entries and Web pages, and maybe even have their own Web applications accessible on the Internet. For them, arriving at a university is not about learning about new kinds of services and new ways of communicating and collaborating: it is about incorporating yet more ways of working online into their existing habits and practices.

So what should students expect from their university in terms of services? Well, if things have not changed that much over the years, they probably need a means of communicating with the administration, their lecturers and their fellow students, along with some kind of environment to help them do their work and provide things like file storage space and the tools they won’t necessarily be able to provide themselves. Of course, students are more likely to have their own laptop computer (or even a tablet) these days, and it is entirely possible that they could use that for all their work, subject to the availability of tools for their particular course, and since they will already be communicating with others on the Internet, communicating with people in the university is not really anything different from what they are already doing. But still, there are good reasons for providing infrastructure for students to use, even if those students do end up working from their laptops, uploading assignments when they are done, and forwarding their mail to their personal accounts.

The Student Starter Kit

First and foremost, a university e-mail account is something that can act as an official communications channel. One could certainly get away with using some other account, perhaps provided by a free online service like Google or Yahoo, but if something went wrong somewhere – the account gets taken over by an imposter and then gets shut down, for example – that channel of communication gets closed and important information may be lost.

The matter of how students carry out their work is also important. In computer science, where my experiences come from and where computer usage is central to the course, it is necessary to have access to suitable tools to undertake assignments. As everyone who has used technology knows, the matter of “setting stuff up” is one that often demands plenty of time and distracts from the task at hand, and when running a course that requires the participants to install programs before they can make use of the learning materials, considerable amounts of time are wasted on installation and troubleshooting. Thus, providing a ready-to-use environment allows students to concentrate on their work and to more easily relate to the learning materials.

Then there is the matter of the nature of the teaching environment and the tools chosen for it. Teaching environments also allow students to become familiar with desirable practices when finding solutions to the problems in their assignments. In software engineering, for example, the use of version control software encourages a more controlled and rational way of refining a program under development. Although the process itself may not be recognised and rewarded when an assignment is assessed, it allows students to see how things should be done and to take full advantage of the opportunity to learn provided by the institution.
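As a small illustration of that version-controlled way of working – a sketch only, assuming the git command-line tool happens to be installed, and not a description of any particular course’s setup – each refinement of an assignment can be recorded as a separate, reviewable step:

  # Each refinement of an assignment becomes a separate commit that can be
  # reviewed or undone later. Assumes git is installed; a real teaching
  # environment would obviously be set up differently.

  import subprocess
  import tempfile
  from pathlib import Path

  workdir = Path(tempfile.mkdtemp())

  def git(*args):
      # Run a git command inside the assignment directory.
      subprocess.run(["git", *args], cwd=workdir, check=True)

  git("init")
  git("config", "user.name", "Student")
  git("config", "user.email", "student@example.org")

  program = workdir / "assignment.py"

  # First attempt at the assignment.
  program.write_text('print("first attempt")\n')
  git("add", "assignment.py")
  git("commit", "-m", "Initial solution")

  # A later refinement, recorded separately so the change remains visible.
  program.write_text('print("refined solution")\n')
  git("commit", "-a", "-m", "Refine the solution after testing")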

Naturally, it should be regarded as highly undesirable to train students to use specific solutions provided by selected vendors, as opposed to educating them to become familiar with the overarching concepts and techniques of a particular field. Schools and universities are not vocational training institutions, and they should seek to provide their students with transferable skills and knowledge that can be applied generally, instead of taking the easy way out and training those students to perform repetitive tasks in “popular” software that gives them no awareness of why they are doing those things or any indication that the rest of the world might be doing them in other ways.

Construction of the IT department's new building, University of Oslo

Minority Rule

So even if students arrive at their place of learning somewhat equipped to learn, communicate and do their work, there may still be a need for a structured environment to be provided for them. At that place of learning they will join those employed there who already have a structured environment in place to be able to do their work, whether that is research, teaching or administration. It makes a certain amount of sense for the students to join those other groups in the infrastructure already provided. Indeed, considering the numbers of people involved, with the students outnumbering all other groups put together by a substantial margin, one might think that the needs of the students would come first. Sadly, things do not always work that way.

First of all, students are only ever “passing through”. While some university employees may be retained for similar lengths of time – especially researchers and others on temporary contracts (a known problem with social and ethical dimensions of its own) – others may end up spending most of their working life there. As a result, the infrastructure is likely to favour such people over time as their demands are made known year after year, with any discomfort expected to be remedied so that those people have a comfortable environment in which to do their work. Not that there is anything wrong with providing employees with a decent working environment: employers should probably do even more to uphold their commitments in this regard.

But when the demands and priorities of a relatively small group of people take precedence over what the majority – the students – need, one can argue that such demands and priorities have begun to subvert the very nature of the institution. Imposing restrictions on students or withholding facilities from them just to make life easier for the institution itself is surely a classic example of “the tail wagging the dog”. After all, without students and teaching an institution of higher education can no longer be considered a university.

Outsourcing Responsibility

With students showing up every year and with an obligation to provide services to them, one might imagine that an institution might take the opportunity to innovate and to evaluate ways in which it might stand out amongst the competition, truly looking after the group of people that in today’s increasingly commercialised education sector are considered the institution’s “customers”. When I was studying for my degree, the university’s mathematics department was in the process of developing computer-aided learning software for mathematics, which was regarded as a useful way of helping students improve their understanding of the course material through the application of knowledge recently acquired. However, such attempts to improve teaching quality are only likely to get substantial funding either through teaching-related programmes or by claiming some research benefit in the field of teaching or in another field. Consequently, developing software to benefit teaching is likely to be an activity located near the back of the queue for attention in a university, especially amongst universities whose leadership regard research commercialisation as their top priority.

So it becomes tempting for universities to minimise costs around student provision. Students are not treated as sophisticated users whose demands must be met, mostly because they are not around for long enough to become comfortable or for those providing services to eventually have to give in to their demands. Moreover, university employees are protected by workplace regulation (in theory, at least) whereas students are most likely protected by much weaker regulation. To take one example, whereas a university employee could probably demand appropriate accessibility measures for a disability they may have, students may have to work harder to get their disabilities recognised and their resulting needs addressed.

The Costs of Doing Business

So, with universities looking to minimise costs and to maximise revenue-generating opportunities, doing things like running infrastructure in a way that looks after the needs of the student and researcher populations seems like a distraction. University executives look to their counterparts in industry and see that outsourcing might offer a solution: why bother having people on the payroll when there are cloud computing services run by friendly corporations?

Let us take the most widely-deployed service, e-mail, as an example. Certainly, many students and employees might not be too concerned with logging into a cloud-based service to access their university e-mail – many may already be using such services for personal e-mail, and many may already be forwarding their university e-mail to their personal account – and although they might be irritated by the need to use one service when they have perhaps selected another for their personal use, a quick login, some adjustments to the mail forwarding settings, and logging out (never to return) might be the simple solution. The result: the institution supposedly saves money by making external organisations responsible for essential services, and the users get on with using those services in the ways they find most bearable, even if they barely take advantage of the specially designated services at all.

However, a few things may complicate this simplified scenario somewhat: reliability, interoperability, lock-in, and privacy. Reliability is perhaps the easiest to consider: if “Office 365” suddenly becomes “Office 360” for a few days, cloud-based services cannot be considered suitable for essential services, and if the “remedy” is to purchase infrastructure to bail out the cloud service provider, one has to question the decision to choose such an external provider in the first place. As for interoperability, if a user prefers Gmail, say, and the chosen hosted e-mail solution does not exchange messages properly with Gmail, that user will be in the awkward position of having to choose between a compromised experience in their preferred solution and an experience they regard as inconvenient or inferior. With services more exotic and less standardised than e-mail, the risk is that a user’s preferred services or software will not work with the chosen cloud-based service at all. Either way, users are forced to adopt services they dislike or otherwise have objections to using.

Miscellaneous waste

Product Placement

With users firmly parked on a specific vendor’s cloud-based platform, the temptation will naturally grow amongst some members of an organisation to “take advantage” of the other services on that platform whether they support interoperability or not. Users will be forced to log into the solution they would rather avoid or ignore in order to participate in processes and activities initiated by those who have actively embraced that platform. This is rather similar to the experience of getting a Microsoft Office document in an e-mail by someone requesting that one reads it and provides feedback, even though recipients may not have access to a compatible version of Microsoft Office or even run the software in question at all. In an organisational context, legitimate complaints about poor workflow, inappropriate tool use, and even plain unavailability of software (imagine being a software developer using a GNU/Linux or Unix workstation!) are often “steamrollered” by management and the worker is told to “deal with it”. Suddenly, everyone has to adapt to the tool choices of a few managerial product evangelists instead of participating in a standards-based infrastructure where the most important thing is just getting the work done.

We should not be surprised that vendors might be very enthusiastic to see organisations adopt both their traditional products as well as cloud-based platforms. Not only are the users exposed to such vendors’ products, often to the exclusion of any competing or alternative products, but by having to sign up for those vendors’ services, organisations are effectively recruiting customers for the vendor. Indeed, given the potential commercial benefits of recruiting new customers – in the academic context, that would be a new group of students every year – it is conceivable that vendors might offer discounts on products, waive the purchase prices, or even pay organisations in the form of services rendered to get access to new customers and increased revenue. Down the line, this pays off for the vendor: its organisational customers are truly locked in, cannot easily switch to other solutions, and end up paying through the nose to help the vendor recruit new customers.

How Much Are You Worth?

All of the above concerns are valid, but the most important one of all for many people is that of privacy. Now, most people have a complicated relationship with privacy: most people probably believe that they deserve to have a form of privacy, but at the same time many people are quite happy to be indiscreet if they think no-one else is watching or no-one else cares about what they are doing.

So, they are quite happy to share information about themselves (or content that they have created or acquired themselves) with a helpful provider of services on the Internet. After all, if that provider offers services that permit convenient ways of doing things that might be awkward to do otherwise, and especially if no money is changing hands, surely the meagre “payment” of tedious documents, mundane exchanges of messages, unremarkable images and videos, and so on, all with no apparently significant value or benefit to the provider, gets the customer a product worth far more in return. Everybody wins!

Well, there is always the matter of the small print – the terms of use, frequently verbose and convoluted – together with how other people perceive the status of all that content you’ve been sharing. As your content reaches others, some might perceive it as fair game for use in places you never could have imagined. Naturally, unintended use of images is no new phenomenon: I once saw one of my photographs being used in a school project (how I knew about it was that the student concerned had credited me, although they really should have asked me first, and an Internet search brought up their page in the results), whereas another photograph of mine was featured in a museum exhibition (where I was asked for permission, although the photograph was a lot less remarkable than the one found by the student).

One might argue that public sharing of images and other content is not really the same as sharing stuff over a closed channel like a social network, and so the possibility of unintended or undesirable use is diminished. But take another look at the terms of use: unlike just uploading a file to a Web site that you control, where nobody presumes to claim any rights to what you are sharing, social networking and content sharing service providers frequently try and claim rights to your work.

Privacy on Parade

When everyone is seeking to use your content for their own goals, whether to promote their own businesses or to provide nice imagery to make their political advocacy more palatable, or indeed to support any number of potential and dubious endeavours that you may not agree with, it is understandable that you might want to be a bit more cautious about who wants a piece of your content and what they intend to do with it once they have it. Consequently, you might decide that you only want to deal with the companies and services you feel you can trust.

What does this have to do with students and the cloud? Well, unlike the services that a student may already be using when they arrive at university to start their studies, any services chosen by the institution will be imposed on the student, and they will be required to accept the terms of use of such services regardless of whether they agree with them or not. Now, although it might be said that the academic work of a student might be somewhat mundane and much the same as any other student’s work (even if this need not be the case), and that the nature of such work is firmly bound to the institution and it is therefore the institution’s place to decide how such work is used (even though this could be disputed), other aspects of that student’s activities and communications might be regarded as beyond the interests of the institution: who the student communicates with, what personal views they may express in such communications, what academic or professional views they may have.

One might claim that such trivia is of no interest to anyone, and certainly not to commercial entities who just want to sell advertising or gather demographic data or whatever supposedly harmless thing they might do with the mere usage of their services just to keep paying the bills and covering their overheads, but one should still be wary that information stored on some remote server in some distant country might somehow make its way to someone who might take a closer and not so benign interest in it. Indeed, the matter of the data residing in some environment beyond their control is enough for adopters of cloud computing to offer specially sanctioned exemptions and opt-outs. Maybe it is not so desirable that some foreign student writing about some controversial topic in their own country has their work floating around in the cloud, or as far as a university’s legal department is concerned, maybe it does not look so good if such information manages to wander into the wrong hands only for someone to ask the awkward question of why the information left the university’s own systems in the first place.

A leaflet for a tourist attraction in the Cambridge area

Excuses, Excuses

Cloud-based service providers are likely to respond to fears articulated about privacy violations and intrusions by insisting that such fears are disproportionate: that no-one is really looking at the data stored on their servers, that the data is encrypted somewhere/somehow, that if anything does look at the data it is merely an “algorithm” and not a person. Often these protests of innocence contradict each other, so that at any point in time there is at least one lie being told. But what if it is “only an algorithm” looking at your data? The algorithm will not be keeping its conclusions to itself.

How would you know what is really being done with your data? Not only is the code being run on a remote server, but with the most popular cloud services the important code is completely proprietary – service providers may claim to support Free Software and even contribute to it, but they do so only for part of their infrastructure – and you have no way of verifying any of their claims. Disturbingly, some companies want to enforce such practices within your home, too, so that when Microsoft claims that the camera on their most recent games console has to be active all the time but only for supposedly benign reasons and that the data is only studied by algorithms, the company will deny you the right to verify this for yourself. For all you know the image data could be uploaded somewhere, maybe only on command, and you would not only be none the wiser but you would also be denied the right to become wiser about the matter. And even if the images were not shared with mysterious servers, there are still unpleasant applications of “the algorithm”: it could, for example, count people’s faces and decide whether you were breaking the licensing conditions on a movie or other content by holding a “performance” that goes against the arbitrary licensing that accompanies a particular work.

Back in the world of the cloud, companies like Microsoft typically respond to criticism by pointing the finger at others. Through “shell” or “front” organisations the alleged faults of Microsoft’s competitors are brought to the attention of regulators, and in the case of the notorious FairSearch organisation, to take one example, the accusing finger is pointed at Google. We should all try and be aware of the misdeeds of corporations, that unscrupulous behaviour may have occurred, and we should insist that such behaviour be punished. But we should also not be distracted by the tactics of corporations that insist that all the problems reside elsewhere. “But Google!” is not a reason to stop scrutinising the record of a company shouting it out loud, nor is it an excuse for us to disregard any dubious behaviour previously committed by the company shouting it the loudest. (It is absurd that a company with a long history of being subject to scrutiny for anticompetitive practices – a recognised monopoly – should shout claims of monopoly so loudly, and it is even more absurd for anyone to take such claims at face value.)

We should be concerned about Google’s treatment of user privacy, but that should not diminish our concern about Microsoft’s treatment of user privacy. As it turns out, both companies – and several others – have some work to do to regain our trust.

I Do Not Agree

So why should students specifically be worried about all this? Does this not also apply to other groups, like anyone who is made to use software and services in their job? Certainly, this does affect more than just students, but students will probably be the first in line to be forced to accept these solutions or just not be able to take the courses they want at the institutions they want to attend. Even in countries with relatively large higher education sectors like the United Kingdom, it can be the case that certain courses are only available at a narrow selection of institutions, and if you consider a small country like Norway, it is entirely possible that some courses are only available at one institution. For students forced to choose a particular institution and to accept that institution’s own technological choices, the issue of their online privacy becomes urgent because such institutional changes are happening right now and the only way to work around them is to plan ahead and to make it immediately clear to those institutions that the abandonment of the online privacy rights (and other rights) of their “customers” is not acceptable.

Of course, none of this is much comfort to those working in private businesses whose technological choices are imposed on employees as a consequence of taking a job at such organisations. The only silver lining to this particular cloud is that the job market may offer more choices to job seekers – that they can try and identify responsible employers and that such alternatives exist in the first place – compared to students whose educational path may be constrained by course availability. Nevertheless, there exists a risk that both students and others may be confronted with having to accept undesirable conditions just to be able to get a study place or a job. It may be too disruptive to their lives not to “just live with it” and accept choices made on their behalf without their input.

But this brings up an interesting dilemma. Should a person be bound by terms of use and contracts where that person has been effectively coerced into accepting them? If their boss tells them that they have to have a Microsoft or Google account to view and edit some online document, and when they go to sign up they are presented with the usual terms that nobody can reasonably be expected to read, and given that they cannot reasonably refuse because their boss would then be annoyed at that person’s attitude (and may even be angry and threaten them with disciplinary action), can we not consider that when this person clicks on the “I agree” button it is only their employer who really agrees, and that this person not only does not necessarily agree but cannot be expected to agree, either?

Excuses from the Other Side

Recent events have probably made people wary of where their data goes and what happens with it once it has left their own computers, but merely being concerned and actually doing something are two different things. Individuals may feel helpless: all their friends use social networks and big name webmail services; withdrawing from the former means potential isolation, and withdrawing from the latter involves researching alternatives and trying to decide whether those alternatives can be trusted more than one of the big names. Certainly, those of us who develop and promote Free Software should be trying to provide trustworthy alternatives and giving less technologically-aware people the lifeline that they need to escape potentially exploitative services and yet maintain an active, social online experience. Not everyone is willing to sacrifice their privacy for shiny new online toys that supposedly need to rifle through your personal data to provide that shiny new online experience, nor is everyone likely to accept such constraints on their privacy when made aware of them. We should not merely assume that people do not care, would just accept such things, and thus do not need to be bothered with knowledge about such matters, either.

As we have already seen, individuals can make their own choices, but people in organisations are frequently denied such choices. This is where the excuses become more irrational and yet bring with them much more serious consequences. When an organisation chooses a solution from a vendor known to share sensitive information with other, potentially less friendly, parties, they might try and explain such reports away by claiming that such behaviour would never affect “business applications”, that such applications are completely separate from “consumer applications” (where surveillance is apparently acceptable, but no-one would openly admit to thinking this, of course), and that such a vendor would never jeopardise their relationship with customers because “as a customer we are important to them”.

But how do you know any of this? You cannot see what their online services are actually doing, who can access them secretly, whether people – unfriendly regimes, opportunistic law enforcement agencies, dishonest employees, privileged commercial partners of the vendor itself – actually do access your data, because how all that stuff is managed is secret and off-limits. You cannot easily inspect any software that such a vendor provides to you because it will be an inscrutable binary file, maybe even partially encrypted, and every attempt will have been made to forbid you from inspecting it both through licence agreements and legislation made at the request of exactly these vendors.

And how do you know that they value your business, that you are important to them? Is any business going to admit that no, they do not value your business, that you are just another trophy, that they share your private data with other entities? With documentation to the contrary exposing the lies necessary to preserve their reputation, how do you know you can believe anything they tell you at all?

Clouds over the IT building, University of Oslo

The Betrayal

It is all very well for an individual to make poor decisions based on wilful ignorance, but when an organisation makes poor decisions and then imposes them on other people for those people to suffer, this ignorance becomes negligence at the very least. In a university or other higher education institution, apparently at the bottom of the list of people to consult about anything, the bottoms on seats, the places to be filled, are the students: the first in line for whatever experiment or strategic misadventure is coming down the pipe of organisational reform, rationalisation, harmonisation, and all the other buzzwords that look great on the big screen in the boardroom.

Let us be clear: there is nothing inherently wrong with storing content on some network-accessible service, provided that the conditions under which that content is stored and accessed uphold the levels of control and privacy that we as the owners of that data demand, and where those we have elected to provide such services to us deserve our trust precisely by behaving in a trustworthy fashion. We may indeed select a service provider or vendor who respects us, rather than one whose terms and conditions are unfathomable and who treats its users and their data as commodities to be traded for profits and favours. It is this latter class of service providers and vendors – ones who have virtually pioneered the corrupted notion of the consumer cloud, with every user action recorded, tracked and analysed – that this article focuses on.

Students should very much beware of being sent into the cloud: they have little influence and make for a convenient group of experimental subjects, with few powerful allies to defend their rights. That does not mean that everyone else can feel secure, shielded by employee representatives, trade unions, industry organisations, politicians, and so on. Every group pushed into the cloud builds the pressure on every subsequent group until your own particular group is pressured, belittled and finally coerced into resignation. Maybe you might want to look into what your organisation is planning to do, to insist on privacy-preserving infrastructure, and to advocate Free Software as the only effective way of building that infrastructure.

And beware of the excuses – for the favourite vendor’s past behaviour, for the convenience of the cloud, for the improbability that any undesirable stuff could happen – because after the excuses, the belittlement of the opposition, comes the betrayal: the betrayal of sustainable and decentralised solutions, the betrayal of the development of local and institutional expertise, the betrayal of choice and real competition, and the betrayal of your right to privacy online.

Norwegian Voting and the Illusion of “Open Source”

Sunday, June 30th, 2013

I was interested to read about the new Norwegian electronic voting administration system, EVA, and a degree of controversy about whether the system can scale up to handle the number of votes expected in Oslo during the upcoming parliamentary elections. What I find more controversial is the claim that the system has been made available as “open source” software. In fact, a quick look at the licence is enough to confirm that the source code is really only available as “shared source”: something which people may be able to download and consult, but which withholds the freedoms that should be associated with open source software (or Free Software, as we should really call it).

So, here is why the usage of the term “open source” is dishonest in this particular case:

  • The rights offered to you (as opposed to the Norwegian authorities) cover only “testing, reviewing or evaluating the code”. (The authorities have geographical and usage restrictions placed on them.)
  • The licence restricts use to “non-commercial purposes”.
  • Anything else you might want to do requires you to get “written approval” from the vendors of the different parts of the system in question.
  • The software is encumbered by patents, but there is no patent grant.

Lost in Translation

In fairness, the page covering the source code does say the following about the use of the term “open source” which I have translated from the Norwegian:

When we use the term “open source”, we do not mean that the entire solution can be regarded as “free software”, meaning that you can download the software, inspect it, change it, and use it however you like. Anyone can download and inspect the source code, but it may only be used to carry out Norwegian elections. The solution should be available for research, however, and you are allowed to develop it further for such academic purposes.

Why translate from Norwegian when the English is underneath in the same page? Here’s why:

When we are using the term “open source code”, we do not say that the source code as a whole is what’s known as “open source”, that is code that can be freely downloaded, examined, changed and used. Anyone can download our source code, but it is only permittable to use it for elections in Norway. However, research on the solution is allowed and you are hence allowed to develop the system for an academic purpose.

In other words, while there are people involved who are clearly aware of what Free Software is, they apparently reserve the right to misuse the term “open source” to mean something other than what it was intended to mean. By consulting the Norwegian text first, I even give them the benefit of the doubt, but the contradiction of saying that something both is and is not something else should have awakened a degree of guilt that this could be regarded as deception.

Open Season

Now, one may argue that the Open Source Initiative should have been more aggressive in upholding its own brand and making sure that “open source” really does act as a guarantee of openness, but the “open” prefix is perhaps one of the most abused terms in the field of computing, and thus one might conclude that the OSI were fighting a losing battle from the outset. Of course, people might claim that the term “free software” is ambiguous or vague, which is why I prefer to write it as Free Software: the use of capital letters should at least get English readers unfamiliar with the term to wonder whether there is more involved than the mere juxtaposition of two widely-used words.

In Norwegian, “fri programvare” communicates something closer to what is meant by Free Software: the “fri” does not generally mean “for free” or “gratis”, but rather communicates a notion of freedom. I imagine that those who translated the text quoted above need to improve their terminology dictionary. Nevertheless, this does not excuse anyone who takes advantage of the potential ambiguity in the common sense perception of the term to promote something that has only superficial similarities to Free Software.

The Good, the Bad, and the Evry

I suppose we should welcome increased transparency in such important systems, and we should encourage more of the same in other areas. Nevertheless, with a substantial amount of activity in the field of electronic voting, particularly in the academic realm and in response to high-profile scandals (with some researchers even apparently experiencing persecution for their work), one has to wonder why the Norwegian government is not willing to work on genuinely open, Free Software electronic voting systems instead of partnering with commercial interests who, by advertising patent coverage, potentially threaten research in this area and thus obstruct societal progress, accountability and democracy.

One might suggest that the involvement of EDB ErgoGroup, now known as Evry, provides some answers to questions about how and where taxpayer money gets spent. With substantial state involvement in the company, either through the state-owned postal monopoly or the partly-state-owned incumbent telephone operator, one cannot help thinking that this is yet another attempt to funnel money to the usual beneficiaries of public contracts and to pick winners in an international market for electronic voting systems. Maybe, with voting problems, banking system problems and the resulting customer dissatisfaction, public departments and ministries think that this vendor needs all the help it can get, despite being appointed to a dominant position in the nation’s infrastructure and technology industry to the point of it all being rather anti-competitive. When the financial sector starts talking of monopolies, maybe the responsible adults need to intervene to bring a degree of proper functioning to the marketplace.

But regardless of who became involved and their own rewards for doing so, the Norwegian ministry responsible for this deception should be ashamed of itself! How about genuinely participating in and advancing the research and development of voting systems that everyone on the planet can freely use instead of issuing dishonest press releases and patronising accounts of how Norway doesn’t really need to learn from anyone? Because, by releasing patent-encumbered “shared source” and calling it “open source”, some decision makers and their communications staff clearly need lessons in at least one area: telling the truth.

An Aside

I also find it distasteful that the documentation hosted on the government’s electoral site has adverts for proprietary software embedded in it, but I doubt that those working in the Microsoft monoculture even notice their presence. These people may be using taxpayer money to go shopping for proprietary products, but I do not see why they should then be advertising those products to us as well using our money.

Site Licences and Volume Licensing: Locking You Both In… and Out

Sunday, June 9th, 2013

Once upon a time, back in the microcomputer era, if you were a reputable institution and were looking to acquire software it was likely that you would end up buying proprietary software, mostly because Free Software was not a particularly widely-known concept, and partly because any “public domain” or “freeware” programs probably didn’t give you or your superiors much confidence about the quality or maintenance of those programs, although there were notable exceptions on some platforms (some of which are now Free Software). As computers became more numerous, programs would be used on more and more computers, and producers would take exception to their customers buying a single “copy” and then using it on many computers simultaneously.

In order to avoid arguments about common expectations of reasonable use – if you could copy a program onto many floppy disks and run that program on many computers at once, there was obviously no physical restriction on the use of copies and thus no apparent need to buy “official” copies when your computer could make them for you – and in order to avoid needing to engage in protracted explanations of copyright law to people for whom such law might seem counter-intuitive or nonsensical, the concept of the “site licence” was born: instead of having to pay for one hundred official copies of a product, presumably consisting of one hundred disks in one hundred boxes with one hundred manuals, at one hundred times the list price of the product, an institution would buy a site licence for up to one hundred computers (or perhaps as many as the institution has, betting on the improbability that the institution will grow tenfold, say) and pay somewhat less than one hundred times the original price, although perhaps still a multiple of ten of that price.

Thus, the customer got the vendor off their back, the vendor still got more or less what they thought was a fair price, and everyone was happy. At least that is how it all seemed.

The Physical Discount Store

Now, because of the apparent compromise made by the vendor – that the customer might be paying somewhat less per copy – the notion of the “volume licence” or “bulk discount” arose: suddenly, software licences start to superficially resemble commodities and people start to think of them just like they do when they buy other things in bulk. Indeed, in the retail sector the average person became aware of the concept of bulk purchasing with the introduction of cash and carry stores, discount stores, and so on: the larger the volume of goods passing through those channels, the bigger the discounts on those goods.

Now, economies of scale exist throughout modern commerce and often for good reason: any fixed costs (or costs largely insensitive to the scale of output) in production and distribution can be diluted by an increased number of units produced and shipped, making the total per-unit cost less; commitments to larger purchases, potentially over a longer period of time, can also provide stability to producers and suppliers and encourage mutually-beneficial and lasting relationships throughout the supply chain. A thorough treatment of this topic is clearly beyond a blog post, but it is worthwhile to briefly explore how savings arise and how discounts are made.

Let us consider a producer whose factory can produce at most a million units of a product every year. It may not seek to utilise this capacity if it cannot be sure that all units will be sold: excess inventory may incur warehouse costs and also result in an uncompetitive product going unsold or needing to be heavily discounted in order to empty those warehouses and make room for more competitive stock. Moreover, the producer may need to reconsider their employment levels if the demand varies significantly, which in some places incurs significant costs both in reduction and expansion. Adding manufacturing capability might not be as easy as finding a spare factory, either. All this additional flexibility is expensive for producers.

However, if a large, well-known retailer like Wal-Mart or Tesco (to name but two that come to mind immediately) comes along and commits to buying most or all of the production, a producer now has more certainty that the inventory will be sold and that it will not be paying people to do nothing or to suddenly have to change production lines to make new products, and so on. Even things like product variations can be minimised by having a single customer or few customers, and this reduces costs for the producer still further. Naturally, Wal-Mart would expect some of the savings to be passed on to them, and so this relationship benefits both parties. (It also produces a potential discount to be passed on to retail customers who may not be buying in bulk after all, but that is another matter.)

The Software Discount Store?

For software, even though the costs of replication have been driven close to nothing, the production of software certainly has a significant fixed cost: the work required to develop a viable product in the first place. Let us say that an organisation wishes to make and sell a non-niche product but needs to employ fifty people for two years to do so (although this would have been almost biblical levels of manpower for some successful software companies in the era of the microcomputer); thus one hundred person-years are invested in development. To just remain in business while selling “copies” of the software, one might need to sell one hundred thousand individual copies. That is if the company wants to just sell “licences” and not do things like services, consulting, paid support, and so on.
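To make the implicit arithmetic explicit, here is a rough sketch; the cost per person-year and the list price per copy are illustrative assumptions rather than figures from any real product:

  # Break-even arithmetic for the example above. The cost per person-year and
  # the list price per copy are assumptions for illustration, not real figures.

  person_years = 50 * 2                  # fifty people employed for two years
  cost_per_person_year = 100_000         # assumed fully-loaded annual cost
  development_cost = person_years * cost_per_person_year   # 10,000,000

  list_price_per_copy = 100              # assumed price of a single "copy"
  copies_to_break_even = development_cost / list_price_per_copy

  print(copies_to_break_even)            # 100000.0 copies just to recover development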

Now, the cost of each copy can be adjusted according to the number of sales. If things go better than expected, the prices could be lowered because the company will cover its costs more quickly than anticipated, but they may also raise the prices to take advantage of the desirability of the product. If things go worse than expected, the prices might be raised to bring in more revenue per sale, but such pricing decisions also have to consider the customer reaction where an increased price turns away customers who can no longer justify the expense. In some cases, however, raising the price might make the product seem more valuable and make it more attractive to potential customers, despite the initial lack of interest from such customers.

So, can one talk about economies of scale with regard to software as if it were a physical product or commodity? Not really. The days of needing to get more disks from the duplicator, more manuals from the printer, and to send more boxes to distributors are over, leaving the bulk of the expense in employing people to get the software written. And all those people developing the product are not producing more units by writing more code or spending more time in the office. One can argue that by adding more features they are generating more sales, but it is doubtful that the relationship between features and sales is so well defined: after a while, a lot of the new features will be superfluous for all but “power users”. One can also argue that by adding more features they are making the product seem more valuable, and so a higher price can be justified. To an extent this may be the case, but the relationship between price and sales is not always so well defined, either (despite attempts to do so). But certainly, you do not need to increase your “production capacity” to fulfil a sales need: whether you make one hundred or one million sales (or generate a tenth of or ten times the anticipated revenue) is probably largely independent of how many people were hired to write the code.

But does it make sense to consider bulk purchasing of software as a way of achieving savings? Not really. Unlike physical production, there is no real limit to how many units are sold to customers, and so beyond a certain threshold demanded by profitability, there is no need for anyone to commit to purchasing a certain number of units. Especially now that a physical component of a software product is unlikely to be provided in any transaction – the software is downloaded, the manual is downloaded, there is no “retail box”, no truck arriving at the customer, no fork-lift offloading pallets of the product – there is also no inventory sitting in a warehouse going unsold. It might be nice if someone paid a large sum of money so that the developers could keep working on the product and not have to be moved to some other project, but the constraints of physical products do not apply so readily here.

Who Benefits from Volume Licensing?

It might be said, then, that the “economies of scale” argument starts to break down when software is considered. Producers can more or less increase supply at will and at a relatively low cost, and they need only consider demand in order to break even. Beyond that point, everything is more or less profit and they deliver units at no risk to themselves. Certainly, a producer could use this to price their products aggressively and to pass on considerable savings to customers, but they have no obligation and arguably little inclination to do so for profitability reasons alone. Indeed, they probably want to finance new products and therefore need the money.

When purchasers of physical goods choose to buy in bulk, they do so to get access to savings passed on by the producer, and for some categories of products the practice of committing larger sums of money to larger purchases carries little risk. For example, an organisation might buy a larger quantity of toilet paper than it normally would – even to the point of some administrator complaining that “this must be more than we really need!” – and as long as the organisation had space to store it, it would surely be used over time with very little money wasted as a result.

But for software, any savings passed on by the producer are discretionary rather than a genuine consequence of the costs of production, and there is a real risk of buying “more than we really need”: a licence for an office application will not get “used up” when someone has “reached the end” of another licence; overspending on such capacity is just throwing money away. It is simply not in the purchaser’s interest to buy too many licences.

Now, software producers have realised that their customers are sensitive to this issue. Presumably, the notion of the site licence or “volume licensing” arose fairly quickly: some customers may have indicated that their needs were not so well defined that they could say they needed precisely one hundred copies of a product, and besides, their computer users might not all have been using the software at the same time, so it might not make sense to provide everyone with a copy of a program when they could pass the disks around (or, in later times, use “floating licences”). Producers want customers to feel that they are getting value for money and not spending too much. The site licence was presumably offered as a way of stopping them from buying exactly what they need, instead getting them to spend a bit more than they might like, but perhaps a bit less than they would need to if money were no object and per-unit pricing were the only thing on offer. (The other way of influencing the customer is, of course, the threat of audits by aggressive proprietary software organisations, but that is another matter.)

Regardless of the theory and the mechanisms involved, do customers benefit from site licences? Well, if they spend less on a site licence than they would on the list price of a product multiplied by the number of active users of that product, then they certainly benefit from savings on the licensing fees. However, there are other factors involved, introducing broader costs, to which we will return in a moment.
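
As a minimal sketch of the comparison just described, using entirely hypothetical prices and user counts:

# Illustrative sketch only: per-seat licensing versus a site licence.
# All figures are hypothetical.
list_price_per_seat = 120      # list price per user
active_users = 800             # users who actually need the product
site_licence_price = 80_000    # flat fee offered for the whole site

per_seat_total = list_price_per_seat * active_users    # 96,000
apparent_saving = per_seat_total - site_licence_price  # 16,000

print(f"Per-seat total:  {per_seat_total}")
print(f"Site licence:    {site_licence_price}")
print(f"Apparent saving: {apparent_saving}")

# The saving only materialises if all 800 users genuinely need the product;
# the broader costs discussed below never appear in this calculation.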

Do producers benefit from site licences? Almost certainly. They allow companies to opportunistically increase revenue by inviting customers to spend a bit more for “peace of mind” and convenience of administration (no more having to track all by yourself who is using which product and whether too many people are doing so because a “helpful” company will take care of it for you). If such a thing did not exist, customers would probably choose to act conservatively and more closely review their purchases. (Or they might just choose to embrace Free Software instead, of course.)

All You Won’t Eat

But it is the matter of what the customer needs that should interest us here. If customers did need to review their purchases more closely, they might find it hard to justify spending large sums on volume licences. After all, not everyone might be in need of some product that can theoretically be rolled out to everyone. Indeed, some people might prefer another product instead: it might be much more appropriate for their kind of work, or it might work better on their platform (or even actually work on their platform where the already-bought product does not).

And where the organisation’s purse strings are loosened when buying a site licence for a product in the first instance, the organisation may not be so forthcoming with finance to acquire other products in the same domain, even if there are genuine reasons for doing so. “You already have an office program you can use; why do you want us to buy another?” Suddenly, instead of creating opportunities, volume licensing eliminates them: if the realm of physical products worked like this, Tesco would offer only one brand of toilet paper and perhaps not even a particularly pleasant one at that!

But it doesn’t stop there. Some vendors bundle products together in volume licensing deals. “Why not indulge yourself with a package of products featuring the ones you want together with some you might like?” This is what customers are made to ask themselves. Suddenly, the justification for acquiring a superior product from a competitor of the volume licensing provider is subject to scrutiny. “You already have access to an intranet solution; why do you want us to spend time and money on another?” And so the supposedly generous site licence becomes a mechanism to rein in spending on alternatives (which may be Free Software acquired at no cost) and even their mere usage, all because the acquisition cost of things that people are not already actively using is wrongly perceived as being “free”. “Just take advantage of the site licence!” is what people are told, and even if the alternatives are zero cost, the pressure will still be brought to bear because “we paid for things we could use, so let’s use them!”

And the Winner is…

With such blinkered thinking the customer can no longer sensibly exercise choice: it becomes too easy to constrain an organisation’s strategy based on what else is in the lucky dip of products included in the multiple-product volume licensing programme. Once one has bought into such a scheme, there is a disincentive to look elsewhere for other solutions, and soon every need to be satisfied becomes phrased in terms of the solutions an organisation has already “bought”. Need an e-mail system? The solution now has to be phrased in terms of a single vendor’s product that “we already have”. And when such extra purchases merely add to proprietary infrastructure with proprietary dependencies, that supposedly generous site licence is nothing but bait on the end of the vendor’s fishing line.

We know who the real winner is here. The real loser is anyone having to compete with such schemes, especially anyone using open standards in their products, particularly anyone delivering Free Software using open standards. Because once people have paid good money for something, they will defend that “investment” even when it makes no real sense: this is basic human psychology at work. But the customer is the loser, too: what once seemed like a good deal will just result in them throwing good money after bad, telling themselves that it’s the volume of usage – the chance to sample everything at the “all you can eat” buffet – that makes it a “good investment”, never mind that some of the food at the buffet is unhealthy, poor quality, or may even make people ill.

The customer becomes increasingly “locked in”, unable to consider alternatives. The competition becomes “locked out”, unable to persuade the customer to migrate to open-standards-based solutions or indeed anything else, because even if the customer recognised their dependency on their existing vendor, the cost of undoing the mess might well be less predictable and less palatable than a subscription fee to that “preferred” vendor: it would appear as an uncomfortably noticeable entry in the accounts, one that might indicate strategic ineptitude or wrongdoing – that a mistake has been made – which would be difficult to acknowledge and tempting to conceal. But when the outcome of taking such uncomfortable remedial measures would be lower costs, truly interoperable systems and vastly increased choice, it would be the right thing to do.

One might be tempted to just sit back and watch all this unfold, especially if one has no connection with any of the organisations involved and if the competition consists only of a bunch of proprietary software vendors. But remember this: when the customer is spending your tax money, you are the loser, too. And then you have to wonder who apart from the “preferred” vendor benefits from making you part of the losing team.

Horseplay in Public Procurement? “Standards!”

Thursday, June 6th, 2013

There is a classic XKCD comic strip where the programmer, “slacking off” in the office and taking a break from doing work, clearly engaging in horseplay, issues the retort “Compiling!” to get his supervisor or peers off his back. It is seen as the ultimate excuse for not doing one’s work, immediately curtailing any further investigation of what really is going on in the corridor. Having recently been investigating some strategic public sector purchasing decisions, it occurred to me that something similar is going on in that area as well.

There’s an interesting case that came up a few years ago: Oslo municipality sought to acquire infrastructure for e-mail and related functionality. The scope of the tender covered “at least 30000 accounts” for client and server software, services and assistance, which is a pretty big tender but not unexpected given that the municipality is one of the largest single employers in Norway with almost 50000 employees (more statistics available here). Unfortunately, the additional documents are no longer available (and are generally not publicly available at the state procurement portal – you have to register as an interested party), but they are quoted in various places. Translating one particular requirement…

“Oslo municipality has standardised on Microsoft Office as office productivity software. It is therefore expected that solutions use MS Outlook 2003 and later as client.”

Two places where the offending requirements are reproduced are in complaints to the state procurement panel: 2009/124 and 2009/153. In these very similar complaints, it is pointed out that alternatives to Outlook can be offered as options (this is in the original tender), but that the municipality would only test proposed solutions with Outlook. As justification for insisting on Outlook compatibility, the municipality claimed that they had found “six different large companies providing relevant software in connection with the drafting of the requirements… all of which can be used together with Outlook”, and thus there was a basis for real competition. As a result, both complaints were rejected.

The Illusion of Compatibility

Now, one might claim that it is perfectly reasonable to want to acquire systems that work with the ones you already have. It is a bit like saying, “I’ve bought all this riding equipment: of course I want a horse!” The deeper issue here is whether anyone should be allowed to specify product compatibility to limit competition. In other words, when you just need transport to get around, why have you made your requirements so specific that you will only ever be getting a horse?

It is all very well demanding compatibility with a specific product, but when the means by which compatibility can be achieved are controlled by the vendor of that product, it is never going to be a fair competition for anyone trying to provide compatibility for their own separate products and solutions, especially when the vendor of the specified product is known to have used compatibility breakage to deliberately undermine the viability of competitors’ products. One response to this pitfall is to insist that those writing procurement tenders specify standards instead of products and that these standards must be genuinely open and not de-facto proprietary standards.

Unfortunately, the regulators of procurement do not seem to go even this far. The Norwegian government states that public sector institutions must support various standards, although the directorate concerned appears to have changed these obligations from the original directive and now insists that the dubious, forcibly- and incompletely-standardised Office Open XML document format must be accepted by the public sector in communications; they have also weakened the Internet publishing requirements for public sector institutions by permitting the use of various encumbered, cartel-controlled audio and video formats. For these changes, entertained in a review process, we can thank the likes of Statistics Norway who wanted “Word format” as well as OOXML to be permitted in the list of acceptable “standards”.

In any case, such directives only cover the surface of public sector activity, and the list of standards does not in general cover anything more than storage and interchange formats plus basic communications standards. This leaves quite a gap where established Internet standards exist but are not mandated, thus allowing proprietary protocols and technologies to insert themselves into infrastructure and pervert the processes of procurement and systems integration.

The Pretence of “Standards!”

But even if open standards were mandated in the public sector – a worthy and necessary measure – that wouldn’t mean that our work to ensure a level playing field – fairness in procurement – would be done. Because vendors can always advertise compliance with standards, they can still insist that their products be considered in any procurement contest, and even if those products do notionally support standards it does not mean that they will end up using them when deployed. For example, from the case of the Oslo municipality e-mail system, the councillor with responsibility for finance and development indicated the following:

“Oslo municipality is a complicated and comprehensive organisation and must take existing integration with specialist/bespoke systems into account. A procurement of other [non-Microsoft] end-user software will therefore result in unnecessary increases in costs for the municipality.”

In other words, even if existing software was acquired under the pretence that it supported standards, in deployment it may actually only function with other software using proprietary mechanisms, and the result of this is that newly-acquired software must also support these proprietary mechanisms. And so, a proprietary infrastructure grows, actively repelling components that employ open standards, with its custodians insisting that it is the fault of standards-compliant software that such an infrastructure would need to be dismantled almost in its entirety and replaced if even one standards-compliant component were to be admitted.

Who benefits the most from this? The vendor peddling the proprietary platforms and technologies that enable this morass of interdependency, of course. Make no mistake: any initial convenience promised by such a vendor fades away when the task of having to pursue an infrastructure strategy not dictated by outside interests is brought to bear on the purchaser. But such tasks are work, of course, and if there’s a way of avoiding it and insisting it doesn’t need attending to, a distraction can always be found.

And so, the horseplay continues under the excuse of “Standards!” when there is no real intent to uphold them or engage in the real work of maintaining a sustainable infrastructure that does not exclude open competition or channel public money to preferred vendors. Unlike the character in the comic strip, whose code probably is still compiling, certain public sector institutions would have experienced a compilation error and been found out. It appears, unfortunately, that it is our job to peer around the cubicle partition and see what is happening on screen, and perhaps to investigate the noises coming from the corridor. After all, our institutions don’t seem to be particularly concerned about doing so.