Paul Boddie's Free Software-related blog


Archive for the ‘standards’ Category

In Defence of Mail

Monday, November 6th, 2017

A recent LWN.net article, “The trouble with text-only email”, gives us an insight through an initially-narrow perspective into a broader problem: how the use of e-mail by organisations and its handling as it traverses the Internet can undermine the viability of the medium. And how organisations supposedly defending the Internet as a platform can easily find themselves abandoning technologies that do not sit well with their “core mission”, not to mention betraying that mission by employing dubious technological workarounds.

To summarise, the Mozilla organisation wants its community to correspond via mailing lists but, being the origin of the mails propagated to list recipients when someone communicates with one of their mailing lists, it finds itself under the threat of being blacklisted as a spammer. This might sound counterintuitive: surely everyone on such lists signed up for mails originating from Mozilla in order to be on the list.

Unfortunately, the elevation of Mozilla to being a potential spammer says more about the stack of workaround upon workaround, second- and third-guessing, and the “secret handshakes” that define the handling of e-mail today than it does about anything else. Not that factions in the Mozilla organisation have necessarily covered themselves in glory in exploring ways of dealing with their current problem.

The Elimination Problem

Let us first identify the immediate problem here. No, it is not spamming as such, but the existence of dubious “reputation” services that cause mail to be blocked on opaque and undemocratic grounds. I encountered one of these a few years ago when trying to send a mail to a competition, only to find that such a service had decided that my mail hosting provider’s Internet address was somehow “bad”.

What can one do when placed in such a situation? Appealing to the blacklisting service will not do an individual any good. Instead, one has to ask one’s mail provider to try and fix the issue, which in my case they had actually been trying to do for some time. My mail never got through in the end. Who knows how long it took to persuade the blacklisting service to rectify what might have been a mistake?

Yes, we all know that the Internet is awash with spam. And yes, mechanisms need to be in place to deal with it. But such mechanisms need to be transparent and accountable. Without these things, all sorts of bad things can take place: censorship, harassment, and forms of economic crime spring readily to mind. It should be a general rule of thumb in society that when someone exercises power over others, such power must be controlled through transparency (so that it is not arbitrary and so that everyone knows what the rules are) and through accountability (so that decisions can be explained and judged to have been properly taken and acted upon).

We actually need better ways of eliminating spam and other misuse of common communications mechanisms. But for now we should at least insist that whatever flawed mechanisms exist today uphold the democratic principles described above.

The Marketing Problem

Although Mozilla may have distribution lists for marketing purposes, its problem with mailing lists is something of a different creature. The latter are intended to be collaborative and involve multiple senders of the original messages: a many-to-many communications medium. Meanwhile, the former are all about one-to-many messaging, and in this regard we stumble across the root of the spam problem.

Obviously, compulsive spammers are people who harvest mail addresses from wherever they can be found, trawling public data or buying up lists of addresses sourced during potentially unethical activities. Such spammers create a huge burden on society’s common infrastructure, but they are hardly the only ones cultivating that burden. Reputable businesses, even when following the law in communicating with their own customers, often employ what can be regarded as a “clueless” use of mail as a marketing channel without any thought to the consequences.

Businesses might want to remind you of their products and encourage you to receive their mails. The next thing you know, you get messages three times a week telling you about products that are barely of interest to you. This may be a “win” for the marketing department – it is like advertising on television but cheaper because you don’t have to bid against addiction-exploiting money launderers (that is, gambling companies), debt sharks (consumer credit companies) or environment-trashing cure peddlers (nutritional supplement companies) for “eyeballs” – but it cheapens and worsens the medium for everybody who uses it for genuine interpersonal communication and not just for viewing advertisements.

People view e-mail and mail software as a lost cause in the face of wave after wave of illegal spam and opportunistic “spammy” marketing. “Why bother with it at all?” they might ask, asserting that it is just a wastebin that one needs to empty once a week as some kind of chore, before returning to one’s favourite “social” tools (also plagued with spam and surveillance, but consistency is not exactly everybody’s strong suit).

The Authenticity Problem

Perhaps to escape problems with the overly-zealous blacklisting services, it is not unusual to get messages ostensibly from a company of which one is a customer, but where the message actually originates from some kind of marketing communications service. The use of such a service may be excusable depending on how much information is shared, what kinds of safeguards are in place, and so on. What is less excusable is the way the communication is performed.

I actually experience this with financial institutions, which should be a significant area of concern for individuals, the industry and its regulators alike. First of all, the messages are not encrypted, which is what one might expect given that the sender would need some kind of public key information that I haven’t provided. But provided that the message details are not sensitive (although sometimes they have been, which is another story), we might not set our expectations so high for these communications.

However, of more substantial concern is that, when receiving such mails, we have no way of verifying that they really originated from the company they claim to have come from. And when the mail inevitably contains links to things, we might be suspicious about where those links, even if they are URLs in plain text messages, might want to lead us.

The recipient is now confronted with a collection of Internet domain names that may or may not correspond to the identities of reputable organisations, some of which they might know as a customer, others they might be aware of, but where the recipient must also exercise the correct judgement about the relationship between the companies they do use and these other organisations with which they have no relationship. Even with a great deal of peripheral knowledge, the recipient needs to exercise caution that they do not go off to random places on the Internet and start filling out their details on the say-so of some message or other.

Indeed, I have a recent example of this. One financial institution I use wants me to take a survey conducted by a company I actually have heard of in that line of business. So far, so plausible. But then, the site being used to solicit responses is one I have no prior knowledge of: it could be a reputable technology business or it could be some kind of “honeypot”; that one of the domains mentioned contains “cloud” also does not instil confidence in the management of the data. To top it all, the mail is not cryptographically signed and so I would have to make a judgement on its authenticity based on some kind of “tea-leaf-reading” activity using the message headers or assume that the institution is likely to want to ask my opinion about something.

The Identity Problem

With the possibly-authentic financial institution survey message situation, we can perhaps put our finger on the malaise in the use of mail by companies wanting our business. I already have a heavily-regulated relationship with the company concerned. They seemingly emphasise issues like security when I present myself to their Web sites. Why can they not at least identify themselves correctly when communicating with me?

Some banks only want electronic communications to take place within their hopefully-secure Web site mechanisms, offering “secure messaging” and similar things. Others also offer such things, either two-way or maybe only customer-to-company messaging, but then spew e-mails at customers anyway, perhaps under the direction of the sales and marketing branches of the organisation.

But if they really must send mails, why can they not leverage their “secure” assets to allow me to obtain identifying information about them, so that their mails can be cryptographically signed and so that I can install a certificate and verify their authenticity? After all, if you cannot trust a bank to do these things, which other common institutions can you trust? Such things have to start somewhere, and what better place to start than in the banking industry? These people are supposed to be good at keeping things under lock and key.

The Responsibility Problem

This actually returns us to the role of Mozilla. Being a major provider of software for accessing the Internet, the organisation maintains a definitive list of trusted parties through whom the identity of Web sites can be guaranteed (to various degrees) when one visits them with a browser. Mozilla’s own sites employ certificates so that people browsing them can have their privacy upheld, so it should hardly be inconceivable for the sources of Mozilla’s mail-based communications to do something similar.

Maybe S/MIME would be the easiest technology to adopt given the similarities between its use of certificates and certificate authorities and the way such things are managed for Web sites. Certainly, there are challenges with message signing and things like mailing lists, this being a recurring project for GNU Mailman if I remember correctly (and was paying enough attention), but nothing solves a longstanding but largely underprioritised problem like a concrete need and the will to get things done. Mozilla has certainly tried to do identity management in the past, recalling initiatives like Mozilla Persona, and the organisation is surely reasonably competent in that domain.
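
To make the suggestion a little more concrete, here is a minimal sketch of what verification might look like on the receiving side, assuming the recipient has obtained a bundle of certificates they trust from the institution’s existing “secure” channels. It is written in Python, delegates the actual checking to the widely-available openssl command-line tool, and the file names are purely illustrative; it is emphatically not a description of anything Mozilla or any bank currently ships.

import subprocess

def verify_smime_signature(message_path, ca_bundle_path):
    """Return True if the signed message verifies against the CA bundle."""
    result = subprocess.run(
        ["openssl", "smime", "-verify",
         "-in", message_path,         # the signed message, as received
         "-CAfile", ca_bundle_path],  # certificates we have chosen to trust
        capture_output=True)
    # openssl exits with status 0 (reporting "Verification successful")
    # only when the signature and the certificate chain check out.
    return result.returncode == 0

if __name__ == "__main__":
    print(verify_smime_signature("statement.eml", "bank-ca.pem"))

The interesting part is not the code, which is trivial, but where the trusted certificates come from: if institutions provisioned them through their existing authenticated channels, mail software could perform this check automatically on the user’s behalf.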

In the referenced article, Mozilla was described as facing an awkward technical problem: their messages were perceived as being delivered indiscriminately to an audience, large portions of which may not have been receiving, or at least not opening, the messages. This perception of indiscriminate, spam-like activity is apparently some kind of metric employed by blacklisting services. The proposed remedy for potential blacklisting involved the elimination of plain text e-mail from Mozilla’s repertoire and the deployment of HTML-only mail, with the latter employing links to images that would load upon the recipient opening the message. (Never mind that many mail programs prevent this.)

The rationale for this approach was that Mozilla would then know that people were getting the mail and that by pruning away those who didn’t reveal their receipt of the message, the organisation could then be more certain of not sending mail to large numbers of “inactive” recipients, thus placating the blacklisting services. Now, let us consider principle #4 of the Mozilla manifesto:

Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional.

Given such a principle, why then is the focus on tracking users and violating their privacy, not on deploying a proper solution and just sending properly-signed mail? Is it because the mail is supposedly not part of the Web or something?

The Proprietary Service Problem

Mozilla can be regarded as having a Web-first organisational mentality which, given its origins, should not be too surprising. Although the Netscape browser was extended to include mail facilities and thus Navigator became Communicator, and although the original Mozilla browser attempted to preserve a range of capabilities not directly related to hypertext browsing, Firefox became the organisation’s focus and peripheral products such as Thunderbird have long struggled for their place in the organisation’s portfolio.

One might think that the decision-makers at Mozilla believe that mundane things like mail should be done through a Web site as webmail and that everyone might as well use an established big provider for their webmail needs. After all, the vision of the Web as a platform in its own right, once formulated as Netscape Constellation in more innocent times, can be used to justify pushing everything onto the Web.

The problem here is that as soon as almost everyone has been herded into proprietary service “holding pens”, expecting a free mail service while having their private communications mined for potential commercial value, things like standards compliance and interoperability suffer. Big webmail providers don’t need to care about small mail providers. Too bad if the big provider blacklists the smaller one: most people won’t even notice, and why don’t the users of the smaller provider “get with it” and use what everybody else is using, anyway?

If almost everyone ends up on the same server or cluster of servers, or on one of a handful of such clusters, why should the big providers bother to do anything by the book any more? They can make all sorts of claims about it being more efficient to do things their own way. And then, mail is no longer a decentralised, democratic tool any more: its users end up being trapped in a potentially exploitative environment with their access to communications at risk of being taken away at a moment’s notice, should the provider be persuaded that some kind of wrong has been committed.

The Empowerment Problem

Ideally, everyone would be able to assert their own identity and be able to verify the identity of those with whom they communicate. With this comes the challenge in empowering users to manage their own identities in a way which is resistant to “identity theft”, impersonation, and accidental loss of credentials that could have a severe impact on a person’s interactions with necessary services and thus on their life in general.

Here, we see the failure of banks and other established, trusted organisations to make this happen. One might argue that certain interests, political and commercial, do not want individuals controlling their own identity or their own use of cryptographic technologies. Even when such technologies have been deployed so that people can be regarded as having signed for something, it usually happens via a normal secured Web connection with a button on a Web form, everything happening at arm’s length. Such signatures may not even be any kind of personal signature at all: they may just be some kind of transaction surrounded by assumptions that it really was “that person” because they logged in with their credentials and there are logs to “prove” it.

Leaving the safeguarding of cryptographic information to the average Internet user seems like a scary thing to do. People’s computers are not particularly secure thanks to the general neglect of security by the technology industry, nor are they particularly usable or understandable, especially when things that must be done right – like cryptography – are concerned. It also doesn’t help that when trying to figure out best practices for key management, it almost seems like every expert has their own advice, leaving the impression of a cacophony of voices, even for people with a particular interest in the topic and an above-average comprehension of the issues.

Most individuals in society might well struggle if left to figure out a technical solution all by themselves. But institutions exist that are capable of operating infrastructure with a certain level of robustness and resilience. And those institutions seem quite happy with the credentials I provide to identify myself with them, some of which are provided by bits of hardware they have issued to me.

So, it seems to me that maybe they could lead individuals towards some kind of solution whereupon such institutions could vouch for a person’s digital identity, provide that person with tools (possibly hardware) to manage it, and could help that person restore their identity in cases of loss or theft. This kind of thing is probably happening already, given that smartcard solutions have been around for a while and can be a component in such solutions, but here the difference would be that each of us would want help to manage our own identity, not merely retain and present a bank-issued identity for the benefit of the bank’s own activities.

The Real Problem

The LWN.net article ends with a remark mentioning that “the email system is broken”. Given how much people complain about it, and yet how the mail still keeps getting through, it appears that the brokenness is not in the system as such but in the way it has been misused and undermined by those with the power to do something about it.

That the metric of being able to get “pull requests through to Linus Torvalds’s Gmail account” is mentioned as some kind of evidence perhaps shows that people’s conceptions of e-mail are themselves broken. One is left with an impression that electronic mail is like various other common resources that are systematically and deliberately neglected by vested interests so that they may eventually fail, leaving those vested interests to blatantly profit from the resulting situation while making remarks about the supposed weaknesses of those things they have wilfully destroyed.

Still, this is a topic that cannot be ignored forever, at least if we are to preserve things like genuinely open and democratic channels of communication whose functioning may depend on decent guarantees of people’s identities. Without a proper identity or trust infrastructure, we risk delegating every aspect of our online lives to unaccountable and potentially hostile entities. If it all ends up with everyone having to do their banking inside their Facebook account, it would be well for the likes of Mozilla to remember that at such a point there is no consolation to be had any more that at least everything is being done in a Web browser.

EOMA68: The Campaign (and some remarks about recurring criticisms)

Thursday, August 18th, 2016

I have previously written about the EOMA68 initiative and its objective of making small, modular computing cards that conform to a well-defined standard which can be plugged into certain kinds of device – a laptop or desktop computer, or maybe even a tablet or smartphone – providing a way of supplying such devices with the computing power they all need. This would also offer a convenient way of taking your computing environment with you, using it in the kind of device that makes most sense at the time you need to use it, since the computer card is the actual computer and all you are doing is putting it in a different box: switch off, unplug the card, plug it into something else, switch that on, and your “computer” has effectively taken on a different form.

(This “take your desktop with you” by actually taking your computer with you is fundamentally different to various dubious “cloud synchronisation” services that would claim to offer something similar: “now you can synchronise your tablet with your PC!”, or whatever. Such services tend to operate rather imperfectly – storing your files on some remote site – and, of course, expose you to surveillance and convenience issues.)

Well, a crowd-funding campaign has since been launched to fund a number of EOMA68-related products, with an opportunity for those interested to acquire the first round of computer cards and compatible devices, those devices being a “micro-desktop” that offers a simple “mini PC” solution, together with a somewhat radically designed and produced laptop (or netbook, perhaps) that emphasises accessible construction methods (home 3D printing) and alternative material usage (“eco-friendly plywood”). In the interests of transparency, I will admit that I have pledged for a card and the micro-desktop, albeit via my brother for various personal reasons that also delayed me from actually writing about this here before now.

An EOMA68 computer card in a wallet (courtesy Rhombus Tech/Crowd Supply)

Of course, EOMA68 is about more than just conveniently taking your computer with you because it is now small enough to fit in a wallet. Even if you do not intend to regularly move your computer card from device to device, it emphasises various sustainability issues such as power consumption (deliberately kept low), long-term support and matters of freedom (the selection of CPUs that completely support Free Software and do not introduce surveillance backdoors), and device longevity (that when the user wants to upgrade, they may easily use the card in something else that might benefit from it).

This is not modularity to prove some irrelevant hypothesis. It is modularity that delivers concrete benefits to users (that they aren’t forced to keep replacing products engineered for obsolescence), to designers and manufacturers (that they can rely on the standard to provide computing functionality and just focus on their own speciality to differentiate their product in more interesting ways), and to society and the environment (by reducing needless consumption and waste caused by the upgrade treadmill promoted by the technology industries over the last few decades).

One might think that such benefits might be received with enthusiasm. Sadly, it says a lot about today’s “needy consumer” culture that instead of welcoming another choice, some would rather spend their time criticising it, often to the point that one might wonder about their motivations for doing so. Below, I present some common criticisms and some of my own remarks.

(If you don’t want to read about “first world” objections – typically about “new” and “fast” – and are already satisfied by the decisions made regarding more understandable concerns – typically involving corporate behaviour and licensing – just skip to the last section.)

“The A20 is so old and slow! What’s the point?”

The Allwinner A20 has been around for a while. Indeed, its predecessor – the A10 – was the basis of initial iterations of the computer card several years ago. Now, the amount of engineering needed to upgrade the prototypes that were previously made to use the A10, so that they use the A20 instead, is minimal, at least in comparison to adopting another CPU (which would probably require a redesign of the circuit board for the card). And hardware prototyping is expensive, especially when unnecessary design changes have to be made, when they don’t always work out as expected, and when extra rounds of prototypes are then required to get the job done. For an initiative with a limited budget, the A20 makes a lot of sense because it means changing as little as possible, benefiting from the functionality upgrade and keeping the risks low.

Obviously, there are faster processors available now, but as the processor selection criteria illustrate, if you cannot support them properly with Free Software and must rely on binary blobs which potentially violate the GPL, it would be better to stick to a more sustainable choice (because that is what adherence to Free Software is largely about) even if that means accepting reduced performance. In any case, at some point, other cards with different processors will come along and offer faster performance. Alternatively, someone will make a dual-slot product that takes two cards (or even a multi-slot product that provides a kind of mini-cluster), and then with software that is hopefully better-equipped for concurrency, there will be alternative ways of improving performance besides finding faster processors and hoping that they meet all the practical and ethical criteria.

“The RasPi 3…”

Lots of people love the Raspberry Pi, it would seem. The original models delivered a cheap, adequate desktop computer for a sum that was competitive even with some microcontroller-based single-board computers that are aimed at electronics projects and not desktop computing, although people probably overlook rivals like the BeagleBoard and variants that would probably have occupied a similar price point even if the Raspberry Pi had never existed. Indeed, the BeagleBone Black resides in the same pricing territory now, as do many other products. It is interesting that both product families are backed by certain semiconductor manufacturers, and the Raspberry Pi appears to benefit from privileged access to Broadcom products and employees that is denied to others seeking to make solutions using the same SoC (system on a chip).

Now, the first Raspberry Pi models were not entirely at the performance level of contemporary desktop solutions, especially by having only 256MB or 512MB RAM, meaning that any desktop experience had to be optimised for the device. Furthermore, they employed an ARM architecture variant that was not fully supported by mainstream GNU/Linux distributions, in particular the one favoured by the initiative: Debian. So a variant of Debian has been concocted to support the devices – Raspbian – and despite the Raspberry Pi 2 being the first device in the series to employ an architecture variant that is fully supported by Debian, Raspbian is still recommended for it and its successor.

Anyway, the Raspberry Pi 3 having 1GB RAM and being several times faster than the earliest models might be more competitive with today’s desktop solutions, at least for modestly-priced products, and perhaps it is faster than products using the A20. But just like the fascination with MHz and GHz until Intel found that it couldn’t rely on routinely turning up the clock speed on its CPUs, or everybody emphasising the number of megapixels their digital camera had until they discovered image noise, such number games ignore other factors: the closed source hardware of the Raspberry Pi boards, the opaque architecture of the Broadcom SoCs with a closed source operating system running on the GPU (graphics processing unit) that has control over the ARM CPU running the user’s programs, the impracticality of repurposing the device for things like laptops (despite people attempting to repurpose it for such things, anyway), and the organisation behind the device seemingly being happy to promote a variety of unethical proprietary software from a variety of unethical vendors who clearly want a piece of the action.

And finally, with all the fuss about how much faster the opaque Broadcom product is than the A20, the Raspberry Pi 3 has half the RAM of the EOMA68-A20 computer card. For certain applications, more RAM is going to be much more helpful than more cores or “64-bit!”, which makes us wonder why the Raspberry Pi 3 doesn’t support 4GB RAM or more. (Indeed, the current trend of 64-bit ARM products offering memory quantities addressable by 32-bit CPUs seems to have missed the motivation for x86 finally going 64-bit back in the early 21st century, which was largely about efficiently supporting the increasingly necessary amounts of RAM required for certain computing tasks, with Intel’s name for x86-64 actually being at one time “Extended Memory 64 Technology”. Even the DEC Alpha, back in the 1990s, which could be regarded as heralding the 64-bit age in mainstream computing, and which arguably relied on the increased performance provided by a 64-bit architecture for its success, still supported 64-bit quantities of memory in delivered products when memory was obviously a lot more expensive than it is now.)

“But the RasPi Zero!”

Sure, who can argue with a $5 (or £4, or whatever) computer with 512MB RAM and a 1GHz CPU that might even be a usable size and shape for some level of repurposing for the kinds of things that EOMA68 aims at: putting a general purpose computer into a wide range of devices? Except that the Raspberry Pi Zero has had persistent availability issues, even ignoring the free give-away with a magazine that had people scuffling in newsagents to buy up all the available copies so they could resell them online at several times the retail price. And it could be perceived as yet another inventory-dumping exercise by Broadcom, given that it uses the same SoC as the original Raspberry Pi.

Arguably, the Raspberry Pi Zero is a more ambiguous follow-on from the Raspberry Pi Compute Module that obviously was (and maybe still is) intended for building into other products. Some people may wonder why the Compute Module wasn’t the same success as the earlier products in the Raspberry Pi line-up. Maybe its lack of success was because any organisation thinking of putting the Compute Module (or, these days, the Pi Zero) in a product to sell to other people is relying on a single vendor. And with that vendor itself relying on a single vendor with whom it currently has a special relationship, a chain of single vendor reliance is formed.

Any organisation wanting to build one of these boards into their product now has to have rather a lot of confidence that the chain will never weaken or break and that at no point will either of those vendors decide that they would rather like to compete in that particular market themselves and exploit their obvious dominance in doing so. And they have to be sure that the Raspberry Pi Foundation doesn’t suddenly decide to get out of the hardware business altogether and pursue those educational objectives that they once emphasised so much instead, or that the Foundation and its manufacturing partners don’t decide for some reason to cease doing business, perhaps selectively, with people building products around their boards.

“Allwinner are GPL violators and will never get my money!”

Sadly, Allwinner have repeatedly delivered GPL-licensed software without providing the corresponding source code, and this practice may even persist to this day. One response to this has referred to the internal politics and organisation of Allwinner and that some factions try to do the right thing while others act in an unenlightened, licence-violating fashion.

Let it be known that I am no fan of the argument that there are lots of departments in companies and that just because some do some bad things doesn’t mean that you should punish the whole company. To this day, Sony does not get my business because of the unsatisfactorily-resolved rootkit scandal and I am hardly alone in taking this position. (It gets brought up regularly on a photography site I tend to visit where tensions often run high between Sony fanatics and those who use cameras from other manufacturers, but to be fair, Sony also has other ways of irritating its customers.) And while people like to claim that Microsoft has changed and is nice to Free Software, even to the point where people refusing to accept this assertion get criticised, it is pretty difficult to accept claims of change and improvement when the company pulls in significant sums from shaking down device manufacturers using dubious patent claims on Android and Linux: systems it contributed nothing to. And no, nobody will have been reading any patents to figure out how to implement parts of Android or Linux, let alone any belonging to some company or other that Microsoft may have “vacuumed up” in an acquisition spree.

So, should the argument be discarded here as well? Even though I am not too happy about Allwinner’s behaviour, there is the consideration that, as the saying goes, “beggars cannot be choosers”. When very few CPUs exist that meet the criteria desirable for the initiative, some kind of nasty compromise may have to be made. Personally, I would have preferred to have had the option of the Ingenic jz4775 card that was close to being offered in the campaign, although I have seen signs of Ingenic doing binary-only code drops on certain code-sharing sites, and so they do not necessarily have clean hands, either. However, they do actually make the source code for such binaries available elsewhere, if you know where to look. Thus it is most likely that they do not really understand the precise obligations of the software licences concerned, as opposed to deliberately withholding the source code.

But it may well be that Chinese companies do not necessarily operate (or are regulated) on similar principles to certain European, American and Japanese companies, for whom the familiar regime of corporate accountability allows us to judge a company on any wrongdoing: any executives unaware of such wrongdoing have been negligent or ineffective at building the proper processes of supervision and have thus permitted an unethical corporate culture, and any executives aware of such wrongdoing have arguably cultivated an unethical corporate culture themselves. That does not excuse unethical behaviour, but it might at least entertain the idea that by supporting an ethical faction within a company, the unethical factions may be weakened or even eliminated. If that really is how the game is played, of course, and is not just an excuse for finger-pointing where nobody is held to account for anything.

But companies elsewhere should certainly not be looking for a weakening of their accountability structures so as to maintain a similarly convenient situation of corporate hypocrisy: if Sony BMG does something unethical, Sony Imaging should take the bad with the good when they share and exploit the Sony brand; people cannot have things both ways. And Chinese companies should comply with international governance principles, if only to reassure their investors that nasty surprises (and liabilities) do not lie in wait because parts of such businesses were poorly supervised and not held accountable for any unethical activities taking place.

It is up to everyone to make their own decision about this. The policy of the campaign is that the A20 can be supported by Free Software without needing any proprietary software, does not rely on any Allwinner-engineered, licence-violating software (which might be perceived as a good thing), and is merely the first step into a wider endeavour that could be conveniently undertaken with the limited resources available at the time. Later computer cards may ignore Allwinner entirely, especially if the company does not clean up its act, but such cards may never get made if the campaign fails and the wider endeavour never even begins in earnest.

(And I sincerely hope that those who are apparently so outraged by GPL violations actually support organisations seeking to educate and correct companies who commit such violations.)

“You could buy a top-end laptop for that price!”

Sure you could. But this isn’t about a crowd-funding campaign trying to magically compete with an optimised production process that turns out millions of units every year backed by a multi-billion-dollar corporation. It is about highlighting the possibilities of more scalable (down to the economically-viable manufacture of a single unit), more sustainable device design and construction. And by the way, that laptop you were talking about won’t be upgradeable, so when you tire of its performance or if the battery loses its capacity, I suppose you will be disposing of it (hopefully responsibly) and presumably buying something similarly new and shiny by today’s measures.

Meanwhile, with EOMA68, the computing part of the supposedly overpriced laptop will be upgradeable, and with sensible device design the battery (and maybe other things) will be replaceable, too. Over time, EOMA68 solutions should be competitive on price, anyway, because larger numbers of them will be produced, but unlike traditional products, the increased usable lifespans of EOMA68 solutions will also offer longer-term savings to their purchasers.

“You could just buy a used laptop instead!”

Sure you could. At some point you will need to be buying a very old laptop just to have a CPU without a surveillance engine and offering some level of upgrade potential, although the specification might be disappointing to you. Even worse, things don’t last forever, particularly batteries and certain kinds of electronic components. Replacing those things may well be a challenge, and although it is worthwhile to make sure things get reused rather than immediately discarded, you can’t rely on picking up a particular product in the second-hand market forever. And relying on sourcing second-hand items is very much a limited-quantity strategy, whereas the EOMA68 initiative is meant to be concerned with reliably producing widely-available products.

“Why pay more for ideological purity?”

Firstly, words like “ideology”, “religion”, “church”, and so on, might be useful terms for trolls to poison and polarise any discussion, but does anyone not see that expecting suspiciously cheap, increasingly capable products to be delivered in an almost conveyor belt fashion is itself subscribing to an ideology? One that mandates that resources should be procured at the minimum cost and processed and assembled at the minimum cost, preferably without knowing too much about the human rights abuses at each step. Where everybody involved is threatened that at any time their role may be taken over by someone offering the same thing for less. And where a culture of exploitation towards those doing the work grows, perpetuating increasing wealth inequality because those offering the services in question will just lean harder on the workers to meet their cost target (while they skim off “their share” for having facilitated the deal). Meanwhile, no-one buying the product wants to know “how the sausage is made”. That sounds like an ideology to me: one of neoliberalism combined with feigned ignorance of the damage it does.

Anyway, people pay for more sustainable, more ethical products all the time. While the wilfully ignorant may jeer that they could just buy what they regard as the same thing for less (usually being unaware of factors like quality, never mind how these things get made), more sensible people see that the extra they pay provides the basis for a fairer, better society and higher-quality goods.

“There’s no point to such modularity!”

People argue this alongside the assertion that systems are easy to upgrade and that they can independently upgrade the RAM and CPU in their desktop tower system or whatever, although they usually start off by talking about laptops – but clearly not the kind of “welded shut” laptops that they or maybe others would apparently prefer to buy (see above). But systems are getting harder to upgrade, particularly portable systems like laptops, tablets and smartphones (with the Fairphone 2 being a rare exception in that it might be upgradeable), and even upgradeable systems are not typically upgraded by most end-users: they may only manage to do so by enlisting the help of more knowledgeable relatives and friends.

I use a 32-bit system that is over 11 years old. It could have more RAM, and I could do the job of upgrading it, but guess how much I would be upgrading it to: 2GB, which is as much as is supported by the two prototyped 32-bit architecture EOMA68 computer card designs (A20 and jz4775). Only certain 32-bit systems actually support more RAM, mostly because it requires the use of relatively exotic architectural features that a lot of software doesn’t support. As for the CPU, there is no sensible upgrade path even if I were sure that I could remove the CPU without causing damage to it or the board. Now, 64-bit systems might offer more options, and in upgradeable desktop systems more RAM might be added, but it still relies on what the chipset was designed to support. Some chipsets may limit upgrades based on either manufacturer pessimism (no-one will be able to afford larger amounts in the near future) or manufacturer cynicism (no-one will upgrade to our next product if they can keep adding more RAM).

EOMA68 makes a trade-off in order to support the upgrading of devices in a way that should be accessible to people who are not experts: no-one should be dealing with circuit boards and memory modules. People who think hardware engineering has nothing to do with compromises should get out of their armchair, join one of the big corporations already doing hardware, and show them how it is done, because I am sure those companies would appreciate such market-dominating insight.

An EOMA68 computer card with the micro-desktop device (courtesy Rhombus Tech/Crowd Supply)

Back to the Campaign

But really, the criticisms are not the things to focus on here. Maybe EOMA68 was interesting to you and then you read one of these criticisms somewhere and started to wonder about whether it is a good idea to support the initiative after all. Now, at least you have another perspective on them, albeit from someone who actually believes that EOMA68 provides an interesting and credible way forward for sustainable technology products.

Certainly, this campaign is not for everyone. Above all else it is crowd-funding: you are pledging for rewards, not buying things, even though the aim is to actually manufacture and ship real products to those who have pledged for them. Some crowd-funding exercises never deliver anything because they underestimate the difficulties of doing so, leaving a horde of angry backers with nothing to show for their money. I cannot make any guarantees here, but given that prototypes have been made over the last few years, that videos have been produced with a charming informality that would surely leave no-one seriously believing that “the whole thing was just rendered” (which tends to happen a lot with other campaigns), and given the initiative founder’s stubbornness not to give up, I have a lot of confidence in him to make good on his plans.

(A lot of campaigns underestimate the logistics and, having managed to deliver a complicated technological product, fail to manage the apparently simple matter of “postage”, infuriating their backers by being unable to get packages sent to all the different countries involved. My impression is that logistics expertise is what Crowd Supply brings to the table, and it really surprises me that established freight and logistics companies aren’t dipping their toes in the crowd-funding market themselves, either by running their own services or taking ownership stakes and integrating their services into such businesses.)

Personally, I think that $65 for a computer card that actually has more RAM than most single-board computers is actually a reasonable price, but I can understand that some of the other rewards seem a bit more expensive than one might have hoped. But these are effectively “limited edition” prices, and the aim of the exercise is not to merely make some things, get them for the clique of backers, and then never do anything like this ever again. Rather, the aim is to demonstrate that such products can be delivered, develop a market for them where the quantities involved will be greater, and thus be able to increase the competitiveness of the pricing, iterating on this hopefully successful formula. People are backing a standard and a concept, with the benefit of actually getting some hardware in return.

Interestingly, one priority of the campaign has been to seek the FSF’s “Respects Your Freedom” (RYF) endorsement. There is already plenty of hardware that employs proprietary software at some level, leaving the user to merely wonder what some “binary blob” actually does. Here, with one of the software distributions for the computer card, all of the software used on the card and the policies of the GNU/Linux distribution concerned – a surprisingly awkward obstacle – will seek to meet the FSF’s criteria. Thus, the “Libre Tea” card will hopefully be one of the first general purpose computing solutions to actually be designed for RYF certification and to obtain it, too.

The campaign runs until August 26th and has over a thousand pledges. If nothing else, go and take a look at the details and the updates, with the latter providing lots of background including video evidence of how the software offerings have evolved over the course of the campaign. And even if it’s not for you, maybe people you know might appreciate hearing about it, even if only to follow the action and to see how crowd-funding campaigns are done.

imip-agent: Integrating Calendaring with E-Mail

Tuesday, November 17th, 2015

Longer ago than I had, until now, realised, I wrote an article about my ongoing exploration of groupware and, specifically, calendaring. As I noted in that article, I felt that a broader range of options may be needed for those wishing to expand their use of communications technologies beyond plain e-mail and into the structured exchange of other kinds of information, whilst retaining and building upon that e-mail infrastructure.

And I noted that more often than not, people wanting to increase their ambitions in this regard are often confronted with the prospect of abandoning what they already use successfully, instead being obliged to adopt a complete package of technologies, some of which they may not even need. While proprietary software and service vendors might pursue such strategies of persuasion – getting the big sale or the big contract – it is baffling that Free Software projects might also put potential users on the spot in the same way. After all, Free Software is very much about choice and control.

So, as I spelled out in that previous article, there may be some mileage in trying to offer extensions to existing infrastructure so that people can increase their communications capabilities whilst retaining the technologies they already know. And in some depth (and at some length), I described what a mail-centred calendaring solution might need to provide in order to address most people’s needs. Finally, I promised to make my own efforts available in this area so that anyone remotely interested in the topic might get some benefit from it.

Last month, I started a very brief exchange on a Debian- and groupware-related mailing list about such matters, just to see what people interested in groupware projects might think, also attempting to find out what they use for calendaring themselves. (Unfortunately, there don’t seem to be many non-product-specific, public and open places to discuss matters such as this one. Search mail software lists for calendaring discussions and you may even get to see hostility towards anyone mentioning groupware.) Ultimately, to keep the discussion concrete, I decided to announce informally what I have been working on.

Introducing imip-agent

Calendaring and distributed scheduling can be achieved over e-mail using the iMIP standard. My work relies on this standard to function, providing programs that are integrated in mail transfer agents (MTAs) acting as calendaring agents. Thus, I decided to call the project imip-agent.
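
For those unfamiliar with iMIP, it may help to see just how ordinary such messages are: a scheduling request is a normal e-mail message carrying a text/calendar part whose METHOD parameter announces its purpose. The following sketch, which uses only Python’s standard library and entirely hypothetical addresses, times and identifiers, is not imip-agent’s actual code, but it shows the general shape of such a message.

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# A hypothetical invitation; the UID, addresses and times are placeholders.
ICAL_LINES = [
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//example//imip-sketch//EN",
    "METHOD:REQUEST",
    "BEGIN:VEVENT",
    "UID:20151117T100000-1234@example.com",
    "DTSTAMP:20151117T100000Z",
    "DTSTART:20151124T130000Z",
    "DTEND:20151124T140000Z",
    "SUMMARY:Project meeting",
    "ORGANIZER:mailto:organiser@example.com",
    "ATTENDEE;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:attendee@example.com",
    "END:VEVENT",
    "END:VCALENDAR",
    ]
# iCalendar content formally uses CRLF line endings.
ICAL = "\r\n".join(ICAL_LINES) + "\r\n"

msg = MIMEMultipart("mixed")
msg["From"] = "organiser@example.com"
msg["To"] = "attendee@example.com"
msg["Subject"] = "Invitation: Project meeting"

# A plain text part for mail clients that do not understand calendar data.
msg.attach(MIMEText("You are invited to a project meeting.", "plain"))

# The calendar part: METHOD=REQUEST marks this as a scheduling request.
calendar_part = MIMEText(ICAL, "calendar", "utf-8")
calendar_part.set_param("method", "REQUEST")
msg.attach(calendar_part)

print(msg.as_string())

A reply from the attendee has the same shape, with METHOD:REPLY and an updated participation status, and it is precisely such replies that a mail-based agent can use to keep schedules up to date.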

Initially, and as noted previously, my interest in such matters started with the mail handling functionality of Kolab and the component called Wallace that is responsible for responding to requests sent to certain e-mail addresses. Meanwhile, Kolab provided (and maybe still provides) a rather inelegant way of preparing “free/busy” information describing the availability of calendar system participants: a daemon program would run periodically, scanning mailboxes for events stored in special folders, and generate completely new manifests of each user’s schedule. (This may have changed since I last looked at Kolab in any serious manner.)

It occurred to me that the exchange of messages between participants in a scheduling transaction should be sufficient to maintain a live record of each participant’s availability, and that some experimentation would demonstrate the feasibility or infeasibility of such an approach. I had already looked into how existing architectures prepare and consume free/busy information, and felt that I had enumerated the relevant essentials for a viable calendaring architecture based on e-mail exchanges alone.
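
To illustrate that point in the simplest possible terms, here is a sketch (with a made-up in-memory store and none of the real complications such as cancellations, updates, counter-proposals or time zones) of how each processed scheduling message could contribute a busy period, and how a participant’s free/busy information could then be regenerated from those records alone, without any scanning of mailboxes.

# Participant address -> list of (start, end) pairs in iCalendar UTC form.
# A real agent would persist this; a dictionary keeps the sketch short.
FREEBUSY = {}

def record_period(participant, dtstart, dtend):
    """Remember a busy period learned from a processed scheduling message."""
    FREEBUSY.setdefault(participant, []).append((dtstart, dtend))

def make_vfreebusy(participant):
    """Render the participant's accumulated busy periods as iCalendar text."""
    lines = ["BEGIN:VFREEBUSY", "ORGANIZER:mailto:%s" % participant]
    for dtstart, dtend in sorted(FREEBUSY.get(participant, [])):
        lines.append("FREEBUSY:%s/%s" % (dtstart, dtend))
    lines.append("END:VFREEBUSY")
    return "\r\n".join(lines) + "\r\n"

record_period("attendee@example.com", "20151124T130000Z", "20151124T140000Z")
print(make_vfreebusy("attendee@example.com"))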

And so I set about learning about mail handling programs and expanding my existing knowledge of calendar-related standards. Fortunately, my work trying to get Kolab configured in a nice way didn’t go entirely to waste after all, although I also wanted to support different MTAs and not use convoluted Postfix-specific integration mechanisms, and so had to read up about more convenient and approachable mechanisms that other systems use to integrate with mail pipelines without trying hard to be all “high performance” about it. And I also wanted to make it possible for people to adopt a solution that didn’t force them to roll out LDAP in a scary “cross your fingers and run this script” fashion, even if many organisations already rely on LDAP and are comfortable with it.
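
The “pipe a message into a script” style of integration mentioned above is worth a brief illustration, since it is what makes this kind of agent approachable: the MTA hands each incoming message to a program on standard input. The sketch below, again not imip-agent’s actual code and using only the standard library, merely extracts any text/calendar parts for further processing.

import email
import sys

def extract_calendar_parts(msg):
    """Yield the decoded text of each text/calendar part in a message."""
    for part in msg.walk():
        if part.get_content_type() == "text/calendar":
            payload = part.get_payload(decode=True)
            charset = part.get_content_charset() or "utf-8"
            yield payload.decode(charset)

if __name__ == "__main__":
    # The MTA delivers the complete message on standard input.
    message = email.message_from_binary_file(sys.stdin.buffer)
    for calendar_text in extract_calendar_parts(message):
        # A real agent would interpret the calendar data here, updating the
        # recipient's stored schedule and free/busy information accordingly.
        sys.stderr.write(calendar_text)

With a sendmail-style aliases file, for example, such a script could be attached to an address with an entry along the lines of calendar: "|/usr/local/bin/handle-imip" (the path being purely illustrative); Exim and other MTAs offer their own, broadly similar, pipe mechanisms.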

The resulting description of this work is now available on the Web, and an attempt has been made to document the many different aspects of development, deployment and integration. Naturally, it is a work in progress and not a finished product: one step on the road to hopefully becoming a dependable solution involves packaging for Free Software distributions, which would result in the effort currently required to configure the software being minimised for the person setting it up. But at the same time, the mechanisms for integration with other systems (such as mail, mailboxes and Web servers) still need to be documented so that such work may have a chance to proceed.

Why Bother?

For various reasons unrelated to the work itself, it has taken a bit longer to get to this point than previously anticipated. But the act of making it available is, for me, a very necessary part of what I regard as a contribution to a kind of conversation about what kinds of software and solutions might work for certain groups of people, touching upon topics like how such solutions might be developed and realised. For instance, the handling of calendar data, although already supported by various Python libraries, hasn’t really led to similar Python-based solutions being developed as far as I can tell. Perhaps my contribution can act as an encouragement there.

There are, of course, various Python-based CalDAV servers, but I regard the projects around them to be somewhat opaque, and I perceive a common tendency amongst them to provide something resembling a product that covers some specific needs but then leaves those people deploying that product with numerous open-ended questions about how they might address related needs. I also wonder whether there should be more library sharing between these projects for more than basic data interpretation, but I know that this is quite difficult to achieve in practice, even if these projects should be largely functionally identical.

With such things forming the background of Free Software groupware, I can understand why some organisations are pitching complete solutions that aim to do many things. But here, in certain regards, I perceive a lack of opportunity for that conversation I mentioned above: there’s either a monologue with the insinuation that some parties know better than others (or worse, that they have the magic formula to total market domination) or there’s a dialogue with one side not really extending the courtesy of taking the other side’s views or contributions seriously.

And it is clear that those wanting to use such solutions should also be part of a conversation about what, in the end, should work best for them. Now, it is possible that organisations might see the benefit in the incremental approach to improving their systems and services that imip-agent offers. But it is also possible that there are organisations who will contrast imip-agent with a selection of all-in-one solutions, possibly being dangled in front of them on special terms by vendors who just want to “close the deal”, and in the comparison shopping exercise that ensues, they will buy into the sales pitch of one of those vendors.

Without a concerted education exercise, that latter group of potential users are never likely to be a serious participant in our conversations (although I would hope that they might ultimately see sense), but the former group of potential users should be most welcome to participate in our conversations and thus enrich the wealth of choices and options that we should be offering. They would, I hope, realise that it is not about what they can get out of other people for nothing (or next to nothing), but instead what expertise and guidance they can contribute so that they and others can benefit from a sustainable and durable solution that, above all else, serves them and their needs and interests.

What Next?

Some people might point out that calendaring is only a small portion of what groupware is, if the latter term can even be somewhat accurately defined. This is indeed true. I would like to think that Free Software projects in other domains might enter the picture here to offer a compelling, broader groupware alternative. For instance, despite the apparent focus on chat and real-time communications, one doesn’t hear too much about one of the most popular groupware technologies on the Web today: the wiki. When used effectively, and when the dated rhetoric about wikis being equivalent to anarchy has been silenced by demonstrating effective collaborative editing and content management techniques, a wiki can be a potent tool for collaboration and collective information management.

It also turns out that Free Software calendar clients could do with some improvement. Their deficiencies may be a product of an unfortunate but fashionable fascination with proprietary mail, scheduling and social networking services amongst the community of people who use and develop Free Software. Once again, even though imip-agent seeks to provide only basic functionality as a calendar client, I hope that such functionality may inform or, at the very least, inspire developers to improve existing programs and bring them up to the expected levels of functionality.

Alongside this work, I have other things I want (and need) to be looking at, but I will happily entertain enquiries about how it might be developed further or deployed. It is, after all, Free Software, and given sufficient interest, it should be developed and improved in a collaborative fashion. There are some future plans for it that I take rather seriously, but with the privileges or freedoms granted in the licence, there is nothing stopping it from having a life of its own from now on.

So, if you are interested in this kind of solution and want to know more about it, take a look at the imip-agent site. If nothing else, I hope that it reminds you of the importance of independently-developed solutions for communication and the value in retaining control of the software and systems you rely on for that communication.

The BBC Micro and the BBC Micro Bit

Sunday, March 22nd, 2015

At least on certain parts of the Internet as well as in other channels, there has been a degree of excitement about the announcement by the BBC of a computing device called the “Micro Bit”, with the BBC’s plan to give one of these devices to each child starting secondary school, presumably in September 2015, attracting particular attention amongst technology observers and television licence fee-payers alike. Details of the device are a little vague at the moment, but the announcement along with discussions of the role of the corporation and previous initiatives of this nature provides me with an opportunity to look back at the original BBC Microcomputer, evaluate some of the criticisms (and myths) around the associated Computer Literacy Project, and to consider the things that were done right and wrong, with the latter hopefully not about to be repeated in this latest endeavour.

As the public record reveals, at the start of the 1980s, the BBC wanted to engage its audience beyond television programmes describing the growing microcomputer revolution, and it was decided that to do this and to increase computer literacy generally, it would need to be able to demonstrate various concepts and technologies on a platform that would be able to support the range of activities to which computers were being put. Naturally, a demanding specification was constructed – clearly, the scope of microcomputing was increasing rapidly, and there was a lot to demonstrate – and various manufacturers were invited to deliver products that could be used as this reference platform. History indicates that a certain amount of acrimony followed – a complete description of which could fill an entire article of its own – but ultimately only Acorn Computers managed to deliver a machine that could do what the corporation was asking for.

An Ambitious Specification

It is worth considering what the BBC Micro was offering in 1981, especially in the face of ill-informed criticism of the machine’s specifications by people who either preferred other systems or felt that participating in the development of such a machine was none of the corporation’s business. The technologies to be showcased by the BBC’s programme-makers and supported by the materials and software developed for the machine included full-colour graphics, multi-channel sound, 80-column text, Viewdata/Teletext, cassette and diskette storage, local area networking, interfacing to printers, joysticks and other input/output devices, as well as to things like robots and user-developed devices. Although it is easy to pick out one or two of these requirements, move forwards a year or two, increase the budget two- or three-fold, or do some combination of these things, and then to nominate various other computers, there really were few existing systems that could deliver all of the above, at least at an affordable price at the time.

Some microcomputers of the early 1980s
Computer | RAM | Text | Graphics | Year | Price
Apple II Plus | Up to 64K | 40 x 25 (upper case only) | 280 x 192 (6 colours), 40 x 48 (16 colours) | 1979 | £1500 or more
Commodore PET 4032/8032 | 32K | 40/80 x 25 | Graphics characters (2 colours) | 1980 | £800 (4032), £1030 (8032) (including monochrome monitor)
Commodore VIC-20 | 5K | 22 x 23 | 176 x 184 (8 colours) | 1980 (1981 outside Japan) | £199
IBM PC (Model 5150) | 16K up to 256K | 40/80 x 25 | 640 x 200 (2 colours), 320 x 200 (4 colours) | 1981 | £1736 (including monochrome monitor, presumably with 16K or 64K)
BBC Micro (Model B) | 32K | 80/40/20 x 32/24, Teletext | 640 x 256 (2 colours), 320 x 256 (2/4 colours), 160 x 256 (4/8 colours) | 1981 | £399 (originally £335)
Research Machines LINK 480Z | 64K (expandable to 256K) | 40 x 24 (optional 80 x 24) | 160 x 72, 80 x 72 (2 colours); expandable to 640 x 192 (2 colours), 320 x 192 (4 colours), 190 x 96 (8 colours or 16 shades) | 1981 | £818
ZX Spectrum | 16K or 48K | 32 x 24 | 256 x 192 (16 colours applied using attributes) | 1982 | £125 (16K), £175 (48K)
Commodore 64 | 64K | 40 x 25 | 320 x 200 (16 colours applied using attributes) | 1982 | £399

Perhaps the closest competitor, already being used in a fairly limited fashion in educational establishments in the UK, was the Commodore PET. However, it is clear that despite the adaptability of that system, its display capabilities were becoming increasingly uncompetitive, and Commodore had chosen to focus on the chipsets that would power the VIC-20 and Commodore 64 instead. (The designer of the PET went on to make the very capable, and understandably more expensive, Victor 9000/Sirius 1.) Apple products were notoriously expensive and, indeed, the target of Commodore’s aggressive advertising, but this did not prevent them from capturing the US education market from the PET; in the UK, however, they remained severely uncompetitive, as commentators of the time noted.

Later, the ZX Spectrum and Commodore 64 were released. Technology was progressing rapidly, and in hindsight one might have advocated waiting around until more capable and cheaper products came to market. However, it can be argued that a suitable alternative fulfilling virtually all aspects of the ambitious specification at a comparable price would not have become available until the release of the Amstrad CPC series in 1984. Even then, these Amstrad computers actually benefited from the experience accumulated in the UK computing industry following the introduction of the BBC Micro: they were, if anything, an iteration within the same generation of microcomputers and would even have used the same 6502 CPU as the BBC Micro had it not been for time-to-market pressures and the readily-available expertise with the Zilog Z80 CPU amongst those in the development team. And yet, specific aspects of the specification would still have gone unfulfilled: the BBC Micro had hardware support for Teletext displays, whereas such machines would have had to emulate them with a bitmapped display and suitable software.

Arise Sir Clive

Much has been made of the disappointment of Sir Clive Sinclair that his computers were not adopted by the BBC as products to be endorsed and targeted at schools. Sinclair made his name developing products that were competitive on price, often seeking cost-reduction measures to reach attractive pricing levels, but such measures also served to make his products less desirable. Reviews of microcomputers from the early 1980s frequently make explicit mention of the quality of the keyboard provided by the computers being reviewed: a “typewriter” keyboard with keys that “travel” appears to have been much preferred over the “calculator” keyboards provided by computers like the ZX Spectrum, Oric 1 or Newbury NewBrain, and both appear to have been vastly preferred over the “membrane” keyboards employed by the ZX80, ZX81 and Atari 400.

For target audiences in education, business, and in the home, it would have been inconceivable to promote a product with anything less than a “proper” keyboard. Ultimately, the world had to wait until the ZX Spectrum +2 released in 1986 for a Spectrum with such a keyboard, and that occurred only after the product line had been acquired by Amstrad. (One might also consider the ZX Spectrum+ in 1984, but its keyboard was more of a hybrid of the calculator keyboard that had been used before and the “full-travel” keyboards provided by its competitors.)

Some people claim that they owe nothing to the BBC Micro and everything to the ZX Spectrum (or, indeed, the computer they happened to own) for their careers in computing. Certainly, the BBC Micro was an expensive purchase for many people, although contrary to popular assertion it was not any more expensive than the Commodore 64 upon that computer’s introduction in the UK, and for those of us who wanted BBC compatibility at home on a more reasonable budget, the Acorn Electron was really the only other choice. But it would be as childish as the playground tribalism that had everyone insist that their computer was “the best” to insist that the BBC Micro had no influence on computer literacy in general, or on the expectations of what computer systems should provide. Many people who owned a ZX Spectrum freely admit that the BBC Micro coloured their experiences, some even subsequently seeking to buy one, or one of its successors, and going on to build successful software development careers.

The Costly IBM PC

Some commentators seem to consider the BBC Micro as having been an unnecessary diversion from the widespread adoption of the IBM PC throughout British society. As was the case everywhere else, the de-facto “industry standard” of the PC architecture and DOS captured much of the business market and gradually invaded the education sector from the top down, although significantly better products existed both before and after its introduction. It is tempting with hindsight to believe that by placing an early bet on the success of the then-new PC architecture, business and education could have somehow benefited from standardising on the PC and DOS applications. And there has always been the persistent misguided belief amongst some people that schools should be training their pupils/students for a narrow version of “the world of work”, as opposed to educating them to be able to deal with all aspects of their lives once their school days are over.

What many people forget or fail to realise is that the early 1980s witnessed rapid technological improvement in microcomputing, that there were many different systems and platforms, some already successful and established (such as CP/M), and others arriving to disrupt ideas of what computing should be like (the Xerox Alto and Star having paved the way for the Apple Lisa and Macintosh, the Atari ST, and so on). It was not clear that the IBM PC would be successful at all: IBM had largely avoided embracing personal computing, and although the system was favourably reviewed and seen as having the potential for success, thanks to IBM’s extensive sales organisation, other giants of the mainframe and minicomputing era such as DEC and HP were pursuing their own personal computing strategies. Moreover, existing personal computers were becoming entrenched in certain markets, and early adopters were building a familiarity with those existing machines that was reflected in publications and materials available at the time.

Despite the technical advantages of the IBM PC over much of the competition at the beginning of the 1980s, it was also substantially more expensive than the mass-market products arriving in significant numbers, aimed at homes, schools and small businesses. With many people remaining intrigued but unconvinced by the need for a personal computer, it would have been impossible for a school to justify spending almost £2000 (probably around £8000 today) on something without proven educational value. Software would also need to be purchased, and the procurement of expensive and potentially non-localised products would have created even more controversy.

Ultimately, the Computer Literacy Project stimulated the production of a wide range of locally-produced products at relatively inexpensive prices, and while there may have been a few years of children learning BBC BASIC instead of one of the variants of BASIC for the IBM PC (before BASIC became a deprecated aspect of DOS-based computing), it is hard to argue that those children missed out on any valuable experience using DOS commands or specific DOS-based products, especially since DOS became a largely forgotten environment itself as technological progress introduced new paradigms and products, making “hard-wired”, product-specific experience obsolete.

The Good and the Bad

Not everything about the BBC Micro and its introduction can be considered unconditionally good. Choices needed to be made to deliver a product that could fulfil the desired specification within certain technological constraints. Some people like to criticise BBC BASIC as being “non-standard”, for example, which neglects the diversity of BASIC dialects that existed at the dawn of the 1980s. Typically, for such people “standard” equates to “Microsoft”, but back then Microsoft BASIC was a number of different things. Commodore famously paid a one-off licence fee to use Microsoft BASIC in its products, but the version for the Commodore 64 was regarded as lacking user-friendly support for graphics primitives and other interesting hardware features. Meanwhile, the MSX range of microcomputers featured Microsoft Extended BASIC, which did provide convenient access to hardware features, although the range was not the success at the low end of the market that Microsoft had probably desired to complement its increasing influence at the higher end through the IBM PC. And it is informative in this regard to see just how many variants of Microsoft BASIC were produced, thanks to Microsoft’s widespread licensing of its software.

Nevertheless, the availability of one company’s products does not make a standard, particularly if interoperability between those products is limited. Neither BBC BASIC nor Microsoft BASIC can be regarded as anything other than de-facto standards in their own territories, and it is nonsensical to regard one as non-standard when the other has largely the same characteristics: a proprietary product in widespread use, even if it was licensed to others, as indeed both Microsoft BASIC and BBC BASIC were. Genuine attempts to standardise BASIC did indeed exist, notably BASICODE, which was used in the distribution of programs via public radio broadcasts. One suspects that people making casual remarks about standard and non-standard things remain unaware of such initiatives. Meanwhile, Acorn did deliver implementations of other standards-driven programming languages such as COMAL, Pascal, Logo, Lisp and Forth, largely adhering to any standards subject to the limitations of the hardware.

However, what undermined the BBC Micro and Acorn’s related initiatives over time was the control that they as a single vendor had over the platform and its technologies. At the time, a “winner takes all” mentality prevailed: Commodore under Jack Tramiel had declared a “price war” on other vendors and had caused difficulties for new and established manufacturers alike, with Atari eventually being sold to Tramiel (who had resigned from Commodore) by Warner Communications, but many companies disappeared or were absorbed by others before half of the decade had passed. Indeed, Acorn, who had released the Electron to compete with Sinclair Research at the lower end of the market, and who had been developing product lines to compete in the business sector, experienced financial difficulties and was ultimately taken over by Olivetti; Sinclair, meanwhile, experienced similar difficulties and was acquired by Amstrad. In such a climate, ideas of collaboration seemed far from everybody’s minds.

Since then, the protagonists of the era have been able to reflect on such matters, Acorn co-founder Hermann Hauser admitting that it may have been better to license Acorn’s Econet local area networking technology to interested competitors like Commodore. Although the sentiments might have something to do with revenues and influence – it was at Acorn that the ARM processor was developed, sowing the seeds of a successful licensing business today – the rest of us may well ask what might have happened had the market’s participants of the era cooperated on things like standards and interoperability, helping their customers to protect their investments in technology, and building a bigger “common” market for third-party products. What if they had competed on bringing technological improvements to market without demanding that people abandon their existing purchases (and cause confusion amongst their users) just because those people happened to already be using products from a different vendor? It is interesting to see the range of available BBC BASIC implementations and to consider a world where such building blocks could have been adopted by different manufacturers, providing a common, heterogeneous platform built on cooperation and standards, not the imposition of a single hardware or software platform.

But That Was Then

Back then, as Richard Stallman points out, proprietary software was the norm. It would have been even more interesting had the operating systems and the available software for microcomputers been Free Software, but that may have been asking too much at the time. And although computer designs were often shared and published, a tendency to prevent copying of commercial computer designs prevailed, with Acorn and Sinclair both employing proprietary integrated circuits mostly to reduce complexity and increase performance, but partly to obfuscate their hardware designs, too. Thus, it may have been too much to expect something like the BBC Micro to have been open hardware to any degree “back in the day”, although circuit diagrams were published in publicly-available technical documentation.

But we have different expectations now. We expect software to be freely available for inspection, modification and redistribution, knowing that this empowers the end-users and reassures them that the software does what they want it to do, and that they remain in control of their computing environment. Increasingly, we also expect hardware to exhibit the same characteristics, perhaps only accepting that some components are particularly difficult to manufacture and that there are physical and economic restrictions on how readily we may practise the modification and redistribution of a particular device. Crucially, we demand control over the software and hardware we use, and we reject attempts to prevent us from exercising that control.

The big lesson to be learned from the early 1980s, to be considered now in the mid-2010s, is not how to avoid upsetting a popular (but ultimately doomed) participant in the computing industry, as some commentators might have everybody believe. It is to avoid developing proprietary solutions that favour specific organisations and that, despite the general benefits of increased access to technology, ultimately disempower the end-user. And in this era of readily available Free Software and open hardware platforms, the lesson to be learned is to strengthen such existing platforms and to work with them, letting those products and solutions participate and interoperate with the newly-introduced initiative in question.

The BBC Micro was a product of its time and its development was very necessary to fill an educational need. Contrary to the laziest of reports, the Micro Bit plays a different role as an accessory rather than as a complete home computer, at least if we may interpret the apparent intentions of its creators. But as a product of this era, our expectations for the Micro Bit are greater: we expect transparency and interoperability, the ability to make our own (just as one can with the Arduino, as long as one does not seek to call it an Arduino without asking permission from the trademark owner), and the ability to control exactly how it works. Whether there is a need to develop a completely new hardware solution remains an unanswered question, but we may assert that should it be necessary, such a solution should be made available as open hardware to the maximum possible extent. And of course, the software controlling it should be Free Software.

As we edge gradually closer to September and the big deployment, it will be interesting to assess how the device and the associated initiative measure up to our expectations. Let us hope that the right lessons from the days of the BBC Micro have indeed been learned!

Making the Best of a Bad Deal

Wednesday, January 7th, 2015

I had the opportunity over the holidays to browse the January 2015 issue of “Which?” – the magazine of the Consumers’ Association in Britain – which, amongst other things, covered the topic of “technology ecosystems“. Which? has a somewhat patchy record when technology matters are taken into consideration: on the one hand, reviews consider the practical and often mundane aspects of gadgets such as battery life, screen brightness, and so on, continuing their tradition of giving all sorts of items a once over; on the other hand, issues such as platform choice and interoperability are typically neglected.

Which? is very much pitched at the “empowered consumer” – someone who is looking for a “good deal” and reassurances about an impending purchase – and so the overriding attitude is one that is often in evidence in consumer societies like Britain: what’s in it for me? In other words, what goodies will the sellers give me to persuade me to choose them over their competitors? (And aren’t I lucky that these nice companies are throwing offers at me, trying to win my custom?) A treatment of ecosystems should therefore be somewhat interesting reading because through the mere use of the term “ecosystem” it acknowledges that alongside the usual incentives and benefits that the readership is so keen to hear about, there are choices and commitments to be made, with potentially negative consequences if one settles in the wrong ecosystem. (Especially if others are hell-bent on destroying competing ecosystems in a “war” as former Nokia CEO Stephen Elop – now back at Microsoft, having engineered the sale of a large chunk of Nokia to, of course, Microsoft – famously threatened in another poor choice of imagery as part of what must be one of the most insensitively-formulated corporate messages of recent years.)

Perhaps due to the formula behind such articles in Which? and similar arenas, some space is used to describe the benefits of committing to an ecosystem. Above the “expert view” describing the hassles of switching from a Windows phone to an Android one, the title tells us that “convenience counts for a lot”. But the article does cover problems with the availability of applications and services depending on the platform chosen, and even the matter of having to repeatedly buy access to content comes up, albeit with a disappointing lack of indignation for a topic that surely challenges basic consumer rights. The conclusion is that consumers should try and keep their options open when choosing which services to use. Sensible and uncontroversial enough, really.

The Consequences of Apathy

But sadly, Which? is once again caught in a position of reacting to technology industry change and the resulting symptoms of a deeper malaise. When reviewing computers over the years, the magazine (and presumably its sister publications) always treated the matter of platform choice with a focus on “PCs and Macs” exclusively, with the latter apparently being “the alternative” (presumably in a feeble attempt to demonstrate a coverage of choice that happens to exist in only two flavours). The editors would most likely protest that they can only cover the options most widely available to buy in a big-name store and that any lack of availability of a particular solution – say, GNU/Linux or one of the freely available BSDs – is the consequence of a lack of consumer interest, and thus their readership would also be uninterested.

Such an unwillingness to entertain genuine alternatives, and to act in the interests of members of their audience who might be best served by those solutions, demonstrates that Which? is less of a leader in consumer matters than its writers might have us believe. Refusing to acknowledge that Which? can and does drive demand for alternatives, only to then whine about the bundled products of supposed consumer interest, demonstrates a form of self-imposed impotence when faced with the coercion of proprietary product upgrade schedules. Not everyone – even amongst the Which? readership – welcomes the impending vulnerability of their computing environment as another excuse to go shopping for shiny new toys, nor should they be thankful that Which? has done little or nothing to prevent the situation from occurring in the first place.

Thus, Which? has done as much as the rest of the mainstream technology press to unquestioningly sustain the monopolistic practices of the anticompetitive corporate repeat offender, Microsoft, with only a cursory acknowledgement of other platforms in recent years, qualified by remarks that Free Software alternatives such as GNU/Linux and LibreOffice are difficult to get started with or not what people are used to. After years of ignoring such products and keeping them marginalised, this would be the equivalent of denying someone the chance to work and then criticising them for not having a long list of previous employers to vouch for them on their CV. Noting that 1.1 billion people supposedly use Microsoft Office (“one in seven people on the planet”) makes for a nice statistic in the sidebar of the print version of the article, but how many people have a choice in doing so or, for that matter, in using other Microsoft products bundled with computers (or foisted on office workers or students due to restrictive or corrupt workplace or institutional policies)? Which? has never been concerned with such topics, or indeed the central matter of anticompetitive software bundling, or its role in the continuation of such practices in the marketplace: strange indeed for a consumer advocacy publication.

At last, obliged to review a selection of fundamentally different ecosystem choices – as opposed to pretending that different vendor badges on anonymous laptops provide genuine choice – Which? has had to confront the practical problems brought about by an absence of interoperability: that consumers might end up stranded with a large, non-transferable investment in something they no longer wish to be a part of. Now, the involvement of a more diverse selection of powerful corporate interests has made such matters impossible to ignore. One gets the impression that for now at least, the publication cannot wish such things away and return to the lazy days of recommending that everyone line up and pay a single corporation their dues, refusing to believe that there could be anything else out there, let alone usable Free Software platforms.

Beyond Treating the Symptoms

Elsewhere in the January issue, the latest e-mail scam is revealed. Of course, a campaign to widen the adoption of digitally-signed mail would be a start, but that is probably too much to expect from Which?: just as space is dedicated to mobile security “apps” in this issue, countless assortments of antivirus programs have been peddled and reviewed in the past, but solving problems effectively requires a leader rather than a follower. Which? may do the tedious job of testing kettles, toasters, washing-up liquids, and much more to a level of thoroughness that would exhaust most people’s patience and interest. And to the publication’s credit, a certain degree of sensible advice is offered on topics such as online safety, albeit with the usual emphasis on proprietary software for the copy of Windows that members of its readership were all forced to accept. But in technology, Which? appears to be a mere follower, suggesting workarounds rather than working to build a fair market for safe, secure and interoperable products.

It is surely time for Which? to join the dots and to join other organisations in campaigning for fundamental change in the way technology is delivered by vendors and used throughout society. Then, by upholding a fair marketplace, interoperability, access to digital signature and encryption technologies, true ownership of devices and of purchased content, and all the things already familiar to Free Software and online rights advocates, they might be doing their readership a favour after all.

When the Focus Is on Brands and Not Standards

Friday, January 24th, 2014

It was interesting to read a letter in Uniforum entitled “Digitale samarbeidsverktøy i undervisning” (“Digital collaboration tools in teaching”): something that pushes all the right buttons in today’s academic landscape, with its growing focus on online learning, broader access to courses, and many other things perhaps motivated more by “prestige” and “capturing customers” than by increasing general access to an institution’s knowledge and expertise. The letter’s authors describe widespread use of videoconferencing solutions and sing the praises of a proprietary solution that they apparently like rather a lot, referring to the university’s recommendations for technical solutions. After digging around a little to find these recommendations, one quickly sees that they concern three proprietary (or partly proprietary) systems: Adobe Connect, Skype, and the specialised videoconferencing equipment found in some meeting rooms.

In all likelihood this is a consequence of the consumer society we live in: people think in terms of products and brands rather than standards, and once they discover an exciting product they are keen to raise awareness of it among everyone with similar needs. But when one becomes focused on products and brands, it is easy to lose sight of the real problem and the real solution. That so many people keep insisting on Ibux when they could buy generic medication for a fraction of the brand’s price is just one example of how people are no longer used to weighing up the actual circumstances and would rather reach for a brand as a quick and easy answer they do not need to think too much about.

Perhaps ordinary computer users should not be blamed too much for making such mistakes, especially when large parts of the public sector in this country focus on proprietary products which, if they make use of genuine standards at all, tend to mix them with proprietary technologies in order to steer the customer towards a commitment to a handful of suppliers for an almost unlimited time to come. But it is a little disappointing that “green” representatives do not consider sustainable technological solutions – those made available as Free Software and using documented, open standards – when one would expect such representatives to propose sustainable solutions in other areas.

The Organisational Panic Button and the Magic Single Vendor Delusion

Wednesday, November 27th, 2013

I have had reason to consider the way organisations make technology choices in recent months, particularly where the public sector is concerned, and although my conclusions may not come as a surprise to some people, I think they sum up fairly well how bad decisions get made even if the intentions behind them are supposedly good ones. Approaching such matters from a technological point of view, being informed about things like interoperability, systems diversity, the way people adopt and use technology, and the details of how various technologies work, it can be easy to forget that decisions around acquisitions and strategies are often taken by people who have no appreciation of such things and no time or inclination to consider them either: as far as decision makers are concerned, such things are mere details that obscure the dramatic solution that shows them off as dynamic leaders getting things done.

Assuming the Position

So, assume for a moment that you are a decision-maker with decisions to make about technology, that you have in your organisation some problems that may or may not have technology as their root cause, and that because you claim to listen to what people in your organisation have to say about their workplace, you feel that clear and decisive measures are required to solve some of those problems. First of all, it is important to make sure that when people complain about something, they are not mixing that thing up with something else that really makes their life awkward, but let us assume that you and your advisers are aware of that issue and are good at getting to the heart of the real problem, whatever that may be. Next, people may ask for all sorts of things that they want but do not actually need – “an iPad in every meeting room, elevator and lavatory cubicle!” – and even if you also like the sound of such wild ideas, you also need to be able to restrain yourself and to acknowledge that it would simply be imprudent to indulge every whim of the workforce (or your own). After all, neither they nor you are royalty!

With distractions out of the way, you can now focus on the real problems. But remember: as an executive with no time for detail, the nuances of a supposedly technological problem – things like why people really struggle with some task in their workplace and what technical issues might be contributing to this discomfort – these things are distractions, too. As someone who has to decide a lot of things, you want short and simple summaries and to give short and simple remedies, delegating to other people to flesh out the details and to make things happen. People might try and get you to understand the detail, but you can always delegate the task of entertaining such explanations and representations to other people, naturally telling them not to waste too much time on executing the plan.

Architectural ornamentation in central Oslo

On the Wrong Foot

So, let us just consider what we now know (or at least suspect) about the behaviour of someone in an executive position who has an organisation-wide problem to solve. They need to demonstrate leadership, vision and intent, certainly: it is worth remembering that such positions are inherently political, and if there is anything we should all know about politics by now, it is that it is often far more attractive to make one’s mark, define one’s legacy, fulfil one’s vision, reserve one’s place in the history books than it is to just keep things running efficiently and smoothly and to keep people generally satisfied with their lot in life; this principle alone explains why the city of Oslo is so infatuated with prestige projects and wants to host the Winter Olympics in a few years’ time (presumably things like functioning public transport, education, healthcare, even an electoral process that does not almost deliberately disenfranchise entire groups of voters, will all be faultless by then). It is far more exciting being a politician if you can continually announce exciting things, leaving the non-visionary stuff to your staff.

Executives also like to keep things as uncluttered as possible, even if the very nature of a problem is complicated, and at their level in the organisation they want the explanations and the directives to be as simple as possible. Again, this probably explains the “rip it up and start over” mentality that one sees in government, especially after changes in government even if consecutive governments have ideological similarities: it is far better to be seen to be different and bold than to be associated with your discredited predecessors.

But what do these traits lead to? Well, let us return to an organisational problem with complicated technical underpinnings. Naturally, decision-makers at the highest levels will not want to be bored with the complications – at the classic “10000 foot” view, nothing should be allowed to encroach on the elegant clarity of the decision – and even the consideration of those complications may be discouraged amongst those tasked to implement the solution. Such complications may be regarded as a legacy of an untidy and unruly past that was not properly governed or supervised (and are thus mere symptoms of an underlying malaise that must be dealt with), and the need to consider them may draw time and resources away from an “urgently needed” solution that deals with the issue no matter what it takes.

How many times have we been told “not to spend too much time” on something? And yet, that thing may need to be treated thoroughly so that it does not recur over and over again. And as too many people have come to realise or experience, blame very often travels through delegation: people given a task to see through are often deprived of resources to do it adequately, but this will not shield them from recriminations and reprisals afterwards.

It should not demand too much imagination to realise that certain important things will be sacrificed or ignored within such a decision-making framework. Executives will seek simplistic solutions that almost favour an ignorance of the actual problem at hand. Meanwhile, the minions or underlings doing the work may seek to stay as close as possible to the exact word of the directive handed down to them from on high, abandoning any objective assessment of the problem domain, so that if or when things go wrong they can say that they were only following the instructions given to them, and that as everything falls to pieces it was the very nature of the vision that led to its demise, rather than the work they did or any “unsanctioned” initiative they took themselves.

The Magic Single Vendor Temptation

We can already see that an appreciation of the finer points of a problem will be an early casualty in the flawed framework described above, but when pressure also exists to “just do something” and when possible tendencies to “make one’s mark” lie just below the surface, decision-makers also do things like ignore the best advice available to them, choosing instead to just go over the heads of the people they employ to have opinions about matters of technology. Such antics are not uncommon: there must be thousands or even millions of people with the experience of seeing consultants breeze into their workplace and impart opinions about the work being done that are supposedly more accurate, insightful and valuable than the actual experiences of the people paid to do that very work. But sometimes hubris can get the better of the decision-maker, to the extent that their own experiences seem somehow more valid than those of the supposed experts on the payroll who cannot seem to make up their minds about something as mundane as which technology to use.

And so, the executive may be tempted to take a page from their own playbook: maybe they used a product in one of their previous organisations that had something to do with the problem area; maybe they know someone in their peer group who has an opinion on the topic; maybe they can also show that they “know about these things” by choosing such a product. And with so many areas of life now effectively remedied by going and buying a product that instantly eradicates any deficiency, need, shortcoming or desire, why would this not work for some organisational problem? “What do you mean ‘network provisioning problems’? I can get the Internet on my phone! Just tell everybody to do that!”

When the tendency to avoid complexity meets the apparent simplicity of consumerism (and of solutions encountered in their final form in the executive’s previous endeavours), the temptation to solve a problem at a single stroke or a single click of the “buy” button becomes great indeed. So what if everyone affected by the decision has different needs? The product will surely meet all those needs: the vendor will make sure of that. And if the vendor cannot deliver, then perhaps those people should reconsider their needs. “I’ve seen this product work perfectly elsewhere. Why do you people have to be so awkward?” After all, the vendor can work magic: the salespeople practically told us so!

Nothing wrong here: a public transport "real time" system failure; all the trains are arriving "now"

The Threat to Diversity

In those courses in my computer science degree that dealt with the implementation of solutions at the organisational level, as opposed to the actual implementation of software, attempts were made to impress upon us students the need to consider the requirements of any given problem domain because any solution that neglects the realities of the problem domain will struggle with acceptance and flirt with failure. Thus, the impatient executive approach involving the single vendor and their magic product that “does it all” and “solves the problem” flirts openly and readily with failure.

Technological diversity within an organisation frequently exists for good reason, not to irritate decision-makers and their helpers, and the larger the organisation the larger the potential diversity to be found. Extrapolating from narrow experiences – insisting that a solution must be good enough for everyone because “it is good enough for my people” – risks neglecting the needs of large sections of an organisation and denying the benefits of diversity within the organisation. In turn, this risks the health of those parts of an organisation whose needs have now been ignored.

But diversity goes beyond what people happen to be using to do their job right now. By maintaining the basis for diversity within an organisation, it remains possible to retain the freedom for people to choose the most appropriate systems and platforms for their work. Conversely, undermining diversity by imposing a single vendor solution on everyone, especially when such solutions also neglect open standards and interoperability, threatens the ability for people to make choices central to their own work, and thus threatens the vitality of that work itself.

Stories abound of people in technical disciplines who “also had to have a Windows computer” to do administrative chores like filling out their expenses, hours and travel claims, and all the other peripheral tasks in a workplace, even though they used a functioning workstation or other computer that would have been adequate to perform the same chores within a framework that might actually have upheld interoperability and choice. Who pays for all these extra computers, and who benefits from such redundancy? And when some bright spark in the administration suggests throwing away the “special” workstation, putting administrative chores above the real work, what damage does this do to the working environment, to productivity, and to the capabilities of the organisation?

Moreover, the threat to diversity is more serious than many people presumably understand. Any single vendor solution imposed across an organisation also threatens the independence of the institution when that solution also informs and dictates the terms under which other solutions are acquired and introduced. Any decision-maker who regards their “one product for everybody” solution as adequate in one area may find themselves supporting a “one vendor for everything” policy that infects every aspect of the organisation’s existence, especially if they are deluded enough to think that they are getting a “good deal” by buying all their things from that one vendor and thus unquestioningly going along with it all for “economic reasons”. At that point, one has to wonder whether the organisation itself is in control of its own acquisitions, systems or strategies any longer.

Somebody Else’s Problem

People may find it hard to get worked up about the tools and systems their employer uses. Surely, they think, what people have chosen to run a part of the organisation is a matter only for those who work with that specific thing from one day to the next. When other people complain about such matters, it is easy to marginalise them and to accuse them of making trouble for the sake of doing so. But such reactions are short-sighted: when other people’s tools are being torn out and replaced by something less than desirable, bystanders may not feel any urgency to react or even think about showing any sympathy at all, but when tendencies exist to tackle other parts of an organisation with simplistic rationalisation exercises, who knows whose tools might be the next ones to be tampered with?

And from what we know from unfriendly solutions that shun interoperability and that prefer other solutions from the same vendor (or that vendor’s special partners), when one person’s tool or system gets the single vendor treatment, it is not necessarily only that person who will be affected: suddenly, other people who need to exchange information with that person may find themselves having to “upgrade” to a different set of tools that are now required for them just to be able to continue that exchange. One person’s loss of control may mean that many people lose control of their working environment, too. The domino effect that follows may result in an organisation transformed for the worse based only on the uninformed gut instincts of someone with the power to demand that something be done the apparently easy way.

Inconvenience: a crane operating over one pavement while sitting on the other, with a sign reading "please use the pavement on the other side"

Getting the Message Across

For those of us who want to see Free Software and open standards in organisations, the dangers of the top-down single vendor strategy are obvious, but other people may find it difficult to relate to the issues. There are, however, analogies that can be illustrative, and as I perused a publication related to my former employer I came across an interesting complaint that happens to nicely complement an analogy I had been considering for a while. The complaint in question is about some supplier management software that insists that bank account numbers can only have 18 digits at most, but this fails to consider the situation where payments to Russian and Chinese accounts might need account numbers with more than 18 digits, and the complainant vents his frustration at “the new super-elite of decision makers” who have decided that they know better than the people actually doing the work.

If that “super-elite” were to call all the shots, their solution would surely involve making everyone get an account with an account number that could only ever have 18 digits. “Not supported by your bank? Change bank! Not supported in your country? Change your banking country!” They might not stop there, either: why not just insist on everyone having an account at just one organisation-mandated bank? “Who cares if you don’t want a customer relationship with another bank? You want to get paid, don’t you?”
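
The underlying defect also has a straightforward technical remedy worth spelling out: limits of this kind belong in configurable data, not hard-wired into code or database schemas. The following is a minimal sketch of that idea in Python, not a description of any real supplier management product; the rule names and length figures are assumptions chosen purely for illustration.

    # Illustrative sketch only: drive account number validation from a table of
    # per-scheme rules instead of a single hard-coded 18-digit ceiling, so that
    # supporting another country's account format becomes a data change rather
    # than a schema change. All rule entries and lengths below are assumptions.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AccountRule:
        min_length: int
        max_length: int
        digits_only: bool = True

    ACCOUNT_RULES = {
        "DEFAULT": AccountRule(1, 18),                   # the complained-about ceiling
        "RU": AccountRule(20, 20),                       # assumed longer domestic numbers
        "CN": AccountRule(16, 22),                       # assumed variable-length numbers
        "IBAN": AccountRule(15, 34, digits_only=False),  # IBANs are alphanumeric
    }

    def validate_account(number: str, scheme: str = "DEFAULT") -> bool:
        """Return True if the account identifier satisfies its scheme's rule."""
        rule = ACCOUNT_RULES.get(scheme, ACCOUNT_RULES["DEFAULT"])
        candidate = number.replace(" ", "")
        if rule.digits_only and not candidate.isdigit():
            return False
        if not rule.digits_only and not candidate.isalnum():
            return False
        return rule.min_length <= len(candidate) <= rule.max_length

    print(validate_account("12345678901234567890", "RU"))  # True under the assumed rule
    print(validate_account("12345678901234567890"))        # False: over the 18-digit cap

Whether or not a real system would look anything like this, the point stands: accommodating the diversity that actually exists is far cheaper than demanding that the rest of the world change its bank accounts.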

At one former employer of mine, setting up a special account at a particular bank was actually how things were done, but, peculiarities related to the nature of certain kinds of institutions aside, making everyone needlessly conform through some dubiously justified, executive-imposed initiative, whether it be requiring them to have an account with the organisation’s bank or requiring them to use only certain vendor-sanctioned software (and, as a consequence, requiring them to buy certain vendor-sanctioned products so that they have a chance of using them at work or of interacting with their workplace from home), is an imposition too far. Rationalisation is a powerful argument for shaking things up, but it is often used by those who do not care how much it manages to transfer the inconvenience in an organisation to the individual and to other parties.

Bearing the Costs

We have seen how the organisational cost of short-sighted, buy-and-forget decision-making can end up being borne by those whose interests have been ignored or marginalised “for the good of the organisation”, and we can see how this can very easily impose costs across the whole organisation, too. But another aspect of this way of deciding things can also be costly: in the hurry to demonstrate the banishment of an organisational problem with a flourish, incremental solutions that might have dealt with the problem more effectively can become as marginalised as the influence of the people tasked with the job of seeing any eventual solution through. When people are loudly demanding improvements and solutions, an equally dramatic response probably does not involve reviewing the existing infrastructure, identifying areas that can provide significant improvement without significant inconvenience or significant additional costs, and committing to improve the existing solutions quietly and effectively.

Thus, when faced with disillusionment – that people may have decided for themselves that whatever it was that they did not like is now beyond redemption – decision-makers are apt to pander to such disillusionment by replacing any existing thing with something completely new. Especially if it reinforces their own blinkered view of an organisational problem or “confirms” what they “already know”, decision-makers may gladly embrace such dramatic acts as a demonstration of the resolve expected of a decisive leader as they stand to look good by visibly banishing the source of disillusionment. But when such pandering neglects relatively inexpensive, incremental improvements and instead incurs significant costs and disruptions for the organisation, one can justifiably question the motivations behind such dramatic acts and the level of competence brought to bear on resolving the original source of discomfort.

The electrical waste collection

Mission Accomplished?

Thinking that putting down money with a single vendor will solve everybody’s problems, purging diversity from an organisation and stipulating the uniformity encouraged by that vendor, is an overly simplistic and even deluded approach to organisational change. Change in any organisation can be very expensive and must therefore be managed carefully. Change for the sake of change is therefore incredibly irresponsible. And change imposed to gratify the perception of change or progress, made on a superficial basis and incurring unnecessary and avoidable burdens within an organisation whilst risking that organisation’s independence and viability, is nothing other than indefensible.

Be wary of the “single vendor fixes it all” delusion, especially if all the signs point to a decision made at the highest levels of your organisation: it is the sign of the organisational panic button being pressed while someone declares “Mission Accomplished!” Because at the same time they will be thinking “We will have progress whatever the cost!” And you, not them, will be the one bearing the cost.

Horseplay in Public Procurement? “Standards!”

Thursday, June 6th, 2013

There is a classic XKCD comic strip where the programmer, “slacking off” in the office and taking a break from doing work, clearly engaging in horseplay, issues the retort “Compiling!” to get his supervisor or peers off his back. It is seen as the ultimate excuse for not doing one’s work, immediately curtailing any further investigation of what really is going on in the corridor. Having recently been investigating some strategic public sector purchasing decisions, it occurred to me that something similar is going on in that area as well.

There’s an interesting case that came up a few years ago: Oslo municipality sought to acquire infrastructure for e-mail and related functionality. The scope of the tender covered “at least 30000 accounts” for client and server software, services and assistance, which is a pretty big tender but not unexpected given that the municipality is one of the largest single employers in Norway with almost 50000 employees (more statistics available here). Unfortunately, the additional documents are no longer available (and are generally not publicly available at the state procurement portal – you have to register as an interested party), but they are quoted in various places. Translating one particular requirement…

“Oslo municipality has standardised on Microsoft Office as office productivity software. It is therefore expected that solutions use MS Outlook 2003 and later as client.”

Two places where the offending requirements are reproduced are in complaints to the state procurement panel: 2009/124 and 2009/153. In these very similar complaints, it is pointed out that alternatives to Outlook can be offered as options (this is in the original tender), but that the municipality would only test proposed solutions with Outlook. As justification for insisting on Outlook compatibility, the municipality claimed that they had found “six different large companies providing relevant software in connection with the drafting of the requirements… all of which can be used together with Outlook”, and thus there was a basis for real competition. As a result, both complaints were rejected.

The Illusion of Compatibility

Now, one might claim that it is perfectly reasonable to want to acquire systems that work with the ones you already have. It is a bit like saying, “I’ve bought all this riding equipment: of course I want a horse!” The deeper issue here is whether anyone should be allowed to specify product compatibility to limit competition. In other words, when you just need transport to get around, why have you made your requirements so specific that you will only ever be getting a horse?

It is all very well demanding compatibility with a specific product, but when the means by which compatibility can be achieved are controlled by the vendor of that product, it is never going to be a fair competition for anyone trying to provide compatibility for their own separate products and solutions, especially when the vendor of the specified product is known to have used compatibility breakage to deliberately undermine the viability of competitors’ products. One response to this pitfall is to insist that those writing procurement tenders specify standards instead of products and that these standards must be genuinely open and not de-facto proprietary standards.

Unfortunately, the regulators of procurement do not seem to go even this far. The Norwegian government states that public sector institutions must support various standards, although the directorate concerned appears to have changed these obligations from the original directive and now insists that the dubious, forcibly- and incompletely-standardised Office Open XML document format must be accepted by the public sector in communications; they have also weakened the Internet publishing requirements for public sector institutions by permitting the use of various encumbered, cartel-controlled audio and video formats. For these changes, entertained in a review process, we can thank the likes of Statistics Norway who wanted “Word format” as well as OOXML to be permitted in the list of acceptable “standards”.

In any case, such directives only cover the surface of public sector activity, and the list of standards does not in general cover anything more than storage and interchange formats plus basic communications standards. This leaves quite a gap where established Internet standards exist but are not mandated, thus allowing proprietary protocols and technologies to insert themselves into infrastructure and pervert the processes of procurement and systems integration.

The Pretense of “Standards!”

But even if open standards were mandated in the public sector – a worthy and necessary measure – that wouldn’t mean that our work to ensure a level playing field – fairness in procurement – would be done. Because vendors can always advertise compliance with standards, they can still insist that their products be considered in any procurement contest, and even if those products do notionally support standards it does not mean that they will end up using them when deployed. For example, from the case of the Oslo municipality e-mail system, the councillor with responsibility for finance and development indicated the following:

“Oslo municipality is a complicated and comprehensive organisation and must take existing integration with specialist/bespoke systems into account. A procurement of other [non-Microsoft] end-user software will therefore result in unnecessary increases in costs for the municipality.”

In other words, even if existing software was acquired under the pretense that it supported standards, in deployment it may actually only function with other software using proprietary mechanisms, and the result of this is that newly-acquired software must also support these proprietary mechanisms. And so, a proprietary infrastructure grows, actively repelling components that employ open standards, with its custodians insisting that it is the fault of standards-compliant software that such an infrastructure would need to be dismantled almost in its entirety and replaced if even one standards-compliant component were to be admitted.

Who benefits the most from this? The vendor peddling the proprietary platforms and technologies that enable this morass of interdependency, of course. Make no mistake: any initial convenience promised by such a vendor fades away when the task of having to pursue an infrastructure strategy not dictated by outside interests is brought to bear on the purchaser. But such tasks are work, of course, and if there’s a way of avoiding it and insisting it doesn’t need attending to, a distraction can always be found.

And so, the horseplay continues under the excuse of “Standards!” when there is no real intent to uphold them or engage in the real work of maintaining a sustainable infrastructure that does not exclude open competition or channel public money to preferred vendors. Unlike the character in the comic strip whose code probably is still compiling, certain public sector institutions would have experienced a compilation error and be found out. It appears, unfortunately, that it is our job to peer around the cubicle partition and see what is happening on screen and perhaps to investigate the noises coming from the corridor. After all, our institutions don’t seem to be particularly concerned about doing so.