Paul Boddie's Free Software-related blog


Archive for the ‘education’ Category

Replaying the Microcomputing Revolution

Monday, January 6th, 2025

Since microcomputing and computing history are topics of particular interest to me, I was naturally engaged by a recent article about the Raspberry Pi and its educational ambitions. Perhaps obscured by its subsequent success in numerous realms, the aspirations that originally drove the development of the Pi had their roots in the effects of the introduction of microcomputers in British homes and schools during the 1980s, a phenomenon that supposedly precipitated a golden age of hands-on learning, initiating numerous celebrated and otherwise successful careers in computing and technology.

Such mythology tends to inflate expectations and deepen nostalgia, and when society enters a malaise in one area or another, it often leads to efforts to bring back the magic through new initiatives. Enter the Raspberry Pi! But, as always, we owe it to ourselves to step through the sequence of historical events, as opposed to simply accepting the narratives peddled by those with an agenda or those looking for comforting reminders of their own particular perspectives from an earlier time.

The Raspberry Pi and other products associated with relatively recent educational initiatives, such as the BBC Micro Bit, were launched with the intention of returning the focus of computing education to computing and computation themselves. Once upon a time, computers were largely confined to large organisations and particular kinds of endeavour, generally interacting only indirectly with wider society. Thus, for most people, what computers were remained an abstract notion, often coupled with talk of the binary numeral system as the “language” of these mysterious and often uncompromising machines.

However, as microcomputers emerged both in the hobbyist realm – frequently emphasised in microcomputing history coverage – and in commercial environments such as shops and businesses, governments and educators identified a need for “computer literacy”. This entailed practical experience with computers and their applications, informed by suitable educational material, enabling the broader public to understand the limitations and the possibilities of these machines.

Although computers had already been in use for decades, microcomputing diminished the cost of accessible computing systems and thereby dramatically expanded their reach. And when technology is adopted by a much larger group, there is usually a corresponding explosion in applications of that technology as its users make their own discoveries about what the technology might be good for. The limitations of microcomputers relative to their more sophisticated predecessors – mainframes and minicomputers – also meant that existing, well-understood applications were yet to be successfully transferred from those more powerful and capable systems, leaving the door open for nimble, if somewhat less capable, alternatives to be brought to market.

The Capable User

All of these factors pointed towards a strategy where users of computers would not only need to be comfortable interacting with these systems, but where they would also need to have a broad range of skills and expertise, allowing them to go beyond simply using programs that other people had made. Instead, they would need to be empowered to modify existing programs and even write their own. With microcomputers having only a limited amount of memory and often less than convenient storage (cassette tapes being a memorable example), and with few programs available for what were typically brand-new machines, manufacturers often emphasised giving users the tools to write their own software.

Computer literacy efforts sensibly and necessarily went along with such trends, and from the late 1970s and early 1980s, after broader educational programmes seeking to inform the public about microelectronics and computing, these efforts targeted existing models of computer with learning materials like “30 Hour BASIC”. Traditional publishers became involved as the market opportunities grew for producing and selling such materials, and publications like Usborne’s extensive range of computer programming titles were incredibly popular.

Numerous microcomputer manufacturers were founded, some rather more successful and long-lasting than others. An industry was born, around which was a vibrant community – or many vibrant communities – consuming software and hardware for their computers, but crucially also seeking to learn more about their machines and exchanging their knowledge, usually through the specialist print media of the day: magazines, newsletters, bulletins and books. This, then, was that golden age, of computer studies lessons at school, learning BASIC, and of late night coders at home, learning machine code (or, more likely, assembly language) and gradually putting together that game they always wanted to write.

One can certainly question the accuracy of the stereotypical depiction of that era, given that individual perspectives may vary considerably. My own experiences involved limited exposure to educational software at primary school, and the anticipated computer studies classes at secondary school never materialised. What is largely beyond dispute is that after the exciting early years of microcomputing, the educational curriculum changed focus from learning about computers to using them to run whichever applications happened to be popular or attractive to potential employers.

The Vocational Era

Thus, microcomputers became mere tools to do other work, and in that visionless era of Thatcherism, such other work was always likely to be clerical: writing letters and doing calculations in simple spreadsheets, sowing the seeds of dysfunction and setting public expectations of information systems correspondingly low. “Computer studies” became “information technology” in the curriculum, usually involving systems feigning a level of compatibility with the emerging IBM PC “standard”. Naturally, better-off schools will have had nicer equipment, perhaps for audio and video recording and digitising, plus the accompanying multimedia authoring tools, along with a somewhat more engaging curriculum.

At some point, the Internet will have reached schools, bringing e-mail and Web access (with all the complications that entails), and introducing another range of practical topics. Web authoring and Web site development may, if pursued to a significant extent, reveal such things as scripts and services, but one must then wonder what someone encountering the languages involved for the first time might be able to make of them. A generation or two may have grown up seeing computers doing things but with no real exposure to how the magic was done.

And then, there is the matter of how receptive someone who is largely unexposed to programming might be to more involved computing topics: lower-level languages, data structures and algorithms, and the workings of the machine itself. The mythology would have us believe that capable software developers needed the kind of broad exposure provided by the raw, unfiltered microcomputing experience of the 1980s to be truly comfortable and supremely effective at any level of a computing system, having sniffed out every last trick from their favourite microcomputer back in the day.

Those whose careers were built in those early years of microcomputing may now be seeing their retirement approaching, at least if they have not already made their millions and transitioned into some kind of role advising the next generation of similarly minded entrepreneurs. They may lament the scarcity of local companies in the technology sector, look at their formative years, and conclude that the system just doesn’t make them like they used to.

(Never mind that the system never made them like that in the first place: all those game-writing kids who may or may not have gone on to become capable, professional developers were clearly ignoring all the high-minded educational stuff that other people wanted them to study. Chess computers and robot mice immediately spring to mind.)

A Topic for Another Time

What we probably need to establish, then, is whether such views truly incorporate the wealth of experience present in society, or whether they merely reflect a narrow perspective where the obvious explanation may apply to some people’s experience but fails to explain the entire phenomenon. Here, we could examine teaching at a higher educational level than the compulsory school system, particularly because academic institutions had already been practising and teaching computing for decades before controversies about the school computing curriculum arose.

We might contrast the casual, self-taught, experimental approach to learning about programming and computers with the structured approach favoured in universities, of starting out with high-level languages, logic, mathematics, and of learning about how the big systems achieved their goals. During my own studies, I encountered people who had clearly enjoyed their formative experiences with microcomputers and who became impatient with the course of those studies, presumably wondering what value it provided to them.

Some of them quit after maybe only a year, whereas others gained an ordinary degree as opposed to graduating with honours, but hopefully they all went on to lucrative and successful careers, unconstrained and uncurtailed by their choice. I feel, however, that I might have missed some useful insights and experiences had I done the same. For now, let us go along with the idea that constructive exposure to technology throughout the formative education of the average person enhances their understanding of that technology, leading to a more sophisticated and creative population.

A Complete Experience

Backtracking to the article that prompted this one, we then encounter one educational ambition that has seemingly remained unaddressed by the Raspberry Pi. In microcomputing’s golden age, the motivated learner was ostensibly confronted with the full power of the machine from the point of switching on. They could supposedly study the lowest levels and interact with them using their own software, comfortable with their newly acquired knowledge of how the hardware works.

Disregarding the weird firmware situation with the Pi, it may be said that most Pi users will not be in quite the same position when running the Linux-based distribution deployed on most units as someone back in the 1980s with their BBC Micro, one of the inspirations for the Pi. This is actually a consequence of how something even cheaper than a microcomputer of an earlier era has gained sophistication to such an extent that it is architecturally one of those “big systems” that stuffy university courses covered.

In one regard, the difference in nature between the microcomputers that supposedly conferred developer prowess on a previous generation and the computers that became widespread subsequently, including single-board computers like the Pi, undermines the convenient narrative that microcomputers gave the earlier generation their perfect start. Systems built on processors like the 6502 and the Z80 did not have different privilege levels or memory management capabilities, leaving their users blissfully unaware of such concepts, even if the curious will have investigated the possibilities of interrupt handling and been exposed to any related processor modes, or even if some kind of bank switching or simple memory paging had been used by some machines.

Indeed, topics relevant to microcomputers from the second half of the 1980s are surprisingly absent from retrocomputing initiatives promoting themselves as educational aids. While the Commander X16 is mostly aimed at those seeking a modern equivalent of their own microcomputer learning environment, and many of its users may also end up mostly playing games, the Agon Light and related products are more aggressively pitched as being educational in nature. And yet, these projects cling to 8-bit processors, some inviting categorisation as being more like microcontrollers than microprocessors, as if the constraints of those processor architectures conferred simplicity. In fact, moving up from the 6502 to the 68000 or ARM made life easier in many ways for the learner.

When pitching a retrocomputing product at an audience with the intention of educating them about computing, while also adding some glamour and period accuracy to the exercise, it would arguably be better to start with something from the mid-1980s like the Atari ST, providing a more scalable processor architecture and a sensible instruction set, but ideally also coupling the processor with memory management hardware. Neither the Atari ST nor the Commodore Amiga had a memory management unit in its earliest models, with one only being introduced later in an attempt to move upmarket.

Certainly, primary school children might not need to learn the details of all of this power – just learning programming would be sufficient for them – but as they progress into the later stages of their education, it would be handy to give them new challenges and goals, to understand how a system works where each program has its own resources and cannot readily interfere with other programs. Indeed, something with a RISC processor and memory management capabilities would be just as credible.

How “authentic” a product with a RISC processor and “big machine” capabilities would be, in terms of nostalgia and following on from earlier generations of products, might depend on how strict one decides to be about the whole exercise. But there is nothing inauthentic about a product with such a feature set. In fact, one came along as the de facto successor to the BBC Micro, and yet relatively little attention seems to be given to how it addressed some of the issues faced by the likes of the Pi.

Under The Hood

In assessing the extent of the Pi’s educational scope, the aforementioned article has this to say:

“Encouraging naive users to go under the hood is always going to be a bad idea on systems with other jobs to do.”

For most people, the Pi is indeed running many jobs and performing many tasks, just as any Linux system might do. And as with any “big machine”, the user is typically and deliberately forbidden from going “under the hood” and interfering with the normal functioning of the system, even if a Pi is only hosting a single user, unlike the big systems of the past with their obligations to provide a service to many users.

Of course, for most purposes, such a system has traditionally been more than adequate for people to learn about programming. But historically, low-level systems programming and going under the hood generally meant downtime, which on expensive systems was largely discouraged, confined to inconvenient times of day, and potentially undertaken at one’s peril. Things have changed somewhat since the old days, however, and we will return to that shortly. But satisfying the expectations of those wanting a responsive but powerful learning environment was a challenge encountered even as the 1980s played out.

With early 1980s microcomputers like the BBC Micro, several traits comprised the desirable package that people now seek to reproduce. The immediacy of such systems allowed users to switch on and interact with the computer in only a few seconds, as opposed to enduring a lengthy boot sequence that possibly also involved inserting disks, never mind the experiences of the batch computing era that earlier computing students encountered. This immediacy lent such systems a degree of transparency, letting the user probe the system and rapidly see the effects. Interactions were not necessarily constrained to certain facets of the system, allowing users to engage with the mechanisms “under the hood” with both positive and negative effects.

The Machine Operating System (MOS) of the BBC Micro and related machines, such as the Acorn Electron and BBC Master series, provided well-defined interfaces for extending the operating system, introducing event or interrupt handlers, delivering utilities in the form of commands, and delivering languages and applications. Such capabilities allowed users to explore the provided functionality and the framework within which it operated. Users could also ignore the operating system’s facilities and more or less take full control of the machine, slipping out of one set of imposed constraints only to be bound by another, potentially more onerous set.

Earlier Experiences

Much is made of the educational impact of systems like the BBC Micro by those wishing to recapture some of the magic on more capable systems, but relatively few people seem to be curious about how such matters were tackled by the successor to the BBC Micro and BBC Master ranges: Acorn’s Archimedes series. As a step away from earlier machines, the Archimedes offers an insight into how simplicity and immediacy can still be accommodated on more powerful systems, through native support for familiar technology such as BASIC, compatibility layers for old applications, and system emulators for those who need to exercise some of the new hardware in precisely the way that worked on the older hardware.

When the Archimedes was delivered, the original Arthur operating system largely provided the recognisable BBC Micro experience. Starting up showed a familiar welcome message, and even if it may have dropped the user at a “supervisor” prompt as opposed to BASIC, something which did also happen occasionally on earlier machines, typing “BASIC” got the user the rest of the way to the environment they had come to expect. This conferred the ability to write programs exercising the graphical and audio capabilities of the machine to a substantial degree, including access to assembly language, albeit of a different and rather superior kind to that of the earlier machines. Even writing directly to screen memory worked, albeit at a different location and with a more sensible layout.

Under Arthur, users could write programs largely as before, with differences attributable to the change in capabilities provided by the new machines. Even though errant pokes to exotic memory locations might have been trapped and handled by the system’s enhanced architecture, it was still possible to write software that ran in a privileged mode, installed interrupt handlers, and produced clever results, at the risk of freezing or crashing the system. When Arthur was superseded by RISC OS, the desktop interface became the default experience, hiding the immediacy and the power of the command prompt and BASIC, but such facilities remained only a keypress away and could be configured as the default with perhaps only a single command.

RISC OS exposed the tensions between the need for a more usable and generally accessible interface, potentially doing many things at once, and the desire to be able to get under the hood and poke around. It was possible to write desktop applications in BASIC, but this was not really done in a particularly interactive way, and programs needed to make system calls to interact with the rest of the desktop environment, even though the contents of windows were painted using the classic BASIC graphics primitives otherwise available to programs outside the desktop. Desktop programs were also expected to cooperate properly with each other, potentially hanging the system if not written correctly.

The Maestro music player in RISC OS, written in BASIC. Note that the !RunImage file is a BASIC program, with the somewhat compacted code shown in the text editor.

A safer option for those wanting the classic experience, and wanting to leverage their hard-earned knowledge, was to forget about the desktop and most of the newer capabilities of the Archimedes and to enter the BBC Micro emulator, 65Host, available on one of the supplied application disks, writing software just as before, and then running that software or any other legacy software of choice. Apart from providing file storage to the emulator and bearing all the work of the emulator itself, this did not really exercise the newer machine, but it still provided a largely authentic, traditional experience. One could presumably crash the emulated machine, but this should merely have terminated the emulator.

An intermediate form of legacy application support was also provided. 65Tube, with “Tube” referencing an interfacing paradigm used by the BBC Micro, allowed applications written against documented interfaces to run under emulation but accessing facilities in the native environment. This mostly accommodated things like programming language environments and productivity applications and might have seemed superfluous alongside the provision of a more comprehensive emulator, but it potentially allowed such applications to access capabilities that were not provided on earlier systems, such as display modes with greater resolutions and more colours, or more advanced filesystems of different kinds. Importantly, from an educational perspective, these emulators offered experiences that could be translated to the native environment.

65Tube running in MODE 15, utilising many more colours than normally available on earlier Acorn machines.

Although the Archimedes drifted away from the apparent simplicity of the BBC Micro and related machines, most users did not fully understand the software stack on such earlier systems, anyway. However, despite the apparent sophistication of the BBC Micro’s successors, various aspects of the software architecture were, in fact, preserved. Even the graphical user interface on the Archimedes was built upon many familiar concepts and abstractions. The difficulty for users moving up to the newer system arose upon finding that much of their programming expertise and effort had to be channelled into a software framework that confined the activities of their code, particularly in the desktop environment. One kind of framework for more advanced programs had merely been replaced by others.

Finding Lessons for Today

The way the Archimedes attempted to accommodate the expectations cultivated by earlier machines does not necessarily offer a convenient recipe to follow today. However, the solutions it offered should draw our attention to some other considerations. One is the level of safety in the environment being offered: it should be possible to interact with the system without bringing it down or causing havoc.

In that respect, the Archimedes provided a sandboxed environment in the form of an emulator, but this was only really viable for running old software, as indeed was the intention. Nor did that emulator multitask, although other emulators eventually did. The more integrated 65Tube emulator did not multitask either, although later enhancements to RISC OS, such as task windows, did allow it to multitask to a degree.

65Tube running in a task window. This relies on the text editing application and unfortunately does not support fancy output.

Otherwise, the native environment offered all the familiar tools and the desired level of power, but along with them plenty of risks for mayhem. Thus, a choice between safety and concurrency was forced upon the user. (Aside from Arthur and RISC OS, there was also Acorn’s own Unix port, RISC iX, which had similar characteristics to the kind of Linux-based operating system typically run on the Pi. You could, in principle, run a BBC Micro emulator under RISC iX, just as people run emulators on the Pi today.)

Today, we could actually settle for the same software stack on some Raspberry Pi models, with all its advantages and disadvantages, by running an updated version of RISC OS on such hardware. The bundled emulator support might be missing, but those wanting to go under the hood while also taking advantage of the hardware are unlikely to be so interested in replicating the original BBC Micro experience with perfect accuracy, instead merely seeking to replicate the same kind of experience.

Another consideration the Archimedes raises is the extent to which an environment may take advantage of the host system, and it is this consideration that potentially has the most to offer in formulating modern solutions. We may normally be completely happy running a programming tool in our familiar computing environments, where graphical output, for example, may be confined to a window or occasionally shown in full-screen mode. Indeed, something like a Raspberry Pi need not have any rigid notion of what its “native” graphical capabilities are, and the way a framebuffer is transferred to an actual display is normally not of any real interest.

The learning and practice of high-level programming can be adequately performed in such a modern environment, with the user safely confined by the operating system and mostly unable to bring the system down. However, it might not adequately expose the user to those low-level “under the hood” concepts that they seem to be missing out on. For example, we may wish to introduce the framebuffer transfer mechanism as some kind of educational exercise, letting the user appreciate how the text and graphics plotting facilities they use lead to pixels appearing on their screen. On the BBC Micro, this would have involved learning about how the MOS configures the 6845 display controller and the video ULA to produce a usable display.
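As a concrete illustration of that kind of exercise, consider the sketch below. It is only a minimal, hypothetical example for a Linux-based system like the one usually deployed on the Pi, assuming a framebuffer device at /dev/fb0 with 32 bits per pixel and no padding at the end of each row (real code would query the device for its actual pixel format and line length), and it needs permission to open the device, typically membership of the video group. It maps the framebuffer into memory and writes pixels directly, which is essentially what the friendlier plotting facilities end up doing on the learner’s behalf.

```python
import mmap
import os

# Assumption: a Linux framebuffer at /dev/fb0, 32 bits per pixel (BGRX byte
# order) and no row padding. The resolution is read from sysfs.
with open("/sys/class/graphics/fb0/virtual_size") as f:
    width, height = (int(value) for value in f.read().split(","))

fb = os.open("/dev/fb0", os.O_RDWR)
screen = mmap.mmap(fb, width * height * 4)

# Fill the top quarter of the display with blue, one pixel at a time, to show
# that "plotting" ultimately means writing bytes to a mapped region of memory.
for y in range(height // 4):
    for x in range(width):
        offset = (y * width + x) * 4
        screen[offset:offset + 4] = bytes((255, 0, 0, 0))  # blue, green, red, pad

screen.close()
os.close(fb)
```

Such a sketch is best run from a text console, since a desktop compositor would otherwise repaint over the result almost immediately.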

The configuration of such a mechanism typically resides at a fairly low level in the software stack, out of the direct reach of the user, but allowing a user to reconfigure such a mechanism would risk introducing disruption to the normal functioning of the system. Therefore, a way is needed to either expose the mechanism safely or to simulate it. Here, technology’s steady progression does provide some possibilities that were either inconvenient or impossible on an early ARM system like the Archimedes, notably virtualisation support, allowing us to effectively run a simulation of the hardware efficiently on the hardware itself.

Thus, we might develop our own framebuffer driver and fire up a virtual machine running our operating system of choice, deploying the driver and assessing the consequences provided by a simulation of that aspect of the hardware. Of course, this would require support in the virtual environment for that emulated element of the hardware. Alternatively, we might allow some kind of restrictive access to that part of the hardware, risking the failure of the graphical interface if misconfiguration occurred, but hopefully providing some kind of fallback control mechanism, like a serial console or remote login, to restore that interface and allow the errant code to be refined.

A less low-level component that might invite experimentation could be a filesystem. The MOS in the BBC Micro and related machines provided filesystem (or filing system) support in the form of service ROMs, and in RISC OS on the Archimedes such support resides in the conceptually similar relocatable modules. Given the ability of normal users to load such modules, it was entirely possible for a skilled user to develop and deploy their own filesystem support, with the associated risks of bringing down the system. Linux does have arguably “glued-on” support for unprivileged filesystem deployment, but there might be other components in the system worthy of modification or replacement, and thus the virtual machine might need to come into play again to allow the desired degree of experimentation.
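To make that “glued-on” unprivileged filesystem support concrete, here is a minimal sketch using FUSE via the third-party fusepy bindings (an assumption: fusepy and a FUSE-capable kernel must be available; the HelloFS name and hello.txt file are purely illustrative). It exposes a single read-only file from ordinary user-level code, giving a flavour of the kind of filesystem experimentation that once required loading a service ROM or relocatable module.

```python
import errno
import stat
import sys
import time

from fuse import FUSE, FuseOSError, Operations  # third-party "fusepy" package

CONTENT = b"Hello from an unprivileged filesystem\n"

class HelloFS(Operations):
    """A read-only filesystem exposing a single file, hello.txt."""

    def getattr(self, path, fh=None):
        now = time.time()
        if path == "/":
            return dict(st_mode=(stat.S_IFDIR | 0o755), st_nlink=2,
                        st_ctime=now, st_mtime=now, st_atime=now)
        if path == "/hello.txt":
            return dict(st_mode=(stat.S_IFREG | 0o444), st_nlink=1,
                        st_size=len(CONTENT),
                        st_ctime=now, st_mtime=now, st_atime=now)
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello.txt"]

    def read(self, path, size, offset, fh):
        return CONTENT[offset:offset + size]

if __name__ == "__main__":
    # Usage: python hellofs.py /some/mountpoint
    FUSE(HelloFS(), sys.argv[1], foreground=True, ro=True)
```

Crashing or misbehaving code here merely leaves a stuck mountpoint to clean up, rather than bringing the whole system down, which is precisely the kind of safety discussed above.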

A Framework for Experimentation

One can, however, envisage a configurable software system where a user session might involve a number of components providing the features and services of interest, and where a session might be configured to exclude or include certain typical or useful components, to replace others, and to allow users to deploy their own components in a safe fashion. Alongside such activities, a normal system could be running, providing access to modern conveniences at a keypress or the touch of a button.

We might want the flexibility to offer something resembling 65Host, albeit without the emulation of an older system and its instruction set, for a highly constrained learning environment where many aspects of the system can be changed for better or worse. Or we might want something closer to 65Tube, again without the emulation, acting mostly as a “native” program but permitting experimentation on a few elements of the experience. An entire continuum of possibilities could be supported by a configurable framework, allowing users to progress from a comfortable environment with all of the expected modern conveniences, gradually seeing each element removed and then replaced with their own implementation, until arriving in an environment where they have the responsibility at almost every level of the system.

In principle, a modern system aiming to provide an “under the hood” experience merely needs to simulate that experience. As long as the user experiences the same general effects from their interactions, the environment providing the experience can still isolate a user session from the underlying system and avoid unfortunate consequences from that misbehaving session. Purists might claim that as long as any kind of simulation is involved, the user is not actually touching the hardware and is therefore not engaging in low-level development, even if the code they are writing would be exactly the code that would be deployed on the hardware.

Systems programming can always be done by just writing programs and deploying them on the hardware or in a virtual machine to see if they work, resetting the system and correcting any mistakes, which is probably how most programming of this kind is done even today. However, a suitably configurable system would allow a user to iteratively and progressively deploy a customised system, and to work towards deploying a complete system of their own. With the final pieces in place, the user really would be exercising the hardware directly, finally silencing the purists.

Naturally, given my interest in microkernel-based systems, the above concept would probably rest on the use of a microkernel, with much more of a blank canvas available to define the kind of system we might like, as opposed to more prescriptive systems with monolithic kernels and much more of the basic functionality squirrelled away in privileged kernel code. Perhaps the only difficult elements of a system to open up to user modification, those that cannot also be easily delegated or modelled by unprivileged components, would be those few elements confined to the microkernel and performing fundamental operations such as directly handling interrupts, switching execution contexts (threads), writing memory mappings to the appropriate registers, and handling system calls and interprocess communications.

Even so, many aspects of these low-level activities are exposed to user-level components in microkernel-based operating systems, leaving few mysteries remaining. For those advanced enough to progress to kernel development, traditional systems programming practices would surely be applicable. But long before that point, motivated learners will have had plenty of opportunities to get “under the hood” and to acquire a reasonable understanding of how their systems work.

A Conclusion of Sorts

As for why people are not widely using the Raspberry Pi to explore low-level computing, the challenge of facilitating such exploration when the system has “other jobs to do” certainly seems like a reasonable excuse, especially given the choice of operating system deployed on most Pi devices. One could remove those “other jobs” and run RISC OS, of course, putting the learner in an unfamiliar and more challenging environment, perhaps giving them another computer to use at the same time to look things up on the Internet. Or one could adopt a different software architecture, but that would involve an investment in software that few organisations can be bothered to make.

I don’t know whether the University of Cambridge has seen better-educated applicants in recent years as a result of Pi proliferation, or whether today’s applicants are as perplexed by low-level concepts as those from the pre-Pi era. But then, there might be a lesson to be learned about applying some rigour to technological interventions in society. After all, there were some who justifiably questioned the effectiveness of rolling out microcomputers in schools, particularly when teachers have never really been supported in their work, as more and more is asked of them by their political overlords. Investment in people and their well-being is another thing that few organisations can be bothered to make.

The Academic Barriers of Commercialisation

Monday, January 9th, 2017

Last year, the university through which I obtained my degree celebrated a “milestone” anniversary, meaning that I got even more announcements, notices and other such things than I was already getting from them before. Fortunately, not everything published into this deluge is bound up in proprietary formats (as one brochure was, sitting on a Web page in Flash-only form) or only reachable via a dubious “Libyan link-shortener” (as certain things were published via a social media channel that I have now quit). It is indeed infuriating to see one of the links in a recent HTML/plain text hybrid e-mail message using a redirect service hosted on the university’s own alumni sub-site, sending the reader to a bit.ly URL, which will redirect them off into the great unknown and maybe even back to the original site. But such things are what one comes to expect on today’s Internet with all the unquestioning use of random “cloud” services, each one profiling the unsuspecting visitor and betraying their privacy to make a few extra cents or pence.

But anyway, upon following a more direct – but still redirected – link to an article on the university Web site, I found myself looking around to see what gets published there these days. Personally, I find the main university Web site rather promotional and arguably only superficially informative – you can find out the required grades to take courses along with supposed student approval ratings and hypothetical salary expectations upon qualifying – but it probably takes more digging to get at the real detail than most people would be willing to do. I wouldn’t mind knowing what they teach now in their computer science courses, for instance. I guess I’ll get back to looking into that later.

Gatekeepers of Knowledge

However, one thing did catch my eye as I browsed around the different sections, encountering the “technology transfer” department with the expected rhetoric about maximising benefits to society: the inevitable “IP” policy in all its intimidating length, together with an explanatory guide to that policy. Now, I am rather familiar with such policies from my time at my last academic employer, having been obliged to sign some kind of statement of compliance at one point, but then apparently not having to do so when starting a subsequent contract. It was not as if enlightenment had come calling at the University of Oslo between these points in time such that the “IP rights” agreement now suddenly didn’t feature in the hiring paperwork; it was more likely that such obligations had presumably been baked into everybody’s terms of employment as yet another example of the university upper management’s dubious organisational reform and questionable human resources practices.

Back at Heriot-Watt University, credit is perhaps due to the authors of the explanatory guide for trying to explain the larger policy document, because most people are unlikely to get through that much longer document and retain a clear head. But one potentially unintended reason for credit is that, by being presented with a much less opaque treatment of the policy and its motivations, we are able to see with enhanced clarity many of the damaging misconceptions that have sadly become entrenched in higher education and academia, including the ways in which such policies actually do conflict with the sharing of knowledge that academic endeavour is supposed to be all about.

So, we get the sales pitch about new things needing investment…

However, often new technologies and inventions are not fully developed because development needs investment, and investment needs commercial returns, and to ensure commercial returns you need something to sell, and a freely available idea cannot be sold.

If we ignore various assumptions about investment or the precise economic mechanisms supposedly required to bring about such investment, we can immediately note that ideas on their own aren’t worth anything anyway, freely available or not. Although the Norwegian Industrial Property Office (or the Norwegian Patent Office if we use a more traditional name) uses the absurd vision slogan “turning ideas into values” (it should probably read “value”, but whatever), this perhaps says more about greedy profiteering through the sale of government-granted titles bound to arbitrary things than it does about what kinds of things have any kind of inherent value that you can take to the bank.

But assuming that we have moved beyond the realm of simple ideas and have entered the realm of non-trivial works, we find that we have also entered the realm of morality and attitude management:

That is why, in some cases, it is better for your efforts not to be published immediately, but instead to be protected and then published, for protection gives you something to sell, something to sell can bring in investment, and investment allows further development. Therefore in the interests of advancing the knowledge within the field you work in, it is important that you consider the commercial potential of your work from the outset, and if necessary ensure it is properly protected before you publish.

Once upon a time, the most noble pursuit in academic research was to freely share research with others so that societal, scientific and technological progress could be made. Now it appears that the average researcher should treat it as their responsibility to conceal their work from others, seek “protection” on it, and then release the encumbered details for mere perusal and the conditional participation of those once-valued peers. And they should, of course, be wise to the commercial potential of the work, whatever that is. Naturally, “intellectual property” offices in such institutions have an “if in doubt, see us” policy, meaning that they seek to interfere with research as soon as possible, and should someone fail to have “seen them”, that person’s loyalty may very well be called into question as if they had somehow squandered their employer’s property. In some institutions, this could very easily get people marginalised or “reorganised” if not immediately or obviously fired.

The Rewards of Labour

It is in matters of property and ownership where things get very awkward indeed. Many people would accept that employees of an organisation are producing output that becomes the property of that organisation. What fewer people might accept is that the customers of an organisation are also subject to having their own output taken to be the property of that organisation. The policy guide indicates that even undergraduate students may also be subject to an obligation to assign ownership of their work to the university: those visiting the university supposedly have to agree to this (although it doesn’t say anything about what their “home institution” might have to say about that), and things like final year projects are supposedly subject to university ownership.

So, just because you as a student have a supervisor bound by commercialisation obligations, you end up not only paying tuition fees to get your university education (directly or through taxation), but you also end up having your own work taken off you because it might be seen as some element in your supervisor’s “portfolio”. I suppose this marks a new low in workplace regulation and standards within a sector that already skirts the law with regard to how certain groups are treated by their employers.

One can justifiably argue that employees of academic institutions should not be allowed to run away with work funded by those institutions, particularly when such funding originally comes from other sources such as the general public. After all, such work is not exactly the private property of the researchers who created it, and to treat it as such would deny it to those whose resources made it possible in the first place. Any claims about “rightful rewards” needing to be given are arguably made to confuse rational thinking on the matter: after all, with appropriate salaries, the researchers are already being rewarded doing work that interests and stimulates them (unlike a lot of people in the world of work). One can argue that academics increasingly suffer from poorer salaries, working conditions and career stability, but such injustices are not properly remedied by creating other injustices to supposedly level things out.

A policy around what happens with the work done in an academic institution is important. But just as individuals should not be allowed to treat broadly-funded work as their own private property, neither should the institution itself claim complete ownership and consider itself entitled to do what it wishes with the results. It may be acting as a facilitator to allow research to happen, but by seeking to intervene in the process of research, it risks acting as an inhibitor. Consider the following note about “confidential information”:

This is, in short, anything which, if you told people about, might damage the commercial interests of the university. It specifically includes information relating to intellectual property that could be protected, but isn’t protected yet, and which if you told people about couldn’t be protected, and any special know how or clever but non patentable methods of doing things, like trade secrets. It specifically includes all laboratory notebooks, including those stored in an electronic fashion. You must be very careful with this sort of information. This is of particular relevance to something that may be patented, because if other people know about it then it can’t be.

Anyone working in even a moderately paranoid company may have read things like this. But here the context is an environment where knowledge should be shared to benefit and inform the research community. Instead, one gets the impression that the wish to control the propagation of knowledge is so great that some people would rather see the details of “clever but non patentable methods” destroyed than passed on openly for others to benefit from. Indeed, one must question whether “trade secrets” should even feature in a university environment at all.

Of course, the obsession with “laboratory notebooks”, “methods of doing things” and “trade secrets” in such policies betrays the typical origins of such drives for commercialisation: the apparently rich pickings to be had in the medical, pharmaceutical and biosciences domains. It is hardly a coincidence that the University of Oslo intensified its dubious “innovation” efforts under a figurehead with a background (or an interest) in exactly those domains: with a narrow personal focus, an apparent disdain for other disciplines, and a wider commercial atmosphere that gives such a strategy a “dead cert” air of impending fortune, we should perhaps expect no more of such a leadership creature (and his entourage) than the sum of that creature’s instincts and experiences. But then again, we should demand more from such people when their role is to cultivate an institution of learning and not to run a private research organisation at the public’s expense.

The Dirty Word

At no point in the policy guide does the word “monopoly” appear. Given that such a largely technical institution would undoubtedly be performing research where the method of “protection” would involve patents being sought, omitting the word “monopoly” might be that document’s biggest flaw. Heriot-Watt University originates from the merger of two separate institutions, one of which was founded by the well-known pioneer of steam engine technology, James Watt.

Recent discussion of Watt’s contributions to the development and proliferation of such technology has brought up claims that Watt’s own patents – the things that undoubtedly made him wealthy enough to fund an educational organisation – actually held up progress in the domain concerned for a number of decades. While he was clearly generous and sensible enough to spend his money on worthy causes, one can always question whether the questionable practices that resulted in the accumulation of such wealth can be justified by the benefits from the subsequent use of that wealth, particularly if those practices can be regarded as having had negative effects on society and perhaps even having increased wealth inequality.

Questioning philanthropy is not a particularly fashionable thing to do. In capitalist societies, wealthy people are often seen as having made their fortunes in an honest fashion, enjoying a substantial “benefit of the doubt” that this was what really occurred. Criticising a rich person giving money to ostensibly good causes is seen as unkind to both the generous donor and to those receiving the donations. But we should question the means through which the likes of Bill Gates (in our time) and James Watt (in his own time) made their fortunes and the power that such fortunes give to such people to direct money towards causes of their own personal choosing, not to mention the way in which wealthy people also choose to influence public policy and the use of money given by significantly less wealthy individuals – the rest of us – gathered through taxation.

But back to monopolies. Can they really be compatible with the pursuit and sharing of knowledge that academia is supposed to be cultivating? Just as it should be shocking that secretive “confidentiality” rules exist in an academic context, it should appal us that researchers are encouraged to be competitively hostile towards their peers.

Removing the Barriers

It appears that some well-known institutions understand that the unhindered sharing of their work is their primary mission. MIT Media Lab now encourages the licensing of software developed under its roof as Free Software, not requiring special approval or any other kind of institutional stalling that often seems to take place as the “innovation” vultures pick over the things they think should be monetised. Although proprietary licensing still appears to be an option for those within the Media Lab organisation, at least it seems that people wanting to follow their principles and make their work available as Free Software can do so without being made to feel bad about it.

As an academic institution, we believe that in many cases we can achieve greater impact by sharing our work.

So says the director of the MIT Media Lab. It says a lot about the times we live in that this needs to be said at all. Free Software licensing is, as a mechanism to encourage sharing, a natural choice for software, but we should also expect similar measures to be adopted for other kinds of works. Papers and articles should at the very least be made available using content licences that permit sharing, even if the licence variants chosen by authors might seek to prohibit the misrepresentation of parts of their work by prohibiting remixes or derived works. (This may sound overly restrictive, but one should consider the way in which scientific articles are routinely misrepresented by climate change and climate science deniers.)

Free Software has encouraged an environment where sharing is safely and routinely done. Licences like the GNU General Public Licence seek to shield recipients from things like patent threats, particularly from organisations which might appear to want to share their works, but which might be tempted to use patents to regulate the further use of those works. Even in realms where patents have traditionally been tolerated, attempts have been made to shield others from the effects of patents, intended or otherwise: the copyleft hardware movement demands that shared hardware designs are patent-free, for instance.

In contrast, one might think that despite the best efforts of the guide’s authors, all the precautions and behavioural self-correction it encourages might just drive the average researcher to distraction. Or, just as likely, to ignoring most of the guidelines and feigning ignorance if challenged by their “innovation”-obsessed superiors. But in the drive to monetise every last ounce of effort there is one statement that is worth remembering:

If intellectual property is not assigned, this can create problems in who is allowed to exploit the work, and again work can go to waste due to a lack of clarity over who owns what.

In other words, in an environment where everybody wants a share of the riches, it helps to have everybody’s interests out in the open so that there may be no surprises later on. Now, it turns out that unclear ownership and overly casual management of contributions is something that has occasionally threatened Free Software projects, resulting in more sophisticated thinking about how contributions are managed.

And it is precisely this combination of Free Software licensing, or something analogous for other domains, with proper contribution and attribution management that will extend safe and efficient sharing of knowledge to the academic realm. Researchers just cannot have the same level of confidence when dealing with the “technology transfer” offices of their institution and of other institutions. Such offices only want to look after themselves while undermining everyone beyond the borders of their own fiefdoms.

Divide and Rule

It is unfortunate that academic institutions feel that they need to “pull their weight” and have to raise funds to make up for diminishing public funding. By turning their backs on the very reason for their own existence and seeking monopolies instead of sharing knowledge, they unwittingly participate in the “divide and rule” tactics blatantly pursued in the political arena: that everyone must fight each other for all that is left once the lion’s share of public funding has been allocated to prestige megaprojects and schemes that just happen to benefit the well-connected, the powerful and the influential people in society the most.

A properly-funded education sector is an essential component of a civilised society, and its institutions should not be obliged to “sharpen their elbows” in the scuffle for funding and thus deprive others of knowledge just to remain viable. Sadly, while austerity politics remains fashionable, it may be up to us in the Free Software realm to remind academia of its obligations and to show that sustainable ways of sharing knowledge exist and function well in the “real world”.

Indeed, it is up to us to keep such institutions honest and to prevent advocates of monopoly-driven “innovation” from being able to insist that their way is the only way, because just as “divide and rule” politics erects barriers between groups in wider society, commercialisation erects barriers that inhibit the essential functions of academic pursuit. And such barriers ultimately risk extinguishing academia altogether, along with all the benefits its institutions bring to society. If my university were not reinforcing such barriers with its “IP” policy, maybe its anniversary as a measure of how far we have progressed from monopolies and intellectual selfishness would have been worth celebrating after all.

Testing Times for Free Software and Open Hardware

Tuesday, January 12th, 2016

The last few months haven’t been too kind on Free Software and open hardware initiatives in a number of ways. Here, in a shorter form than one might usually expect from me, are some problematic developments on topics that I may have covered in the past year.

Software Freedom Undervalued

About a couple of months ago, the Software Freedom Conservancy started a fund-raising campaign after it became apparent that companies could not be relied upon to support the organisation’s activities. Since the start of the campaign, many individuals have stepped up and pledged financial support of their own, which is very generous of them, as is the support of enlightened organisations that have offered to match individual contributions.

Sadly, such generosity seems not to be shared by many of the largest companies making money from Free Software and from Linux in particular, and thus from the non-financial contributions that make projects like Linux viable in the first place, with many of those contributions coming from those same generous individuals who have supported the Conservancy financially. And let us consider for a moment why one prominent umbrella organisation’s members might not want to enforce the GPL, especially given that some of them have, in the past, been successfully prosecuted for violating that licence in relation to various Free Software projects.

The Proprietary Instincts of the BBC

The BBC Micro Bit was a topic covered in the last year, when I indicated a degree of caution about the mistakes of the past being repeated needlessly. And indeed, for some time, everything was being done behind the curtain of a non-disclosure agreement (NDA), meaning that very little information was being made available about the device and accompanying materials, and thus very little could be done by the average member of the public to prepare for the availability of the device, let alone develop their own materials, software, accessories or anything else for it.

Since then, a degree of secrecy has been eliminated, and efforts have been made to get the embedded variant of Python known as MicroPython working on the board. However, certain parts of that work still appear to be encumbered by NDA, arguably making the effort of developing Python-related materials something of a social networking exercise. Meanwhile, notorious industry monopolist, Microsoft, somehow managed to muscle in on the initiative and take control of the principally-supported method of developing software with the device. I guess people at the BBC and their friends in politics and business don’t always learn from the mistakes of the past, particularly as they spend other people’s money.

The Walled Garden Party’s Hangover for Free Software Development

Just over twelve months ago, I made some observations about the Python core development group’s attraction to GitHub. It seems that the infatuation with the enthusiastic masses and their inevitable unleashing on Python assets, with the expectation of stimulating an exponential upturn in development activity, will now be gratified through a migration of various Python infrastructure components to the proprietary and centralised service that GitHub offers. (I have my doubts as to whether CPython contribution barriers are really the cause of Python’s current malaise, despite the usual clamour for Git and the associated “network effects” amongst a community of self-proclaimed version control wizards whose powers somehow don’t extend to mastering simple workflows with other tools.)

Anatoly Techtonik makes some interesting points, which will presumably go unheard because those involved have all decided not to listen to him any more. One of the more disturbing ones is that the “comparison shopping” mentality, where Free Software developers abandon their colleagues writing various tools and systems in favour of random corporations offering proprietary stuff at no cost, may well result in the Free Software solutions in such areas becoming seen as uncompetitive and unattractive. What those making such foolish decisions fail to realise is that their own projects can easily get the same treatment, if nobody bothers to see beyond the end of their own nose.

The result of all this is less funding and fewer resources for Free Software projects, with potentially fewer contributions, too, as the attraction of supporting “losing” solutions starts to fade. Community-oriented Free Software is arguably grossly underfunded as it is: we don’t really need other Free Software developers abandoning or undermining their colleagues while ridiculing those colleagues’ “ideological purity“. And, of course, volunteer effort will undoubtedly be burned up in the needless migration to the proprietary solution, setting everyone up for another costly transition down the road, which experience indicates is always more work than anyone anticipated (if they even bothered to think ahead at all).

PayPal: Doesn’t Pay, Not Your Pal

It has been a long time since I wrote about the Neo900 project. Things were looking promising: necessary components had been secured, and everyone was just waiting for Nikolaus to finish his work with the Pyra handheld console. And then we learned that PayPal had decided to hold a significant amount of money as a form of “security”, thus cutting off a vital source of funds for actually doing the work. Apparently, PayPal have a habit of doing this kind of thing, on one reported occasion even taking the opportunity to then offer loans to those people they deliberately put in such a difficult position.

If you supported the Neo900 project and pledged funds via PayPal, you need to tell PayPal to actually pay the project. You know: like the verb in their company name. Otherwise, in the worst case, you may not only not get a Neo900 and not see it developed to completion, but you will also have loaned your money to a large corporation for a substantial period and earned no interest on that involuntary loan, perhaps even incurring fees for the privilege. (So, please see the “How to fix it” section of the relevant article.)

Maybe in 2016, people will become a lot clearer about who their real friends are. Let us hope so!

The BBC Micro and the BBC Micro Bit

Sunday, March 22nd, 2015

At least on certain parts of the Internet as well as in other channels, there has been a degree of excitement about the announcement by the BBC of a computing device called the “Micro Bit”, with the BBC’s plan to give one of these devices to each child starting secondary school, presumably in September 2015, attracting particular attention amongst technology observers and television licence fee-payers alike. Details of the device are a little vague at the moment, but the announcement, along with discussions of the role of the corporation and of previous initiatives of this nature, provides me with an opportunity to look back at the original BBC Microcomputer, to evaluate some of the criticisms (and myths) around the associated Computer Literacy Project, and to consider the things that were done right and wrong, with the latter hopefully not about to be repeated in this latest endeavour.

As the public record reveals, at the start of the 1980s, the BBC wanted to engage its audience beyond television programmes describing the growing microcomputer revolution, and it was decided that to do this, and to increase computer literacy generally, it would need to be able to demonstrate various concepts and technologies on a platform capable of supporting the range of activities to which computers were being put. Naturally, a demanding specification was constructed – clearly, the scope of microcomputing was increasing rapidly, and there was a lot to demonstrate – and various manufacturers were invited to deliver products that could be used as this reference platform. History indicates that a certain amount of acrimony followed – a complete description of which could fill an entire article of its own – but ultimately only Acorn Computers managed to deliver a machine that could do what the corporation was asking for.

An Ambitious Specification

It is worth considering what the BBC Micro was offering in 1981, especially when considering ill-informed criticism of the machine’s specifications by people who either prefer other systems or who felt that participating in the development of such a machine was none of the corporation’s business. The technologies to be showcased by the BBC’s programme-makers and supported by the materials and software developed for the machine included full-colour graphics, multi-channel sound, 80-column text, Viewdata/Teletext, cassette and diskette storage, local area networking, interfacing to printers, joysticks and other input/output devices, as well as to things like robots and user-developed devices. Although it is easy to pick out one or two of these requirements, move forward a year or two, increase the budget two- or three-fold, or do any combination of these things, and then nominate various other computers, there really were few existing systems that could deliver all of the above at an affordable price at the time.

Some microcomputers of the early 1980s
Computer | RAM | Text | Graphics | Year | Price
Apple II Plus | Up to 64K | 40 x 25 (upper case only) | 280 x 192 (6 colours), 40 x 48 (16 colours) | 1979 | £1500 or more
Commodore PET 4032/8032 | 32K | 40/80 x 25 | Graphics characters (2 colours) | 1980 | £800 (4032), £1030 (8032) (including monochrome monitor)
Commodore VIC-20 | 5K | 22 x 23 | 176 x 184 (8 colours) | 1980 (1981 outside Japan) | £199
IBM PC (Model 5150) | 16K up to 256K | 40/80 x 25 | 640 x 200 (2 colours), 320 x 200 (4 colours) | 1981 | £1736 (including monochrome monitor, presumably with 16K or 64K)
BBC Micro (Model B) | 32K | 80/40/20 x 32/24, Teletext | 640 x 256 (2 colours), 320 x 256 (2/4 colours), 160 x 256 (4/8 colours) | 1981 | £399 (originally £335)
Research Machines LINK 480Z | 64K (expandable to 256K) | 40 x 24 (optional 80 x 24) | 160 x 72, 80 x 72 (2 colours); expandable to 640 x 192 (2 colours), 320 x 192 (4 colours), 190 x 96 (8 colours or 16 shades) | 1981 | £818
ZX Spectrum | 16K or 48K | 32 x 24 | 256 x 192 (16 colours applied using attributes) | 1982 | £125 (16K), £175 (48K)
Commodore 64 | 64K | 40 x 25 | 320 x 200 (16 colours applied using attributes) | 1982 | £399

Perhaps the closest competitor, already being used in a fairly limited fashion in educational establishments in the UK, was the Commodore PET. However, it is clear that despite the adaptability of that system, its display capabilities were becoming increasingly uncompetitive, and Commodore had chosen to focus on the chipsets that would power the VIC-20 and Commodore 64 instead. (The designer of the PET went on to make the very capable, and understandably more expensive, Victor 9000/Sirius 1.) Apple products were notoriously expensive and, indeed, the target of Commodore’s aggressive advertising, but this did not seem to prevent them from capturing the US education market from the PET; in the UK, however, they remained severely uncompetitive, as commentators of the time noted.

Later, the ZX Spectrum and Commodore 64 were released. Technology was progressing rapidly, and in hindsight one might have advocated waiting around until more capable and cheaper products came to market. However, it can be argued that a suitable alternative fulfilling virtually all aspects of the ambitious specification at a comparable price would not become available until the release of the Amstrad CPC series in 1984. Even then, these Amstrad computers actually benefited from the experience accumulated in the UK computing industry following the introduction of the BBC Micro: they were, if anything, an iteration within the same generation of microcomputers and would even have used the same 6502 CPU as the BBC Micro had it not been for time-to-market pressures and the readily-available expertise with the Zilog Z80 CPU amongst those in the development team. And yet, specific aspects of the specification would still have gone unfulfilled: the BBC Micro had hardware support for Teletext displays, whereas the Amstrad machines would have had to emulate such displays using a bitmapped screen mode and suitable software.

Arise Sir Clive

Much has been made of the disappointment of Sir Clive Sinclair that his computers were not adopted by the BBC as products to be endorsed and targeted at schools. Sinclair made his name developing products that were competitive on price, often seeking cost-reduction measures to reach attractive pricing levels, but such measures also served to make his products less desirable. If one reads reviews of microcomputers from the early 1980s, many reviewers explicitly mention the quality of the keyboard provided by the computers being reviewed: a “typewriter” keyboard with keys that “travel” appears to be much preferred over the “calculator” keyboards provided by computers like the ZX Spectrum, Oric 1 or Newbury NewBrain, and vastly preferred over the “membrane” keyboards employed by the ZX80, ZX81 and Atari 400.

For target audiences in education, business, and the home, it would have been inconceivable to promote a product with anything less than a “proper” keyboard. Ultimately, the world had to wait until the ZX Spectrum +2, released in 1986, for a Spectrum with such a keyboard, and that occurred only after the product line had been acquired by Amstrad. (One might also consider the ZX Spectrum+ of 1984, but its keyboard was more of a hybrid of the calculator keyboard that had been used before and the “full-travel” keyboards provided by its competitors.)

Some people claim that they owe nothing to the BBC Micro and everything to the ZX Spectrum (or, indeed, the computer they happened to own) for their careers in computing. Certainly, the BBC Micro was an expensive purchase for many people, although contrary to popular assertion it was not any more expensive than the Commodore 64 upon that computer’s introduction in the UK, and for those of us who wanted BBC compatibility at home on a more reasonable budget, the Acorn Electron was really the only other choice. But it would be as childish as the playground tribalism that had everyone proclaiming their own computer “the best” to insist that the BBC Micro had no influence on computer literacy in general, or on the expectations of what computer systems should provide. Many people who owned a ZX Spectrum freely admit that the BBC Micro coloured their experiences, some even subsequently buying one (or one of its successors) and going on to build successful software development careers.

The Costly IBM PC

Some commentators seem to consider the BBC Micro as having been an unnecessary diversion from the widespread adoption of the IBM PC throughout British society. As was the case everywhere else, the de-facto “industry standard” of the PC architecture and DOS captured much of the business market and gradually invaded the education sector from the top down, although significantly better products existed both before and after its introduction. It is tempting with hindsight to believe that by placing an early bet on the success of the then-new PC architecture, business and education could have somehow benefited from standardising on the PC and DOS applications. And there has always been the persistent misguided belief amongst some people that schools should be training their pupils/students for a narrow version of “the world of work”, as opposed to educating them to be able to deal with all aspects of their lives once their school days are over.

What many people forget or fail to realise is that the early 1980s witnessed rapid technological improvement in microcomputing, that there were many different systems and platforms, some already successful and established (such as CP/M), and others arriving to disrupt ideas of what computing should be like (the Xerox Alto and Star having paved the way for the Apple Lisa and Macintosh, the Atari ST, and so on). It was not clear that the IBM PC would be successful at all: IBM had largely avoided embracing personal computing, and although the system was favourably reviewed and seen as having the potential for success thanks to IBM’s extensive sales organisation, other giants of the mainframe and minicomputing era, such as DEC and HP, were pursuing their own personal computing strategies. Moreover, existing personal computers were becoming entrenched in certain markets, and early adopters were building a familiarity with those existing machines that was reflected in publications and materials available at the time.

Despite the technical advantages of the IBM PC over much of the competition at the beginning of the 1980s, it was also substantially more expensive than the mass-market products arriving in significant numbers, aimed at homes, schools and small businesses. With many people remaining intrigued but unconvinced by the need for a personal computer, it would have been impossible for a school to justify spending almost £2000 (probably around £8000 today) on something without proven educational value. Software would also need to be purchased, and the procurement of expensive and potentially non-localised products would have created even more controversy.

Ultimately, the Computer Literacy Project stimulated the production of a wide range of locally-produced products at relatively low prices. While there may have been a few years of children learning BBC BASIC instead of one of the variants of BASIC for the IBM PC (before BASIC became a deprecated aspect of DOS-based computing), it is hard to argue that those children missed out on any valuable experience with DOS commands or specific DOS-based products, especially since DOS itself became a largely forgotten environment as technological progress introduced new paradigms and products, making “hard-wired”, product-specific experience obsolete.

The Good and the Bad

Not everything about the BBC Micro and its introduction can be considered unconditionally good. Choices needed to be made to deliver a product that could fulfil the desired specification within certain technological constraints. Some people like to criticise BBC BASIC as being “non-standard”, for example, which neglects the diversity of BASIC dialects that existed at the dawn of the 1980s. Typically, for such people “standard” equates to “Microsoft”, but back then Microsoft BASIC was a number of different things. Commodore famously paid a one-off licence fee to use Microsoft BASIC in its products, but the version for the Commodore 64 was regarded as lacking user-friendly support for graphics primitives and other interesting hardware features. Meanwhile, the MSX range of microcomputers featured Microsoft Extended BASIC, which did provide convenient access to hardware features, although the MSX range was not the success at the low end of the market that Microsoft had probably desired to complement its increasing influence at the higher end through the IBM PC. And it is informative in this regard to see just how many variants of Microsoft BASIC were produced, thanks to Microsoft’s widespread licensing of its software.

Nevertheless, the availability of one company’s products does not make a standard, particularly if interoperability between those products is limited. Neither BBC BASIC nor Microsoft BASIC can be regarded as anything other than de-facto standards in their own territories, and it is nonsensical to regard one as non-standard when the other is likewise a proprietary product in widespread use, even if it was licensed to others, as indeed both Microsoft BASIC and BBC BASIC were. Genuine attempts to standardise BASIC did indeed exist, notably BASICODE, which was used in the distribution of programs via public radio broadcasts. One suspects that people making casual remarks about standard and non-standard things remain unaware of such initiatives. Meanwhile, Acorn did deliver implementations of other standards-driven programming languages such as COMAL, Pascal, Logo, Lisp and Forth, largely adhering to the relevant standards, subject to the limitations of the hardware.

However, what undermined the BBC Micro and Acorn’s related initiatives over time was the control that they as a single vendor had over the platform and its technologies. At the time, a “winner takes all” mentality prevailed: Commodore under Jack Tramiel had declared a “price war” on other vendors and had caused difficulties for new and established manufacturers alike, with Atari eventually being sold to Tramiel (who had resigned from Commodore) by Warner Communications, but many companies disappeared or were absorbed by others before half of the decade had passed. Indeed, Acorn, who had released the Electron to compete with Sinclair Research at the lower end of the market, and who had been developing product lines to compete in the business sector, experienced financial difficulties and was ultimately taken over by Olivetti; Sinclair, meanwhile, experienced similar difficulties and was acquired by Amstrad. In such a climate, ideas of collaboration seemed far from everybody’s minds.

Since then, the protagonists of the era have been able to reflect on such matters, Acorn co-founder Hermann Hauser admitting that it may have been better to license Acorn’s Econet local area networking technology to interested competitors like Commodore. Although the sentiments might have something to do with revenues and influence – it was at Acorn that the ARM processor was developed, sowing the seeds of a successful licensing business today – the rest of us may well ask what might have happened had the market’s participants of the era cooperated on things like standards and interoperability, helping their customers to protect their investments in technology, and building a bigger “common” market for third-party products. What if they had competed on bringing technological improvements to market without demanding that people abandon their existing purchases (and cause confusion amongst their users) just because those people happened to already be using products from a different vendor? It is interesting to see the range of available BBC BASIC implementations and to consider a world where such building blocks could have been adopted by different manufacturers, providing a common, heterogeneous platform built on cooperation and standards, not the imposition of a single hardware or software platform.

But That Was Then

Back then, as Richard Stallman points out, proprietary software was the norm. It would have been even more interesting had the operating systems and the available software for microcomputers been Free Software, but that may have been asking too much at the time. And although computer designs were often shared and published, a tendency to prevent copying of commercial computer designs prevailed, with Acorn and Sinclair both employing proprietary integrated circuits mostly to reduce complexity and increase performance, but partly to obfuscate their hardware designs, too. Thus, it may have been too much to expect something like the BBC Micro to have been open hardware to any degree “back in the day”, although circuit diagrams were published in publicly-available technical documentation.

But we have different expectations now. We expect software to be freely available for inspection, modification and redistribution, knowing that this empowers the end-users and reassures them that the software does what they want it to do, and that they remain in control of their computing environment. Increasingly, we also expect hardware to exhibit the same characteristics, perhaps only accepting that some components are particularly difficult to manufacture and that there are physical and economic restrictions on how readily we may practise the modification and redistribution of a particular device. Crucially, we demand control over the software and hardware we use, and we reject attempts to prevent us from exercising that control.

The big lesson to be learned from the early 1980s, to be considered now in the mid-2010s, is not how to avoid upsetting a popular (but ultimately doomed) participant in the computing industry, as some commentators might have everybody believe. It is to avoid developing proprietary solutions that favour specific organisations and that, despite the general benefits of increased access to technology, ultimately disempower the end-user. And in this era of readily available Free Software and open hardware platforms, the lesson to be learned is to strengthen such existing platforms and to work with them, letting those products and solutions participate and interoperate with the newly-introduced initiative in question.

The BBC Micro was a product of its time and its development was very necessary to fill an educational need. Contrary to the laziest of reports, the Micro Bit plays a different role as an accessory rather than as a complete home computer, at least if we may interpret the apparent intentions of its creators. But as a product of this era, our expectations for the Micro Bit are greater: we expect transparency and interoperability, the ability to make our own (just as one can with the Arduino, as long as one does not seek to call it an Arduino without asking permission from the trademark owner), and the ability to control exactly how it works. Whether there is a need to develop a completely new hardware solution remains an unanswered question, but we may assert that should it be necessary, such a solution should be made available as open hardware to the maximum possible extent. And of course, the software controlling it should be Free Software.

As we edge gradually closer to September and the big deployment, it will be interesting to assess how the device and the associated initiative measure up to our expectations. Let us hope that the right lessons from the days of the BBC Micro have indeed been learned!

When One Focuses on Brands and Not Standards

Friday, January 24th, 2014

It was interesting to read a letter in Uniforum entitled “Digitale samarbeidsverktøy i undervisning” (“Digital collaboration tools in teaching”): something that presses all the right buttons in today’s academic landscape, with its growing focus on online learning, broader access to courses, and many other things perhaps motivated more by “prestige” and “capturing customers” than by increasing general access to an institution’s knowledge and expertise. The letter’s authors describe widespread use of videoconferencing solutions and praise a proprietary solution that they apparently like rather a lot, referring to the university’s recommendations for technical solutions. After digging a little to find those recommendations, one quickly sees that they concern three proprietary (or partly proprietary) systems: Adobe Connect, Skype, and the specialised videoconferencing equipment found in some meeting rooms.

This is in all likelihood a consequence of the consumer society we live in: people think in terms of products and brands rather than standards, and having discovered an exciting product, they naturally want to raise awareness of it among everyone with similar needs. But once one becomes focused on products and brands, it is easy to lose sight of the real problem and the real solution. That so many people continue to insist on Ibux (a branded ibuprofen) when they could buy generic medicines for a fraction of the brand’s price is just one example of how people are no longer used to assessing the actual circumstances, preferring instead to reach for a brand as a quick and easy solution they do not have to think much about.

Perhaps one should not place too much blame on ordinary computer users when they make such mistakes, especially when large parts of the public sector in this country focus on proprietary products which, if they make use of genuine standards at all, tend to mix them with proprietary technologies in order to steer the customer towards an almost open-ended commitment to a handful of suppliers. But it is a little disappointing that “green” representatives do not consider sustainable technological solutions – those made available as Free Software and built on documented, open standards – when one would expect such representatives to propose sustainable solutions in other areas.

Students: Beware of the Academic Cloud!

Sunday, July 21st, 2013

Things were certainly different when I started my university degree many years ago. For a start, institutions of higher education provided for many people their first encounter with the Internet, although for those of us in the United Kingdom, the very first encounter with wide area networking may well have involved X.25 and PAD terminals, and instead of getting a “proper” Internet e-mail address, it may have been the case that an address only worked within a particular institution. (My fellow students quickly became aware of an Internet mail gateway in our department and the possibility, at least, of sending Internet mail, however.)

These days, students beginning their university studies have probably already been using the Internet for most of their lives, will have had at least one e-mail address as well as accounts for other online services, may be publishing blog entries and Web pages, and maybe even have their own Web applications accessible on the Internet. For them, arriving at a university is not about learning about new kinds of services and new ways of communicating and collaborating: it is about incorporating yet more ways of working online into their existing habits and practices.

So what should students expect from their university in terms of services? Well, if things have not changed that much over the years, they probably need a means of communicating with the administration, their lecturers and their fellow students, along with some kind of environment to help them do their work and provide things like file storage space and the tools they won’t necessarily be able to provide themselves. Of course, students are more likely to have their own laptop computer (or even a tablet) these days, and it is entirely possible that they could use that for all their work, subject to the availability of tools for their particular course, and since they will already be communicating with others on the Internet, communicating with people in the university is not really anything different from what they are already doing. But still, there are good reasons for providing infrastructure for students to use, even if those students do end up working from their laptops, uploading assignments when they are done, and forwarding their mail to their personal accounts.

The Student Starter Kit

First and foremost, a university e-mail account is something that can act as an official communications channel. One could certainly get away with using some other account, perhaps provided by a free online service like Google or Yahoo, but if something went wrong somewhere – the account gets taken over by an imposter and then gets shut down, for example – that channel of communication gets closed and important information may be lost.

The matter of how students carry out their work is also important. In computer science, where my experiences come from and where computer usage is central to the course, it is necessary to have access to suitable tools to undertake assignments. As everyone who has used technology knows, the matter of “setting stuff up” is one that often demands plenty of time and distracts from the task at hand, and when running a course that requires the participants to install programs before they can make use of the learning materials, considerable amounts of time are wasted on installation and troubleshooting. Thus, providing a ready-to-use environment allows students to concentrate on their work and to relate more easily to the learning materials.

Then there is the matter of the nature of teaching environments and the tools chosen. Such environments allow students to become familiar with desirable practices when finding solutions to the problems in their assignments. In software engineering, for example, the use of version control software encourages a more controlled and rational way of refining a program under development. Although the process itself may not be recognised and rewarded when an assignment is assessed, it allows students to see how things should be done and to take full advantage of the opportunity to learn provided by the institution.

Naturally, it should be regarded as highly undesirable to train students to use specific solutions provided by selected vendors, as opposed to educating them to become familiar with the overarching concepts and techniques of a particular field. Schools and universities are not vocational training institutions, and they should seek to provide their students with transferable skills and knowledge that can be applied generally, instead of taking the easy way out and training those students to perform repetitive tasks in “popular” software that gives them no awareness of why they are doing those things or any indication that the rest of the world might be doing them in other ways.

Construction of the IT department's new building, University of Oslo

Minority Rule

So even if students arrive at their place of learning somewhat equipped to learn, communicate and do their work, there may still be a need for a structured environment to be provided for them. At that place of learning they will join those employed there who already have a structured environment in place to be able to do their work, whether that is research, teaching or administration. It makes a certain amount of sense for the students to join those other groups in the infrastructure already provided. Indeed, considering the numbers of people involved, with the students outnumbering all other groups put together by a substantial margin, one might think that the needs of the students would come first. Sadly, things do not always work that way.

First of all, students are only ever “passing through”. While some university employees may be retained for similar lengths of time – especially researchers and others on temporary contracts (a known problem with social and ethical dimensions of its own) – others may end up spending most of their working life there. As a result, the infrastructure is likely to favour such people over time as their demands are made known year after year, with any discomfort demanding to be remedied by a comfortable environment that helps those people do their work. Not that there is anything wrong with providing employees with a decent working environment: employers should probably do even more to uphold their commitments in this regard.

But when the demands and priorities of a relatively small group of people take precedence over what the majority – the students – need, one can argue that such demands and priorities have begun to subvert the very nature of the institution. Imposing restrictions on students or withholding facilities from them just to make life easier for the institution itself is surely a classic example of “the tail wagging the dog”. After all, without students and teaching an institution of higher education can no longer be considered a university.

Outsourcing Responsibility

With students showing up every year and with an obligation to provide services to them, one might imagine that an institution might take the opportunity to innovate and to evaluate ways in which it might stand out amongst the competition, truly looking after the group of people that in today’s increasingly commercialised education sector are considered the institution’s “customers”. When I was studying for my degree, the university’s mathematics department was in the process of developing computer-aided learning software for mathematics, which was regarded as a useful way of helping students improve their understanding of the course material through the application of knowledge recently acquired. However, such attempts to improve teaching quality are only likely to get substantial funding either through teaching-related programmes or by claiming some research benefit in the field of teaching or in another field. Consequently, developing software to benefit teaching is likely to be an activity located near the back of the queue for attention in a university, especially amongst universities whose leadership regard research commercialisation as their top priority.

So it becomes tempting for universities to minimise costs around student provision. Students are not meant to be sophisticated users whose demands must be met, mostly because they are not supposed to be around for long enough to be comfortable and for those providing services to eventually have to give in to student demands. Moreover, university employees are protected by workplace regulation (in theory, at least) whereas students are most likely protected by much weaker regulation. To take one example, whereas a university employee could probably demand appropriate accessibility measures for a disability they may have, students may have to work harder to get their disabilities recognised and their resulting needs addressed.

The Costs of Doing Business

So, with universities looking to minimise costs and to maximise revenue-generating opportunities, doing things like running infrastructure in a way that looks after the needs of the student and researcher populations seems like a distraction. University executives look to their counterparts in industry and see that outsourcing might offer a solution: why bother having people on the payroll when there are cloud computing services run by friendly corporations?

Let us take the most widely-deployed service, e-mail, as an example. Certainly, many students and employees might not be too concerned with logging into a cloud-based service to access their university e-mail – many may already be using such services for personal e-mail, and many may already be forwarding their university e-mail to their personal account – and although they might be irritated by the need to use one service when they have perhaps selected another for their personal use, a quick login, some adjustments to the mail forwarding settings, and logging out (never to return) might be the simple solution. The result: the institution supposedly saves money by making external organisations responsible for essential services, and the users get on with using those services in the ways they find most bearable, even if they barely take advantage of the specially designated services at all.

However, a few things may complicate this simplified scenario somewhat: reliability, interoperability, lock-in, and privacy. Reliability is perhaps the easiest to consider: if “Office 365” suddenly becomes “Office 360” for a few days, cloud-based services cannot be considered suitable for essential services, and if the “remedy” is to purchase infrastructure to bail out the cloud service provider, one has to question the decision to choose such an external provider in the first place. As for interoperability, if a user prefers Gmail, say, over some other hosted e-mail solution where that other solution doesn’t exchange messages properly with Gmail, that user will be in the awkward position of having to choose between a compromised experience in their preferred solution or an experience they regard as inconvenient or inferior. With services more exotic and less standardised than e-mail, the risk is that a user’s preferred services or software will not work with the chosen cloud-based service at all. Either way, users are forced to adopt services they dislike or otherwise have objections to using.

Miscellaneous waste

Product Placement

With users firmly parked on a specific vendor’s cloud-based platform, the temptation will naturally grow amongst some members of an organisation to “take advantage” of the other services on that platform whether they support interoperability or not. Users will be forced to log into the solution they would rather avoid or ignore in order to participate in processes and activities initiated by those who have actively embraced that platform. This is rather similar to the experience of getting a Microsoft Office document in an e-mail by someone requesting that one reads it and provides feedback, even though recipients may not have access to a compatible version of Microsoft Office or even run the software in question at all. In an organisational context, legitimate complaints about poor workflow, inappropriate tool use, and even plain unavailability of software (imagine being a software developer using a GNU/Linux or Unix workstation!) are often “steamrollered” by management and the worker is told to “deal with it”. Suddenly, everyone has to adapt to the tool choices of a few managerial product evangelists instead of participating in a standards-based infrastructure where the most important thing is just getting the work done.

We should not be surprised that vendors might be very enthusiastic to see organisations adopt both their traditional products as well as cloud-based platforms. Not only are the users exposed to such vendors’ products, often to the exclusion of any competing or alternative products, but by having to sign up for those vendors’ services, organisations are effectively recruiting customers for the vendor. Indeed, given the potential commercial benefits of recruiting new customers – in the academic context, that would be a new group of students every year – it is conceivable that vendors might offer discounts on products, waive the purchase prices, or even pay organisations in the form of services rendered to get access to new customers and increased revenue. Down the line, this pays off for the vendor: its organisational customers are truly locked in, cannot easily switch to other solutions, and end up paying through the nose to help the vendor recruit new customers.

How Much Are You Worth?

All of the above concerns are valid, but the most important one of all for many people is that of privacy. Now, most people have a complicated relationship with privacy: they probably believe that they deserve a degree of privacy, yet at the same time many are quite happy to be indiscreet if they think no-one else is watching or no-one cares about what they are doing.

So, they are quite happy to share information about themselves (or content that they have created or acquired themselves) with a helpful provider of services on the Internet. After all, if that provider offers services that permit convenient ways of doing things that might be awkward to do otherwise, and especially if no money is changing hands, surely the meagre “payment” of tedious documents, mundane exchanges of messages, unremarkable images and videos, and so on, all with no apparently significant value or benefit to the provider, gets the customer a product worth far more in return. Everybody wins!

Well, there is always the matter of the small print – the terms of use, frequently verbose and convoluted – together with how other people perceive the status of all that content you’ve been sharing. As your content reaches others, some might perceive it as fair game for use in places you never could have imagined. Naturally, unintended use of images is no new phenomenon: I once saw one of my photographs being used in a school project (I only knew about it because the student concerned had credited me, although they really should have asked me first, and an Internet search brought up their page in the results), whereas another photograph of mine was featured in a museum exhibition (where I was asked for permission, although that photograph was a lot less remarkable than the one found by the student).

One might argue that public sharing of images and other content is not really the same as sharing stuff over a closed channel like a social network, and so the possibility of unintended or undesirable use is diminished. But take another look at the terms of use: unlike just uploading a file to a Web site that you control, where nobody presumes to claim any rights to what you are sharing, social networking and content sharing service providers frequently try to claim rights to your work.

Privacy on Parade

When everyone is seeking to use your content for their own goals, whether to promote their own businesses or to provide nice imagery to make their political advocacy more palatable, or indeed to support any number of potential and dubious endeavours that you may not agree with, it is understandable that you might want to be a bit more cautious about who wants a piece of your content and what they intend to do with it once they have it. Consequently, you might decide that you only want to deal with the companies and services you feel you can trust.

What does this have to do with students and the cloud? Well, unlike the services that a student may already be using when they arrive at university to start their studies, any services chosen by the institution will be imposed on the student, and they will be required to accept the terms of use of such services regardless of whether they agree with them or not. Now, although it might be said that the academic work of a student might be somewhat mundane and much the same as any other student’s work (even if this need not be the case), and that the nature of such work is firmly bound to the institution and it is therefore the institution’s place to decide how such work is used (even though this could be disputed), other aspects of that student’s activities and communications might be regarded as beyond the interests of the institution: who the student communicates with, what personal views they may express in such communications, what academic or professional views they may have.

One might claim that such trivia is of no interest to anyone, and certainly not to commercial entities who just want to sell advertising, gather demographic data, or do whatever supposedly harmless thing they might do with the mere usage of their services, just to keep paying the bills and covering their overheads. But one should still be wary that information stored on some remote server in some distant country might somehow make its way to someone who takes a closer and not so benign interest in it. Indeed, the matter of the data residing in some environment beyond their control is enough for adopters of cloud computing to offer specially sanctioned exemptions and opt-outs. Maybe it is not so desirable that some foreign student writing about a controversial topic in their own country has their work floating around in the cloud, or, as far as a university’s legal department is concerned, maybe it does not look so good if such information manages to wander into the wrong hands, only for someone to ask the awkward question of why the information left the university’s own systems in the first place.

A leaflet for a tourist attraction in the Cambridge area

Excuses, Excuses

Cloud-based service providers are likely to respond to fears articulated about privacy violations and intrusions by insisting that such fears are disproportionate: that no-one is really looking at the data stored on their servers, that the data is encrypted somewhere/somehow, that if anything does look at the data it is merely an “algorithm” and not a person. Often these protests of innocence contradict each other, so that at any point in time there is at least one lie being told. But what if it is “only an algorithm” looking at your data? The algorithm will not be keeping its conclusions to itself.

How would you know what is really being done with your data? Not only is the code being run on a remote server, but with the most popular cloud services the important code is completely proprietary – service providers may claim to support Free Software and even contribute to it, but they do so only for part of their infrastructure – and you have no way of verifying any of their claims. Disturbingly, some companies want to enforce such practices within your home, too, so that when Microsoft claims that the camera on their most recent games console has to be active all the time but only for supposedly benign reasons and that the data is only studied by algorithms, the company will deny you the right to verify this for yourself. For all you know the image data could be uploaded somewhere, maybe only on command, and you would not only be none the wiser but you would also be denied the right to become wiser about the matter. And even if the images were not shared with mysterious servers, there are still unpleasant applications of “the algorithm”: it could, for example, count people’s faces and decide whether you were breaking the licensing conditions on a movie or other content by holding a “performance” that goes against the arbitrary licensing that accompanies a particular work.

Back in the world of the cloud, companies like Microsoft typically respond to criticism by pointing the finger at others. Through “shell” or “front” organisations the alleged faults of Microsoft’s competitors are brought to the attention of regulators, and in the case of the notorious FairSearch organisation, to take one example, the accusing finger is pointed at Google. We should all try to be aware of the misdeeds of corporations, acknowledging that unscrupulous behaviour may have occurred, and we should insist that such behaviour be punished. But we should also not be distracted by the tactics of corporations that insist that all the problems reside elsewhere. “But Google!” is not a reason to stop scrutinising the record of a company shouting it out loud, nor is it an excuse for us to disregard any dubious behaviour previously committed by the company shouting it the loudest. (It is absurd that a company with a long history of being subject to scrutiny for anticompetitive practices – a recognised monopoly – should shout claims of monopoly so loudly, and it is even more absurd for anyone to take such claims at face value.)

We should be concerned about Google’s treatment of user privacy, but that should not diminish our concern about Microsoft’s treatment of user privacy. As it turns out, both companies – and several others – have some work to do to regain our trust.

I Do Not Agree

So why should students specifically be worried about all this? Does this not also apply to other groups, like anyone who is made to use software and services in their job? Certainly, this does affect more than just students, but students will probably be the first in line to be forced to accept these solutions or just not be able to take the courses they want at the institutions they want to attend. Even in countries with relatively large higher education sectors like the United Kingdom, it can be the case that certain courses are only available at a narrow selection of institutions, and if you consider a small country like Norway, it is entirely possible that some courses are only available at one institution. For students forced to choose a particular institution and to accept that institution’s own technological choices, the issue of their online privacy becomes urgent because such institutional changes are happening right now and the only way to work around them is to plan ahead and to make it immediately clear to those institutions that the abandonment of the online privacy rights (and other rights) of their “customers” is not acceptable.

Of course, none of this is much comfort to those working in private businesses whose technological choices are imposed on employees as a consequence of taking a job at such organisations. The only silver lining to this particular cloud is that the job market may offer more choices to job seekers – that they can try and identify responsible employers and that such alternatives exist in the first place – compared to students whose educational path may be constrained by course availability. Nevertheless, there exists a risk that both students and others may be confronted with having to accept undesirable conditions just to be able to get a study place or a job. It may be too disruptive to their lives not to “just live with it” and accept choices made on their behalf without their input.

But this brings up an interesting dilemma. Should a person be bound by terms of use and contracts where that person has been effectively coerced into accepting them? If their boss tells them that they have to have a Microsoft or Google account to view and edit some online document, and when they go to sign up they are presented with the usual terms that nobody can reasonably be expected to read, and given that they cannot reasonably refuse because their boss would then be annoyed at that person’s attitude (and may even be angry and threaten them with disciplinary action), can we not consider that when this person clicks on the “I agree” button it is only their employer who really agrees, and that this person not only does not necessarily agree but cannot be expected to agree, either?

Excuses from the Other Side

Recent events have probably made people wary of where their data goes and what happens with it once it has left their own computers, but merely being concerned and actually doing something are two different things. Individuals may feel helpless: all their friends use social networks and big name webmail services; withdrawing from the former means potential isolation, and withdrawing from the latter involves researching alternatives and trying to decide whether those alternatives can be trusted more than one of the big names. Certainly, those of us who develop and promote Free Software should be trying to provide trustworthy alternatives and giving less technologically-aware people the lifeline that they need to escape potentially exploitative services and yet maintain an active, social online experience. Not everyone is willing to sacrifice their privacy for shiny new online toys that supposedly need to rifle through their personal data to provide that shiny new online experience, nor is everyone likely to accept such constraints on their privacy when made aware of them. We should not merely assume that people do not care, would just accept such things, and thus do not need to be bothered with knowledge about such matters, either.

As we have already seen, individuals can make their own choices, but people in organisations are frequently denied such choices. This is where the excuses become more irrational and yet bring with them much more serious consequences. When an organisation chooses a solution from a vendor known to share sensitive information with other, potentially less friendly, parties, they might try and explain such reports away by claiming that such behaviour would never affect “business applications”, that such applications are completely separate from “consumer applications” (where surveillance is apparently acceptable, but no-one would openly admit to thinking this, of course), and that such a vendor would never jeopardise their relationship with customers because “as a customer we are important to them”.

But how do you know any of this? You cannot see what their online services are actually doing, who can access them secretly, whether people – unfriendly regimes, opportunistic law enforcement agencies, dishonest employees, privileged commercial partners of the vendor itself – actually do access your data, because how all that stuff is managed is secret and off-limits. You cannot easily inspect any software that such a vendor provides to you because it will be an inscrutable binary file, maybe even partially encrypted, and every attempt will have been made to forbid you from inspecting it both through licence agreements and legislation made at the request of exactly these vendors.

And how do you know that they value your business, that you are important to them? Is any business going to admit that no, they do not value your business, that you are just another trophy, that they share your private data with other entities? With documentation to the contrary exposing the lies necessary to preserve their reputation, how do you know you can believe anything they tell you at all?

Clouds over the IT building, University of Oslo

The Betrayal

It is all very well for an individual to make poor decisions based on wilful ignorance, but when an organisation makes poor decisions and then imposes them on other people for those people to suffer, this ignorance becomes negligence at the very least. In a university or other higher education institution, apparently at the bottom of the list of people to consult about anything, the bottoms on seats, the places to be filled, are the students: the first in line for whatever experiment or strategic misadventure is coming down the pipe of organisational reform, rationalisation, harmonisation, and all the other buzzwords that look great on the big screen in the boardroom.

Let us be clear: there is nothing inherently wrong with storing content on some network-accessible service, provided that the conditions under which that content is stored and accessed uphold the levels of control and privacy that we as the owners of that data demand, and where those we have elected to provide such services to us deserve our trust precisely by behaving in a trustworthy fashion. We may indeed select a service provider or vendor who respects us, rather than one whose terms and conditions are unfathomable and who treats its users and their data as commodities to be traded for profits and favours. It is this latter class of service providers and vendors – ones who have virtually pioneered the corrupted notion of the consumer cloud, with every user action recorded, tracked and analysed – that this article focuses on.

Students should very much beware of being sent into the cloud: they have little influence and make for a convenient group of experimental subjects, with few powerful allies to defend their rights. That does not mean that everyone else can feel secure, shielded by employee representatives, trade unions, industry organisations, politicians, and so on. Every group pushed into the cloud builds the pressure on every subsequent group until your own particular group is pressured, belittled and finally coerced into resignation. Maybe you might want to look into what your organisation is planning to do, to insist on privacy-preserving infrastructure, and to advocate Free Software as the only effective way of building that infrastructure.

And beware of the excuses – for the favourite vendor’s past behaviour, for the convenience of the cloud, for the improbability that any undesirable stuff could happen – because after the excuses, the belittlement of the opposition, comes the betrayal: the betrayal of sustainable and decentralised solutions, the betrayal of the development of local and institutional expertise, the betrayal of choice and real competition, and the betrayal of your right to privacy online.

Site Licences and Volume Licensing: Locking You Both In… and Out

Sunday, June 9th, 2013

Once upon a time, back in the microcomputer era, if you were a reputable institution and were looking to acquire software it was likely that you would end up buying proprietary software, mostly because Free Software was not a particularly widely-known concept, and partly because any “public domain” or “freeware” programs probably didn’t give you or your superiors much confidence about the quality or maintenance of those programs, although there were notable exceptions on some platforms (some of which are now Free Software). As computers became more numerous, programs would be used on more and more computers, and producers would take exception to their customers buying a single “copy” and then using it on many computers simultaneously.

In order to avoid arguments about common expectations of reasonable use – if you could copy a program onto many floppy disks and run that program on many computers at once, there was obviously no physical restriction on the use of copies and thus no apparent need to buy “official” copies when your computer could make them for you – and in order to avoid protracted explanations of copyright law to people for whom such law might seem counter-intuitive or nonsensical, the concept of the “site licence” was born. Instead of having to pay for one hundred official copies of a product – presumably consisting of one hundred disks in one hundred boxes with one hundred manuals, at one hundred times the list price of the product – an institution would buy a site licence for up to one hundred computers (or perhaps as many as the institution had, betting on the improbability that the institution would grow tenfold, say) and pay somewhat less than one hundred times the original price, although perhaps still a multiple of ten of that price.

Thus, the customer got the vendor off their back, the vendor still got more or less what they thought was a fair price, and everyone was happy. At least that is how it all seemed.

The Physical Discount Store

Now, because of the apparent compromise made by the vendor – that the customer might be paying somewhat less per copy – the notion of the “volume licence” or “bulk discount” arose: suddenly, software licences started to superficially resemble commodities, and people started to think of them just as they do when they buy other things in bulk. Indeed, in the retail sector the average person became aware of the concept of bulk purchasing with the introduction of cash and carry stores, discount stores, and so on: the larger the volume of goods passing through those channels, the bigger the discounts on those goods.

Now, economies of scale exist throughout modern commerce and often for good reason: any fixed costs (or costs largely insensitive to the scale of output) in production and distribution can be diluted by an increased number of units produced and shipped, making the total per-unit cost less; commitments to larger purchases, potentially over a longer period of time, can also provide stability to producers and suppliers and encourage mutually-beneficial and lasting relationships throughout the supply chain. A thorough treatment of this topic is clearly beyond a blog post, but it is worthwhile to briefly explore how savings arise and how discounts are made.

Let us consider a producer whose factory can produce at most a million units of a product every year. It may not seek to utilise this capacity if it cannot be sure that all units will be sold: excess inventory may incur warehouse costs and also result in an uncompetitive product going unsold or needing to be heavily discounted in order to empty those warehouses and make room for more competitive stock. Moreover, the producer may need to reconsider its employment levels if demand varies significantly, which in some places incurs significant costs both in reduction and expansion. Adding manufacturing capability might not be as easy as finding a spare factory, either. All this additional flexibility is expensive for producers.

However, if a large, well-known retailer like Wal-Mart or Tesco (to name but two that come to mind immediately) comes along and commits to buying most or all of the production, a producer now has more certainty that the inventory will be sold, that it will not be paying people to do nothing, and that it will not suddenly have to change production lines to make new products, and so on. Even things like product variations can be minimised by having a single customer or few customers, and this reduces costs for the producer still further. Naturally, Wal-Mart would expect some of the savings to be passed on to them, and so this relationship benefits both parties. (It also produces a potential discount to be passed on to retail customers who may not be buying in bulk after all, but that is another matter.)

The Software Discount Store?

For software, even though the costs of replication have been driven close to nothing, the production of software certainly has a significant fixed cost: the work required to develop a viable product in the first place. Let us say that an organisation wishes to make and sell a non-niche product but needs to employ fifty people for two years to do so (although this would have been almost biblical levels of manpower for some successful software companies in the era of the microcomputer); thus one hundred person-years are invested in development. Just to remain in business while selling “copies” of the software, one might need to sell one hundred thousand individual copies – if, say, each person-year of development costs around a thousand times what a single copy sells for. That is if the company wants to just sell “licences” and not do things like services, consulting, paid support, and so on.
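The break-even arithmetic behind that figure can be made explicit with a small sketch; again, the cost and price figures below are invented assumptions, chosen only so that the numbers line up with the example above:

    # Invented figures only: actual salaries, overheads and prices vary widely.
    developers = 50
    years = 2
    cost_per_person_year = 100_000   # salary plus overheads, say
    price_per_copy = 100             # list price of one copy

    development_cost = developers * years * cost_per_person_year  # 100 person-years in total
    break_even_copies = development_cost // price_per_copy

    print("Development cost:", development_cost)        # 10000000
    print("Copies to break even:", break_even_copies)   # 100000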

Now, the cost of each copy can be adjusted according to the number of sales. If things go better than expected, the prices could be lowered because the company will cover its costs more quickly than anticipated, but they may also raise the prices to take advantage of the desirability of the product. If things go worse than expected, the prices might be raised to bring in more revenue per sale, but such pricing decisions also have to consider the customer reaction where an increased price turns away customers who can no longer justify the expense. In some cases, however, raising the price might make the product seem more valuable and make it more attractive to potential customers, despite the initial lack of interest from such customers.

So, can one talk about economies of scale with regard to software as if it were a physical product or commodity? Not really. The days of needing to get more disks from the duplicator and more manuals from the printer, and to send more boxes to distributors, are over, leaving the bulk of the expense in employing people to get the software written. And all those people developing the product are not producing more units by writing more code or spending more time in the office. One can argue that by adding more features they are generating more sales, but it is doubtful that the relationship between features and sales is so well defined: after a while, a lot of the new features will be superfluous for all but “power users”. One can also argue that by adding more features they are making the product seem more valuable, and so a higher price can be justified. To an extent this may be the case, but the relationship between price and sales is not always so well defined, either (despite attempts to define it). But certainly, you do not need to increase your “production capacity” to fulfil a sales need: whether you make one hundred or one million sales (or generate a tenth of or ten times the anticipated revenue) is probably largely independent of how many people were hired to write the code.

But does it make sense to consider bulk purchasing of software as a way of achieving savings? Not really. Unlike physical production, there is no real limit to how many units can be sold to customers, and so beyond a certain threshold demanded by profitability, there is no need for anyone to commit to purchasing a certain number of units. Especially now that a physical component of a software product is unlikely to be provided in any transaction – the software is downloaded, the manual is downloaded, there is no “retail box”, no truck arriving at the customer, no fork-lift offloading pallets of the product – there is also no inventory sitting in a warehouse going unsold. It might be nice if someone paid a large sum of money so that the developers could keep working on the product and not have to be moved to some other project, but the constraints of physical products do not apply so readily here.

Who Benefits from Volume Licensing?

It might be said, then, that the “economies of scale” argument starts to break down when software is considered. Producers can more or less increase supply at will and at a relatively low cost, and they need only consider demand in order to break even. Beyond that point, everything is more or less profit and they deliver units at no risk to themselves. Certainly, a producer could use this to price their products aggressively and to pass on considerable savings to customers, but they have no obligation and arguably little inclination to do so for profitability reasons alone. Indeed, they probably want to finance new products and therefore need the money.

When purchasers of physical goods choose to buy in bulk, they do so to get access to savings passed on by the producer, and for some categories of products the practice of committing larger sums of money to larger purchases carries little risk. For example, an organisation might buy a larger quantity of toilet paper than it normally would – even to the point of some administrator complaining that “this must be more than we really need!” – and as long as the organisation had space to store it, it would surely be used over time with very little money wasted as a result.

But for software, any savings passed on by the producer are discretionary rather than genuine products of the economics of production, and there is a real risk of buying “more than we really need”: a licence for an office application will not get “used up” when someone has “reached the end” of another licence; overspending on such capacity is just throwing money away. It is simply not in the purchaser’s interest to buy too many licences.

Now, software producers have realised that their customers are sensitive to this issue. Presumably, the notion of the site licence or “volume licensing” arose fairly quickly: some customers may have indicated that their needs were not so well-defined that they could say that they needed precisely one hundred copies of a product, and besides, their computer users might not have all been using the software at the same time, and so it might not make sense to provide everyone with a copy of a program when they could pass the disks around (or in later times use “floating licences”). Producers want customers to feel that they are getting value for money and not spending too much, and thus the site licence was presumably offered not so that customers could buy exactly what they need, but to get them to spend a bit more than they might like – though perhaps a bit less than they would need to if money were no object and per-unit pricing were the only thing on offer. (The other way of influencing the customer is, of course, the threat of audits by aggressive proprietary software organisations, but that is another matter.)

Regardless of the theory and the mechanisms involved, do customers benefit from site licences? Well, if they spend less on a site licence than they do on the list price of a product multiplied by the number of active users of that product, then they at least benefit from savings on the licensing fees, certainly. However, there are other factors involved, introducing other broader costs, that we will return to in a moment.

Do producers benefit from site licences? Almost certainly. They allow companies to opportunistically increase revenue by inviting customers to spend a bit more for “peace of mind” and convenience of administration (no more having to track all by yourself who is using which product and whether too many people are doing so because a “helpful” company will take care of it for you). If such a thing did not exist, customers would probably choose to act conservatively and more closely review their purchases. (Or they might just choose to embrace Free Software instead, of course.)

All You Won’t Eat

But it is the matter of what the customer needs that should interest us here. If customers did need to review their purchases more closely, they might find it hard to justify spending large sums on volume licences. After all, not everyone might be in need of some product that can theoretically be rolled out to everyone. Indeed, some people might prefer another product instead: it might be much more appropriate for their kind of work, or it might work better on their platform (or even actually work on their platform where the already-bought product does not).

And where the organisation’s purse strings are loosened when buying a site licence for a product in the first instance, the organisation may not be so forthcoming with finance to acquire other products in the same domain, even if there are genuine reasons for doing so. “You already have an office program you can use; why do you want us to buy another?” Suddenly, instead of creating opportunities, volume licensing eliminates them: if the realm of physical products worked like this, Tesco would offer only one brand of toilet paper and perhaps not even a particularly pleasant one at that!

But it doesn’t stop there. Some vendors bundle products together in volume licensing deals. “Why not indulge yourself with a package of products featuring the ones you want together with some you might like?” This is what customers are made to ask themselves. Suddenly, the justification for acquiring a superior product from a competitor of the volume licensing provider is subject to scrutiny. “You already have access to an intranet solution; why do you want us to spend time and money on another?” And so the supposedly generous site licence becomes a mechanism to rein in spending and even the mere usage of alternatives (which may be Free Software acquired at no cost), all because things that people are not already actively using are wrongly perceived as being “free” to acquire. “Just take advantage of the site licence!” is what people are told, and even if the alternatives are zero cost, the pressure will still be brought to bear because “we paid for things we could use, so let’s use them!”

And the Winner is…

With such blinkered thinking the customer can no longer sensibly exercise choice: it becomes too easy to constrain an organisation’s strategy based on what else is in the lucky dip of products included in the multiple product volume licensing programme. Once one has bought into such a scheme, there is a disincentive to look elsewhere for other solutions, and soon every need to be satisfied becomes phrased in terms of the solutions the organisation has already “bought”. Need an e-mail system? The solution now has to be phrased in terms of a single vendor’s product that “we already have”. And when such extra purchases merely add to proprietary infrastructure with proprietary dependencies, that supposedly generous site licence is nothing but bait on the end of the vendor’s fishing line.

We know who the real winner is here. The real loser is anyone having to compete with such schemes, especially anyone using open standards in their products, particularly anyone delivering Free Software using open standards. Because once people have paid good money for something, they will defend that “investment” even when it makes no real sense: this is basic human psychology at work. But the customer is the loser, too: what once seemed like a good deal will just result in them throwing good money after bad, telling themselves that it’s the volume of usage – the chance to sample everything at the “all you can eat” buffet – that makes it a “good investment”, never mind that some of the food at the buffet is unhealthy, poor quality, or may even make people ill.

The customer becomes increasingly “locked in”, unable to consider alternatives. The competition becomes “locked out”, unable to persuade the customer to migrate to open-standards-based solutions or indeed anything else, because even if the customer recognised their dependency on their existing vendor, the cost of undoing the mess might well be less predictable and less palatable than a subscription fee to that “preferred” vendor, appearing as an uncomfortably noticeable entry in the accounts that might indicate strategic ineptitude or wrongdoing – that a mistake has been made – which would be difficult to acknowledge and tempting to conceal. But when the outcome of taking such uncomfortable remedial measures would be lower costs, truly interoperable systems and vastly increased choice, it would be the right thing to do.

One might be tempted to just sit back and watch all this unfold, especially if one has no connection with any of the organisations involved and if the competition consists only of a bunch of proprietary software vendors. But remember this: when the customer is spending your tax money, you are the loser, too. And then you have to wonder who apart from the “preferred” vendor benefits from making you part of the losing team.

Why the Raspberry Pi isn’t the new BBC Micro (and perhaps shouldn’t be, either)

Wednesday, January 23rd, 2013

Having read a commentary on “rivals” to the increasingly well-known Raspberry Pi, and having previously read a commentary that criticised the product and the project for not upholding the claimed ideals of encouraging top-to-bottom experimentation in computing and recreating the environment of studying systems at every level from the hardware through the operating system to the applications, I find myself scrutinising both the advocacy and the criticism of the project to see how well it measures up to those ideals, and whether the project objectives and the way they are to be achieved can still be seen as appropriate thirty years on from the introduction of microcomputers to the masses.

The latter, critical commentary is provocatively titled “Why Raspberry Pi Is Unsuitable for Education” because the Raspberry Pi product, or at least the hardware being sold, is supposedly aimed at education just as the BBC Microcomputer was in the early 1980s. A significant objective of the Computer Literacy Project in that era was to introduce microcomputers into the educational system (at many levels, not just in primary and secondary schools, although that is clearly by far the largest area in the educational sector) and to encourage learning using educational software tools as well as learning about computing itself. Indeed, the “folklore” around the Raspberry Pi is meant to evoke fond memories of that era, with Model A and B variants of the device, and with various well-known personalities being involved in the initiative in one way or another.

Now, if I were to criticise the Raspberry Pi initiative and to tread on toes in doing so, I would rather state something more specific than “education”, because I don’t think that making low-cost hardware available to education is a bad thing at all, even if it does leave various things to be desired with regard to the openness of the hardware. The former commentary mentioned above makes the point that cheap computers mean fewer angry people when or if they get broken, and this is hard to disagree with, although I would caution people against thinking that it means we can treat these devices as disposable items to be handled carelessly. In fact, I would go so far as to state that the Raspberry Pi is not the equivalent of the BBC Micro.

In the debate about openness, one focus is on the hardware and whether users can understand and experiment with it. The BBC Micro used a number of commodity components, many of which are still available in some form today, with only perhaps one or two proprietary integrated circuits in the form of uncommitted logic arrays (ULAs), and the circuit diagram was published in various manuals. (My own experience with such matters is actually related to the Acorn Electron, which is derived from the BBC Micro and uses fewer components by merging the tasks of some of the omitted components into a more complicated ULA, sacrificing some functionality in the process.) In principle, apart from the ULAs, for which only block diagrams and pin-outs were published, it was possible to understand the functioning of the hardware and thus make your own peripheral hardware without signing non-disclosure agreements (NDAs) or being best friends with the manufacturer.

Meanwhile, although things are known about the components used by the Raspberry Pi, most obviously the controversial choice of system-on-a-chip (SoC) solution, the information needed to make your own is not readily available. Would it be possible to make a BBC Micro from the published information? In fact, there was a version of the BBC Micro made and sold under licence in India, where the intention was to source only the ULAs from Acorn (the manufacturer of the BBC Micro) and to make everything else in India. Would it be desirable to replicate the Raspberry Pi exactly? For a number of reasons it would be neither necessary nor desirable to focus so narrowly on one specific device, and I will return to this shortly.

But first, on the most controversial aspect of the Raspberry Pi, it has been criticised by a number of people for using a SoC that incorporates the CPU core alongside proprietary functionality including the display/graphics hardware. Indeed, the system can only boot through the use of proprietary firmware that runs on a not-publicly-documented processing core for which the source code may never be made available. This does raise concern about the sustainability of the device in terms of continued support from the manufacturer of the SoC – it is doubtful that Broadcom will stick with the component in question for very long given the competitive pressures in the market for such hardware – as well as the more general issues of transparency (what does that firmware really do?) and maintainability (can I fix bad hardware behaviour myself?). Many people play down these latter issues, but it is clear that many people also experience problems with proprietary graphics hardware, with its sudden unexplainable crashes, and proprietary BIOS firmware, with its weird behaviour (why does my BIOS sometimes not boot the machine and just sit there with a stupid Intel machine version message?) and lack of functionality.

One can always argue that the operating system on the BBC Micro was proprietary and the source code never officially published – books did apparently appear in print with disassembled code listings, clearly testing then-imprecisely-defined boundaries of copyright – and that the Raspberry Pi can run GNU/Linux (and the proprietary operating system, RISC OS, that is perhaps best left as a historical relic); if anything, I would argue that the exposure that Free Software gets from the Raspberry Pi is one of the initiative’s most welcome outcomes. Back in the microcomputer era, proprietary things were often regarded as good things in the misguided sense that, since they were only offered by one company to customers of that company, they would presumably offer exclusive features that not only acted as selling points for that company’s products but also gave customers some kind of “edge” over people buying the products of the competitors – if this mattered to you, of course – an attitude arguably most celebrated in recollections of playground/schoolyard arguments over who had the best computer.

The Computer Literacy Project, even though it did offer funding to buy hardware from many vendors, sadly favoured one vendor in particular. This might seem odd as a product of a government and an ideology that in most aspects of public life in the United Kingdom emphasised and enforced competition, even in areas where competition between private companies was a poor solution for a problem best solved by state governance, and so de-facto standards as opposed to genuine standards ruled the education sector (just as de-facto standards set by corporations, facilitated by dubious business practices, ruled other sectors from that era onwards). Thus, a substantial investment was made in equipment and knowledge tied to one vendor, and it would be that vendor the customers would need to return to if they wanted more of the same, either to continue providing education on the range of supported topics or related ones, or to leverage the knowledge gained for other purposes.

The first commentary mentioned above uses the term “the new Raspberry Pi” as if the choice is between holding firm to a specific device, with its expanding but specific ecosystem of products and other offerings, or discarding it and choosing something that offers more “bang for the buck”. Admittedly, the commentary also notes that there are other choices for other purposes. But just as the BBC Micro enjoyed a proliferation of peripheral hardware and software – both commissioned for the platform and written as the market expanded – the Raspberry Pi is likely to enjoy the same, and even though this does mean that people will be able to do things that they never considered doing before, particularly with hardware and electronics, there is a huge risk that all of this will be done in a way that emphasises a specific device – a specific solution involving specific investments – and that serves to fragment the educational community and reproduce the confusion and frustration of the microcomputer era, where a program or device required a specific machine to work.

Although it appeals to people’s nostalgia, the educational materials that should be (and presumably are) the real deliverable of the Raspberry Pi initiative should not seek to recreate the Tower of Babel feeling brought about by opening a 1980s book like Computer Spacegames and having to make repeated corrections to programs so that they might have a chance of running on a particular system (even though this may in itself have inspired in me a curiosity about the diversity seen in systems and in both machine and natural languages). Nothing should be “the old Raspberry Pi” or “the new Raspberry Pi” or “the even newer Raspberry Pi”, because dividing things up like this will mean that people end up using the wrong instructions for the wrong thing, becoming frustrated and giving up. Just looking at the chaos in the periphery around the Arduino is surely enough of a warning.

In short, we should encourage diversity in the devices and solutions offered for people to learn about computing, and we should seek to support genuine standards and genuine openness so that everyone can learn from each other, work with each other’s kit, and work together on materials that support them all as easily and as well as possible. Otherwise, we will have learned nothing from the past and will repeat the major mistakes of the 1980s. That is why the Raspberry Pi should not be the new BBC Micro.