Paul Boddie's Free Software-related blog


Archive for February, 2022

Making a Board for the PIC32 VGA Signal Generation Project

Tuesday, February 22nd, 2022

Although I have tried to maintain an interest in computing hardware over the last two years or so, both contemporary and somewhat older technologies, I haven’t really had much time or energy to devote to electronics projects. However, I was becoming increasingly bothered by my existing VGA signal generation project taking up space on a couple of solderless breadboards, demanding lots of jumper wires, and generally not helping me organise and consolidate my collection of electronics-related acquisitions, components, and so on. I was also feeling a bit bad for not moving the VGA project on to the next step, which is what I had virtually promised to do. Having acquired a new computer in 2020 but not having really looked at KiCad since migrating to the new machine, I finally plucked up the courage to set about translating my existing notes into a proper circuit diagram, reacquainting myself with KiCad’s peculiarities, and then translating this into an actual board layout.

My priorities for the circuit board were to support signal generation and programming: gathering together all the passive components associated with signal generation – the resistors for combining colour intensity levels – plus those associated with programming the microcontroller – more resistors – and putting them on the board with a socket for the microcontroller. Unlike boards trying to replicate the Arduino experience, this board would not seek to offer programming facilities itself: that would remain the job of another board, with the Arduino Duemilanove already doing it adequately under the control of the Nanu Nanu software. Headers would therefore expose the programming pins for connection to the Arduino or other device.
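
As an aside, the arithmetic behind such a resistor network is simple enough to sketch, with the caveat that the values below are purely illustrative and not necessarily those used on this board. A VGA input is terminated at 75Ω and expects roughly 0.7V for full intensity, so with two microcontroller pins driving a colour channel at voltages V_1 and V_2 through weighted resistors R_1 and R_2 into the termination R_T, the combined level follows from Millman’s theorem:

    V_out = (V_1/R_1 + V_2/R_2) / (1/R_1 + 1/R_2 + 1/R_T)

With both pins at 3.3V, R_1 = 470Ω and R_2 = 1kΩ, this gives around 0.63V, which is in the right region for full intensity, and the other pin combinations then produce the intermediate levels.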

Nor would the board offer its own VGA connector for the VGA signals: I already have a VGA connector terminal block, and mounting a DE-15 connector on a board raises issues of selecting an appropriate component, defining a usable component footprint, making room on the board for it, and ensuring that the connector is securely attached without risking stress on the board or solder joints. Besides, the idea was to make use of my existing parts, and by retaining use of the terminal block, I could just use a header for the VGA signal outputs, which would require only a modest amount of space. I envisaged that any future solution with a proper socket for a VGA cable might well involve a separate board securely mounted to a case, with the case taking the strain of cable insertions and removals, and with cables connecting this separate board to the one being designed here.

One issue that I could easily imagine stalling the process of making a board is that of its size and shape. With something resembling a miniature computing system, one might be tempted to select a broadly-adopted circuit board profile like PC/104 or one of the ITX family, or perhaps to leverage the deluge of Raspberry Pi cases available for one of those products (despite their variations and incompatibilities). However, my choice was practically made for me: I already had one Arduino case that I hadn’t used for anything, and it seemed to me that the traditional Arduino board profile was appropriately sized and would not be particularly inconvenient to adopt, although its actual physical characteristics were never adequately defined, leaving it to others to do the work of documentation that should have been done by the Arduino initiative right from the start. (There are subtle differences between Duemilanove models that have actually made some cases incompatible with some versions of that board, so this was a real problem.)

Anyway, with the physical and electrical constraints mostly figured out, I laid out the board, putting headers in places I considered sensible enough, not aiming for Arduino shield compatibility since I needed more flexibility than it would allow. I also wanted to add support for various other useful features, including some that I had already tested, such as UART communication, together with some I had not been able to adequately investigate, such as the driving of peripherals (or the microcontroller itself) over the parallel bus, and USB connectivity. The latter involved messing around with the differential pair routing features of KiCad, although it is entirely possible that such features are largely superfluous at the transmission frequencies this board would end up using. After much checking, rechecking, agonising, and so on, I uploaded the board design to OSHPark and placed an order.

Several weeks later, I received the boards in the post, thankfully without any customs fees or other charges. In the meantime, I had also ordered components that I needed from a supplier in the UK, Technobots, who happen to sell logic chips and other things that suppliers targeting the “maker” community tend not to bother selling. Thankfully also in this case, the components arrived without incurring fees and charges, although I had kept the value of my order fairly low. In Norway, the industrial lobby hate to see people importing things, often taking the tone that people should buy locally produced goods, but since nobody makes these things here, anyone “local” that sells them has to import them, and the result is often just the cheapest stuff at ridiculously high prices and some middlemen making all the money, rather than anyone earning a decent living out of actually producing anything. But anyway.

The boards themselves were well made, as I have seen before, although I was disappointed with the “tabs” around the edge. When boards get made, people will tell you that they are all put on a big panel together and that they therefore need to be attached to each other somehow. The manufacturer will then typically spell out that, apart from separating the boards by snapping them apart, they don’t do any further finishing work on the edges because we, the customers, aren’t paying for that level of service. However, I don’t remember the remnants of these inter-board connections – the tabs – being so awkward on previous boards from OSHPark. If I had to do anything before, I’m sure it just involved some gentle trimming, but these needed the application of glasspaper to grind the tabs down level with the actual board edge. Given the sub-millimetre tolerances involved in board fabrication, I find it perverse that such finishing should fall to someone – me or someone in the factory – to do by hand.

The fabricated boards, with the annoying “tabs” visible around the edges.

Having tested the continuity between different pads using a simple power plus LED plus resistor arrangement, all that remained was to get the soldering equipment out and to build myself up to the usually arduous task of fitting all the headers, resistors, capacitors and the socket. This was something of a trial in itself, but I eventually remembered the various tricks needed to coerce my soldering iron into applying the solder in a barely decent way.

The finished board connected to the VGA terminal block and Arduino Duemilanove.

One thing I realised very quickly was that I had chosen the wrong component footprint for the resistors. In navigating the rather unreadable list provided by the inflexible component assignment dialogue within KiCad, I had chosen footprints whose pads were not sufficiently separated for the resistors I happen to have (and the ones likely to be sold for these purposes). So, I had to raise the resistors up from the board and tuck the leads under them slightly. Despite the additional height occupied by the resistors, the capacitors and headers are rather tall anyway, and so this workaround causes no other problems. I also discovered that placing the resistors in the compact arrangement shown, while very neat and tidy, made all the unsoldered leads rather crowded on the underside of the board and rather increased the risk of accidental solder bridging between components, at least with my soldering technique. The capacitors had appropriate footprints, however, and were less of a challenge.

What followed was another trial, mostly caused by a simple wiring error, and lots of troubleshooting involving the programming software and the circuits both on this board and on the breadboard, chasing down a non-existent programming issue. The programming circuit was actually fine, and the programming was being performed as normal, but in maintaining a level of doubt about the outcome of the exercise, I had run the verification operation of the programming software and had seen that it was consistently complaining about a programming error. Worryingly, this error now seemed to occur both on the manufactured board and on the breadboard, for both a microcontroller that I had used before and a completely unused one. I started to test and investigate different versions of the programming software to see if any regressions had occurred.

Eventually, I realised that the error concerned the last thing the software would do when programming the microcontroller: setting a configuration register. Ignoring this and reminding myself of the UART configuration, I sought to test the example program demonstrating UART communication and found, eventually, that it worked on the breadboard. It worked on the manufactured board, too, raising my confidence slightly. And then, while perusing my previous blog post, I realised that I had connected the VGA sync signal pins the wrong way round! Programming the VGA example program and fixing the wiring finally got me to where I should have been to begin with.
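
For anyone curious, the kind of UART test involved is easy enough to sketch. The following is not the actual example program, merely a minimal equivalent assuming the PIC32MX register conventions found in typical toolchain headers and an assumed peripheral clock frequency; on devices with peripheral pin select, the UART pins would also need mapping:

    /* Minimal UART transmission test for a PIC32MX device (illustrative).
       The peripheral clock frequency is an assumption and must match the
       actual oscillator configuration in use. */
    #include <xc.h>

    #define PBCLK 24000000  /* assumed peripheral bus clock in Hz */
    #define BAUD  115200

    static void uart_init(void)
    {
        U1BRG = PBCLK / (16 * BAUD) - 1; /* standard-speed baud divisor */
        U1STAbits.UTXEN = 1;             /* enable the transmitter */
        U1MODEbits.ON = 1;               /* enable the UART itself */
    }

    static void uart_write(char c)
    {
        while (U1STAbits.UTXBF);         /* wait while the transmit buffer is full */
        U1TXREG = c;
    }

    int main(void)
    {
        uart_init();
        for (;;)
        {
            const char *s = "Hello from the PIC32!\r\n";
            while (*s)
                uart_write(*s++);
        }
    }

Output like this, observed via the Arduino or a USB-to-serial adapter, is enough to confirm that the microcontroller was being programmed and was running all along.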

The board generating a VGA signal via a terminal block and cable, with the Arduino supplying power and acting as programmer.

The arrangement of boards required here is a bit cumbersome, although the Arduino could be replaced by a simple 3.3V power source if the appropriate software has already been programmed into the PIC32 microcontroller. As noted above, an auxiliary board could provide a VGA socket for a cable and thus eliminate the bulky terminal block.

The three-board arrangement of VGA terminal block, VGA signal board, and Arduino Duemilanove.

As far as using an existing Arduino case is concerned, there are some obvious limitations and possibilities for improvement and refinement. The size and shape of the manufactured board are compatible with the case I happen to have, but where an Arduino shield would be stacked on top of the Duemilanove directly, this board necessitates that the connections be routed using wires or cables between the Arduino’s analogue header and the programming header on the new board. Since I mounted female headers on this board, the cables have to be routed around its edge, and the lack of clearance between the Arduino’s headers and the underside of this board means that there is not enough space to use jumper wires. Fortunately, breadboard jumper cables can do this job, if not entirely elegantly.

The board and the Duemilanove in a case, with the cables routing programming and power signals between the boards.

Introducing a degree of shield compatibility might be a solution to such problems, but perhaps I should not be too hard on myself. The new computer I bought in 2020 has its own compartment for cables, and this compartment is rammed full of seemingly needless, bulky cabling, presumably demonstrating the designers’ aptitude for cable management as they neglected other basic functionality one would expect from a computer case in the third decade of the twenty-first century.

There are other modifications that I might make in a second version of this board. Challenged by the layout issues, I neglected to realise that if the VGA signal resistors are fitted, some of the parallel bus pins will have their own signals mixed with others: I should have specified diodes for the various signal lines concerned. Still, with spare boards and another microcontroller to play with, I could possibly just make up a board without those resistors if I wanted to experiment with parallel mode. Since this would most likely involve driving a screen, it wouldn’t make sense to try to support both that and VGA output on the same board, although I could imagine some kind of switching solution disabling one and enabling the other as appropriate.

On reflection, the outcome is satisfactory. I did spend rather too long designing, building and – especially – troubleshooting the board, and there are definitely things that are obviously wrong with it, but it has now allowed me to dismantle the original breadboard circuit, free up some jumper wires and components, and put all of those things away properly. It has also helped me become familiar with KiCad once again, encouraged me to document more schematics, and opened the door to doing more circuit design in the future. In any case, I hope that this was a somewhat interesting view into the realisation of a long overdue project.

Some thoughts about technological sustainability

Saturday, February 12th, 2022

It was interesting to see an apparently recent article, “On the Sustainability of Free Software”, published by the FSFE in the context of the Upcycling Android campaign. I have been advocating for sustainable Free Software for some time: when I wasn’t posting articles about my own Python-like language, electronics projects or microkernel-based system development, it seems that I was posting quite a few about sustainable software, hardware and technology.

So, I hardly feel it necessary to go back through much of the same material again. Frustratingly, very little has improved over the years, it would seem: some new initiatives emerge, and such things always manage to excite some people, but the same old underlying causes of a general lack of sustainability remain, among them the lack of access to affordable, long-lasting and supportable hardware, and the absence of properly funded development of Free Software and of the hardware that would run it.

Of course, I wouldn’t even be bothered to write this if I didn’t feel that there might be some positive insights to share, and recent events have prompted me to do so. Hopefully, I can formulate them concisely and constructively in the following paragraphs.

The Open Hardware Crisis

Alright, so that was a provocative heading – hardly positive or constructive – and with so much hardware hacking (of the good kind) going on these days, it might be tempting to ask “what crisis?” Well, evidently, some people think there has been a crisis around the certification of hardware by the Free Software Foundation: that the Respects Your Freedom criteria don’t really help get hardware designed and made that would support (or be supported by) Free Software; that the criteria fail to recognise practical limitations with some elements of hardware, imposing rigid and wasteful measures that fail to enhance the rights of users and that might even impair the longevity and repairability of devices.

A lot of the hardware we rely on nowadays depends on features that cannot easily be supported by Free Software. The system I use has integrated graphics that require proprietary firmware from AMD to work in any half-decent way, as do many processors and interfacing chips, some not even working in any real way at all without it. Although FPGA technology has become more accessible and has invigorated the field of open hardware, there are still considerable challenges around the availability of Free Software toolchains for those kinds of devices. Nor is it completely clear how programmable logic devices intersect with the realm of Free Software. Should people expect the corresponding source code and the means to generate a “bitstream” for the FPGAs in a system? I would think so, given appropriate licensing, but I am not familiar enough with the legal and regulatory constraints to say for certain.

The discussion around this may sound like a storm in a teacup, especially if you do not follow the appropriate organisations, figures, and their mailing lists (and I recommend saving your time and not bothering to, either), but it also sounds rather like tactics have prevailed over strategy. The fact is that without hardware being made to run Free Software, there isn’t really going to be much of a Free Software movement. So, instead of recommending increasingly ancient phones as “ethical gifts” or hoping that the latest crop of “Linux phones” will deliver a package that will not only run Linux but also prove to be usable as a phone, maybe a move away from consumerism is advised. Consumerism, of course, being the tendency to solve every problem by choosing the “best deal” the market happens to be offering today.

What the likes of the FSF need to do is to invest in hardware platforms that are amenable to the deployment of Free Software. This does not necessarily mean totally rejecting hardware if it has unfortunate characteristics such as proprietary firmware, particularly if there is no acceptable alternative, but the initiative has to start somewhere, however imperfect that somewhere might be. Much as we would all like to spend thousands of dollars on hardware that meets some kind of liberty threshold, most of us don’t have that kind of money and would accept some kind of compromise (just as we have to do most of the time, anyway), especially if we felt there was a chance to make up for any deficiencies later on. Trying to start from an impossible position means that there is no “here and now”, never mind “later on”.

Sadly, several attempts at open hardware platforms have struggled and could not be sustained. Some of these were criticised for having some supposed flaw or other that apparently made them unacceptable to the broader Free Software community, and yet they could have led to products that might have remedied those flaws. Meanwhile, consumerist instincts had all the money chasing the latest projects, and yet here we are in 2022, barely any better off than we were in 2012. Had the FSF and company actually supported hardware projects that sought to support their own vision, as opposed to just casually endorsing projects and hoping they came good, we might be in a better place by now.

The Ethical Software Crisis

One depressingly recurring theme is the lack of support given to Free Software developers and to Free Software development, even as billion-to-trillion-dollar corporations bank substantial profits on the back of Free Software. As soon as some random developer deletes his JavaScript package from some repository or other, or even switches it out for something that breaks the hyperactive “continuous integration” of hundreds or thousands of projects, everyone laments the fragility of “the system” and embeds that XKCD cartoon with the precarious structure that you’ve all presumably seen. Business-as-usual, however, is soon restored.

Many of the remedies for overworked, underpaid, burned-out developers have the same, familiar consumerist or neoliberal traits. Trait number one is, of course, to let a million projects bloom, carefully selecting the winners and discarding any that fail to keep up with the constant technological churn that also plagues our societies. Beyond that, things like bounties and donations are proposed, and funding platforms helpfully materialise to facilitate the transaction, themselves mostly funded not by bounties and donations (other than the cut of other people’s bounties and donations, of course) but by venture capital money. Because who would want to be going from one “gig” to the next when they could actually get a salary?

And it is revealing that organisations engaging in Free Software development tend to have an enthusiasm not for hiring actual developers but for positions like “community manager”, frequently with the responsibility for encouraging contributions from that desirable stream of eager volunteers. Alongside this, funding is sought from a variety of sources, some of them public institutions or progressive organisations perhaps sensing a growing crisis and feeling that something should be done. Other sources are perhaps more about doing “philanthropic work” on behalf of wealthy patrons, although I would think twice about taking money from people whose wealth has been built on the back of facilitating psychological warfare on entire populations, undermining public health policies and climate change mitigation, enabling inter-ethnic violence and hate generally, and providing a broadcasting platform for extremists. But as they say, beggars can’t be choosers, right?

It may, of course, be argued that big companies are big employers of Free Software developers. Certainly, lots of people seem to work on Free Software projects in companies like, ahem, Blue Hat. And some of that corporate development does deliver usable software, or at least it helps to mitigate the usability issues of the software being produced elsewhere, maybe in another part of the very same corporation. Large, stable organisations may well be the key to providing developers with secure incomes and the space to focus on producing high-quality, well-designed, long-lasting software. Then again, such organisations sometimes exhibit ethical deficiencies in their own collective activities by seeking to aggressively protect revenue streams by limiting interoperability, reduce costs through offshoring, assert patents against others, and impose needless technological change on their customers and the broader market simply to achieve a temporary competitive advantage.

Free Software organisations should be advocating for quality, stable employment for software developers. For too long, Free Software has been perceived as something for nothing where “everybody else” pays, even as organisations and individuals happily pay substantial sums for hardware and for proprietary software. Deferring to “the market” does nobody any good in the end: “the market” will only pay for what it absolutely has to, and businesses doing nicely selling solutions (who might claim that “the market” works for them and should be good enough for everyone) all too frequently rely on practically invisible infrastructure projects that they get for free. It arguably doesn’t matter whether it is public institutions, as opposed to businesses, that end up hiring people, as long as they get decent contracts and aren’t at risk of all being laid off because some right-wing government wants to slash taxes for rich people, as tends to happen every few years.

And Free Software organisations should be advocating for ethical software development. Although the public mood in general may lag rather too far behind that of more informed commentary, the awareness many of us have of the substantial ethical concerns around various applications of computing – artificial intelligence, social media/manipulation platforms, surveillance, “cryptocurrencies”, and so on – requires us to uphold our principles, to recognise where our own principles fall short, and to embrace other causes that seek to safeguard the rights of individuals, the health of our societies, and the viability of our planet as our home.

The Accessible Infrastructure Crisis

Some of that ethical software development would also recognise the longevity we hope our societies may ultimately have. And yet we have every reason to worry about our societies becoming less equitable, less inclusive, and less accessible. The unquestioning adoption of technology-driven, consumerist solutions has led to many of the interactions we individuals have with institutions and providers of infrastructure being mediated by random companies who have inserted themselves into every kind of transaction they have perceived as highly profitable. Meanwhile, technologists have often embraced change through newer and newer technology for its own sake, not for the sake of actual progress or for making life easier.

While devices like smartphones have been liberating for many, providing capabilities that one could only have dreamed of a few decades ago, they also introduce the risk of imposing relationships and restrictions on individuals to the point where those unable to acquire or use technological devices may find themselves excluded from public facilities, commercial transactions, and even voting in elections or other processes of participatory democracy. Such conditions may be the result of political ideology, the outsourcing and offshoring of supposedly non-essential activities, and the trimming back of the public sector, with any consequences, conflicts of interest, and even corrupt dealings being ignored or deliberately overlooked, dismissed as “nothing that would happen here”.

The risk to Free Software and to our societies is that we as individuals no longer collectively control our infrastructure through our representatives, nor do we control the means of interacting with it, the adoption of technology, or the pace at which such technology is introduced and made obsolete. When the suggested solution to problems with supposedly public infrastructure is to “get a new phone” or “upgrade your computer”, we are actually being exploited by corporate interests and their facilitators. Anyone participating in such a cynical deployment of technology must, I suppose, reconcile their sense of a job well done with the sight of their fellow citizens being obstructed, impoverished or even denied their rights.

Although Free Software organisations have tried to popularise unencumbered sources of mobile software and to promote techniques and technologies to lengthen the lifespan of mobile devices, more fundamental measures are required to reverse the harmful course taken by many of our societies. Some of these measures are political or social, and some are technological. All of them are necessary.

We must reject the notion that progress is dependent on technological consumption. While computers and computing devices have managed to keep getting faster, despite warnings that such trends would meet their demise one way or another, improvements in their operational effectiveness in many regards have been limited. We may be able to view higher quality video today than we could ten years ago, and user interfaces may be pushing around many more pixels, but the results of our interactions are not necessarily more substantial. Yet the ever-increasing demands of things like Web browsers mean that systems become obsolete and are replaced with newer, faster systems to do exactly the same things in any qualitative sense. This wastefulness, burdening individuals with needless expenditure and burdening the environment with even more consumption, must stop.

We must demand interoperability and openness with regard to public infrastructure and even commercial platforms. It should be forbidden to insist that specific products be used to interact with public services and amenities or with commercial operators. The definition and adoption of genuinely open standards would be central to any such demands, and we would need to insist that such standards encompass every aspect of such interactions and activities, without permitting companies to extend them in incompatible, proprietary ways that would deliberately undermine such initiatives.

We must insist that individuals never be under any obligation to commercial interests when interacting with public infrastructure: that their obligations are only to the public bodies concerned. And, similarly, we should insist that companies may not require us to enter into ongoing commercial relationships with other companies purely as a condition of any transaction. It is unacceptable, for example, that individuals should need such a relationship with a foreign technology company purely to gain access to essential services or to conduct purchases or similar transactions.

Ideas for Remedies

In conclusion, we need to care about technological freedoms – our choices of hardware and software, things like online privacy, and all that – but we also need to recognise and care about the social, economic and political conditions that threaten such freedoms. We can’t expect to set up a nice Free Software computing system and to use it forever when forces in society compel us to upgrade every few years. Nor can we expect people to make the hardware for such systems, let alone at an affordable price, when technological indulgence drives the sophistication of hardware to levels where investment in open hardware production is prohibitively costly. And we can’t expect to use our Free Software systems if consumerist and/or corrupt choices sacrifice interoperability and pander to entrenched commercial interests.

The vision here, if we can even call it that, is that we might embrace the “essential” nature of our computing needs and thus embrace hardware with adequate levels of sophistication that could, if everyone were honest with themselves, get the job done just fine. Years ago, people used to say how “Linux is great because it works on older systems”, but these days you apparently cannot even install some distributions with less than 1GB of RAM, and Blue Hat is seemingly going to put all the bloat of a “modern” Web browser into its installer. And once we have installed our system, do we really need a video playing in the background of a Web page as we navigate a simple list of train times? Do we even need something as sophisticated as a Web browser for that at all?

Embracing mature, proven, reliable, well-understood hardware would help hardware designers to get their efforts right, and if hardware standards and modularity were adopted, there would still be the chance to introduce improvements and enhancements. Such hardware characteristics would also help with software support: instead of rushing the long, difficult journey of introducing support for poorly documented components from unhelpful manufacturers eager to retire those components and to start making future products, the aim would be to support components with long commercial lifespans whose software support is well established and hopefully facilitated by the manufacturer. And with suitably standardised or modular hardware, creativity and refinement could be directed towards those aspects of the hardware which are often neglected or underestimated, such as ergonomics and the other elements of traditional product design.

With stable hardware, there might be more software options, too. Although many would propose “just putting Linux on it” for any given device, one need only consider the realm of smartphones to realise that such convenient answers are not necessarily the most obviously correct ones, particularly for certain definitions of “Linux”. Instead of choosing Linux because it probably supports the hardware, only to find that as much time is spent fixing that support and swimming against the currents of upstream development as would have been spent implementing that support elsewhere, other systems with more desirable properties could be considered and deployed. We might even encourage different systems to share functionality, instead of wrapping it up in a specific framework that resists portability. Such systems would also aspire to avoid the churn throughout the GNU-plus-Linux-plus-graphical-stack-of-the-day familiar to many of us, potentially allowing us to use familiar software over much longer periods than we have generally been allowed to before, retrocomputing platforms aside.

But to let all of this happen and to offer a viable foundation, we must also ensure that such systems can be used in the wider world. Otherwise, this would merely be an exercise in retrocomputing. Now, there is an argument that there are plenty of existing standards that might facilitate this vision, and perhaps going along with the famous saying about standards (the good thing being that there are so many to choose from), we might wish to avoid the topic of yet another widely-referenced XKCD cartoon by actually adopting some of them instead of creating more of them. That is not to say that we would necessarily want to go along with the full breadth of some standards, however: XML deliberately narrowed down SGML to make a more usable technology, despite its own reputation for complexity. And since some standards were probably “front-run” by companies wishing to elevate their own products, within which various proposed features were already implemented, thus forcing their competitors to play catch-up, it is entirely possible that various features are superfluous or frivolous.

There have already been attempts to simplify the Web or to make a simpler Web-like platform, Gemini being one of them, and there are persuasive arguments that such technologies should be considered as separate from the traditional Web. After all, the best of intentions in delivering a simple, respectful experience can easily be undermined by enthusiasm for the latest frameworks and fashions, or by the insistence that less than respectful techniques and technologies – user surveillance, to take one example – be introduced to “help understand” or “better serve” users. A distinct technology might offer easy ways of resisting such temptations by simply failing to support them conveniently, but the greater risk is that it might not even get adopted significantly at all.

New standards might well be necessary, but revising and reforming existing ones might well be more productive, and there is merit in focusing standards on the essentials. After all, people used the Web for real work twenty years ago, too. And some would argue that today’s Web is just reimplementing the client-server paradigm, but with JavaScript on the front end, grinding your CPU, and with application-specific communications conducted between the browser and the server. Such communications will, for the most part, not be specified and will be prone to changing and breaking, interoperability being the last thing on anybody’s mind. Formalising such communications, and adopting technologies more appropriate to each device and to each user, might actually be beneficial: instead of megabytes of JavaScript passing across the network and through the browser, the user would get to choose how they access such services and which programs they might use, rather than having “an experience” foisted upon them.
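
To make this concrete with a deliberately hypothetical example: were a transport operator to publish a documented, stable query interface – the endpoint and its parameters below are invented purely for illustration – fetching a list of train times could be the job of any small program the user chooses, here using libcurl, rather than a browser session:

    /* Hypothetical client for a documented departures service.
       The URL and its parameters are invented for illustration; the
       point is that a formalised interface could be consumed by any
       program, not just a browser. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl;
        CURLcode res;

        curl_global_init(CURL_GLOBAL_DEFAULT);
        curl = curl_easy_init();
        if (curl == NULL)
            return 1;

        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://rail.example.org/departures?station=OSL");

        /* With no write callback configured, libcurl writes the
           response body to standard output. */
        res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "Request failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return res == CURLE_OK ? 0 : 1;
    }

Whether the response is plain text, JSON or something else, the essential property is that the format is specified and stable, so that tools for different devices and different users can be built upon it.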

Such an approach would actually return us to something close to the original vision of the Web. But standards surely have to be seen as the basis of the Free Software we might hope to use, and as the primary vehicle for the persuasion of others. Public institutions and businesses care about reaching the biggest possible audience, and this has brought us to a rather familiar sight: the anointment of two viable players in a particular market and no others. Back in the 1990s, the two chosen ones in desktop computing were Microsoft and Apple, the latter kept afloat by the former so that Microsoft would not be perceived as a de-facto monopoly and thereby become subject to proper regulation. Today, Apple and Google are the gatekeepers in mobile computing, with even Microsoft being an unwelcome complication.

Such organisations want to offer solutions that supposedly reach “the most users”, will happily commission “apps” for the big two players (and Microsoft, sometimes, because habits and favouritism die hard), and will probably shy away from suggesting other solutions, labelling them as confusing or unreliable, mostly because they just don’t want to care: their job is done, the boxes ticked, more effort gives no more reward. But standards offer the possibility of reaching every user, of meeting legal accessibility requirements, and potentially allowing such organisations to delegate the provision of solutions to their favourite entity: “the market”. Naturally, some kind of validation of standards compliance would probably be required, but this need not be overly restrictive nor the business of every last government department or company.

So, I suppose a combination of genuinely open standards facilitating Free Software and accessible public and private services, with users able to adopt and retain open and long-lasting hardware, might be a glimpse of some kind of vision. How people might make good enough money to be able to live decently is another question entirely, but then again, perhaps cultivating simpler, durable, sustainable infrastructure might create opportunities in the development of products that use it, allowing people to focus on improving those products, that infrastructure and the services they collectively deliver as opposed to chasing every last fad and fashion, running faster and faster and yet having the constant feeling of falling behind. As many people seem to experience in many other aspects of their lives.

Well, I hope the positivity was in there somewhere!