Paul Boddie's Free Software-related blog


Archive for the ‘operating systems’ Category

Filesystem Abstractions for L4Re

Tuesday, February 12th, 2019

In my previous posts, I discussed the possibility of using “real world” filesystems in L4Re, initially considering the nature of code to access an ext2-based filesystem using the library known as libext2fs, then getting some of that code working within L4Re itself. Having previously investigated the nature of abstractions for providing filesystems and file objects to applications, it was inevitable that I would now want to incorporate libext2fs into those abstractions and to try and access files residing in an ext2 filesystem using those abstractions.

It should be remembered that L4Re already provides a framework for filesystem access, known as Vfs or “virtual file system”. This appears to be the way the standard file access functions are supported, with the “rom” part of the filesystem hierarchy being supported by a “namespace filesystem” library that understands the way that the deployed payload modules are made available as files. To support other kinds of filesystem, other libraries must apparently be registered and made available to programs.

Although I am sure that the developers of the existing Vfs framework are highly competent, I found the mechanisms difficult to follow and quite unlike what I expected, which was to see a clear separation between programs accessing files and other programs providing those files. Indeed, this is what one sees when looking at other systems such as Minix 3 and its virtual filesystem. I must also admit that I became tired of having to dig into the code to understand the abstractions in order to supplement the reference documentation for the Vfs framework in L4Re.

An Alternative Framework

It might be too soon to label what I have done as a framework, but at the very least I needed to decide upon a few mechanisms to implement an alternative approach to providing file-like abstractions to programs within L4Re. There were a few essential characteristics to be incorporated:

  • A way of requesting access to a named file
  • The provision of objects maintaining the state of access to an opened file
  • The transmission of file content to file readers and from file writers
  • A way of cleaning up when programs are no longer accessing files

One characteristic that I did want to uphold in any solution was to make programs largely oblivious to the nature of the accessed filesystems. They would navigate a virtual filesystem hierarchy, just as one does in Unix-like systems, with certain directories acting as gateways to devices exposing potentially different filesystems with superficially similar semantics.

Requesting File Access

In a system like L4Re, with notions of clients and servers already prevalent, it seems natural to support a mechanism for requesting access to files that sees a client – a program wanting to access a file – delegating the task of locating that file to a server. How the server performs this task may be as simple or as complicated as we wish, depending on what kind of architecture we choose to support. In an operating system with a “monolithic” kernel, like GNU/Linux, we also see such delegation occurring, with the kernel being the entity having to support the necessary wiring up of filesystems contributing to the virtual filesystem.

So, it makes sense to support an “open” system call just like in other operating systems. The difference here, however, is that since L4Re is a microkernel-based environment, both the caller and the target of the call are mere programs, with the kernel only getting involved to route the call or message between the programs concerned. We would first need to make sure that the program accessing files has a reference (known as a “capability”) to another program that provides a filesystem and can respond to this “open” message. This wiring up of programs is a task for the system’s configuration file, as featured in some of my previous articles.
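
In concrete terms, this wiring up is done with Ned’s Lua-based configuration mechanism, much as in the deployment example later in this article. Here is a minimal sketch of the idea, assuming a hypothetical filesystem server program called ex_fsserver and client called ex_fsclient; the capability name “server” is also just an illustrative choice:

local L4 = require("L4");

local l = L4.default_loader;

local server = l:new_channel();

l:start({ caps = { server = server:svr() } }, "rom/ex_fsserver");
l:start({ caps = { server = server } }, "rom/ex_fsclient");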

We may now consider what the filesystem-providing program or filesystem “server” needs to do when receiving an “open” message. Let us consider the failure to find the requested file: the filesystem server would, in such an event, probably just return a response indicating failure without any real need to do anything else. It is in the case of a successful lookup that the response to the caller or client needs some more consideration: the server could indicate success, but what is the client going to do with such information? And how should the server then facilitate further access to the file itself?

Providing Objects for File Access

It becomes gradually clearer that the filesystem server will need to allocate some resources for the client to conduct its activities, to hold information read from the filesystem itself and to hold data sent for writing back to the opened file. The server could manage this within a single abstraction and support a range of different operations, accommodating not only requests to open files but also operations on the opened files themselves. However, this might make the abstraction complicated and also raise issues around things like concurrency.

What if this server object ends up being so busy that while waiting for reading or writing operations to complete on a file for one program, it leaves other programs queuing up to ask about or gain access to other files? It all starts to sound like another kind of abstraction would be beneficial for access to specific files for specific clients. Consequently, we end up with an arrangement like this:

Accessing a filesystem and a resource

When a filesystem server receives an “open” message and locates a file, it allocates a separate object to act as a contact point for subsequent access to that file. This leaves the filesystem object free to service other requests, with these separate resource objects dealing with the needs of the programs wanting to read from and write to each individual file.

The underlying mechanisms by which a separate resource object is created and exposed are as follows, with a rough code sketch after the list:

  1. The instantiation of an object in the filesystem server program holding the details of the accessed file.
  2. The creation of a new thread of execution in which the object will run. This permits it to handle incoming messages concurrently with the filesystem object.
  3. The creation of an “IPC gate” for the thread. This effectively exposes the object to the wider environment as what often appears to be known as a “kernel object” (rather confusingly, but it simply means that the kernel is aware of it and has to do some housekeeping for it).
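
Expressed as code, the three steps might look roughly like the following sketch. It assumes L4Re’s pthread support and the l4_factory_create_gate call, but the Resource structure, the handler routine and the label value are illustrative placeholders rather than anything defined by L4Re, and error handling is omitted:

#include <pthread.h>
#include <pthread-l4.h>

#include <l4/re/env>
#include <l4/re/util/cap_alloc>
#include <l4/sys/factory.h>

struct Resource { /* details of the accessed file: inode, offsets, buffer... */ };

static void *resource_thread(void *arg)
{
  Resource *resource = static_cast<Resource *>(arg);
  /* Listen for "read", "write" and "close" messages, invoking methods on
     the resource object accordingly. */
  (void) resource;
  return 0;
}

static L4::Cap<void> expose_resource(Resource *resource)
{
  /* 1. and 2. The object has been instantiated by the caller; give it its
     own thread of execution. */
  pthread_t thread;
  pthread_create(&thread, 0, resource_thread, resource);

  /* 3. Create an IPC gate bound to that thread, yielding a capability that
     can be handed back to the client in the "open" reply. */
  L4::Cap<void> gate = L4Re::Util::cap_alloc.alloc<void>();
  l4_factory_create_gate(L4Re::Env::env()->factory().cap(),
                         gate.cap(),
                         pthread_l4_cap(thread),
                         0x42 /* gate label, an arbitrary choice here */);
  return gate;
}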

Once activated, the thread created for the resource is dedicated to listening for incoming messages and handling them, invoking methods on the resource object as a proxy for the client sending those messages to achieve the same effect.

Transmitting File Content

Although we have looked at how files manifest themselves and may be referenced, the matter of obtaining their contents has not been examined too closely so far. A program might be able to obtain a reference to a resource object and to send it messages and receive responses, but this is not likely to be sufficient for transferring content to and from the file. The reason for this is that the messages sent between programs – or processes, as running programs are usually known – are deliberately limited in size.

Thus, another way of exchanging this data is needed. In a situation where we are reading from a file, what we would most likely want to see is a read operation populate some memory for us in our process. Indeed, in a system like GNU/Linux, I imagine that the Linux kernel shuttles the file data from the filesystem module responsible to an area of memory that it has reserved and exposed to the process. In a microkernel-based system, things have to be done more “collaboratively”.

The answer, it would seem to me, is to have dedicated memory that is shared between processes. Fortunately, and arguably unsurprisingly, L4Re provides an abstraction known as a “dataspace” that provides the foundation for such sharing. My approach, then, involves requesting a dataspace to act as a conduit for data, making the dataspace available to the file-accessing client and the file-providing server object, and then having a protocol to notify each other about data being sent in each direction.

I considered whether it would be most appropriate for the client to request the memory or whether the server should do so, eventually concluding that the client could decide how much space it would want as a buffer, handing this over to the server to use to whatever extent it could. A benefit of doing things this way is that the client may communicate initialisation details when it contacts the server, and so it becomes possible to transfer a filesystem path – the location of a file from the root of the filesystem hierarchy – without it being limited to the size of an interprocess message.
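
As a sketch of the client side, allocating and attaching such a buffer might look something like this, assuming the dataspace and region-map interfaces roughly as found in the 2019-era L4Re (the flag names in particular have changed across versions, so treat the details as illustrative); send_open stands in for whatever routine actually issues the “open” message, and error handling is omitted:

#include <cstring>

#include <l4/re/dataspace>
#include <l4/re/env>
#include <l4/re/rm>
#include <l4/re/util/cap_alloc>

static L4::Cap<L4Re::Dataspace> open_buffer(char const *path, l4_size_t size)
{
  /* Obtain a dataspace of the desired buffer size. */
  L4::Cap<L4Re::Dataspace> ds = L4Re::Util::cap_alloc.alloc<L4Re::Dataspace>();
  L4Re::Env::env()->mem_alloc()->alloc(size, ds);

  /* Attach it to this task's address space so that it can be written to. */
  l4_addr_t addr = 0;
  L4Re::Env::env()->rm()->attach(&addr, size, L4Re::Rm::Search_addr, ds);

  /* Write the path into the shared memory, then send the "open" message,
     transferring the dataspace capability to the filesystem server.
     send_open is a placeholder for the actual message-sending routine. */
  strcpy(reinterpret_cast<char *>(addr), path);
  /* send_open(server, ds); */

  return ds;
}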

Opening a file using a path written to shared memory

So, the “open” message references the newly-created dataspace, and the filesystem server reads the path written to the dataspace’s memory so that it may use it to locate the requested file. The dataspace is not retained by the filesystem object but is instead passed to the resource object which will then share the memory with the application or client. As described above, a reference to the resource object is returned in the response to the “open” message.

It is worthwhile to consider the act of reading from a file exposed in this way. Although both client (the application in the above diagram) and server (resource object) should be able to access the shared “buffer”, it would not be a good idea to let them do so freely. Instead, their roles should be defined by the protocol employed for communication with one another. For a simple synchronous approach it would look like this:

Reading data from a resource via a shared buffer

Here, upon the application or client invoking the “read” operation (in other words, sending the “read” message) on the resource object, the resource is able to take control of the buffer, obtaining data from the file and writing it to the buffer memory. When it is done, its reply or response needs to indicate the updated state of the buffer so that the client will know how much data there is available, potentially amongst other things of interest.
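
From the client’s perspective, and assuming the message plumbing uses L4Re’s IPC stream abstraction, one round of this protocol might be sketched as follows; the opcode value and the meaning of the reply word are choices belonging to my own protocol, not anything defined by L4Re:

#include <l4/cxx/ipc_stream>
#include <l4/sys/capability>
#include <l4/sys/utcb.h>

enum { Opcode_read = 1 };  /* a protocol-specific choice */

/* Ask the resource object to refill the shared buffer and report how much
   data it placed there. */
static l4_size_t read_into_buffer(L4::Cap<void> resource, l4_size_t wanted)
{
  L4::Ipc::Iostream s(l4_utcb());
  s << l4_umword_t(Opcode_read) << l4_umword_t(wanted);
  s.call(resource.cap());

  l4_umword_t available = 0;
  s >> available;              /* bytes now present in the shared buffer */
  return available;
}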

Cleaning Up

Many of us will be familiar with the workflow of opening, reading and writing, and closing files. This final activity is essential not only for our own programs but also for the system, so that it does not tie up resources for activities that are no longer taking place. In the filesystem server, for the resource object, a “close” operation can be provided that causes the allocated memory to be freed and the resource object to be discarded.

However, merely providing a “close” operation does not guarantee that a program would use it, and we would not want a situation where a program exits or crashes without having invoked this operation, leaving the server holding resources that it cannot safely discard. We therefore need a way of cleaning up after a program regardless of whether it sees the need to do so itself.

In my earliest experiments with L4Re on the MIPS Creator CI20, I had previously encountered the use of interrupt request (IRQ) objects, in that case signalling hardware-initiated events such as the pressing of physical switches. But I also knew that the IRQ abstraction is employed more widely in L4Re to allow programs to participate in activities that would normally be the responsibility of the kernel in a monolithic architecture. It made me wonder whether there might be interrupts communicating the termination of a process that could then be used to clean up afterwards.

One area of interest is that concerning the “IPC gate” mentioned above. This provides the channel through which messages are delivered to a particular running program, and up to this point, we have considered how a resource object has its own IPC gate for the delivery of messages intended for it. But it turns out that we can also enable notifications about the status of the IPC gate via the same mechanism.

By creating an IRQ object and associating it with a thread as the “deletion IRQ”, we arrange for this IRQ to be delivered when the kernel decides that the IPC gate is no longer needed. And the kernel will make this decision when nothing in the system needs to use the IPC gate any more. Since the IPC gate was only created to service messages from a single client, it is precisely when that client terminates that the kernel will realise that the IPC gate has no other users.

Resource deletion upon the termination of a client

To enable this to actually work, however, a little trick is required: the server must indicate that it is ready to dispose of the IPC gate whenever necessary, doing so by decreasing the “reference count” which tracks how many things in the system are using the IPC gate. So this is what happens (a code sketch of the server-side setup follows the list):

  1. The IPC gate is created for the resource thread, and its details are passed to the client, exposing the resource object.
  2. An IRQ object is bound to the thread and associated with the IPC gate deletion event.
  3. The server decreases its reference count, relinquishing the IPC gate and allowing its eventual destruction.
  4. The client and server communicate as desired.
  5. Upon the client terminating, the kernel disassociates the client from the IPC gate, decreasing the reference count.
  6. The kernel notices that the reference count is zero and sends an IRQ telling the server about the impending IPC gate deletion.
  7. The resource thread in the server deallocates the resource object and terminates.
  8. The IPC gate is deleted.
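
Here is a rough sketch of the server-side part of this arrangement (steps 1 to 3), using the C-level calls I believe to be involved; the IRQ binding call and especially the reference-count step, shown here using l4_task_release_cap, are my interpretation of the mechanism and should be treated as assumptions rather than as a definitive recipe:

#include <l4/re/env>
#include <l4/re/util/cap_alloc>
#include <l4/sys/factory.h>
#include <l4/sys/irq>
#include <l4/sys/task.h>
#include <l4/sys/thread.h>

static void watch_for_deletion(L4::Cap<void> gate, l4_cap_idx_t resource_thread)
{
  /* 2. Create an IRQ object, deliver it to the resource thread with a label
     distinct from normal client messages, and register it as the thread's
     deletion IRQ. (Newer L4Re versions bind via l4_rcv_ep_bind_thread.) */
  L4::Cap<L4::Irq> irq = L4Re::Util::cap_alloc.alloc<L4::Irq>();
  l4_factory_create_irq(L4Re::Env::env()->factory().cap(), irq.cap());
  l4_irq_attach(irq.cap(), 0xdeadbeef /* label for the interrupt condition */,
                resource_thread);
  l4_thread_register_del_irq(resource_thread, irq.cap());

  /* 3. Give up the server's own reference to the IPC gate so that the client
     becomes its only user; the exact call employed here is an assumption. */
  l4_task_release_cap(L4Re::Env::env()->task().cap(), gate.cap());
}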

Using the “gate label”, the thread handling communications for the resource object is able to distinguish between the interrupt condition and normal messages from the client. Consequently, it is able to invoke the appropriate cleaning up routine, to discard the resource object, and to terminate the thread. Hopefully, with this approach, resource objects will no longer be hanging around long after their clients have disappeared.

Other Approaches

Another approach to providing file content also occurred to me, and I wondered whether this might have been a component of the “namespace filesystem” in L4Re. One technique for accessing files involves mapping the entire file into memory using a “mmap” function. This could be supported by requesting a dataspace of a suitable size, but only choosing to populate a region of it initially.

The file-accessing program would attempt to access the memory associated with the file, and upon straying outside the populated region, some kind of “fault” would occur. A filesystem server would have the job of handling this fault, fetching more data, allocating more memory pages, mapping them into the file’s memory area, and disposing of unwanted pages, potentially writing modified pages to the appropriate parts of the file.

In effect, the filesystem server would act as a pager, as far as I can tell, and I believe it to be the case that Moe – the root task – acts in such a way to provide the “rom” files from the deployed payload modules. Currently, I don’t find it particularly obvious from the documentation how I might implement a pager, and I imagine that if I choose to support such things, I will end up having to trawl the code for hints on how it might be done.

Client Library Functions

To present a relatively convenient interface to programs wanting to use files, some client library functions need to be provided. The intention with these is to support the traditional C library paradigms and for these functions to behave like those that C programmers are generally familiar with. This means performing interprocess communications using the “open”, “read”, “write”, “close” and other messages when necessary, hiding the act of sending such messages from the library user.

The details of such a client library are probably best left to another article. With some kind of mechanism in place for accessing files, it becomes a matter of experimentation to see what the demands of the different operations are, and how they may be tuned to reduce the need for interactions with server objects, hopefully allowing file-accessing programs to operate efficiently.

The next article on this topic is likely to consider the integration of libext2fs with this effort, along with the library functionality required to exercise and test it. And it will hopefully be able to report some real experiences of accessing ext2-resident files in relatively understandable programs.

Using ext2 Filesystems with L4Re

Tuesday, February 5th, 2019

Previously, I described my initial investigations into libext2fs and the development of programs to access and populate ext2/3/4 filesystems. With a program written and now successfully using libext2fs in my normal GNU/Linux environment, the next step appeared to be the task of getting this library to work within the L4Re system. The following steps were envisaged:

  1. Figuring out the code that would be needed, this hopefully being supportable within L4Re.
  2. Introducing the software as a package within L4Re.
  3. Discovering the configuration required to build the code for L4Re.
  4. Actually generating a library file.
  5. Testing the library with a program.

This process is not properly completed in that I do not yet have a good way of integrating with the L4Re configuration and using its details to configure the libext2fs code. I felt somewhat lazy with regard to reconciling the use of autotools with the rather different approach taken to build L4Re, which is somewhat reminiscent of things like Buildroot and OpenWrt in certain respects.

So, instead, I built the Debian package from source in my normal environment, grabbed the config.h file that was produced, and proceeded to use it with a vastly simplified Makefile arrangement, also in my normal environment, until I was comfortable with building the library. Indeed, this exercise of simplified building also let me consider which portions of the libext2fs distribution would really be needed for my purposes. I did not really fancy having to struggle to build files that would ultimately be superfluous.

Still, as I noted, this work isn’t finished. However, it is useful to document what I have done so far so that I can subsequently describe other, more definitive, work.

Making a Package

With a library that seemed to work with the archiving program, written to populate filesystems for eventual deployment, I then set about formulating this simplified library distribution as a package within L4Re. This involves a few things:

  • Structuring the files so that the build system may process them.
  • Persuading the build system to install things in places for other packages to find.
  • Formulating the appropriate definitions to build the source files (and thus producing the right compiler and linker invocations).

Here are some notes about the results.

The Package Structure

Currently, I have the following arrangement inside the pkg/libext2fs directory:

include
include/libblkid
include/libe2p
include/libet
include/libext2fs
include/libsupport
include/libuuid
lib
lib/libblkid
lib/libe2p
lib/libet
lib/libext2fs
lib/libsupport
lib/libuuid

To follow L4Re conventions, public header files have been moved into the include hierarchy. This breaks assumptions in the code, with header files being referenced without a prefix (like “ext2fs”, “et”, “e2p”, and so on) in some places, but being referenced with such a prefix in others. The original build system for the code gets away with this by using the “ext2fs” and other prefixes as the directory names containing the code for the different libraries. It then indicates the parent “lib” directory of these directories as the place to start looking for headers.

But I thought it worthwhile to try and map out the header usage and distinguish between public and private headers. At the very least, it helps me to establish the relationships between the different components involved. And I may end up splitting the different components into their own packages, requiring some formalisation of their interactions.

Meanwhile, I defined a Control file to indicate what the package provides:

provides: libblkid libe2p libet libext2fs libsupport libuuid

This appears to be used in dependency resolution, causing the package to be built if another package requires one of the named entities in its own Control file.

Header File Locations

In each include subdirectory (such as include/libext2fs) is a Makefile indicating a couple of things, the following being used for libext2fs:

PKGNAME = libext2fs
CONTRIB_HEADERS = 1

The effect of this is to install the headers into an include/contrib/libext2fs directory in the build output.

In the corresponding lib subdirectory (which is lib/libext2fs), the following seems to be needed:

CONTRIB_INCDIR = libext2fs

Hopefully, with this, other packages can depend on libext2fs and have the headers made available to them via an include statement like this:

#include <ext2fs/ext2fs.h>

(The ext2fs prefix is provided by a directory inside include/libext2fs.)

Otherwise, headers may end up being put in a special “l4” hierarchy, and then code would need changing to look something like this:

#include <l4/ext2fs/ext2fs.h>

So, avoiding this and having the original naming seems to be the benefit of the “contrib” settings, as far as I can tell.

Defining Build Files

The Makefile in each specific lib subdirectory employs the usual L4Re build system definitions:

TARGET          = libext2fs.a libext2fs.so
PC_FILENAME     = libext2fs

The latter of these is used to identify the build products so that the appropriate compiler and linker options can be retrieved by the build system when this library is required by another. Here, PC is short for “package config” but the notion of “package” is different from that otherwise used in this article: it just refers to the specific library being built in this case.

An important aspect related to “package config” involves the requirements or dependencies of this library. These are specified as follows for libext2fs:

REQUIRES_LIBS   = libet libe2p

We saw these things in the Control file. By indicating these other libraries, the compiler and linker options to find and use these other libraries will be brought in when something else requires libext2fs. This should help to prevent build failures caused by missing headers or libraries, and it should also permit more concise declarations of requirements by allowing those declarations to omit libet and libe2p in this case.

Meanwhile, the actual source files are listed using a SRC_C definition, and the PRIVATE_INCDIR definition lists the different paths to be used to search for header files within this package. Moving the header files around complicates this latter definition substantially.
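
Putting these pieces together, the lib/libext2fs Makefile ends up looking something like the following sketch; the source file names and include paths shown here are only meant to illustrate the pattern, not to be an exhaustive or accurate list:

PKGDIR          ?= ../..
L4DIR           ?= $(PKGDIR)/../..

TARGET          = libext2fs.a libext2fs.so
PC_FILENAME     = libext2fs
REQUIRES_LIBS   = libet libe2p

SRC_C           = ext2_err.c openfs.c closefs.c namei.c dirblock.c dir_iterate.c
PRIVATE_INCDIR  = $(PKGDIR)/lib/libext2fs $(PKGDIR)/include/libext2fs \
                  $(PKGDIR)/include/libet $(PKGDIR)/include/libe2p

include $(L4DIR)/mk/lib.mk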

There are other complications with libext2fs, notably the building of a tool that generates a file to be used when building the library itself. I will try and return to this matter at some point and figure out a way of doing this within the build system. Such generation of binaries for use in build processes can be problematic, particularly if there is some kind of assumption that the build system is the same as the target system, but such assumptions are probably not being made here.

Building the Library

Fortunately, the build system mostly takes care of everything else, and a command like this should see the package being built and libraries produced:

make O=mybuild S=pkg/libext2fs

The “S” option is a real time saver, and I wish I had made more use of it before. Use of the “V” option can be helpful in debugging command options, since the normal output is abridged:

make O=mybuild S=pkg/libext2fs V=1

I will admit that since certain header files are not provided by L4Re, a degree of editing of the config.h file was required. Things like HAVE_LINUX_FD_H, indicating the availability of Linux-specific headers, needed to be removed.

Testing the Library

An appropriate program for testing the library is really not much different from one used in a GNU/Linux environment. Indeed, I just took some code from my existing program that lists a directory inside a filesystem image. Since L4Re should provide enough of a POSIX-like environment to support such unambitious programs, practically no changes were needed and no special header files were included.
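
For reference, the heart of such a directory-listing program need not be much more than the following sketch, using libext2fs calls that behave the same on GNU/Linux as when ported; error handling and the formatting of the permission, owner and size details seen in the output below are omitted for brevity:

#include <stdio.h>
#include <ext2fs/ext2fs.h>

/* Called for each directory entry; the entry name is not NUL-terminated, so
   the length held in the low bits of name_len must be respected. */
static int list_entry(struct ext2_dir_entry *dirent, int offset,
                      int blocksize, char *buf, void *priv_data)
{
  printf("%.*s\n", dirent->name_len & 0xff, dirent->name);
  return 0;
}

int main(int argc, char *argv[])
{
  ext2_filsys fs;
  ext2_ino_t ino;

  /* Open the filesystem image (such as rom/ext2fstest.fs) read-only. */
  ext2fs_open(argv[1], 0, 0, 0, unix_io_manager, &fs);

  /* Find the directory given as the second argument and list its entries. */
  ext2fs_namei(fs, EXT2_ROOT_INO, EXT2_ROOT_INO, argv[2], &ino);
  ext2fs_dir_iterate(fs, ino, 0, NULL, list_entry, NULL);

  ext2fs_close(fs);
  return 0;
}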

A suitable Makefile is needed, of course, but the examples package in L4Re provides plenty of guidance. The most important part is this, however:

REQUIRES_LIBS   = libext2fs

A Control file requiring libext2fs is actually not necessary for an example in the examples hierarchy, it would seem, but such a file would otherwise be advisable. The above library requirements pull in the necessary compiler and linker flags from the “package config” universe. (It also means that the libext2fs headers are augmented by the libe2p and libet headers, as defined in the required libraries for libext2fs itself.)

As always, deploying requires a suitable configuration description and a list of modules to be deployed. The former looks like this:

local L4 = require("L4");

local l = L4.default_loader;

l:startv({
    log = { "ext2fstest", "g" },
  },
  "rom/ex_ext2fstest", "rom/ext2fstest.fs", "/");

The interesting part is right at the end: a program called ex_ext2fstest is run with two arguments: the name of a file containing a filesystem image, and the directory inside that image that we want the program to show us. Here, we will be using the built-in “rom” filesystem in L4Re to serve up the data that we will be decoding with libext2fs in the program. In effect, we use one filesystem to bootstrap access to another!

Since the “rom” filesystem is merely a way of exposing modules as files, the filesystem image therefore needs to be made available as a module in the module list provided in the conf/modules.list file, the appropriate section starting off like this:

entry ext2fstest
roottask moe rom/ext2fstest.cfg
module ext2fstest.cfg
module ext2fstest.fs
module l4re
module ned
module ex_ext2fstest
# plus lots of library modules

All these experiments are being conducted with L4Re running on the UX configuration of Fiasco.OC, meaning that the system runs on top of GNU/Linux: a sort of “user mode L4”. Running the set of modules for the above test is a matter of running something like this:

make O=mybuild ux E=ext2fstest

This produces a lot of output and then some “logged” output for the test program:

ext2fste| Opened rom/ext2fstest.fs.
ext2fste| /
ext2fste| drwxr-xr-x-       0     0        1024 .
ext2fste| drwxr-xr-x-       0     0        1024 ..
ext2fste| drwx-------       0     0       12288 lost+found
ext2fste| -rw-r--r---    1000  1000       11449 e2access.c
ext2fste| -rw-r--r---    1000  1000        1768 file.c
ext2fste| -rw-r--r---    1000  1000        1221 format.c
ext2fste| -rw-r--r---    1000  1000        6504 image.c
ext2fste| -rw-r--r---    1000  1000        1510 path.c

It really isn’t much to look at, but this indicates that we have managed to access an ext2 filesystem within L4Re using a program that calls the libext2fs library functions. If nothing else, the possibility of porting a library to L4Re and using it has been demonstrated.

But we want to do more than that, of course. The next step is to provide access to an ext2 filesystem via a general interface that hides the specific nature of the filesystem, one that separates the work into a different program from those wanting to access files. To do so involves integrating this effort into my existing filesystem framework, then attempting to re-use a generic file-accessing program to obtain its data from ext2-resident files. Such activities will probably form the basis of the next article on this topic.

Filesystem Familiarisation

Tuesday, January 29th, 2019

I previously noted that accessing filesystems would be a component in my work with microkernel-based systems, and towards the end of last year I began an exercise in developing a simple “toy” filesystem that could hold file-like entities. Combining this with some L4Re-based components that implement seemingly reasonable mechanisms for providing access to files, I was able to write simple test programs that open and access these files.

The starting point for all this was the observation that a normal system file – that is, something stored in the filesystem in my GNU/Linux environment – can be treated like an archive containing multiple files and therefore be regarded as providing a filesystem itself. Such a file can then be embedded in a payload providing a L4Re system by specifying it as a “module” in conf/modules.list for a particular payload entry:

module image_root.fs

Since L4Re provides a rudimentary “rom” filesystem that exposes the modules embedded in the payload, I could open this “toy” filesystem module as a file within L4Re using the normal file access functions.

fp = fopen("rom/image_root.fs", "r");

And with that, I could then use my own functions to access the files stored within. Some additional effort went into exposing file access via interprocess communication, which forms the basis of those mechanisms mentioned above, those mechanisms being needed if such filesystems are to be generally usable in the broader environment rather than by just a single program.

Preparing Filesystems

The first step in any such work is surely to devise how a filesystem is to be represented. Then, code must be written to access the filesystem, firstly to write files and directories to it, and then to be able to perform the necessary task of reading that file and directory information back out. At some point, an actual filesystem image needs to be prepared, and here it helps a lot if a convenient tool can be developed to speed up testing and further development.

I won’t dwell on the “toy” representation I used, mostly because it was merely chosen to let me explore the mechanisms and interfaces to be provided as L4Re components. The intention was always to switch to a “real world” filesystem and to use that instead. But in order to avoid being overwhelmed with learning about existing filesystems alongside learning about L4Re and developing file access mechanisms, I chose some very simple representations that I thought might resemble “real world” filesystems sufficiently enough to make the exercise realistic.

With the basic proof of concept somewhat validated, my attentions have now turned to “real world” filesystems, and here some interesting observations can be made about tools and libraries. If you were to ask someone about how they might prepare a filesystem, particularly a GNU/Linux user, it would be unsurprising to me if they suggested preparing a file…

dd if=/dev/zero of=image_root.fs bs=1024 count=1 seek=$SIZE_IN_KB

…then a filesystem in the file…

/sbin/mkfs.ext2 image_root.fs

…and then mounting it as follows:

sudo mount image_root.fs $MOUNTPOINT

Here, an ext2 filesystem is prepared in a normal system file, and then the operating system is asked to mount the filesystem and to expose it via a mountpoint, this being a directory in the general hierarchy of files and filesystems. But this last step requires special privileges and for the kernel to get involved, and yet all we are doing is accessing a file with the data inside it stored in a particular way. So why is there not a more straightforward, unprivileged way of writing data to that file in the required format?

Indeed, other projects of mine have needed to initialise filesystems, and such mounting operations have been a necessary aspect of those, given the apparent shortage of other methods. It really seemed that filesystems and kernel mechanisms were bound to each other, requiring us to always get the kernel involved. But it turns out that there are other solutions.

A History Lesson

I am reminded of the mtools suite of programs for accessing floppy disks. Once upon a time, when I was in my first year of university studies, practically all of our class’s programming was performed on a collection of DECstations. Although networked, each of these also provided a floppy drive capable of supporting 2.88MB disks: an uncommon sight, for me at least, with the availability of media and compatibility concerns dictating the use of 720KB and 1.44MB disks instead.

Presumably, within the Ultrix environment we were using, normal users were granted access to the floppy drive when logged in. With a disk inserted, mtools could then be used to access the disk as one big file, interpreting the contents and presenting the user with a view onto files and directories. Of course, mtools exposes a DOS-like interface to the disk, with DOS-like commands providing DOS-like output, and it does not attempt to integrate the contents of a disk within the general Unix filesystem hierarchy.

Indeed, the mechanisms of integrating such foreign data into the general filesystem hierarchy are denied to mere programs, this being a motivation for pursuing alternative operating system architectures like GNU Hurd which support such integration. But the point here is that filesystems – in this example, DOS-based filesystems on floppy disks – can readily be interpreted with the appropriate tools and without “operator” privileges.

Decoding Filesystem Data

Since filesystems are really just data structures encoded in storage, there should really be no magic involved in decoding and accessing them. After all, the code in the Linux kernel and in other operating system kernels has to do just that, and these things are just programs that happen to run under certain special conditions. So it would make sense if some of the knowledge encoded in these kernels had been extracted and made available as library code for other purposes. After all, it might come in useful elsewhere.

Fortunately, it is likely that such library code is already installed on your system, at least if you are using the ext2 family of filesystems. A search for some common utilities can be informative in this respect. Here is a query being issued for the appropriate filesystem checking utility on a Debian system:

$ dpkg -S e2fsck
e2fsprogs: /usr/share/man/man5/e2fsck.conf.5.gz
e2fsprogs: /sbin/e2fsck
e2fsprogs: /usr/share/man/man8/e2fsck.8.gz

And for the filesystem initialisation utility mentioned above:

$ dpkg -S mkfs.ext2
e2fsprogs: /sbin/mkfs.ext2
e2fsprogs: /usr/share/man/man8/mkfs.ext2.8.gz

The e2fsprogs package itself depends on a package called libext2fs2 – or e2fslibs on earlier distribution versions – and ultimately one discovers that these tools and their libraries are provided by a software distribution, e2fsprogs, whose aim is to provide programs and libraries for general access to the ext2/3/4 filesystem format. So it turns out to be possible and indeed feasible to write programs accessing filesystems without needing to make use of code residing in some kernel or other.

Tooling Up

Had I bothered to investigate further, I might have discovered another useful package. Running one or both of the following commands on a Debian system lets us see which other packages make use of the library functionality of e2fsprogs:

apt-cache rdepends e2fslibs
apt-cache rdepends libext2fs2

Amongst those listed is e2tools which offers a suite of commands resembling those provided by mtools, albeit with a Unix flavour instead of a DOS flavour. Investigating this, I discovered that these tools inherit somewhat from the utilities provided by e2fsprogs, particularly the debugfs utility.
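
To give an impression of e2tools, copying a file into an image and then listing a directory might be done like this, with the image and path names being my own and the exact command behaviour worth verifying against the installed package:

e2cp e2access.c image_root.fs:/e2access.c
e2ls -l image_root.fs:/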

However, investigating e2fsprogs by myself gave me a chance to become familiar with the details of libext2fs and how the different utilities managed to use it. Since it is not always obvious to me how the library should be used, and I find myself missing some good documentation for it, the more program code I can find to demonstrate its use, the better.

For my purposes, accessing individual files and directories is not particularly interesting: I really just want to treat an ext2 filesystem like an archive when preparing my L4Re payload; it is only within L4Re that I actually want to access individual things. Outside L4Re, having an equivalent to the tar command, but with the output being a filesystem image instead of a tar file, would be most useful for me. For example:

e2archive --create image_root.fs $ROOTFS

Currently, this can be made to populate a filesystem for eventual deployment, although the breadth of support for the filesystem features is rather limited. It is possible that I might adopt e2tools as the basis of this archiving program, given that it is merely a shell script that calls another program. Then again, it might be useful to gain direct experience with libext2fs for my other activities.

Future Directions

And so, in the GNU/Linux environment, the creation of such archives has been the focus of my experiments. Meanwhile, I need to develop library functions to support filesystem operations within L4Re, which means writing code to support things like file descriptor abstractions and the appropriate functions for accessing and manipulating files and directories. The basics of some of this is already done for the “toy” filesystem, but it will be a matter of figuring out which libext2fs functions and abstractions need to be used to achieve the same thing for ext2 and its derivatives.

Hopefully, once I can demonstrate file access via the same interprocess communications mechanisms, I can then make a start in replacing the existing conventional file access functions with versions that use my mechanisms instead of those provided in L4Re. This will most likely involve work on the C library support in L4Re, which is a daunting prospect, but some familiarity with that is probably beneficial if a more ambitious project to replace the C library is to be undertaken.

But if I can just manage to get the dynamic linker to be able to read shared libraries from an ext2 filesystem, then a rather satisfying milestone will have been reached. And this will then motivate work to support storage devices on various hardware platforms of interest, permitting the hosting of filesystems and giving those systems some potential as L4Re-based general-purpose computing devices, too.

Some Ideas for 2019

Wednesday, January 23rd, 2019

Well, after my last article moaning about having wishes and goals while ignoring the preconditions for, and contributing factors in, the realisation of such wishes and goals, I thought I might as well be constructive and post some ideas I could imagine working on this year. It would be a bonus to get paid to work on such things, but I don’t hold out too much hope in that regard.

In a way, this is to make up for not writing an article summarising what I managed to look at in 2018. But then again, it can be a bit wearing to have to read through people’s catalogues of work even if I do try and make my own more approachable and not just list tons of work items, which is what one tends to see on a monthly basis in other channels.

In any case, 2018 saw a fair amount of personal focus on the L4Re ecosystem, as one can tell from looking at my article history. Having dabbled with L4Re and Fiasco.OC a bit in 2017 with the MIPS Creator CI20, I finally confronted certain aspects of the software and got it working on various devices, which had been something of an ambition for at least a couple of years. I also got back into looking at PIC32 hardware and software experiments, tidying up and building on earlier work, and I keep nudging along my Python-like language and toolchain, Lichen.

Anyway, here are a few ideas I have been having for supporting a general strategy of building flexible, sustainable and secure computing environments that respect the end-user. Such respect is not limited to software freedom, but also extends to things like privacy, affordability and longevity that are often disregarded in the narrow focus on only one set of end-user rights.

Building a General-Purpose System with L4Re

Apart from writing unfinished articles about supporting hardware devices on the Ben NanoNote and Letux 400, I also spent some time last year considering the mechanisms supporting filesystems in L4Re. For an outsider like myself, the picture isn’t particularly clear, but the mechanisms don’t really seem particularly well documented, and I am not convinced that the filesystem support is what people might expect from a microkernel-based system.

Like L4Re’s device support, the way filesystems are made available to tasks appears to use libraries extensively, whereas I would expect more use of individual programs, with interprocess messages and shared memory being employed to move the data around. To evaluate my expectations, I have been writing programs that operate in such a way, employing a “toy” filesystem to test the viability of such an approach. The plan is to make use of libext2fs to expose ext2/3/4 filesystems to L4Re components, then to try and replace the existing file access mechanisms with ones that access these file-serving components.

It is unfortunate that systems like these no longer seem to get much attention from the wider Free Software community. There was once a project to port GNU Hurd to L4-family microkernels, but with the state of the art having seemingly been regarded as insufficient or inappropriate, the focus drifted off onto other things, and now there doesn’t seem to be much appetite to resume such work. Even the existing Hurd implementation doesn’t get that much interest these days, either. And yet there are plenty of technical, social and practical reasons for not just settling for using the Linux kernel and systems based on it for every last application and device.

Extending Hardware Support within L4Re

It is all very well developing filesystem support, but there also has to be support for the things that provide the storage on which those filesystems reside. This is something I didn’t bother to look at when getting L4Re working on various devices because the priority was to have something to show, meaning that the display had to work, along with testing and demonstrating other well-understood peripherals, with the keyboard or keypad being something that could be supported with relative ease and then used to demonstrate other aspects of the solution.

It seems perverse that one must implement support for SD or microSD card storage all over again when the software being run is already being loaded from such storage, but this is rather like the way that “live CD” versions of GNU/Linux would load an environment directly from a CD, yet an installed version of such an environment might not have the capability to access the CD drive. Still, this is an unavoidable step along the path to make a practical system.

It might also be nice to make the CI20 support a bit better. Although a device notionally supported by L4Re, various missing pieces of hardware support needed to be added, and the HDMI output capability remains unavailable. Here, the mystery hardware left undocumented by the datasheet happens to be used in other chipsets and has been supported in the Linux kernel for many of them for quite some time. Hopefully, the exercise will not be too frustrating.

Another device that might be a good candidate for L4Re is the Efika MX Smartbook. Although having a modest specification by today’s bloated-Web and pointless-eye-candy standards, it has a nice keyboard (with a more sensible layout than the irritating HP Elitebook, as I recently discovered) and is several times more powerful than the Letux 400. My brother, David, has already investigated certain aspects of the hardware, and it might make the basis of a nice portable system. And since support in Linux and other more commonly-used technologies has been left to rot, why not look into developing a more lasting alternative?

Reviving Mail-Based Communication

It is tiresome to see the continuing neglect of the health of e-mail, despite it being used as the bedrock of the Internet’s activities, while proprietary and predatory social media platforms enjoy undeserved attention and promotion in mass media and in society at large. Governmental and corporate negligence mean that the average person is subjected to an array of undesirable, disturbing and generally unwanted communications from chancers and criminals through their e-mail accounts which, if it had ever happened to the same degree with postal mail, would have seen people routinely arrested and convicted for their activities.

It is tempting to think that “they” do not want “us” to have a more reliable form of mail-based communication because that would involve things like encryption and digital signatures. Even when these things are deemed necessary, they always seem to be rolled out as part of a special service that hosts “our” encryption and signing keys for us, through which we must go to access our messages. It is, I suppose, yet another example of the abandonment of common infrastructure: that when something better is needed, effort and attention is siphoned off from the “commons” and ploughed into something separate that might make someone a bit of money.

There are certainly challenges involved in making e-mail better, with any discussion of managing identities, vouching for and recognising correspondents, and the management of keys most likely to lead to dispute about the “best” way of doing things. But in the end, we probably find ourselves pursuing perfect solutions that do everything whilst proprietary competitors just focus on doing a handful of things effectively. So I envisage turning this around and evaluating whether a more limited form of mail-based communication can be done in the way that most people would need.

I did look fairly recently at a selection of different projects seeking to advise and support people on providing their own e-mail infrastructure. That is perhaps worth an article in its own right. And on the subject of mail-based communication, I hope to look a bit more at imip-agent again after neglecting it for so long.

Sustaining a Python Alternative

One motivation for developing my Python-like language and toolchain, Lichen, was to explore ways in which Python might have been developed to better suit my own needs and preferences. Although I still use Python extensively, I remain mindful of the need to write conservative, uncomplicated code without the kind of needless tricks and flourishes that Python’s expanding feature set can tempt the developer with, and thus I almost always consider the possibility of being able to use the Lichen toolchain with my projects one day.

Lichen may still be a proof of concept, but there has been work done on gradually pushing it towards being genuinely usable. I spent some time considering the way floating point numbers might be represented, and this raised some interesting issues around how they might be stored within instances. Like the tuple performance optimisations, I hope to introduce floating point support into the established feature set of Lichen and hopefully offer decent-enough performance, with the latter aspect being yet another path of investigation this year.

Documenting and Publishing

A continuing worry I have is whether I should still be running MoinMoin sites or even sites derived from MoinMoin “export dumps” for published information that is merely static. I have spent some time developing a parsing and formatting tool to generate static content from Moin content, thus avoiding running Moin altogether and also avoiding having to run a script acting as a URL-preserving front-end to exported Moin content (which is unfortunately required because of how the “export dump” seems to work).

Currently, this tool supports HTML and Moin output, the latter to test the parsing activity, with Graphviz content rendered as inline SVG or in other supported formats (although inline SVG is really what you want). Some macros are supported, but I need to add support for others, like the search macros which are useful for generating page listings. Themes are also supported, but I need to make sure that things like tables of contents – already supported with a macro – can be styled appropriately.

Already, I can generate the bulk of my existing project documentation, and the aim here is to be able to focus on improving that documentation, particularly for things like Lichen that really need explanations to be written before I need to start reviewing the code from scratch as if I were a total newcomer to the work. I have also considered using this tool as the basis for a decentralised wiki solution, but that can probably wait for a while given how many other things I have said I want to do!

Anything More?

There are probably other things that are worth looking at, but these are perhaps the ones I feel are most pressing. It could be said that pursuing all these at once would spread me and my efforts very thin, but I tend to cycle through projects periodically and hope that I can catch up with what I was previously doing, hence the mention above of documenting my work.

I wonder how much other people think about the year ahead, whether it is a productive and ultimately rewarding exercise to state aspirations and goals in this kind of way. New Year’s resolutions are a familiar practice, of course, but here I make no promises!

Nevertheless, a belated Happy New Year to anyone still reading! I hope we can all pursue our objectives enthusiastically over the year ahead and make a real and positive difference to computing, its users and to our societies.

Shared-Mode Executables in L4Re for MIPS-Based Devices

Sunday, July 8th, 2018

I have been meaning to write about my device driver experiments with L4Re, following on from my porting exercises, but that exercise took me along various routes and I haven’t yet got back to documenting all of them. Meanwhile, one thing that did start to bother me was how much space the software was taking up when compiled, linked and ready to deploy.

Since each of my device drivers is a separate program, and since each one may be linked to various libraries, they each started to contribute substantially to the size of the resulting file – the payload – needing to be transferred to the device. At one point, I had to resize the boot partition on the memory card used by the Letux 400 notebook computer to make the payload fit in the available space.

The work done to port L4Re to the MIPS Creator CI20 had already laid the foundations for functioning payloads, and once the final touches were put in place to support the peculiarities of the Ingenic JZ4780 system-on-a-chip, it was possible to run both the conventional “hello” example which is statically linked to its libraries, as well as a “shared-hello” example which is dynamically linked to its libraries. The latter configuration of the program results in a smaller executable program and thus a smaller payload.

So it seemed clear that I might be able to run my own programs on the Letux 400 or Ben NanoNote with similar reductions in payload size. Unfortunately, nothing ever seems to be as straightforward as it ought to be.

Exceptional Obstructions and Observations

Initially, I set about trying one of my own graphical examples with the MODE variable set to “shared” in its Makefile. This, upon powering up, merely indicated that it had not managed to start up properly. Instead of a blank screen, the viewports set up by the graphical multiplexer, Mag, were still active and showing their usual blankness. But these regions did not then change in any way when I pressed keys on the keyboard (which is functionality that I will hopefully get round to describing in another article).

I sought some general advice from the l4-hackers mailing list, but quickly realised that to make any real progress, I would need a decent way of accessing the debugging output produced by the dynamic linker. This took me on a diversion that led to my debugging capabilities being strengthened with the availability of a textual output console on the screen of my devices. I still don’t like the idea of performing hardware modifications to get access to the serial console, so this is a useful and welcome alternative.

Having switched out the “hello” program with the “shared-hello” program in the system configuration and module list demonstrating the framebuffer terminal, I deployed the payload and powered up, but I did not get the satisfying output of the program operating normally. Instead, the framebuffer terminal appeared and rewarded me with the following message:

L4Re: rom/ex_hello_shared: Unhandled exception: PC=0x800000 PFA=8d7a LdrFlgs=0

This isn’t really the kind of thing you want to see. Having not had to debug L4Re or Fiasco.OC in any serious fashion for a couple of months, I was out of practice in considering the next step, but fortunately some encouragement arrived in a private e-mail from Jean Wolter. This brought the suggestion of triggering the kernel debugger, but since this requires serial console access, it wasn’t a viable approach. But another idea that I could use involved writing out a bit more information in the routine that was producing this output.

The message in question originates in the pkg/l4re-core/l4re_kernel/server/src/region.cc file, within the Region_map::op_exception method. The details it produces are rather minimal and generic: the program counter (PC) tells us where the exception occurred; the loader flags (LdrFlgs) presumably tell us about the activity of the library loader; the mysterious “PFA” is supposedly the page fault address, but it actually seemed to be the stack pointer address on these MIPS-based systems.

On their own, these details are not particularly informative, but I suppose that more useful information could quickly become fairly specific to a particular architecture. Jean suggested looking at the structure describing the exception state, l4_exc_regs_t (defined with MIPS-specific members in pkg/l4re-core/l4sys/include/ARCH-mips/utcb.h), to see what else I might dig up. This I did, generating the following:

pc=0x800000
gp=0x82dd30
sp=0x8d7a
ra=0x802f6c
cause=0x1000002c

A few things interested me, thus motivating my choice of registers to dump. The global pointer (gp) register tells us about symbols in the problematic code, and since I had once changed the L4Re sources – way back in the era of getting the CI20 to run GCC-generated code – so that another register (t9) would be initialised correctly, thereby ensuring that the gp register would be set up correctly within programs, it was entirely possible that I had rather too enthusiastically changed something that was now causing a problem.

The stack pointer (sp) is useful to check, just to see if it is located in a sensible region of memory, and here I discovered that this seemed to be the same as the “PFA” number. Oddly, the “PFA” seems to occupy the same place in the exception structure as any “bad virtual address” featuring in an address exception, and so I started to suspect that maybe the stack pointer was referencing the wrong part of memory. But this was partially ruled out by examining the value of the stack pointer in the “hello” example, which appeared to reference broadly the same part of memory. And, of course, the “hello” example works just fine.

In fact, the cause register indicated another kind of exception entirely, and it was one I was not really expecting: a “coprocessor unusable” exception indicating that coprocessor 1, typically a floating point arithmetic unit, was being illegally requested by an instruction. Here is how I interpreted that register’s value:

hex value   binary value
1000002c == 00010000000000000000000000101100
              --                     -----
              CE                     ExcCode

=> CE == 1; ExcCode == 11 (coprocessor unusable)
=> coprocessor 1 unusable
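
For completeness, the field extraction can be done mechanically; this little C fragment, using bit positions from the MIPS architecture manuals, reproduces the interpretation above:

#include <stdio.h>

int main(void)
{
  unsigned long cause = 0x1000002c;
  unsigned exc_code = (cause >> 2) & 0x1f;   /* == 11: coprocessor unusable */
  unsigned ce       = (cause >> 28) & 0x3;   /* == 1: coprocessor 1 */
  printf("ExcCode=%u CE=%u\n", exc_code, ce);
  return 0;
}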

Now, as I may have mentioned before, the hardware involved in this exercise does not support floating point instructions itself, and this is why I have configured compilers to use “soft-float” (software-based floating point arithmetic) support. It meant that I had to find places that might have wanted to use floating point instructions and eliminate those instructions somehow. Fortunately, only code generated by the compiler was likely to contain such instructions. But now I wondered if there weren’t some instructions of this nature lurking in places I hadn’t checked.

I had also thought to check the return address (ra) register. This tells us where the processor will jump to when it has finished executing the current routine, and since this is usually a matter of “returning” somewhere, it tells us something about the code that was being executed before the problematic routine was called. I figured that the work being done before the exception was probably going to be more important than the exception itself.

Floating Point Magic

Another debugging suggestion that now became unavoidable was to inspect the erroneous instruction. I noted above that this instruction was causing the processor to signal an illegal attempt to use an unusable – actually completely unavailable – coprocessor. Writing a numeric representation of the instruction to the display provided me with the following hexadecimal (base 16) value:

464c457f

This can be interpreted as follows in binary, with groups of bits defined for interpretation according to the MIPS instruction set architecture, and with tentative interpretations of these groups provided beneath:

010001 10010  01100 01000 10101 111111
COP1   rs/fmt rt/ft rd/fs       C.ABS.NGT

The first group of bits is the opcode field, which is interpreted as a coprocessor 1 (COP1) opcode. Should we wish to consider what the other groups mean, we might examine the final group, which could indicate a comparison instruction. However, this becomes rather hypothetical since the processor will most likely interpret the opcode field and then decide that it cannot handle the instruction.
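
The same grouping can be reproduced programmatically when emitting diagnostics. Here is a minimal sketch in C that splits a MIPS instruction word into the fields shown above; the field boundaries follow the MIPS32 encoding, with the fifth group (bits 10 to 6) left unnamed as in the diagram:

#include <stdio.h>

int main(void)
{
    unsigned long insn = 0x464c457f;
    unsigned long opcode = (insn >> 26) & 0x3f;  /* bits 31..26: 0x11 == COP1 */
    unsigned long rs     = (insn >> 21) & 0x1f;  /* bits 25..21: rs/fmt */
    unsigned long rt     = (insn >> 16) & 0x1f;  /* bits 20..16: rt/ft */
    unsigned long rd     = (insn >> 11) & 0x1f;  /* bits 15..11: rd/fs */
    unsigned long extra  = (insn >> 6) & 0x1f;   /* bits 10..6 */
    unsigned long funct  = insn & 0x3f;          /* bits 5..0 */

    printf("opcode=%#lx rs=%#lx rt=%#lx rd=%#lx extra=%#lx funct=%#lx\n",
           opcode, rs, rt, rd, extra, funct);
    return 0;
}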

So, I started to look for places where the instruction might have been written, but no obvious locations were forthcoming. One peculiar aspect of all this is that the location of the instruction is at a rather “clean” location – 0x800000 – and some investigations indicated that this is where the library containing the problematic code gets loaded. I actually don’t remember precisely how I figured this out, but I think it was as follows.

I had looked at linker scripts that might give some details of the location of program objects, and one of them (pkg/l4re-core/ldscripts/ARCH-mips/main_dyn.ld) seemed to be related. It gave an address for the code of 0x400000. This made me think that some misconfiguration or erroneous operation was putting the observed code somewhere it shouldn’t be. But changing this address in the linker script just gave another exception at 0x400000, meaning that I had disrupted something that was intentional and probably working fine.

Meanwhile, emitting the t9 register’s value from the exception state yielded 0x800000, indicating that the calling routine had most likely jumped straight to that address, not to another address with execution having then proceeded normally until reaching the exception location. I decided to look at the instructions around the return address, these most likely being the ones that had set up the call to the exception location. Writing these locations out gave me some idea about the instructions involved. Below, I provide the stored values and their interpretations as machine instructions:

8f998250 # lw $t9, -32176($gp)
24a55fa8 # addiu $a1, $a1, 0x5fa8
0320f809 # jalr $t9
24844ee4 # addiu $a0, $a0, 0x4ee4
8fbc0010 # lw $gp, 16($sp)

One objective of doing this, apart from confirming that a jump instruction (jalr) was involved, with the t9 register being employed as is the convention with MIPS code, was to use the fragment to identify the library that was causing the error. A brute-force approach was employed here, generating “object dumps” from the library files and writing them out as files in a new directory:

mkdir tmpdir
for FILENAME in mybuild/lib/mips_32/l4f/* ; do
    mipsel-linux-gnu-objdump -d "$FILENAME" > tmpdir/`basename "$FILENAME"`
done

The textual dump files were then searched for the instruction values using grep, narrowing down the places where these instructions were found in consecutive locations. This yielded the following code, found in the libld-l4.so library:

    2f5c:       8f998250        lw      t9,-32176(gp)
    2f60:       24a55fa8        addiu   a1,a1,24488
    2f64:       0320f809        jalr    t9
    2f68:       24844ee4        addiu   a0,a0,20196
    2f6c:       8fbc0010        lw      gp,16(sp)

The integer operands for the addiu instructions are the same, of course, just being shown as decimal rather than hexadecimal values. Now, we previously saw that the return address (ra) register had the value 0x802f6c. When a MIPS processor executes a jump instruction, it will also fetch the following instruction and execute it, this being a consequence of the way the processor architecture is designed.

So, the instruction after the jump, residing in what is known as the “branch delay slot”, is not the instruction that will be visited upon returning from the called routine. Instead, it is the instruction after that. Here, we see that the return address from the jump at location 0x2f64 would be two locations later at 0x2f6c. This provides a kind of realisation that the program object – the libld-l4.so library – is positioned in memory at 0x800000: 0x2f6c added to 0x800000 gives the value of ra, 0x802f6c.
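
Expressed as arithmetic, the load address of the object follows directly from the return address and the offset recorded in the object dump. A minimal sketch, using the values above:

#include <stdio.h>

int main(void)
{
    unsigned long ra = 0x802f6c;           /* return address from the exception state */
    unsigned long return_offset = 0x2f6c;  /* offset of "lw gp,16(sp)" in the object dump */

    printf("load address = %#lx\n", ra - return_offset);  /* 0x800000 */
    return 0;
}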

And this means that the location of the problematic instruction – the cause of our exception – is the first location within this object. Anyone with any experience of this kind of software will have realised by now that this doesn’t sound like a healthy situation: the first location within a library is not actually going to be code because these kinds of objects are packaged up in a way that permits their manipulation by other programs.

So what is the first location of a library used for? Since such objects employ the Executable and Linkable Format (ELF), we can take a look at some documentation. And we see that the first location is used to identify the kind of object, employing a “magic number” for the purpose. And that magic number would be…

464c457f

In the little-endian arrangement employed by this processor, the stored bytes are as follows:

7f
45 ('E')
4c ('L')
46 ('F')

The value was not a floating point instruction at all, but the magic number at the start of the library object! It was something of a coincidence that such a value would be interpreted as a floating point instruction, an accidentally convenient way of signalling something going badly wrong.
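
This is easy to verify: storing the word 0x464c457f on a little-endian machine and inspecting the bytes yields the classic ELF identification sequence. A minimal sketch, assuming a little-endian host:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int word = 0x464c457f;
    unsigned char bytes[4];

    memcpy(bytes, &word, sizeof bytes);  /* little-endian: 7f 45 4c 46 */
    printf("%02x '%c' '%c' '%c'\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}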

Missing Entries

The investigation now started to focus on how the code trying to jump to the start of the library had managed to get this incorrect address and what it was trying to do by jumping to it. I started to wonder if the global pointer (gp), whose job it is to reference the list of locations of program routines and other global data, might have been miscalculated such that attempts to load the addresses of routines would then be failing with data being fetched from the wrong places.

But when I looked at code fragments where the gp register was being calculated, they seemed set to produce the correct values, based on assumptions about other registers. For example, from the object dump for libld-l4.so:

00002780 <_ftext>:
    2780:       3c1c0003        lui     gp,0x3
    2784:       279cb5b0        addiu   gp,gp,-19024
    2788:       0399e021        addu    gp,gp,t9

Assuming that the processor has t9 set to 0x2780 and then jumps to the value of t9, as is the convention, the following calculation is then performed:

gp = 0x30000 (since lui loads the "upper" half-word)
gp = gp - 19024 = 0x30000 - 19024 = 0x2b5b0
gp = gp + t9 = 0x2b5b0 + 0x2780 = 0x2dd30

Using the nm tool, which tells us about symbols in program objects, it was possible to check this value:

mipsel-linux-gnu-nm -n mybuild/lib/mips_32/l4f/libld-l4.so

This shows the following at the end of the output:

0002dd30 d _gp

Also appearing somewhat earlier in the output is this, telling us where the global offset table starts (as well as the next thing in the file):

00025d40 a _GLOBAL_OFFSET_TABLE_
00025f90 g __dso_handle

Some digging around in the L4Re source code gave a kind of confirmation that the difference between _gp and _GLOBAL_OFFSET_TABLE_ was to be expected. Here is what I found in the pkg/l4re-core/uclibc/lib/contrib/uclibc/ldso/ldso/mips/elfinterp.c file:

#define OFFSET_GP_GOT 0x7ff0

If gp, when recalculated in other places, ended up getting the same value, there didn’t seem to be anything wrong with it. Some quick inspections of neighbouring calculations indicated that this wasn’t likely to be the problem. But what about the values used in conjunction with gp? Might they be having an effect? In the case of the erroneous jump, the following calculation is involved:

lw t9,-32176(gp) => load word into t9 from the location at gp - 32176
                 => ...               from 0x2dd30 - 32176
                 => ...               from 0x25f80

The calculated address, 0x25f80, is after the start of _GLOBAL_OFFSET_TABLE_, which provides entries for program routines and other things, and that is a good sign. What is perhaps more troubling is how far after the start of the table such a value is. In the above output, another symbol (__dso_handle) indicates something that is located at the end of the table. Now, although its address is still greater than the one computed above, meaning that the computation does not cause us to stray off the end of the table, the computed address is suspiciously close to the end.
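
These figures can be cross-checked against the OFFSET_GP_GOT constant seen above: gp is expected to sit 0x7ff0 bytes beyond the start of the global offset table, and the access offset of -32176 then selects an entry close to the table’s end. A minimal sketch of that arithmetic, using the symbol addresses reported by nm:

#include <stdio.h>

int main(void)
{
    unsigned long got = 0x25d40;         /* _GLOBAL_OFFSET_TABLE_ (from nm) */
    unsigned long gp  = got + 0x7ff0;    /* OFFSET_GP_GOT (from elfinterp.c) */
    unsigned long dso_handle = 0x25f90;  /* __dso_handle, just after the table */
    unsigned long entry = gp - 32176;    /* address accessed by "lw t9,-32176(gp)" */

    printf("gp     = %#lx\n", gp);       /* 0x2dd30, matching _gp */
    printf("entry  = %#lx\n", entry);    /* 0x25f80 */
    printf("to end = %lu bytes\n", dso_handle - entry);  /* only 16 bytes */
    return 0;
}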

There was nothing else to do than to have a look at the table contents itself, and here it was rather useful to have a way of displaying a number of values on the screen. At this point, we have to note that the addresses in use in the running system are adjusted according to the start of the loaded object, so that the table is positioned at 0x25d40 in the object dump, but in the running system we would see 0x800000 + 0x25d40 and thus 0x825d40 instead.

What I saw was that the table contained entries that varied in the expected way right up until 0x825f60 (corresponding to 0x25f60 in the object dump), which is only 0x30 (48 bytes, or 12 entries) before the end of the table. All of the remaining entries, starting at 0x825f64 (corresponding to 0x25f64), yielded the value 0x800000, apart from the one at 0x825f90 (corresponding to 0x25f90, right at the end of the table), which yielded its own address.

Since the calculated address above (0x25f80, adjusted to 0x825f80 in the running system) lies in this final region, we now know the origin of this annoying 0x800000: it comes from entries at the end of the table that do not seem to hold meaningful values. Indeed, the object dump for the library seemed to skip over this region of the table entirely, presumably because it was left uninitialised. And using the readelf tool with the --relocs option to show “relocations”, which apply to this table, it appeared that the last entries rather confirmed my observations:

00025d34  00000003 R_MIPS_REL32
00025f90  00000003 R_MIPS_REL32

Clearly, something is missing from this table. But since something has to adjust the contents of the table to add the “base address”, 0x800000, to the entries in order to provide valid addresses within the running program, what started to intrigue me was whether the code that performed this adjustment had any idea about these missing entries, and how this code might be related to the code causing the exception situation.
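
To illustrate the suspicion about the adjustment, here is a minimal sketch of the kind of loop a loader might use to rebase global offset table entries; it is not the actual _dl_perform_mips_global_got_relocations code, merely its general shape, and it shows how an entry legitimately holding zero ends up holding the base address instead:

#include <stddef.h>

/* Hypothetical rebasing of GOT entries: every entry is assumed to hold a
   link-time address and simply has the load base added to it. */
static void rebase_got(unsigned long *got, size_t entries, unsigned long load_base)
{
    for (size_t i = 0; i < entries; i++)
        got[i] += load_base;  /* an unresolved zero entry becomes load_base, e.g. 0x800000 */
}

Any entry that was supposed to remain zero, signalling an absent symbol, would thereby be turned into a plausible-looking but bogus address.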

Routines and Responsibilities

While considering the nature of the code causing the exception, I had been using the objdump utility with the -d (disassemble) and -D (disassemble all) options. These provide details of program sections, code routines and the machine instructions themselves. But Jean pointed out that if I really wanted to find out which part of the source code was responsible for producing certain regions of the program, I might use a combination of options: -d, -l (line numbers) and -S (source code). This was almost a revelation!

However, the code responsible for the jump to the start of the library resisted such measures. A large region of code appeared to have no corresponding source, suggesting that it might be generated. Here is how it starts:

_ftext():
    2dac:       00000000        nop
    2db0:       3c1c0003        lui     gp,0x3
    2db4:       279caf80        addiu   gp,gp,-20608
    2db8:       0399e021        addu    gp,gp,t9
    2dbc:       8f84801c        lw      a0,-32740(gp)
    2dc0:       8f828018        lw      v0,-32744(gp)

There is no function defined in the source code with the name _ftext. However, _ftext is defined in the linker script (in pkg/l4re-core/ldscripts/ARCH-mips/main_rel.ld) as follows:

  .text           :
  {
    _ftext = . ;
    *(.text.unlikely .text.*_unlikely .text.unlikely.*)
    *(.text.exit .text.exit.*)
    *(.text.startup .text.startup.*)
    *(.text.hot .text.hot.*)
    *(.text .stub .text.* .gnu.linkonce.t.*)
    /* .gnu.warning sections are handled specially by elf32.em.  */
    *(.gnu.warning)
    *(.mips16.fn.*) *(.mips16.call.*)
  }

If you haven’t encountered linker scripts before, then you probably don’t want to spend too much time looking at this, linker scripts being frustratingly terse and arcane, but the essence of the above is that a bunch of code is stuffed into the .text section, with _ftext being assigned the address of the start of all this code. Now, _ftext in the linker script corresponds to a particular label in the object dump (which we saw earlier was positioned at 0x2780) whereas the _ftext function in the code occurs later (at 0x2dac, above). After the label but before the function is code whose source is found by objdump.

So I took the approach of removing things from the linker script, ultimately removing everything from the .text section apart from the assignment to _ftext. This removed the annotated regions of the code and left me with only the _ftext function. It really did appear that this was something the compiler might be responsible for. But where would I find the code responsible?

One hint that was present in the _ftext function code was the use of another identified function, __cxa_finalize. Searching the GCC sources for code that might use it led me to the libgcc sources and to code that invokes destructor functions upon program exit. This wasn’t really what I was looking for, but the file containing it (libgcc/crtstuff.c) would prove informative.

Back to the Table

Jean had indicated that there might be a difference in output between compilers, and that certain symbols might be produced by some but not by others. I investigated further by using the readelf tool with the -a option to show almost everything about the library file. Here, the focus was on the global offset table (GOT) and information about the entries. In particular, I wanted to know more about the entry providing the erroneous 0x800000 value, located at (gp - 32176). In my output I saw the following interesting thing:

 Global entries:
   Address     Access  Initial Sym.Val. Type    Ndx Name
  00025f80 -32176(gp) 00000000 00000000 FUNC    UND __register_frame_info

This seems to tell us what the program expects to find at the location in question, and it indicates that the named symbol is presumably undefined. There were some other undefined symbols, too:

_ITM_deregisterTMCloneTable
_ITM_registerTMCloneTable
__deregister_frame_info

Meanwhile, Jean was seeing symbols with other names:

__register_frame_info_base
__deregister_frame_info_base

During my perusal of the libgcc sources, I had noticed some of these symbols being tested to see if they were non-zero. For example:

  if (__register_frame_info)
    __register_frame_info (__EH_FRAME_BEGIN__, &object);

These fragments of code appear to be located in functions related to program initialisation. And it is also interesting to note that back in the library code, after the offending table entry has been accessed, there are tests against zero:

    2f34:       3c1c0003        lui     gp,0x3
    2f38:       279cadfc        addiu   gp,gp,-20996
    2f3c:       0399e021        addu    gp,gp,t9
    2f40:       27bdffe0        addiu   sp,sp,-32
    2f44:       8f828250        lw      v0,-32176(gp)
    2f48:       afbc0010        sw      gp,16(sp)
    2f4c:       afbf001c        sw      ra,28(sp)
    2f50:       10400007        beqz    v0,2f70 <_ftext+0x7f0>

Here, gp gets set up correctly, v0 is set to the value of the table entry, which we now believe refers to __register_frame_info, and the beqz instruction tests this value against zero, skipping ahead if it is zero. Does that not sound a bit like the code shown above? One might think that the libgcc code might handle an uninitialised table entry, and maybe it is intended to do so, but the table entry ends up getting adjusted to 0x800000, presumably as part of the library loading process.

I think that the most relevant function here for the adjustment of these entries is _dl_perform_mips_global_got_relocations which can be found in the pkg/l4re-core/uclibc/lib/contrib/uclibc/ldso/ldso/ldso.c file as part of the L4Re C library code. It may well have changed the entry from zero to this erroneous non-zero value, merely because the entry lies within the table and is assumed to be valid.

So, as a consequence, the libgcc code acts as if it has a genuine __register_frame_info function to call, and doing so causes the jump to the start of the library object and the exception. Maybe the code is designed to handle missing symbols, those symbols potentially being deliberately omitted, but it doesn’t function correctly under these particular circumstances.
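
The zero test in libgcc only makes sense if the symbols in question are weak references, which resolve to address zero when no definition is available; this is presumably how crtstuff.c intends to cope with deliberately omitted symbols. Here is a minimal sketch of that pattern, with a simplified signature rather than the actual declaration; the trouble described above arises because the loader’s adjustment turns the zero in the table into 0x800000, defeating the test:

#include <stdio.h>

/* A weak reference: if no definition is linked in, the symbol's address is
   zero and the call is skipped. Signature simplified for illustration. */
extern void __register_frame_info(const void *begin, void *object)
    __attribute__((weak));

static char eh_frame_begin[1];  /* stand-in for __EH_FRAME_BEGIN__ */
static void *object;

int main(void)
{
    if (__register_frame_info)
        __register_frame_info(eh_frame_begin, &object);
    else
        printf("__register_frame_info absent: call skipped\n");
    return 0;
}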

Symbol Restoration

However, despite identifying this unfortunate interaction between C library and libgcc, the matter of a remedy remained unaddressed. What was I to do about these missing symbols? Were they supposed to be there? Was there a way to tell libgcc not to expect them to be there at all?

In attempting to learn a bit more about the linking process, I had probably been through the different L4Re packages several times, but Jean then pointed me to a file I had seen before, perhaps before I had needed to think about these symbols at all. It contained “empty” definitions for some of the symbols but not for others. Maybe the workaround or even the solution was to just add more definitions corresponding to the symbols the program was expecting? Jean thought so.

So, I added a few things to the file (pkg/l4re-core/ldso/ldso/fixup.c):

void __deregister_frame_info(void);
void __register_frame_info(void);
void _ITM_deregisterTMCloneTable(void);
void _ITM_registerTMCloneTable(void);

void __deregister_frame_info(void) {}
void __register_frame_info(void) {}
void _ITM_deregisterTMCloneTable(void) {}
void _ITM_registerTMCloneTable(void) {}

I wasn’t confident that this would fix the problem. After all the investigation, adding a few lines of trivial code to one file seemed like too easy a way to fix what had appeared to be a serious problem. But I checked the object dump of the library, and suddenly things looked a bit more reasonable. Instead of referencing an uninitialised table entry, objdump was able to identify the jump target as __register_frame_info:

    2e14:       8f828040        lw      v0,-32704(gp)
    2e18:       afbc0010        sw      gp,16(sp)
    2e1c:       afbf001c        sw      ra,28(sp)
    2e20:       10400007        beqz    v0,2e40 <_ftext+0x7f0>
    2e24:       8f85801c        lw      a1,-32740(gp)
    2e28:       8f84803c        lw      a0,-32708(gp)
    2e2c:       8f998040        lw      t9,-32704(gp)
    2e30:       24a55fa8        addiu   a1,a1,24488
    2e34:       04111c39        bal     9f1c <__register_frame_info>

Of course, the code had been reorganised and so things were no longer in quite the same places, but in the above, (gp – 32704) is actually a reference to __register_frame_info, and although this value gets tested against zero as before, we can see that enough is now known about the previously-missing symbol that a branch directly to the location of the function has been incorporated, rather than a jump to the address stored in the table.

And sure enough, powering up the Letux 400 produced the framebuffer terminal showing the expected output:

Hi World! (shared)

It had been a long journey for such a modest reward, but thanks to Jean’s help and a bit of perseverance, I got there in the end.

L4Re: Textual Debugging Output on the Framebuffer

Monday, May 21st, 2018

I have actually been in the process of drafting another article about writing device drivers to run within the L4 Runtime Environment (L4Re) on top of the Fiasco.OC microkernel, this being for the Ben NanoNote and Letux 400 notebook computers. That article started to trail behind a lot of the work being done, and there are a few loose ends to be tied up before I can finish it.

Meanwhile, on the way towards some kind of achievement with L4Re, confounded somewhat by the sometimes impenetrable APIs, I managed to eventually get something working that I had thought would have been one of the first things to demonstrate. When initially perusing the range of software in the “pkg” directory within the L4Re distribution, I saw a package called “fbterminal” providing a terminal program that shows itself on the framebuffer (or display).

I imagined being able to launch this on top of the graphical user interface multiplexer, Mag, and then have the “hello” program provide some output to this terminal. I even imagined having the terminal accept input from the keyboard, but we aren’t quite at that point, and that is where my other article comes in. Of course, I initially had no idea how to achieve this, and there needed to be a lot of work put in just to get back to this particular point of entry.

Now, however, the act of launching fbterminal and having it work is fairly straightforward. A few additional packages are required, but the framebuffer works satisfactorily as far as the other components are concerned, and the result will be a blank region of the screen with the terminal showing precisely nothing. Obviously, we want it to show something in order to confirm that it is working. I had to get used to seeing this blank terminal for a while.

The intended companion to fbterminal for testing purposes is the hello program which merely writes output to what might be described as a logging destination. This particular output channel is usually the serial console for the device, which meant that when porting the system to the Ben and the Letux, the hello program was of no use to me. But now, with a framebuffer to show things on, and with a terminal that might be able to accept output from other things, it becomes interesting to see if the hello program can be persuaded to send its output elsewhere.

It was useful to investigate how the output from the hello program actually makes its way to its destination. Since it uses standard C library functions, there has to be a mechanism for those functions to use. As far as I know, these would typically involve various system calls concerning files and streams. A perusal of the sources dredged up an interesting symbol called “__rtld_l4re_env_posix_vfs_ops”. Further investigation led me to the L4Re virtual filesystem (Vfs) functionality and the following interesting files:

  • pkg/l4re-core/l4re_vfs/include/vfs.h
  • pkg/l4re-core/l4re_vfs/include/impl/vfs_impl.h

And these in turn led me to the virtual console (Vcon) functionality:

  • pkg/l4re-core/l4re_vfs/include/impl/vcon_stream.h
  • pkg/l4re-core/l4re_vfs/include/impl/vcon_stream_impl.h

It seems that standard output from the hello program goes via the standard C calls and Vfs functions and is packaged up and sent using the Vcon mechanisms to the logging destination, which is typically provided by the root task, Moe. Given that fbterminal understands the Vcon protocol and acts as a console server, there appeared to be some potential in looking at Vcon mechanisms more closely. It seemed that fbterminal might be able to take the place of Moe.

Indeed, the documentation offers some clues. In the description of the init process, Ned, a mention is made of a program loader configuration parameter called “log_fab” that indicates an object that can create a suitable logging destination. When starting a program, the program loader creates such an object using “log_fab” and presents it to the new program as a capability (or object reference).

However, this is not quite what we want because we don’t need anything else to be created: we already have fbterminal ready for us to use. I suppose something could be conjured up to act as a factory and provide an fbterminal instance, and maybe this is not too arduous in the Lua-based configuration environment used by Ned, but I wanted a more direct solution.

Contemplating this, I went and rediscovered the definitions used by Ned to support its configuration scripting (found in pkg/l4re-core/ned/server/src/ned.lua). Here, the workings of the “log_fab” mechanism can be found and studied. But what I started to favour was a way of just indicating a capability to the hello program and not have the loader create something else. This required a simple edit to one of the functions:

function App_env:log()
  Class.check(self, App_env);
  if self.loader.log_fab == nil or self.loader.log_fab.create == nil then
    error ("Starting an application without valid log factory", 4);
  end
  return self.loader.log_fab:create(Proto.Log, table.unpack(self.log_args));
end

Here, we want to ignore “log_fab” and just have our existing capability used instead. So, I introduced another clause to the if statement:

  if self.log_cap then
    return self.log_cap
  elseif self.loader.log_fab == nil or self.loader.log_fab.create == nil then
    error ("Starting an application without valid log factory", 4);
  end

Now, if we specify “log_cap” when starting a program, it should want to direct logging messages to the referenced object instead. So, with this available to us, it becomes possible to adjust the way the hello program is started. First of all, we define the way fbterminal is set up and started:

local term = l:new_channel();

l:start({
    caps = {
      fb = mag_caps.svc:create(L4.Proto.Goos, "g=320x230+0+0", "barheight=10"),
      term = term:svr(),
    },
  },
  "rom/fbterminal");

Since fbterminal needs to “export” its console abilities using a capability called “term”, this needs to be indicated in the “caps” table. (It doesn’t matter what the local variable for the channel is called.) So, the hello program is defined accordingly:

l:start({
    log_cap = term,
  },
  "rom/hello");

Here, we make use of “log_cap” and allow the output to be directed to the terminal that has already been started. And the result is this:

fbterminal on the Ben NanoNote showing the hello program's output

And at long last, it becomes possible to see what programs are printing out to the log!

Extending L4Re/Fiasco.OC to the Letux 400 Notebook Computer

Wednesday, April 18th, 2018

In my summary of the port of L4Re and Fiasco.OC to the Ben NanoNote, I remarked that progress had been made on supporting other products and hardware peripherals. In fact, such progress occurred more rapidly than I had thought possible, and I have been able to extend the work to support the Letux 400 notebook computer. It is perhaps worth describing the Letux 400 in a bit more detail because it has an interesting place in the history of netbook computers.

Some History

Back in the early 21st century, laptop computers were becoming increasingly popular at the expense of desktop computers, but as laptops began to take the place of desktops in homes and workplaces, this gradually led each successive generation of laptops to sacrifice portability and affordability in favour of larger, faster, higher-resolution screens and general hardware specifications more competitive with the desktop offerings they sought to replace. Laptops were becoming popular but also bigger, heavier and more expensive.

Things took an interesting turn in 2006 with the introduction of the XO-1 from the One Laptop per Child (OLPC) initiative. With rather different goals to those of the mainstream laptop vendors, the focus was to deliver a relatively-inexpensive yet robust portable computer for use by schoolchildren, many of whom might be living in places with limited infrastructure where increasingly power-hungry mainstream laptops would have been unsuitable, even unusable.

One unexpected consequence of the introduction of the XO-1 was the revival in interest in modestly-performing portable computing hardware. People were actually interested in a computer that did the things they needed, rather than having to buy something designed for gamers, software developers, or corporate “power users” (of both the pretend and genuine kinds). Rather than having to haul increasingly big and heavy laptops and all the usual accessories in a big dedicated bag, they liked the idea of slipping a smaller, lighter device into their everyday bag, as had probably been the idea with subnotebooks while they were still a thing.

Thus, the Asus Eee PC came about, regarded as the first widely-available netbook of recent times (acknowledging the earlier Psion netBook, of course), bringing with it the attention of large-volume manufacturers and economies of scale. For “lightweight tasks”, netbooks were enough for many people: a phenomenon that found itself repeating with tablets, particularly as recreational usage of technology became more important to buyers and evolved in certain ways.

Now, one thing that had been a big part of the OLPC initiative’s objectives was a $100 price point. At first, despite fairly radical techniques being used to reduce cost, and despite the involvement of a major original equipment manufacturer in the production of the XO-1, that price point of $100 was out of reach. Even the Eee PC retailed for a few hundred dollars.

This is where a product known as the Skytone Alpha 400 enters the picture. Some vendors, rebranding this product, offered it as possibly the first $100 laptop – or netbook – to be made available for sale. One of the vendors offers it as the Letux 400, and it has been available for as little as €125 during its retail lifespan. Noting that it has rather similar hardware to the Ben NanoNote, but has a more conventional physical profile and four times as much RAM, my brother bought one to investigate a few years ago. That is how I eventually ended up embarking on this experiment.

Extending Recent Work

There are many similarities between the JZ4720 system-on-a-chip (SoC) used in the Ben and the JZ4730 used in the Letux 400. However, it can be said that the JZ4720 is much better understood. The JZ4740 and closely-related devices like the JZ4720 have appeared in a number of different devices, documentation has surfaced for these products, and vendor source code has always been available, typically using or implicitly documenting most of the hardware.

In contrast, limited documentation is known to exist for the JZ4730, and the available vendor source code has not always described every detail of the hardware, even though the essential operations and register details appear to be present. Having looked at the Linux kernel sources that support the JZ4730, together with U-Boot source code, the similarities and differences between the JZ4720 and JZ4730 began to take shape in my mind.

I took an optimistic approach that mostly paid off. The Fiasco.OC kernel needs augmenting with the details of the JZ4730, but these are similar in some ways to the JZ4720 and familiar otherwise. For instance, the JZ4730 has a 32-bit “operating system timer” (OST) that curiously does not appear in the JZ4740 but does appear in more recent products such as the JZ4780. Bearing such things in mind, the timer and interrupt support was easily enough added.

One very different thing about the JZ4730 is that it does not seem to support the “set” and “clear” register locations that are probably common to most modern SoCs. Typically, one might want to update a hardware-related register to change a peripheral’s configuration, and it must have become apparent to hardware designers that such updates mostly want to either set or clear bits. Normally, achieving this in a program involves reading a value, performing a logical operation that combines it with a description of the bits to be set or cleared, and then writing the result back to where it came from. For example:

define bits to set
load value from location (exposing a hardware register, perhaps)
logical-or value with bits
store result in location

Encapsulating this in a single store to a dedicated location avoids potential issues with different things competing to update the location at the same time, if the hardware permits this, and just offers something that is more efficient and convenient, anyway. Separate locations are provided for “set” and “clear” operations, and the original location is provided to read and to overwrite the hardware register’s value. Sometimes, such registers might only support read-only access, in fact. But the JZ4730 does not support such additional locations, and so we have to do things the hard way when updating registers and doing things like clearing and setting bits.
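
To make the difference concrete, here is a minimal sketch in C of both styles of register update, assuming a memory-mapped register with hypothetical “set” and “clear” companion locations at offsets of 4 and 8 bytes; the actual offsets and layouts naturally vary between SoCs:

#include <stdint.h>

/* Hypothetical register layout: data location plus SET and CLEAR locations. */
#define REG_DATA(base)  (*(volatile uint32_t *)((base) + 0x0))
#define REG_SET(base)   (*(volatile uint32_t *)((base) + 0x4))
#define REG_CLEAR(base) (*(volatile uint32_t *)((base) + 0x8))

/* Style available on the JZ4720/JZ4740: a single store, no read required. */
static void set_bits_via_set_location(uintptr_t base, uint32_t bits)
{
    REG_SET(base) = bits;
}

/* The hard way, as needed on the JZ4730: read, modify, write back. */
static void set_bits_read_modify_write(uintptr_t base, uint32_t bits)
{
    REG_DATA(base) = REG_DATA(base) | bits;
}

static void clear_bits_read_modify_write(uintptr_t base, uint32_t bits)
{
    REG_DATA(base) = REG_DATA(base) & ~bits;
}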

One odd thing that caught me out was a strange result from the special “exception base” (EBASE) register that does not seem to return zero for the CPU identifier, something that the bootstrap code in L4Re expects. I suppressed this test and made the kernel always return zero when it asks for this identifier. To debug such things, I could not use the screen as I had done with the Ben since the bootloader does not configure it on the Letux. Fortunately, unlike the Ben, the Letux provides a few LEDs to indicate things like keyboard and network status, and these can be configured and activated to communicate simple status information.

Otherwise, the exercise mostly involved me reworking some existing code I had (itself borrowing somewhat from existing driver code) that provides driver support for the Letux hardware peripherals. The clock and power management (CPM) arrangement is familiar but different from the JZ4720; the LCD driver can actually be used as is; the general-purpose input/output (GPIO) arrangement is different from the JZ4720 and, curiously enough once again, perhaps more similar to the JZ4780 in some ways. To support the LCD panel’s backlight, a pulse-width modulation (PWM) driver needed to be added, but this involves very little code.

I also had to deal with the mistakes I made myself when not concentrating hard enough. Lots of testing and re-testing occurred. But in the space of a weekend or so, I had something to show for all the previous effort plus this round’s additional effort.

The Letux 400 and Ben NanoNote running the "spectrum" example

Here, you can see what kind of devices we are dealing with! The Letux 400 is less than half the width of a normal-size keyboard (with numeric keypad), and the Ben NanoNote is less than half the width of the Letux 400. Both of them were inexpensive computing devices when they were introduced, and although they may not be capable of running “modern” desktop environments or Web browsers, they offer computing facilities that were, once upon a time, “workstation class” in various respects. And they did, after all, run GNU/Linux when they were introduced.

And that is why it is attractive to consider running other “proper” operating system technologies on them now. Maybe we can revisit the compromises that led to the subnotebook and the netbook, perhaps even the tablet, where devices that are not the most powerful still have a place in fulfilling our computing needs.

Porting L4Re and Fiasco.OC to the Ben NanoNote (Summary)

Monday, April 16th, 2018

As promised, here is a summary of the work involved in porting L4Re and Fiasco.OC to the Ben NanoNote. First of all, a list of all the articles with some brief descriptions of what they cover:

  1. Familiarisation with L4Re and Fiasco.OC on the MIPS Creator CI20, adding some missing pieces
  2. Setting up and introducing a suitable compiler for the Ben, also describing the hardware in the kernel
  3. Handling instructions unsupported by the JZ4720 (the Ben’s SoC) in the kernel
  4. Describing the Ben and dealing with unsupported instructions in the L4Re portion of the system
  5. Configuring the memory layout and attempting to bootstrap the kernel
  6. Making the kernel support the MIPS architecture revision used by the JZ4720, also fixing the interrupt system description
  7. Investigating context/thread switching and fixing an inadvertently-introduced fault in the unsupported instruction handling
  8. Configuring user space examples and getting a simple framebuffer demonstration working
  9. Getting the framebuffer driver, GUI multiplexer, and “spectrum” example working

As I may have noted a few times in the articles, this work just builds on previous work done by a number of people over the years, obviously starting with the whole L4 microkernel effort, the development of Fiasco.OC, L4Re and their predecessors, and the work done to port these components to the MIPS architecture. On the l4-hackers mailing list, Adam Lackorzynski was particularly helpful when I ran into obstacles, and Sarah Hoffman provided some insight into problems with the CI20 just as it was needed.

You really don’t have to read all the articles or even any of them! The point of this article is to summarise the work and perhaps make similar porting efforts a bit more approachable for others in the same position: anyone having a vague level of familiarity with L4Re/Fiasco.OC or similar systems, also having a device that might be supported, and being somewhat familiar with writing code that drives hardware.

Practical Details

It might be useful to give certain practical details here, if only to indicate the nature of the development and testing routine employed in this endeavour. First of all, I have been using a chroot containing the Debian “unstable” distribution for the i386 architecture. Although this was essential for a time when building the software for the CI20 and trying to take advantage of Debian’s cross-compiler packages, any fairly recent version of Debian would probably be fine because I ended up using a Buildroot toolchain to be able to target the Ben. You could probably choose any Free Software distribution and reproduce what I have done.

The distribution of patches contains instructions regarding preparation and the building of the software. It isn’t too useful to repeat that information here, but the following things need doing:

  1. Installing packages for build tools
  2. Obtaining or building a cross-compiler
  3. Checking out the source code for L4Re and Fiasco.OC from its repository
  4. Applying the patches
  5. Configuring and building the kernel
  6. Configuring and building the runtime environment
  7. Copying the payload to a memory card
  8. Booting the device

Some scripts have been included in the patch distribution, one of which should do the tricky job of applying patches to the repository checkout according to the chosen device configuration. Because a centralised version control system (Subversion) has been used to publish the L4Re and Fiasco.OC sources, I had to find a way of working with my own local changes. Consequently, I wrote a few scripts to maintain bundles of changes associated with certain files, and I then managed these bundles in a different version control system. Yes, this effectively meant versioning the changes themselves!

Things would be simpler with a decentralised version control system because local commits would be convenient, and upstream updates would be incorporated into the repository separately and merged with local changes in a controlled fashion. One of the corporate participants has made a Git repository for Fiasco.OC available, which may alleviate some issues, although I am increasingly finding larger Git repositories to be unusable on my modest hardware, and I also tend to disagree with everybody deciding to put everything on GitHub.

Fixing and Building

Needing to repeatedly build, test, go back and fix, I found myself issuing the same command sequences a lot. When working with the kernel, I tended to enter the kernel build directory, which I called “mybuild”, edit the kernel sources, and then re-run the make command:

cd mybuild
vi ../src/kern/mips/exception.S # edit a familiar file with vim
make

Having built a new kernel, I would then need to build a new payload to deploy, which meant ascending the directory hierarchy and building an image in the L4Re section of the sources:

cd ../../../l4
make O=mybuild uimage E=mips-qi_lb60-spectrum-example

Given a previously-built “user space”, this would bundle the new kernel together with code that might be able to test it. Of particular importance is the bootstrap code which launches the kernel: without that, there is no point in even trying to test the kernel!

I found that re-building L4Re components seemed to require a general build to be performed:

make O=mybuild

If that proved successful, an image would then be built and tested. In general, focusing on either the kernel or some user space component meant that there was rarely a need to build a new kernel and then build much of the user space.

Work Summary

The patches accumulated during this process cover a range of different areas of functionality. Looking at them organised by functional area, instead of in the more haphazard fashion presented throughout the series of articles, allows for a more convenient review of the work actually needed to get the job done.

Build System Adjustments and Various Fixes

As early as my experiments with the CI20, I experienced the need to fix some things that didn’t work on my system, either due to some Debian peculiarities or differences in compiler behaviour:

  • l4util-mips-thread.diff (fixes a symbol visibility issue with certain compiler versions)
  • mips-gcc-cpload.diff (fixes the initialisation of certain L4Re components)
  • no-at.diff (allows the build to work on Debian for the i386 architecture)

Other adjustments are required to let the build system do its job, setting paths for other components and for the toolchains:

  • conf-makeconf-boot.diff (lets the L4Re build system find things like the kernel, modules and hardware descriptions)
  • qi_lb60-gcc-buildroot-fiasco.diff (changes the compiler and architecture settings)
  • qi_lb60-gcc-buildroot-l4re.diff (changes the compiler, architecture and soft-float settings)

The build system also needs directing towards new drivers, and various files need to be excluded or changed:

  • ingenic-mips-drivers-top.diff (enables drivers added by this work)
  • qi_lb60-fbdrv.diff (changes the splash image for the framebuffer driver)
  • qi_lb60-l4re.diff (includes a temporary fix disabling a Mag plugin)

The first of these is important to remember when adding drivers since it changes the l4/pkg/drivers/Control file and defines the driver “packages” provided by each of the driver libraries. These package definitions help the build system work out which other parts of the system need to be consulted when building a particular driver.

Supporting MIPS32r1 Devices

Throughout the kernel and L4Re, changes need making to support the earlier architecture version provided by the JZ4720. The bulk of the following patch files deals with such changes:

  • qi_lb60-fiasco.diff
  • qi_lb60-l4re.diff

Maybe I will try and break out the architecture version changes into specific patch files, provided this does not result in the original source files ending up being patched by multiple patch files. My aim has been to avoid patches having to be applied in a particular order, and that starts to happen when multiple patches modify the same file.

Describing the Ben NanoNote

The kernel needs some knowledge of the Ben with regard to timers and interrupts. Meanwhile, L4Re needs to set the Ben up correctly when booting. Both sections of the system need an awareness of how memory is going to be used, and extra configuration options need to be provided to merely allow the selection of the Ben for building. Currently, the following patch files include things concerned with such matters:

  • qi_lb60-fiasco.diff (contains timer, interrupt and memory details, plus configuration system changes)
  • qi_lb60-l4re.diff (contains bootstrap and memory details, plus configuration system changes)
  • qi_lb60-platform.diff (platform definitions for the Ben in L4Re)

One significant objective here is to be able to offer the Ben as a “first class” configuration option and have the build system do the right thing, setting up all the components and code regions that the Ben needs to function.

Introducing Driver Code

To be able to activate the framebuffer on the Ben, driver code needs introducing for a few peripherals provided by the JZ4720: CPM (clock/power management), GPIO (general-purpose input/output) and LCD (liquid crystal display, or similar). A few different patch files cover these areas:

  • ingenic-mips-cpm.diff (CPM support for JZ4720 and JZ4780)
  • ingenic-mips-gpio.diff (GPIO support for JZ4720 and JZ4780)
  • qi_lb60-lcd.diff (LCD support for JZ4720)

The JZ4780 support is intended for the CI20 and will not be used with the Ben. However, it is convenient to incorporate support for these different platforms in the same patch file in each instance.

Meanwhile, the LCD driver should work with a range of JZ4700-series devices (labelled as JZ4740 in the patches). While focusing on getting things working, the only panel supported by this work was that provided by the Ben. Since then, support has been made slightly more general, just as was done with the Linux kernel support for products employing this particular SoC family and, subsequently, for panels in general. (Linux has moved towards a “device tree” approach for specifying things like panels and displays, although this is arguably just restating things that were once C-coded structures in another, rather peculiar, format.)

To support these drivers, some useful code has been copied from elsewhere in L4Re:

  • drivers_frst-register-block.diff

This provides a convenient abstraction for registers that is exposed via an include directive:

#include <l4/drivers/hw_mmio_register_block.h>

Indeed, it is worth focusing on the LCD driver briefly. The code has its origins in existing driver code written for the Ben that I adapted to get working as part of a simple “bare metal” payload. I have maintained a separation between the more intricate hardware configuration and aspects that deal with the surrounding software. As part of L4Re, the latter involves obtaining access to memory using the appropriate API calls and invoking other drivers.

In L4Re, there is a kind of framework for LCD drivers, and the existing drivers seem to be written in C rather than C++. Reminiscent of Linux, there is a mechanism for exporting driver operations using a well-defined data structure, and this permits the “probing” of drivers to see if they can be enabled and if meaningful information can be obtained about things like the supported resolution, colour depth and pixel format. To make the existing code compatible with L4Re, a fair amount of the work involves translating the information already known (and used) in the hardware configuration activity to a form that other L4Re components can understand and use.
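
To give an impression of what such a well-defined data structure might look like, here is a hypothetical sketch in C; the member names are purely illustrative and are not the actual L4Re definitions:

/* A hypothetical driver operations structure of the kind described. */
struct lcd_driver_ops {
    const char *name;
    int (*probe)(const char *config);      /* can this driver be enabled? */
    void *(*framebuffer)(void);            /* address of the video memory */
    unsigned int (*width)(void);
    unsigned int (*height)(void);
    unsigned int (*bytes_per_pixel)(void); /* colour depth and pixel format */
    void (*enable)(void);
    void (*disable)(void);
};

/* A driver fills in such a structure and registers it, allowing fb-drv to
   probe the available drivers and query resolution and pixel format. */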

Originally, for the GPIO driver, I had intended it to operate as part of the Io server framework. Components using GPIO functionality would then employ the appropriate API to configure and interact with the exposed input and output pins. Unfortunately, this proved rather cumbersome, and so I decided to take a simpler approach of providing the driver as an abstraction that a program would use together with explicitly-requested memory. I did decide to preserve the general form of the API for this relocated abstraction, however, meaning that various classes and methods are provided that behave in the same way as those “left behind” in the Io server framework.

Thus, a program would itself request access to the GPIO-related memory, and it would then use GPIO-related abstractions to “do the right thing” with this memory. One would envisage that such a program would not be a “normal”, unprivileged program as such, but instead be more like a server or driver in its own right. Indeed, the LCD driver employs these abstractions to use SPI-based signalling with the LCD panel, and it uses the same techniques to configure the LCD clock frequencies using the CPM-related memory and CPM-related abstractions.

Although the GPIO driver follows existing conventions, the CPM driver has no obvious precedent in L4Re, but I adopted some of the conventions employed in the GPIO driver, adding more specialised methods and functions to expose functionality specific to the SoC. Since I had previously written a CPM driver for the JZ4780, the main objective was to make the JZ4720/JZ4740 driver resemble the existing driver as much as possible.

Introducing and Configuring Example Programs

Throughout the series of articles, I was working towards running one specific example program, making some new ones on the way for testing purposes. These additional programs are provided together with their configuration, accompanied by suitable configurations for existing examples and components, by the following patch files:

  • ingenic-mips-modules.diff (example program definitions)
  • qi_lb60-examples.diff (example program implementations and configuration details)

The additional programs (defined in l4/conf/modules.list) are as follows:

  • mips-qi_lb60-lcd-example (implemented by qi_lb60_lcd, configured by the mips-qi_lb60-lcd files)
  • mips-qi_lb60-lcd-driver-example (implemented by qi_lb60_lcd_driver, configured by the mips-qi_lb60-lcd-driver files)

Configurations are provided for the existing examples and components as follows:

  • mips-qi_lb60-fbdrv-example (configured by the mips-qi_lb60-fbdrv files)
  • mips-qi_lb60-spectrum-example (configured by the mips-qi_lb60-spectrum files)

All configurations reside in the l4/conf/examples directory. All new examples reside in the l4/pkg/examples/misc directory.

Further Work

In the final article in the series, I mentioned a few ideas for further work based on that described above:

Since completing the above work, I have already made some progress on the first two of these topics. More on that in an upcoming post!

Some Updates

Since writing this article, a few things are worth adding. First of all, the patches produced in the initial effort described by this series of articles are now available in an “initial archive” via the Web page documenting the effort. In contrast, a “current archive” provides patches for the current state of the work, with the aim being to focus these patches only on essential support for these devices within L4Re and Fiasco.OC, and with future development being done elsewhere.

Another couple of observations have been made since completing this initial effort that qualify or correct some information provided here. On the topic of the memory map needed to support the Ben NanoNote and its bootloader, it turned out that a fairly conventional arrangement was feasible after all and that only the “exception base” might be a problem. Here is the more conventional arrangement:

0x80600000 payload load address when copied by bootm
0x802d0000 bootstrap start address
0x80010000 kernel load address
0x80001000 exception handlers

The kernel load address and bootstrap start address are now the same as for other MIPS platforms. The exception handlers are positioned 4 kilobytes above the normal exception base just in case overwriting them might upset the bootloader.

Previously, I had experienced problems with Mag and its mag-input-libinput plugin, causing me to disable that plugin to get things working. This can be avoided by just providing a capability called “vbus” when starting Mag.

Porting L4Re and Fiasco.OC to the Ben NanoNote (Part 9)

Thursday, March 29th, 2018

After all my effort modifying the Fiasco.OC kernel, adapting driver code for the Ben NanoNote to work in the L4 Runtime Environment (L4Re), I had managed to demonstrate programs that could access the framebuffer and thus show things on the screen. But my stated goal is to demonstrate the functioning of existing code and a range of different components, specifically the “spectrum” example, GUI multiplexer (Mag), and framebuffer driver (fb-drv), supported by additional driver functionality I have added myself.

Still to do were the tasks of getting the framebuffer driver to access my LCD driver, getting Mag to work on top of that, and then getting the “spectrum” example to work as it had done with Fiasco.OC-UX. Looking back at this exercise now, I can see that it is largely a case of wiring up the components just as we saw earlier when I showed how to get access to the hardware through the Io server. But this didn’t prevent excursions into the code for some debugging operations or a few, more lasting, modifications.

Making a Splash

I had sought to test the entire stack consisting of the example, GUI multiplexer, framebuffer driver, and the rest of the software using a configuration derived from both the UX “spectrum” demonstration (found in l4/conf/examples/x86-fb.cfg) and the corresponding demonstration for ARM-based devices (found in l4/conf/examples/arm-rv-lcd.cfg). Unsurprisingly, this did not work, and so I started out stripping down my own example configuration in order to test the components individually.

On the way, I learned a few things that I wished I’d realised earlier. The first one is pretty mundane but very important. In order to strip down the configuration, I had tried to comment out various components and had done so using the hash symbol (#), which vim had helped to make me believe was a valid comment symbol. In fact, in Lua, the hash symbol appears to be tolerated only at the very start of a file, accommodating the usual Unix scripting declaration; elsewhere it acts as the length operator, with comments actually being introduced by two hyphens (--).

By making an assumption almost instinctively, due to the prevalence of the hash character as a comment symbol in Unix scripting, I was now being confronted with “Starting kernel …” and nothing more, making me think that I had really broken something. I had to take several steps back, consider what might really be happening, that the Ned task responsible for executing the configuration had somehow failed, and then come to the realisation that were I able to read the serial console output, I would be staring at a Lua syntax error!

So, removing the configuration details that I didn’t want to test straight away, I set about testing the framebuffer driver all by itself. Here is the configuration file involved (edited slightly):

local io_buses =
  {
    fbdrv = l:new_channel();
  };

l:start({
  caps = {
      fbdrv = io_buses.fbdrv:svr(),
      icu = L4.Env.icu,
      sigma0 = L4.cast(L4.Proto.Factory, L4.Env.sigma0):create(L4.Proto.Sigma0),
    },
  },
  "rom/io -vvvv rom/hw_devices.io rom/mips-qi_lb60-fbdrv.io");

local fbdrv_fb = l:new_channel();

l:startv({
  caps = {
      vbus = io_buses.fbdrv,
      fb   = fbdrv_fb:svr(),
    },
  },
  "rom/fb-drv", "-c", "nanonote"); -- configuration passed to the LCD driver

It was around this time that I discovered the importance of the naming of certain things, noted previously, and so in the accompanying Io configuration, the devices are added to a vbus called “fbdrv”. What this example should do is to run just the framebuffer driver (fb-drv) and the Io server.

Now, you might be wondering what the point of running just the driver, without any client programs, might be. Well, I had noticed that the driver should show a “splash screen”, and so I eagerly anticipated seeing whatever the L4Re developers had decided to show to us upon the driver starting up. Unfortunately, all I got was a blank screen. That made me think some bad things about what might be happening in the code. Fortunately, it didn’t take me long to realise what the problem was, discovering the following (in l4/pkg/fb-drv/server/src/splash.cc):

  if (fb_info->width < SPLASHNAME.width)
    return;
  if (fb_info->height < SPLASHNAME.height)
    return;

Meanwhile, in the source file providing the screen data (l4/pkg/fb-drv/server/data/splash1.c), I saw that the width and height of the image were given as being 480 pixels and 65 pixels respectively. The Ben’s screen is 320 pixels wide and 240 pixels high, thus preventing the supplied image from being shown.

It seemed worthwhile trying to replace this image just to reassure myself that the driver was actually working. The supplied image was exported from GIMP and so I attempted to reproduce this by loading one of my own images into GIMP, cropping to 320×240, and saving as a C source file with run-length encoding, 3 bytes per pixel, just as the supplied image had been created. I then copied it into the appropriate package subdirectory (l4/pkg/fb-drv/server/data) and modified the Makefile (l4/pkg/fb-drv/server/src/Makefile), changing it to reference my image instead of the supplied one.
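
For the sake of illustration, the exported file provides a structure of roughly the following form, exposing the dimensions that the splash code checks against the screen size. The names and exact layout here are my own assumptions rather than the exporter’s actual output, which varies with the GIMP version and the export options chosen:

  /* An illustrative sketch only, not the exact GIMP output. */
  static const struct
  {
    unsigned int  width;            /* 320 to fit the Ben's display */
    unsigned int  height;           /* 240 to fit the Ben's display */
    unsigned int  bytes_per_pixel;  /* 3, as for the supplied image */
    unsigned char pixel_data[320 * 240 * 3 + 1];
  } splash_image = {
    320, 240, 3,
    "...",  /* pixel runs emitted by GIMP as one long string literal */
  };

Presumably, the SPLASHNAME macro used in splash.cc expands to the name of whichever image structure has been compiled in, which is why swapping the image only required changes to the data file and the Makefile.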

I also needed to change a few other things in the Makefile, as it turned out, such as the definitions of the sources and libraries used by MIPS architecture devices. It was odd to encounter architecture-specific artefacts at this level in the system, but I suppose that different architectures tend to employ different kinds of display mechanisms: the x86 architecture and derivatives have their “legacy” devices; other architectures used in system-on-a-chip products have other peripherals and devices. Anyway, building the payload and booting the Ben gave me this:

The Ben NanoNote showing my L4Re framebuffer driver splash screen: rainbow lorikeets on a suburban lawn

So, the framebuffer driver will happily run, showing its splash screen until something connects to it and gets something else onto the screen. Here, we just let it run by itself until it is time to switch off!

Missing Inputs

There was still one component to be tested before arriving at the “spectrum” example: the GUI multiplexer, Mag. But testing it alone did not necessarily seem to make sense, because unlike the framebuffer driver, Mag’s role is merely to arbitrate between different framebuffer clients. So testing it and some kind of example together seemed like the only workable strategy.

I tried to test it with the “spectrum” example, but I only ever saw the framebuffer splash screen, so it seemed that either the example or Mag wasn’t working. But if it were Mag that wasn’t working, how would I be able to find my way to the problem? I briefly considered whether Mag had a problem with the display configuration, given that I had already seen such a problem, albeit a minor one, in the framebuffer driver.

I chased references around the source code until I established that Mag was populating a fixed-length array of “factory” objects, the array being defined in one place (l4/pkg/mag-gfx/include/factory) and its members statically defined in another (l4/pkg/mag/server/src/screen.cc), each member indicating how to deal with a particular pixel format. Such objects, of type Mag_gfx::Mem::Factory, are returned in the Mag main program (in pkg/mag/server/src/main.cc) when the following code is executed:

  Screen_factory *f = dynamic_cast<Screen_factory*>(Screen_factory::set.find(view_i.pixel_info));

I had wondered whether f was being presented with a null value and thus stopping Mag right there, but since there was a suitable factory object being created for the Ben’s pixel format, it seemed rather unlikely. So, the only approach I had considered at this point was not a particularly convenient one: I would have to replicate Mag piece by piece until discovering which part of it caused it to fail.
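
Had I wanted to rule that possibility out more directly, a quick diagnostic would have been to test the result of the lookup and report it over the serial console, along these lines. This is a sketch only, reusing the names from the fragment above, with the surrounding declarations living in Mag itself:

  Screen_factory *f =
    dynamic_cast<Screen_factory*>(Screen_factory::set.find(view_i.pixel_info));

  if (!f)
    printf("mag: no screen factory found for the reported pixel format\n");
  else
    printf("mag: found a screen factory for this pixel format\n");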

I set out with a simple example borrowing code from Mag that requests a memory region via a capability or channel, displaying some data on the screen. This managed to work. I then expanded the example, adding various data structures, copying functionality into the example directory from Mag, slowly reassembling what Mag had been all along. Things kept working, right until the point where I needed to set the example going as a server rather than have it wait in an infinite loop doing nothing. Then, the screen would very briefly show the splash image, then the bit patterns, and then go blank.
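
For reference, the kind of minimal client I mean amounts to something like the following sketch, assuming the L4Re::Util::Video::Goos_fb convenience class and a framebuffer capability named “fb” provided in the program’s environment; this is an illustration rather than the example’s actual code:

  /* A sketch of a minimal framebuffer client, not the example's actual code. */
  #include <l4/re/util/video/goos_fb>
  #include <cstring>

  int main(void)
  {
    L4Re::Util::Video::Goos_fb fb("fb");   // obtain the framebuffer session
    void *base = fb.attach_buffer();       // map the pixel memory

    L4Re::Video::View::Info info;
    fb.view_info(&info);                   // dimensions, stride, pixel format

    /* Paint the visible area so that any success is unmistakable. */
    memset((char *) base + info.buffer_offset, 0xff,
           info.height * info.bytes_per_line);

    for (;;);                              // keep the program and its memory alive
  }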

But maybe Mag was going to clear the framebuffer anyway and thus the server loop was working? Perhaps this was what success actually looked like under these circumstances, which is to say that it did not involve seeing two brightly-coloured birds on a lawn. At this point, out of laziness, I had avoided integrating the plugins into my example, and that was the only remaining difference between it and Mag.

I started to realise that maybe it was a matter of removing things from Mag and seeing if I could get it to work, and the obvious candidates for removal were the plugins. So I removed all the plugins as defined in the Makefile (found at pkg/mag/server/src/Makefile) and tested to see if it changed Mag’s behaviour. Sure enough, the screen went blank. I then added plugins back one by one, knowing by now that the mag-client_fb plugin was going to be required to get the example working.

Well, it turned out that there was one plugin causing all the problems: mag-input-libinput. Removing that got the “spectrum” example to work:

The Ben NanoNote showing the "spectrum" framebuffer example

Given that I haven’t addressed the matter of supporting input devices, which for the Ben would mostly concern the keyboard, disabling this errant plugin seemed like a reasonable compromise for now.

Update: a remedy for the problem with the plugin is described in the summary article.

A Door Opens?

There isn’t much left to be said, which perhaps makes the ending something of an anticlimax. But perhaps this is part of the point of the exercise, that the L4Re/Fiasco.OC combination now just seems to work on the Ben NanoNote, and that it could potentially in future be just another thing that this software supports. Of course, the Ben is such a relatively rare device that it isn’t likely to have many potential users, but the SoC family of which the JZ4720 is a part is employed by a number of different devices.

If there haven’t been any privately-maintained ports of this software to those other devices, then this work potentially opens the door to its use on other handheld devices like the GCW Zero or any number of randomly-sourced pocket games consoles, portable media players, and even smartwatches and wearable devices, all of which have been vehicles for the SoC vendor’s products. The Letux 400 could probably be supported given the similarity of its own SoC, the JZ4730, to that used in the Ben.

When the Ben was released, work had been done, first by the SoC vendor, then by the Qi Hardware people, to provide a functioning GNU/Linux system for the device. Clearly, there isn’t an overwhelming need for another kind of system if the intention is to just use the device with existing Free Software. But perhaps this is another component in this exercise: to make other technological solutions possible and to explore avenues that get ignored as everyone follows the mainstream. The Ben is hardly a mainstream product itself; why not use it to make other alternative choices?

It seems interesting to consider writing other drivers for the Ben and to gain experience with microkernel-based systems design. The Genode framework might also be worth investigating, given its focus on becoming a system suitable for deployment to end-users. The Hurd was also ported in an exploratory fashion to one of the L4 implementations some time ago, and revisiting this might be possible, even simplifying the effort thanks to the evolution in features provided by implementations like Fiasco.OC.

In any case, I hope that this account has been vaguely interesting and entertaining. If you managed to read it all, you deserve my sincere thanks for spending the time to do so. A summary of the work will probably follow, and the patches themselves are available on a page dedicated to the effort. Good luck with your own investigations into alternative Free Software operating systems!

Porting L4Re and Fiasco.OC to the Ben NanoNote (Part 8)

Tuesday, March 27th, 2018

With some confidence that the effort put in previously had resulted in a working Fiasco.OC kernel, I now found myself where I had expected to be at the start of this adventure: in the position of trying to get an example program working that could take advantage of the framebuffer on the Ben NanoNote. In fact, there is already an example program called “spectrum” that I had managed to get working with the UX variant of Fiasco.OC (this being a kind of “User Mode Fiasco” which runs as a normal process, or maybe a few processes, on top of a normal GNU/Linux system).

However, if this exercise had taught me nothing else, it had taught me not to try out huge chunks of untested functionality and then expect everything to work. It would instead be a far better idea to incrementally add small chunks of functionality whose correctness (or incorrectness) would be easier to see. Thus, I decided to start with a minimal example, adding pieces of tested functionality from my CI20 experiments that obtain access to various memory regions. In the first instance, I wanted to see if I could exercise control over the LCD panel via the general-purpose input/output (GPIO) memory region.

Wiring Up

At this point, I should make a brief detour into the matter of making peripherals available to programs. Unlike the debugging code employed previously, it isn’t possible to just access some interesting part of memory by taking an address and reading from it or writing to it. Not only are user space programs working with virtual memory, whose addresses need not correspond to the actual physical locations employed by the underlying hardware, but as a matter of protecting the system against program misbehaviour, access to memory regions is strictly controlled.

In L4Re, a component called Io is responsible for granting access to predefined memory regions that describe hardware peripherals or devices. My example needed to have the following resources defined:

  • A configuration description (placed in l4/conf/examples/mips-qi_lb60-lcd.cfg)
  • The Io peripheral bus description (placed in l4/conf/examples/mips-qi_lb60-lcd.io)
  • A list of modules needing deployment (added as an entry to l4/conf/modules.list, described separately in l4/conf/examples/mips-qi_lb60-lcd.list)

It is perhaps clearer to start with the Io-related resource, which is short enough to reproduce here:

local hw = Io.system_bus()

local bus = Io.Vi.System_bus
{
  CPM = wrap(hw:match("jz4740-cpm"));
  GPIO = wrap(hw:match("jz4740-gpio"));
  LCD = wrap(hw:match("jz4740-lcd"));
}

Io.add_vbus("devices", bus)

This is Lua source code, with Lua having been adopted as a scripting language in L4Re. Here, the important things are those appearing within the hw:match function call brackets: each of these values refers to a device defined in the Ben’s hardware description (found in l4/pkg/io/io/config/plat-qi_lb60/hw_devices.io), identified using the “hid” property. The “jz4740-gpio” value is used to locate the GPIO device or peripheral, for instance.

Also important is the name used for the bus, “devices”, in the add_vbus function call at the bottom, as we shall see in a moment. Moving on from this, the configuration description for the example itself contains the following slightly-edited details:

local io_buses =
  {
    devices = l:new_channel();
  };

l:start({
  caps = {
      devices = io_buses.devices:svr(),
      icu = L4.Env.icu,
      sigma0 = L4.cast(L4.Proto.Factory, L4.Env.sigma0):create(L4.Proto.Sigma0),
    },
  },
  "rom/io -vvvv rom/hw_devices.io rom/mips-qi_lb60-lcd.io");

l:start({
  caps = {
      icu = L4.Env.icu,
      vbus = io_buses.devices,
    },
  },
  "rom/ex_qi_lb60_lcd");

Here, the “devices” name seemingly needs to be consistent with the name used in the caps mapping for the Io server, set up in the first l:start invocation. The name used for the io_buses mapping can be something else, but then the references to io_buses.devices must obviously be changed to follow suit. I am sure you get the idea.

The consequence of this wiring up is that the example program, set up at the end, accesses peripherals using the vbus capability or channel, communicating with Io and requesting access to devices, whose memory regions are described in the Ben’s hardware description. Io acts as the server on this channel whereas the example program acts as the client.

Is This Thing On?

There were a few good reasons for trying GPIO access first. The most significant one is related to the framebuffer. As you may recall, as the kernel finished starting up, sigma0 started, and we could then no longer be sure that the framebuffer configuration initialised by the bootloader had been preserved. So, I could no longer rely on any shortcuts in getting information onto the screen.

Another reason for trying GPIO operations was that I had written code for the CI20 that accessed input and output pins, and I had some confidence in the GPIO driver code that I had included in the L4Re package hierarchy actually working as intended. Indeed, when working with the CI20, I had implemented some driver code for both the GPIO and CPM (clock/power management) functionality as provided by the CI20’s SoC, the JZ4780. As part of my preparation for this effort, I had adapted this code for the Ben’s SoC, the JZ4720. It was arguably worthwhile to give this code some exercise.

One drawback with using general-purpose input/output, however, is that the Ben doesn’t really have any conveniently-accessed output pins or indicators, whereas the CI20 is a development board with lots of connectors and some LEDs that can be used to give simple feedback on what a program might be doing. Manipulating the LCD panel, in contrast, offers a very awkward way of communicating program status.

Experiments proved somewhat successful, however. Obtaining access to device memory was as straightforward as it had been in my CI20 examples, providing me with a virtual address corresponding to the GPIO memory region. Inserting some driver code employing GPIO operations directly into the principal source file for the example, in order to keep things particularly simple and avoid linking problems, I was able to tell the panel to switch itself off. This involves bit-banging SPI commands using a number of output pins. The consequence of doing this is that the screen fades to white.
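
To give a flavour of what this involves, here is a sketch combining the libio call that maps such a memory region with the kind of pin-level helpers used for the bit-banging. The helper names, pin assignments and register details are placeholders for the driver code described above, not the code itself:

  /* A sketch only: the GPIO helpers and pin handling are hypothetical placeholders. */
  #include <l4/io/io.h>

  enum { JZ_GPIO_BASE = 0x10010000, JZ_GPIO_SIZE = 0x1000 };  /* GPIO block on the Ben's SoC (size assumed) */

  static l4_addr_t gpio_virt;

  static int map_gpio(void)
  {
    /* Ask Io, via the vbus capability, for the GPIO register region and
       receive a virtual address through which it can then be accessed. */
    return l4io_request_iomem(JZ_GPIO_BASE, JZ_GPIO_SIZE,
                              L4IO_MEM_NONCACHED, &gpio_virt);
  }

  /* Hypothetical pin-level helpers built upon the mapped region. */
  void gpio_set(int pin);
  void gpio_clear(int pin);

  /* Clock one byte out to the panel, most significant bit first. */
  static void spi_write_byte(unsigned char b, int clock_pin, int data_pin)
  {
    for (int i = 7; i >= 0; i--)
    {
      gpio_clear(clock_pin);
      if (b & (1 << i)) gpio_set(data_pin); else gpio_clear(data_pin);
      gpio_set(clock_pin);   /* assuming the panel samples on the rising edge */
    }
  }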

I gradually gained confidence and decided to reserve memory for the framebuffer and descriptors. Instead of attempting to use the LCD driver code that I had prepared, I attempted to set up the descriptors manually by writing values to the structure that I knew would be compatible with the state of the peripheral as it had been configured previously. But once again, I found myself making no progress. No image would appear on screen, and I started to wonder whether I needed to do more work to reinitialise the framebuffer or maybe even the panel itself.

But the screen was at least switched on, indicating that the Ben had not completely hung and might still be doing something. One odd thing was that the screen would often turn a certain colour. Indeed, it was turning purple fairly consistently when I realised my mistake: my program was terminating! And this was, of course, as intended. The program would start, access memory, set up the framebuffer, fill it with a pattern, and then finish. In fact, something had been happening all along: thinking back, I had noticed a transient bit pattern before the screen went blank, but I suppose I had instinctively put this down to some kind of ghosting or memory effect.

When a program terminates, it is most likely that the memory it used gets erased so as to prevent other programs from inheriting data that they should not see. And naturally, this will result in the framebuffer being cleared, maybe even the descriptors getting trashed again. So the trivial solution to my problem was to just stop my program from terminating:

while (1);

That did the trick!
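
A slightly tidier way of achieving the same effect, rather than spinning in a loop, would presumably be to put the thread to sleep indefinitely using the convenience function provided by the l4util library:

  #include <l4/util/util.h>

  l4_sleep_forever();   /* block this thread instead of busy-waiting */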

The Build-Up

Accessing the LCD peripheral directly from my example is all very well, but I had always intended to provide a display driver for the Ben within L4Re. Attempting to compile the driver as part of the general build process had already identified some errors that had been present in what effectively had been untested code, migrated from my earlier work and modified for the L4Re APIs. But I wasn’t about to try and get the driver to work in one go!

Instead, I copied more functionality into my example program, giving things like the clock management functionality some exercise, using my newly-enabled framebuffer output to display the results of various enquiries about clock frequencies, and also setting various frequencies to see whether this stopped the program from working. This did lead me on another needlessly-distracting diversion, however. When looking at the clock configuration values, something seemed wrong: they were shifted one bit to the left and therefore provided values twice as large as they should have been.

I emitted some other values and saw that they too were shifted to the left. For a while, I wondered whether the panel configuration was wrong and started adjusting all sorts of timing-related values (front and back porches, sync pulses, video-related things), reading up about how the panel was configured using the SPI communications described above. It occurred to me that I should investigate how the display output was shifted or wrapped, and I used the value 0x80000001 to put bit values at the extreme left and right of the screen. Something was causing the output to be wrapped, apparently displaced by 9 pixels or 36 bytes to the left.

You can probably guess where this is heading! The panel configuration had not changed and there was no reason to believe that it was wrong. Similarly, the framebuffer was being set up correctly and was not somehow moved slightly in memory, not that there is really any mechanism in L4Re or the hardware that would cause such a bizarre displacement. I looked again at my debugging code and saw my mistake; in concise terms, it took this form:

for (i = 0; i < fb_size / 4; i++)
{
    fb[i] = value & mask ? onpix : offpix; // write a pixel
    if ((i % 10) == 0)                     // move to the next bit?
    ...
}

For some reason, I had taken some perfectly acceptable code and changed one aspect of it. Here is what it had looked like on the many occasions I had used it in the past:

i = 0;
while (i < fb_size / 4)
{
    fb[i] = value & mask ? onpix : offpix; // write a pixel
    i++;
    if ((i % 10) == 0)                     // move to the next bit?
    ...
}

That’s right: I had accidentally reordered the increment and “next bit” tests, causing the first bit in the value to occupy only one pixel, prematurely skipping to the next bit; at the end of the value, the mask would wrap around and show the remaining nine pixels of the first bit. Again, I was debugging the debugging and wasting my own time!

But despite such distractions, over time, it became interesting to copy from my driver into the example directory the complete source file that does most of the work of configuring the hardware, and to use its functions directly, these functions normally being accessed by more general driver code. In effect, I was replicating the driver within an environment that was being incrementally enhanced, up until the point where I might assume that the driver itself could work.

With no obvious problems having occurred, I then created another example without all the memory lookup operations, copied-in functions, and other “instrumentation”: one that merely called the driver API, relying on the general mechanisms to find and activate the hardware…

  struct arm_lcd_ops *ops = arm_lcd_probe("nanonote");

…obtaining a framebuffer address and size…

  fb_vaddr = ops->get_fb();
  fb_size = ops->get_video_mem_size();

…and then writing data to the screen just as I had done before. Indeed, this worked just like the previous example, indicating that I had managed to implement the driver’s side of the mechanisms correctly. I had a working LCD driver!
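
Condensed into one place, and with the pattern-drawing and error handling omitted, the client side of this final example looks something like the following sketch; the header location and the 32-bit pixel assumption are mine rather than anything dictated by the driver:

  /* A sketch of the final example's client side; header path and pixel size assumed. */
  #include <l4/drivers/lcd.h>

  int main(void)
  {
    struct arm_lcd_ops *ops = arm_lcd_probe("nanonote");

    void *fb_vaddr = ops->get_fb();
    unsigned long fb_size = ops->get_video_mem_size();

    volatile unsigned int *fb = (volatile unsigned int *) fb_vaddr;

    /* Fill the screen with one value so that any output is unmistakable. */
    for (unsigned long i = 0; i < fb_size / 4; i++)
      fb[i] = 0x00ffffff;

    while (1);   /* keep the program, and thus the framebuffer contents, alive */
  }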

Adopting Abstractions

If the final objective had been to just get Fiasco.OC running and to have a program accessing the framebuffer, then this would probably be the end of this adventure. But, in fact, my aim is to demonstrate that the Ben NanoNote can take advantage of the various advertised capabilities of L4Re and Fiasco.OC. This means supporting facilities like the framebuffer driver, GUI multiplexer, and potentially other things. And such facilities are precisely those demonstrated by the “spectrum” example which I had promised myself (and my readership) that I would get working.

At this point, I shouldn’t need to write any more of my own code: the framebuffer driver, GUI multiplexer, and “spectrum” example are all existing pieces of L4Re code. But what I did need to figure out was whether my recipe for wiring these things up was still correct and functional, and whether my porting exercise had preserved the functionality of these things or mysteriously broken them!