Paul Boddie's Free Software-related blog

Paul's activities and perspectives around Free Software

Reconsidering Classic Programming Interfaces

Since my last update, I have been able to spend some time gradually broadening and hopefully improving the support for classic programming interfaces in my L4Re-based experiments, centred around a standard C library implementation based on Newlib. Of course, there were some frustrations experienced along the way, and much remains to be done, not only in terms of new functionality that will need to be added, but also in terms of verifying and correcting the existing functionality as I come to realise that I have made mistakes, with these inevitably leading to new frustrations.

One area I previously identified for broadened support was that of process creation and the ability to allow programs to start other programs. This necessitated a review of the standard C process control functions, which are deliberately abstracted from the operating system and are much simpler and more constrained than those found in the unistd.h file that Unix programmers might be more familiar with. The traditional Unix functions are very much tied up with the Unix process model, and there are some arguments to be made that despite being “standard”, these functions are a distraction and, in various respects, undesirable from a software architecture perspective, for both applications and the operating systems that run them.

So, ignoring the idea that I might support the likes of execl, execv, fork, and so on, I returned to consideration of the much more limited system function that is part of the C language standards, this simply running an abstract command provided by a character string and returning a result code when the command has completed:

int system(const char *command);

To any casual application programmer, this all sounds completely reasonable: they embed a command in their code that is then presented to “the system”, which runs the command and hands back a result or status code. But those of us who are accustomed to running commands at the shell and in our own programs might already be picking apart this simple situation.
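
As a trivial illustration of this casual usage, consider the following generic example (not code from my own system), which runs a command and reports the status code:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hand the command to "the system" and inspect the result. */
    int status = system("ls -l /tmp");

    if (status == -1)
        perror("system");
    else
        printf("Command completed with status %d\n", status);

    return 0;
}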

First of all, “the system” needs to have what the C standards documentation calls a “command processor”. In fact, even Unix standardisation efforts have adopted the term, despite the Linux manual pages referring to “the shell”. But at this point, my own system does not have a shell or a command processor, instead providing only a process server that manages the creation of new processes. And my process server deals with arrays or “vectors” of strings that comprise a command to be used to run a given program, configured by a sequence of arguments or parameters.

Indeed, this brings us to some other matters that may be familiar to anyone who has had the need to run commands from within programs: that of parameterising command invocations by introducing our own command argument values; and that of making sure that the representation of the program name and its arguments does not cause the shell to misinterpret these elements, by letting an errant space character break the program name into two, for instance. When dealing only with command strings, matters of quoting and tokenisation enter the picture, making the exercise very messy indeed.
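
To see just how messy, consider a program that naively interpolates a filename into a command string. With a hypothetical filename containing a space, the shell tokenises the command into more elements than intended:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* A hypothetical filename containing an errant space. */
    const char *filename = "my file.txt";
    char command[256];

    /* The shell will see three tokens, "cat", "my" and "file.txt",
       instead of a program name and a single filename argument. */
    snprintf(command, sizeof(command), "cat %s", filename);
    return system(command);
}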

So, our common experience has provided us with a very good reason to follow the lead of the classic execv Unix function and to avoid the representational issues associated with command string processing. In this regard, the Python standard library has managed to show the way in some respects, introducing the subprocess module which features interfaces that are equivalent to functions like system and popen, supporting the use of both command strings and lists of command elements to represent the invoked command.

Oddly, however, nobody seems to provide a “vector” version of the system function at the C language level, but it seemed to be the most natural interface I might provide in my own system:

int systemv(int argc, const char *argv[]);
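
As a sketch of how this might be used, assuming the prototype above, each command element is passed through verbatim, sidestepping the quoting and tokenisation problems described earlier:

#include <stdio.h>

/* The proposed interface; the implementation resides in my own C library work. */
int systemv(int argc, const char *argv[]);

int main(void)
{
    /* Each argument is a distinct element, so the space in the
       hypothetical filename needs no special treatment. */
    const char *args[] = {"cat", "my file.txt"};
    int result = systemv(2, args);

    printf("Command completed with status %d\n", result);
    return 0;
}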

I imagine that those doing low-level process creation in a Unix-style environment would be content to use the exec family of functions, probably in conjunction with the fork function, precisely because a function like execv “shall replace the current process image with a new process image”, as the documentation states. Obviously, replacing the current process isn’t helpful when implementing the system function because it effectively terminates the calling program, whereas the system function is meant to allow the program to continue after command completion. So, fork has to get involved somehow.
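
For reference, a minimal sketch of the conventional Unix approach, combining fork, execl and waitpid in the manner suggested by the standards documentation, might look like the following, with signal handling and other niceties omitted:

#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* A simplified Unix-style implementation of the system function. */
int my_system(const char *command)
{
    pid_t pid = fork();
    int status;

    if (pid == -1)
        return -1;

    if (pid == 0)
    {
        /* Child: replace this process image with the shell,
           handing it the command for interpretation. */
        execl("/bin/sh", "sh", "-c", command, (char *) NULL);
        _exit(127); /* Reached only if execl fails. */
    }

    /* Parent: block until the created process has completed. */
    if (waitpid(pid, &status, 0) == -1)
        return -1;

    return status;
}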

The Flow of Convention

I get the impression that people venturing along a similar path to mine are often led down the trail of compatibility with the systems that have gone before, usually tempted by the idea that existing applications will eventually be content to run on their system without significant modification, and thus an implementer will be able to appeal to an established audience. In this case, the temptation is there to support the fork function, the exec semantics, and to go with the flow of convention. And sometimes, a technical obstacle seems like a challenge to be met, to show that an implementation can provide support for existing software if it needs or wants to.

Then again, having seen situations where software is weighed down by the extra complexity of features that people believe it should have, some temptations are best resisted, perhaps with a robust justification for leaving out any particular supposedly desirable feature. One of my valued correspondents pointed me to a paper by some researchers that provides a robust argument for excluding fork and for promoting alternatives. Those alternatives have their shortcomings, as noted in the paper, and they seem rather complicated when considering simple situations like merely creating a completely separate process and running a new program in it.

Of course, there may still be complexity in doing simple things. One troublesome area concerned what might happen to the input and output streams of a process that creates another one: should the new process be able to receive the input that has been sent to the creating process, and should it be able to send its output to the recipient of the creating process’s output? For something like system or systemv, the initial “obvious” answer might be the total isolation of the created process from any existing input, but this limits the usefulness of such functions. It should arguably be possible to invoke system or systemv within a program that is accepting input as part of a pipeline, and for a process created by these functions to assume the input processing role transparently.

Indeed, the Unix world’s standards documentation for system extends the C standard to assert that the system function should behave like a combination of fork and execl, invoking the shell utility, sh, to initiate the program indicated in the call to system. It all sounds a bit prescriptive, but I suppose that what it largely means is that the input and output streams should be passed to the initiated program. A less prescriptive standard might have said that, of course, but who knows what kind of vendor lobbying went on to avoid having to modify the behaviour of those vendors’ existing products?

This leads to the awkward problem of dealing with the state of an input stream when such a stream is passed to another process. If the creating process has already read part of a stream, we need the newly created process to be aware of the extent of consumed data so that it may only read unconsumed data itself. Similarly, the newly created process must be able to append output to the existing output stream instead of overwriting any data that has already been written. And when the created process terminates, we need the creating process to synchronise its own view of the input and output streams. Such exercises are troublesome but necessary to provide predictable behaviour at higher levels in the system.
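
Under POSIX stream semantics, at least, part of this housekeeping can be expressed with fflush, which pushes buffered output to the underlying file and, for seekable input streams, winds the file offset back to the last byte actually consumed. The following sketch indicates the handoff, with a hypothetical run_process_and_wait standing in for the actual process creation and completion machinery:

#include <stdio.h>

void run_process_and_wait(void); /* hypothetical: create a process and await completion */

void run_with_shared_streams(void)
{
    /* Push buffered output to the underlying file so that the created
       process appends after it rather than interleaving with it. */
    fflush(stdout);

    /* For a seekable input stream, POSIX fflush discards the read-ahead
       buffer and moves the file offset back to the position of the last
       consumed byte, leaving only unconsumed data for the created process. */
    fflush(stdin);

    run_process_and_wait();

    /* Afterwards, the underlying offsets reflect the created process's
       activity, and the creating process resumes from that point. */
}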

Some Room for Improvement

Another function that deserves revisiting is the popen function, which either employs a dedicated output stream to capture the output of a created process within a program, or a dedicated input stream so that a program can feed the process with data it has prepared. The mode indicates what kind of stream the function will provide: “r” yields a stream carrying data out of the created process for the caller to read, whereas “w” yields a stream into which the caller writes data destined for the created process.

FILE *popen(const char *command, const char *mode);

This function is not part of the C language standards, appearing instead in Unix-related standards, but it is too useful to ignore. Like the system function, the standards documentation defines this function in terms of fork and execl, with the shell getting involved again. What happens with the stream that isn’t specified is not entirely obvious from this documentation, however. But given its talk of input and output filters, as well as the mention of those other functions, we can conclude that if we request an output stream from the new process, the new process will acquire the creating process’s standard input as its own input stream. Correspondingly, if we request an input stream to feed the new process, the new process will acquire standard output for itself and write its output to that.
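
As a reminder of how this function is typically used, here is a generic example capturing a command’s output, with pclose waiting for the created process and yielding its status:

#include <stdio.h>

int main(void)
{
    char line[256];

    /* "r" requests a stream carrying the created process's output. */
    FILE *stream = popen("ls -l", "r");

    if (stream == NULL)
        return 1;

    while (fgets(line, sizeof(line), stream) != NULL)
        fputs(line, stdout);

    /* pclose waits for the created process and returns its status. */
    return pclose(stream);
}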

This poses some concurrency issues that the system function largely avoids. Since the system function blocks until the created process is completed, the state of the shared input and output streams can be controlled. But with popen, the created process runs concurrently and can interact with whichever stream it acquired from the creating process, just as the creating process might also be using it, at least until pclose is invoked to wait for the completion of the created process. The standards documentation and the Linux manual page both note such pitfalls, but the whole business seems less than satisfactory.

Again, the Python standard library shows what a better approach might be. Alongside the popen function, the popen2 function creates dedicated input and output pipes for interaction with the created process, the popen3 function adds an error pipe to the repertoire, and there is even a popen4 function that presumably does what some people might expect from popen2, merging the output and error streams into a single stream. Naturally, this was becoming a bit incoherent, and so the subprocess module was brought in to clean it all up.

Our own attempt at a cleaner approach might involve the following function:

pid_t popenv(int argc, const char *argv[], FILE **input, FILE **output, FILE **error);

Here, we want to invoke a program using a vector containing the program and arguments, just as we did before, but we also want to acquire the input, output and error streams. However, we might allow any of these to be specified as NULL, indicating that any such stream will not be opened for the created process. Since this might cause problems, we might need to create special “empty” or “null” streams, where appropriate, so as not to upset the C library.

Unlike popen, we might also provide the process identifier for the created process. This would allow us to monitor the process, to control it in some way, and to wait for its completion. The nature of a process identifier is potentially more complicated than one might think, especially in my own system where there can be many process servers, each of them creating new processes without any regard to the others.
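
To make the proposal slightly more concrete, here is a hypothetical example of popenv in use, feeding data to a program and reading back its output, with the error stream declined by passing NULL:

#include <stdio.h>
#include <sys/types.h>

/* The proposed interface, as sketched above. */
pid_t popenv(int argc, const char *argv[], FILE **input, FILE **output, FILE **error);

int main(void)
{
    const char *args[] = {"sort"};
    FILE *in, *out;
    char line[256];

    /* Request input and output streams, declining an error stream. */
    pid_t pid = popenv(1, args, &in, &out, NULL);

    if (pid == (pid_t) -1)
        return 1;

    /* Feed the process, close its input to indicate the end of the data,
       then read back the sorted result. */
    fputs("banana\napple\n", in);
    fclose(in);

    while (fgets(line, sizeof(line), out) != NULL)
        fputs(line, stdout);

    fclose(out);

    /* The identifier could now be given to whatever waiting or monitoring
       facility the system provides (left unspecified in this sketch). */
    return 0;
}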

A Simpler Portable Environment Standard

Maybe I am just insufficiently aware of the historical precedents in this regard, but it seems that while C language standards are disappointingly tame when it comes to defining interaction with the host environment, the Unix or POSIX standardisation efforts go into too much detail and risk burdening any newly designed system with the baggage of systems that happened to be commercially significant at a particular point in time. Windows NT infamously feigned compliance with such standards to unlock the door to lucrative government contracts and to subvert public software procurement processes, generating colossal revenues that easily paid for any inconvenient compliance efforts. However, for everybody else, such standards seem to encumber system and application developers with obligations and practices that could be refined, improved and made more suitable for modern needs.

My own work depends on L4Re which makes extensive use of capabilities to provide access to entities within the system. For example, each process relies on a task that provides a private address space, within which code and data reside, along with an object space that retains the capabilities available within the task. Although the Fiasco (or L4Re) microkernel has some notion of all the tasks in the system, as well as all the threads, together with other kinds of objects, such global information is effectively private to the kernel, and “user space” programs merely deal with capabilities that reference specific objects. For such programs, there is no way to get some kind of universal list of tasks or threads, or to arbitrarily request control over any particular instances of them.

In systems with different characteristics to the ones we already know, we have to ask ourselves whether we want to reproduce legacy behaviour. To an extent, it might be desirable to have registers of resident processes and the ability to list the ones currently running in the system, introducing dedicated components to retain this information. Indeed, my process servers could quite easily enumerate and remember the details of processes they create, also providing an interface to query this register, maybe even an interface to control and terminate processes.

However, one must ask whether this is essential functionality or not. For now, the rudimentary shell-like environment I employ to test this work provides similar functionality to the job control features of the average Unix shell, remembering the processes created in this environment and offering control in a limited way over this particular section of the broader system.

And so the effort continues to try to build something a little different from, and perhaps a bit more flexible than, what we use today. Hopefully it is something that ends up being useful, too.