Bobulate


Archive for April, 2010

FOSS Nigeria 2010 report

Tuesday, April 27th, 2010

It’s a cold and windy day here in Kano; that’s comparable to a nice warm day in summer in Nijmegen, so I keep explaining, and sitting around in shorts on the steps in front of the university guest house at BUK old campus this morning was very nice.

Frederik has already blogged a bit about the conference and talks and added some pictures — pictures which are quite similar to last year’s bunch, which you can see at a-ig’s blog from last year. I will try to write about the way we prepared for the conference and what our contribution is. We sort of go to the conference as rock stars, but in all honesty there is a great deal we don’t know (and we say so). Just last night, over a Fanta at the CS students’ joint here at BUK, I learned a great deal more about Debian packaging and how to share packages effectively when only limited bandwidth is available. Something to add to my trove of knowledge and to share when someone needs it.

So in a sense we (Frederik and myself) act as catalysts, triggering other people to come forward and share their knowledge.

In these past two editions of the FOSS Nigeria conference, the conference schedule has not been fixed beforehand except in the broadest terms (Friday the whole day except for Juma’a prayers, Saturday afternoon, Sunday the whole day until 4 for the closing ceremony) and it hasn’t been clear just how much we’d be speaking anyway. So we prepared some introductory talks on Free Software: what is it; what do software licenses do; what is Linux; what is GNU; how do Free Software projects work; why (and how) you can contribute. For these introductory talks we had slides prepared beforehand; on the evening before the conference we discussed with Mustapha what the schedule should be and how long the talks would go on.

During the conference itself we take questions on paper; the conference writing pad is heavily used for that. The reason we don’t take questions from the audience at the end of each talk is that doing that well takes too much time — it would mean more mikes, walking into the audience, and so on. Also, some of the questions take some serious thought before answering. So we take notes. All the questions are collected and we type them up into slides during breaks or during each other’s talks. Then, whenever convenient, we do a Q&A session where we go through the question slides and answer them as best we can. Typically that gives us a 45-minute session right after lunch and one at the end of the day. Some questions we got this year:

  • Can you tell us about some Free Software application for agriculture?
  • Can VirtualBox access my Windows partition directly?
  • Can I have your autograph?
  • Is there a Free Software replacement for SPSS?
  • Please demonstrate some 3D and animation tools under Linux
  • Is there a good forms and reports generator for MySQL?

Dear Lazyweb: do you have good answers for these questions? Because all we could say was “umm… maybe”; “probably, but I think it requires guest additions”; “yes, see me later”; “I think R does the stats part but I don’t otherwise know”; “there’s Blender, but I don’t know how to use it”; “I don’t know”. It really makes you feel useless, giving answers like that.

Fortunately a local business specializes in using Blender for architectural modeling and walkthroughs, so on the third day of the conference they gave a presentation on how they use a Free Software tool (Blender, of course) in their business: “Using FOSS to be your own BOSS”. They said they decided to give their talk after hearing our rather limited answer about 3D, so that was a great catalytic moment.

After day 1 we sat down to plan day 2: who else is talking, how long, and what makes the most sense for the audience based on the questions we had received so far. So day 2 got a bit more of a practical slant in the afternoon — the morning was taken up by LPI exams. At the end of the day we still had 30-odd questions left in the queue to answer, and not much material for the next day, so we spent the evening writing up slides for new topics and dealing with the bits of paper we’d been handed.

Day 3 turned out to have no slots for us except Q&A sessions, because there were four other speakers including the Blender guys. That’s a great sign, so we skipped our KDE multimedia and Javascript and Python talks to listen to what was happening in Free Software in Kano and surroundings. Blender; databases and web portals; how to make money with Free Software.

There was also an announcement of a Kano Python users group (PyKano), an effort to get some more Python development off the ground here. I dare say that was one of the best moments of the conference: the launch of a concrete, short-term project to improve the software development and ICT community in Kano and promote Free Software at the same time.

A number of translation efforts have been pushed forward as well during these three days. The keyboard stuff I’ve written about is part of an effort to make it easier for everyone to type translated strings. I believe that we’ll get a big influx of Hausa translations over the next few months — probably by email through me, so I’m going to end up as translation coordinator for a language I don’t speak — as well as new work on Kanuri, where first we’ll have to figure out what the language code for it even is.

Over the next day or two I need to get a translation environment set up here so I can explain better how to begin doing the translations, including tools for testing and whatnot. That means running Lokalize and figuring out how it works. It looks rather intimidating compared to vi applied to the same data.

Anyway, returning to conference planning and the like: the schedule here is a living, changing thing that requires plenty of flexibility, but that is what makes speaking here such fun. It’s also a broad conference and a great chance to learn many new things — not least because we get asked about everything, so we have to research a little about everything as well. Expect the unexpected, I guess.

Planning is underway for next year, probably in a different city, and with any luck there will be some satellite events in outlying towns as well. If that means traveling around for a few extra days to help ICT in Kano and other states and to spread Free Software (low-cost, freedom-granting, straightforward and by-the-rules software), then I’m looking forward to FOSS Nigeria 2011 already.

Kanuri and Hausa Keyboards, part 2

Monday, April 26th, 2010

Thanks to the comments on my previous blog entry about Hausa and Kanuri keyboard layouts, I’ve looked into the topic a little more, adding yet more options for typing.

Dead keys: the keyboard may have “dead keys” for accents. A dead key does not print anything when pressed, but combines with whatever you type next. For instance, you could have a dead acute-accent key that combines with a, so you hit dead-acute followed by a and get á. In Kubuntu 9.04 (which is the OS I have at hand during this trip, so some of the advice offered in comments previously doesn’t apply) dead-acute k combines to ḱ. But this does offer a possible solution: using dead-acute with b, d, k, r and e. These will produce the Hausa b, d and k with a hook, and the Kanuri r with a line and the upside-down e.

First, I had to find a key on my keyboard to define as dead-acute, since I don’t have one otherwise. I used the strange key squashed between enter and the arrow keys, keycode 94.

xmodmap -e "keycode 94 = dead_acute"

From then on, I could immediately use the dead-acute key for combinations that are already defined in X, like á, é, ń and others. But that doesn’t help me type Hausa all that much.

Next stop: .XCompose. This is a file I create in my home directory. Reading it in seems to require an X restart, so log out and log in after creating the file. You will note that it uses the new dead-acute key to create all of the Hausa and Kanuri letters and the Naira symbol out of existing keys. The Naira is already pre-defined as multi-N-equals, but that is equally hard to type.

So I put this into my .XCompose. It takes the current configuration and then adds the dead-acute key combinations just described. Of course this is no official keyboard layout, but it may help some people type Hausa at speed when no other keyboard is available.


# Import default rules from the system Compose file:
include "/usr/share/X11/locale/en_US.UTF-8/Compose"

# Hausa definitions
<dead_acute> <k> : "ƙ" U0199 # HAUSA k
<dead_acute> <K> : "Ƙ" U0198 # HAUSA K
<dead_acute> <b> : "ɓ" U0253 # HAUSA b
<dead_acute> <B> : "Ɓ" U0181 # HAUSA B
<dead_acute> <d> : "ɗ" U0257 # HAUSA d
<dead_acute> <D> : "Ɗ" U018A # HAUSA D
<dead_acute> <e> : "ə" schwa # KANURI e
<dead_acute> <E> : "Ə" SCHWA # KANURI E
<dead_acute> <r> : "ɍ" U024D # KANURI r
<dead_acute> <R> : "Ɍ" U024C # KANURI R
<dead_acute> <n> : "₦" U20a6 # NAIRA SIGN
<dead_acute> <N> : "₦" U20a6 # NAIRA SIGN

I think an advantage of a dead-key approach is that you don’t have to hold anything down and the symbols on the keyboard still resemble what you’re typing. Of course, getting a dead-acute key to stick across sessions is the next issue to deal with, or perhaps I should set up something like super instead (that can be done with the KDE keyboard layout tool) so that super-b is ɓ.
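
One way to make the dead key stick — a sketch, assuming your display manager or session reads ~/.Xmodmap at login (many do, but not all), and remembering that keycode 94 is specific to my keyboard:

```shell
# Sketch: persist the dead-acute mapping across sessions.
# Assumption: the session reads ~/.Xmodmap at login; keycode 94 is machine-specific.
cat > ~/.Xmodmap <<'EOF'
keycode 94 = dead_acute
EOF
# xmodmap ~/.Xmodmap   # apply immediately in a running X session
```

If the session does not read the file automatically, an autostart entry that runs `xmodmap ~/.Xmodmap` does the same job.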

In any case, it goes to show that there are many ways to do it — whatever “it” is. In the end the KDE UserBase Compose Key tutorial was only vaguely helpful; the links at the bottom of that page were more useful.

[[ One might wonder why I’m blogging this kind of thing, but that’s because I promised some people here in Kano that I would look some things up for them, and the most effective way to get the information back to them — and back to their students — is to bung it on the internet, so that searching for Nigerian keyboard layouts for Hausa turns this information up (as well as useful things like the OLPC layout). ]]

Hausa and Kanuri Keyboards

Monday, April 26th, 2010

To type Hausa and Kanuri letters like ə, Ɓ or ƙ you will need to modify your keyboard layout. A keyboard layout maps the buttons on the keyboard to symbols which are displayed — you do not have to look at the letters painted on the keyboard and the computer doesn’t know what’s painted there anyway. Let’s look at three (and a half) ways of putting these letters into your document.

KCharSelect: the application ‘kcharselect’ helps you pick characters from the entire range of Unicode characters — if there’s a letter for something, then kcharselect can help you find it. On my Kubuntu 9.04 installation it is not installed by default, so I used the package manager to add it first (I used sudo apt-get install kcharselect, but the graphical manager can do it as well). If you like GNOME apps, it looks like ‘gucharmap’ is the one you want.

Once kcharselect is started it presents you with a page full of characters. You can click on any character to obtain information about it, or double click on it to add it to the little text box at the bottom of the window.

At the top of the window there is a search bar; type part of a description or character name and the displayed characters will be filtered. For instance, I typed in “Nigeria” and that slims the display down to the letters I’m interested in (no capital ƙ though?) as well as a Naira sign. Good!

After double-clicking the characters I am interested in into the text bar below, the “to clipboard” button puts them on the clipboard so I can paste them into the document I’m writing. Bit of a roundabout way to do it, and not fast when typing long documents, but it works.

Using xmodmap: Each key on the keyboard has a number. To find out which number, start the “xev” program from a terminal window (in KDE, start konsole; in GNOME, start gnome-terminal; then type xev). A small window with a box will appear. Move the mouse over the window. Notice that lots of output appears in the terminal from which you started xev.

Next, hit a key on your keyboard. You will see more output, which looks something like this:


KeyPress event, serial 30, synthetic NO, window 0x1600001,
root 0x13c, subw 0x1600002, time 21276913, (48,63), root:(53,895),
state 0x0, keycode 51 (keysym 0x1000259, schwa), same_screen YES,

The important number is the keycode (51 in this example). That is the number of the key you just pressed.

It is convenient to find four different keys on your system, one for ə, one for ƙ, one for ɓ and one for ɗ. On my keyboard, I used the keys [, ], \ and \ (I have two \ keys) which have keycodes 34, 35, 51 and 94.

The next step is knowing what the keyboard symbols are that you want for each key. The ə is called schwa, but the other Hausa letters don’t have a name in the standard X keysym table. I looked them up with kcharselect so I could use their Unicode names, and found the following:


Khook Ƙ 0198
khook ƙ 0199
Bhook Ɓ 0181
bhook ɓ 0253
Dhook Ɗ 018A
dhook ɗ 0257

The final step is to assign these keyboard symbols to the key codes, so that when you press the key the right symbol appears. I’m going to put ƙ on my [ key (key code 34, key symbols U0199 and U0198). To do this, use xmodmap. This utility can be used to remap your keyboard — and can also hopelessly mess the keyboard up until you log out — so be a little careful.


xmodmap -e "keycode 34 = U0199 U0198"

This maps keycode 34 to U0199 (ƙ) when pressed normally and to U0198 (Ƙ) when pressed together with shift — just like normal lowercase and uppercase letters. The next three commands assign the other keys:


xmodmap -e "keycode 35 = U0253 U0181"
xmodmap -e "keycode 51 = U0257 U018A"
xmodmap -e "keycode 94 = schwa SCHWA"

After these four commands, when I run my finger across the top row of letters on my keyboard, it produces qwertyuiopƙɓɗ; the ə is on the strange key sandwiched between enter and the arrow keys. But the result is that I can type Hausa and Kanuri in a sensible way.

xmodmap revisited: instead of picking separate keys for each Hausa letter, you could add them to the existing keys for b, k, d and e. For that, we’ll need to make sure there is a Mode_switch key set. I used the AltGr key on my keyboard, and found out that it had keycode 108. So I set it to be the mode switcher with


xmodmap -e "keycode 108 = Mode_switch"

Then I set up the b key (for instance) with the symbols for b, B, ɓ and Ɓ. These will appear when the key is pressed normally, when it is pressed with shift, pressed with AltGr and when pressed with shift and AltGr.


xmodmap -e "keysym b = b B U0253 U0181"
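
The b example extends to the other letters in the table above. Here is a sketch collecting all the expressions in one file, so they can be replayed after each login — the Mode_switch keycode is from my keyboard and is an assumption for yours (check with xev first):

```shell
# Sketch: all the AltGr mappings from this post in one replayable file.
# Assumption: keycode 108 is your AltGr key — verify with xev.
cat > hausa.xmodmap <<'EOF'
keycode 108 = Mode_switch
keysym b = b B U0253 U0181
keysym k = k K U0199 U0198
keysym d = d D U0257 U018A
keysym e = e E schwa SCHWA
EOF
# xmodmap hausa.xmodmap   # apply in a running X session
```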

Using keyboard layouts: I have tried the Nigerian keyboard layout, Hausa variant, but I couldn’t find the ƙ there at all, so I think that layout is broken in some way. I selected it from KDE’s systemsettings / Language & Region module under keyboard, but without success. I can type lots of strange characters, but not the ones I need for Hausa.

On Source Signing

Thursday, April 22nd, 2010

Brian Gough (a gentleman with whom I’ve stood in a room at one point, but I don’t think we’ve ever spoken) points out that KDE (that is, the release team for KDE) doesn’t GPG sign the source packages that it releases. Hm, interesting, as it’s true that the recent KDE SC 4.4 source tarballs don’t come with an MD5SUMS file or anything else. Going back in history, I find Attic/3.5.9/src which has an MD5SUMS file or the older Attic/3.5/src directory which has an MD5SUMS and an .asc file. Hunh, come to think of it I don’t even know how to check if the .asc file matches the MD5SUMS.
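
For the record, the .asc is presumably a detached signature over the MD5SUMS file, so the check is a two-step affair: verify the signature on MD5SUMS, then verify the tarballs against it. A sketch (the gpg step is commented out since it needs the release key in your keyring; the filename is a stand-in, and GNU md5sum is assumed):

```shell
# Sketch: verifying a signed checksum file.
# Step 1 (needs the signer's public key imported):
# gpg --verify MD5SUMS.asc MD5SUMS
#
# Step 2, demonstrated self-contained with a stand-in tarball:
printf 'not a real tarball\n' > kdelibs.tar.bz2   # stand-in file
md5sum kdelibs.tar.bz2 > MD5SUMS                  # what a release would ship
md5sum -c MD5SUMS                                 # reports OK per file, or fails
```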

So clearly the source-signing and integrity-checking has decreased over the years. Not sure why — are we (KDE) relying on the packagers to (re-)host the sources and sign them themselves? Have we realized that no-one except the core of the gpg web of trust is checking these things and that it’s not worth spending release team time on? I don’t know.

I do know that the last time I posted a KPilot source tarball — we’re talking five years ago here, at least — I did gpg sign it with the KPilot key. Goodness knows where that thing has gotten to.

And on a related note, I wandered off to look at the OpenSolaris packages to see if they are signed in a meaningful way. The manifests include hashes of all the files in a package, but I don’t see the manifest itself protected or signed in any way. That leads to the same issues that Brian points out again — but it’ll be an interesting day wrt. porcine aviation when Oracle starts using a well-connected gpg key, methinks.

As far as KDE goes: I’ll ask the sysadmins; it might just be an oversight.

[ade] — bridging planetkde and planet.fsfe since 2009.

Heat it Up

Thursday, April 22nd, 2010

With highs below 40 degrees it’s hard to know if it’s Minnesota in November or Kano this week — scale is the thing here. I do hope to be hotting it up in Kano for the second annual FOSS Nigeria conference this week. Akademy 2008 in Mechelen brought us Mustapha Abubakar, and he went back home and set up a conference for Free Software in the north of Nigeria. I’m pleased and honoured to be going back for a second round. As last year, talks will include Free Software background, some legal stuff, C++, Python, and whatever else strikes the fancy of the speakers (some of whom are Frederik and me). I think this year I’m better prepared for the destination — although I’m still looking for my Hausa hat; it’s gotta be in the house somewhere. Sinasiri, here I come! I’m also looking forward to seeing how the guys from Hutsoft are doing, which way IT in Kano is growing, meeting up with Mr. Tata again, and once again contemplating Free Software under a breadfruit tree.

New Beginnings

Wednesday, April 21st, 2010

In spring I tend to write sentimental bits, like this one. I’ve spent the past two weeks mostly in the vegetable garden, fostering new life — potatoes, carrots and beans — and tearing out unwanted plants — stinging nettles and crab grass. There must be a metaphor for software development there somewhere. It’s a matter of out with the old and in with the new.

This spring brings renewed determination to do something with the garden — both on my part and on the part of the other people with whom I keep it. I think there’s a realization that the discipline of maintaining the vegetable plots does us (nerd, philosopher, teachers of Dutch and mathematics) all good. It gets me away from the computer, anyway, and hoeing is one way of cleansing the spirit.

Related, I am happy to see that Henrik Sandklef has regained his footing: Restarting life sounds pretty drastic. Catching up on the todo list is sometimes good — and sometimes you have to throw out the list and start over. I’m glad FSCONS is still on Henrik’s list, because that was probably the event that made the biggest impression on me last year, for being eclectic and social and technical and drunken and excellent all at once.

So, stay balanced. Create Free Software.

VirtualBox on FreeBSD

Saturday, April 17th, 2010

For some of the things I would like to do with the EBN code-quality checking machine — things outside the immediate realm of quality checking — I need some VMs beyond what FreeBSD’s jails give me.

In particular, I need Solaris running on the machine as well as FreeBSD, so I looked into VirtualBox. I’ve used it on Solaris for various purposes (including running FreeBSD) so it seemed appropriate. There’s no binary version of VirtualBox for FreeBSD, but you can compile the Open Source Edition from ports, so that’s what I did. The results — missing USB passthrough and no RDP — are things I can live with, as it just means I’ll do most of my work on the VMs through ssh.

Installation (of VirtualBox OSE 3.1.6) is pretty uneventful; the reminder at the end of the installation to load the kernel driver is useful, and it just works.

For testing purposes, and to satisfy some curiosity on my part, I decided to install the Maemo development environment provided by Nokia (hey, maybe I’ll write my first GTK+ program like that). This turns out to be a VMDK (VMware) file, not an OVF file, so a little fiddling about was necessary. There are installer scripts available with lots of disclaimers for VBox 2, but none for version 3. So I tweaked and twiddled a little, and ended up with this installer script that calls VBoxManage a bunch of times to set up the machine. It is largely cribbed from Nokia’s script, adapted for new syntax in version 3.

One thing to note here is the invocation with --usb on. That could be combined with other calls to VBoxManage, but since the OSE has no USB passthrough, it would make them fail; for those using the non-OSE version, the call will succeed. Run the script with either --add or --remove from the directory containing the .vmdk file and it will set things up or tear them down.

In the resulting VM I haven’t gotten any further than starting esbox and quitting it again — but it shows that the VM works, that the apps work. I can ssh in for whatever command-line work I need to do.

The upshot — after all, the Maemo dev environment is just an experiment — is that a Solaris VM will happen real soon now. I was hoping for something jeos-like, but I hear that the people responsible for that have left Oracle now (this looks like a common theme: all the nifty Open Source but non-revenue projects are bailing out). So I’ll have to find another means of installing a fairly-minimal OpenSolaris version into the VM (which, thanks to Jignesh, is a matter of VBoxManage import). For my purposes no X is needed, so that saves us at least one (if not two) desktop environments in the disk image.

Credits where credit is due: to Jignesh Shah for the OSOL image and to Miwi for not only porting KDE4 to FreeBSD but also acting as source of information for VirtualBox.

FreeBSD and Radeon 4350

Friday, April 16th, 2010

The revival of my FreeBSD system meant that I was once again confronted with Xorg driver issues. The on-board GeForce 7050 isn’t recognized by the nv(4) driver, and the proprietary nvidia one is a no-go because I’m running FreeBSD amd64 (the binary driver only works on FreeBSD i386). So, time to shop around a little.

This is one of those cases where I wish the Internet would forget sometimes. Wading through the reports of video card compatibility from 2006 just isn’t useful. I had one (I thought) simple question: will an ATI Radeon 4350 work with Xorg 1.6.5 under FreeBSD 8-STABLE?

Perhaps it’s just my search-fu letting me down, but in the end I went and just bought one (as it’s the cheapest video card available in town across the river right now).

And the answer seems to be: yes, the Radeon 4350 is supported under FreeBSD 8-STABLE with Xorg 1.6.5_1,1 and the xf86-video-ati 6.12.4_1 driver. At least I can get twm up and running, exit it, and restart X multiple times. As usual, the longest part was remembering which ports to install to get a workable X locally (xorg-minimal + xorg-apps + dbus and hal seems to do the trick).
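
For reference, if autodetection should ever fail to pick the driver, a minimal Device section in xorg.conf forces it — a sketch only, since with this card autodetection worked for me; the Identifier is an arbitrary name of my choosing:

```
Section "Device"
    Identifier "Radeon4350"
    Driver     "radeon"      # provided by the xf86-video-ati port
EndSection
```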

Excerpt lines from Xorg.0.log:

(--) PCI:*(0:2:0:0) 1002:954f:1043:02a8 ATI Technologies Inc RV710 [Radeon HD 4350] rev 0, Mem @ 0xc0000000/268435456, 0xdfff0000/65536, I/O @ 0x0000e800/256, BIOS @ 0x????????/65536
(II) RADEON(0): [dri] Found DRI library version 1.3.0 and kernel module version 1.31.0
(II) RADEON(0): Detected total video RAM=524288K, accessible=262144K (PCI BAR=262144K)
(II) RADEON(0): Output VGA-0 connected
(II) RADEON(0): Output HDMI-0 disconnected
(II) RADEON(0): Output DVI-0 disconnected

So, aside from the oddness of having 512MB present and only half of that accessible, it looks OK. KDE4 from ports is still compiling, so I can’t comment on any 3D or compositing features.

One little bit of coolness I hadn’t expected is this, from dmesg:

hdac1: mem 0xdffec000-0xdffeffff irq 19 at device 0.1 on pci2

So presumably the audio via HDMI will work as well. No way to test, as I don’t have anything to plug into that.

I should note, too, that running twm and xterm and Qt4 VirtualBox, and then running Ubuntu 8.10 with GNOME inside that, looks… decidedly strange. Time-traveling desktops.

Compliance Engineering

Thursday, April 15th, 2010

Compliance engineering as a topic covers the activities that make it possible to ship a (consumer electronics) product that complies with the license(s) of the software contained in that product. That includes things like: figuring out what software actually is in the product (you’d be surprised how often vendors don’t even know); ensuring that you know what configurations and versions were chosen to put in the product; finding out what the licenses on those versions of the software are; finding out what the obligations under those licenses are; and finally actually doing what those obligations demand. Hence: comply.

Comply or explain (to one of the organizations that look into enforcing software license obligations, like the BSA or gpl-violations.org).

The FSFE has long had a brief article on how to report and fix violations and Armijn Hemel at Loohuis Consulting has written a fairly lengthy compliance engineering guide (also some articles on LWN).

One popular license for software that tends to end up in consumer electronics products is the GPL, either version 2 or version 3. It has some specific obligations that make compliance both important and sensitive: the clauses requiring the complete corresponding source code, which mean you need to know what the code is and how to provide it. It also means that for every binary release you need to provide the sources that can be used to create exactly that binary release. Not every company does that consistently.

Heck, I’ll name names: Conceptronic, a Dutch consumer electronics company, tries hard to comply. It delivers source code for the firmware shipped with the original release of devices, and it sometimes updates the available source tarballs. But not always. Dennis, the guy responsible, knows this is a problem. He tries, but time pressure and the upstream don’t always make it possible to do the right thing.

So there’s a company technically in the wrong where I’m willing to believe that they could be in the right if there was a little less effort involved, or a little better support in the compliance engineering process.

Enter, once more, Armijn and Shane, in their business guises of Loohuis Consulting and Opendawn Consulting. They work, shall we say, both sides of the fence: both in helping people improve their compliance processes and in tracking down violators later. For both sides, knowing which sources should have been supplied with a given binary release is of paramount importance.

So Shane and Armijn — supported by the Linux Foundation and Stichting NLnet — have produced a tool that helps in identifying what software has gone into a binary firmware image. It’s still in its infancy, but it can usefully detect Linux kernel versions, Busybox versions and configurations. That means it can be used — for products containing those pieces of software — to answer questions like “what sources and configuration files and scripts should be delivered with this product?” And that’s important because of the requirement in the GPL to provide (when necessary as defined by the other license obligations) the complete corresponding source code. Not just a bunch of tarballs and a “figure it out” notice; not just the upstream code, but whatever patches went into the device as well; and preferably not a whole bunch of extraneous cruft, either.

The tool makes it easier to do compliance checking from the outside, and easier and cheaper (as in Free beer) to do basic checking on the inside. It’s no replacement for a dedicated compliance engineer, but it does help a lot in answering questions about “what’s in here?” before firmware goes out the door.

I should add that the tool understands some common firmware packaging styles, so it will find and unpack and check things in a squashfs image. Upcoming features will add more filesystems, like concatenated squashfs filesystems, which will save a lot of time compared to running od -c, grepping for magic numbers, dd-ing things apart and then loopback mounting parts individually — that will become automatic.
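
The manual procedure that the tool automates looks roughly like this — a sketch with a synthetic blob standing in for a real firmware image (a real image would then need unsquashfs or a loopback mount on the carved-out part):

```shell
# Sketch: carving an embedded squashfs out of a firmware blob by hand.
# The blob here is synthetic; "hsqs" is the little-endian squashfs magic.
printf 'BOOTLOADERJUNK' > firmware.bin
printf 'hsqs' >> firmware.bin
printf 'FILESYSTEMDATA' >> firmware.bin
# Find the byte offset of the magic, then cut from there to end-of-file:
offset=$(grep -abo 'hsqs' firmware.bin | head -n1 | cut -d: -f1)
dd if=firmware.bin of=part.sqsh bs=1 skip="$offset" 2>/dev/null
head -c 4 part.sqsh    # the carved-out part now starts with the magic
```

With several concatenated filesystems you would repeat this per match of the magic, which is exactly the tedium the tool is meant to remove.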

You can find the tool (which is Free Software under the Apache license) at BinaryAnalysis.org. BA to the rescue. Man, I love it when a plan comes together.

Moving and Updating a FreeBSD Boot Disk

Wednesday, April 14th, 2010

Part of my FreeBSD update effort consists of tackling a peculiar problem. I haven’t found the problem described online anywhere else, so I’m going to detail the steps taken. But first, an effort to pin down the problem itself.

I have a (remote) server. It has no DVD drive. It should remain up as much as possible. It’s currently running 6.1-STABLE, which is horribly old. It does have a spare drive in the machine, available for whatever. So what I want to do is make that spare disk a bootable 8-STABLE disk remotely, set up the 8-STABLE system as much as possible (still remotely), then reboot into the new system in one swell foop.

Let me rephrase: how do I add a new boot disk to a FreeBSD system and create a full bootable system on it without using the CD installer?

Or with different emphasis: I want to upgrade FreeBSD to a new major release and want to keep a complete bootable old version around just in case.

Preliminaries: let’s assume you have a full /usr/src for the system you want to end up running (for me, that’s 8-STABLE, and it actually lives in /mnt/sys/src-8); also a full ports tree; also that the system is currently running and that the new disk is /dev/ad6. The desired end situation is that /dev/ad6 is bootable and contains the whole new system.

Note, though, that 6-STABLE can’t even compile 8-STABLE from a source checkout, because of libelf header file problems. You need to go through two stages here: go from 6- to 7-, then 7- to 8-. However, one might hope that the second step is less invasive (in my case, no need to update ports into 7-, so I can boot 7-STABLE just once to update to 8-STABLE).

Make backups now. Really. This is all about messing around with the fundaments of the system with the intention of not touching the installed system and keeping a safe “way back”, but it’s still hazardous. Make backups now. Make sure they’re on physically removable media and remove them. Make another copy. Take it to a remote location. Store it in a dragon-proof safe.

You may also want to take a look at this upgrade tutorial for some other preliminaries.

Setting up the disk: we have the (new, presumed empty) disk attached as /dev/ad6. We’ll slice and partition it so that it can be used. We will use the whole disk, but not in “dangerously dedicated” mode, so we’ll put a single slice on it (a “partition” in Linux and just about everybody else’s parlance).


fdisk -BI /dev/ad6 # Single slice on whole disk
fdisk -a1 /dev/ad6 # Make that slice active

Now that we’ve got a disk with a FreeBSD slice on it — and the rather simple FreeBSD boot manager in the MBR — we can set up partitions in the slice so that we can allocate filesystems. This requires thinking about the disk layout and sizing the filesystems (we wouldn’t have to do that if we used ZFS, but I’m sticking to ZFS on OpenSolaris only for now, even if it is no longer considered experimental in FreeBSD). Editing the label happens in $EDITOR, so I’ll show the end result as well.


bsdlabel -w /dev/ad6s1 # Create standard label
bsdlabel -e /dev/ad6s1 # Edit the label

When using the -e option to bsdlabel, you get a text editor to futz around with the partition layout, and you can screw it up pretty badly if you try. I used the setup below, which makes use of the modern size designators and auto-offset, so you can read it as “4G for this, then three of 8G, then all the rest”. Partition c is historically the whole disk.


# /dev/ad6s1:
8 partitions:
#          size   offset    fstype   [fsize bsize bps/cpg]
  a:         4G       16    unused
  b:         8G        *      swap
  c: 143363997        0     unused        0     0        # "raw" part, don't edit
  d:         8G        *    unused
  e:         8G        *    unused
  f:          *        *    unused

I’ve left all the fstypes as unused except for swap, since they will get updated by newfs(8) later and it saves typing. Not to mention that the parameters are all pretty uninteresting or not worth tuning at this point.
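As a quick sanity check on that label: the c partition’s size is given in sectors, and at the usual 512 bytes per sector the slice works out to roughly 68 GiB:

```shell
# sanity-check the label: the c partition spans the whole slice,
# and its size is given in 512-byte sectors
sectors=143363997
bytes=$((sectors * 512))
gib=$((bytes / 1024 / 1024 / 1024))
echo "slice is about ${gib} GiB"
```

If that number doesn’t roughly match the disk you think you attached, stop and double-check the device name before pointing fdisk or bsdlabel at it.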

In FreeBSD you can refer to a filesystem through its device (e.g. /dev/ad6s1a) or through its label — at least, if you have GEOM labels enabled in your kernel or loaded as a module. A label assigns a human-readable name to a disk partition or filesystem, so you can later refer to it by name. The name is independent of the physical location of the partition: label something “myroot” and you can refer to it by that name regardless of where the disk has gotten shuffled off to. (Generic labels written with glabel(8) show up under /dev/label; UFS labels written with newfs -L show up under /dev/ufs.) That’s really quite handy when you swap drives or cables around, or add another SATA controller that bumps the device names around. So we’ll label everything, including swap:


glabel label myswap /dev/ad6s1b
newfs -U -L myroot /dev/ad6s1a
newfs -U -L mytmp /dev/ad6s1d
newfs -U -L myvar /dev/ad6s1e
newfs -U -L myusr /dev/ad6s1f

Now that all the filesystems are created and named, we can move on to filling them up with an installed base system. Do note that we’re not done with making the disks bootable — but for that, we need the right bits from the still-to-be-populated filesystems.

Populating filesystems: the newly-created filesystems are available in the running system, so we’re going to mount them and then put the updated system in them. Let’s assume we have a mountpoint /mnt/newsys in the running system to begin with. So we will start with mounting them all to re-create the future filesystem hierarchy under that mountpoint. We’ll throw in devfs for good measure.


mount /dev/ufs/myroot /mnt/newsys
mkdir /mnt/newsys/tmp /mnt/newsys/var /mnt/newsys/usr /mnt/newsys/dev
mount /dev/ufs/mytmp /mnt/newsys/tmp
mount /dev/ufs/myvar /mnt/newsys/var
mount /dev/ufs/myusr /mnt/newsys/usr
mount -t devfs devfs /mnt/newsys/dev

If we were to chroot to the (still empty) /mnt/newsys we’d see the filesystem layout we want, including devices and everything. But we still need to populate them, so it’s time to build a new world. We assumed that /usr/src contains the sources for the system we want to end up with (e.g. it’s been csup’ped to RELENG_8). Plan another activity for an hour or so while the next steps complete (depending on the compile speed of your running machine).


cd /usr/src
make world DESTDIR=/mnt/newsys
make buildkernel installkernel DESTDIR=/mnt/newsys
make distribution DESTDIR=/mnt/newsys

You’ll note that some of those commands show up in the jail(8) manpage, which is where I cribbed them from. Because setting up a new bootable system is a lot like setting up a jail, just with disk, filesystem, kernel and boot blocks thrown in. Speaking of which, let’s update all the boot bits with the newly-generated files.


boot0cfg -B -b /mnt/newsys/boot/boot0 /dev/ad6
bsdlabel -B -b /mnt/newsys/boot/boot /dev/ad6s1

The last step — bsdlabel — might not work just like that, as there are issues with (re-)labeling mounted disks. You may have to copy the new boot file to the running system, umount the whole newsys tree, and only then write the label. I don’t remember exactly what I did. Regardless, make sure that the filesystems are mounted again afterwards. You may find this blog post which mentions bsdlabel helpful, although I can’t figure out gpart(8) myself and it doesn’t seem to work under a running 8-STABLE system either. Some futzing is required if the dreaded bsdlabel(8) “Class not found” error pops up.

But carrying on: once the filesystems are mounted again, you’ll need to add several files to the newly-populated system for it to boot and be useful. These are /boot/loader.conf (kernel modules), /etc/fstab (otherwise it won’t mount / and booting gets confusing; feel free to refer to the partitions by label if you add geom_label_load="YES" to loader.conf), /etc/resolv.conf and /etc/rc.conf. Generally you can copy them over from the running system.
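For illustration, here is a sketch of what the loader.conf line and the fstab could look like for the layout built above, using the label names chosen earlier (adjust to your own layout, of course):

```
# /boot/loader.conf on the new system: load the GEOM label module,
# so that the label-based device names exist when / is mounted
geom_label_load="YES"

# /etc/fstab, referring to the filesystems by label rather than by device:
# Device             Mountpoint   FStype   Options   Dump   Pass#
/dev/label/myswap    none         swap     sw        0      0
/dev/ufs/myroot      /            ufs      rw        1      1
/dev/ufs/mytmp       /tmp         ufs      rw        2      2
/dev/ufs/myvar       /var         ufs      rw        2      2
/dev/ufs/myusr       /usr         ufs      rw        2      2
```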

Testing: after all this, we have a filesystem filled with an updated system, created pretty much as if we were building a jail. Make sure devfs is mounted in there, and you can actually use it. Let’s assume you have an interface configured with IP 192.168.43.70 for the jail to run with. You could run a shell in there:


jail /mnt/newsys newsys 192.168.43.70 /bin/sh

You could even use the traditional /etc/rc instead of /bin/sh to bring the whole jail up, but this is fraught with peril. As in “Danger, Will Robinson!” This is particularly so when the jail and the running system do not share a kernel version. I experimented with a 7-STABLE system and an 8-STABLE jail, and noted the following (these are not bugs):

  • ls works, but ls -la fails with “unsupported system call”.
  • uname reports the kernel version of the running system, so pkg_add -r will use the running system, not the new system, as a source for packages. Fetch them manually.
  • Some shell constructs just hang. I tried to build the libtool22 port and it hung with /bin/sh spinning at 100%. Again, probably a syscall problem.

On the other hand, being able to check that the new system is functional enough to compile something is useful.

Deployment: the last step is to reboot the machine into the new system. If you have a console (yay ILOM! or otherwise yay physical access!) it’s easy to babysit the system. In my testing I just swapped disks around (yay hot-swap SATA bays in my desktop machine!) but on the remote system, I’m going to want it to boot the boot manager from the first disk — which is now the old system disk attached as /dev/ad0 — then chain to the new bootloader on the new system disk /dev/ad4 and then boot from there.

Frankly, that’s something I have not tested or tried yet. I expect that the following will work: boot0cfg -s 5 -o noupdate -t 40 /dev/ad0 ; boot0cfg -s 1 -o noupdate -t 40 /dev/ad4 to chain from one disk to the next with roughly two-second timeouts on both, but again: not tested. Yay ILOM.
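On that -t 40: boot0’s timeout is counted in BIOS timer ticks, at approximately 18.2 ticks per second, so 40 ticks is indeed about two seconds:

```shell
# boot0cfg -t counts BIOS timer ticks (approximately 18.2 per second);
# integer math in tenths of a second, since sh has no floating point
ticks=40
tenths=$((ticks * 100 / 182))
echo "timeout is about ${tenths} tenths of a second"
```

A value of 36 or 37 would be closer to exactly two seconds; 40 is just a rounder number.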