Linux.conf.au 2009 - Hobart Tasmania

29 January 2009, 03:03 UTC

I recently had the pleasure of attending linux.conf.au 2009. When I go to these conferences I like to write a trip report to share with my colleagues. And having written it, I thought I would share it with you too.

Enjoy.

linux.conf.au 2009 was held in late January in Hobart, Tasmania - the southernmost city in Australia. With a population of around 200,000 it is big enough to offer all the facilities conference attendees would require, without being so big that it takes an hour just to get out into the countryside. And the countryside is well worth seeing, as Tasmania has plenty of beautiful scenery and wilderness to enjoy.

The talks covered a wide range of the sort of technical and social commentary that you would expect from an Open Source based conference. From providing low-infrastructure small-town telephony through mesh networks (google:Mesh Potato), through the use of wikis in documentation and community building, to the difficulties of building a robot to play a clarinet, there were talks for all tastes.

One theme that emerged for me (possibly because I attended a number of talks at the "mobile devices" mini-conference) relates to issues with Embedded Linux.

David Woodhouse recycled his talk from other conferences about working with the embedded community and being an embedded maintainer. One interesting question that keeps popping up is "What exactly does 'embedded' mean?" It clearly means different things to different people, but it is instructive to look at the issues that were brought up during the conference, and particularly the mobile devices mini-conference. Two themes appeared for me.

The first is user interaction. Not all embedded devices have a display, but those that do tend to have a small, though possibly high-res, display. This is challenging not only for modern applications, but equally for modern widget sets. While it is always important to make effective use of available space, it is even more important, and much harder, to do so on a small screen.

Carsten Haitzler, aka Rasterman, is building a new widget kit with his 'e' infrastructure (apparently a weekend's work to get it mostly working) which is aimed at small devices: phones, PDAs, etc. One particularly important aspect is that he has sensible scaling built into the design - very important if you want to scale to a variety of screen sizes and resolutions. Some elements scale linearly. Some elements are fixed size. The toolkit allows (requires?) these differences to be detailed.
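
This is not Rasterman's actual API - I haven't studied his code - just a minimal sketch of the idea: each element declares how it responds to scaling, so the same layout description can work on a small low-res phone screen and on something larger and denser. All names here are hypothetical.

    /*
     * Hypothetical sketch: each element carries a scaling policy, so a
     * layout can be rendered sensibly at different screen sizes.
     */
    enum scale_policy {
            SCALE_LINEAR,   /* grows and shrinks with the screen, e.g. content areas */
            SCALE_FIXED     /* stays at a fixed pixel size, e.g. a 1-pixel separator */
    };

    struct element {
            enum scale_policy policy;
            int base_size;          /* size at the reference resolution */
    };

    static int scaled_size(const struct element *e, double scale_factor)
    {
            if (e->policy == SCALE_LINEAR)
                    return (int)(e->base_size * scale_factor + 0.5);
            return e->base_size;    /* SCALE_FIXED ignores the factor */
    }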

Ubuntu is creating a 'Mobile' distro aimed at these devices and is focussing on making sure apps work in the restricted screen space. They have a twin focus on "netbooks", which have a keyboard and probably no touch screen, and "Mobile Internet Devices" or MIDs which have no keyboard, just a touch screen of some sort.

This brings us to part 2 of user-interaction: input devices. We are seeing an explosion of input options with these small devices. Touch pads can be single or multi-touch, and can require, allow, or forbid the use of a stylus. Keyboards can be tiny or non-existent. Accelerometers can measure orientation of the device, or small taps on various sides. Writing applications that work on a particular device is relatively easy. Writing an application that will work well on any device will really require some generic infrastructure to make all of these available. We heard in one talk that the X input system is too complex already. Whether we want to use it to pull together all the different options, or use something else completely is an open question.


The other theme that appeared to me involves heterogeneous multi-processing. The main processor in these smaller devices is relatively low powered (by today's standards) so there is an incentive to provide co-processors that can perform certain tasks that run too slowly on the main processor. Working with these co-processors provides interesting challenges.

The most obvious example of this, which isn't restricted to small devices, is the graphics co-processor. Keith Packard gave a talk about how significant progress had recently been made in working effectively with graphics processors by introducing a memory management layer (GEM). Previously there was one big lock controlling who owns the graphics memory. When an application got access to the memory it had to assume anything that it had left there previously was gone. This of course makes sharing between e.g. 2D and 3D engines impossible. Now there is a generic memory manager that allows applications to request the memory they want and to hand references to that memory around. It also allows the kernel to page out memory that has not been used for a while so that some other application can have as much as it wants.
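
To make the handle-based idea concrete, here is a toy, in-process model of it - not the real DRM/GEM ioctl interface, just the shape of the abstraction as I understood it from Keith's talk. Buffers are named by small handles rather than raw pointers, the manager may evict the contents of any buffer that is not currently in use (here, to a temporary file standing in for system memory), and the contents come back transparently the next time someone maps the handle. All names are made up for illustration.

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_BUFS 64

    struct buf {
            size_t size;
            void  *resident;    /* contents in memory, or NULL while evicted */
            FILE  *backing;     /* where evicted contents are kept           */
            int    pinned;      /* mapped by a client right now              */
    };

    static struct buf bufs[MAX_BUFS];

    /* Create a buffer object; the handle, not a pointer, is what clients share. */
    int buf_create(size_t size)
    {
            for (int h = 0; h < MAX_BUFS; h++) {
                    if (bufs[h].size == 0) {
                            bufs[h].size = size;
                            bufs[h].resident = calloc(1, size);
                            bufs[h].backing = tmpfile();
                            return h;
                    }
            }
            return -1;
    }

    /* Map a handle for use, restoring the contents if they had been evicted. */
    void *buf_map(int h)
    {
            struct buf *b = &bufs[h];
            if (!b->resident) {
                    b->resident = malloc(b->size);
                    rewind(b->backing);
                    size_t got = fread(b->resident, 1, b->size, b->backing);
                    (void)got;  /* toy code: a real manager would handle I/O errors */
            }
            b->pinned = 1;
            return b->resident;
    }

    void buf_unmap(int h)
    {
            bufs[h].pinned = 0;
    }

    /* The manager can reclaim memory from any buffer that is not pinned. */
    void buf_evict_idle(void)
    {
            for (int h = 0; h < MAX_BUFS; h++) {
                    struct buf *b = &bufs[h];
                    if (b->size && b->resident && !b->pinned) {
                            rewind(b->backing);
                            size_t put = fwrite(b->resident, 1, b->size, b->backing);
                            (void)put;
                            free(b->resident);
                            b->resident = NULL;
                    }
            }
    }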

By creating an appropriate abstraction, the graphics co-processor becomes more useful.

This pattern applies elsewhere, though the abstraction is different.

Conrad Parker talked about his work with the hardware CODECs that are part of current SH-Mobile chips. These are multimedia CODECs which enable hardware decoding and post-processing (e.g. colour-space conversion) of compressed audio and video streams.

It is simple enough to drive these devices using the "UIO" infrastructure (with a tiny kernel driver and most of the work being done in user-space). However, just driving them isn't enough - you want to be able to share the services they provide among multiple applications. And you want to be able to plug the services together into a pipeline (when that makes sense) without the high-level code (e.g. gstreamer) needing to know too much about the low-level hardware.
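
As a rough illustration of how little is needed on the user-space side, here is a minimal sketch of a UIO client: the kernel driver only exports the device's register window and forwards interrupts, and the interesting work happens in code like this. The device path, map size and register meaning are invented for the example.

    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define MAP_SIZE 0x1000            /* size of the register window (assumed) */

    int main(void)
    {
            int fd = open("/dev/uio0", O_RDWR);     /* hypothetical UIO device */
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* Map N of a UIO device is selected by an mmap offset of N * page size. */
            void *map = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
            if (map == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            volatile uint32_t *regs = map;

            regs[0] = 1;                /* poke a (made up) 'start' register */

            /* read() on a UIO fd blocks until the next interrupt and returns
             * the interrupt count as a 32-bit value. */
            uint32_t count;
            if (read(fd, &count, sizeof(count)) == sizeof(count))
                    printf("interrupt %u: operation complete\n", count);

            munmap(map, MAP_SIZE);
            close(fd);
            return 0;
    }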

Conrad talked about the abstractions he was building to make these codecs easy to work with. It seems that putting together that infrastructure isn't too hard. But understanding the problem and choosing a good abstraction up front is the important bit.

This leads me to think about my areas of interest (Filesystems, RAID, storage) and where similar abstractions might need to be considered.

One area that is fairly closely related is hardware offload of the XOR function for RAID5 and the "Q-syndrome" calculation for RAID6. We have had this implemented in the MD driver in Linux for quite a while now. It makes use of the "async-crypto" API. While it isn't crypto, it is a similar sort of task: a piece of hardware bangs on a bunch of bytes in memory and reports when it is finished. I wonder if it would make sense to use that API for CODECs too?
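
For example, the RAID5 code computes parity through async_xor(), which submits the work to an offload engine when one is present and falls back to the software XOR code when not. This is just a sketch; the signature shown is roughly the one in kernels of that era, quoted from memory, so treat the details as approximate.

    #include <linux/async_tx.h>

    static void parity_done(void *stripe)
    {
            /* mark the stripe's parity as up to date, schedule the write, ... */
    }

    static void compute_parity(struct page *parity, struct page **data,
                               int ndata, size_t len, void *stripe)
    {
            struct dma_async_tx_descriptor *tx;

            /* XOR the data pages into the parity page, asynchronously if an
             * offload engine is available, then call parity_done(). */
            tx = async_xor(parity, data, 0, ndata, len,
                           ASYNC_TX_XOR_ZERO_DST,   /* zero the destination first */
                           NULL,                    /* no dependent transaction   */
                           parity_done, stripe);
            (void)tx;
    }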

The other hardware offload task that could benefit MD/RAID involves cards that contain both memory (possibly battery-backed) and drive controllers (e.g. SATA). With a card like this, when driving a RAID1 array, it makes sense to DMA the data for a write into the card's memory, then request it to write that data out to both drives. That way the data only goes over the PCI bus once, thus potentially improving bandwidth. If the card also has an XOR engine, then the same approach could be used for accelerating RAID5.

This sounds quite similar to the situation with Graphics controllers, where there is memory on the external device that wants to be managed by the kernel in a sensible way. MD/RAID might want to write to two separate cards which each have their own memory and disk drives. Similarly, thanks to the "Shatter" project, the X server might one day perform a single graphics operation to multiple video cards each with their own memory. It would be worth keeping an eye on how this progresses.

One other area I was thinking about, where some better abstractions or infrastructure might be a good thing, relates to removable devices. This thought didn't really come out of any talks at the conference, but rather from my experiences while I was there.

I had brought two notebook computers with me: my main workhorse, a Dell Latitude D820, which is relatively large (15-inch screen) and so awkward to carry around, and my Asus EeePC, which is easy to carry but not good for doing much work. I left the Dell in the hotel room and took the EeePC with me to the conference each day. Both have an SD card slot, so I kept the files that I was regularly working on on an SD card which moved from laptop to laptop.

The frustrating thing about this was that I really needed to unmount the device on each computer before pulling the card out - or at least before plugging it back in again. And that is such a pain. Unmounting a device requires closing all active file descriptors. For the most part that is trivial - I had finished working on the files, so everything was closed. The exception is current working directories. Any process (e.g. a shell or a terminal window) that happened to have a current directory on the SD card would stop it from unmounting. If ever there was a case for using textual path names rather than file pointers for "current working directory", this is it. But that is a change that it is too late to make for Posix.
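
To see the problem in one small example (with a hypothetical mount point): a process's own working directory is enough to make umount(2) fail with EBUSY, even though no files are open.

    #include <stdio.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/mount.h>

    int main(void)
    {
            if (chdir("/media/sdcard") < 0) {       /* our cwd now pins the filesystem */
                    perror("chdir");
                    return 1;
            }
            if (umount("/media/sdcard") < 0 && errno == EBUSY)
                    printf("umount failed: device is busy (our own cwd is the culprit)\n");

            /* Change directory away, and the unmount can succeed. */
            if (chdir("/") == 0 && umount("/media/sdcard") == 0)
                    printf("unmounted cleanly\n");
            return 0;
    }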

It is very possible these days for devices to disappear while they are being used, and it would be good if Linux could cope well with this.

Part of the issue is simply telling users of the device that it has gone away. I often have people telling me that they pulled a device out of their md/raid array and md didn't notice. This is because it cannot notice until it tries to access the device and fails. As a simple user-friendliness issue it would be nice if md was told immediately that a device was gone. It already has code to recover if the device gets re-added, so this would make a more complete solution.

I think user-space does get told when a device is removed. Maybe I just need to plug that into md somehow.

But of more interest to me is filesystems. A filesystem should be able to cope if the device goes away and then comes back. Obviously any IO requests during this time would have to either block or fail (or maybe be served by a cache), but when the device comes back, new accesses should be allowed to proceed.

This would essentially require the filesystem to discard or invalidate anything that was cached, and to read all metadata afresh when the device re-appears. If there are open file handles, it could check if they are still valid (e.g. same mtime, size, etc) and if they are, just continue as if nothing had happened, possibly even flushing any unwritten data.
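
The check itself could be quite simple. Entirely hypothetically, it might look something like this, with the filesystem remembering a few attributes for each open handle and comparing them against what it reads back once the device returns:

    /*
     * Purely hypothetical sketch: attributes a filesystem might remember
     * for each open handle, and the comparison it could make after the
     * device re-appears, before deciding to let the handle live on.
     */
    #include <time.h>

    struct remembered_attrs {
            unsigned long       ino;
            unsigned long long  size;
            struct timespec     mtime;
    };

    static int handle_still_valid(const struct remembered_attrs *old,
                                  const struct remembered_attrs *fresh)
    {
            return old->ino  == fresh->ino &&
                   old->size == fresh->size &&
                   old->mtime.tv_sec  == fresh->mtime.tv_sec &&
                   old->mtime.tv_nsec == fresh->mtime.tv_nsec;
    }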

Naturally this would require a significant change to the filesystem, not to mention lots of careful design and planning first. The device might come back with a different name from the original, so we would need some way to tell a filesystem to use a different device.

There is currently a strong tie between a filesystem and the device that it is mounted from. Maybe the first step should be to break this tie. There are filesystems today that can make use of multiple devices, and requiring them to do that through an abstraction layer like DM doesn't seem to make sense.

So there is plenty of work here to do if someone is interested. And has the time. I wonder if that will be me.

But back to the original question of 'What exactly does "embedded" mean?'. I think my answer would be "anything that is significantly different to mainstream." It might not be an entirely satisfactory answer, and doesn't seem to relate at all to the term "embedded", but it does seem to explain all the issues that get brought up in the "embedded" space. Of course, all the devices that are embedded today will be mainstream in a few years. And then we will find a new set of problems to solve. But that will be a topic for a different conference.





