Linux Container in Fatdog


As noted in this and this post, I have been experimenting with Linux Containers (LXC or lxc for short) lately.

What is LXC? In short, it is a collection of technologies that allow processes to be compartmentalised. A process in one compartment (or "container", in LXC-speak) cannot interact with processes in other compartments. Conceptually, it is the Linux equivalent of FreeBSD's jail.
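The compartments are built from kernel namespaces (PID, mount, network, UTS, user, and so on). To get a rough feel for the idea, here is a sketch using util-linux's unshare tool rather than LXC itself, assuming your kernel and util-linux are new enough to support user namespaces:

```shell
# -U creates a new user namespace; -r maps our UID to root inside it.
# The process appears to be root (UID 0) inside its compartment, yet
# gains no real privileges on the host - this is the "user namespaces"
# feature discussed below.
unshare -U -r sh -c 'id -u'
```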

What is it good for? It is good for securing server processes: when such a process is compromised, an attacker can only affect things inside its container (presumably each server process is allocated its own container to limit this kind of damage). This has been the basis for so-called operating-system-level virtualisation.

In this respect, LXC is late to the game. FreeBSD has had its jail for years, Solaris has 'Solaris Containers', and so do many other operating systems (except Windows, probably). In Linux alone, it was preceded by Linux-VServer and Virtuozzo/OpenVZ by years, and they still have features that LXC currently lacks. LXC, however, has the benefit of being available in the vanilla kernel - no patches needed, just compile-time configuration.

More information about LXC: here (LXC manpage), here (IBM developerworks), here (Ubuntu community wiki) and many others you can find using your favorite search engine.

LXC support in Fatdog
In the next version of Fatdog, the kernel will support LXC.

Many years in the making, the final component of LXC ("user namespaces") was merged into the kernel in Linux 3.8. Unfortunately, this component requires extensive changes to other kernel components, many of which were not ready for the change, making the 3.8 kernel less than ideal for running LXC. The situation is better in Linux 3.9 - only the XFS filesystem still does not work with user namespaces. I was hoping that 3.10 would fix this, but as of today, at 3.10-rc4, I don't see it happening; so if we ship Fatdog with one of these kernels, we will have to choose between enabling XFS filesystem support and enabling user namespaces. I hope we won't have to make that choice by the time we release. LXC can still run without the user namespaces feature, although it will be less secure.
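You can check whether your running kernel was built with these features; a sketch (reading /proc/config.gz only works if the kernel was built with CONFIG_IKCONFIG_PROC):

```shell
# Namespace-related options in the running kernel's config, if exposed:
if [ -r /proc/config.gz ]; then
    zcat /proc/config.gz | grep -E 'CONFIG_(USER_NS|PID_NS|NET_NS|UTS_NS)='
fi
# The namespaces the current process belongs to are listed here;
# a "user" entry means user namespaces are usable:
ls /proc/self/ns
```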

In addition to the kernel, one needs to install the LXC userspace utilities from the Package Manager (this model is similar to User Mode Linux support in Fatdog - the scripts are there, but you need to install the userspace utilities and the UML kernel first).

Now, when creating the Fatdog convenience interface to LXC, I have two choices:
a) follow the sandbox model (zero configuration)
b) follow the UML model (persistent configuration)

I have decided to follow the sandbox model (zero configuration). The scripts are called sandbox-lxc (and rw-sandbox-lxc for the one with persistent storage). The reasons I went with this model are:
a) it is slightly easier to use
b) if one needs to run an LXC container with persistent configuration, the LXC userspace tools themselves already support this model (see lxc-create and the format of the lxc-template config file), so there is no need to duplicate this excellent functionality.
c) lastly, with the sandbox model you can easily copy/transfer/back up the container root filesystem from the "host" (e.g. a tool like will work; if you need to copy files into the sandbox, just copy them over to the sandbox's fakeroot).
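For comparison, the persistent-configuration route (b) would involve a config file along these lines - a hypothetical example in the lxc 0.9-era key format (names and paths are illustrative; see the lxc.conf man page for the authoritative reference):

```
# /var/lib/lxc/mycontainer/config - hypothetical example
lxc.utsname = mycontainer
lxc.rootfs = /var/lib/lxc/mycontainer/rootfs
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
```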

How do you use these scripts? They are identical to the sandbox scripts: instead of sandbox you run sandbox-lxc, and instead of rw-sandbox you run rw-sandbox-lxc.

When run, in addition to asking which layers you want to use, it will also ask whether you want to be able to launch a desktop. If you say yes, the host's /tmp will be mapped inside the container - which means processes inside the sandbox can change things on the host system and connect to host processes - but it is a requirement if you want to run Xnest from inside the container.

These scripts by default run without user namespaces. You can tell them to run with user namespaces by putting IDMAP=yes in the environment before running them, like this: IDMAP=yes sandbox-lxc. Support for kernel user namespaces is auto-detected; if the kernel doesn't support them, IDMAP has no effect.

When run without user namespaces, these capabilities are dropped: sys_module (load kernel module), sys_time (change system time), and syslog (change kernel logging parameters).

When run with user namespaces, all UIDs are shifted by 10000 (ie, the container's root is actually a user with UID 10000 on the host system).

In either case, the container cannot create block device nodes (only character devices) and will have its own hostname and network interface which you need to configure manually if you want to connect to the network from within the container.
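Configuring the container's interface is done with the usual tools from inside it; a sketch, where the interface name and addresses are illustrative and depend on how the host side is set up:

```shell
# Inside the container - its own UTS and network namespaces mean
# these commands affect only the container, never the host:
hostname mycontainer
ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
route add default gw 192.168.1.1
```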

These scripts and the LXC-enabled kernel will be in the next release of Fatdog (the LXC userspace tools are already in the Package Manager, but they are useless without an LXC-enabled kernel). Beware that they are still considered experimental, as is LXC itself.

Posted on 8 Jun 2013, 3:04 - Categories: Fatdog64 Linux

Fun with Fatdog

One of the members of the Puppy Linux forum, Ted Dog, asked whether Fatdog could be modified to run with an uncompressed filesystem in RAM, provided one has enough RAM to do so. His idea: since uncompressed Fatdog stands at around 700MB, on machines with more than 4GB it should be possible to have one giant initrd containing the uncompressed filesystem, fully loaded into RAM, and run Fatdog from there. Since no decompression is involved, other than possibly once at boot-up (if the initrd is compressed), it should run like the wind. The original post is here.

There is no practical purpose of doing this (that I can see), but it sounds fun :)

There are two possible ways of doing it:
1. Expand the content of the basesfs (fd64-620.sfs currently) and merge it with initrd.
2. Expand the content of the basesfs and store it into uncompressed ext2 filesystem.
In either case, it should be possible to include modifications of the original Fatdog stuff like a "remaster".

Fatdog already supports booting without any basesfs. It also supports basesfs with filesystems other than squashfs. So all that is needed is to prep the initrd.

For 1, what is needed is to take a snapshot of the running system, make minimal modifications so that it is bootable, and re-create the initrd from there.
For 2, what is needed is to create a basesfs that contains uncompressed ext2 filesystem instead of squashfs, and insert this basesfs into an existing initrd.

As it turns out, Fatdog already has the tool to help do both: the sandbox. One runs sandbox to get a "snapshot" of the running system, inside which one can play without affecting the real system. When one starts the sandbox, one can choose which layers to include - including the modifications already in the savefile. As it happens, the sandbox root filesystem is also visible from outside the sandbox, at "/mnt/fakeroot" for 621 and "/mnt/sb/fakeroot" for 622 onwards. By choosing the correct layers, we can get the needed view for either 1 or 2.

From here it is straightforward.
For 1, one just needs to create the initrd from the sandbox root, after undoing some changes made by the sandbox script, restoring deleted links required for booting, and making a minor modification to the initrd init script (/sbin/system-init).
For 2, it is even simpler - just create an empty ext2 image file and copy the sandbox root over to that image file, again after undoing some changes made by the sandbox script. Then open up an existing initrd and overwrite the basesfs (fd64-620.sfs) there with the image file you've just created; then rebuild the initrd.
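The steps for method 2 can be sketched like this (the size and paths are illustrative; the sandbox root is visible at /mnt/fakeroot on 621 and /mnt/sb/fakeroot from 622 onwards):

```shell
# Create and format an empty ext2 image (run as root).
dd if=/dev/zero of=basesfs.ext2 bs=1M count=800
mke2fs -F basesfs.ext2
# Copy the sandbox root into it.
mkdir -p /mnt/img
mount -o loop basesfs.ext2 /mnt/img
cp -a /mnt/sb/fakeroot/. /mnt/img/
umount /mnt/img
# Then open the existing initrd and overwrite fd64-620.sfs with
# basesfs.ext2 (keeping the original name) before rebuilding.
```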

Fatdog's initrd currently stands at around 210MB. This "giant" initrd, when gzipped, is about 260MB. It is bigger because it is compressed with gzip instead of xz; if one chose xz compression it would be more or less the same size. But this size difference is not important: either way, the initrd is decompressed before Fatdog starts, and the memory consumed is the decompressed size of the initrd, which is around 700MB. But then, as Ted Dog said, there will be no decompression overhead for anything, so it should run like the wind :)

I have written the details and the exact steps on how to do both 1) and 2) in the forum, so I will not repeat it here.
Method 1
Method 2

Posted on 8 Jun 2013, 1:40 - Categories: Fatdog64

Fatdog64 sandbox as application container

Fatdog64 has a built-in sandbox for experimenting. It is an old feature, existing since as early as Fatdog64 520. It has been a useful feature, I use it almost always when it comes to building and packaging applications. But it has one main weakness - there can only be one sandbox running at one time. In other words, you cannot run multiple sandboxes at the same time.

That restriction was largely for stability reasons (a long time ago, the sandbox didn't have proper termination code - you could accidentally close the sandbox shell and the sandbox would continue to run, with no way to shut it down properly). That has been addressed for a while now, so it is no longer an issue.

But what purpose would it serve to be able to run multiple sandboxes? After all, sandbox was originally invented to "test foreign applications" by installing them in a throwaway memory filesystem (tmpfs). What good use could ever come out of running it multiple times?

The key answer is rw-sandbox. Sandbox actually consists of two applications - sandbox proper and a counterpart called rw-sandbox. rw-sandbox is an under-utilised, under-appreciated application that works and behaves just like the regular sandbox, except that it stores all the changes (= "state", if you will) in a persistent image file instead of a throwaway memory layer. Thus it is possible to start an rw-sandbox session, do something, leave, then re-start the session later and find all the changes still there. The difference between sandbox and rw-sandbox is just like running Fatdog without a savefile (sandbox) and with one (rw-sandbox) - in fact, an rw-sandbox image has the identical format to Fatdog's savefile.

As it turns out, rw-sandbox is ideal for isolating applications. One can install and run an application in an rw-sandbox and keep its state - the binaries, the configuration files, etc. - from one invocation to another, just as if the application were installed on the main system itself. In fact, with rw-sandbox, I could install the same application in multiple rw-sandboxes, each with a different configuration. Pop! Instant "multiple profiles", even for applications that don't support them natively. I could also install different versions of an application without them conflicting with each other.

This also happens to be another way to deliver packages - instead of the regular PET package or SFS package, one can deliver a compressed rw-sandbox image. It has the benefit of containing both the application binaries *and* the configuration in one place - the rw-sandbox. (Using PET, both the binaries and configs are stored in the savefile. Using SFS, the binaries are in the SFS but the configs are in the savefile.)
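Delivering such an image is then just a matter of compressing it; a sketch, with an illustrative filename:

```shell
# Publisher: compress the rw-sandbox image for distribution.
gzip -9 myapp.img          # produces myapp.img.gz
# Recipient: decompress, then use it as a normal rw-sandbox image.
gunzip myapp.img.gz
```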

In other words, one can use rw-sandbox as an "application container". What makes it even more useful is that since the application (and all of its configuration) resides outside the savefile and outside the OS, one can upgrade the system (within the limits of compatibility), even wiping out and re-creating the savefile from scratch if need be, and the application and its data will still be faithfully there for you to use.

And how about backing up? As easy as backing up a savefile: copy the rw-sandbox image, and you're good to go.

Of course, this is only useful if we can run multiple sandboxes at the same time - otherwise it is too restrictive: if I run LibreOffice from one sandbox, I can't run Sweet Home 3D stored in a different sandbox.

That was motivation enough to modify the sandbox, and so I did. Multiple sandboxes can now run at the same time, and this change will be in the next release of Fatdog.

Note: The sandbox is currently implemented as a chroot jail. As such, it only isolates the filesystem, nothing else. Applications within sandboxes share the network, process IDs, etc. with the rest of the system, and thus malicious applications can easily kill each other, even those on the "host" or in another sandbox. This method is therefore only suitable for "well-behaved" applications. Obviously it is also unsuitable for "network" or "server" applications (those that listen on sockets), unless you can configure which socket they listen/connect to so they don't stomp on each other's network sockets.
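This sharing is easy to observe: a chroot changes only the filesystem view, so every process still carries the host's namespace identifiers (a quick check on kernels new enough to expose /proc/&lt;pid&gt;/ns):

```shell
# Parent and child report the same PID-namespace link - a chroot
# jail would too, since it creates no new namespaces:
readlink /proc/self/ns/pid
sh -c 'readlink /proc/self/ns/pid'   # child prints the same identifier
```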

If you want complete isolation then consider using other methods, such as User Mode Linux (UML) discussed here (also included in Fatdog64, see this FAQ entry) or even KVM/Qemu (available from the Package Manager). In the future, I may add Linux Containers too.

Posted on 23 May 2013, 3:36 - Categories: Fatdog64 Linux

Fatdog64 621 is released

Belated announcement (my DSL was down), but anyway ...

Forum Announcement:
Release Notes:
Get it from:

Fatdog64 621 fixes the most annoying bug from 620 - the "eject problem", mostly an issue when you use a desktop machine with an optical drive that can be commanded to close its tray. It also enables panning to work properly.

It also has new on-shutdown filesystem cleanup code, which in theory should work as well as, if not better than, before.

Other than that it is identical to 620 release - same kernel, same browser, same devx.

Posted on 15 May 2013, 17:22 - Categories: Fatdog64

Fatdog64 620 Final is released

I'm running out of time, so I will keep this short and sweet. Fatdog 620 Final has been released with a lot of improvements. Check it out!

Release Notes:
Get it from:

Posted on 17 Apr 2013, 22:46 - Categories: Fatdog64

User Mode Linux (UML)

User Mode Linux (UML) is one of the earliest virtualisation solutions for Linux (the other one is lguest, which is only available for 32-bit kernel).

It has been largely superseded by KVM and is mostly of academic interest these days, but it still has some life left in it. For example, it can be used to provide relatively easy, safe, and secure sandboxing of untrusted applications.

Since 620beta1, Fatdog comes with UML support; all you need to do is install the "linux_uml" and "uml_utilities" packages from the repositories and you're good to go. Open a terminal, type "" and off you go: the script will automatically create a "throwaway" VM with 256MB of memory. Shut the VM down from its desktop, or just close the Xephyr window. It is "throwaway" in the sense that as soon as you stop the session, everything is gone - no trace of it is ever saved on your machine, at all.

If you want to be able to save your UML session, create a blank directory somewhere to hold your "savefile". Then start UML like this: " /path/to/your/session/directory". It is important to note that this directory must already exist; otherwise it is ignored and you'll end up with a throwaway session again.
Once UML has run for the first time in this way, it will create a "config" file in your session directory, which you can edit later. A 128MB savefile is also automatically created for you; if you need anything bigger, just delete that savefile and edit the config to specify a larger size. The options in the config file should be self-explanatory, but in case they are not, you can look at /usr/lib64/uml/ --- the top of that file contains the default settings of these config options as well as their explanations.


1. Q: Why UML, why not KVM or even VirtualBox?
A: UML uses your existing Fatdog setup - no need for a separate ISO, etc. UML consumes fewer resources (memory and CPU) than a full virtualisation system like KVM or VirtualBox, and I believe it is much easier to set up.

2. Q: Fatdog already has sandbox, so what is this UML for?
A: Fatdog's sandbox isn't meant for security. It was originally devised for testing foreign packages, so while it can be used for other purposes, there are information leakages to/from the sandbox - and this is intentional, otherwise you'd face problems when testing. For example, the sandbox and the host share the same "/tmp" directory. Fatdog's UML, however, doesn't suffer from this problem - host and UML guest are completely separate.
E.g. if you choose "shutdown" from the sandbox's desktop, you will shut down your entire system (not just the sandbox). Not so with UML: choosing "shutdown" from UML's desktop will just shut down the UML VM.

3. Q: What can the UML VM do?
A: Almost everything the host system can do. Browse internet, run network applications (servers), etc. Probably not good enough to watch videos, though - there is no video acceleration in UML.

4. Q: I understand that the UML is isolated, but is there a way to pass data to/from the UML? Otherwise how can I ever do anything worthwhile with it?
A: Of course there is. The point is, this data sharing is completely under your control.

A1. You can set up a "savefile", which can be opened while the VM is offline; from there you can copy data to/from it.

A2. You can copy data using the network (e.g. using samba rox app / yassm, netcat, ssh, rsync, etc).

A3. You can create additional disk images which can be mounted both by the host and by the UML guest (using the ubdXXX parameter - see below). You can open these disk images from the host when they are not mounted in the UML VM; when done, you can mount them and access the data from inside the VM.
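Creating such an extra image can be sketched as follows (the size and names are illustrative; formatting and loop-mounting need root):

```shell
# Make and format a small shared disk image.
dd if=/dev/zero of=extra.img bs=1M count=64
mke2fs -F extra.img
# From the host, while the VM is offline:
mkdir -p /mnt/data
mount -o loop extra.img /mnt/data   # copy files in or out, then:
umount /mnt/data
# Pass it to UML as e.g. ubd3=extra.img; inside the guest it will
# show up as /dev/ubdd, ready to be mounted there.
```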

5. Q: UML only loads Fatdog's base sfs. Is there a way to load additional SFS too?
A: Yes. Specify additional SFS like this (assuming you want to use a throwaway session): "" ubd1rc=/path/to/your/sfs ubd2rc=/path/to/your/sfs

etc. "ubd1rc" will show up as "/dev/ubdb", "ubd2rc" will show up as "/dev/ubdc", and so on.
To load these SFS files once you're inside the UML VM, open a terminal and type: /dev/ubdb /dev/ubdc

and so on. If you're using persistence then replace "" with path to your session directory.

6. Q: UML desktop doesn't show any drive icons.
A: Yes, on purpose. Drive icons need the "udev daemon", and daemons take up additional resources, so it is not run unless absolutely necessary. If you need drive icons, type this in a terminal inside the UML VM:
udevd --daemon; udevadm trigger --action=add 

7. Q: Does running UML require root access?
A: No. You can run UML even if you're not root. Inside UML, you're still root :)

8. Q: The UML VM starts with a desktop. I don't need no stinking desktop, give me the console!
A: If you use a saved session, edit your config file and empty out the START_CMD option. If you use a throwaway session, do it like this: "" START_CMD=
You will still get an empty Xephyr window (so that you can type "xwin" later and start a desktop if you wish); if you don't need it, just close that Xephyr window.

9. Q: OK, now that I can use console only stuff, I need more consoles. How do I get that?
A: Many ways. The easiest way would be to run:
getty 38400 tty1 &
Replace "tty1" with tty2, tty3, etc., for as many as you need. A word of warning: do not close the xterm windows that are opened automatically, otherwise your UML VM may hang. They will be closed automatically when you close the UML VM.

10. Q: UML is slow!
A: Unlike other virtualisation solutions, UML does not create demand on your CPU - it simply uses whatever idle power is given to it. The default CPU frequency scaling governor in Fatdog is "ondemand", which means the CPU speed is raised only when the governor detects a demand/need for it. UML does not create this demand (even though in reality it is running slowly), so the governor does not think it necessary to make the CPU go faster. To fix this, temporarily change the CPU frequency scaling governor from the default "ondemand" to "performance" (which runs the CPU at full speed) while running UML. There is a commented script showing how to do so in /etc/rc.d/rc.local.
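The switch itself is just a write to sysfs; a sketch (needs root, and assumes the cpufreq driver exposes these files):

```shell
# Run all CPUs at full speed while UML is running...
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$g"
done
# ...and restore the Fatdog default afterwards.
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo ondemand > "$g"
done
```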

11. Q: Why is this information not in Fatdog's FAQ?
A: It will be, sooner or later. Be patient

12: Q: I need further information!
A1: Visit UML's web page here:

A2: See what kernel options are available by opening terminal and running this (from host!):
/usr/lib64/uml/vmlinux --help

A3: See what other script options and how Fatdog's UML works by looking at the scripts in /usr/lib64/uml

Posted on 17 Apr 2013, 6:47 - Categories: Fatdog64

Fatdog turns 5 today

It has been a long journey. Nobody took count of time, and we went where the wind blew. The road was long, the days were filled with weariness and sweats and joy. We saw happy faces, we met frowning grimaces. The sails kept falling down, the wheels kept breaking down; but we fixed them and we toiled and we went along.

It has been five years.

Nobody knows what happens next. Prediction is a difficult business, especially when it is about the future. But we know one thing for sure - the desire is still strong, the hands are still steady, and Fatdog will continue to plod along.

Read more of Fatdog's history here.

Posted on 12 Apr 2013, 5:42 - Categories: Fatdog64

Fatdog64 620beta3 is released

I'm running out of time, so I will keep this short and sweet. Fatdog 620beta3 has been released with a lot of improvements. Check it out!

Release Notes:

Posted on 9 Apr 2013, 15:57 - Categories: Fatdog64

Updates to the Radeon saga

This is the continuation of my previous post on the radeon problems. Earlier we thought that newer kernels would fix it. They didn't. We tried upgrading Xorg to the latest version. It didn't fix it. We tried disabling all the advanced KMS acceleration features. It didn't fix it.

Our latest attempt, upgrading to kernel 3.8.5 also doesn't fix it.

But there is light at the end of the tunnel. As mentioned before, JustGreg found that the problem didn't happen if one used "coldplug", or even "loadmodules". I found out yesterday that the problem also doesn't happen if I blacklist radeon, boot to console, wait a little until the dust settles, and then load the module manually using modprobe (followed by starting the desktop, of course).

Previously my guess was that the radeon module needs to be loaded early, and indeed loading it early with "loadmodules" or "coldplug" makes it work. But yesterday's results told me that loading it late also works. What gives?

I noticed this: "coldplug" and "udev" work differently. "Coldplug" loads modules serially (ie, one at a time), while "udev" loads modules in parallel (many at a time) - in fact, that's one of udev's benefits: parallel loading makes the boot process a lot faster.

So my guess is this: somehow loading modules in parallel makes things brittle for the radeon module - it may work one time but not another. There may be race conditions, or perhaps locking/synchronisation issues when multiple modules are loaded at the same time, that only happen in recent kernels, under very specific conditions. I'm not sure what those conditions are, but it definitely happens with radeon. It may happen with other modules too (perhaps nvidia?), but so far we haven't noticed anything else.

Assuming the hypothesis above is correct, the workaround is simple: load the radeon module by itself - don't load it under udev. That simple idea seems to work, so that is the fix we're going to apply.
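Done by hand, the workaround amounts to something like this sketch (the modprobe.d file name is illustrative):

```shell
# Keep udev from auto-loading radeon during the parallel module storm:
echo "blacklist radeon" >> /etc/modprobe.d/radeon.conf
# After booting to the console, once the dust settles, load it alone:
modprobe radeon
xwin          # then start the desktop
```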

Of course, the best outcome would be to fix the real underlying issue in the kernel or the radeon module itself, but neither kirk nor I are kernel developers, so we leave that to the experts

Posted on 6 Apr 2013, 5:52 - Categories: Fatdog64

Radeon problems

There are problems brewing for Radeon card owners using the open-source radeon driver on the latest Fatdog beta (620beta2). The problem: it simply doesn't work - there is image corruption and the Xorg server crashes, sometimes to the point of locking up the computer and requiring a hard power-cycle. Not good.

Upon further investigation, it seems that the problem is caused by a GPU lockup when using that driver.

As far as I'm aware, this happens only on UEFI machines, and only on relatively new ones. My test machine can boot in either UEFI or BIOS mode (CSM = Compatibility Support Module); the problem only happens when I boot using UEFI - same machine, same version of Fatdog64, but a different outcome.

Some possible workarounds (from forum member JustGreg - thank you!):
- Boot using BIOS mode
- Use the "coldplug" boot parameter
- Use "loadmodules=radeon" boot parameter
- Use Catalyst proprietary driver.

The 2nd and 3rd workarounds force the radeon module to be loaded earlier than other modules. For some unknown reason, this prevents the problem from happening.

Anyway, kirk has just compiled a newer kernel - 3.8.3 - and in our (limited) experiments it seems that the problem is now gone (although 3.8.3 brings a problem of its own). So there is hope that the workarounds will no longer be necessary in the next release of Fatdog.

Posted on 19 Mar 2013, 20:26 - Categories: Fatdog64
