QEMU virtual machine for kernel testing
Abstract
Testing a kernel on a bare-metal machine is time-consuming: boot failures, screwing up the installed distro, slow restart cycles, and so on. QEMU offers a very convenient way to boot up a freshly compiled kernel: you can compile the kernel on your host machine, then simply sideload it into a virtual machine running your favorite GNU/Linux distribution. In the following, I will show the commands needed to build an easy-to-use virtual test environment with QEMU and an Ubuntu Server cloud image.
Preparations on the host machine
My host machine runs an Ubuntu 18.04 LTS GNU/Linux distribution with a hwe-edge kernel, which is version 5.0 currently. For kernel compilation, we have to install some additional packages:
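Something like this should cover it (the exact list depends on the kernel config, but these are the usual suspects on Ubuntu 18.04):

```
sudo apt install build-essential libncurses-dev bison flex libssl-dev libelf-dev bc
```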
Also, for booting the virtual machine later, we should install QEMU and some tools for the image preparations:
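For example:

```
sudo apt install qemu-kvm qemu-utils cloud-image-utils
```

qemu-utils provides qemu-img for the image handling, and cloud-image-utils provides cloud-localds, which we will use for the cloud-init image later.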
Kernel compilation
There are lots of ways to get the kernel source code; for example, clone a repository of it:
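For net-next, something along these lines (the repository has moved around on git.kernel.org over the years, so double-check the current URL):

```
git clone git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git
cd net-next
```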
I use the net-next repository right now, but the process is the same for the stable or mainline kernel too.
Kernel configuration
The whole kernel compilation process depends on the .config file, which is located in the kernel source root directory (net-next in our case). However, this file does not exist by default; we have to make one. Because we compile for QEMU-KVM, we don't need many device drivers and real-hardware-related options. The kernel contains a good default configuration for QEMU-KVM, so we can use that:
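On a 5.0-era tree the make targets look like this (newer kernels renamed kvmconfig to kvm_guest.config):

```
make x86_64_defconfig
make kvmconfig
```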
Creating and applying config fragments
This is okay for testing whether the kernel compiles and boots successfully, but what if we would like to test the full BPF feature set of the kernel? We have menuconfig of course, but searching for all the BPF settings one by one is time-consuming and brings the possibility of accidentally skipping some. To overcome this problem, we will use config fragments. One way to create them is to cut the required settings out of allyesconfig, in our case the enabled BPF settings.
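A sketch of the five steps (merge_config.sh ships with the kernel source; the file names are just examples):

```
cp .config kvm.config
make allyesconfig
grep BPF .config > bpf.config
cp kvm.config .config
./scripts/kconfig/merge_config.sh .config bpf.config
```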
In line 1 we make a backup of our original x86_64 kvmconfig, because we will use it later. Then we make the allyesconfig in line 2. This is a special built-in configuration which enables every kernel feature, including all modules and drivers. That one takes forever to compile, so in line 3 we just cut out the BPF part of it (the BPF config fragment). In line 4 we restore the KVM config backup, and in line 5 we apply the BPF fragment to it. The output of the method is a lightweight config for virtual machine usage, but with all the BPF features enabled.
Now the kernel is ready for compilation; use the make -j9 command to run it on 9 threads.
Booting the virtual machine
I use an Ubuntu cloud image as the basis of my virtual machine. Most of the core packages are still included in this one, but it is also designed to run in a virtual environment, so the desktop manager and many other bloats are not included.
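For example, the 18.04 (bionic) image:

```
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
```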
It is good practice to keep this image untouched and make an overlay image on top of it for later usage. Writing files in the overlay image is persistent, but it does not affect the base image. So if we mess up something very badly, we still have the original image and can use it without further problems. We can also make overlay images over overlay images: once we have installed and configured the guest system, we can make an overlay image on top of that, and if some bad thing happens, we can revert to the last stable version. Let's create the overlay image:
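With qemu-img, roughly like this (newer qemu-img versions also require -F qcow2 to spell out the backing format; the overlay name is an example):

```
qemu-img create -f qcow2 -b bionic-server-cloudimg-amd64.img vm-overlay.qcow2
```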
This Ubuntu image is untouched; we have to put some configuration into it. cloud-init is the method for that. First, we have to create a cloud config file (with very simple YAML syntax) with our identity details: hostname, SSH public key, etc. Save this file as init.yaml, for example:
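A minimal example; the hostname, user name, and key below are placeholders, replace them with your own:

```
#cloud-config
hostname: testvm
users:
  - name: spyff
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAA... spyff@host
```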
The #cloud-config comment at the top is mandatory for cloud-init. Now let's build a cloud-init image from that file:
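cloud-localds from the cloud-image-utils package does exactly that:

```
cloud-localds init.img init.yaml
```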
This init.img is also mounted into the virtual machine, and the preinstalled cloud-init tool in the Ubuntu image will take a look into it and configure the required parameters accordingly.
Now we should start the virtual machine using our own kernel:
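A command shaped like this matches the line-by-line notes below (the kernel and image paths are examples):

```
qemu-system-x86_64 \
    -kernel ~/net-next/arch/x86/boot/bzImage -append "root=/dev/sda1" \
    -hda vm-overlay.qcow2 \
    -hdb init.img \
    -m 2G \
    -enable-kvm \
    -cpu host \
    -net nic -net user,hostfwd=tcp::2222-:22 \
    -fsdev local,id=fsdev0,path=/home/spyff/folder_to_share,security_model=none \
    -device virtio-9p-pci,fsdev=fsdev0,mount_tag=shared
```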
- line 1: starting QEMU with our host architecture
- line 2: sideload the kernel we compiled before; the virtual machine will use it instead of its default kernel
- line 3: the overlay image of our untouched Ubuntu cloud image as the rootfs
- line 4: the initialization image for cloud-init
- line 5: two gigabytes of RAM for the VM
- lines 6-7: speed-up and emulation optimizations
- line 8: forward the virtual machine's SSH port to the host machine's 2222 TCP port
- lines 9-10: not necessary; a shared folder between the guest and the host
Configuration of the guest
To access the guest over SSH, we should use port 2222 of the host:
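With the user we configured in init.yaml:

```
ssh -p 2222 spyff@localhost
```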
Now for folder sharing, we have to put the following line at the end of /etc/fstab in the guest. In this example, it will mount /home/spyff/folder_to_share from the host into the guest's /root/shared folder:
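Assuming the mount_tag=shared from the QEMU command above:

```
shared /root/shared 9p trans=virtio 0 0
```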
Extending the disk space of the guest
By default, the cloud image is configured with 2 gigabytes of additional space. This is a virtual limit, so we can extend it to our needs. The following command extends the image (the hard disk in the virtual machine) by 10 gigabytes.
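Run on the host, against the overlay image:

```
qemu-img resize vm-overlay.qcow2 +10G
```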
cloud-init will automatically grow the rootfs to the end of the additional space on the next boot. However, we can do that manually if our cloud-init is too old for it:
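growpart (from cloud-guest-utils, preinstalled in the cloud image) plus resize2fs should do it, assuming the rootfs is the first partition of /dev/sda:

```
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1
```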
or if that fails, we are still able to do it with parted:
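Roughly like this, again assuming the rootfs is partition 1 of /dev/sda:

```
sudo parted /dev/sda resizepart 1 100%
sudo resize2fs /dev/sda1
```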
Miscellaneous
Installing kernel headers
The official and most reliable documentation is available here: https://www.kernel.org/doc/Documentation/kbuild/headers_install.txt. In our case this looks like the following:
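From the kernel source root, something like this (INSTALL_HDR_PATH is where the headers land, following the pattern in the document linked above):

```
sudo make headers_install ARCH=x86_64 INSTALL_HDR_PATH=/usr
```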
Installing linux tools
For installing perf and bpftool (including libbpf), do the following:
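Both are built from the kernel source tree under tools/; bpftool builds libbpf as a dependency. A sketch:

```
make -C tools/perf
sudo make -C tools/perf install
make -C tools/bpf/bpftool
sudo make -C tools/bpf/bpftool install
```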
Kernel debugging with GDB (with Ubuntu 19.10 cloud-image)
Start the machine with the command below, where the -s flag opens up QEMU's GDB stub socket on TCP port 1234:
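The command is the same shape as before, just without the sideloaded kernel (here we debug the image's stock 5.3 kernel) and with -s added; eoan-overlay.qcow2 stands for an overlay made over the 19.10 cloud image:

```
qemu-system-x86_64 \
    -hda eoan-overlay.qcow2 \
    -hdb init.img \
    -m 2G \
    -enable-kvm -cpu host \
    -net nic -net user,hostfwd=tcp::2222-:22 \
    -s
```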
To begin with, we have to get the debug symbols for the kernel. This package is not available from the default repositories; we need the debug repositories for that:
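The ddebs repository can be added in the guest like this:

```
echo "deb http://ddebs.ubuntu.com $(lsb_release -cs) main restricted universe multiverse" | \
    sudo tee /etc/apt/sources.list.d/ddebs.list
echo "deb http://ddebs.ubuntu.com $(lsb_release -cs)-updates main restricted universe multiverse" | \
    sudo tee -a /etc/apt/sources.list.d/ddebs.list
sudo apt install ubuntu-dbgsym-keyring
sudo apt update
```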
The debug repository contains the required debug packages; select the one matching our current kernel:
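The dbgsym package name follows the running kernel version, so this should pick the right one:

```
sudo apt install linux-image-$(uname -r)-dbgsym
```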
Then copy the kernel file to the host machine in order to make it available for GDB to search for symbols in it. The required file is /usr/lib/debug/boot/vmlinux-5.3.0-18-generic in this case.
By default, kernel address space layout randomization (KASLR and KAISER) is enabled. This feature prevents GDB from matching the addresses in the loaded kernel with the ones defined in the debug file. Edit the /etc/default/grub.d/50-cloudimg-settings.cfg file and append nokaiser nokaslr to the kernel command line arguments like below:
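After the edit the relevant line should look like this (the console settings are the image's stock defaults; the rest of the file stays as-is):

```
GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 nokaiser nokaslr"
```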
Then update GRUB and reboot the guest machine:
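In the guest:

```
sudo update-grub
sudo reboot
```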
Now, on the host machine, start GDB and attach to the remote target:
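Point GDB at the copied debug kernel, then connect to QEMU's stub (it listens on port 1234 by default):

```
gdb ./vmlinux-5.3.0-18-generic
(gdb) target remote localhost:1234
```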
Now the symbols and breakpoints should work, but we are still unable to see the source lines. For that, download the source files for our kernel with sudo apt install linux-source in the guest, then copy the downloaded archive, /usr/src/linux-source-5.3.0/linux-source-5.3.0.tar.bz2, to the host machine. Now we have to tell GDB where it can find the source files: untar the archive into a directory and substitute the original path reported by GDB with that directory.
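Both paths below are examples: the first is whatever build directory GDB reports when it looks for the sources, the second is where we untarred the archive:

```
(gdb) set substitute-path /build/linux-XXXX/linux-5.3.0 /home/spyff/linux-source-5.3.0
```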
For kernel module debugging, I found this excellent blog post: https://medium.com/@navaneethrvce/debugging-your-linux-kernel-module