SDB:How to install openSUSE 13.1 as PV guest on XenServer


Illustrated instructions on how to install openSUSE 13.1 x86_64 on XenServer 6.x and other useful tips.

Problem

Since the minimum compatibility version of Xen has been increased in openSUSE 13.1 to Xen 4.2, the native installer will refuse to boot on XenServer versions older than 6.5 that still run Xen 4.1.x.

If you run XenServer 6.5 or newer, you can install openSUSE 13.1 using the SLES 12 template as usual. You may have to delete /etc/systemd/system/YaST2-Firstboot.service to make it boot properly (boot with "init=/bin/bash" in "Boot parameters" under your VM's properties in XenCenter).
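A minimal sketch of that workaround from the init=/bin/bash shell (the root filesystem usually comes up read-only there, hence the remount):

# mount -o remount,rw /
# rm /etc/systemd/system/YaST2-Firstboot.service
# sync

Then force the VM off from XenCenter, remove init=/bin/bash from the boot parameters and boot normally.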

After this page was created, the openSUSE development team reviewed the decision and reverted the Xen compatibility version in the kernel build tree back to 4.1. This change does not apply to openSUSE 13.1 kernels released prior to version 3.11.10-17.1. See the discussion on Bug 851338 for details.

As a result, the suggested method of installing openSUSE 13.1 on Xen 4.1 is to install it as an HVM guest, update to kernel 3.11.10-17.1 or later, and then convert it to PV.
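Inside the HVM guest, the kernel update itself is a normal zypper run - a minimal sketch, assuming the standard openSUSE 13.1 update repository is enabled (kernel-xen is the flavor the PV guest will boot):

# zypper ref
# zypper in kernel-xen
# rpm -q kernel-xen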

Solution

The instructions below cover alternative methods of installing openSUSE 13.1 on Xen 4.1.

To overcome the kernel's Xen compatibility problem, the kernel can easily be recompiled with compatibility set to Xen 4.1, using the pre-configured "Linux Kernel Development" pattern of an HVM installation.

To learn more about running a Linux guest as an HVM guest with an unmodified kernel, refer to the following article: Linux PV on HVM drivers

Procedure

The process consists of two major steps:

  1. Install openSUSE 13.1 as an HVM guest and recompile the kernel.
  2. Set up an installation environment that provides the recompiled kernel, which XenServer can then use to install openSUSE 13.1 natively as a PV guest.


Installing openSUSE 13.1 as HVM guest

Start the installation of openSUSE 13.1 in HVM mode. For that you need to download the installation media from [1]. I suggest using the x86_64 version, as these days there is (IMHO) no advantage to running i386 on Xen. Grab the Network install iso as well; we will use it to create a local repository to initiate the PV mode installation with the recompiled kernel.

  1. Create a new VM using "Other install media" (slide1).
  2. Provide a meaningful name (slide2).
  3. Use the iso of the full DVD installation that you downloaded (slide3).
  4. Because we are going to recompile the kernel, allocate as many resources as your host permits (slide4).
  5. Building the kernel can use quite a bit of disk space, so be generous with that too (slide5).
  6. I use text mode as it makes things a bit lighter; you can carry on with the default X installation if you prefer (slide6).
  7. Again, as a personal choice I select the minimal system because for the task at hand I don't need any extra software, but you can select any desktop type, since HVM mode provides a VNC monitor that a window manager can take advantage of. To navigate, use Alt+<highlighted letter> to select items and actions. For instance, on the screen in slide7 you would press Alt+n to activate the "Next" action and proceed.
  8. At the final screen of the installer it is important to select "Change" and then "Software" to add the "Linux Kernel Development" pattern to the installation (slide8).
  9. From the filter (Alt+f) select Patterns (slide9).
  10. In the pattern list, scroll down until you reach "Linux Kernel Dev" and press space to select it (slide10).
  11. Let's also install the "Web and LAMP Server" pattern so we can set up our repository later. Navigate to "Web and LAMP Server" and press space to select it (slide11).
  12. Confirm the additional packages that will be added to the installation (slide12).
  13. This step (slide13) is a bit of a personal choice. While I am trying to embrace the new GRUB2 on physical desktop/server systems, for VMs I still feel much more comfortable with GRUB. Besides, it saves one step if you decide to convert this VM to PV later on (I wouldn't advise that though, as this VM will be very useful as a build environment for future kernel releases or other pv-ops restricted development).
  14. Once GRUB is selected you can accept the proposed new configuration. At this stage there is nothing too complex in the boot options, so it should do a good job of generating the new config (slide14).
  15. That is all the configuration required before installation. Now you can press Install (Alt+I) and Install again to complete.
    A little remark about HVM Linux: the common opinion is that a Linux VM shouldn't run as an HVM guest due to the performance penalties of emulated hardware. However, with newer hardware virtualization extensions, HVM mode might even have advantages over PV, because an HVM guest needs fewer transitions into the hypervisor than a PV guest (where, for example, system calls have to bounce through Xen). With hardware that provides HAP[2], much better response can be expected for particular operations. In fact, the PVH mode[3] currently under development might be the future of running Linux on Xen. To check whether the physical host has HAP support, you can run "xl dmesg" and look for HAP in the output.
  16. Once the installation is complete, go to the General tab of the VM and click on the link "XenServer tools not installed" (slide16).
  17. Select to install XenServer Tools. XenServer will mount xs-tools.iso and take you directly to the VM console (slide17).
  18. You will see an "ata" warning pop up because the openSUSE installation media has disappeared and a new iso was inserted - you can safely ignore it. Mount the DVD drive: (slide18)
    # mount /dev/sr0 /mnt
  19. Since XenServer supports only particular Linux distributions, you cannot just run the installer. You need to explicitly specify the type and major version of the distribution that most closely matches openSUSE; in our case the distribution is sles and the major version is 11. (slide19)
    # ./install.sh -d sles -m 11
  20. As you can see on slide20, the VM communicates with xenstore via an event channel and XenServer recognizes that the tools are installed. But where is the distribution information? Remember how I selected the minimal system in the desktop selection? Well, apparently that "minimal" pattern doesn't comply with LSB. "So what?", you may say, "the /etc/SuSE-release is still there, right?". That's true, but as I said, xs-tools only looks for particular distributions and openSUSE is not one of them. Not to worry though, just install lsb-release and it will all be pretty again.
  21. Install lsb-release: (slide21)
    # zypper in lsb-release
  22. At this stage you can reboot the VM, or just restart the xe-linux-distribution daemon. Use systemctl or the init.d script: (slide22)
    # /etc/init.d/xe-linux-distribution restart
  23. Voila! Now you have an HVM openSUSE guest with dynamic memory control that can migrate between hosts and display its operating system version and network information right in XenCenter. (slide23)
    Just a side note: if you wish to use the HVM VM for more than just a build environment, there are some optimizations you can do to reduce the number of event channels and increase the HVM density of the host[4].
  24. This is a good opportunity to take a snapshot. This is a base HVM installation without any additional stuff (except the Linux Kernel Development and LAMP patterns, the lsb-release package and xs-tools) that can be cloned, exported or simply reverted to. (slide24)
  25. Now we are ready for the not-so-much-fun part - let's rebuild the Xen-enabled kernel with compatibility matching Xen 4.1.
    # cd /usr/src/linux-obj/x86_64/xen
    # make menuconfig
    Note that by selecting the Linux Kernel Development pattern, openSUSE installed a complete toolchain and a preconfigured .config file for building the kernel for us - ain't that nice? Love it!
  26. Navigate to:
    Device Drivers -> XEN -> Xen version compatibility (4.2.0 and later) ---> change 4.2.0 to 4.1.0
    Exit all subcategories and save the config. Run make with -j (number of threads = 2 * vCPUs). In my case the physical host has 4 pCPUs and I assigned 4 vCPUs to the HVM VM, so it will be 4 vCPUs * 2 = 8:
    # make -j8
  27. As you can see on the performance graph (slide25), it took about 30 minutes to compile the kernel - so go put the kettle on, it'll be a while.
  28. Woo hoo! Our kernel is baked and fresh out of the oven. Let's copy it to the Dom0 of the XenServer. First, log in to the XenServer Dom0 console and create a new directory under /boot:
    # mkdir /boot/guest
  29. Then check the current version of the running kernel and scp the new kernel to Dom0, which in my case is at 192.168.0.196:
    # uname -r
    # scp ./arch/x86_64/boot/bzImage root@192.168.0.196:/boot/guest/vmlinuz-3.11.6-4-xen
    Note how we append the version to the vmlinuz name when copying it to Dom0. This will be very helpful in the further instructions in this article, and handy for keeping track in case other kernels are added to Dom0's /boot/guest later.
  30. A good thing about having this HVM VM is that you can set up an installation repository without leaving the shell. First configure the http server:
    # yast http-server
    You don't have to change anything in the yast workflow there; just accept the defaults and finish.
  31. Create two directories inside the http root folder (/srv/www/htdocs/). The first I call net-install - that's where I copy the content of the network installation media, so that I can replace vmlinuz-xen with my Xen 4.1 compatible version. The second I call repo; there I will mount the full DVD iso image inserted into the HVM VM:
    # mkdir /srv/www/htdocs/repo
    # mkdir /srv/www/htdocs/net-install
  32. Adjust permissions for both directories in /etc/apache2/httpd.conf, after the root <Directory /> block, like so:
    <Directory />
        Options Indexes FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
    </Directory>
    <Directory /srv/www/htdocs/repo>
        Options Indexes FollowSymLinks
        AllowOverride None
        Order deny,allow
        Allow from all
    </Directory>
    <Directory /srv/www/htdocs/net-install>
        Options Indexes FollowSymLinks
        AllowOverride None
        Order deny,allow
        Allow from all
    </Directory>
  33. Insert the Net installer iso into the DVD drive and mount it:
    # mount /dev/sr0 /mnt
  34. Copy content to net-install with:
    # cp -pbr /mnt/* /srv/www/htdocs/net-install/
  35. Once finished, copy the new kernel to replace vmlinuz-xen in net-install/boot/x86_64:
    # cp /usr/src/linux-obj/x86_64/xen/arch/x86_64/boot/bzImage /srv/www/htdocs/net-install/boot/x86_64/vmlinuz-xen
  36. Unmount the DVD drive:
    # umount /mnt
  37. Insert the full installation DVD iso into the DVD drive and mount it as the repo:
    # mount /dev/sr0 /srv/www/htdocs/repo
  38. Make sure that apache is running:
    # systemctl status apache2
  39. Additionally you can use a browser to view the path http://<HVM openSUSE ip>/repo to make sure that it is accessible.
  40. If it doesn't come up, switch off the firewall or create a rule for port 80.
    # yast firewall
    To navigate:
    "Alt + t" to Stop firewall
    "Alt + n" to proceed with configuration
    "Alt + f" to finish

    Phew, those were a few long steps. Now we are all set and ready to carry on with the actual installation.
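Before carrying on, it is worth checking from the XenServer console that both the repository and the patched installer kernel are reachable over HTTP. A quick sketch, assuming wget is available in Dom0 and using 192.168.0.194 as the HVM VM's address (substitute your own):

# wget -q --spider http://192.168.0.194/repo/ && echo "repo OK"
# wget -q --spider http://192.168.0.194/net-install/boot/x86_64/vmlinuz-xen && echo "installer kernel OK"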

Method 1: install openSUSE as PV guest using SLES template

  1. Create a VM using the latest SLES template and provide a meaningful name (e.g. "PV openSUSE 13.1 x64 from SLES template").
  2. On this screen (slide26) select to install from URL and provide the URL of the http server on your HVM VM. I would also recommend using VNC installation mode if you are not set on installing the tiny/minimal version. To use VNC installation, in the "Advanced OS boot parameters" add "vnc=1 vncpassword=password" so it reads as follows:
    console=ttyS0 xencons=ttyS vnc=1 vncpassword=password
    Installation should start, but you will be interrupted with a message that the repository is not found. Go through the default steps and select installation from http. When asked to provide the server IP, set it to your HVM VM with apache, and for the path on the next screen type "repo/" (no quotes). Once the installation is complete the VM will fail to reboot and will stay shut down (the RPM on the DVD still contains a kernel that is only compatible with Xen 4.2). This is not a problem though, as we can copy the compatible kernel directly into the VM's partition from the XenServer console.
  3. In XenCenter, go to the Storage tab of the VM and select the properties of the root disk. Copy its name.
  4. Get the uuid of the disk with the xe vdi-list command:
    # xe vdi-list name-label=<paste name of the VM's system disk>
  5. Then use this uuid with the /opt/xensource/debug/with-vdi command to attach the disk to Dom0.
    # /opt/xensource/debug/with-vdi <uuid from the output of vdi-list command above>
  6. Once the disk is attached it will be mapped to /dev/sm/backend/<sr uuid>/<vdi uuid>. Use kpartx -a to map the partitions in the block device:
    # kpartx -a /dev/sm/backend/<sr uuid>/<vdi uuid>
  7. This will add the partitions to /dev/mapper. Mount the second partition (the first one is swap with the default installation) like so:
    # mount /dev/mapper/<vdi uuid>p2 /mnt
  8. Copy the kernel we prepared earlier to /mnt/boot:
    # cp /boot/guest/vmlinuz-3.11.6-4-xen /mnt/boot
    Notice how we made the full name of the recompiled vmlinuz match the one installed by default - that means we don't need to make any adjustments to grub's menu.lst to boot it back up.
  9. Before you can boot the VM, the disk must be detached from Dom0:
    # umount /mnt
    # kpartx -d /dev/sm/backend/<sr uuid>/<vdi uuid>
    # exit
    Warning: Keep in mind that if you fail to cleanly detach the VDI from Dom0 with the steps above, the VM will still boot, but you will have difficulties shutting it down, as the internal logic for unplugging tapdisks will struggle to unmap a vhd that is still attached to Dom0. If this sounds too complicated, just remember to do the steps above.
  10. Boot the VM. Once the VM is up, if you didn't use the VNC option, it will be a bit flaky for a while until yast finishes the automatic configuration. By the look of it, this is due to systemd trying to switch consoles between login and yast. Just stick with it and keep trying to log in until it's complete, and then repeat the tools installation as we did with the HVM installation, in steps 16-23.
    Here are a few things that users installing openSUSE from the SLES template might not know or notice. When you set "Advanced OS boot parameters" in XenCenter for the installation, the installer actually writes them into the grub config, so pygrub will use those values instead of changes you later make to "Advanced OS boot parameters". In addition, by default it will set maxcpus=1. So if you are using pygrub as the PV-bootloader, no matter how many vCPUs you give the VM in XenCenter it will always use only 1, until you remove that parameter from /boot/grub/menu.lst (see the sketch after this method's steps). I would also suggest removing the console/xencons parameters so you have the option to control those directly from XenCenter. In addition, while this doesn't seem to be an issue in 13.1, in 12.3 there was a problem with the systemd serial console that would load and make the shell unbearable (no syntax highlighting and a bunch of other things). From that I've learned to use xvc0 as the default console, but you must add it to /etc/securetty if you plan to log in on the console as root. After adding xvc0 to /etc/securetty, in XenCenter -> VM properties -> OS boot parameters set xencons=xvc0.
  11. If you wish to update packages on the system but avoid recompiling or updating the kernel, just lock the kernel-xen package via zypper and then use "zypper up" to grab the latest stuff:
    # zypper al kernel-xen
    # zypper up
  12. Now you should have a pretty much fresh install of a PV openSUSE 13.1 x64 VM. It's probably a good idea to take a snapshot and export it as an XVA for use in other environments. I usually create a directory under the local storage mount path (formatted as ext3 by selecting Thin Provisioning during XenServer installation):
    # mkdir /var/run/sr-mount/<local storage uuid>/xva
    and export VMs like so:
    # cd /var/run/sr-mount/<local storage uuid>/xva
    # xe vm-export vm=<Name label of the vm that can be autocomplete with tab key> compress=true filename=./openSUSE13.1-x86_64.xva
    Method 1 described above is not my favorite, due to the fact that the SLES template doesn't exactly match an openSUSE installation. Besides the console issues and maxcpus=1, a SLES installation is actually separated into different stages, so, for instance, the default user won't be created at all (you only get root), as SLES sets up the user with support details in the yast autoconfig stage after the reboot. While these things are not major dealbreakers, I still prefer to install from scratch as described in method 2.
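    As mentioned in step 10 above, a quick way to drop maxcpus=1 and the serial console parameters from the guest's grub config is a sed one-liner run inside the PV guest. This is only a sketch; the parameter names are the ones discussed above, so check your /boot/grub/menu.lst first and keep a backup:
    # cp /boot/grub/menu.lst /boot/grub/menu.lst.bak
    # sed -i -e 's/ maxcpus=1//' -e 's/ console=ttyS0//' -e 's/ xencons=ttyS//' /boot/grub/menu.lst
    # grep -A4 '^title' /boot/grub/menu.lst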

Method 2: install openSUSE PV by creating template from scratch

This is another method of installing PV openSUSE once you have a kernel compiled with compatibility set to Xen 4.1.

  1. Create VM using "Other install media" but do not start it(untick "Start the new VM automatically" on last "Read to create the new virtual machine" screen). I called it "suse-pv from scratch"
  2. Get the uuid of the newly created VM:
    # xe vm-list name-label=suse-pv\ from\ scratch params=
  3. Unset the HVM boot policy to turn it into PV:
    # xe vm-param-set uuid=<vm-uuid> HVM-boot-policy=""
  4. Point the PV-kernel parameter to /boot/guest, where we copied our compatible kernel:
    # xe vm-param-set uuid=<vm-uuid> PV-kernel="/boot/guest/vmlinuz-3.11.6-4-xen"
    Note that if you specify "pygrub" in PV-bootloader, the PV-kernel and PV-ramdisk parameters are ignored. We are not going to use pygrub, but rather boot the images directly.
  5. From inside the HVM VM that we used to compile the kernel (or from the DVD mounted inside Dom0), copy initrd-xen into /boot/guest of Dom0 and set it as PV-ramdisk of the VM that we are preparing (below is the command to run in the HVM guest that we use as the local repo):
    # scp /srv/www/htdocs/repo/boot/x86_64/initrd-xen root@192.168.0.196:/boot/guest/
  6. Now, in Dom0 of the XenServer, set that initrd-xen as the ramdisk for the PV VM:
    # xe vm-param-set uuid=<vm-uuid> PV-ramdisk="/boot/guest/initrd-xen"
  7. Set the available install-methods to cdrom, nfs, http and ftp like so:
    # xe vm-param-set uuid=<vm-uuid> other-config:install-methods="cdrom,nfs,http,ftp"
  8. Set the console args as well as the install path pointing to the HVM VM repo (again VNC is optional, but I like it):
    # xe vm-param-set uuid=<vm-uuid> PV-args="console=ttyS0 xencons=ttyS vnc=1 vncpassword=password install=http://192.168.0.194/repo/"
  9. Once the installation has finished it will try to reboot. However, since we still have the installer's initrd-xen set as the ramdisk, it will try to install again. We have two options from this point onward:
    • Use "with-vdi" to copy the new initrd-<version>-xen into Dom0's /boot/guest and then set it as the new PV-ramdisk with root=/dev/xvda2 in the boot options.
    • Use "with-vdi" to copy the compatible kernel-xen into the VM and set PV-bootloader to pygrub so that /boot/grub/menu.lst provides the correct boot values.
    I personally favor the first option as it gives me more control from the XenServer console and, let's face it, the "with-vdi" process can be a bit of a headache when trying to make VM boot changes.
  10. For both options, force shut down the VM first.
  11. Get uuid of the disk with:
    # xe vbd-list vm-name-label=<vm name>
  12. Then use this uuid as the argument for the with-vdi command:
    # /opt/xensource/debug/with-vdi <vdi field from the output of the command above>
  13. Once the disk is attached it will be mapped to /dev/sm/backend/<sr uuid>/<vdi uuid>. Use kpartx -a to map the partitions in the block device:
    # kpartx -a /dev/sm/backend/<sr uuid>/<vdi uuid>
  14. This will add the partitions to /dev/mapper. Mount the second partition like so:
    # mount /dev/mapper/<vdi uuid>p2 /mnt
  15. Make a note of the boot parameters of the VM (the guest's menu.lst is under the mounted partition):
    # cat /mnt/boot/grub/menu.lst
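    The output typically looks something like this (kernel version and partition layout as used throughout this walkthrough; your entry may differ):
    title openSUSE 13.1
        root (hd0,1)
        kernel /boot/vmlinuz-3.11.6-4-xen root=/dev/xvda2 resume=/dev/xvda1 splash=silent quiet showopt
        initrd /boot/initrd-3.11.6-4-xen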

Option 1: copy new initrd-<version>-xen into Dom0

  1. Copy /mnt/boot/initrd-3.11.6-4-xen to /boot/guest in Dom0:
    # cp /mnt/boot/initrd-3.11.6-4-xen /boot/guest
    Another good thing about this option is that you end up with a full set of boot items (vmlinuz-3.11.6-4-xen, initrd-xen from the installer, initrd-3.11.6-4-xen), which means you can skip this step altogether when installing a PV VM in another environment. Just boot the VM with initrd-xen as PV-ramdisk to install, and straight after the installation begins you can set PV-ramdisk to initrd-3.11.6-4-xen and PV-args to root=/dev/xvda2. This will not affect the running installation, but when the VM reboots at the end, the correct initrd and rootfs will be loaded.
  2. Set PV-args to point to the correct root (I used the standard options that you can get from the /boot/grub/menu.lst entry):
    # xe vm-param-set uuid=<vm-uuid> PV-args="xencons=xvc0 root=/dev/xvda2 resume=/dev/xvda1 splash=silent quiet showopt"
  3. Set correct PV-ramdisk:
    # xe vm-param-set uuid=<vm-uuid> PV-ramdisk="/boot/guest/initrd-3.11.6-4-xen"

Option 2: copy new vmlinuz-<version>-xen into VM's root partition

  1. Copy vmlinuz-3.11.6-4-xen into /mnt/boot/ on the mapped drive:
    # cp /boot/guest/vmlinuz-3.11.6-4-xen /mnt/boot
  2. Set boot option xencons to xvc0:
    # xe vm-param-set uuid=<vm-uuid> PV-args="xencons=xvc0"
  3. And set PV-bootloader to pygrub (meaning it will read /boot/grub/menu.lst to get the remaining boot options, including the ramdisk, kernel and root path):
    # xe vm-param-set uuid=<vm-uuid> PV-bootloader=pygrub
Warning: Don't forget to detach the mapped VDI:
# umount /mnt
# kpartx -d /dev/sm/backend/<sr uuid>/<vdi uuid>
# exit

You should now be able to boot the openSUSE 13.1 VM in PV mode.
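To start it from the XenServer console rather than XenCenter (using the same uuid as in the xe commands above):

# xe vm-start uuid=<vm-uuid>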

Note: just a remark on the bootloader methods for PV VMs in XenServer. There are three options:

  1. No PV-bootloader set: PV-kernel and PV-ramdisk must reference locations inside Dom0 in order to boot the VM.
  2. PV-bootloader set to eliloader: used for installation; it will read the other-config:install-repository setting to present the installation media to the VM.
  3. PV-bootloader set to pygrub: pygrub will read the boot partition of the VM to obtain grub's menu.lst and boot the default option.
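A quick way to see which of these applies to a given VM is to list the relevant parameters from the XenServer console (same xe commands as used throughout this article):

# xe vm-param-list uuid=<vm-uuid> | grep -E "HVM-boot-policy|PV-bootloader|PV-kernel|PV-ramdisk|PV-args"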

Method 3: Use EC2 Kernel to boot PV Mode

  1. Install openSUSE in HVM mode using the steps in "Installing openSUSE 13.1 as HVM guest" above. When prompted for the bootloader, make sure to install GRUB (NOT GRUB2).
  2. Log in to the VM and install the kernel-ec2 package (see the zypper command after this list).
  3. Edit the /boot/grub/menu.lst file and set the kernel-ec2 entry as the default.
  4. Shut down the VM and switch the VM over to PV mode.
    1. Get the UUID of the VM using:
      # xe vm-list name-label=<VM Name> params=uuid
    2. Turn off HVM boot:
      # xe vm-param-set uuid=<VM UUID> HVM-boot-policy=""
    3. Enable pygrub boot:
      # xe vm-param-set uuid=<VM UUID> PV-bootloader=pygrub
    4. Find the UUID of the VBD (NOT the VDI):
      # xe vm-disk-list uuid=<VM UUID>
    5. Set the VBD to be bootable:
      # xe vbd-param-set uuid=<VBD UUID> bootable=true
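
For step 2 above, the EC2 kernel is a regular package in the openSUSE repositories, so inside the guest it is just:

# zypper in kernel-ec2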

Note: the EC2 kernel image does *not* include support for PCI passthrough. If you need PCI passthrough, this method won't work for you.

Method 4: Use openSUSE 12.3 installer with latest kernel

The kernel provided with openSUSE 12.3 can be used to boot the openSUSE 13.1 installer. That way the installation can be updated directly to a Xen 4.1 compatible kernel before rebooting.

  1. Pick a local mirror from the openSUSE mirror list: http://mirrors.opensuse.org/list/12.3.html
  2. In the XenServer console, change directory to /opt/xensource/www.
  3. Create the installation path with mkdir -p ./boot/<arch> and change directory to it.
    [root@xs /opt/xensource/www ]# mkdir -p ./boot/x86_64
  4. Use wget to download vmlinuz-xen and initrd-xen from the 12.3 oss repo of your choice, for example (download.opensuse.org and the standard repository layout are used below; substitute the mirror you picked in step 1):
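    [root@xs /opt/xensource/www/boot/x86_64 ]# wget http://download.opensuse.org/distribution/12.3/repo/oss/boot/x86_64/vmlinuz-xen
    [root@xs /opt/xensource/www/boot/x86_64 ]# wget http://download.opensuse.org/distribution/12.3/repo/oss/boot/x86_64/initrd-xen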
  5. Create a new VM using the SLES template, select "Install from URL" and point it to your XenServer address: http://<xenserver IP address>
  6. At the end of the New VM wizard untick "Start the new VM automatically".
  7. Once the VM has been created, insert the openSUSE 13.1 installation iso into the virtual DVD drive and start the VM.
  8. The VM will boot up, but the installer complains that the 12.3 media is not found. Keep selecting the defaults until you reach "Choose the source medium", where you need to pick "Hard Disk".
  9. For the device name provide /dev/xvdd and on the second screen leave a single forward slash. The installer will begin to fetch packages from the 13.1 ISO.
  10. Warning: Accept the licenses and, on the "Installation Mode" screen, make sure to tick "Add Online Repositories Before Installation" and on the next screen enable "Main Update Repository"; otherwise the VM will fail to boot after the installation.
    From this point you can carry on with a normal installation, as it will pull the latest kernel-xen version that is compatible with Xen 4.1.
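    After the first boot it is worth confirming that the guest ended up on a Xen 4.1 compatible kernel (3.11.10-17.1 or later, as discussed in the Problem section):
    # uname -r
    # rpm -q kernel-xen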

External links