SDB:KIWI Cookbook OpenNebula Cloud
All of KIWI
|This procedure was tested on at least KIWI version 5.01.7|
Create a private Cloud using OpenNebula
This example will guide you through the steps to create your own cloud infrastructure based on OpenNebula. The example leads you through the build of three separate images: a head node image, a cloud node image, and a guest image. With the head node and cloud node images created, the cloud infrastructure can be deployed on dedicated hardware in less than 10 minutes. Adding cloud nodes in the future takes as little as 5 minutes.
As of Kiwi version 4.91.3 the kiwi-doc package contains the suse-nebula-cloud example in version specific directories in /usr/share/doc/packages/kiwi/examples/extras. The example is adjusted on a periodic basis to match the requirements of new OpenNebula packages from the Virtualization:Cloud:OpenNebula project in the openSUSE Build service. Therefore it is recommended to use the latest example version from the kiwi-doc package.
The examples provided with the kiwi-doc package are tested and supported only on the currently "supported" openSUSE releases.
Prior to getting started, decide if you want to modify the example along the way to customize the images to your needs. The example provides a basic setup of the cloud. This means VM images are stored on the local disk of the head node, which is shared with the cloud nodes via NFS. The network configuration on the cloud node also uses the most basic approach by setting up a network bridge and allowing all VMs to connect to this network bridge. For more advanced storage and network configurations consult the OpenNebula documentation. Any of the advanced setup options can be built into the KIWI generated images by modifying the basic configuration provided with this example.
If you want to make modifications it is recommended that you copy the example configuration trees to a working directory, for example:
cp -r /usr/share/doc/packages/kiwi/examples/extras/<VERSION>/suse-nebula-cloud /tmp
will do the trick. Without modifications you can simply use the commands given in the .readme file (/usr/share/doc/packages/kiwi/examples/extras/<VERSION>/suse-nebula-cloud.readme).
This cloud infrastructure example uses KVM (the Linux Kernel Virtual Machine) as the underlying virtualization technology. This implies that you can only deploy the cloud node images on hardware that supports virtualization instructions. The head node image can run in a virtual machine or on a "lesser" machine. Having the head node image running as a guest of the cloud it is administering is asking for trouble, thus the "lesser" machine with lots of storage is probably your best approach.
OpenNebula also supports the use of Xen as the underlying virtualization technology. If you prefer Xen, you will need to make appropriate changes to the example configuration files (config.xml) for the head node, cloud node, and the guest image. Everything else should pretty much stay the same (Xen deployment has not been tested).
The basic concepts of KIWI, configuration trees, configuration files, etc. are explained in other examples. If you are not yet familiar with KIWI please consult the more basic examples first, do NOT make this cloud example your first KIWI project.
For cloud administration please consult the OpenNebula documentation.
Creating the head node
With the preliminaries out of the way, let's dive right into the subject at hand. The basics of cloud computing are well explained elsewhere on the Internet and thus we will focus our attention on the specifics of the OpenNebula example at hand. For all the gory details of OpenNebula please consult the OpenNebula site.
Create the head node configuration
The head node configuration tree is contained in /usr/share/doc/packages/kiwi/examples/extras/<VERSION>/suse-nebula-cloud/cloud_head. Let's first take a look at the config.xml file. The type section is configured as follows:
<type image="oem" filesystem="ext3" boot="oemboot/suse-12.1" installstick="true" installboot="install" boottimeout="2">
    <oemconfig>
        <oem-boot-title>OpenNebula Head Node</oem-boot-title>
        <oem-shutdown>true</oem-shutdown>
        <oem-swap>true</oem-swap>
        <oem-swapsize>2048</oem-swapsize>
        <oem-unattended>true</oem-unattended>
    </oemconfig>
</type>
The image is an OEM image, i.e. it is self installing, as indicated by the value of the image attribute. The value of the boot attribute may be different in your config.xml file, depending on the version of the example you are examining. The image is set up to be deployed from a USB stick as configured by the value of the installstick attribute. With the installboot attribute we select the default boot option to be the install mode as opposed to attempting to boot from the hard drive. The boottimeout attribute sets the time the boot loader will wait until booting the selected option.

Following the basic configuration of the type, some specific options for the OEM image are configured. First the boot title for the deployed image is set to OpenNebula Head Node using the <oem-boot-title> element. The <oem-shutdown> element value is set to true to force a shutdown of the system after the deployment. In combination with the <oem-unattended> feature this allows you to plug the stick into your target machine, turn it on, and walk away. When you come back and the machine is off, you know the deployment is complete. Talk about convenient. Installation time is dominated by the size of the disk in the system, as a file system gets created on the entire disk. If you have a large drive, you certainly have time for a cup of coffee.
The root password is set to cloudDemo in the users section of the config.xml file. However, the root password is changed during the firstboot procedure and thus, this configuration is somewhat irrelevant (more on the firstboot procedure below).
<users group="root">
    <user pwd="cloudDemo" pwdformat="plain" home="/root" name="root"/>
</users>
Prior to moving on to the next file in the configuration tree, take a look at the configuration of the repositories. In addition to the "standard" repository for the distribution version, the Virtualization:Cloud:OpenNebula project repository from the openSUSE Build Service is added. This repository provides the packages necessary to install OpenNebula as the cloud infrastructure.
The remainder of the config.xml file is pretty standard and should be self explanatory.
Next let's take a look at the config.sh script. In the openSUSE 12.1 example you will find code, as shown below, to manipulate the default service files for opennebula and opennebula-sunstone.
sed -i 's/\[Unit\]/\[Unit\]\nAfter=YaST2-Firstboot.service\nRequisite=YaST2-Firstboot.service/' /lib/systemd/system/sunstone.service
sed -i 's/\[Unit\]/\[Unit\]\nAfter=YaST2-Firstboot.service\nRequisite=YaST2-Firstboot.service/' /lib/systemd/system/one.service
We want to make sure that the services get started after the firstboot process (more on this below) is complete. Services are added to the system using the suseInsertService function provided by KIWI.
suseInsertService nfsserver
suseInsertService sshd
These calls enable the NFS and SSH services on the system. The suseInsertService function handles the difference between the SysV init system and systemd implementation transparently to the user. In the version 12.1 example you will also find the following:
suseInsertService cloudinfo
suseInsertService noderegistrar
suseInsertService one
suseInsertService rpcbind
suseInsertService sunstone
Note: the calls do not appear in this exact order in config.sh.
Prior to openSUSE 12.1 and systemd these services are enabled at the end of the finalizeSetup script. With the change in behavior between SysV init and systemd these are now enabled in the build phase. Proper ordering is ensured through the manipulation of the systemd unit files through the sed commands shown earlier.
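The effect of these sed calls is easy to verify in isolation. The snippet below applies the same substitution to a throwaway unit file (the file name and content here are illustrative; the real files live under /lib/systemd/system):

```shell
# Create a throwaway unit file with a [Unit] section (content is illustrative)
cat > /tmp/one.service.demo <<'EOF'
[Unit]
Description=OpenNebula daemon

[Service]
ExecStart=/usr/bin/one
EOF

# Same substitution as used in config.sh: inject ordering directives
# right after the [Unit] header
sed -i 's/\[Unit\]/\[Unit\]\nAfter=YaST2-Firstboot.service\nRequisite=YaST2-Firstboot.service/' /tmp/one.service.demo

head -3 /tmp/one.service.demo
```

After the substitution the [Unit] section carries After= and Requisite= directives for YaST2-Firstboot.service, so systemd will not start the service before the firstboot service has run.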
OpenNebula uses ssh for actions between cloud nodes and the head node. The NFS server is needed to export the home directory of the cloud administrator (oneadmin) and allow it to be NFS mounted on the cloud nodes accordingly. Note that in openSUSE 12.1 not all services are fully integrated into systemd, thus some manual ordering is required; an appropriate comment is found in config.sh. For details about the cloud administrator and the sharing of the home directory see "Planning the Installation" in the OpenNebula Documentation.
Following the addition of the services we set the welcome text for the firstboot procedure by modifying the FIRSTBOOT_WELCOME_DIR variable using the KIWI provided baseUpdateSysConfig function. This function only modifies existing values; it will not insert new variables into configuration files. As indicated by the comment, IPv6 is disabled for the cloud.
baseUpdateSysConfig /etc/sysconfig/firstboot FIRSTBOOT_WELCOME_DIR /usr/share/susenebula
# Disable IPv6
echo "alias net-pf-10 off" >> /etc/modprobe.conf.local
echo "alias ipv6 off" >> /etc/modprobe.conf.local
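As an illustration of the "existing values only" behavior, the variable update performed by baseUpdateSysConfig is roughly equivalent to a targeted sed on the sysconfig file. This is only a sketch; the real KIWI function does more bookkeeping, and the demo file name is illustrative:

```shell
# Throwaway sysconfig file with the variable already present (illustrative)
cat > /tmp/firstboot.sysconfig.demo <<'EOF'
FIRSTBOOT_WELCOME_DIR="/usr/share/firstboot"
EOF

# Roughly what the baseUpdateSysConfig call above does: replace the value
# of an existing variable, never append a new one
sed -i 's|^FIRSTBOOT_WELCOME_DIR=.*|FIRSTBOOT_WELCOME_DIR="/usr/share/susenebula"|' /tmp/firstboot.sysconfig.demo

cat /tmp/firstboot.sysconfig.demo
```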
Last but not least we fix up some permissions that are important to get things working and allow us to run the cloud infrastructure as a non root user.
# Directory for authentication file must exist such that YaST module can
# write the file
mkdir /var/lib/one/.one
# Set the desired permissions
chown oneadmin:cloud /var/lib/one/.one
# The permissions for the testcase
chown -R oneadmin:cloud /home/ctester
The /var/lib/one directory is the home directory for the oneadmin user as set up by the opennebula package built in the OpenNebula project on OBS. The ctester directory is set up in the overlay tree and can be used to test the basic functionality of the cloud using the guest image based on the configuration supplied with this example. This directory is not needed if you build your own cloud; it exists for demonstration and verification purposes only.
In general, ownership management is not needed when building images with KIWI, as kiwi generally sets the proper ownership. However, in this case we are working outside the scope of kiwi's algorithm for ownership assignment. Kiwi applies a simple rule to ownership: everything in a user's home directory is owned by that user, everything else created during image build is owned by root. In our case we do not create a ctester user within the config.xml file as there is no need for a "ctester" user account on the system. Therefore, kiwi sees the /home/ctester directory (from the overlay tree, more on this a bit later) as a created directory not owned by a user. This results in kiwi setting ownership to root. However, since we want to use this directory as a test case for the cloud infrastructure we need the oneadmin user to have read/write access.
Other entries in the config.sh file are pretty standard and should not require explanation.
Now let's take a look at the contents of the overlay tree. As this is an example for KIWI (and by necessity system configuration) and not a tutorial for a given programming language or related topics, the implementation details of the scripts are omitted.
For versions prior to openSUSE 12.1 two init scripts in cloud_head/root/etc/init.d exist. One is used to control a registration service and the other is used to control an information service, more on these a bit later. For openSUSE 12.1 and later the control of these services is handled by systemd and the unit files are located in cloud_head/root/lib/systemd/system.
The cloud_head/root/etc/YaST2/ directory contains the firstboot.xml file that describes the procedure for the firstboot process. The FIRSTBOOT_WELCOME_DIR variable, set in config.sh as shown above, guides the firstboot process to pick up the specialized welcome message found in cloud_head/root/usr/share/susenebula/welcome.txt. The firstboot process is configured to display a license, if one is present, and let the user configure the keyboard layout, the time and date, and the root password. These are standard YaST modules.

The final configuration step described in firstboot.xml is a custom module named opennebula.ycp, found in cloud_head/root/usr/share/YaST2/clients/. This module is used to configure the information to complete the head node setup. The script cloud_head/root/usr/share/firstboot/scripts/finalizeSetup (written in Python) completes the setup and runs after all "GUI" based steps in the firstboot procedure are complete. For information about configuring firstboot procedures see the YaST Firstboot documentation. For information about ycp, the implementation language for YaST modules, see the YaST documentation.

One final part related to firstboot in the overlay tree is the file cloud_head/root/var/lib/YaST2/reconfig_system. This file is "empty" (actually it contains a 1, or it would be removed by the source code control system that is used to develop KIWI) and is the trigger file that starts the firstboot process, simply by being present.
The services controlled by the init system mentioned above are cloud_head/root/usr/sbin/suseNebulaRegSrv and cloud_head/root/usr/sbin/suseNebulaConfSrv (in versions prior to openSUSE 12.1 the files are located in /sbin). Both services are implemented in Python.

suseNebulaRegSrv is the registration service that allows the cloud node to register itself on firstboot (cloud node firstboot). This is basically a service that waits for a connection on the network interface (configured during firstboot of the head node) on port 8445 and, based on the information provided by the connecting sender, registers the cloud node with the cloud infrastructure code running on the head node. The registration service also modifies the file /etc/dhcpd.conf to make sure the registered node always gets the same IP address; otherwise the registration with the cloud infrastructure would have to change on every boot of the cloud node, and that's just not necessary.

The information service implemented in suseNebulaConfSrv provides information to a cloud node such that the cloud node can configure itself. The information provided contains the user and group information of the oneadmin account created by the opennebula package on the head node. It also contains the IP address of the head node, which is used by the cloud node code to set up the mount information in /etc/fstab (more on this in the cloud node section). Take a look at the code to see all the information provided through the information service.
We also have an exports file, cloud_head/root/etc/exports. This file simply exports the /var/lib/one directory such that it can be NFS mounted.
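A typical entry in such an exports file looks like the line below; the export options shown are illustrative, the exact flags used by the example are in the overlay tree:

```
/var/lib/one    *(rw,no_root_squash,sync,no_subtree_check)
```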
Finally we arrive at the cloud_head/root/home/ctester directory. The contents of this directory are strictly to support quick verification of the functionality of the cloud setup. Once the head node and one cloud node are set up, one can simply copy the .qcow2 image created by the cloud_guest image build to the /home/ctester directory and then run the startTestVM script as the oneadmin user. This will register the image and start a VM of the image. The files testVM.dsconf, testVM.one, and testVM.vmd are OpenNebula configuration files for image storage setup, image registration, and VM (Virtual Machine) setup, respectively. For details about the configuration files see the Image Guide. The .README file has a short reminder about all of this.
Before we move on to examine the cloud node setup, a few more details about the head node. The head node runs a DHCP server (IPv4 only) on a configured network bridge (br0). The bridge has the static IP address configured during firstboot and has an alias for the link local IP of 169.254.1.1. The link local IP is used as the listening IP for the information service on port 8445. With this we can hard code the information discovery of the cloud node to connect to this alias and no dynamic discovery via avahi is required (avahi would be overkill in this case). Further, our DHCP server has been set up with a "special feature" (this occurs in the opennebula.ycp YaST module). This "special feature" identifies this DHCP server and we use it to assure that cloud nodes only accept leases from this DHCP server. Thus, even if other DHCP servers exist on the network, our cloud nodes will not accept leases unless they are offered by the cloud head node. For details about the use of this neat little feature you can explore the opennebula.ycp code or consult the DHCP man pages.
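On the server side this "special feature" amounts to a custom DHCP option in dhcpd.conf that identifies the head node. The following is only a sketch: the option name and code are taken from the client-side snippet used for the guest image later in this article, and the actual configuration written by opennebula.ycp may differ:

```
option suse-nebula code 239 = ip-address;
option suse-nebula <head-node-ip>;
```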
This pretty much explains the image setup of the head node and covers the basic setup and functionality of a running head node. Time to move on to the cloud node image.
Create the cloud node
For an OpenNebula cloud node we need nothing more than a machine running a hypervisor; no OpenNebula code is installed on the cloud node. The OpenNebula code needed is contained in the NFS mounted home directory of the oneadmin user. This minimal requirement for the cloud node is reflected in the config.xml for the cloud node.
Create the cloud node configuration
The only difference from a basic KVM based hypervisor configuration is the additional Ruby packages. The cloud node configuration is set up as an OEM image, just as the head node. Therefore, the <type> element definition only differs from the head node definition in the title for the GRUB menu entry. As in the head node configuration a root user password is set. However, it is immaterial as the cloud node will, as part of the self configuration, inherit the root password set during the head node configuration.
The config.sh file turns off IPv6, as seen previously in the head node configuration, and sets the IPv6 DHCP client to /bin/false to disable the startup of an IPv6 DHCP client.
# Disable IPv6
baseUpdateSysConfig /etc/sysconfig/network/dhcp DHCLIENT6_BIN /bin/false
echo "alias net-pf-10 off" >> /etc/modprobe.conf.local
echo "alias ipv6 off" >> /etc/modprobe.conf.local
sed -i "s/#net.ipv6.conf.all.disable_ipv6 = 1/net.ipv6.conf.all.disable_ipv6 = 1/" /etc/sysctl.conf
The following line in config.sh,
baseUpdateSysConfig /etc/sysconfig/network/dhcp DHCLIENT_BIN dhclient
sets the DHCP client to be the ISC client rather than the default dhcpcd client. The reason for this setting is that dhcpcd does not respect settings in /etc/dhclient.conf and has no option to configure restrictions for lease acceptance from a specific DHCP server. The self configuration, the cloud node registration, and the proper cloud operation are dependent on this DHCP feature. Therefore, the ISC client must be used on the cloud node (and inside the guest images). There are of course other network configuration options, see the Networking Guide for more information. The configuration of the DHCP client happens in the cloudNodeConfig script (more on this script below).
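For reference, restricting lease acceptance with the ISC client amounts to entries of the following shape in /etc/dhclient.conf. These lines mirror the ones used for the guest image; the option name must match the one configured on the head node:

```
option suse-nebula code 239 = ip-address;
require suse-nebula;
request suse-nebula;
```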
Following the dhcp setup, the config.sh script modifies the configuration file for the libvirt daemon.
sed -i "s/#listen_tcp = 1/listen_tcp = 1/" /etc/libvirt/libvirtd.conf
This change allows the libvirt daemon to listen for tcp connections. The OpenNebula infrastructure uses the libvirt API to manage virtual machines on the cloud nodes. The libvirt daemon runs on the cloud nodes because config.sh inserts the libvirtd service, using suseInsertService, during the image build process. As with the head node configuration there are differences in the way service activation is handled between openSUSE 12.1 and earlier versions.
After the libvirtd configuration file changes in config.sh, the lines shown below modify the qemu configuration file.
sed -i "s/#dynamic_ownership = 1/dynamic_ownership = 0/" /etc/libvirt/qemu.conf
sed -i "s/#user = \"root\"/user = \"oneadmin\"/" /etc/libvirt/qemu.conf
sed -i "s/#group = \"root\"/group = \"cloud\"/" /etc/libvirt/qemu.conf
The first change prevents the qemu process that gets run by libvirtd from dynamically changing the ownership of the images, which may lead to permission issues. The second and third changes to qemu.conf set the process ownership to the oneadmin user in the cloud group. This is necessary to prevent permission issues when VMs are launched from the disk image files that will be owned by the oneadmin user.
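You can check that the three substitutions match the commented defaults by running them against a sample file; the sample content below mimics the stock qemu.conf defaults and the file name is illustrative:

```shell
# Sample of the commented defaults as shipped in qemu.conf (illustrative)
cat > /tmp/qemu.conf.demo <<'EOF'
#dynamic_ownership = 1
#user = "root"
#group = "root"
EOF

# The same three substitutions performed by config.sh
sed -i "s/#dynamic_ownership = 1/dynamic_ownership = 0/" /tmp/qemu.conf.demo
sed -i "s/#user = \"root\"/user = \"oneadmin\"/" /tmp/qemu.conf.demo
sed -i "s/#group = \"root\"/group = \"cloud\"/" /tmp/qemu.conf.demo

cat /tmp/qemu.conf.demo
```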
Last but not least a soft link is created from the qemu-kvm executable to the name kvm. The kvm name is hard coded into the OpenNebula code and is not provided by the qemu package. This completes the overview of the config.sh file.
The overlay tree for the cloud node image is a bit simpler than the overlay tree for the head node. The hostname of the node is pre-configured to be node-1 in cloud_cloud/root/etc/HOSTNAME. However, this setting is changed during the self configuration phase on firstboot (after all we do not want all cloud nodes to be named node-1). The head node maintains a counter and nodes get assigned a hostname (see the implementation of cloud_head/root/sbin/suseNebulaConfSrv for details).

Firstboot "magic" is handled by the implementation of cloud_cloud/root/etc/init.d/boot.local (replaced by a service in cloud_cloud/root/lib/systemd/system for openSUSE 12.1), which calls the cloud_cloud/root/usr/share/firstboot/scripts/cloudNodeConfig script if the trigger file cloud_cloud/root/var/lib/firstboot exists. The cloudNodeConfig script finds the first connected network interface and configures the network to the link local address of 169.254.1.2. With this the configuration service running on the head node at 169.254.1.1 port 8445 can be contacted and the configuration information is retrieved. This setup has a built-in race condition when two nodes run firstboot at the same time; a delay of about 10 seconds between turning on nodes for the first time should suffice to avoid it.

As part of the self configuration the /etc/fstab file on the cloud node is modified to enable an NFS mount of the oneadmin user's home directory to /var/lib/one. Once the node is configured the production/cloud network is set up using a bridge (named br0) bound to the first ethernet device (eth0) found. After obtaining a DHCP address from the head node the address is registered with the head node to assure that the cloud node will get the same IP address, should it get rebooted. Every aspect of the self configuration and the registration of the cloud node is contained in the reasonably straightforward cloudNodeConfig script (implemented in Python).
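The fstab entry written during self configuration has roughly the following shape; the head node address is the one retrieved from the information service, and the mount options shown are illustrative:

```
<head-node-ip>:/var/lib/one    /var/lib/one    nfs    defaults    0 0
```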
Last but not least the overlay tree contains the following PolicyKit rule:
[Remote libvirt SSH access]
Identity=unix-user:oneadmin
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
This allows the oneadmin user to access libvirt and control the virtual machines running in your cloud. The rule is contained in the cloud_cloud/root/etc/polkit-1/localauthority/50-local.d/60-suseNebula-access.pkla file.
This completes the configuration of the cloud node. Build the image according to the instructions in the .readme file supplied with the example. Once the build is complete you can dump the resulting image to a USB stick and use it to install as many cloud nodes as you want to add to your cloud setup. As with the head node the installation occurs in unattended mode and the machine will turn itself off when the initial install is complete. Upon firstboot (after the USB stick is removed) the cloud node will configure itself as discussed previously. It is not necessary to attach a keyboard or monitor to the cloud node, simply turn the machine on and "watch" (actually there is nothing to see) the magic happen. The cloud node must of course be connected to the same network as the head node.
You can monitor the system log on the head node to see when the cloud node registration is complete or you can simply use the onehost list command on the head node to monitor when the new cloud node shows up. Messages are written to the system log on both the head node and the cloud node.
Create a guest
Having a cloud infrastructure setup is great, but without virtual machines running on the infrastructure the setup is not much of a cloud. The guest image configuration provided with the KIWI example is only a rudimentary image to show the format of a guest image. Using this example as a guide you can relatively easily build your own guest images that meet your needs. In addition you can use virtual machines other people have built. Check out SUSE Gallery; you just might find what you are looking for without having to build an image. For more GUI fun you can follow the Using SUSE Studio with OpenNebula guide on the OpenNebula site to build images with SUSE Studio.
The underlying KVM virtualization technology also deals happily with VMware images, thus you do not necessarily have to have a native KVM virtual disk image. Using different image types in OpenNebula is configured in the VM configuration template, please consult the OpenNebula documentation.
Create the guest configuration
The config.xml file specifies only a minimal set of packages. The key is the <type> element set up.
<type image="vmx" primary="true" filesystem="ext4" boot="vmxboot/suse-12.1" format="qcow2"/>
The guest image is a virtual machine, thus we need the image to be of type vmx and since we have chosen KVM as the virtualization infrastructure for our cloud we prefer our guest image to be in the qcow2 format, the native KVM image format. The rest of the config.xml file is "standard" stuff and should be familiar.
The config.sh script has a few customizations as follows:
baseUpdateSysConfig /etc/sysconfig/bootloader LOADER_LOCATION mbr
baseUpdateSysConfig /etc/sysconfig/network/dhcp DHCLIENT_BIN dhclient
# Disable IPv6
baseUpdateSysConfig /etc/sysconfig/network/dhcp DHCLIENT6_BIN /bin/false
echo "alias net-pf-10 off" >> /etc/modprobe.conf.local
echo "alias ipv6 off" >> /etc/modprobe.conf.local
sed -i "s/#net.ipv6.conf.all.disable_ipv6 = 1/net.ipv6.conf.all.disable_ipv6 = 1/" /etc/sysctl.conf
echo "option suse-nebula code 239 = ip-address;" >> /etc/dhclient.conf
echo "require suse-nebula;" >> /etc/dhclient.conf
echo "request suse-nebula;" >> /etc/dhclient.conf
As with the head and cloud nodes, IPv6 is disabled. The boot loader location is set to be the Master Boot Record (MBR) and the ISC dhclient is configured to be the DHCP client. We want the guest to get its IP address from the DHCP server running on the cloud head node and not from some other DHCP server on the network the cloud may be connected to. In this example the DHCP feature that controls which lease offer is accepted is hard coded (last 3 lines of the code snippet above) to the default setting in the configuration of the head node. Hard coding this setup is perfectly reasonable, as you will know the name of the feature you assigned to the DHCP server once your head node is configured and you set up your KIWI configuration for your guest. If you have multiple clouds or want to be a bit more flexible you can add a self configuration to the guest along the lines of the cloud node self configuration. For the guest you only need the DHCP feature from the head node.
The overlay tree is basically empty and contains just enough information to get the network in the guest up and running.
Build the guest image according to the information provided in the .readme file. The resulting image can easily be tested outside of your cloud on a machine that has KVM; just run qemu-kvm and supply the path to the image. This way you can make sure your image behaves as you expect prior to deploying it into the cloud. Be aware of the DHCP setting in your guest image: if you are testing on a machine that does not have network access to the cloud head node you will not be able to get into the machine through the network.
This is all there is to it. If you copy the resulting image to the /home/ctester directory on the cloud head node, change its permissions to be readable by the oneadmin user, and then execute the startTestVM script as the oneadmin user, the created guest image will be deployed to the cloud. The guest has sshd running such that you can connect to it, or you can use vncviewer on port 5905 (see testVM.vmd) of the cloud node where the virtual machine has been placed (use onevm list to obtain the information).
Before we come to the end, a quick remark about the oneadmin user account. The user is set up by the opennebula package from OBS, and during installation a random password for the oneadmin user is generated. The package also creates ssh keys for the oneadmin account. It is not necessary to log in as the oneadmin user to operate the OpenNebula cloud. For instructions that refer to "run as oneadmin user", use the sudo -u oneadmin command. As admin you are of course free to change this behavior to your liking.
What about a GUI
OpenNebula provides a Web-UI for cloud management through the Sunstone service. The configuration as included with KIWI version 4.91.3 did not activate or consider the Web-UI service. As of KIWI version 4.92.3 the provided configuration supports the use of the Sunstone service. For openSUSE versions prior to 12.1 the sunstone service is inserted at the end of finalizeSetup, while for version 12.1 and later the service is inserted in config.sh as mentioned previously. After the system is booted you can connect to the Web-UI on port 9869 from a browser running on a machine that can connect to the head node.
The example provides everything you need to have a "cloud in a box". In less than 2 hours you can have a private cloud up and running.
For cloud administration consult the OpenNebula documentation.