SDB:Cloud OpenNebula
Introduction
The packages for OpenNebula from OBS are set up to eliminate the configuration steps that are generic and can be automated. This allows you to skip the basic setup as documented in the OpenNebula documentation. Cloud nodes for an OpenNebula cloud only require the setup of a hypervisor and libvirt (assuming a shared home directory of the cloud administrator); thus, installation of the OpenNebula packages is only required on the machine that will function as the cloud head node. The cloud head node can be virtualized; however, self-hosting has not been tested.
The KIWI example that shows how to build images to set up an OpenNebula cloud has all configurations applied and provides a YaST firstboot setup that allows you to customize the setup to a certain extent.
The following description will walk you through the setup of an OpenNebula cloud using the provided packages from OBS running on SUSE (openSUSE or SLE) systems.
Procedure
- Add the OpenNebula repository
- Install the package(s)
- Complete the configuration
Add the OpenNebula repository
with YaST2
Start YaST2, select "Software" and start the module "Software Repositories". Click "Add" in the lower left hand corner. In the dialog that appears, select "Specify URL...", if not already selected, and click "Next" in the lower right hand corner. Enter a name for the repository "OpenNebula" for example and add the repository URL "http://download.opensuse.org/repositories/Virtualization:/Cloud:/OpenNebula/YOUR_SUSE_VERSION" (replace YOUR_SUSE_VERSION with the version number of the distribution you are using). Click "Next" in the lower right hand corner.
on the command line
As previously mentioned, replace YOUR_SUSE_VERSION with the version for the distribution you are using.
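For example, using zypper as root ("OpenNebula" at the end of the command is just the repository alias):
zypper addrepo http://download.opensuse.org/repositories/Virtualization:/Cloud:/OpenNebula/YOUR_SUSE_VERSION OpenNebula
zypper refresh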
Install the package(s)
When installing with YaST or zypper, all necessary dependencies will be installed automatically.
The cloud infrastructure
with YaST2
Start YaST2, select "Software" and start the "Software Management" module. Use "nebula" as the search term and select the "opennebula" package for installation. Click the "Apply" button in the lower right-hand corner.
on the command line
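For example, as root:
zypper install opennebula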
The Web-UI service (optional)
with YaST2
Start YaST2, select "Software" and start the "Software Management" module. Use "nebula" as the search term and select the "opennebula-sunstone" package for installation. Click the "Apply" button in the lower right-hand corner.
on the command line
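As root:
zypper install opennebula-sunstone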
Multi-tenancy Web-UI (optional)
with YaST2
Start YaST2, select "Software" and start the "Software Management" module. Use "nebula" as the search term and select the "opennebula-zones" package for installation. Click the "Apply" button in the lower right-hand corner.
on the command line
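As root:
zypper install opennebula-zones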
Complete the configuration
The configuration of OpenNebula needs to be completed on the command line. The installation of the "opennebula" package created the "oneadmin" user account on your system, with the home directory located in "/var/lib/one". A random password for the "oneadmin" user was generated. For the operation of the cloud infrastructure it is not necessary that you log into the "oneadmin" account. However, if you wish to be able to log in as the "oneadmin" user, you will need to set a new password for the "oneadmin" account as follows:
Create a new password for the oneadmin account (optional)
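As root, run:
passwd oneadmin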
Setting up authentication
One authentication model of OpenNebula is an authorization file to which the "ONE_AUTH" environment variable value may point. In absence of the "ONE_AUTH" variable the file $HOME/.one/one_auth is read. The authentication file has the format:
username:password
The authentication file is consulted for all command line operations; thus, any user added to the cloud will need to have an authorization file if the command line tools are being used. Cloud user accounts are created by the administrator (oneadmin) and the user information is stored in the DB backend. For user management please refer to the OpenNebula documentation. The DB is consulted by the command line tools and by the Web-UI to verify credentials. If only the Web-UI is used to work with the cloud, the authentication file is not needed in the user's home directory. In addition to the authorization file, OpenNebula also supports the use of X509 certificates. For details see x509 Authentication.
The installation of the "opennebula" package also installed sqlite, which is used as the backend DB by default. It is possible to use another DB as the backend, please consult the OpenNebula documentation for information about configuration of a different DB.
The authentication file for the "oneadmin" user, unlike for regular users, is required and has significance for the operation of the cloud infrastructure. Follow the steps below to create the file; the underlying assumption is that you did not change the randomly generated password for the oneadmin user.
- Become "root" in your shell
- Create the directory for the authentication file
- Create the file
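A sketch of these steps as shell commands; ONEADMIN_PASSWORD is a placeholder for the password of the "oneadmin" account:
# become root
su -
# create the directory for the authentication file
mkdir -p /var/lib/one/.one
# create the file (ONEADMIN_PASSWORD is a placeholder)
echo "oneadmin:ONEADMIN_PASSWORD" > /var/lib/one/.one/one_auth
# make sure the oneadmin user owns the file and nobody else can read it
chown -R oneadmin:cloud /var/lib/one/.one
chmod 600 /var/lib/one/.one/one_auth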
As a convenience you may want to set the "ONE_AUTH" environment variable to point to /var/lib/one/.one/one_auth in the "oneadmin" user's shell environment by modifying the shell's startup scripts.
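For example, assuming bash is the login shell of the "oneadmin" user:
echo 'export ONE_AUTH=/var/lib/one/.one/one_auth' >> /var/lib/one/.bashrc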
VM image storage
The default setup by the "opennebula" package is to store VM images in the directory /var/lib/one/datastores on the local disk. This implies that sufficient disk space must be available on the local drive to store the VM images you expect to use in your cloud setup. It is possible to change the location of the image storage; please consult the OpenNebula documentation for details. It is important that the image storage is in the same location on all machines that are part of the cloud. OpenNebula offers a number of storage models, as described in the OpenNebula documentation.
The special requirements for the oneadmin user account
The "oneadmin" user account created by the opennebula package is the account for the cloud administrator. The "oneadmin" user is part of the "cloud" group, also created by the opennebula package. The "oneadmin" account and "cloud" group must also exist on all cloud node installations with the same user ID and group ID. One way to accomplish this is to turn your cloud head node into an NIS (Network Information Service) server and configure the cloud nodes as NIS clients. If you already have an NIS server running on you network you can add the "oneadmin" user to your existing NIS configuration and have the cloud nodes access the running NIS server. The simplest option is to add the "oneadmin" user account and the "cloud" group to each cloud node as you set it up. Do not forget, the UID for "oneadmin" must be the same on all machines. The same requirement exists for the "cloud" group ID.
The home directory of the "oneadmin" user contains scripts, called drivers in OpenNebula speak, that are accessed when VMs are launched in the cloud. To avoid having to copy these scripts to every cloud machine, it is best if you share the "oneadmin" home directory with the cloud nodes via NFS and mount the oneadmin home directory on the cloud nodes. This implies that your head node is also an NFS server.
Exporting the oneadmin home directory
with YaST2
Start YaST2, select "Network Services" and start the "NFS Server" module. Select "Start" to have the NFS server started, this will also assure that the NFS server starts at boot time in the future, Open the port in the firewall if you have the firewall running, enable IPv4 and enter your domain name. Click "Next" in the lower right hand corner. In the dialog that follows click on "Add Directory" then enter /var/lib/one or use the "Browse" button to browse to the directory. This is the home directory of the "oneadmin" user, if you changed the directory from the default location make sure to export it instead. Use the following "Options":
no_subtree_check,rw,root_squash,sync
Click "OK" and click "Finish"
on the command line
As the root user
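Add the home directory of the "oneadmin" user to /etc/exports and export it, for example:
# the wildcard '*' is only an example; restrict the export to your cloud nodes as appropriate
echo '/var/lib/one *(no_subtree_check,rw,root_squash,sync)' >> /etc/exports
exportfs -a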
Prior to version 12.1 use:
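# start the NFS server and enable it at boot time
/etc/init.d/nfsserver start
chkconfig nfsserver on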
With openSUSE 12.1 the init system changed to systemd; to enable and start the NFS server, use the following commands:
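# service name as used on openSUSE
systemctl enable nfsserver.service
systemctl start nfsserver.service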
The cloud management service
The cloud is managed by a service called "oned". The service is started with the "one" initscript or in version 12.1 as a service unit (one.service). Starting the service for the first time requires that the "ONE_AUTH" environment variable is set. Subsequent start ups do not have this requirement.
Prior to version 12.1 as root run:
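# start oned with ONE_AUTH pointing to the oneadmin authentication file
ONE_AUTH=/var/lib/one/.one/one_auth /etc/init.d/one start
# add the service to the init process
chkconfig one on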
The first command starts the service with the "ONE_AUTH" variable set to point to the authentication file of the "oneadmin" user. The second command inserts the service into the init process to assure proper start up if the machine needs to be rebooted.
With openSUSE 12.1 the init system changed to systemd, and the OpenNebula package provides service files in /lib/systemd/system. Further, a setup script fulfills the role of the initial special start up. As root, execute the following commands:
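Note that SETUP_SCRIPT below is only a placeholder; check the file list of the opennebula package for the name of the setup script it ships.
# run the package's setup script with ONE_AUTH pointing to the oneadmin authentication file
ONE_AUTH=/var/lib/one/.one/one_auth SETUP_SCRIPT
# enable and start the service unit
systemctl enable one.service
systemctl start one.service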
The first command configures the database with the "ONE_AUTH" variable set to point to the authentication file of the "oneadmin" user. The subsequent commands insert the services into the init process, to assure proper start up if the machine needs to be rebooted, and start the services.
Configure the Web-UI (optional)
A web based UI is provided by the opennebula-sunstone package as described earlier. As with the opennebula package, the opennebula-sunstone package provides configuration for things that are generic and can be automated. However, some steps remain to enable the Web-UI service for your cloud setup. By default the Web-UI listens on localhost, i.e. 127.0.0.1, on port 9869. Change the ":host:" entry in /etc/one/sunstone-server.conf from "127.0.0.1" to the IP address of your server to make the Web-UI accessible from any machine on your network that can reach the server. You may also change the port if you so desire. Now enable and start the service; as root run:
For versions prior to 12.1:
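A sketch, assuming the init script installed by opennebula-sunstone is named sunstone (check /etc/init.d for the actual name):
/etc/init.d/sunstone start   # init script name is an assumption
chkconfig sunstone on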
In 12.1 use:
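Again a sketch; the unit name is an assumption, check /lib/systemd/system for the actual name:
systemctl enable sunstone.service   # unit name is an assumption
systemctl start sunstone.service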
You can now connect to CLOUD_HEAD_IP:9869 with a browser on any machine that can access that IP address.
Multi-tenancy
As of version 3.0, OpenNebula supports multi-tenancy. Multi-tenancy means that one cloud deployment has multiple control or head nodes. This feature of OpenNebula is available through the opennebula-zones package. As this type of setup is rather advanced and many possibilities exist, it will not be discussed further in this guide. Please refer to the OpenNebula documentation about zones. In version 12.1 the ozones service is integrated with systemd.
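To add the service to the init process and start it, use commands along the following lines (the unit name ozones.service is an assumption; check /lib/systemd/system for the actual name):
systemctl enable ozones.service   # unit name is an assumption
systemctl start ozones.service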
Prior to openSUSE 12.1 use the init tools.
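A sketch, assuming the init script is named ozones:
chkconfig ozones on          # init script name is an assumption
/etc/init.d/ozones start     # see the note below about the very first start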
As with the primary daemon (oned), an initial startup of the ozones service with special settings is required. Use /usr/bin/ozones-server start for the first time start up, and consult the OpenNebula Zones documentation for details.
Setting up a cloud node
A cloud node must run a hypervisor. OpenNebula supports both KVM (Kernel-based Virtual Machine) and Xen hypervisors, as well as proprietary hypervisors. For configuration details for each hypervisor please see the OpenNebula documentation. The cloud node is managed by the "oned" service on the head node using the virtualization daemon (libvirtd). Therefore, libvirtd must be running on the cloud node.
Install the libvirt package
with YaST2
Start YaST2, select "Software" and start the "Software Management" module. Use "libvirt" as the search term and select the "libvirt" package for installation. Click the "Apply" button in the lower right-hand corner.
on the command line
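For example, as root:
zypper install libvirt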
By default, access to the daemon and its services is restricted to the root account. You must change the PolicyKit setup to allow access to virtualization functionality by the "oneadmin" user. Create the file /etc/polkit-1/localauthority/50-local.d/60-suseNebula-access.pkla on the cloud node with the following content:
[Remote libvirt SSH access]
Identity=unix-user:oneadmin
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
As discussed previously the cloud node must have the "oneadmin" user account and the "cloud" group configured.
The virtualization service (libvirtd) will run the "qemu" process as the default user and group (root:root or qemu:qemu, depending on the version of SUSE). This will lead to permission problems. Therefore, you must modify the /etc/libvirt/qemu.conf configuration file to run the process as the "oneadmin" user and "cloud" group. Additionally, you want to prevent the qemu process from modifying file ownership. Change the configuration file to reflect the settings shown below:
user = "oneadmin" group = "cloud" dynamic_ownership = 0
Note that these entries do not appear in the shown order or grouping; use the search function of your editor to find the entries in the /etc/libvirt/qemu.conf configuration file.
It is also necessary to modify the virtualization daemon's configuration file such that the process will listen to tcp connection requests. Modify /etc/libvirt/libvirtd.conf and set the listen_tcp value to 1 as shown below:
listen_tcp = 1
With these modifications complete you may now start the "libvirtd" process as follows:
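For versions prior to 12.1, a sketch using the init tools:
/etc/init.d/libvirtd start
chkconfig libvirtd on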
For openSUSE 12.1 and later use:
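# enable at boot time and start now
systemctl enable libvirtd.service
systemctl start libvirtd.service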
Next you need to have the content of the "oneadmin" user account home directory, i.e. /var/lib/one by default, from the head node available on the cloud node. You may copy the content from the head node (this will create a maintenance problem should the version of OpenNebula be upgraded on the head node in the future), or you may NFS mount the directory.
NFS mounting the oneadmin users home directory
with YaST2
Start YaST2, select "NFS Client" in the "Network Services" category. Click "Add" to start the NFS Client configuration dialog. Enter the hostname of the head node or the head node IP address. The remote directory should be /var/lib/one unless you modified the default home directory of the "oneadmin" user on the head node. Choose /var/lib/one as the local mount point, or the new location of the "oneadmin" home directory. this path must match the home directory of the "oneadmin" account on the head node. Click OK.
on the command line
Create an entry in the /etc/fstab file using your favorite editor or the following command (as root):
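For example (the mount options are only an example):
echo "IP_ADDR_HEADNODE:/var/lib/one /var/lib/one nfs defaults 0 0" >> /etc/fstab
mount /var/lib/one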
Replace the IP_ADDR_HEADNODE placeholder in the command above with the IP address or the hostname of the head node.
Last but not least, you need to configure and set up the network on your cloud node. The possibilities are pretty much endless and therefore difficult to cover. In the simplest case all your cloud nodes have one network bridge with a static IP address. You can configure a network bridge using YaST or the brctl command. It is important that each cloud node always keeps the same IP address, or you will have trouble with your cloud operation. A consistent IP address for each machine can also be achieved using DHCP. The configuration option with DHCP and one network bridge is shown in the KIWI example. For network setup details consult the OpenNebula documentation.
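As an illustration of the simplest case mentioned above, a bridge can be created manually with brctl (not persistent across reboots; br0 and eth0 are example names, use YaST or ifcfg files for a permanent setup):
brctl addbr br0        # br0 is an example bridge name
brctl addif br0 eth0   # eth0 is an example physical interface
ip link set br0 up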
Updating to a new version
Version 3.2.1 to Version 3.4.1
The packages for version 3.4.1 do not automatically perform the necessary upgrade steps. To upgrade you need to shut down your cloud operation, install the new packages, and follow the steps outlined in the OpenNebula upgrade documentation.
Version 3.0 to Version 3.2.1
The packages for version 3.2.1 do not automatically perform the necessary upgrade steps. To upgrade you need to shut down your cloud operation, install the new packages, and follow the steps outlined in the OpenNebula upgrade documentation.
Version 2.2.1 to Version 3.0
For version 3.0 of OpenNebula the database schema for the OpenNebula data storage database (/var/lib/one/one.db by default) was changed. The packages for version 3.0 do not perform an automatic upgrade of the database schema. The upgrade has to be performed manually. For a description of the database upgrade procedure, follow the instructions provided by the OpenNebula project found here. The OpenNebula daemon (oned) checks the database version and will not start if the database does not provide the expected version information. Installation of the new packages will not affect an existing database.
In addition to the database schema update there were changes to the command line interface and the templates describing virtual machines for the cloud. These changes are described in the compatibility guide found here.
Additional notes
Cloud nodes need to be registered with the controller on the head node; please see the OpenNebula documentation for the commands and arguments. Once one cloud node is set up and registered, the cloud is operational. You can add new cloud nodes to the cloud at any time.
Once you have a VM, it also needs to be registered with the cloud infrastructure, please see the OpenNebula documentation for the commands and arguments.
For VM creation you can use SUSE Studio, KIWI, or another tool that produces images in the appropriate format for your cloud setup. You may also create images manually by using tools such as qemu to run a virtual disk image and then install an OS into it. Description of these methods is outside the scope of this setup discussion.
If you are using DHCP for your cloud and you have another DHCP server running on a network segment that is accessible by the deployed VMs, be aware that the acceptance of leases by the VMs from a DHCP server other than the cloud head node may cause problems. The KIWI example contains configuration information about how to avoid this issue.
Summary
Using the packages from OBS for OpenNebula simplifies the setup of your cloud infrastructure in that generic automated tasks are already completed during package installation. However, some configuration steps remain after package installation. These configuration steps also allow you to tailor the installation to your needs.
Have fun and enjoy your new cloud infrastructure.