openSUSE:Build Service private installation


Hardware specification

My goal was to enable the rebuild of a full Linux distribution in less than half a day after a major change (e.g. a tool chain update) and in less than 15 minutes for any major change which does not involve a full rebuild.
The configuration described here achieves a full rebuild in 5 hours and a kernel rebuild in less than 10 minutes.
Note: on the initial quad-core, 4 GB RAM PC I used for my tests, a full rebuild took 7 days.

Server

In the default configuration, the obs-server also acts as an obs-worker. In this HowTo I assume that the server does not take any obs-worker tasks.

  • Quad core 2 GHz with 8 GB RAM
  • 1 × 250 GB SATA HDD
  • 2 × ≥500 GB SAS HDDs
  • 1 server-class Gigabit Ethernet interface.

Workers

The obs-workers do all the actual compiling work, so raw power is what you need: CPU cores at 2.6 GHz or better.
Please note that when a full rebuild is required, the OBS system cannot process tasks in parallel for about 2/3 of the duration of the build process, so the final build speed depends mainly on the single-thread performance of your obs-workers. Furthermore, as the obs-server will use any available obs-worker, it is not advisable to have slow obs-workers registered on the network.
In my configuration, I use two (2) obs-workers. Yes, the IT man nearly had a heart attack when I gave him the specifications.

  • 2 × quad-core 2.6 GHz CPUs (more is better)
  • 64 GB RAM.

32 GB of RAM is used by the system; the other 32 GB is used as a RAM disk (see the Optimisation chapter below).

  • 1 × 250 GB SATA HDD
  • 1 × 80 GB SSD (solid-state drive)
  • 1 server-class Gigabit Ethernet interface.

Network model

For each target package which needs to be built, the obs-server will trigger an obs-worker to download all the package dependencies required to build that package. Even with the cache activated on each obs-worker (on the SSD), the loading time over the network is significant and accumulates for each package which needs building.
You want the obs-server and obs-workers to operate on an almost dedicated Gigabit Ethernet local backbone to minimise each download time.
I advise you to isolate this local OBS sub-network behind a firewall (open ports 22, 80, 82 and 444).
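
For illustration, the gateway rule set could be as small as this iptables sketch (the 192.168.10.0/24 OBS subnet is an assumption):

# iptables -A FORWARD -d 192.168.10.0/24 -p tcp -m multiport --dports 22,80,82,444 -j ACCEPT
# iptables -A FORWARD -d 192.168.10.0/24 -j DROP
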
As (at the time of writing) the OBS appliances support neither a Bacula client for backup nor a Nagios client for monitoring, you will need to provide an external solution for both (I ruled out modifying the OBS appliance in order to keep the lightweight upgrade/scaling offered by that model, but I hope that one day Bacula and Nagios will be included by default).

  • for backup, my solution consists of remotely mounting the data partition, the database and some configuration files through an ssh tunnel onto a server which is external to the OBS local network and which then runs the backup via NFS (see the sketch after this list).
  • for remote monitoring, Nagios provides a clientless solution via ssh which covers the basic needs.
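
As an illustration of that backup approach, here is a minimal sketch run from the external backup server; the host names, the NFSv4 assumption and the mount point are mine:

# ssh -f -N -L 2049:localhost:2049 root@obs-server.mydomain.com -> tunnel the NFS port
# mount -t nfs4 -o port=2049 localhost:/srv/obs /mnt/obs-backup
# run the backup (e.g. the Bacula job) against /mnt/obs-backup, then umount and close the tunnel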

Deploying an obs-server

Installing the obs-server software

Use virtual appliance

An easy option is to use the server appliance provided by the project: OBS server appliance.

The OBS server appliance is built in SUSE Studio, so you can download it, clone it, or use the Media, Virtual or Cloud appliance variants.

Install from the command line
  • Install openSUSE OBS Server repo:
zypper ar -f https://download.opensuse.org/repositories/OBS:/Server:/2.10/openSUSE_15.2/OBS:Server:2.10.repo

You should replace '15.2' with your version of openSUSE.

  • Install OBS server:
zypper in -t pattern OBS_Server

Note: if you want an OBS server for development, you can install server and worker on the same machine. If you want an OBS server for production, I recommend an OBS server machine without an OBS worker. To configure this, take a look at section 1.4 of the OBS admin guide.

Install in Virtual Machine

The project also provides VM images. OBS itself uses relatively low CPU and memory resources (it’s the build workers that use a lot of these resources), so if you are planning on building a flexible, distributed setup, you can download OBS VM images here.

Configuring LVM as the data store

You need to decide whether to use 1 or 2 HDDs for your data. I do not advise using a single disk because the risk of losing all your data is too high. Having to go back to the latest backup just for a HDD failure is a huge loss of time for an engineering team. HDDs are cheap and any production system (even for a small team) should use case 2.

Case 1 single HDD

# fdisk /dev/sdb -> create a new primary partition with type 8e (aka Linux LVM)
# pvcreate /dev/sdb1 -> prepare the partition for LVM
# vgcreate "OBS" /dev/sdb1 -> create the volume group

Case 2 HDD mirror

# fdisk /dev/sdb and /dev/sdc -> on each disk, create a new primary partition with type fd (aka Linux RAID autodetect)
# /sbin/mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# pvcreate /dev/md0 -> prepare the RAID device for LVM
# vgcreate "OBS" /dev/md0 -> create the volume group

  • creating your data logical disk

# lvcreate -L 50G -n "server" OBS -> Note: the size should be adapted to your disk size
# vgscan
# mkfs.ext4 /dev/OBS/server
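
Before rebooting into the appliance, a quick sanity check (sketch) confirms that the volume group and the fresh filesystem are usable; the appliance will claim the volume automatically at boot:

# vgs OBS && lvs OBS -> verify that the volume group and logical volume exist
# mount /dev/OBS/server /mnt
# df -h /mnt
# umount /mnt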

Configuring the obs-server appliance

  • reboot to start the appliance. It will take some time to create all the structures
  • log on as root (the default is no password)
  • set a password for root; it is mandatory to run sshd (from here on I assume openSUSE)
  • activate and start sshd

# insserv sshd
# rcsshd start

  • disable and stop the obs-worker on that machine (this is not needed if you run internal workers for an initial test or a small system)

# insserv -r obsworker
# rcobsworker stop

  • Install NFS server to allow the backup of your data

# zypper in nfs-kernel-server
# # configure /etc/exports -> enable access to the /obs link target for your backup server
# insserv nfsserver
# rcnfsserver start
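
A minimal /etc/exports entry could look like the following sketch; the backup host name and the read-only export options are my assumptions. After editing, run exportfs -r to apply the change without a restart.

/srv/obs   backup.mydomain.com(ro,root_squash,sync,no_subtree_check)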

  • check that you have the right options in /etc/sysconfig/network/dhcp
  • force the name of the OBS instance (this overcomes a bug with remote workers). The variable obsname needs to be the same regardless of which machine builds a package; otherwise the buildcompare process will think a package has changed from what was built last time, just because it was built on a different machine.

edit /usr/lib/obs/server/BSConfig.pm
If you have the line: our $obsname = $hostname; # unique identifier for this Build Service instance
replace it with: our $obsname = "your-obs-server-name.yourDomain.com";

If you use a virtual machine, you should set the VM type used by the workers to none:

vi /etc/sysconfig/obs-server (note: on newer releases the obs-worker and obs-server files have been merged)
OBS_VM_TYPE="none"

Add repository targets

edit distributions.xml

vi /srv/www/obs/api/files/distributions.xml

Note: instead of using the file above, with OBS 2.4+ you must use the API to export, modify, and re-import the distributions like so:

osc api /distributions > file
edit file
osc api /distributions -T file

Under the tag <distributions>, add/remove <distribution>...</distribution> blocks.

A block is defined like this:

<distribution vendor="MeeGo" version="1.2.0" id="meego-1.2.0">
 <name>MeeGo 1.2.0</name>
 <project>MeeGo:1.2.0:oss</project>
 <reponame>MeeGo_1.2.0_oss_standard</reponame>
 <repository>standard</repository>
 <link>http://www.meego.org/</link>
</distribution>

Add your *.png files here:

/srv/www/obs/webui/public/images/distributions/

The files must have the same name used in the distributions.xml file.

Change architecture

If you want to change the architectures available in an OBS instance, first configure the scheduler:

vi /etc/sysconfig/obs-server
OBS_SCHEDULER_ARCHITECTURES="i586 armv8el"

Starting your appliance

Reboot your appliance. It is then ready to go (the user Admin with password opensuse is available):

webui -> https://hostname.domain
api   -> https://hostname.domain:444 
repo  -> http://hostname.domain:82
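
To check that all three services answer, something like this works from any machine on the network (-k skips validation of the self-signed certificates):

# curl -k -I https://hostname.domain        -> webui
# curl -k -I https://hostname.domain:444    -> api
# curl -I http://hostname.domain:82         -> repo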


Update Note 1: if you are starting an OBS instance just for quick testing, the hostname doesn't matter. From OBS 2.2.80 onwards (I didn't test with older versions) you only need to know the IP address of your OBS.

Update Note 2: the API port has changed since this wiki was first written. In OBS 2.2.80 (and surely newer versions) the API is located on port 444.

Update Note 3: with recent versions of OBS (since 2.2.80) you don't need to worry about any of this: if you connect to "http://ip.of.your.obs", a page will show you all the links that you should use to connect to your OBS.

Deploying obs-workers

Note: if you deploy a small system you will probably elect to run the workers which are built into the obs-server appliance. In that case, ignore the obs-worker install and jump directly to its configuration.

Installing the obs-worker software

An easy option is to use the obs-worker appliance provided by the OBS Light project. The alternative is to build it yourself. To help you, the method used by the OBS Light project is given below.

  • boot your server with a Linux live distribution; openSUSE can provide you with one which will do the job.
  • check that dhcp works and that you get the hostname that you expect (this is a key point with OBS appliances: your DNS lookup and your hostname must be identical).
  • download the latest stable raw.bz2 obs-worker appliance from openSUSE
  • remove all the partitions on the boot disk of your server using fdisk (I assume from now on that it is /dev/sda).
    Note: this will erase all the data on that disk forever. Do it on the right machine and the right disk.
  • copy the appliance onto the boot disk of your server
# bunzip2 -c MyDownloadedApplianceFile.raw.bz2 > /dev/sda
Note: this is not an error; you must use the drive name without a partition number.
  • the appliance's root password is opensuse.

Configuring the OBS instance names

To avoid triggering unneeded builds you need to give your build instance a unique ID which is the same for the OBS worker(s) and server(s). By default this ID is set to $HOSTNAME, which is only correct when the server and the workers run on the same host.

vi /usr/lib/obs/server/BSConfig.pm
change: $obsname = $hostname; # unique identifier for this Build Service instance
to: $obsname = "obs-server.mydomain.com";

OBS servers broadcast information for the workers via SLP; in some DNS configurations that information may be unusable. You can check with slptool that the broadcast IP name or address is valid.

slptool findsrvs obs.source_server
slptool findsrvs obs.repo_server

If it is not correct, fix the server address in these files:

/etc/slp.reg.d/obs.repo_server.reg
/etc/slp.reg.d/obs.source_server.reg
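
The exact content of these files varies by release, but the line to check is the service URL. A corrected entry might look like this sketch (host name assumed):

service:obs.repo_server:http://obs-server.mydomain.com:82,en,65535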

Note: never use the "_" character in the $HOSTNAME of an OBS server.

Configuring the cache to use the SSD

I assume here that you have a dedicated obs-worker (i.e. not a combined server-worker).
We need to create a logical volume named cache in the volume group named OBS.

# fdisk /dev/sdX -> create a new primary partition with type 8e (aka Linux LVM) on your SSD or fast dedicated cache HDD
# pvcreate /dev/sdX1 -> prepare the partition for LVM
# vgcreate "OBS" /dev/sdX1 -> create the volume group
# lvcreate -L 50G -n "cache" OBS
# vgscan
# mkfs.ext4 /dev/OBS/cache
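
If your appliance does not detect and mount the new volume automatically, an /etc/fstab entry like this sketch puts it where OBS_CACHE_DIR will point:

/dev/OBS/cache   /cache   ext4   defaults   1 2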

By default the appliance will use at most 50% of the volume on which the cache is mounted. This makes sense when the cache is not on a dedicated volume, but that is not the case here.
To enable the cache to expand to the full size of the volume, simply modify the file /etc/sysconfig/obsworker:

OBS_CACHE_DIR="/cache" -> point to the fastest local disk
OBS_CACHE_SIZE="" -> by default 50% of the partition containing OBS_CACHE_DIR;
# should be increased to 90% if the cache is the sole user of that partition.


Possible optimisation

You need to use a proxy / firewall to get out of your company network

One of the simplest ways to start an OBS instance is to link it to another OBS instance (as shown in the bootstrapping section) that is located on the Internet beyond the firewall of your network. That means your OBS instance must be able to access the Internet transparently through the firewall. To do that you will need to edit two files. The first file is /etc/sysconfig/proxy. It holds the openSUSE proxy settings and you need to modify the three following lines:

PROXY_ENABLED="yes"
HTTP_PROXY="http://[user]:[password]@[proxy IP]:[port]/"
HTTPS_PROXY="http://[user]:[password]@[proxy IP]:[port]/"

But in my experience that is not sufficient for OBS, because OBS accesses the network in a way that is not handled by the openSUSE settings. So you will need to modify another file, /usr/lib/obs/server/BSConfig.pm.
Copy and uncomment the line with #our $proxy = "http(s)://[user:pass]\@[host]:[port]", replace the bracketed placeholders with the appropriate values and choose the http mode for Internet access (with or without the "s"). My advice would be to keep the "s", but you should try both to see what works properly.
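
After substitution, the uncommented line could look like this (user, password, host and port are placeholders):

our $proxy = "https://user:pass\@proxy.mydomain.com:8080"; # note the "\" before the "@"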

Note: the syntax between the two files is slightly different; the difference is the "\" before the "@". Don't forget it in the second file! This should save you a lot of headaches!

Image creation: if you later need to use Mic2, please check the proxy settings for Mic2 here.

Reduce the number of workers

By default an obs-worker instance will

  • create one worker per CPU thread
  • use the flag -j1.

If you are offering a public OBS to be shared by many users this is a good way to distribute a fair service to a large body of users.

A development team is at the opposite end of the spectrum: it produces a smaller number of builds per day but needs them quickly. My tests were carried out with a team of 50 engineers building one to two releases a day.
It was more efficient to reduce the number of workers to one per two CPU threads and to set the default compilation flag as high as possible. This model reduced our build time from scratch from 7 hours to 5 hours.

Edit the file /etc/sysconfig/obs-worker

Note: since "2.2.992-1.3" the file /etc/sysconfig/obs-worker seems to no longer exist (its settings have been merged into /etc/sysconfig/obs-server, see above).

# 0 instances will automatically use the number of CPUs
# my advice -> force a value of 1/2 of the number of CPU threads.
OBS_WORKER_INSTANCES="8"
....
# this usually maps to "make -j1" during build
# my advice -> force it to the maximum number of CPU threads that you have on your server.
OBS_WORKER_JOBS="8"

Activate a RAM disk for the workers' files

The default configuration leads each worker to create its own directory in /tmp/obs, or in /dev/OBS/worker_root_N (N varies from 1 to n depending on the number of workers that you need) if these logical drives have been created.

This is not efficient, as /tmp is by default on the boot HDD, which is a regular SATA drive, and /dev/OBS is a software mirror also running on regular HDDs.

The workers will expand the entire list of RPMs required to build a package into a fake root before running the RPM build. With a bit of luck the binary RPMs needed to run the build job will be in the cache on your fast SSD, but you still don't want to lose time copying all this data onto a second HDD every time you build a package. My advice is to use a RAM disk (two models are possible).

Option 1: create a large swap partition of the size that you want the RAM drive to be, and mount /tmp as tmpfs. It's a simple method, and as long as your unit does not swap it will be very fast. The risk is that if you ever start to swap it will be unacceptably slow.
Implementation is simple: create the swap partition and mount /tmp on tmpfs by default in your /etc/fstab.
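
A sketch of that implementation, assuming /dev/sdb2 was created as the swap partition and 32 GB is the RAM-disk budget:

# mkswap /dev/sdb2 && swapon /dev/sdb2 -> create and enable the large swap partition
# then add to /etc/fstab:
/dev/sdb2   swap   swap    defaults            0 0
tmpfs       /tmp   tmpfs   size=32G,mode=1777  0 0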

Option 2: create a dedicated RAM drive and configure the workers to build on that partition. This method requires a bit more work, as you will have to create and mount the RAM disk at each boot. The RAM disk needs to be mounted before the obsworker daemon starts.
A simple solution is to create an /etc/init.d/before.local file containing a script which creates your RAM drive and mounts it on /tmp/obs.
If you use another mount point you will need to patch /etc/sysconfig/obs-server.
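
A minimal /etc/init.d/before.local could look like the following sketch (tmpfs is used here rather than a raw RAM disk so the size stays bounded; 32G matches the RAM-disk half of the 64 GB workers described above):

#!/bin/sh
# Executed before the other init scripts, so the RAM disk is in place
# before the obsworker daemon starts.
mkdir -p /tmp/obs
mount -t tmpfs -o size=32G,mode=0755 tmpfs /tmp/obs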

Note: since "2.2.992-1.3" the file /etc/sysconfig/obs-worker seems to no longer exist (see above).

# default is /tmp
OBS_WORKER_DIRECTORY="" -> point to a fast local hard drive, or better a RAM fs.

Upgrade an existing appliance

Upgrading an OBS appliance has been greatly simplified since v2.x. You simply do it via zypper.

If you need to update an OBS server from 2.2 to 2.3, you may need this link:

# zypper ref
# zypper dup

Another great facility offered by the appliance is that, should you need to reinstall from scratch, the process is very simple:

  • delete the partition on your boot drive (default /dev/sda), not on your data drive.
  • copy the appliance onto the boot drive as you would for a clean install.
  • reboot
  • the appliance will detect your OBS LVM volume group automatically and will use it to recover all your previous data.
  • reapply your optimisation options
  • reboot

Restoring an obs-server from backup

What to back up

Restoring an OBS data store is very simple, assuming that you have backed up all the required data. Personally I would advise you to back up the following:

  • the .raw.bz2 files that you used to create the appliance. Backward compatibility is good, but you never know.

At installation time, I copy the .raw.bz2 file into a dedicated directory on the data store which is covered by the backup system.

  • /obs/MySQL /obs/db -> the OBS database
  • /obs/projects -> the projects info
  • /obs/sources -> the source code of the packages
  • /obs/build -> the built packages
  • /obs/repos -> the built repositories
  • /obs/trees -> the projects' MD5SUMS
# cd /
# tar -czvf saveOBS.tar.gz /srv/obs/MySQL/ /srv/obs/db/ /srv/obs/projects/ /srv/obs/sources/ /srv/obs/build/ /srv/obs/repos/ /srv/obs/trees/ /srv/www/obs/webui/public/images/distributions/ /srv/www/obs/api/files/

Restore process

  • Stop all obs services

find /etc/init.d -name obs\* -exec {} stop \;
rcmysql stop

  • Restore the files

tar -xvf saveOBS.tar.gz -C /

  • Enforce the right UID and GID on the files; this might be needed if you have restored on a different unit from the one which made the backup.

chown -R obsrun /obs
chgrp -R obsrun /obs
chown -R mysql /obs/MySQL
chgrp -R mysql /obs/MySQL

  • You have restored your OBS appliance, just reboot and play again.