openSUSE:Hastor/Drbd
Preparations
To set up DRBD, first install the two Hastor Controller machines. They need a direct interconnect; in our example they have the IP addresses 192.168.22.1 and 192.168.22.2.
Disk Devices
DRBD needs some physical devices to operate on. Any device supported by openSUSE or SLES should work. If local disks are used, a RAID setup is recommended: either a RAID controller card with vendor support over the product's lifetime, or software RAID. Simple and cheap onboard RAID is not recommended.
Both controller machines should have the same amount and quality of disk storage. The controllers will mirror all data to both storage subsystems.
LVM
The storage system has two central tasks:
- aggregate the available disk storage
- create volumes that are exported.
Both of these are done by means of LVM, which is known to be stable and to perform very well. The naming scheme of the respective volume groups and volumes is as follows:
- <to be defined>
To view the current LVM setup, use the following commands:
pvs -v    # view the physical devices
vgs -v    # list existing volume groups
lvs -v    # list current logical volumes
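The initial provisioning of this layout is not shown on this page; a minimal sketch could look as follows. The device name /dev/md0, the 100G size, and the run() dry-run wrapper are assumptions for illustration, not part of the reference setup (the volume group and volume names follow the resource example below):

```shell
# Dry-run sketch of provisioning the LVM layout (device and size assumed).
# With DRY_RUN=1 each command is only printed; unset it on a real controller.
DRY_RUN=1
run() {
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "$@"            # just show what would be executed
    else
        "$@"                 # actually execute it
    fi
}

run pvcreate /dev/md0                        # register the RAID device with LVM
run vgcreate volume_group /dev/md0           # aggregate it into one volume group
run lvcreate -L 100G -n disk01 volume_group  # carve out one volume to export
```

The same commands must be run on both controllers, so that each side has a matching backing volume for DRBD to mirror.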
DRBD Configuration
We have used DRBD in a primary/primary setup for more than a year, very successfully. The configuration is stable, and there have been no problems during normal operation or administration tasks. Reboots and outages of one of the controllers have occurred multiple times, and the DRBD system always recovered from these events.
Global DRBD Configuration
The configuration looks like this:
# cat /etc/drbd.conf
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
# cat /etc/drbd.d/global_common.conf
global {
    disable-ip-verification;
}
common {
    protocol C;
    handlers {
    }
    startup {
        become-primary-on both;
    }
    disk {
        on-io-error detach;
        use-bmbv;
    }
    net {
        allow-two-primaries;
        after-sb-0pri discard-least-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri violently-as0p;
        rr-conflict violently;
    }
    syncer {
        rate 50M;
        al-extents 257;
    }
}
Note that no handlers are defined. The reason is that no single service on the controller should be able to kill the complete machine. This is also why a separate virtual machine will be added for openAIS.
Single Disks on DRBD
All exported volumes get their own DRBD resource definition, for example:
resource disk01 {
    protocol C;
    on controllera {
        device    /dev/drbd1;
        disk      /dev/volume_group/disk01;
        address   192.168.22.1:7801;
        meta-disk /dev/system_volume_group/drbd_meta[1];
    }
    on controllerb {
        device    /dev/drbd1;
        disk      /dev/volume_group/disk10;
        address   192.168.22.2:7801;
        meta-disk /dev/system_volume_group/flexible_drbd_meta[1];
    }
}
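Each additional exported volume needs its own resource file with a unique device minor and TCP port. A hypothetical helper that prints such a stanza is sketched below; the 7800+minor port numbering (which matches disk01 on port 7801 above) and the use of the same volume name on both controllers are assumptions, not part of the reference setup:

```shell
# Print a DRBD resource stanza for a new exported volume (sketch, names assumed).
# Usage: gen_resource <name> <minor>, e.g.: gen_resource disk02 2
gen_resource() {
    name=$1
    minor=$2
    port=$((7800 + minor))   # assumed convention: one TCP port per device minor
    cat <<EOF
resource ${name} {
    protocol C;
    on controllera {
        device    /dev/drbd${minor};
        disk      /dev/volume_group/${name};
        address   192.168.22.1:${port};
        meta-disk /dev/system_volume_group/drbd_meta[${minor}];
    }
    on controllerb {
        device    /dev/drbd${minor};
        disk      /dev/volume_group/${name};
        address   192.168.22.2:${port};
        meta-disk /dev/system_volume_group/flexible_drbd_meta[${minor}];
    }
}
EOF
}

# Write the stanza to stdout; on a real controller it would be redirected
# into a new file under /etc/drbd.d/, e.g. gen_resource disk02 2 > disk02.res
gen_resource disk02 2
```

The generated file must be identical on both controllers before the resource is brought up.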