oVirt in 2 hours. Part 2. Installing the manager and hosts

This article is the next in a series on oVirt, beginning here.

Articles


  1. Introduction
  2. Installation of the manager (ovirt-engine) and hypervisors (hosts) - We are here
  3. Additional settings
  4. Basic operations

In this part we cover the initial installation of the ovirt-engine and ovirt-host components.

The installation process is always described in more detail in the official documentation.

Content


  1. Install ovirt-engine
  2. Install ovirt-host
  3. Adding a node to oVirt
  4. Network Interface Setup
  5. FC setup
  6. Configure FCoE
  7. ISO Image Storage
  8. First VM

Install ovirt-engine


For the Engine, the minimum requirements are 2 cores / 4 GiB RAM / 25 GiB storage; recommended are 4+ cores / 16 GiB RAM / 50 GiB storage. We use the Standalone Manager option, where the engine runs on a dedicated physical or virtual machine outside the managed cluster. For our installation let's take a virtual machine, for example on a separate ESXi host*. It is convenient to use deployment automation tools, cloning from a previously prepared template, or a kickstart installation.

* Note: for a production system this is a bad idea, because the manager runs without redundancy and becomes a bottleneck. In that case it is better to consider the Self-hosted Engine option.

If necessary, the procedure for converting a Standalone Manager to a Self-hosted Engine is described in detail in the documentation. In particular, the host must be given the Reinstall command with Hosted Engine support enabled.
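In outline (a sketch based on the documented procedure; the file names are examples), the engine configuration is backed up on the standalone manager:

$ sudo engine-backup --mode=backup --file=engine.bck --log=engine-backup.log

and restored on the target host during self-hosted deployment:

$ sudo hosted-engine --deploy --restore-from-file=engine.bck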

Install CentOS 7 in the minimal configuration on the VM, then update and reboot the system:

$ sudo yum update -y && sudo reboot

For a virtual machine, it is useful to install a guest agent:

$ sudo yum install open-vm-tools

for VMware ESXi hosts, or for oVirt:

$ sudo yum install ovirt-guest-agent

We connect the repository and install the manager:

$ sudo yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
$ sudo yum install ovirt-engine

Basic setup:

$ sudo engine-setup

In most cases the default settings are sufficient; to use them automatically, run the setup with the flag:

$ sudo engine-setup --accept-defaults
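The chosen answers are saved to an answer file and can be replayed, e.g. on a reinstallation (the timestamped file name below only illustrates the default answers location):

$ sudo engine-setup --config-append=/var/lib/ovirt-engine/setup/answers/20191028120000-setup.conf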

Now we can connect to our new engine at ovirt.lab.example.com. It is still empty, so let's move on to installing the hypervisors.
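If the portal does not open, first check that the engine service is running:

$ sudo systemctl status ovirt-engine

The Administration Portal itself is available at https://ovirt.lab.example.com/ovirt-engine (assuming the hostname used in this example).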

Install ovirt-host


Install CentOS 7 in the minimal configuration on the physical host, then connect the repository, update and reboot the system:

$ sudo yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
$ sudo yum update -y && sudo reboot

Note: it is convenient to use deployment automation tools or a kickstart installation here as well.

Kickstart file example

# System authorization information
auth --enableshadow --passalgo=sha512
# Use CDROM installation media
cdrom
# Use graphical install
graphical
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us','ru' --switch='grp:alt_shift_toggle'
# System language
lang ru_RU.UTF-8

# Network information
network  --bootproto=dhcp --device=ens192 --ipv6=auto --activate
network  --hostname=kvm01.lab.example.com

# Root password 'monteV1DE0'
rootpw --iscrypted $6$6oPcf0GW9VdmJe5w$6WBucrUPRdCAP.aBVnUfvaEu9ozkXq9M1TXiwOm41Y58DEerG8b3Ulme2YtxAgNHr6DGIJ02eFgVuEmYsOo7./
# User password 'metroP0!is'
user --name=mgmt --groups=wheel --iscrypted --password=$6$883g2lyXdkDLbKYR$B3yWx1aQZmYYi.aO10W2Bvw0Jpkl1upzgjhZr6lmITTrGaPupa5iC3kZAOvwDonZ/6ogNJe/59GN5U8Okp.qx.
# System services
services --enabled="chronyd"
# System timezone
timezone Europe/Moscow --isUtc
# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda
# Partition clearing information
clearpart --all
# Disk partitioning information
part /boot --fstype xfs --size=1024 --ondisk=sda  --label=boot
part pv.01 --size=45056 --grow
volgroup HostVG pv.01 --reserved-percent=20
logvol swap --vgname=HostVG --name=lv_swap --fstype=swap --recommended
logvol none --vgname=HostVG --name=HostPool --thinpool --size=40960 --grow
logvol / --vgname=HostVG --name=lv_root --thin --fstype=ext4 --label="root" --poolname=HostPool --fsoptions="defaults,discard" --size=6144 --grow
logvol /var --vgname=HostVG --name=lv_var --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=16536
logvol /var/crash --vgname=HostVG --name=lv_var_crash --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=10240
logvol /var/log --vgname=HostVG --name=lv_var_log --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=8192
logvol /var/log/audit --vgname=HostVG --name=lv_var_audit --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=2048
logvol /home --vgname=HostVG --name=lv_home --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=1024
logvol /tmp --vgname=HostVG --name=lv_tmp --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=1024

%packages
@^minimal
@core
chrony
kexec-tools

%end

%addon com_redhat_kdump --enable --reserve-mb='auto'

%end

%anaconda
pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges --emptyok
pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges --notempty
%end
# Reboot when the install is finished.
reboot --eject

Save the file and make it available to the installer, e.g. at ftp.example.com/pub/labkvm.cfg. To apply it, select 'Install CentOS 7' in the boot menu, enter parameter editing mode (the Tab key) and append at the end (with a leading space, without quotes):

' inst.ks=ftp://ftp.example.com/pub/labkvm.cfg'

then start the installation.
The script removes the existing partitions on /dev/sda and creates a new layout (it is convenient to inspect it after installation with lsblk). The hostname is set to kvm01.lab.example.com (it can be changed afterwards, e.g. with hostnamectl set-hostname kvm03.lab.example.com); the IP address is obtained automatically via DHCP.

Passwords: root: monteV1DE0, mgmt: metroP0!is.

Repeat (or run in parallel) on all hosts. From powering on an "empty" server to the ready state, including two long downloads, takes about 20 minutes.

Adding a node to oVirt


It is very simple:

Compute → Hosts → New → ...

In the wizard, the required fields are Name (display name, e.g. kvm03), Hostname (FQDN, e.g. kvm03.lab.example.com) and the Authentication section: the root user (cannot be changed) with a password or an SSH Public Key.

After clicking the OK button you will get the message "You haven't configured Power Management for this Host. Are you sure you want to continue?". This is normal: we will set up power management later, after the host connects successfully. However, if the machines hosting the nodes do not support management (IPMI, iLO, DRAC, etc.), I recommend disabling fencing: Compute → Clusters → Default → Edit → Fencing Policy → uncheck Enable fencing.

If the oVirt repository was not connected on the host, the installation will fail, but that is not a problem: add the repository, then click Install → Reinstall.
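Adding a host can also be automated through the REST API; a minimal sketch (the passwords and the disabled certificate check here are illustrative; the Python and Ansible SDKs are alternatives):

$ curl -k -u 'admin@internal:PASSWORD' \
    -H 'Content-Type: application/xml' \
    -d '<host><name>kvm03</name><address>kvm03.lab.example.com</address><root_password>HOST_ROOT_PASSWORD</root_password></host>' \
    https://ovirt.lab.example.com/ovirt-engine/api/hosts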

Host connection takes no more than 5-10 minutes.

Network Interface Setup


Since we are building a fault-tolerant system, the network connection should also be redundant; this is done on the tab Compute → Hosts → HOST → Network Interfaces - Setup Host Networks.

Depending on the capabilities of your network equipment and your architectural approach, there are options. It is best to connect to a stack of top-of-rack switches, so that network availability is not interrupted when one of them fails. Consider the example of an aggregated LACP link. To create the aggregate, drag the second (unused) adapter with the mouse and drop it onto the first. The Create New Bond window opens, where LACP (Mode 4, Dynamic link aggregation, 802.3ad) is selected by default. On the switch side, a normal LACP group configuration is performed. If building a switch stack is not possible, you can use Active-Backup mode (Mode 1). We will cover VLAN settings in the next article; more detailed recommendations on network settings are in the Planning and Prerequisites Guide.
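After the change is applied, the state of the aggregate can be checked on the host (assuming the bond received the default name bond0); the output should show the 802.3ad mode and both slaves up:

$ cat /proc/net/bonding/bond0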

FC setup


Fibre Channel (FC) is supported out of the box and is easy to use. We will not cover the setup of the storage network itself, including the configuration of storage systems and the zoning of fabric switches, as part of the oVirt configuration.

Configure FCoE


FCoE, in my opinion, has not become widespread in storage networks, but it is often used on servers as the "last mile", for example in HPE Virtual Connect.

Configuring FCoE requires additional simple steps.

Setup FCoE Engine


See the Red Hat article B.3. How to Set Up Red Hat Virtualization Manager to Use FCoE.
On the Manager, run the following commands and restart the service:


$ sudo engine-config -s UserDefinedNetworkCustomProperties='fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*$'
$ sudo systemctl restart ovirt-engine.service
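A quick check that the property has been registered:

$ sudo engine-config -g UserDefinedNetworkCustomProperties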

Setup Node FCoE


On the oVirt hosts, install the FCoE hook:

$ sudo yum install vdsm-hook-fcoe

Setting up FCoE itself is described in the Red Hat Storage Administration Guide: 25.5. Configuring a Fibre Channel over Ethernet Interface.

For Broadcom CNAs, also see the user guide: FCoE Configuration for Broadcom-Based Adapters.

Install the packages (they are absent in a minimal installation):

$ sudo yum install fcoe-utils lldpad

Create configuration files for the FCoE interfaces (in this example, ens3f2 and ens3f3 are the CNA ports; replace the names to match your hardware):

$ sudo cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-ens3f2
$ sudo cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-ens3f3
$ sudo vim /etc/fcoe/cfg-ens3f2
$ sudo vim /etc/fcoe/cfg-ens3f3

Important: if the adapter supports DCB/DCBX in hardware, the DCB_REQUIRED parameter must be set to no:

DCB_REQUIRED="yes" → #DCB_REQUIRED="yes"
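This can be done, for example, with sed over the example interfaces:

$ sudo sed -i 's/^DCB_REQUIRED="yes"/#DCB_REQUIRED="yes"/' /etc/fcoe/cfg-ens3f2 /etc/fcoe/cfg-ens3f3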

Next, make sure that adminStatus is disabled on all interfaces, including those without FCoE:

$ sudo lldptool set-lldp -i ens3f0 adminStatus=disabled
...
$ sudo lldptool set-lldp -i ens3f3 adminStatus=disabled

Then start and enable the LLDP service:

$ sudo systemctl start lldpad
$ sudo systemctl enable lldpad

Then turn on DCB and the FCoE application for the interfaces, bring them up, and start the FCoE service (with hardware DCB/DCBX and DCB_REQUIRED set to no, the dcbtool steps can be skipped):

$ sudo dcbtool sc ens3f2 dcb on
$ sudo dcbtool sc ens3f3 dcb on
$ sudo dcbtool sc ens3f2 app:fcoe e:1
$ sudo dcbtool sc ens3f3 app:fcoe e:1
$ sudo ip link set dev ens3f2 up
$ sudo ip link set dev ens3f3 up
$ sudo systemctl start fcoe
$ sudo systemctl enable fcoe

Enable the interfaces at system startup:

$ sudo vim /etc/sysconfig/network-scripts/ifcfg-ens3f2
$ sudo vim /etc/sysconfig/network-scripts/ifcfg-ens3f3

ONBOOT=yes

After the configuration, check that the FCoE interfaces are active:

$ sudo fcoeadm -i
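To see the targets and LUNs discovered over FCoE:

$ sudo fcoeadm -t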

Further work with FCoE does not differ from FC.

Next comes the configuration of the storage systems and the SAN (zoning, SAN hosts, creating and presenting volumes/LUNs), after which the storage can be connected to the oVirt hosts: Storage → Domains → New Domain.

Leave Domain Function as Data, set Storage Type to Fibre Channel, Host to any, and the name to, e.g., storNN-volMM.

Your storage system most likely supports not just path redundancy but also load balancing. Many modern systems can transmit data along all paths equally optimally (ALUA active/active).

To make all paths active, you need to configure multipathing; more on that in the following articles.

Configuring NFS and iSCSI is done in a similar way.

ISO Image Storage


To install operating systems you will need their installation files, most often available as ISO images. You can use the built-in path, but oVirt provides a dedicated storage type for working with images, ISO, which can be pointed at an NFS server. Let's add it:

Storage → Domains → New Domain,
Domain Function → ISO,
Export Path - e.g. mynfs01.example.com:/exports/ovirt-iso (at the time of connection the folder must be empty, and the manager must be able to write to it),
Name - e.g. mynfs01-iso.
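On the NFS server side, preparing the export might look like this (a sketch; the path and export options are assumptions, and uid/gid 36 belong to vdsm:kvm, under which oVirt accesses the storage):

$ sudo mkdir -p /exports/ovirt-iso
$ sudo chown 36:36 /exports/ovirt-iso
$ echo '/exports/ovirt-iso *(rw,sync,no_subtree_check,anonuid=36,anongid=36)' | sudo tee -a /etc/exports
$ sudo exportfs -ra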

To store images, the manager will create a structure of the form
/exports/ovirt-iso/<some UUID>/images/11111111-1111-1111-1111-111111111111/

If you already have ISO images on your NFS server, it is convenient to link them into this folder instead of copying the files, to save space.
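For example, with a hard link (the image name is hypothetical; a hard link requires the source to be on the same filesystem, and the file must be readable by uid/gid 36):

$ sudo ln /srv/iso/CentOS-7-x86_64-Minimal.iso \
    /exports/ovirt-iso/<some UUID>/images/11111111-1111-1111-1111-111111111111/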

First VM


At this stage, you can already create the first virtual machine, install the OS and application software on it.

Compute → Virtual Machines → New

For the new machine, specify a name (Name), create a disk (Instance Images → Create) and connect a network interface (Instantiate VM network interfaces by picking a vNIC profile → pick the only ovirtmgmt from the list for now).

On the client side, a modern browser and a SPICE client are needed to interact with the VM console.
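On a Linux workstation, for example, the SPICE client ships with virt-viewer; the console then opens via the console.vv file downloaded from the portal:

$ sudo yum install virt-viewer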

The first machine has started successfully. However, fuller operation of the system requires a number of additional settings, which we will continue with in the following articles.

All Articles