oVirt in 2 hours. Part 1. Open Failover Virtualization Platform

Introduction


The open source project oVirt is a free, enterprise-grade virtualization platform. Scrolling through Habr, I found that oVirt is not covered as widely as it deserves.
oVirt is in fact the upstream of the commercial Red Hat Virtualization product (RHV, formerly RHEV) and is developed under Red Hat's wing. To avoid confusion: this is not the CentOS vs RHEL relationship; the model is closer to Fedora vs RHEL.
Under the hood is KVM, and management is done through a web interface. It is based on RHEL/CentOS 7.
oVirt can be used both for “traditional” server virtualization and for desktop virtualization (VDI); unlike VMware's offering, both can coexist in a single deployment.
The project is well documented, has long since reached maturity for production use, and is ready for high loads.
This article is the first in a series on building a working fault-tolerant cluster. After going through the series, we will have a fully working system in a short time (about 2 hours). A number of questions will of course remain open; I will try to cover them in later articles.
We have been using it for several years, starting with version 4.1. Our production system now lives on HPE Synergy 480 Gen10 and ProLiant BL460c Gen10 servers with Xeon Gold CPUs.
At the time of writing, the current version is 4.3.

Articles


  1. Introduction - We are here
  2. Installation of the manager (ovirt-engine) and hypervisors (hosts)
  3. Additional settings
  4. Basic operations


Functional Features


There are two main entities in oVirt: ovirt-engine and ovirt-host(s). For those familiar with VMware products, oVirt as a platform corresponds to vSphere as a whole, ovirt-engine (the control layer) performs the same functions as vCenter, and ovirt-host is a hypervisor like ESX(i). Since vSphere is a very popular solution, I will occasionally compare with it.
oVirt Dashboard
Fig. 1 - oVirt control panel.

Most Linux distributions and Windows versions are supported as guests. For guest machines there are agents and optimized paravirtualized devices with virtio drivers, primarily the disk controller and the network interface.
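A quick way to confirm from inside a Linux guest that it really runs on virtio devices is to look at the virtio bus in sysfs; a minimal sketch using standard kernel interfaces (nothing oVirt-specific, paths are the usual Linux ones):
```python
# Run inside a Linux guest: list virtio devices and loaded virtio modules.
import os

VIRTIO_BUS = "/sys/bus/virtio/devices"

if os.path.isdir(VIRTIO_BUS):
    for dev in sorted(os.listdir(VIRTIO_BUS)):
        # Each entry is a virtio device; its 'modalias' hints at the driver (net, block, scsi, ...).
        try:
            with open(os.path.join(VIRTIO_BUS, dev, "modalias")) as f:
                print(dev, f.read().strip())
        except OSError:
            print(dev)
else:
    print("No virtio bus found - the guest probably uses emulated (non-virtio) devices.")

# Modules loaded as loadable kernel modules (may be empty if the drivers are built into the kernel).
with open("/proc/modules") as f:
    virtio_modules = [line.split()[0] for line in f if line.startswith("virtio")]
print("virtio modules:", virtio_modules or "none (possibly built-in)")
```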
To build a fault-tolerant solution and to use all the interesting features, you will need shared storage. Block storage (FC, FCoE, iSCSI) and file storage (NFS, etc.) are supported. For fault tolerance, the storage system itself must also be fault-tolerant (at least 2 controllers, multipathing).
Local storage can be used, but only shared storage is suitable for a real cluster. With local storage the system becomes a disparate set of hypervisors, and a cluster cannot be assembled even if shared storage is also present. The most correct approach is diskless hosts booting from SAN, or hosts with minimal local disks. It is probably possible, via a VDSM hook, to assemble software-defined storage (e.g. Ceph) from local drives and present it to VMs, but I have not seriously looked into this.
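On the host side, it is worth checking that multipathing really works before trusting the cluster to the SAN. A rough sketch that parses the output of the standard `multipath -ll` command; it assumes device-mapper-multipath is configured, LUNs are already presented, and the script runs as root:
```python
# Rough multipath sanity check on a hypervisor: every SAN LUN should show at least two paths.
import re
import subprocess

# 'multipath -ll' prints the current multipath topology (needs root and dm-multipath installed).
out = subprocess.run(["multipath", "-ll"], capture_output=True, text=True, check=True).stdout

paths_per_lun = {}
current = None
for line in out.splitlines():
    if re.match(r"\S+ .*dm-\d+", line):                          # map line, e.g. "mpatha (3600...) dm-2 ..."
        current = line.split()[0]
        paths_per_lun[current] = 0
    elif current and re.search(r"\b\d+:\d+:\d+:\d+\b", line):    # path line with a SCSI H:C:T:L address
        paths_per_lun[current] += 1

for lun, n in paths_per_lun.items():
    print(f"{lun}: {n} path(s)" + ("" if n >= 2 else "  <-- WARNING: no redundancy"))
```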

Architecture


arch
Fig. 2 - oVirt architecture.
For more information on the architecture, see the developer’s documentation.

dc
Fig. 3 - oVirt objects.

The top element in the hierarchy is the Data Center. It determines whether shared or local storage is used, as well as the feature set (the compatibility version, from 4.1 to 4.3). There can be one or more. For many setups, the default Data Center, named Default, is enough.
A Data Center consists of one or more Clusters. A cluster determines the CPU type, migration policies, etc. For small installations you can also limit yourself to the Default cluster.
A cluster, in turn, consists of Hosts, which do the main work: they run the virtual machines and the storage is attached to them. A cluster assumes 2 or more hosts; it is technically possible to make a cluster with a single host, but this has no practical use.
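The same hierarchy can be walked programmatically through the REST API; below is a minimal sketch with the official Python SDK (ovirt-engine-sdk-python). The engine URL matches the lab name used later in this article; the credentials and CA file are placeholders for your own environment:
```python
# Walk the Data Center -> Cluster -> Host hierarchy via the oVirt Python SDK (ovirtsdk4).
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://ovirt.lab.example.com/ovirt-engine/api',  # engine FQDN from this lab
    username='admin@internal',
    password='password',          # placeholder - use your own credentials
    ca_file='ca.pem',             # engine CA certificate, downloadable from the engine welcome page
)

system = connection.system_service()
clusters = system.clusters_service().list()
hosts = system.hosts_service().list()

for dc in system.data_centers_service().list():
    print(f"Data Center: {dc.name}")
    for cluster in clusters:
        if cluster.data_center is not None and cluster.data_center.id == dc.id:
            print(f"  Cluster: {cluster.name}")
            for host in hosts:
                if host.cluster is not None and host.cluster.id == cluster.id:
                    print(f"    Host: {host.name} ({host.status})")

connection.close()
```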

oVirt supports many features, including live migration of virtual machines between hypervisors (live migration) and between storages (storage migration), desktop virtualization (VDI) with VM pools, stateful and stateless VMs, NVIDIA GRID vGPU support, import from vSphere and KVM, a powerful API and much more. All of these features are available without license fees, and if needed, support can be obtained from Red Hat through regional partners.
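As a small taste of that API: the same live migration available from the web UI can be triggered from a script. A hedged sketch with the same Python SDK; 'vm01' is a hypothetical VM name and the credentials are placeholders:
```python
# Trigger a live migration of a running VM through the oVirt Python SDK (ovirtsdk4).
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://ovirt.lab.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',          # placeholder
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=vm01')[0]   # 'vm01' is a hypothetical VM name
vm_service = vms_service.vm_service(vm.id)

# Without an explicit destination, the scheduler picks a suitable host in the cluster.
vm_service.migrate()

connection.close()
```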

About RHV prices


The cost is not high compared to VMware: only support is bought, with no license purchase required. Support is purchased only for the hypervisors; the engine, unlike vCenter Server, costs nothing extra.

Calculation example for the 1st year of ownership


Consider a cluster of 4 2-socket machines and retail prices (without project discounts).
A Standard RHV subscription costs $999 per socket per year (Premium, with 24/7 support, is $1,499); in total, 4 × 2 × $999 = $7,992.
vSphere prices:
  • VMware vCenter Server Standard: $10,837.13 per instance, plus Basic subscription $2,625.41 (Production: $3,125.39);
  • VMware vSphere Standard: $1,164.15 per socket, plus Basic subscription $552.61 (Production: $653.82);
  • VMware vSphere Enterprise Plus: $6,309.23 per socket, plus Basic subscription $1,261.09 (Production: $1,499.94).

Total: $10,837.13 + $2,625.41 + 4 × 2 × ($1,164.15 + $552.61) = $27,196.62 for the entry-level option. The difference is about 3.4 times!
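If you want to redo the estimate for a different number of hosts or sockets, the whole calculation fits in a few lines; a sketch with the list prices quoted above hard-coded:
```python
# Back-of-the-envelope comparison of first-year cost: RHV vs the entry-level vSphere bundle.
hosts, sockets_per_host = 4, 2
sockets = hosts * sockets_per_host

rhv_standard_per_socket = 999.00            # RHV Standard subscription, per socket per year
rhv_total = sockets * rhv_standard_per_socket

vcenter_std = 10_837.13 + 2_625.41          # vCenter Server Standard + Basic subscription
vsphere_std_per_socket = 1_164.15 + 552.61  # vSphere Standard license + Basic subscription
vmware_total = vcenter_std + sockets * vsphere_std_per_socket

print(f"RHV:    ${rhv_total:,.2f}")
print(f"VMware: ${vmware_total:,.2f}")
print(f"Ratio:  {vmware_total / rhv_total:.1f}x")
```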
In oVirt, all functions are available without restrictions.

Brief specs and maximums


System requirements


The hypervisor requires a CPU with hardware virtualization enabled; the minimum RAM to start is 2 GiB, and the recommended storage capacity for the OS is 55 GiB (mostly for logs and the like; the OS itself is small).
More details here.
For the Engine, the minimum requirements are 2 cores / 4 GiB RAM / 25 GiB of storage; recommended is 4+ cores / 16 GiB RAM / 50 GiB of storage.
As with any system, there are limits on sizes and counts, most of which exceed the capabilities of commonly available commercial servers. For example, a pair of Intel Xeon Gold 6230 CPUs can address 2 TiB of RAM and provides 40 cores (80 threads), which is below even the limits of a single VM.
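Before repurposing hardware, it is easy to check that a candidate host meets the basic requirements. A minimal Linux-only sketch that reads standard kernel interfaces (nothing oVirt-specific):
```python
# Quick pre-flight check on a future hypervisor: hardware virtualization flag and RAM size.
def cpu_has_hw_virt() -> bool:
    # Intel exposes 'vmx', AMD exposes 'svm' in the CPU flags.
    with open("/proc/cpuinfo") as f:
        return any(flag in line
                   for line in f if line.startswith("flags")
                   for flag in ("vmx", "svm"))

def total_ram_gib() -> float:
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024 * 1024)  # kB -> GiB
    return 0.0

print("Hardware virtualization:", "yes" if cpu_has_hw_virt() else "NO - enable VT-x/AMD-V in BIOS")
print(f"RAM: {total_ram_gib():.1f} GiB (2 GiB minimum to start)")
```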

Virtual Machine Maximums:


  • Maximum concurrently running virtual machines: Unlimited;
  • Maximum virtual CPUs per virtual machine: 384;
  • Maximum memory per virtual machine: 4 TiB;
  • Maximum single disk size per virtual machine: 8 TiB.

Host Maximums:


  • Logical CPU cores or threads: 768;
  • RAM: 12 TiB;
  • Number of hosted virtual machines: 250;
  • Simultaneous live migrations: 2 incoming, 2 outgoing;
  • Live migration bandwidth: defaults to 52 MiB/s (~436 Mbit/s) per migration with the legacy migration policy; other policies use adaptive throughput values based on the speed of the physical link. QoS policies can further limit migration bandwidth. A rough feel for this default is given right after this list.
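To see what 52 MiB/s means in practice, here is a rough lower-bound estimate of live migration time; the 16 GiB VM size is just an example, and dirty-page re-copying will only make the real time longer:
```python
# Rough lower bound on live-migration time at the legacy policy's default bandwidth.
bandwidth_mib_s = 52          # default per-migration limit of the legacy policy
vm_ram_gib = 16               # hypothetical VM with 16 GiB of RAM

seconds = vm_ram_gib * 1024 / bandwidth_mib_s
print(f"At {bandwidth_mib_s} MiB/s, {vm_ram_gib} GiB of RAM takes at least "
      f"{seconds:.0f} s (~{seconds / 60:.1f} min)")
```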

Manager Logical Entity Maximums:


In 4.3 the following limits apply.
  • Data center
    • Maximum data center count: 400;
    • Maximum host count: 400 supported, 500 tested;
    • Maximum VM count: 4000 supported, 5000 tested;
  • Cluster
    • Maximum cluster count: 400;
    • Maximum host count: 400 supported, 500 tested;
    • Maximum VM count: 4000 supported, 5000 tested;
  • Network
    • Logical networks / cluster: 300;
    • SDN/external networks: 2600 tested, no enforced limit;
  • Storage
    • Maximum domains: 50 supported, 70 tested;
    • Hosts per domain: No limit;
    • Logical volumes per block domain: 1500;
    • Maximum number of LUNs: 300;
    • Maximum disk size: 500 TiB (limited to 8 TiB by default).



As already mentioned, oVirt is built from 2 basic elements - ovirt-engine (control) and ovirt-host (hypervisor).
The Engine can live either outside the platform itself (standalone Manager: a VM on another platform or on a separate hypervisor, or even a physical machine) or on the platform itself (Self-hosted Engine, similar in spirit to VMware's vCSA).
The hypervisor can be installed either on a regular RHEL/CentOS 7 OS (EL Host) or on a specialized minimal OS (oVirt Node, based on el7).
The hardware requirements for all options are approximately the same.
Standard architecture
Fig. 4 - standard architecture.

Self Hosted Engine Architecture
Fig. 5 - Self-hosted Engine architecture.

For myself, I chose the standalone Manager plus EL Hosts option:
  • a standalone Manager is a little easier when there are startup problems: there is no chicken-and-egg dilemma (unlike vCSA, which will not start until at least one host has fully come up), although it does depend on another system*;
  • an EL Host gives the full power of a general-purpose OS, which is useful for external monitoring, debugging, troubleshooting, etc.

* However, over the entire period of operation this dependence has never caused a problem, even after a serious power outage.
But let's get to the point!
For the experiment, I managed to free up a pair of ProLiant BL460c G7 blades with Xeon® CPUs; we will reproduce the installation process on them.
The nodes will be named ovirt.lab.example.com, kvm01.lab.example.com and kvm02.lab.example.com.
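The installation expects the engine and hosts to be reachable by FQDN, so before starting it makes sense to verify forward and reverse name resolution for these three names; a minimal sketch, assuming the records already exist in DNS or /etc/hosts:
```python
# Verify forward and reverse DNS resolution for the lab nodes before installing oVirt.
import socket

NODES = [
    "ovirt.lab.example.com",
    "kvm01.lab.example.com",
    "kvm02.lab.example.com",
]

for fqdn in NODES:
    try:
        ip = socket.gethostbyname(fqdn)             # forward lookup
        reverse = socket.gethostbyaddr(ip)[0]       # reverse lookup (PTR)
        ok = "OK" if reverse.rstrip(".") == fqdn else f"MISMATCH (PTR -> {reverse})"
        print(f"{fqdn} -> {ip} -> {reverse}: {ok}")
    except socket.gaierror as e:
        print(f"{fqdn}: forward resolution failed ({e})")
    except socket.herror as e:
        print(f"{fqdn}: reverse lookup failed ({e})")
```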
We proceed directly to the installation.

All Articles