Overview

Xapi is the collective name for the management interfaces of XenServer. It is the core of XenServer management and is built from a set of cooperating toolstack components.

Xapi mainly provides the communication interface between clients and the hosts in a pool. Through xapi, a client can read the XenServer configuration and perform management tasks such as license management and database maintenance. Xapi also manages and controls resources such as storage, virtual machines, virtual network interfaces and HA. The xapi interface must remain backward compatible, so that older clients keep working correctly.

Its best-known clients include XenCenter, Xen Orchestra, OpenStack and CloudStack.

Basic concepts

The most basic concept in xapi is the resource pool: the whole cluster is managed as a single entity. Even in the non-clustered case of a single XenServer host, xapi manages resource objects through a pool. Xapi runs on every host in the cluster, and the hosts share part of their storage. This shared storage is also the prerequisite for building a high-availability (HA) cluster. The following figure shows a cluster of hosts running xapi.

[Figure: a cluster of hosts running xapi]

At any time, at most one host acts as the pool master, which is responsible for coordinating and locking the resources of the resource pool. When a pool is first created, one machine is designated the pool master (master node); the other hosts are slaves (slave nodes). The master role is not fixed: it can be reassigned manually through clients such as XenCenter, or, if HA is configured, XenServer's own HA mechanism will automatically elect a new master when the current master goes down.
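
As a sketch of the manual approach, the XenAPI call pool.designate_new_master can be issued from any client of the pool. The host address, credentials and host name below are placeholders, and the example assumes the standard XenAPI Python bindings.

# Illustrative sketch: promote another host to pool master via the XenAPI.
# Host address, credentials and host name are placeholders.
import XenAPI

session = XenAPI.Session("https://pool-master.example.com")
session.xenapi.login_with_password("root", "password")
try:
    # Find the host to promote and ask the pool to switch master.
    new_master = session.xenapi.host.get_by_name_label("host2")[0]
    session.xenapi.pool.designate_new_master(new_master)
finally:
    session.xenapi.session.logout()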

All hosts expose two protocol interfaces: HTTP with XML-RPC on port 80, and the same API over TLS/SSL on port 443. However, although both interfaces exist on every host, not every host will accept xapi operation requests: within a cluster, only the master host accepts xapi control requests.


If you send a control request to a slave host instead, the XenAPI returns a redirection error that contains the address of the master host of the pool to which that machine belongs, together with a detailed error message.
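
A minimal sketch of how a client typically handles this redirection, assuming the standard XenAPI Python bindings, where the HOST_IS_SLAVE error carries the master's address as its parameter. Host addresses and credentials are placeholders.

# Illustrative sketch: retry the login against the master when a slave
# answers with the HOST_IS_SLAVE redirection error.
import XenAPI

def connect(host, user, password):
    session = XenAPI.Session("https://" + host)
    try:
        session.xenapi.login_with_password(user, password)
    except XenAPI.Failure as failure:
        if failure.details[0] == "HOST_IS_SLAVE":
            # The error parameter is the address of the pool master.
            master = failure.details[1]
            session = XenAPI.Session("https://" + master)
            session.xenapi.login_with_password(user, password)
        else:
            raise
    return session

session = connect("host1.example.com", "root", "password")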

Besides the automatic HA failover mentioned above, the pool master role can also be transferred in an orderly way on user request (xe pool-designate-new-master) or in an emergency on user request (xe pool-emergency-transition-to-master).

Slave host nodes do not normally accept control operations directly. For efficiency, however, the following operations are allowed directly on a slave host:

Query performance counters (and their history); see the sketch after this list

Connect to VNC console

Import / export (especially when the disk is on local storage)
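
As an illustration of the first point, performance counter history can be fetched from any host through its rrd_updates HTTP handler. This is only a sketch: the host name and credentials are placeholders, the plain port-80 HTTP interface is used for brevity, and the query parameters (session_id, start, host) are those of the documented rrd_updates interface.

# Illustrative sketch: read recent performance counters directly from a
# host via the rrd_updates HTTP handler (no forwarding through the master).
import time
import urllib.request
import XenAPI

session = XenAPI.Session("http://host1.example.com")   # port-80 interface for brevity
session.xenapi.login_with_password("root", "password")

url = ("http://host1.example.com/rrd_updates"
       "?session_id=%s&start=%d&host=true"
       % (session._session, int(time.time()) - 300))   # last five minutes
with urllib.request.urlopen(url) as response:
    print(response.read().decode())                    # XML document of data sources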

Because the master host acts as coordinator and lock manager, the other hosts usually communicate with the master. Slave hosts also communicate with each other (over the same HTTP and XML-RPC channels) for the following functions (a client-side sketch follows the list):

Transfer VM memory image (VM migration)

Mirror disks (storage migration)
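
For context, a hedged sketch of what triggers such host-to-host traffic from the client side: a live migration within the pool requested through the XenAPI VM.pool_migrate call. The VM name, host name and credentials are placeholders.

# Illustrative sketch: request a live migration inside the pool; the memory
# image is then transferred host-to-host as described above.
import XenAPI

session = XenAPI.Session("https://pool-master.example.com")
session.xenapi.login_with_password("root", "password")

vm = session.xenapi.VM.get_by_name_label("my-vm")[0]
destination = session.xenapi.host.get_by_name_label("host2")[0]
session.xenapi.VM.pool_migrate(vm, destination, {"live": "true"})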

Note that some types of shared storage (particularly those based on VHD) require coordination of disk GC and coalescing. This coordination is currently done by xapi, so such storage cannot be shared between resource pools.

Toolset / toolstack

The xapi toolstack requires the host to run Xen 4.4 or later on x86 or ARM. The Xen hypervisor divides a host into multiple domains, some of which can have privileged hardware access, while the rest are unprivileged guests (domUs). The xapi toolstack normally runs all of its components in the privileged initial domain, domain 0, also known as the "control domain". However, there is experimental support for "driver domains", which allows storage and network drivers to be isolated in their own domains.

The following figure shows XenServer running on a single host. In a cluster environment, all hosts run the same version of XenServer; software versions only differ temporarily during a rolling upgrade.

[Figure: the xapi toolstack running on a single host]

The toolstack consists of a set of co-operating daemons built on top of the basic components common to all Xen hosts. The main ones are:

xapi: manages the host cluster and coordinates access to shared storage and networks.

xenopsd: a low-level "domain manager" responsible for creating, suspending, resuming, migrating and rebooting domains by interacting with Xen through libxc and libxl.

xcp-rrdd: a performance counter monitoring daemon that aggregates "data sources" defined through a plug-in API and records a history for each of them.

xcp-networkd: the host network manager, responsible for configuring interfaces, bridges and Open vSwitch instances.

SM: the Storage Manager plug-ins, which connect xapi's internal storage interfaces to the control APIs of external storage systems.

perfmon: a daemon that monitors performance counters and sends an alert if a value exceeds a predefined threshold.

mpathalert: a daemon that monitors storage paths and sends an alert if a path fails and needs repair.

snapwatchd: a daemon that responds to snapshot requests sent by the in-guest VSS agent (for Windows guests).

stunnel: a daemon that terminates TLS/SSL and forwards the traffic to xapi.

xenconsoled: provides access to guest VM consoles. This is common to all Xen hosts.

xenstored: a key-value configuration database used, among other things, to connect VM disks and network interfaces. Also common to all Xen hosts.

Working mechanism

Xapi API requests fall into the following categories:

Master only: currently the main type of API request. The client sends the request to the master node, which locks the relevant resources and forwards the request to the appropriate host.

Normal local: APIs that may be called directly on a node where there are special performance requirements, for example disk import/export and console connections; the data is sent directly to the relevant host without being forwarded through the master.

Emergency: API requests used to handle the emergency situation in which the master host is offline.

After receiving an API request, a host first checks whether it can accept this type of request locally. If it can, the call enters the "message forwarding" layer, which will:

Lock the resources (through the "current operations" mechanism)

Select the host that should execute the request

If the request should run locally, a direct function call is used; otherwise the message forwarding code makes a synchronous API call to the chosen slave host. Note that xapi currently uses a "thread per request" model, creating a full POSIX thread for every request. Even when the request is only forwarded, the thread is still created and blocks until the slave host returns the result.
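
xapi itself is written in OCaml; the following runnable Python sketch only illustrates the dispatch logic described above, and every name in it is invented for illustration rather than taken from xapi's real code.

# Illustrative pseudocode of the message-forwarding decision; all names are
# invented and do not correspond to xapi's actual OCaml implementation.
import threading

LOCAL_HOST = "host1"
lock = threading.Lock()            # stands in for "current operations" locking

def choose_host(request):
    # The real layer picks the host that owns the affected resource
    # (for example the VM's resident host); here it is read from the request.
    return request["target_host"]

def execute_locally(request):
    return "executed on %s" % LOCAL_HOST          # direct function call

def remote_call(host, request):
    # In xapi this is a synchronous XML-RPC call to the slave; the calling
    # thread blocks until the slave replies.
    return "executed on %s" % host

def dispatch(request):
    with lock:                                    # lock the affected resources
        target = choose_host(request)
        if target == LOCAL_HOST:
            return execute_locally(request)
        return remote_call(target, request)

def handle_api_request(request):
    # "Thread per request": a full thread is created even for forwarded calls.
    thread = threading.Thread(target=lambda: print(dispatch(request)))
    thread.start()
    return thread

handle_api_request({"target_host": "host2"}).join()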

If the XenAPI request is an operation related to the VM life cycle, it is converted into a xenopsd API call and forwarded over a Unix domain socket. Xapi and xenopsd have similar notions of a task; the current xapi task (all operations run in the context of a task) is bound to the xenopsd task, which is then used to pass cancellation requests and to report task progress back through xapi.
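
From the client's point of view this task plumbing is visible through the asynchronous XenAPI calls. A minimal sketch using the standard Python bindings; the VM name, host address and credentials are placeholders.

# Illustrative sketch: start a VM asynchronously and poll the xapi task,
# whose progress is driven by the underlying xenopsd task.
import time
import XenAPI

session = XenAPI.Session("https://pool-master.example.com")
session.xenapi.login_with_password("root", "password")

vm = session.xenapi.VM.get_by_name_label("my-vm")[0]
task = session.xenapi.Async.VM.start(vm, False, False)   # returns a task reference

while session.xenapi.task.get_status(task) == "pending":
    print("progress:", session.xenapi.task.get_progress(task))
    time.sleep(1)
print("final status:", session.xenapi.task.get_status(task))
session.xenapi.task.destroy(task)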

If the XenAPI request is a storage operation, the message is forwarded to the storage access layer, which must (a hedged sketch of these steps follows the list):

Verify that the storage objects are in the correct state (SR attached or detached; VDI attached or activated, read-only or read-write)

Call the relevant operation in the Storage Manager API (SMAPI) v2 interface

Use the SMAPIv2-to-SMAPIv1 converter to generate the command lines needed to invoke the SMAPIv1 plug-ins (EXT, NFS, LVM, etc.)

Persist the state of the storage objects, including the result of the VDI.attach call
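
The sequence above can be summarised with a small runnable sketch. All of the names here are invented for illustration; the real converter and plug-ins are part of xapi and the SM scripts, not this code.

# Illustrative pseudocode of the storage access layer steps; every name is
# invented and does not correspond to xapi's or SM's real code.
def vdi_attach(sr, vdi, read_write, state_db):
    # 1. verify the storage objects are in the correct state
    assert state_db[sr]["attached"], "SR must be attached first"
    assert not state_db[vdi]["activated"], "VDI is already activated"

    # 2./3. the SMAPIv2 call is translated into an invocation of the
    #       per-SR-type SMAPIv1 backend (EXT, NFS, LVM, ...); modelled
    #       here by a plain function standing in for the plug-in.
    device = smapiv1_backend(sr, "vdi_attach", vdi)

    # 4. persist the result of the attach so it survives a restart
    state_db[vdi].update({"activated": True, "read_write": read_write,
                          "device": device})
    return device

def smapiv1_backend(sr, call, vdi):
    return "/dev/backend/%s/%s" % (sr, vdi)      # placeholder device path

state = {"local-ext": {"attached": True}, "disk0": {"activated": False}}
print(vdi_attach("local-ext", "disk0", True, state))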

Internally, the SMAPIv1 plug-ins set some fields (such as VDI.virtual_size) directly through privileged access to the xapi database; other clients treat these fields as read-only. The SMAPIv1 plug-ins also rely on xapi to (the last point is sketched after this list):

Understand all hosts that may have access to storage

Lock disks in resource pool

Execute code safely on other hosts through the xapi plug-in mechanism
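
The last point uses the XenAPI host.call_plugin mechanism. A minimal sketch with the standard Python bindings; the plug-in name, function name and arguments are placeholders for whatever plug-in is installed on the target host.

# Illustrative sketch: run a named xapi plug-in on another host in the pool.
# Plug-in name, function and arguments are placeholders; plug-ins live under
# /etc/xapi.d/plugins/ on the target host.
import XenAPI

session = XenAPI.Session("https://pool-master.example.com")
session.xenapi.login_with_password("root", "password")

host = session.xenapi.host.get_by_name_label("host2")[0]
result = session.xenapi.host.call_plugin(
    host, "example-storage-helper", "refresh", {"sr-uuid": "some-uuid"})
print(result)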

The xapi database contains the metadata of the hosts and VMs and is shared across the whole pool. The master host keeps a copy in memory, and all other nodes send their queries to the master's in-memory copy. Each object in the database has a generation counter, which is used to implement the event.next and event.from operations of the XenAPI. If the "redo log" is enabled, all database writes are synchronously written as deltas to a shared block device. Without the redo log, the most recent updates may be lost if xapi is terminated before the database is flushed to disk.
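
As an illustration of how clients consume these events, a minimal sketch using the older event.register/event.next interface of the standard Python bindings (the token-based event.from works similarly); the host address and credentials are placeholders.

# Illustrative sketch: subscribe to database events, which are driven by the
# per-object generation counters described above.
import XenAPI

session = XenAPI.Session("https://pool-master.example.com")
session.xenapi.login_with_password("root", "password")

session.xenapi.event.register(["vm"])             # watch VM objects only
while True:                                       # loop forever printing changes
    for event in session.xenapi.event.next():     # blocks until something changes
        print(event["class"], event["operation"], event.get("ref"))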

Summary

Xapi provides program developers with a flexible, stable, convenient and fast XenServer management interface that can be customized to their own needs. However, the "thread per request" model imposes a significant resource overhead on the master host. When using xapi, try to combine request operations and reduce the number of concurrent operations in order to lower xapi's resource consumption on the master node.
