The IBM PowerVM hypervisor provides virtualization on POWER hardware. PowerVM admins can see benefits in their environments by making use of OpenStack. This driver (along with a Neutron ML2 compatible agent and Ceilometer agent) provides the capability for operators of PowerVM to use OpenStack natively.
As ecosystems continue to evolve around the POWER platform, a single OpenStack driver cannot meet the needs of every hypervisor. The standard libvirt driver provides support for KVM on POWER systems; this Nova driver provides PowerVM support to the OpenStack environment.
This driver makes the following use cases available for PowerVM:
To use the driver, install the nova-powervm project on your NovaLink-based PowerVM system. The nova-powervm project requires only a minimal set of configuration options. See the configuration options section of the dev-ref for more information.
It is recommended that operators also make use of the networking-powervm project. The project ensures that the network bridge supports the VLAN-based networks required for the workloads.
There is also a ceilometer-powervm project that can be included to collect metering data.
Future work will be done to include PowerVM into the various OpenStack deployment models.
The driver enables the following:
The intention is that this driver follows the OpenStack Nova model and will be a candidate for promotion (via a subsequent blueprint) into the nova core project.
No REST API impacts.
No known security impacts.
No new notifications are introduced. The driver does, however, expect the Neutron agent to send an event when the VIF plug has occurred, assuming that Neutron is the network service.
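The handshake described above can be sketched as follows. This is an illustrative, self-contained model of the pattern only; the class and method names are invented for clarity and are not Nova's actual internals.

```python
import threading

# Mirrors the spirit of nova's vif_plugging_timeout option (value assumed).
VIF_PLUGGING_TIMEOUT = 300  # seconds

class VifPlugWaiter:
    """Blocks an instance spawn until Neutron reports the VIF is plugged."""

    def __init__(self):
        self._plugged = threading.Event()

    def on_network_vif_plugged(self, port_id):
        # Invoked when the external event from the Neutron agent arrives.
        self._plugged.set()

    def wait_for_plug(self, timeout=VIF_PLUGGING_TIMEOUT):
        # Returns True if the event arrived before the timeout expired.
        return self._plugged.wait(timeout)
```

In this sketch, the spawn path calls `wait_for_plug()` and fails the boot if the Neutron agent never delivers the plug event within the timeout.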
The administrator may notice new logging messages in the nova compute logs.
The driver has deployment speed and agility comparable to other hypervisor drivers. It has been tested with up to 10 concurrent deploys on a server hosting several hundred VMs. Most operations are comparable in speed: deployment, volume attach/detach, and lifecycle operations are quick.
Due to the nature of the project, any performance impacts are limited to the Compute Driver; the API processes, for instance, are not impacted.
The cloud administrator will need to refer to documentation on how to configure OpenStack for use with a PowerVM hypervisor.
A ‘powervm’ configuration group contains all of the PowerVM-specific configuration settings. Existing configuration file attributes are reused as much as possible (e.g. vif_plugging_timeout), which reduces the number of PowerVM-specific items needed.
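A configuration file following this layout might look like the excerpt below. The driver class path and the contents of the [powervm] group are illustrative placeholders; consult the nova-powervm dev-ref for the actual option names.

```ini
# Illustrative nova.conf excerpt -- option names under [powervm] are
# placeholders, not the definitive set.
[DEFAULT]
# Assumed driver class path based on the package layout described below.
compute_driver = nova_powervm.virt.powervm.driver.PowerVMDriver
# Existing attribute reused as-is rather than duplicated per-driver.
vif_plugging_timeout = 300

[powervm]
# PowerVM-specific settings live in their own group; the deployer may
# specify additional attributes here to fit their configuration.
```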
It is the goal of the project to only require minimal additional attributes. The deployer may specify additional attributes to fit their configuration.
In the Mitaka release, the Nova project moved to using conductor-based objects for the live migration flow. These objects exist in nova/objects/migrate_data.py.
While the PowerVM driver supports live migration, it cannot register its own live migration object because it is out of tree. The team is working with the Nova core team to bring the PowerVM driver in tree; until then, using live migration with the PowerVM driver requires starting a PowerVM-specific conductor.
This conductor does not limit the OpenStack cloud to only supporting PowerVM. Rather, it simply allows an existing cloud to include PowerVM support within it.
To use the conductor, install the nova-powervm project on the node running the nova conductor. Then start the ‘nova-conductor-powervm’ process. This will support ALL of the hypervisors, including PowerVM.
To reiterate, this is only needed if you plan to use PowerVM’s live migrate functionality.
The code for this driver is currently contained within the nova-powervm project. The driver lives in the /nova_powervm/virt/powervm/ package and extends the nova.virt.driver.ComputeDriver class.
The code interacts with PowerVM through the pypowervm library. This Python binding wraps the PowerVM REST API, and all hypervisor operations go through it. The driver is maintained to support future revisions of the PowerVM REST API as needed.
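The overall shape of the driver can be sketched as below. This is a self-contained model: the `ComputeDriver` base class and the session object are minimal stand-ins for `nova.virt.driver.ComputeDriver` and the pypowervm REST binding, and all names are illustrative rather than the real APIs.

```python
class ComputeDriver:
    """Stand-in for nova.virt.driver.ComputeDriver (simplified)."""
    def __init__(self, virtapi):
        self.virtapi = virtapi

class FakePowerVMSession:
    """Stand-in for a pypowervm session against the PowerVM REST API."""
    def __init__(self):
        self.lpars = {}

    def create_lpar(self, name):
        # A real session would issue a REST call to build the partition.
        self.lpars[name] = {'state': 'running'}

    def delete_lpar(self, name):
        self.lpars.pop(name, None)

class PowerVMDriver(ComputeDriver):
    """Every hypervisor operation is delegated to the REST binding."""
    def __init__(self, virtapi, session):
        super().__init__(virtapi)
        self.session = session

    def spawn(self, context, instance):
        # Create the logical partition (LPAR) backing the instance.
        self.session.create_lpar(instance['name'])

    def destroy(self, context, instance):
        self.session.delete_lpar(instance['name'])
```

The key design point this sketch captures is that the driver holds no hypervisor state of its own; it translates Nova operations into calls against the PowerVM REST API via the binding.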
For ephemeral disk support, either a Virtual I/O Server hosted local disk or a Shared Storage Pool (a PowerVM clustered file system) is supported. For volume attachments, the driver supports Cinder-based attachments via protocols supported by the hypervisor (e.g. Fibre Channel).
For networking, the networking-powervm project provides Neutron ML2 Agents. The agents provide the necessary configuration on the Virtual I/O Server for networking. The PowerVM Nova driver code creates the VIF for the client VM, but the Neutron agent creates the VIF for VLANs.
Automated functional testing is provided through a third party continuous integration system. It monitors for incoming Nova change sets, runs a set of functional tests (lifecycle operations) against the incoming change, and provides a non-gating vote (+1 or -1).
Developers should not be impacted by these changes unless they wish to try the driver.
The intent of this project is to bring another driver to OpenStack that aligns with the ideals and vision of the community. The intention is to promote this to core Nova.
No alternatives appear viable to bring PowerVM support into the OpenStack community.
Since the tempest tests should be implementation agnostic, the existing tempest tests should be able to run against the PowerVM driver without issue.
Tempest tests that require functionality the platform does not yet support (e.g. iSCSI or Floating IPs) will not pass. These should be omitted from the Tempest test suite.
A sample Tempest test configuration for the PowerVM driver has been provided.
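Such a configuration might, for example, disable the feature flags for unsupported function so the corresponding tests are skipped. The section and option names below are examples only and should be checked against the Tempest configuration reference.

```ini
# Illustrative tempest.conf excerpt -- verify option names against the
# Tempest configuration reference before use.
[network-feature-enabled]
# Floating IPs are not yet supported on the platform.
floating_ips = false
```

Tests for iSCSI-backed volumes would be excluded in a similar fashion through the volume-related feature options or a test exclusion list.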
Thorough unit tests exist within the project to validate specific functions within this implementation.
A third party functional test environment has been created. It monitors for incoming nova change sets and, once it detects a new change set, executes the existing lifecycle API tests. A non-gating vote (+1 or -1) is provided, along with supporting logs, based on the result.
Existing APIs remain valid and unchanged. All testing is planned within the functional testing system and via unit tests.
See the dev-ref for documentation on how to configure, use, and contribute to this driver implementation.
The existing Nova developer documentation should typically suffice. However, until the driver merges into Nova, a subset of dev-ref documentation will be maintained.