RETIRED: OpenStack Virtual Baremetal (OVB)

OpenStack Virtual Baremetal

Introduction

OpenStack Virtual Baremetal is a way to use OpenStack instances to do simulated baremetal deployments. This project is a collection of tools and documentation that make it much easier to do so. It primarily consists of the following pieces:

  • Patches and documentation for setting up a host cloud.
  • A deployment CLI that leverages the OpenStack Heat project to deploy the VMs, networks, and other resources needed.
  • An OpenStack BMC that can be used to control OpenStack instances via IPMI commands.
  • A tool to collect details from the "baremetal" VMs so they can be registered as nodes with OpenStack Ironic, the baremetal deployment project.

A basic OVB environment is just a BMC VM configured to control a number of "baremetal" VMs. This allows them to be treated largely the same way a real baremetal system with a BMC would. A number of additional features can also be enabled to add more to the environment.
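
For example, once an environment is up, the "baremetal" VMs can be power-controlled with standard IPMI tooling pointed at the BMC (the address and credentials below are illustrative; the real values come from the deployed environment):

ipmitool -I lanplus -H 192.0.2.10 -U admin -P password power status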

Benefits and Drawbacks

OVB started as part of the OpenStack TripleO project. It was intended to provide more flexible and scalable development and CI environments. Previous methods for doing this focused on setting up all the VMs for a given environment on a single box, which had a number of drawbacks:

  • Each developer needed their own system. Sharing was possible, but more complex and generally not done. Because multi-tenancy is a basic design tenet of OpenStack, this is not a problem when OpenStack provisions the VMs: a large number of developers can make use of a much smaller number of physical systems.
  • If a deployment called for more VMs than could fit on a single system, it was a complex manual process to scale out to multiple systems. An OVB environment is only limited by the number of instances the host cloud can support.
  • Pre-OVB test environments were generally static because there was no API for dynamic provisioning. By using the OpenStack API to create all of the resources, test environments can be easily tailored to their intended use case.

One drawback to OVB at this time is that it is generally not compatible with current public clouds. While it is possible to do an OVB deployment on a completely stock OpenStack cloud, most public clouds have restrictions (older OpenStack releases, inability to upload new images, no Heat, etc.) that make it problematic. At this time, OVB is primarily used with semi-private clouds configured for ideal compatibility. This situation should improve as more public clouds move to newer OpenStack releases, however.

How-To

The following sections provide instructions for patching the host cloud[1], setting up the base environment, and deploying a virtual baremetal Heat stack.

1: The host cloud is any OpenStack cloud providing the necessary functionality to run OVB. In a TripleO deployment, this would be the overcloud.

Warning

This process requires patches and configuration settings that may not be appropriate for production clouds.

Patching the Host Cloud

The changes described in this section apply to compute nodes in the host cloud.

Apply the Nova PXE boot patch file in the patches directory to the host cloud's Nova. Examples:

TripleO/RDO:

sudo patch -p1 -d /usr/lib/python2.7/site-packages < patches/nova/nova-pxe-boot.patch

Devstack:

Note

You probably don't want to try to run this with devstack anymore. Devstack no longer supports rejoining an existing stack, so if you have to reboot your host cloud you will have to rebuild from scratch.

Note

The patch may not apply cleanly against master Nova code. If/when that happens, the patch will need to be applied manually.

cp patches/nova/nova-pxe-boot.patch /opt/stack/nova
cd /opt/stack/nova
patch -p1 < nova-pxe-boot.patch

Configuring the Host Cloud

The changes described in this section apply to compute nodes in the host cloud.

  1. Neutron must be configured to use the NoopFirewallDriver. Edit /etc/neutron/plugins/ml2/ml2_conf.ini and set the option firewall_driver in the [securitygroup] section as follows:

    firewall_driver = neutron.agent.firewall.NoopFirewallDriver
  2. In Liberty and later versions, ARP spoofing must be disabled. Edit /etc/neutron/plugins/ml2/ml2_conf.ini and set the option prevent_arp_spoofing in the [agent] section as follows:

    prevent_arp_spoofing = False
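
    If the crudini utility is available on the compute nodes, both of the above settings can be applied without hand-editing (a convenience only; editing the file directly works just as well):

    sudo crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.firewall.NoopFirewallDriver
    sudo crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini agent prevent_arp_spoofing False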
  3. The Nova option force_config_drive must not be set.
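
    A quick way to confirm this on a compute node (assuming the standard config location) is to check that the option is absent or commented out:

    grep force_config_drive /etc/nova/nova.conf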

  4. Ideally, jumbo frames should be enabled on the host cloud. This avoids MTU problems when deploying to instances over tunneled Neutron networks with VXLAN or GRE.

    For TripleO-based host clouds, this can be done by setting mtu on all interfaces and vlans in the network isolation nic-configs. A value of at least 1550 should be sufficient to avoid problems.
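
    As a rough illustration (the exact templates and parameter names vary), each interface and vlan entry in an os-net-config based nic-config would gain an mtu line:

    - type: interface
      name: nic1
      mtu: 1550
    - type: vlan
      device: nic1
      vlan_id: {get_param: InternalApiNetworkVlanID}
      mtu: 1550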

    If this cannot be done (perhaps because you don't have access to make such a change on the host cloud), it will likely be necessary to configure a smaller MTU on the deployed virtual instances. For a TripleO undercloud, Neutron should be configured to advertise a smaller MTU to instances. Run the following as root:

    # Replace 'eth1' with the actual device to be used for the
    # provisioning network
    ip link set eth1 mtu 1350
    echo -e "\ndhcp-option-force=26,1350" >> /etc/dnsmasq-ironic.conf
    systemctl restart 'neutron-*'

    If network isolation is in use, the templates must also configure mtu as discussed above, except the mtu should be set to 1350 instead of 1550.

  5. Restart nova-compute and neutron-openvswitch-agent to apply the changes above.
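
    On a TripleO/RDO-based host cloud, the service names are typically:

    sudo systemctl restart openstack-nova-compute neutron-openvswitch-agent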

Preparing the Host Cloud Environment

  1. Source an rc file that will provide admin credentials for the host cloud.

  2. Upload an ipxe-boot image for the baremetal instances:

    glance image-create --name ipxe-boot --disk-format qcow2 --property os_shutdown_timeout=5 --container-format bare < ipxe/ipxe-boot.qcow2

    Note

    os_shutdown_timeout=5 avoids server shutdown delays, since these servers won't respond to graceful shutdown requests.

    Note

    On a UEFI-enabled OpenStack cloud, to boot the baremetal instances with UEFI (instead of the default BIOS firmware) the image should be created with the parameter --property="hw_firmware_type=uefi".
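
    For example, an illustrative UEFI variant of the upload command above would be (the ipxe-boot-uefi name is arbitrary):

    glance image-create --name ipxe-boot-uefi --disk-format qcow2 --property os_shutdown_timeout=5 --property hw_firmware_type=uefi --container-format bare < ipxe/ipxe-boot.qcow2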

  3. Upload a CentOS 7 image for use as the base image:

    wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
    
    glance image-create --name CentOS-7-x86_64-GenericCloud --disk-format qcow2 --container-format bare < CentOS-7-x86_64-GenericCloud.qcow2
  4. (Optional) Create a pre-populated base BMC image. This is a CentOS 7 image with the required packages for the BMC pre-installed. This eliminates one potential point of failure during the deployment of an OVB environment because the BMC will not require any external network resources:

    wget https://repos.fedorapeople.org/repos/openstack-m/ovb/bmc-base.qcow2
    
    glance image-create --name bmc-base --disk-format qcow2 --container-format bare < bmc-base.qcow2

    To use this image, configure bmc_image in env.yaml to be bmc-base instead of the generic CentOS 7 image.
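
    Depending on the layout of env.yaml.example, the relevant entry would look something like:

    parameters:
      bmc_image: bmc-base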

  5. Create recommended flavors:

    nova flavor-create baremetal auto 8192 50 2
    nova flavor-create bmc auto 512 20 1

    These flavors can be customized if desired. For large environments with many baremetal instances it may be wise to give the bmc flavor more memory; a 512 MB BMC will run out of memory at around 20 baremetal instances.
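
    For example, a 1 GB BMC flavor could be created instead (the bmc-large name is arbitrary; reference whichever flavor name is used from env.yaml):

    nova flavor-create bmc-large auto 1024 20 1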

  6. Source an rc file that will provide user credentials for the host cloud.

  7. Add a Nova keypair to be injected into instances:

    nova keypair-add --pub-key ~/.ssh/id_rsa.pub default
  8. (Optional) Configure quotas. When running in a dedicated OVB cloud, it may be helpful to set some quotas to very large/unlimited values to avoid running out of quota when deploying multiple or large environments:

    neutron quota-update --security_group 1000
    neutron quota-update --port -1
    neutron quota-update --network -1
    neutron quota-update --subnet -1
    nova quota-update --instances -1 --cores -1 --ram -1 [tenant uuid]
  9. Create the provisioning network.

    Note

    The CIDR used for the subnet does not matter. Standard tenant and external networks are also needed to provide floating IP access to the undercloud and BMC instances.

    Warning

    Do not enable DHCP on this network. Addresses will be assigned by the undercloud Neutron.

    neutron net-create provision
    neutron subnet-create --name provision --no-gateway --disable-dhcp provision 192.0.2.0/24
  10. Create the "public" network.

    Note

    The CIDR used for the subnet does not matter. This can be used as the network for the public API endpoints on the overcloud, but it does not have to be accessible externally. Only the undercloud VM will need to have access to this network.

    Warning

    Do not enable DHCP on this network. Doing so may cause conflicts between the host cloud metadata service and the undercloud metadata service. Overcloud nodes will be assigned addresses on this network by the undercloud Neutron.

    neutron net-create public
    neutron subnet-create --name public --no-gateway --disable-dhcp public 10.0.0.0/24

Creating the Baremetal Heat Stack

  1. Copy the example env file and edit it to reflect the host environment:

    cp templates/env.yaml.example env.yaml
    vi env.yaml
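
    The parameters available depend on the version of env.yaml.example, but the values to review generally include the image, flavor, key pair, and network names created above, plus the desired node count. An illustrative fragment only (not an exhaustive or authoritative list):

    parameters:
      baremetal_flavor: baremetal
      bmc_flavor: bmc
      key_name: default
      node_count: 2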
  2. Deploy the stack:

    bin/deploy.py
  3. Wait for the Heat stack to complete:

    Note

    The BMC instance does post-deployment configuration that can take a while to complete, so the Heat stack completing does not necessarily mean the environment is entirely ready for use. To determine whether the BMC is finished starting up, run nova console-log bmc. The BMC service outputs a message like "Managing instance [uuid]" when it is fully configured. There should be one of these messages for each baremetal instance.

    heat stack-show baremetal
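
    For example, to see how many baremetal instances the BMC has picked up so far (an illustrative check using the command from the note above):

    nova console-log bmc | grep -c "Managing instance"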
  4. Boot a VM to serve as the undercloud:

    nova boot undercloud --flavor m1.large --image CentOS-7-x86_64-GenericCloud --nic net-id=[tenant net uuid] --nic net-id=[provisioning net uuid]
    neutron floatingip-create [external net uuid]
    neutron port-list
    neutron floatingip-associate [floatingip uuid] [undercloud instance port id]
  5. Build a nodes.json file that can be imported into Ironic:

    bin/build-nodes-json
    scp nodes.json centos@[undercloud floating ip]:~/instackenv.json

    Note

    build-nodes-json also outputs a file named bmc_bm_pairs that lists which BMC address corresponds to a given baremetal instance.

  6. The undercloud VM can now be used with something like TripleO to do a baremetal-style deployment to the virtual baremetal instances deployed previously.

  7. If using the full network isolation provided by OS::OVB::BaremetalNetworks, the overcloud can be created with the network templates in the network-templates directory.