properly format user guide in RST

The user guide isn't formatted properly for RST. Cosmetically, the lines
are too long and some have extra non-printing characters. Lists aren't
formatted for RST. This fixes the file so that the lines are no more than
80 characters wide, removes the non-printing characters, and makes use of
RST-formatting for the lists. The text itself is not modified.

Change-Id: I8d2f3e4723e619014c056946e4f226d9c0e8f48a
Partial-Bug: #1353688

.. _user-guide:

=======================
Introduction to Ironic
=======================

Ironic is an OpenStack project which provisions physical hardware as opposed
to virtual machines. Ironic provides several reference drivers which leverage
common technologies like PXE and IPMI, to cover a wide range of hardware.
Ironic's pluggable driver architecture also allows vendor-specific drivers to
be added for improved performance or functionality not provided by the
reference drivers.

If one thinks of traditional hypervisor functionality (e.g., creating a
VM, enumerating virtual devices, managing the power state, loading an OS onto
the VM, and so on), then Ironic may be thought of as a hypervisor API gluing
together multiple drivers, each of which implements some portion of that
functionality with respect to physical hardware.

OpenStack's Ironic project makes physical servers as easy to provision as
virtual machines in the cloud, which in turn will open up new avenues for
enterprises and service providers.

Ironic's driver will replace the Nova "bare metal" driver of the Grizzly,
Havana and Icehouse releases. It is targeting inclusion in the OpenStack Juno
release. See https://wiki.openstack.org/wiki/Baremetal for more information.

Why Provision Bare Metal
========================

Here are a few use-cases for bare metal (physical server) provisioning in the
cloud; there are doubtless many more interesting ones:

- High-performance computing clusters
- Computing tasks that require access to hardware devices which can't be
  virtualized
- Database hosting (some databases run poorly in a hypervisor)
- Single tenant, dedicated hardware for performance, security, dependability
  and other regulatory requirements
- Or, rapidly deploying a cloud infrastructure

Conceptual Architecture
=======================

The following diagram shows the relationships and how all services come into
play during the provisioning of a physical server.

.. figure:: ../images/conceptual_architecture.png
   :alt: ConceptualArchitecture

Logical Architecture
====================

To successfully deploy the Ironic service in the cloud, administrators need to
understand the logical architecture. The diagram below shows the basic
components that form the Ironic service, the relation of the Ironic service
with other OpenStack services, and the logical flow of a boot instance request
resulting in the provisioning of a physical server.

.. figure:: ../images/logical_architecture.png
   :alt: Logical Architecture

The Ironic service is composed of the following components:

#. A RESTful API service, by which operators and other services may interact
   with the managed bare metal servers.

#. A Conductor service, which does the bulk of the work. Functionality is
   exposed via the API service. The Conductor and API services communicate
   via RPC.

#. A Message Queue

#. A Database for storing the state of the Conductor and Drivers.
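
The API service speaks plain HTTP and JSON, so any HTTP client can talk to it.
As a minimal sketch (the endpoint is hypothetical, 6385 is Ironic's default
API port, and a real deployment would authenticate through Keystone), listing
the enrolled nodes might look like:

.. code-block:: python

    import requests  # third-party HTTP client library

    IRONIC_API = 'http://ironic.example.com:6385'  # hypothetical endpoint

    # GET /v1/nodes returns a JSON document with a "nodes" list.
    resp = requests.get(IRONIC_API + '/v1/nodes')
    resp.raise_for_status()
    for node in resp.json()['nodes']:
        print(node['uuid'], node.get('power_state'))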

As in Figure 1.2. Logical Architecture, a user request to boot an instance is
passed to the Nova Compute service via the Nova API and the Nova Scheduler.
The Compute service hands over this request to the Ironic service, which
comprises the Ironic API, the Ironic Conductor, many Drivers to support
heterogeneous hardware, the Database, etc. The request passes from the Ironic
API, to the Conductor and the Drivers to successfully provision a physical
server for the user.

Just as the Nova Compute service talks to various OpenStack services like
Glance, Neutron, Swift, etc. to provision a virtual machine instance, here the
Ironic service talks to the same OpenStack services for image, network and
other resource needs to provision a bare metal instance.

Key Technologies for Bare Metal Hosting
=======================================

PXE
---

Preboot Execution Environment (PXE) is part of the Wired for Management (WfM)
specification developed by Intel and Microsoft. PXE enables a system's BIOS
and network interface card (NIC) to bootstrap a computer from the network in
place of a disk. Bootstrapping is the process by which a system loads the OS
into local memory so that it can be executed by the processor. This capability
of allowing a system to boot over a network simplifies server deployment and
server management for administrators.

DHCP
----

Dynamic Host Configuration Protocol (DHCP) is a standardized networking
protocol used on Internet Protocol (IP) networks for dynamically distributing
network configuration parameters, such as IP addresses for interfaces and
services. Using PXE, the BIOS uses DHCP to obtain an IP address for the
network interface and to locate the server that stores the network bootstrap
program (NBP).

NBP
---

Network Bootstrap Program (NBP) is equivalent to GRUB (GRand Unified
Bootloader) or LILO (LInux LOader) - loaders which are traditionally used in
local booting. Like the boot program in a hard drive environment, the NBP is
responsible for loading the OS kernel into memory so that the OS can be
bootstrapped over a network.

TFTP
----

Trivial File Transfer Protocol (TFTP) is a simple file transfer protocol that
is generally used for automated transfer of configuration or boot files
between machines in a local environment. In a PXE environment, TFTP is used to
download the NBP over the network using information from the DHCP server.
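
To make the exchange concrete, here is a minimal, illustrative TFTP read
client in Python (the server and boot file names are hypothetical). It follows
the RFC 1350 packet layout rather than any Ironic code:

.. code-block:: python

    import socket
    import struct

    SERVER = ('tftp.example.com', 69)  # hypothetical TFTP server
    FILENAME = b'pxelinux.0'           # a typical NBP file name

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5.0)

    # RRQ packet: opcode 1, filename, NUL, transfer mode, NUL (RFC 1350).
    sock.sendto(struct.pack('!H', 1) + FILENAME + b'\x00octet\x00', SERVER)

    data = b''
    while True:
        # DATA packets arrive from an ephemeral port on the server.
        packet, addr = sock.recvfrom(4 + 512)
        opcode, block = struct.unpack('!HH', packet[:4])
        if opcode != 3:  # 3 = DATA; anything else is an error packet
            break
        data += packet[4:]
        # Acknowledge each block: opcode 4 plus the block number.
        sock.sendto(struct.pack('!HH', 4, block), addr)
        if len(packet) - 4 < 512:  # a short block ends the transfer
            break

    print('received %d bytes' % len(data))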

IPMI
----

Intelligent Platform Management Interface (IPMI) is a standardized computer
system interface used by system administrators for out-of-band management of
computer systems and monitoring of their operation. It is a method to manage
systems that may be unresponsive or powered off by using only a network
connection to the hardware rather than to an operating system.
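
This out-of-band channel is what Ironic's IPMI-based drivers use to control
power. As an illustrative sketch (the BMC address and credentials are made up;
the ipmitool utility and its flags are standard), setting a node to network
boot and powering it on from Python might look like:

.. code-block:: python

    import subprocess

    # Hypothetical BMC address and credentials for one managed node.
    BMC = ['-I', 'lanplus', '-H', 'bmc.example.com',
           '-U', 'admin', '-P', 'secret']

    def ipmi(*args):
        """Run one ipmitool command against the node's BMC."""
        return subprocess.check_output(['ipmitool'] + BMC + list(args))

    print(ipmi('chassis', 'power', 'status'))  # e.g. "Chassis Power is off"
    ipmi('chassis', 'bootdev', 'pxe')          # network-boot on next start
    ipmi('chassis', 'power', 'on')             # power the node on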

Ironic Deployment Architecture
==============================

We already know that OpenStack services are highly configurable to meet
various end-user requirements. The diagrams below are sample deployment
scenarios of the Ironic service for bare metal provisioning.

.. figure:: ../images/deployment_architecture_1.png
   :alt: Deployment Architecture 1

In the above deployment architecture (figure 1.3.1):

#. The controller runs the identity service, management service, dashboard
   and the management portion of compute. It also contains the associated API
   services, MySQL databases and messaging system.

#. The Ironic RESTful API service is used to enroll hardware with attributes
   like MAC addresses, IPMI credentials, etc. A cloud administrator usually
   enrolls this information for Ironic to manage the specific hardware.

#. The compute node runs the Nova compute service, the networking plug-in
   agent and the Ironic conductor service. The Ironic conductor service does
   the bulk of the work. There can be multiple instances of the conductor
   service to support various classes of drivers and also to manage failover.
   Ideally, instances of the conductor service should be on separate nodes.
   Each conductor can itself run many drivers to operate heterogeneous
   hardware. This is depicted in figure 1.3.2. The API exposes a list of
   supported drivers and the names of the conductor hosts servicing them.

.. figure:: ../images/deployment_architecture_2.png
   :alt: Deployment Architecture 2

Understanding Bare Metal Deployment
===================================

What happens when a boot instance request comes in? The diagram below walks
through the steps involved during the provisioning of a bare metal instance.

These prerequisites must be met before the deployment process:

- Dependent packages such as tftp-server, ipmi and syslinux to be configured
  on the compute node for bare metal provisioning.
- Flavors to be created for the available hardware. Nova must know the flavor
  to boot from.
- Images to be made available in Glance. Listed below are some image types
  required for successful bare metal deployment:

  + bm-deploy-kernel
  + bm-deploy-ramdisk
  + user-image
  + user-image-vmlinuz
  + user-image-initrd

- Hardware to be enrolled via the Ironic RESTful API service, as sketched
  below.
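
As a minimal enrollment sketch (the endpoint, credentials and MAC address are
placeholders, pxe_ipmitool names one of the reference drivers, and a real
deployment would authenticate through Keystone), registering a node and its
NIC might look like:

.. code-block:: python

    import requests

    IRONIC_API = 'http://ironic.example.com:6385'  # hypothetical endpoint

    # Enroll the node with the IPMI credentials its driver needs.
    node = requests.post(IRONIC_API + '/v1/nodes', json={
        'driver': 'pxe_ipmitool',
        'driver_info': {
            'ipmi_address': 'bmc.example.com',
            'ipmi_username': 'admin',
            'ipmi_password': 'secret',
        },
    }).json()

    # Register the node's MAC address so DHCP/PXE can recognize it.
    requests.post(IRONIC_API + '/v1/ports', json={
        'node_uuid': node['uuid'],
        'address': '52:54:00:12:34:56',
    })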

.. figure:: ../images/deployment_steps.png
   :alt: Deployment Steps

Deploy Process
--------------

#. A boot instance request comes in via the Nova API, through the message
   queue, to the Nova scheduler.

#. The Nova scheduler applies filters and finds the eligible compute node.
   The scheduler uses flavor extra_specs details such as 'cpu_arch',
   'baremetal:deploy_kernel_id', 'baremetal:deploy_ramdisk_id', etc. to match
   the target physical node (see the sketch after this list).

#. A spawn task is placed by the driver which contains all information, such
   as which image to boot from. It invokes driver.spawn from the virt layer
   of Nova compute.

#. Information about the bare metal node is retrieved from the bare metal
   database and the node is reserved.

#. Images from Glance are pulled down to the local disk of the Ironic
   conductor servicing the bare metal node.

#. Virtual interfaces are plugged in and the Neutron API updates the DHCP
   port to support PXE/TFTP options.

#. Nova's ironic driver issues a deploy request via the Ironic API to the
   Ironic conductor servicing the bare metal node.

#. The PXE driver prepares the tftp bootloader.

#. The IPMI driver issues a command to enable network boot of the node and
   to power it on.

#. The DHCP server boots the deploy ramdisk. The PXE driver actually copies
   the image over iSCSI to the physical node. It connects to the iSCSI
   endpoint, partitions the volume, "dd"s the image and closes the iSCSI
   connection. The deployment is done. The Ironic conductor will switch the
   pxe config to service mode and notify the ramdisk agent of the successful
   deployment.

#. The IPMI driver reboots the bare metal node. Note that there are two power
   cycles during bare metal deployment: the first time, when powered on, the
   images get deployed as mentioned in step 9; the second time, as in this
   case, after the images are deployed, the node is powered up.

#. The bare metal node status is updated and the node instance is made
   available.
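
To make the scheduler matching in step 2 concrete, here is a toy,
self-contained illustration (not Nova's actual filter code; all values are
made up) of how flavor extra_specs might be compared against a node's
properties:

.. code-block:: python

    # Hypothetical records; real data comes from Nova flavors and Ironic.
    flavor_extra_specs = {
        'cpu_arch': 'x86_64',
        'baremetal:deploy_kernel_id': 'deadbeef-dead-beef-dead-beefdeadbeef',
    }

    node_properties = {
        'cpu_arch': 'x86_64',
        'memory_mb': 65536,
    }

    def matches(extra_specs, properties):
        """A node is eligible when every plain spec agrees with it."""
        for key, wanted in extra_specs.items():
            # 'baremetal:' keys name deploy images; in this toy model they
            # are passed through to the driver rather than matched here.
            if key.startswith('baremetal:'):
                continue
            if properties.get(key) != wanted:
                return False
        return True

    print(matches(flavor_extra_specs, node_properties))  # True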