docs: Rework all things metadata'y

Turns out we've a *lot* of disparate metadata systems. Attempt to both
link them somewhat through extensive cross-referencing and extract out
deployment-specific stuff from user-facing docs. Lots of changes here,
but in summary:

- Split out admin-focused content from the metadata API, config drive,
  user data and vendordata docs.

- Merge the config drive, metadata service, vendordata and user-data
  user docs, which are mostly talking about the same thing and are
  fairly barren without the deployment components

- Make use of various oslo.config and Sphinx roles

Side note: I miss when we had tech writers to do this stuff for us :(

Change-Id: I4fb2b628bd93358a752e2397ae353221758e2984
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
This commit is contained in:
Stephen Finucane 2019-03-05 15:57:14 +00:00
parent 74aebe0d4e
commit 92a432fde7
20 changed files with 1058 additions and 725 deletions


@ -1014,10 +1014,11 @@ Configure Guest OS
Metadata API
------------
Nova provides a metadata api for servers to retrieve server specific metadata.
Neutron ensures this metadata api can be accessed through a predefined ip
address (169.254.169.254). For more details, see :nova-doc:`Metadata Service
<user/metadata-service.html>`.
Nova provides a metadata API for servers to retrieve server-specific metadata.
Neutron ensures this metadata API can be accessed through a predefined IP
address, ``169.254.169.254``. For more details, refer to the :nova-doc:`user
guide <user/metadata.html>`.
Config Drive
------------
@ -1025,20 +1026,19 @@ Config Drive
Nova is able to write metadata to a special configuration drive that attaches
to the server when it boots. The server can mount this drive and read files
from it to get information that is normally available through the metadata
service. For more details, see :nova-doc:`Config Drive
<user/config-drive.html>`.
service. For more details, refer to the :nova-doc:`user guide
<user/metadata.html>`.
User data
---------
A user data file is a special key in the metadata service that holds a file
that cloud-aware applications in the server can access.
Nova has two ways to send user data to the deployed server, one is by
metadata service to let server able to access to its metadata through
a predefined ip address (169.254.169.254), then other way is to use config
drive which will wrap metadata into a iso9660 or vfat format disk so that
the deployed server can consume it by active engines such as cloud-init
during its boot process.
This information can be accessed via the metadata API or a config drive. The
latter allows the deployed server to consume it by active engines such as
cloud-init during its boot process, where network connectivity may not be an
option.
Server personality
------------------


@ -1,10 +1,3 @@
# The following is generated with:
#
# git log --follow --name-status --format='%H' 2d0dfc632f.. -- doc/source | \
# grep ^R | grep .rst | cut -f2- | \
# sed -e 's|doc/source/|redirectmatch 301 ^/nova/([^/]+)/|' -e 's|doc/source/|/nova/$1/|' -e 's/.rst/.html$/' -e 's/.rst/.html/' | \
# sort
redirectmatch 301 ^/nova/([^/]+)/addmethod.openstackapi.html$ /nova/$1/contributor/api-2.html
redirectmatch 301 ^/nova/([^/]+)/admin/flavors2.html$ /nova/$1/admin/flavors.html
redirectmatch 301 ^/nova/([^/]+)/admin/numa.html$ /nova/$1/admin/cpu-topologies.html
@ -71,8 +64,12 @@ redirectmatch 301 ^/nova/([^/]+)/testing/serial-console.html$ /nova/$1/contribut
redirectmatch 301 ^/nova/([^/]+)/testing/zero-downtime-upgrade.html$ /nova/$1/contributor/testing/zero-downtime-upgrade.html
redirectmatch 301 ^/nova/([^/]+)/threading.html$ /nova/$1/reference/threading.html
redirectmatch 301 ^/nova/([^/]+)/upgrade.html$ /nova/$1/user/upgrade.html
redirectmatch 301 ^/nova/([^/]+)/vendordata.html$ /nova/$1/user/vendordata.html
redirectmatch 301 ^/nova/([^/]+)/user/cellsv2_layout.html$ /nova/$1/user/cellsv2-layout.html
redirectmatch 301 ^/nova/([^/]+)/user/config-drive.html$ /nova/$1/user/metadata.html
redirectmatch 301 ^/nova/([^/]+)/user/metadata-service.html$ /nova/$1/user/metadata.html
redirectmatch 301 ^/nova/([^/]+)/user/placement.html$ /placement/$1/
redirectmatch 301 ^/nova/([^/]+)/user/user-data.html$ /nova/$1/user/metadata.html
redirectmatch 301 ^/nova/([^/]+)/user/vendordata.html$ /nova/$1/user/metadata.html
redirectmatch 301 ^/nova/([^/]+)/vendordata.html$ /nova/$1/user/metadata.html
redirectmatch 301 ^/nova/([^/]+)/vmstates.html$ /nova/$1/reference/vm-states.html
redirectmatch 301 ^/nova/([^/]+)/wsgi.html$ /nova/$1/user/wsgi.html
redirectmatch 301 ^/nova/([^/]+)/user/cellsv2_layout.html$ /nova/$1/user/cellsv2-layout.html
redirectmatch 301 ^/nova/latest/user/placement.html$ /placement/latest/


@ -0,0 +1,115 @@
=============
Config drives
=============
.. note::
This section provides deployment information about the config drive feature.
For end-user information about the config drive feature and instance metadata
in general, refer to the :doc:`user guide </user/metadata>`.
Config drives are special drives that are attached to an instance when it boots.
The instance can mount this drive and read files from it to get information that
is normally available through :doc:`the metadata service
</admin/metadata-service>`.
There are many use cases for the config drive. One such use case is to pass a
networking configuration when you do not use DHCP to assign IP addresses to
instances. For example, you might pass the IP address configuration for the
instance through the config drive, which the instance can mount and access
before you configure the network settings for the instance. Another common
reason to use config drives is load. If running something like the OpenStack
puppet providers in your instances, they can hit the :doc:`metadata servers
</admin/metadata-service>` every fifteen minutes, simultaneously for every
instance you have. They are just checking in and building facts, but it is not
an insignificant load. With a config drive, that becomes a local (cached) disk
read. Finally, using a config drive means you're not dependent on the metadata
service being up, reachable, or performing well to do things like reboot your
instance that runs `cloud-init`_ at the beginning.
Any modern guest operating system that is capable of mounting an ISO 9660 or
VFAT file system can use the config drive.
Requirements and guidelines
---------------------------
To use the config drive, the compute host and image must meet the following
requirements.
.. rubric:: Compute host requirements
The following virt drivers support the config drive: libvirt, XenServer,
Hyper-V, VMware, and (since 17.0.0 Queens) PowerVM. The Bare Metal service also
supports the config drive.
- To use config drives with libvirt, XenServer, or VMware, you must first
install the :command:`genisoimage` package on each compute host. Use the
:oslo.config:option:`mkisofs_cmd` config option to set the path where you
install the :command:`genisoimage` program. If :command:`genisoimage` is in
the same path as the :program:`nova-compute` service, you do not need to set
this flag.
- To use config drives with Hyper-V, you must set the
:oslo.config:option:`mkisofs_cmd` config option to the full path to an
:command:`mkisofs.exe` installation. Additionally, you must set the
:oslo.config:option:`hyperv.qemu_img_cmd` config option to the full path to a
:command:`qemu-img` command installation.
- To use config drives with PowerVM or the Bare Metal service, you do not need
to prepare anything.
.. rubric:: Image requirements
An image built with a recent version of the `cloud-init`_ package can
automatically access metadata passed through the config drive. The cloud-init
package version 0.7.1 works with Ubuntu, Fedora-based images (such as Red Hat
Enterprise Linux) and openSUSE-based images (such as SUSE Linux Enterprise
Server). If an image does not have the cloud-init package installed, you must
customize the image to run a script that mounts the config drive on boot, reads
the data from the drive, and takes appropriate action such as adding the public
key to an account. For more details about how data is organized on the config
drive, refer to the :ref:`user guide <metadata-config-drive>`.
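To illustrate the kind of processing such a first-boot script performs, here is
a sketch in Python that extracts key material from a ``meta_data.json``
document (the sample content is hypothetical; real documents contain more
fields and live at ``openstack/latest/meta_data.json`` on the drive):

```python
import json

def extract_public_keys(meta_data_json: str) -> dict:
    # meta_data.json maps keypair names to public key material under
    # "public_keys"; a first-boot script would append these values to the
    # target account's authorized_keys file.
    return json.loads(meta_data_json).get("public_keys", {})

sample = json.dumps({
    "uuid": "hypothetical-instance-uuid",
    "public_keys": {"mykey": "ssh-rsa AAAAB3... user@example"},
})
print(extract_public_keys(sample))  # {'mykey': 'ssh-rsa AAAAB3... user@example'}
```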
Configuration
-------------
The :program:`nova-compute` service accepts the following config drive-related
options:
- :oslo.config:option:`api.config_drive_skip_versions`
- :oslo.config:option:`force_config_drive`
- :oslo.config:option:`config_drive_format`
If using the Hyper-V compute driver, the following additional options are
supported:
- :oslo.config:option:`hyperv.config_drive_cdrom`
For example, to ensure nova always provides a config drive to instances but
versions ``2018-08-27`` (Rocky) and ``2017-02-22`` (Ocata) are skipped, add the
following to :file:`nova.conf`:
.. code-block:: ini
[DEFAULT]
force_config_drive = True
[api]
config_drive_skip_versions = 2018-08-27 2017-02-22
.. note::
The ``img_config_drive`` image metadata property can be used to force enable
the config drive. In addition, users can explicitly request a config drive
when booting instances. For more information, refer to the :ref:`user guide
<metadata-config-drive>`.
.. note::
If using Xen with a config drive, you must use the
:oslo.config:option:`xenserver.disable_agent` config option to disable the
agent.
.. _cloud-init: https://cloudinit.readthedocs.io/en/latest/


@ -23,6 +23,7 @@ operating system, and exposes functionality over a web-based API.
arch.rst
availability-zones.rst
cells.rst
config-drive.rst
configuration/index.rst
configuring-migrations.rst
cpu-topologies.rst
@ -35,6 +36,7 @@ operating system, and exposes functionality over a web-based API.
manage-the-cloud.rst
manage-users.rst
manage-volumes.rst
metadata-service.rst
migration.rst
migrate-instance-with-snapshot.rst
networking-nova.rst
@ -54,3 +56,4 @@ operating system, and exposes functionality over a web-based API.
system-admin.rst
secure-live-migration-with-qemu-native-tls.rst
mitigation-for-Intel-MDS-security-flaws.rst
vendordata.rst


@ -0,0 +1,193 @@
================
Metadata service
================
.. note::
This section provides deployment information about the metadata service. For
end-user information about the metadata service and instance metadata in
general, refer to the :ref:`user guide <metadata-service>`.
.. note::
This section provides deployment information about the metadata service using
neutron. This functions very differently when deployed with the deprecated
:program:`nova-network` service.
For information about deploying the metadata service with the
:program:`nova-network` service, refer to the :ref:`nova-network
documentation <metadata-service-deploy>`.
The metadata service provides a way for instances to retrieve instance-specific
data. Instances access the metadata service at ``http://169.254.169.254``. The
metadata service supports two sets of APIs - an OpenStack metadata API and an
EC2-compatible API - and also exposes vendordata and user data. Both the
OpenStack metadata and EC2-compatible APIs are versioned by date.
The metadata service can be run globally, as part of the :program:`nova-api`
application, or on a per-cell basis, as part of the standalone
:program:`nova-api-metadata` application. A detailed comparison is provided in
the :ref:`cells V2 guide <cells-v2-layout-metadata-api>`.
.. versionchanged:: 19.0.0
The ability to run the nova metadata API service on a per-cell basis was
added in Stein. For versions prior to this release, you should not use the
standalone :program:`nova-api-metadata` application for multiple cells.
Guests access the service at ``169.254.169.254``. The networking service,
neutron, is responsible for intercepting these requests and adding HTTP headers
which uniquely identify the source of the request before forwarding it to the
metadata API server. For the Open vSwitch and Linux Bridge backends provided
with neutron, the flow looks like this:
#. The instance sends an HTTP request for metadata to ``169.254.169.254``.
#. This request hits either the router or the DHCP namespace, depending on the
routing configuration in the instance.
#. The metadata proxy service in the namespace adds the following info to the
request:
- Instance IP (``X-Forwarded-For`` header)
- Router or Network-ID (``X-Neutron-Network-Id`` or ``X-Neutron-Router-Id``
header)
#. The metadata proxy service sends this request to the metadata agent (outside
the namespace) via a UNIX domain socket.
#. The :program:`neutron-metadata-agent` application forwards the request to the
nova metadata API service by adding some new headers (instance ID and Tenant
ID) to the request.
This flow may vary if a different networking backend is used.
Neutron and nova must be configured with a shared
secret. Neutron uses this secret to sign the Instance-ID header of the metadata
request to prevent spoofing. This secret is configured through the
:oslo.config:option:`neutron.metadata_proxy_shared_secret` config option in nova
and the equivalent ``metadata_proxy_shared_secret`` config option in neutron.
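The scheme is essentially an HMAC over the instance ID. A minimal Python
sketch of the idea, under the assumption that both sides share the same
secret (function names are illustrative; the actual header handling lives in
neutron and nova):

```python
import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    # The neutron metadata agent derives a signature from the shared secret
    # and the instance ID and forwards it alongside the request headers.
    return hmac.new(shared_secret.encode(), instance_id.encode(),
                    hashlib.sha256).hexdigest()

def verify_signature(shared_secret: str, instance_id: str,
                     signature: str) -> bool:
    # Nova recomputes the signature and compares it in constant time, so a
    # request cannot claim another instance's identity without the secret.
    expected = sign_instance_id(shared_secret, instance_id)
    return hmac.compare_digest(expected, signature)

sig = sign_instance_id("s3cr3t", "an-instance-uuid")
print(verify_signature("s3cr3t", "an-instance-uuid", sig))  # True
```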
Configuration
-------------
The :program:`nova-api` application accepts the following metadata
service-related options:
- :oslo.config:option:`enabled_apis`
- :oslo.config:option:`enabled_ssl_apis`
- :oslo.config:option:`neutron.service_metadata_proxy`
- :oslo.config:option:`neutron.metadata_proxy_shared_secret`
- :oslo.config:option:`api.metadata_cache_expiration`
- :oslo.config:option:`api.use_forwarded_for`
- :oslo.config:option:`api.local_metadata_per_cell`
- :oslo.config:option:`api.dhcp_domain`
.. note::
This list excludes configuration options related to the vendordata feature.
Refer to :doc:`vendordata feature documentation </admin/vendordata>` for
information on configuring this.
For example, to configure the :program:`nova-api` application to serve the
metadata API, without SSL, using the ``StaticJSON`` vendordata provider, add the
following to a :file:`nova-api.conf` file:
.. code-block:: ini
[DEFAULT]
enabled_apis = osapi_compute,metadata
enabled_ssl_apis =
metadata_listen = 0.0.0.0
metadata_listen_port = 0
metadata_workers = 4
[neutron]
service_metadata_proxy = True
[api]
dhcp_domain =
metadata_cache_expiration = 15
use_forwarded_for = False
local_metadata_per_cell = False
vendordata_providers = StaticJSON
vendordata_jsonfile_path = /etc/nova/vendor_data.json
.. note::
This does not include configuration options that are not metadata-specific
but are nonetheless required, such as
:oslo.config:option:`api.auth_strategy`.
Configuring the application to use the ``DynamicJSON`` vendordata provider is
more involved and is not covered here.
The :program:`nova-api-metadata` application accepts almost the same options:
- :oslo.config:option:`neutron.service_metadata_proxy`
- :oslo.config:option:`neutron.metadata_proxy_shared_secret`
- :oslo.config:option:`api.metadata_cache_expiration`
- :oslo.config:option:`api.use_forwarded_for`
- :oslo.config:option:`api.local_metadata_per_cell`
- :oslo.config:option:`api.dhcp_domain`
.. note::
This list excludes configuration options related to the vendordata feature.
Refer to :doc:`vendordata feature documentation </admin/vendordata>` for
information on configuring this.
For example, to configure the :program:`nova-api-metadata` application to serve
the metadata API, without SSL, add the following to a :file:`nova-api.conf`
file:
.. code-block:: ini
[DEFAULT]
metadata_listen = 0.0.0.0
metadata_listen_port = 0
metadata_workers = 4
[neutron]
service_metadata_proxy = True
[api]
dhcp_domain =
metadata_cache_expiration = 15
use_forwarded_for = False
local_metadata_per_cell = False
.. note::
This does not include configuration options that are not metadata-specific
but are nonetheless required, such as
:oslo.config:option:`api.auth_strategy`.
For information about configuring the neutron side of the metadata service,
refer to the :neutron-doc:`neutron configuration guide
<configuration/metadata-agent.html>`.
Config drives
-------------
Config drives are special drives that are attached to an instance when it boots.
The instance can mount this drive and read files from it to get information that
is normally available through the metadata service. For more information, refer
to :doc:`/admin/config-drive` and the :ref:`user guide <metadata-config-drive>`.
Vendordata
----------
Vendordata provides a way to pass vendor or deployment-specific information to
instances. For more information, refer to :doc:`/admin/vendordata` and the
:ref:`user guide <metadata-vendordata>`.
User data
---------
User data is a blob of data that the user can specify when they launch an
instance. For more information, refer to :ref:`the user guide
<metadata-userdata>`.
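As an aside on format: the compute API expects user data to arrive
base64-encoded in the server-create request (clients such as
python-openstackclient handle this automatically), and the instance later
retrieves the raw bytes. A minimal sketch:

```python
import base64

# A hypothetical cloud-init script passed as user data. The API request
# carries it base64-encoded; the instance sees the decoded bytes via the
# metadata API or config drive.
script = b"#!/bin/sh\necho 'configured via user data'\n"
encoded = base64.b64encode(script).decode()

print(base64.b64decode(encoded) == script)  # True
```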


@ -311,12 +311,16 @@ command:
Metadata service
~~~~~~~~~~~~~~~~
.. TODO: This should be moved into its own document once we add information
about integrating this with neutron rather than nova-network.
.. note::
This section provides deployment information about the metadata service. For
end-user information about the metadata service, see the
:doc:`user guide </user/metadata-service>`.
This section provides deployment information about the metadata service
using the deprecated :program:`nova-network` service.
For information about deploying the metadata service with neutron, refer to
the :doc:`metadata service documentation </admin/metadata-service>`.
For end-user information about the metadata service and instance metadata in
general, refer to the :doc:`user guide </user/metadata>`.
The metadata service is implemented by either the ``nova-api`` service or the
``nova-api-metadata`` service. Note that the ``nova-api-metadata`` service is
@ -349,9 +353,6 @@ The default Compute service settings assume that ``nova-network`` and
``metadata_host`` configuration option to the IP address of the host where
``nova-api`` is running.
.. TODO: Consider grouping the metadata options into the same [metadata]
group and then we can just link to that in the generated config option doc.
.. list-table:: Description of metadata configuration options
:header-rows: 2
@ -380,7 +381,7 @@ The default Compute service settings assume that ``nova-network`` and
changes to take effect.
* - :oslo.config:option:`vendordata_providers <api.vendordata_providers>` = StaticJSON
- (ListOpt) A list of vendordata providers. See
:doc:`Vendordata </user/vendordata>` for more information.
:doc:`Vendordata </admin/vendordata>` for more information.
* - :oslo.config:option:`vendordata_jsonfile_path <api.vendordata_jsonfile_path>` = None
- (StrOpt) File to load JSON formatted vendor data from


@ -0,0 +1,178 @@
==========
Vendordata
==========
.. note::
This section provides deployment information about the vendordata feature.
For end-user information about the vendordata feature and instance metadata
in general, refer to the :doc:`user guide </user/metadata>`.
The *vendordata* feature provides a way to pass vendor or deployment-specific
information to instances. Users can access this information through :doc:`the
metadata service </admin/metadata-service>` or via :doc:`config drives
</admin/config-drive>`.
There are two vendordata modules provided with nova: ``StaticJSON`` and
``DynamicJSON``.
``StaticJSON``
--------------
The ``StaticJSON`` module includes the contents of a static JSON file loaded
from disk. This can be used for things which don't change between instances,
such as the location of the corporate puppet server. It is the default provider.
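For example, a hypothetical :file:`/etc/nova/vendor_data.json` could contain
the following; its contents are served verbatim to all instances as
``openstack/latest/vendor_data.json``:

```json
{
    "puppet_server": "puppet.example.com",
    "environment": "production"
}
```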
Configuration
~~~~~~~~~~~~~
The service you must configure to enable the ``StaticJSON`` vendordata module
depends on how guests are accessing vendordata. If using the metadata service,
configuration applies to either :program:`nova-api` or
:program:`nova-api-metadata`, depending on the deployment, while if using
config drives, configuration applies to :program:`nova-compute`. However,
configuration is otherwise the same and the following options apply:
- :oslo.config:option:`api.vendordata_providers`
- :oslo.config:option:`api.vendordata_jsonfile_path`
Refer to the :doc:`metadata service </admin/metadata-service>` and :doc:`config
drive </admin/config-drive>` documentation for more information on how to
configure the required services.
``DynamicJSON``
---------------
The ``DynamicJSON`` module can make a request to an external REST service to
determine what metadata to add to an instance. This is how we recommend you
generate things like Active Directory tokens which change per instance.
When used, the ``DynamicJSON`` module will make a request to any REST services
listed in the :oslo.config:option:`api.vendordata_dynamic_targets` configuration
option. There can be more than one of these services, but note that they will
be queried once per metadata request from the instance, which can mean a lot of
traffic depending on your configuration and the configuration of the instance.
The following data is passed to your REST service as a JSON encoded POST:
.. list-table::
:header-rows: 1
* - Key
- Description
* - ``project-id``
- The ID of the project that owns this instance.
* - ``instance-id``
- The UUID of this instance.
* - ``image-id``
- The ID of the image used to boot this instance.
* - ``user-data``
- As specified by the user at boot time.
* - ``hostname``
- The hostname of the instance.
* - ``metadata``
- As specified by the user at boot time.
Metadata fetched from the REST service will appear in a new file called
``vendor_data2.json``, with a path (either in the metadata service URL or in
the config drive) like this::
openstack/latest/vendor_data2.json
For each dynamic target, there will be an entry in the JSON file named after
that target. For example:
.. code-block:: json
{
"testing": {
"value1": 1,
"value2": 2,
"value3": "three"
}
}
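A service producing an entry like the ``testing`` one above can be sketched as
a simple request handler. This is a hypothetical illustration, not a real
vendordata service; the HTTP server and keystone middleware wiring are
omitted:

```python
import json

def vendordata_endpoint(body: bytes) -> bytes:
    # Nova POSTs instance details (project-id, instance-id, hostname, ...)
    # as JSON; whatever JSON this returns appears under the target's name
    # in vendor_data2.json.
    request = json.loads(body)
    response = {
        "value1": 1,
        "value2": 2,
        "value3": "three",
        # Hypothetical per-instance value derived from the request.
        "hostname": request["hostname"],
    }
    return json.dumps(response).encode()

sample_post = json.dumps({
    "project-id": "hypothetical-project-id",
    "instance-id": "hypothetical-instance-uuid",
    "image-id": "hypothetical-image-id",
    "user-data": None,
    "hostname": "myinstance",
    "metadata": {},
}).encode()
print(json.loads(vendordata_endpoint(sample_post))["hostname"])  # myinstance
```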
The `novajoin`__ project provides a dynamic vendordata service to manage host
instantiation in an IPA server.
__ https://github.com/openstack/novajoin
Deployment considerations
~~~~~~~~~~~~~~~~~~~~~~~~~
Nova provides authentication to external metadata services in order to provide
some level of certainty that the request came from nova. This is done by
providing a service token with the request -- you can then just deploy your
metadata service with the keystone authentication WSGI middleware. This is
configured using the keystone authentication parameters in the
:oslo.config:group:`vendordata_dynamic_auth` configuration group.
Configuration
~~~~~~~~~~~~~
As with ``StaticJSON``, the service you must configure to enable the
``DynamicJSON`` vendordata module depends on how guests are accessing
vendordata. If using the metadata service, configuration applies to either
:program:`nova-api` or :program:`nova-api-metadata`, depending on the
deployment, while if using configuration drives, configuration applies to
:program:`nova-compute`. However, configuration is otherwise the same and the
following options apply:
- :oslo.config:option:`api.vendordata_providers`
- :oslo.config:option:`api.vendordata_dynamic_ssl_certfile`
- :oslo.config:option:`api.vendordata_dynamic_connect_timeout`
- :oslo.config:option:`api.vendordata_dynamic_read_timeout`
- :oslo.config:option:`api.vendordata_dynamic_failure_fatal`
- :oslo.config:option:`api.vendordata_dynamic_targets`
Refer to the :doc:`metadata service </admin/metadata-service>` and :doc:`config
drive </admin/config-drive>` documentation for more information on how to
configure the required services.
In addition, there are also many options related to authentication. These are
provided by :keystone-doc:`keystone <>` but are listed below for completeness:
- :oslo.config:option:`vendordata_dynamic_auth.cafile`
- :oslo.config:option:`vendordata_dynamic_auth.certfile`
- :oslo.config:option:`vendordata_dynamic_auth.keyfile`
- :oslo.config:option:`vendordata_dynamic_auth.insecure`
- :oslo.config:option:`vendordata_dynamic_auth.timeout`
- :oslo.config:option:`vendordata_dynamic_auth.collect_timing`
- :oslo.config:option:`vendordata_dynamic_auth.split_loggers`
- :oslo.config:option:`vendordata_dynamic_auth.auth_type`
- :oslo.config:option:`vendordata_dynamic_auth.auth_section`
- :oslo.config:option:`vendordata_dynamic_auth.auth_url`
- :oslo.config:option:`vendordata_dynamic_auth.system_scope`
- :oslo.config:option:`vendordata_dynamic_auth.domain_id`
- :oslo.config:option:`vendordata_dynamic_auth.domain_name`
- :oslo.config:option:`vendordata_dynamic_auth.project_id`
- :oslo.config:option:`vendordata_dynamic_auth.project_name`
- :oslo.config:option:`vendordata_dynamic_auth.project_domain_id`
- :oslo.config:option:`vendordata_dynamic_auth.project_domain_name`
- :oslo.config:option:`vendordata_dynamic_auth.trust_id`
- :oslo.config:option:`vendordata_dynamic_auth.default_domain_id`
- :oslo.config:option:`vendordata_dynamic_auth.default_domain_name`
- :oslo.config:option:`vendordata_dynamic_auth.user_id`
- :oslo.config:option:`vendordata_dynamic_auth.username`
- :oslo.config:option:`vendordata_dynamic_auth.user_domain_id`
- :oslo.config:option:`vendordata_dynamic_auth.user_domain_name`
- :oslo.config:option:`vendordata_dynamic_auth.password`
- :oslo.config:option:`vendordata_dynamic_auth.tenant_id`
- :oslo.config:option:`vendordata_dynamic_auth.tenant_name`
Refer to the :keystone-doc:`keystone documentation </configuration/index.html>`
for information on configuring these.
References
----------
* Michael Still's talk from the Queens summit in Sydney, `Metadata, User Data,
Vendor Data, oh my!`__
* Michael's blog post on `deploying a simple vendordata service`__ which
provides more details and sample code to supplement the documentation above.
__ https://www.openstack.org/videos/sydney-2017/metadata-user-data-vendor-data-oh-my
__ https://www.madebymikal.com/nova-vendordata-deployment-an-excessively-detailed-guide/


@ -84,8 +84,8 @@ resources will help you get started with consuming the API directly.
* :doc:`Block Device Mapping </user/block-device-mapping>`: One of the trickier
parts to understand is the Block Device Mapping parameters used to connect
specific block devices to computes. This deserves its own deep dive.
* :doc:`Configuration drive </user/config-drive>`: Provide information to the
guest instance when it is created.
* :doc:`Metadata </user/metadata>`: Provide information to the guest instance
when it is created.
Nova can be configured to emit notifications over RPC.
@ -158,9 +158,10 @@ Once you are running nova, the following information is extremely useful.
configured, and how that will impact where compute instances land in your
environment. If you are seeing unexpected distribution of compute instances
in your hosts, you'll want to dive into this configuration.
* :doc:`Exposing custom metadata to compute instances </user/vendordata>`: How and
when you might want to extend the basic metadata exposed to compute instances
(either via metadata server or config drive) for your specific purposes.
* :doc:`Exposing custom metadata to compute instances </admin/vendordata>`: How
and when you might want to extend the basic metadata exposed to compute
instances (either via metadata server or config drive) for your specific
purposes.
Reference Material
------------------
@ -244,7 +245,6 @@ looking parts of our architecture. These are collected below.
user/cellsv2-layout
user/certificate-validation
user/conductor
user/config-drive
user/feature-classification
user/filter-scheduler
user/flavors
@ -252,8 +252,6 @@ looking parts of our architecture. These are collected below.
user/quotas
user/support-matrix
user/upgrade
user/user-data
user/vendordata
user/wsgi


@ -294,8 +294,8 @@ details.
Nova Metadata API service
~~~~~~~~~~~~~~~~~~~~~~~~~
Starting from the Stein release, the Nova Metadata API service
can be run either globally or per cell using the
Starting from the Stein release, the :doc:`nova metadata API service
</admin/metadata-service>` can be run either globally or per cell using the
:oslo.config:option:`api.local_metadata_per_cell` configuration option.
**Global**
@ -303,20 +303,20 @@ can be run either globally or per cell using the
If you have networks that span cells, you might need to run Nova metadata API
globally. When running globally, it should be configured as an API-level
service with access to the :oslo.config:option:`api_database.connection`
information. The nova metadata API service must not be run as a standalone
service in this case (e.g. must not be run via the nova-api-metadata script).
information. The nova metadata API service **must not** be run as a standalone
service, using the :program:`nova-api-metadata` service, in this case.
**Local per cell**
Running Nova metadata API per cell can have better performance and data
isolation in a muli-cell deployment. If your networks are segmented along
cell boundaries, then you can run Nova metadata API service per cell. If
you choose to run it per cell, you should also configure each
`Neutron metadata-agent`_ to point to the corresponding nova-metadata-api.
The nova metadata API service must be run as a standalone service in this
case (e.g. must be run via the nova-api-metadata script).
.. _Neutron metadata-agent: https://docs.openstack.org/neutron/latest/configuration/metadata-agent.html?#DEFAULT.nova_metadata_host
isolation in a multi-cell deployment. If your networks are segmented along
cell boundaries, then you can run Nova metadata API service per cell. If you
choose to run it per cell, you should also configure each
:neutron-doc:`neutron-metadata-agent
<configuration/metadata-agent.html?#DEFAULT.nova_metadata_host>` service to
point to the corresponding :program:`nova-api-metadata`. The nova metadata API
service **must** be run as a standalone service, using the
:program:`nova-api-metadata` service, in this case.
Operations Requiring upcalls


@ -1,296 +0,0 @@
=======================================
Store metadata on a configuration drive
=======================================
You can configure OpenStack to write metadata to a special configuration drive
that attaches to the instance when it boots. The instance can mount this drive
and read files from it to get information that is normally available through
the :doc:`metadata service </user/metadata-service>`.
This metadata is different from the :doc:`user data </user/user-data>`.
One use case for using the configuration drive is to pass a networking
configuration when you do not use DHCP to assign IP addresses to
instances. For example, you might pass the IP address configuration for
the instance through the configuration drive, which the instance can
mount and access before you configure the network settings for the
instance.
Any modern guest operating system that is capable of mounting an ISO
9660 or VFAT file system can use the configuration drive.
Requirements and guidelines
~~~~~~~~~~~~~~~~~~~~~~~~~~~
To use the configuration drive, you must follow the following
requirements for the compute host and image.
**Compute host requirements**
- The following hypervisors support the configuration drive: libvirt,
XenServer, Hyper-V, VMware, and (since 17.0.0 Queens) PowerVM.
Also, the Bare Metal service supports the configuration drive.
- To use configuration drive with libvirt, XenServer, or VMware, you
must first install the genisoimage package on each compute host.
Otherwise, instances do not boot properly.
Use the ``mkisofs_cmd`` flag to set the path where you install the
genisoimage program. If genisoimage is in same path as the
``nova-compute`` service, you do not need to set this flag.
- To use configuration drive with Hyper-V, you must set the
``mkisofs_cmd`` value to the full path to an ``mkisofs.exe``
installation. Additionally, you must set the ``qemu_img_cmd`` value
in the ``hyperv`` configuration section to the full path to an
:command:`qemu-img` command installation.
- To use configuration drive with PowerVM or the Bare Metal service, no
  additional preparation is needed: both handle the configuration drive
  natively.
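For example, a sketch of the relevant ``nova.conf`` settings on a libvirt
compute host might look like the following; the binary path is illustrative,
not a default:

```ini
[DEFAULT]
# Only needed when genisoimage is not on the nova-compute service's path
mkisofs_cmd = /usr/bin/genisoimage
```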
**Image requirements**
- An image built with a recent version of the cloud-init package can
  automatically access metadata passed through the configuration drive.
  The cloud-init package version 0.7.1 works with Ubuntu, Fedora-based
  images (such as Red Hat Enterprise Linux) and openSUSE-based images
  (such as SUSE Linux Enterprise Server).
- If an image does not have the cloud-init package installed, you must
  customize the image to run a script that mounts the configuration drive on
  boot, reads the data from the drive, and takes appropriate action such as
  adding the public key to an account. The layout of the data on the
  configuration drive is described below.
- If you use Xen with a configuration drive, use the
:oslo.config:option:`xenserver.disable_agent` configuration parameter to
disable the agent.
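To make the customization requirement above concrete, here is a minimal
Python sketch of such a boot script's core logic. It assumes the config drive
is already mounted at a caller-supplied path and follows the
``openstack/latest/meta_data.json`` layout described in this guide; the
function name and paths are illustrative only, not shipped tooling.

```python
import json
import os


def install_public_keys(mount_point, authorized_keys_path):
    """Read public keys from a mounted config drive and append each one
    to an ``authorized_keys`` file. Returns the key names installed."""
    meta_path = os.path.join(mount_point, "openstack", "latest",
                             "meta_data.json")
    with open(meta_path) as f:
        metadata = json.load(f)

    keys = metadata.get("public_keys", {})
    with open(authorized_keys_path, "a") as f:
        for name in sorted(keys):
            key = keys[name]
            # Keys in meta_data.json usually end with a newline already
            f.write(key if key.endswith("\n") else key + "\n")
    return sorted(keys)
```

A real image would mount the drive first and would typically also set the
hostname and apply any network configuration found on the drive.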
**Guidelines**
- Do not rely on the presence of the EC2 metadata in the configuration
drive, because this content might be removed in a future release. For
example, do not rely on files in the ``ec2`` directory.
- When you create images that access configuration drive data and
multiple directories are under the ``openstack`` directory, always
select the highest API version by date that your consumer supports.
For example, if your guest image supports the 2012-03-05, 2012-08-05,
and 2013-04-13 versions, try 2013-04-13 first and fall back to a
previous version if 2013-04-13 is not present.
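The version-selection guideline above can be sketched as a small helper. This
is an illustration only, not shipped tooling; it relies on the fact that the
date-based version names sort correctly as plain strings.

```python
def select_metadata_version(available, supported):
    """Return the newest metadata version present on the drive that the
    consumer also supports, i.e. the newest ``openstack/<version>``
    directory name the guest knows how to parse."""
    usable = set(available) & set(supported)
    if not usable:
        raise LookupError("no mutually supported metadata version")
    # Date-based names such as 2013-04-13 sort correctly as strings
    return max(usable)
```

For example, a guest that supports ``2012-08-10`` and ``2013-04-13`` would
fall back to ``2012-08-10`` on a drive whose newest version is ``2013-04-04``.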
Enable and access the configuration drive
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. To enable the configuration drive, pass the ``--config-drive true``
parameter to the :command:`openstack server create` command.
The following example enables the configuration drive and passes user
data, a user data file, and two key/value metadata pairs, all of which are
accessible from the configuration drive:
.. code-block:: console
$ openstack server create --config-drive true --image my-image-name \
--flavor 1 --key-name mykey --user-data ./my-user-data.txt \
--property role=webservers --property essential=false MYINSTANCE
You can also configure the Compute service to always create a
configuration drive by setting the following option in the
``/etc/nova/nova.conf`` file:
.. code-block:: ini

   [DEFAULT]
   force_config_drive = true
It is also possible to force the config drive by specifying the
``img_config_drive=mandatory`` property in the image.
.. note::
If a user passes the ``--config-drive true`` flag to the
:command:`openstack server create` command, an administrator cannot
disable the configuration drive.
#. If your guest operating system supports accessing disk by label, you
can mount the configuration drive as the
``/dev/disk/by-label/configurationDriveVolumeLabel`` device. In the
following example, the configuration drive has the ``config-2``
volume label:
.. code-block:: console
# mkdir -p /mnt/config
# mount /dev/disk/by-label/config-2 /mnt/config
.. note::
Ensure that you use at least version 0.3.1 of CirrOS for
configuration drive support.
If your guest operating system does not use ``udev``, the
``/dev/disk/by-label`` directory is not present.
You can use the :command:`blkid` command to identify the block device that
corresponds to the configuration drive. For example, when you boot
the CirrOS image with the ``m1.tiny`` flavor, the device is
``/dev/vdb``:
.. code-block:: console
# blkid -t LABEL="config-2" -odevice
.. code-block:: console
/dev/vdb
Once identified, you can mount the device:
.. code-block:: console
# mkdir -p /mnt/config
# mount /dev/vdb /mnt/config
Configuration drive contents
----------------------------
In this example, the contents of the configuration drive are as follows::
ec2/2009-04-04/meta-data.json
ec2/2009-04-04/user-data
ec2/latest/meta-data.json
ec2/latest/user-data
openstack/2012-08-10/meta_data.json
openstack/2012-08-10/user_data
openstack/content
openstack/content/0000
openstack/content/0001
openstack/latest/meta_data.json
openstack/latest/user_data
The files that appear on the configuration drive depend on the arguments
that you pass to the :command:`openstack server create` command.
Vendor-specific data can also be exposed to a guest in the configuration
drive. See :doc:`Vendordata </admin/vendordata>` for details.
OpenStack metadata format
-------------------------
The following example shows the contents of the
``openstack/2012-08-10/meta_data.json`` and
``openstack/latest/meta_data.json`` files. These files are identical.
The file contents are formatted for readability.
.. code-block:: json
{
"availability_zone": "nova",
"hostname": "test.novalocal",
"launch_index": 0,
"name": "test",
"meta": {
"role": "webservers",
"essential": "false"
},
"public_keys": {
"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDBqUfVvCSez0/Wfpd8dLLgZXV9GtXQ7hnMN+Z0OWQUyebVEHey1CXuin0uY1cAJMhUq8j98SiW+cU0sU4J3x5l2+xi1bodDm1BtFWVeLIOQINpfV1n8fKjHB+ynPpe1F6tMDvrFGUlJs44t30BrujMXBe8Rq44cCk6wqyjATA3rQ== Generated by Nova\n"
},
"uuid": "83679162-1378-4288-a2d4-70e13ec132aa"
}
EC2 metadata format
-------------------
The following example shows the contents of the
``ec2/2009-04-04/meta-data.json`` and the ``ec2/latest/meta-data.json``
files. These files are identical. The file contents are formatted to
improve readability.
.. code-block:: json
{
"ami-id": "ami-00000001",
"ami-launch-index": 0,
"ami-manifest-path": "FIXME",
"block-device-mapping": {
"ami": "sda1",
"ephemeral0": "sda2",
"root": "/dev/sda1",
"swap": "sda3"
},
"hostname": "test.novalocal",
"instance-action": "none",
"instance-id": "i-00000001",
"instance-type": "m1.tiny",
"kernel-id": "aki-00000002",
"local-hostname": "test.novalocal",
"local-ipv4": null,
"placement": {
"availability-zone": "nova"
},
"public-hostname": "test.novalocal",
"public-ipv4": "",
"public-keys": {
"0": {
"openssh-key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDBqUfVvCSez0/Wfpd8dLLgZXV9GtXQ7hnMN+Z0OWQUyebVEHey1CXuin0uY1cAJMhUq8j98SiW+cU0sU4J3x5l2+xi1bodDm1BtFWVeLIOQINpfV1n8fKjHB+ynPpe1F6tMDvrFGUlJs44t30BrujMXBe8Rq44cCk6wqyjATA3rQ== Generated by Nova\n"
}
},
"ramdisk-id": "ari-00000003",
"reservation-id": "r-7lfps8wj",
"security-groups": [
"default"
]
}
User data
---------
The ``openstack/2012-08-10/user_data``, ``openstack/latest/user_data``,
``ec2/2009-04-04/user-data``, and ``ec2/latest/user-data`` files are
present only if the ``--user-data`` flag and the contents of the user
data file are passed to the :command:`openstack server create` command.
Configuration drive format
--------------------------
The default format of the configuration drive is an ISO 9660 file
system. To explicitly specify the ISO 9660 format, add the following
line to the ``/etc/nova/nova.conf`` file:

.. code-block:: ini

   [DEFAULT]
   config_drive_format = iso9660
By default, the configuration drive is attached to the instance as a disk
drive. To attach it as a CD drive instead, add the following lines to the
``/etc/nova/nova.conf`` file:

.. code-block:: ini

   [hyperv]
   config_drive_cdrom = true
.. note:: Attaching a configuration drive as a CD drive is only supported
by the Hyper-V compute driver.
For legacy reasons, you can configure the configuration drive to use
VFAT format instead of ISO 9660. It is unlikely that you would require
VFAT format because ISO 9660 is widely supported across operating
systems. However, to use the VFAT format, add the following line to the
``/etc/nova/nova.conf`` file:
.. code-block:: ini

   [DEFAULT]
   config_drive_format = vfat
If you choose VFAT, the configuration drive is 64 MB.
.. deprecated:: 19.0.0
The :oslo.config:option:`config_drive_format` option was deprecated in 19.0.0
(Stein). The option was originally added as a workaround for a bug in
libvirt, `#1246201`__, that was resolved in libvirt v1.2.17. As a result,
this option is no longer necessary or useful.
__ https://bugs.launchpad.net/nova/+bug/1246201


@@ -337,7 +337,7 @@ notes=This ensures the user data provided by the user when booting
a server is available in one of the expected config drive locations.
maturity=complete
api_doc_link=http://developer.openstack.org/api-ref/compute/#create-server
admin_doc_link=https://docs.openstack.org/nova/latest/user/config-drive.html
admin_doc_link=https://docs.openstack.org/nova/latest/admin/config-drive.html
tempest_test_uuids=7fff3fb3-91d8-4fd0-bd7d-0204f1f180ba
cli=
libvirt-kvm=complete


@@ -9,8 +9,7 @@ End user guide
:maxdepth: 1
launch-instances
config-drive
metadata-service
metadata
certificate-validation
resize
reboot
@@ -80,6 +79,7 @@ Once you are running nova, the following information is extremely useful.
environment. If you are seeing unexpected distribution of compute instances
in your hosts, you'll want to dive into this configuration.
* :doc:`Exposing custom metadata to compute instances </user/vendordata>`: How and
when you might want to extend the basic metadata exposed to compute instances
(either via metadata server or config drive) for your specific purposes.
* :doc:`Exposing custom metadata to compute instances </admin/vendordata>`: How
and when you might want to extend the basic metadata exposed to compute
instances (either via metadata server or config drive) for your specific
purposes.


@@ -18,8 +18,8 @@ Follow the steps below to launch an instance from an image.
For example, you can add a description for your server by providing the
``--property description="My Server"`` parameter.
You can pass :doc:`user data </user/user-data>` in a local file at instance
launch by using the ``--user-data USER-DATA-FILE`` parameter.
You can pass :ref:`user data <metadata-userdata>` in a local file at
instance launch by using the ``--user-data USER-DATA-FILE`` parameter.
.. important::


@@ -16,7 +16,7 @@ Before you can launch an instance, gather the following parameters:
available hardware configuration for a server. It defines the size of
a virtual server that can be launched.
- Any **user data** files. A :doc:`user data </user/user-data>` file is a
- Any **user data** files. A :ref:`user data <metadata-userdata>` file is a
special key in the metadata service that holds a file that cloud-aware
applications in the guest instance can access. For example, one application
that uses user data is the


@@ -1,199 +0,0 @@
================
Metadata service
================
This document provides end user information about the metadata service. For
deployment information about the metadata service, see the
:ref:`admin guide <metadata-service-deploy>`.
Compute uses a metadata service for virtual machine instances to retrieve
instance-specific data. Instances access the metadata service at
``http://169.254.169.254``. The metadata service supports two sets of APIs: an
OpenStack metadata API and an EC2-compatible API. Both APIs are versioned by
date.
To retrieve a list of supported versions for the OpenStack metadata API, make a
GET request to ``http://169.254.169.254/openstack``:
.. code-block:: console
$ curl http://169.254.169.254/openstack
2012-08-10
2013-04-04
2013-10-17
2015-10-15
2016-06-30
2016-10-06
2017-02-22
2018-08-27
latest
To list supported versions for the EC2-compatible metadata API, make a GET
request to ``http://169.254.169.254``:
.. code-block:: console
$ curl http://169.254.169.254
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
latest
If you write a consumer for one of these APIs, always attempt to access the
most recent API version supported by your consumer first, then fall back to an
earlier version if the most recent one is not available.
Metadata from the OpenStack API is distributed in JSON format. To retrieve the
metadata, make a GET request to
``http://169.254.169.254/openstack/2018-08-27/meta_data.json``:
.. code-block:: console
$ curl http://169.254.169.254/openstack/2018-08-27/meta_data.json
.. code-block:: json
{
"random_seed": "yu5ZnkqF2CqnDZVAfZgarGLoFubhcK5wHG4fcNfVZEtie/bTV8k2dDXK\
C7krP2cjp9A7g9LIWe5+WSaZ3zpvQ03hp/4mMNy9V1U/mnRMZyQ3W4Fn\
Nex7UP/0Smjb9rVzfUb2HrVUCN61Yo4jHySTd7UeEasF0nxBrx6NFY6e\
KRoELGPPr1S6+ZDcDT1Sp7pRoHqwVbzyJZc80ICndqxGkZOuvwDgVKZD\
B6O3kFSLuqOfNRaL8y79gJizw/MHI7YjOxtPMr6g0upIBHFl8Vt1VKjR\
s3zB+c3WkC6JsopjcToHeR4tPK0RtdIp6G2Bbls5cblQUAc/zG0a8BAm\
p6Pream9XRpaQBDk4iXtjIn8Bf56SCANOFfeI5BgBeTwfdDGoM0Ptml6\
BJQiyFtc3APfXVVswrCq2SuJop+spgrpiKXOzXvve+gEWVhyfbigI52e\
l1VyMoyZ7/pbdnX0LCGHOdAU8KRnBoo99ZOErv+p7sROEIN4Yywq/U/C\
xXtQ5BNCtae389+3yT5ZCV7fYzLYChgDMJSZ9ds9fDFIWKmsRu3N+wUg\
eL4klxAjRgzQ7MMlap5kppnIYRxXVy0a5j1qOaBAzJB5LLJ7r3/Om38x\
Z4+XGWjqd6KbSwhUVs1aqzxpep1Sp3nTurQCuYjgMchjslt0O5oJjh5Z\
hbCZT3YUc8M=\n",
"uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38",
"availability_zone": "nova",
"keys": [
{
"data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKV\
VRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTH\
bsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCe\
uMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n",
"type": "ssh",
"name": "mykey"
}
],
"hostname": "test.novalocal",
"launch_index": 0,
"meta": {
"priority": "low",
"role": "webserver"
},
"devices": [
{
"type": "nic",
"bus": "pci",
"address": "0000:00:02.0",
"mac": "00:11:22:33:44:55",
"tags": ["trusted"]
},
{
"type": "disk",
"bus": "ide",
"address": "0:0",
"serial": "disk-vol-2352423",
"path": "/dev/sda",
"tags": ["baz"]
}
],
"project_id": "f7ac731cc11f40efbc03a9f9e1d1d21f",
"public_keys": {
"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKV\
VRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTH\
bsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCe\
uMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"
},
"name": "test"
}
Instances also retrieve :doc:`user data </user/user-data>` (passed as the
``user_data`` parameter in the API call or by the ``--user-data`` flag in the
:command:`openstack server create` command) through the metadata service, by
making a GET request to
``http://169.254.169.254/openstack/2018-08-27/user_data``:
.. code-block:: console
$ curl http://169.254.169.254/openstack/2018-08-27/user_data
#!/bin/bash
echo 'Extra user data here'
The metadata service has an API that is compatible with version 2009-04-04 of
the `Amazon EC2 metadata service
<https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html>`__.
This means that virtual machine images designed for EC2 will work properly with
OpenStack.
The EC2 API exposes a separate URL for each metadata element. Retrieve a
listing of these elements by making a GET query to
``http://169.254.169.254/2009-04-04/meta-data/``:
.. code-block:: console
$ curl http://169.254.169.254/2009-04-04/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups
.. code-block:: console
$ curl http://169.254.169.254/2009-04-04/meta-data/block-device-mapping/
ami
.. code-block:: console
$ curl http://169.254.169.254/2009-04-04/meta-data/placement/
availability-zone
.. code-block:: console
$ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/
0=mykey
Instances can retrieve the public SSH key (identified by keypair name when a
user requests a new instance) by making a GET request to
``http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key``:
.. code-block:: console
$ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+US\
LGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3B\
ISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated\
by Nova
Instances can retrieve user data by making a GET request to
``http://169.254.169.254/2009-04-04/user-data``:
.. code-block:: console
$ curl http://169.254.169.254/2009-04-04/user-data
#!/bin/bash
echo 'Extra user data here'


@@ -0,0 +1,504 @@
========
Metadata
========
Nova presents configuration information to instances it starts via a mechanism
called metadata. This mechanism is widely used via helpers such as
`cloud-init`_ to specify things like the root password the instance should use.
This metadata is made available via either a *config drive* or the *metadata
service* and can be somewhat customised by the user using the *user data*
feature. This guide provides an overview of these features along with a summary
of the types of metadata available.
.. _cloud-init: https://cloudinit.readthedocs.io/en/latest/
Types of metadata
-----------------
There are three separate groups of users who need to be able to specify
metadata for an instance.
User provided data
~~~~~~~~~~~~~~~~~~
The user who booted the instance can pass metadata to the instance in several
ways. For authentication keypairs, the keypairs functionality of the nova API
can be used to upload a key and then specify that key during the nova boot API
request. For less structured data, a small opaque blob of data may be passed
via the :ref:`user data <metadata-userdata>` feature of the nova API. Examples
of such unstructured data would be the puppet role that the instance should use,
or the HTTP address of a server from which to fetch post-boot configuration
information.
Nova provided data
~~~~~~~~~~~~~~~~~~
Nova itself needs to pass information to the instance via its internal
implementation of the metadata system. Such information includes the requested
hostname for the instance and the availability zone the instance is in. This
happens by default and requires no configuration by the user or deployer.
Nova provides both an :ref:`OpenStack metadata API <metadata-openstack-format>`
and an :ref:`EC2-compatible API <metadata-ec2-format>`. Both the OpenStack
metadata and EC2-compatible APIs are versioned by date. These are described
later.
Deployer provided data
~~~~~~~~~~~~~~~~~~~~~~
A deployer of OpenStack may need to pass data to an instance. It is also
possible that this data is not known to the user starting the instance. An
example might be a cryptographic token to be used to register the instance with
Active Directory post boot -- the user starting the instance should not have
access to Active Directory to create this token, but the nova deployment might
have permissions to generate the token on the user's behalf. This is possible
using the :ref:`vendordata <metadata-vendordata>` feature, which must be
configured by your cloud operator.
.. _metadata-service:
The metadata service
--------------------
.. note::
This section provides end user information about the metadata service. For
deployment information about the metadata service, refer to the :doc:`admin
guide </admin/metadata-service>`.
The *metadata service* provides a way for instances to retrieve
instance-specific data via a REST API. Instances access this service at
``169.254.169.254`` and all types of metadata, be it user-, nova- or
vendor-provided, can be accessed via this service.
Using the metadata service
~~~~~~~~~~~~~~~~~~~~~~~~~~
To retrieve a list of supported versions for the :ref:`OpenStack metadata API
<metadata-openstack-format>`, make a GET request to
``http://169.254.169.254/openstack``, which will return a list of directories:
.. code-block:: console
$ curl http://169.254.169.254/openstack
2012-08-10
2013-04-04
2013-10-17
2015-10-15
2016-06-30
2016-10-06
2017-02-22
2018-08-27
latest
Refer to :ref:`OpenStack format metadata <metadata-openstack-format>` for
information on the contents and structure of these directories.
To list supported versions for the :ref:`EC2-compatible metadata API
<metadata-ec2-format>`, make a GET request to ``http://169.254.169.254``, which
will, once again, return a list of directories:
.. code-block:: console
$ curl http://169.254.169.254
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
latest
Refer to :ref:`EC2-compatible metadata <metadata-ec2-format>` for information on
the contents and structure of these directories.
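Scripting these queries from inside a guest is straightforward; the following
sketch uses only the Python standard library. The fixed address comes from
this guide, while the helper names are illustrative:

```python
from urllib.request import urlopen

METADATA_BASE = "http://169.254.169.254/openstack"


def parse_version_listing(body):
    """Turn the newline-separated listing returned by the service into a
    list of version directory names."""
    return [line.strip() for line in body.splitlines() if line.strip()]


def list_metadata_versions(base_url=METADATA_BASE, timeout=5):
    """Fetch and parse the supported metadata versions (guest-side only)."""
    with urlopen(base_url, timeout=timeout) as resp:
        return parse_version_listing(resp.read().decode("utf-8"))
```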
.. _metadata-config-drive:
Config drives
-------------
.. note::
This section provides end user information about config drives. For
deployment information about the config drive feature, refer to the
:doc:`admin guide </admin/config-drive>`.
*Config drives* are special drives that are attached to an instance when
it boots. The instance can mount this drive and read files from it to get
information that is normally available through the metadata service.
One use case for using the config drive is to pass a networking configuration
when you do not use DHCP to assign IP addresses to instances. For example, you
might pass the IP address configuration for the instance through the config
drive, which the instance can mount and access before you configure the network
settings for the instance.
Using the config drive
~~~~~~~~~~~~~~~~~~~~~~
To enable the config drive for an instance, pass the ``--config-drive true``
parameter to the :command:`openstack server create` command.
The following example enables the config drive and passes a user data file and
two key/value metadata pairs, all of which are accessible from the config
drive:
.. code-block:: console
$ openstack server create --config-drive true --image my-image-name \
--flavor 1 --key-name mykey --user-data ./my-user-data.txt \
--property role=webservers --property essential=false MYINSTANCE
.. note::
The Compute service can be configured to always create a config drive. For
more information, refer to :doc:`the admin guide </admin/config-drive>`.
If your guest operating system supports accessing disk by label, you can mount
the config drive as the ``/dev/disk/by-label/configurationDriveVolumeLabel``
device. In the following example, the config drive has the ``config-2`` volume
label:
.. code-block:: console
# mkdir -p /mnt/config
# mount /dev/disk/by-label/config-2 /mnt/config
If your guest operating system does not use ``udev``, the ``/dev/disk/by-label``
directory is not present. You can use the :command:`blkid` command to identify
the block device that corresponds to the config drive. For example:
.. code-block:: console
# blkid -t LABEL="config-2" -odevice
/dev/vdb
Once identified, you can mount the device:
.. code-block:: console
# mkdir -p /mnt/config
# mount /dev/vdb /mnt/config
Once mounted, you can examine the contents of the config drive:
.. code-block:: console
$ cd /mnt/config
$ find . -maxdepth 2
.
./ec2
./ec2/2009-04-04
./ec2/latest
./openstack
./openstack/2012-08-10
./openstack/2013-04-04
./openstack/2013-10-17
./openstack/2015-10-15
./openstack/2016-06-30
./openstack/2016-10-06
./openstack/2017-02-22
./openstack/latest
The files that appear on the config drive depend on the arguments that you pass
to the :command:`openstack server create` command. The format of this directory
is the same as that provided by the :ref:`metadata service <metadata-service>`,
with the exception that the EC2-compatible metadata is now located in the
``ec2`` directory instead of the root (``/``) directory. Refer to the
:ref:`metadata-openstack-format` and :ref:`metadata-ec2-format` sections for
information about the format of the files and subdirectories within these
directories.
Nova metadata
-------------
As noted previously, nova provides its metadata in two formats: OpenStack format
and EC2-compatible format.
.. _metadata-openstack-format:
OpenStack format metadata
~~~~~~~~~~~~~~~~~~~~~~~~~
.. versionchanged:: 12.0.0
Support for network metadata was added in the Liberty release.
Metadata from the OpenStack API is distributed in JSON format. There are two
files provided for each version: ``meta_data.json`` and ``network_data.json``.
The ``meta_data.json`` file contains nova-specific information, while the
``network_data.json`` file contains information retrieved from neutron. For
example:
.. code-block:: console
$ curl http://169.254.169.254/openstack/2018-08-27/meta_data.json
.. code-block:: json
{
"random_seed": "yu5ZnkqF2CqnDZVAfZgarGLoFubhcK5wHG4fcNfVZEtie/bTV8k2dDXK\
C7krP2cjp9A7g9LIWe5+WSaZ3zpvQ03hp/4mMNy9V1U/mnRMZyQ3W4Fn\
Nex7UP/0Smjb9rVzfUb2HrVUCN61Yo4jHySTd7UeEasF0nxBrx6NFY6e\
KRoELGPPr1S6+ZDcDT1Sp7pRoHqwVbzyJZc80ICndqxGkZOuvwDgVKZD\
B6O3kFSLuqOfNRaL8y79gJizw/MHI7YjOxtPMr6g0upIBHFl8Vt1VKjR\
s3zB+c3WkC6JsopjcToHeR4tPK0RtdIp6G2Bbls5cblQUAc/zG0a8BAm\
p6Pream9XRpaQBDk4iXtjIn8Bf56SCANOFfeI5BgBeTwfdDGoM0Ptml6\
BJQiyFtc3APfXVVswrCq2SuJop+spgrpiKXOzXvve+gEWVhyfbigI52e\
l1VyMoyZ7/pbdnX0LCGHOdAU8KRnBoo99ZOErv+p7sROEIN4Yywq/U/C\
xXtQ5BNCtae389+3yT5ZCV7fYzLYChgDMJSZ9ds9fDFIWKmsRu3N+wUg\
eL4klxAjRgzQ7MMlap5kppnIYRxXVy0a5j1qOaBAzJB5LLJ7r3/Om38x\
Z4+XGWjqd6KbSwhUVs1aqzxpep1Sp3nTurQCuYjgMchjslt0O5oJjh5Z\
hbCZT3YUc8M=\n",
"uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38",
"availability_zone": "nova",
"keys": [
{
"data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKV\
VRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTH\
bsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCe\
uMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n",
"type": "ssh",
"name": "mykey"
}
],
"hostname": "test.novalocal",
"launch_index": 0,
"meta": {
"priority": "low",
"role": "webserver"
},
"devices": [
{
"type": "nic",
"bus": "pci",
"address": "0000:00:02.0",
"mac": "00:11:22:33:44:55",
"tags": ["trusted"]
},
{
"type": "disk",
"bus": "ide",
"address": "0:0",
"serial": "disk-vol-2352423",
"path": "/dev/sda",
"tags": ["baz"]
}
],
"project_id": "f7ac731cc11f40efbc03a9f9e1d1d21f",
"public_keys": {
"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKV\
VRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTH\
bsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCe\
uMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"
},
"name": "test"
}
.. code-block:: console
$ curl http://169.254.169.254/openstack/2018-08-27/network_data.json
.. code-block:: json
{
"links": [
{
"ethernet_mac_address": "fa:16:3e:9c:bf:3d",
"id": "tapcd9f6d46-4a",
"mtu": null,
"type": "bridge",
"vif_id": "cd9f6d46-4a3a-43ab-a466-994af9db96fc"
}
],
"networks": [
{
"id": "network0",
"link": "tapcd9f6d46-4a",
"network_id": "99e88329-f20d-4741-9593-25bf07847b16",
"type": "ipv4_dhcp"
}
],
"services": [
{
"address": "8.8.8.8",
"type": "dns"
}
]
}
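As an illustration of consuming this structure, the sketch below joins the
``networks`` entries to their ``links`` by ID, which is how a guest agent
might decide which interface to configure. The field names follow the example
above; the function itself is hypothetical:

```python
def summarize_network_data(network_data):
    """Return a mapping of network ID to its type and the MAC address of
    the link it is attached to."""
    macs = {
        link["id"]: link["ethernet_mac_address"]
        for link in network_data.get("links", [])
    }
    return {
        net["id"]: {"type": net["type"], "mac": macs.get(net["link"])}
        for net in network_data.get("networks", [])
    }
```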
.. _metadata-ec2-format:
EC2-compatible metadata
~~~~~~~~~~~~~~~~~~~~~~~
The EC2-compatible API is compatible with version 2009-04-04 of the `Amazon EC2
metadata service`__. This means that virtual machine images designed for EC2
will work properly with OpenStack.
The EC2 API exposes a separate URL for each metadata element. Retrieve a
listing of these elements by making a GET query to
``http://169.254.169.254/2009-04-04/meta-data/``. For example:
.. code-block:: console
$ curl http://169.254.169.254/2009-04-04/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups
.. code-block:: console
$ curl http://169.254.169.254/2009-04-04/meta-data/block-device-mapping/
ami
.. code-block:: console
$ curl http://169.254.169.254/2009-04-04/meta-data/placement/
availability-zone
.. code-block:: console
$ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/
0=mykey
Instances can retrieve the public SSH key (identified by keypair name when a
user requests a new instance) by making a GET request to
``http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key``:
.. code-block:: console
$ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+US\
LGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3B\
ISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated\
by Nova
__ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
.. _metadata-userdata:
User data
---------
*User data* is a blob of data that the user can specify when they launch an
instance. The instance can access this data through the metadata service or
config drive. It is commonly used to pass a shell script that the instance
runs on boot.
For example, one application that uses user data is the `cloud-init
<https://help.ubuntu.com/community/CloudInit>`__ system, which is an open-source
package from Ubuntu that is available on various Linux distributions and which
handles early initialization of a cloud instance.
You can place user data in a local file and pass it through the ``--user-data
<user-data-file>`` parameter at instance creation.
.. code-block:: console
$ openstack server create --image ubuntu-cloudimage --flavor 1 \
--user-data mydata.file VM_INSTANCE
.. note::
The provided user data must be base64 encoded and is restricted to 65535
bytes.
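When calling the REST API directly rather than via the :command:`openstack`
client, the encoding step looks something like the sketch below. The helper is
illustrative only; treat the exact point at which the 65535-byte limit is
enforced as an assumption to verify against your deployment:

```python
import base64


def encode_user_data(raw, limit=65535):
    """Base64-encode a user data blob for the server create API call,
    rejecting blobs that exceed the documented size limit (the exact
    accounting of the limit is deployment-specific)."""
    if len(raw) > limit:
        raise ValueError("user data exceeds %d bytes" % limit)
    return base64.b64encode(raw).decode("ascii")
```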
Once booted, you can access this data from the instance using either the
metadata service or the config drive. To access it via the metadata service,
make a GET request to either
``http://169.254.169.254/openstack/{version}/user_data`` (OpenStack API) or
``http://169.254.169.254/{version}/user-data`` (EC2-compatible API). For
example:
.. code-block:: console
$ curl http://169.254.169.254/openstack/2018-08-27/user_data
.. code-block:: shell
#!/bin/bash
echo 'Extra user data here'
.. _metadata-vendordata:
Vendordata
----------
.. note::
This section provides end user information about the vendordata feature. For
deployment information about this feature, refer to the :doc:`admin guide
</admin/vendordata>`.
.. versionchanged:: 14.0.0
Support for dynamic vendor data was added in the Newton release.
**Where configured**, instances can retrieve vendor-specific data from the
metadata service or config drive. To access it via the metadata service, make a
GET request to either
``http://169.254.169.254/openstack/{version}/vendor_data.json`` or
``http://169.254.169.254/openstack/{version}/vendor_data2.json``, depending on
the deployment. For example:
.. code-block:: console
$ curl http://169.254.169.254/openstack/2018-08-27/vendor_data2.json
.. code-block:: json
{
"testing": {
"value1": 1,
"value2": 2,
"value3": "three"
}
}
.. note::
The presence and contents of this file will vary from deployment to
deployment.
General guidelines
------------------
- Do not rely on the presence of the EC2 metadata in the metadata API or
config drive, because this content might be removed in a future release. For
example, do not rely on files in the ``ec2`` directory.
- When you create images that access metadata service or config drive data and
multiple directories are under the ``openstack`` directory, always select the
highest API version by date that your consumer supports. For example, if your
guest image supports the ``2012-03-05``, ``2012-08-05``, and ``2013-04-13``
versions, try ``2013-04-13`` first and fall back to a previous version if
``2013-04-13`` is not present.


@@ -1,25 +0,0 @@
==============================
Provide user data to instances
==============================
*User data* is a blob of data that the user can specify when they launch an
instance. The instance can access this data through the metadata service or
config drive. Commonly used to pass a shell script that the instance runs on
boot.
For example, one application that uses user data is the
`cloud-init <https://help.ubuntu.com/community/CloudInit>`__ system,
which is an open-source package from Ubuntu that is available on various
Linux distributions and which handles early initialization of a cloud
instance.
You can place user data in a local file and pass it through the
``--user-data <user-data-file>`` parameter at instance creation.
.. code-block:: console
$ openstack server create --image ubuntu-cloudimage --flavor 1 \
--user-data mydata.file VM_INSTANCE
.. note:: The provided user data must be base64 encoded and is restricted to
65535 bytes.


@@ -1,140 +0,0 @@
==========
Vendordata
==========
Nova presents configuration information to instances it starts via a mechanism
called metadata. This metadata is made available via either a config drive or
the metadata service. These mechanisms are widely used via helpers such as
cloud-init to specify things like the root password the instance should use.
There are three separate groups of people who need to be able to specify
metadata for an instance.
User provided data
------------------
The user who booted the instance can pass metadata to the instance in several
ways. For authentication keypairs, the keypairs functionality of the Nova APIs
can be used to upload a key and then specify that key during the Nova boot API
request. For less structured data, a small opaque blob of data may be passed
via the :doc:`user data </user/user-data>` feature of the Nova API. Examples of
such unstructured data would be the puppet role that the instance should use,
or the HTTP address of a server to fetch post-boot configuration information
from.
Nova provided data
------------------
Nova itself needs to pass information to the instance via its internal
implementation of the metadata system. Such information includes the network
configuration for the instance, as well as the requested hostname for the
instance. This happens by default and requires no configuration by the user or
deployer.
Deployer provided data
----------------------
There is, however, a third type of data. It is possible that the deployer of
OpenStack needs to pass data to an instance, and that this data is not known to
the user starting the instance. An example might be a cryptographic token used
to register the instance with Active Directory post boot -- the user starting
the instance should not have access to Active Directory to create this token,
but the Nova deployment might have permissions to generate the token on the
user's behalf.
Nova supports a mechanism to add "vendordata" to the metadata handed to
instances. This is done by loading named modules, which must appear in the nova
source code. We provide two such modules:
- StaticJSON: a module which can include the contents of a static JSON file
loaded from disk, configurable via the
:oslo.config:option:`api.vendordata_jsonfile_path` option. This can be used
for things which don't change between instances, such as the location of the
corporate puppet server. This is the default provider.
- DynamicJSON: a module which will make a request to an external REST service
to determine what metadata to add to an instance. This is how we recommend
you generate things like Active Directory tokens which change per instance.
The vendordata providers are configured via the
:oslo.config:option:`api.vendordata_providers` option.
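As an illustrative sketch, enabling the default StaticJSON provider might look
like the following ``nova.conf`` fragment; the file path is a placeholder for
your own deployment:

```ini
[api]
# StaticJSON is the default provider; the path below is illustrative.
vendordata_providers = StaticJSON
vendordata_jsonfile_path = /etc/nova/vendor_data.json
```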
Tell me more about DynamicJSON
==============================
To use DynamicJSON, you configure it like this:
- Add "DynamicJSON" to the :oslo.config:option:`api.vendordata_providers`
configuration option. This can also include "StaticJSON" if you'd like.
- Specify the REST services to be contacted to generate metadata in the
:oslo.config:option:`api.vendordata_dynamic_targets` configuration option.
There can be more than one of these, but note that they will be queried once
per metadata request from the instance, which can mean a fair bit of traffic
depending on your configuration and the configuration of the instance.
The format for an entry in ``vendordata_dynamic_targets`` is like this::
<name>@<url>
Here, ``name`` is a short string that must not include the ``@`` character,
and the URL can include a port number if required. An example would be::
testing@http://127.0.0.1:125
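Putting the two options together, a configuration enabling both providers
might look like the following ``nova.conf`` fragment; the target name and URL
are placeholders for your own metadata service:

```ini
[api]
vendordata_providers = StaticJSON,DynamicJSON
# One <name>@<url> pair per external REST service; both values below
# are illustrative.
vendordata_dynamic_targets = testing@http://127.0.0.1:125
```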
Metadata fetched from this target will appear in the metadata service in a new
file called ``vendor_data2.json``, with a path (either in the metadata service
URL or on the config drive) like this::
openstack/2016-10-06/vendor_data2.json
For each dynamic target, there will be an entry in the JSON file named after
that target. For example::
{
"testing": {
"value1": 1,
"value2": 2,
"value3": "three"
}
}
Do not specify the same name more than once. If you do, we will ignore
subsequent uses of a previously used name.
The following data is passed to your REST service as a JSON encoded POST:
+-------------+-------------------------------------------------+
| Key | Description |
+=============+=================================================+
| project-id | The ID of the project that owns this instance. |
+-------------+-------------------------------------------------+
| instance-id | The UUID of this instance. |
+-------------+-------------------------------------------------+
| image-id | The ID of the image used to boot this instance. |
+-------------+-------------------------------------------------+
| user-data | As specified by the user at boot time. |
+-------------+-------------------------------------------------+
| hostname | The hostname of the instance. |
+-------------+-------------------------------------------------+
| metadata | As specified by the user at boot time. |
+-------------+-------------------------------------------------+
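A minimal sketch of the request handling inside such a REST service is shown
below. The payload keys mirror the table above; the response field names are
illustrative, and the returned JSON is what nova records under this target's
name in ``vendor_data2.json``:

```python
import json

def handle_vendordata_request(body: bytes) -> bytes:
    """Build a vendordata response for one instance (illustrative)."""
    payload = json.loads(body)
    response = {
        "project": payload["project-id"],
        # Derive a trivial per-instance value from the instance UUID;
        # a real service might mint an Active Directory token here.
        "token-hint": payload["instance-id"][:8],
    }
    return json.dumps(response).encode("utf-8")

# Example POST body, shaped like the table above.
request = json.dumps({
    "project-id": "demo",
    "instance-id": "413a5b25-0000-0000-0000-000000000000",
    "image-id": "cirros",
    "user-data": None,
    "hostname": "test",
    "metadata": {},
}).encode("utf-8")
print(handle_vendordata_request(request))
```

Wiring this function into an actual HTTP server (and the keystone middleware
discussed below) is left out for brevity.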
Deployment considerations
=========================
Nova provides authentication to external metadata services in order to provide
some level of certainty that the request came from Nova. This is done by
providing a service token with the request -- you can then deploy your metadata
service behind the keystone authentication WSGI middleware. This is configured
using the keystone authentication parameters in the
:oslo.config:group:`vendordata_dynamic_auth` configuration group.
References
==========
* Michael Still's talk from the Queens summit in Sydney:
`Metadata, User Data, Vendor Data, oh my!`_
* Michael's blog post on `deploying a simple vendordata service`_, which
  provides more details and sample code to supplement the documentation above.
.. _Metadata, User Data, Vendor Data, oh my!: https://www.openstack.org/videos/sydney-2017/metadata-user-data-vendor-data-oh-my
.. _deploying a simple vendordata service: http://www.stillhq.com/openstack/000022.html


@ -64,8 +64,12 @@
/nova/latest/testing/zero-downtime-upgrade.html 301 /nova/latest/contributor/testing/zero-downtime-upgrade.html
/nova/latest/threading.html 301 /nova/latest/reference/threading.html
/nova/latest/upgrade.html 301 /nova/latest/user/upgrade.html
/nova/latest/vendordata.html 301 /nova/latest/user/vendordata.html
/nova/latest/user/cellsv2_layout.html 301 /nova/latest/user/cellsv2-layout.html
/nova/latest/user/config-drive.html 301 /nova/latest/user/metadata.html
/nova/latest/user/metadata-service.html 301 /nova/latest/user/metadata.html
/nova/latest/user/placement.html 301 /placement/latest/
/nova/latest/user/user-data.html 301 /nova/latest/user/metadata.html
/nova/latest/user/vendordata.html 301 /nova/latest/user/metadata.html
/nova/latest/vendordata.html 301 /nova/latest/user/metadata.html
/nova/latest/vmstates.html 301 /nova/latest/reference/vm-states.html
/nova/latest/wsgi.html 301 /nova/latest/user/wsgi.html
/nova/latest/user/cellsv2_layout.html 301 /nova/latest/user/cellsv2-layout.html
/nova/latest/user/placement.html 301 /placement/latest/


@ -61,8 +61,8 @@ VERSIONS = [
# hidden from the listing, but can still be requested explicitly, which is
# required for testing purposes. We know this isn't great, but its inherited
# from EC2, which this needs to be compatible with.
# NOTE(jichen): please update doc/source/user/metadata-service.rst on the
# metadata output when new version is created in order to make doc up-to-date.
# NOTE(jichen): please update doc/source/user/metadata.rst on the metadata
# output when new version is created in order to make doc up-to-date.
FOLSOM = '2012-08-10'
GRIZZLY = '2013-04-04'
HAVANA = '2013-10-17'