Removing an extra space after full stop

Removing the extra space after full stops in both the security notes
and the security guide.

Change-Id: I23edcd68b015aa454845a3b9db56106a69bb717a
Rahul Nair 2016-12-02 13:53:37 -06:00 committed by KATO Tomoyuki
parent 3cf6cd1425
commit 21e9b81f10
33 changed files with 185 additions and 185 deletions


@ -72,46 +72,46 @@ Phases of an audit
~~~~~~~~~~~~~~~~~~
An audit has four distinct phases, though most stakeholders and control owners
will only participate in one or two. The four phases are Planning, Fieldwork,
Reporting and Wrap-up. Each of these phases is discussed below.
The Planning phase is typically performed two weeks to six months before
Fieldwork begins. In this phase audit items such as the timeframe, timeline,
controls to be evaluated, and control owners are discussed and finalized.
Concerns about resource availability, impartiality, and costs are also
resolved.
The Fieldwork phase is the most visible portion of the audit. This is where
the auditors are onsite, interviewing the control owners, documenting the
controls that are in place, and identifying any issues. It is important to
note that the auditors will use a two part process for evaluating the controls
in place. The first part is evaluating the design effectiveness of the
control. This is where the auditor will evaluate whether the control is
capable of effectively preventing or detecting and correcting weaknesses and
deficiencies. A control must "pass" this test to be evaluated in the second
phase. This is because with a control that is designed ineffectually, there
is no point considering whether it is operating effectively. The second part
is operational effectiveness. Operational effectiveness testing will determine
how the control was applied, the consistency with which the control was
applied and by whom or by what means the control was applied. A control may
depend upon other controls (indirect controls) and, if they do, additional
evidence that demonstrates the operating effectiveness of those indirect
controls may be required for the auditor to determine the overall operating
effectiveness of the control.
The Reporting phase is where any issues that were identified during the
Fieldwork phase will be validated by management. For logistics
purposes, some activities such as issue validation may be performed during the
Fieldwork phase. Management will also need to provide remediation plans to
address the issues and ensure that they do not reoccur. A draft of the
overall report will be circulated for review to the stakeholders and
management. Agreed upon changes are incorporated and the updated draft is
sent to senior management for review and approval. Once senior management
approves the report, it is finalized and distributed to executive management.
Any issues are entered into the issue tracking or risk tracking mechanism the
organization uses.
The Wrap-up phase is where the audit is officially spun down. Management will
begin remediation activities at this point. Processes and notifications are
used to ensure that any audit related information is moved to a secure
repository.


@ -113,23 +113,23 @@ pygments_style = 'sphinx'
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [openstackdocstheme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
@ -137,7 +137,7 @@ html_theme_path = [openstackdocstheme.get_html_theme_path()]
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
@ -188,7 +188,7 @@ html_show_sourcelink = False
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''


@ -56,10 +56,10 @@ Allowed hosts
~~~~~~~~~~~~~
Configure the ``ALLOWED_HOSTS`` setting with the fully qualified host name(s)
that are served by the OpenStack dashboard. Once this setting is provided, if
the value in the "Host:" header of an incoming HTTP request does not match any
of the values in this list an error will be raised and the requestor will not
be able to proceed. Failing to configure this option, or the use of wild card
characters in the specified host names, will cause the dashboard to be
vulnerable to security breaches associated with fake HTTP Host headers.
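As an illustration, the setting in the dashboard's ``local_settings.py`` might look like the following, where the host name shown is a placeholder for the fully qualified name you actually serve:

ALLOWED_HOSTS = ['dashboard.example.com']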

Binary file not shown.



@ -7,7 +7,7 @@ Policies
Each OpenStack service defines the access policies for its resources in an
associated policy file. A resource, for example, could be API access, the
ability to attach to a volume, or to fire up instances. The policy rules are
specified in JSON format and the file is called ``policy.json``. The
syntax and format of this file are discussed in the `Configuration Reference
<http://docs.openstack.org/newton/config-reference/policy-json-file.html>`__.
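For illustration, entries in a ``policy.json`` file map an action to a rule. The snippet below is a sketch that resembles the defaults shipped with Compute; check the file distributed with each service for its actual rules:

---- begin example policy.json snippet ----
{
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner"
}
---- end example policy.json snippet ----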


@ -108,7 +108,7 @@ images they upload themselves. In both cases, users should be able to
ensure the image they are utilizing has not been tampered with. The
ability to verify images is a fundamental imperative for security. A
chain of trust is needed from the source of the image to the
destination where it's used. This can be accomplished by signing
images obtained from trusted sources and by verifying the signature
prior to use. Various ways to obtain and create verified images will
be discussed below, followed by a description of the image signature


@ -51,7 +51,7 @@ multiple instance types. Due to the nature of public clouds, they are exposed
to a higher degree of risk. As a consumer of a public cloud you should validate
that your selected provider has the necessary certifications, attestations, and
other regulatory considerations. As a public cloud provider, depending on your
target customers, you may be subject to one or more regulations. Additionally,
even if not required to meet regulatory requirements, a provider should ensure
tenant isolation as well as protecting management infrastructure from external
attacks.
@ -63,7 +63,7 @@ At the opposite end of the spectrum is the private cloud. As NIST defines it, a
private cloud is provisioned for exclusive use by a single organization
comprising multiple consumers, such as business units. It may be owned,
managed, and operated by the organization, a third-party, or some combination
of them, and it may exist on or off premises. Private cloud use cases are
diverse, as such, their individual security concerns vary.
Community cloud
@ -86,7 +86,7 @@ technology that enables data and application portability, such as cloud
bursting for load balancing between clouds. For example an online retailer may
have their advertising and catalogue presented on a public cloud that allows
for elastic provisioning. This would enable them to handle seasonal loads in a
flexible, cost-effective fashion. Once a customer begins to process their
order, they are transferred to the more secure private cloud back end that is
PCI compliant.
@ -125,7 +125,7 @@ For information about the current state of feature support, see `OpenStack
Hypervisor Support Matrix
<https://wiki.openstack.org/wiki/HypervisorSupportMatrix>`__.
The security of Compute is critical for an OpenStack deployment. Hardening
techniques should include support for strong instance isolation, secure
communication between Compute sub-components, and resiliency of public-facing
API endpoints.


@ -13,7 +13,7 @@ Security domains
~~~~~~~~~~~~~~~~
A security domain comprises users, applications, servers or networks that share
common trust requirements and expectations within a system. Typically they
have the same :term:`authentication` and :term:`authorization` (AuthN/Z)
requirements and users.
@ -99,7 +99,7 @@ Bridging security domains
A *bridge* is a component that exists inside more than one security domain. Any
component that bridges security domains with different trust levels or
authentication requirements must be carefully configured. These bridges are
often the weak points in network architecture. A bridge should always be
configured to meet the security requirements of the highest trust level of any
of the domains it is bridging. In many cases the security controls for bridges


@ -101,7 +101,7 @@ The team included:
Business Unit where he helps customers implement OpenStack and VMware NSX
(formerly known as Nicira's Network Virtualization Platform). Prior to
joining VMware (through the company's acquisition of Nicira), he worked for
Q1 Labs, Symantec, Vontu, and Brightmail. He has a B.S. in Electrical
Engineering/Computer Science and Nuclear Engineering from U.C. Berkeley and
MBA from the University of San Francisco.
@ -127,14 +127,14 @@ The team included:
Nathanael Burton is a Computer Scientist at the National Security Agency. He
has worked for the Agency for over 10 years working on distributed systems,
large-scale hosting, open source initiatives, operating systems, security,
storage, and virtualization technology. He has a B.S. in Computer Science
from Virginia Tech.
- **Vibha Fauver**
Vibha Fauver, GWEB, CISSP, PMP, has over fifteen years of experience in
Information Technology. Her areas of specialization include software
engineering, project management and information security. She has a B.S. in
Computer & Information Science and a M.S. in Engineering Management with
specialization and a certificate in Systems Engineering.
@ -169,7 +169,7 @@ McMillan, Brian Schott and Lorin Hochstein.
This Book was produced in a 5 day book sprint. A book sprint is an intensely
collaborative, facilitated process which brings together a group to produce a
book in 3-5 days. It is a strongly facilitated process with a specific
methodology founded and developed by Adam Hyde. For more information visit the
book sprint web page at http://www.booksprints.net.
How to contribute to this book


@ -16,14 +16,14 @@ cloud providers. It is recommended that a risk assessment and legal consul
advised before choosing tenant encryption policies.
Per-instance or per-object encryption is preferable over, in descending order,
per-project, per-tenant, per-host, and per-cloud aggregations. This
recommendation is inverse to the complexity and difficulty of implementation.
Presently, in some projects it is difficult or impossible to implement
encryption as loosely granular as even per-tenant. We recommend implementors
make a best-effort in encrypting tenant data.
Often, data encryption relates positively to the ability to reliably destroy
tenant and per-instance data, simply by throwing away the keys. It should be
noted that in doing so, it becomes of great importance to destroy those keys in
a reliable and secure manner.
@ -76,7 +76,7 @@ release, the following ephemeral disk encryption features are supported:
- This field sets the cipher and mode used to encrypt ephemeral
storage. AES-XTS is recommended by NIST_ specifically for disk
storage, and the name is shorthand for AES encryption using the
XTS encryption mode. Available ciphers depend on kernel support.
At the command line, type 'cryptsetup benchmark' to determine the
available options (and see benchmark results), or go to
*/proc/crypto*
@ -91,7 +91,7 @@ release, the following ephemeral disk encryption features are supported:
manager that could require the use of 'key_size = 256', which would
only provide an AES key size of 128-bits. XTS requires its own
"tweak key" in addition to the encryption key AES requires.
This is typically expressed as a single large key. In this case,
using the 512-bit setting, 256 bits will be used by AES and 256 bits
by XTS. (see NIST_)
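For illustration, a sketch of how these ephemeral disk encryption options might be set in nova.conf (the section and option names assume the Compute ephemeral storage encryption group; confirm them against the configuration reference for your release):

---- begin example nova.conf snippet ----
[ephemeral_storage_encryption]
enabled = True
cipher = aes-xts-plain64
key_size = 512
---- end example nova.conf snippet ----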
@ -132,7 +132,7 @@ ends chosen. Some back ends may not support this at all. It is outside the
scope of this guide to specify recommendations for each Block Storage back-end
driver.
For the purpose of performance, many storage protocols are unencrypted. Some
protocols such as iSCSI can provide authentication and encrypted sessions; it
is our recommendation to enable these features.
@ -147,7 +147,7 @@ physical volume to an LVM volume group (VG).
Network data
~~~~~~~~~~~~
Tenant data for compute could be encrypted over IPsec or other tunnels. This
is not functionality common or standard in OpenStack, but is an option
available to motivated and interested implementors.


@ -46,7 +46,7 @@ implement an appropriate level of strength and integrity given the specific
security domain and sensitivity of the information.
"The sanitization process removes information from the media such that the
information cannot be retrieved or reconstructed. Sanitization
techniques, including clearing, purging, cryptographic erase, and
destruction, prevent the disclosure of information to unauthorized
individuals when such media is reused or released for disposal." `NIST
@ -82,7 +82,7 @@ auto vacuuming and periodic free-space wiping.
Instance memory scrubbing
-------------------------
Specific to various hypervisors is the treatment of instance memory. This
behavior is not defined in OpenStack Compute, although it is generally expected
of hypervisors that they will make a best effort to scrub memory either upon
deletion of an instance, upon creation of an instance, or both.
@ -103,18 +103,18 @@ documentation.
Cinder volume data
------------------
Use of the OpenStack volume encryption feature is highly encouraged. This is
discussed below in the Data Encryption section under Volume Encryption. When
this feature is used, destruction of data is accomplished by securely deleting
the encryption key. The end user can select this feature while creating a
volume, but note that an admin must perform a one-time set up of the volume
encryption feature first. Instructions for this setup are in the block
storage section of the `Configuration Reference
<docs.openstack.org/newton/config-reference/block-storage/volume-encryption.html>`__
, under volume encryption.
If the OpenStack volume encryption feature is not used, then other approaches
generally would be more difficult to enable. If a back-end plug-in is being
used, there may be independent ways of doing encryption or non-standard
overwrite solutions. Plug-ins to OpenStack Block Storage will store data in
a variety of ways. Many plug-ins are specific to a vendor or technology,
@ -143,7 +143,7 @@ Compute soft delete feature
---------------------------
OpenStack Compute has a soft-delete feature, which enables an instance that is
deleted to be in a soft-delete state for a defined time period. The instance
can be restored during this time period. To disable the soft-delete feature,
edit the ``etc/nova/nova.conf`` file and leave the
``reclaim_instance_interval`` option empty.
@ -173,7 +173,7 @@ information disclosure. There have in the past been information disclosure
vulnerabilities related to improperly erased ephemeral block storage devices.
Filesystem storage is a more secure solution for ephemeral block storage
devices than LVM as dirty extents cannot be provisioned to users. However, it
is important to be mindful that user data is not destroyed, so it is suggested
to encrypt the backing filesystem.


@ -8,7 +8,7 @@ that the libvirt daemon is configured for remote network connectivity.
The libvirt daemon configuration recommended in the OpenStack
Configuration Reference manual configures libvirtd to listen for
incoming TCP connections on all network interfaces without requiring any
authentication or using any encryption. This insecure configuration
allows for anyone with network access to the libvirt daemon TCP port on
OpenStack Compute nodes to control the hypervisor through the libvirt
API.
@ -18,13 +18,13 @@ Nova, Compute, KVM, libvirt, Grizzly, Havana, Icehouse
### Discussion ###
The default configuration of the libvirt daemon is to not allow remote
access. Live migration of running instances between OpenStack Compute
nodes requires libvirt daemon remote access between OpenStack Compute
nodes.
The libvirt daemon should not be configured to allow unauthenticated
remote access. The libvirt daemon has a choice of 4 secure options for
remote access over TCP. These options are:
- SSH tunnel to libvirtd's UNIX socket
- libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption
@ -34,14 +34,14 @@ remote access over TCP. These options are:
authentication
It is not necessary for the libvirt daemon to listen for remote TCP
connections on all interfaces. Remote network connectivity to the
libvirt daemon should be restricted as much as possible. Remote
access is only needed between the OpenStack Compute nodes, so the
libvirt daemon only needs to listen for remote TCP connections on the
interface that is used for this communication. A firewall can be
configured to lock down access to the TCP port that the libvirt daemon
listens on, but this does not sufficiently protect access to the libvirt
API. Other processes on a remote OpenStack Compute node might have
network access, but should not be authorized to remotely control the
hypervisor on another OpenStack Compute node.
@ -51,13 +51,13 @@ nodes, you should review your libvirt daemon configuration to ensure
that it is not allowing unauthenticated remote access.
Remote access to the libvirt daemon via TCP is configured by the
"listen_tls", "listen_tcp", and "auth_tcp" configuration directives. By
default, these directives are all commented out. This results in remote
access via TCP being disabled.
If you do not need remote libvirt daemon access, you should ensure that
the following configuration directives are set as follows in the
/etc/libvirt/libvirtd.conf configuration file. Commenting out these
directives will have the same effect, as these values match the internal
defaults:
@ -69,7 +69,7 @@ auth_tcp = "sasl"
If you need to allow remote access to the libvirt daemon between
OpenStack Compute nodes for live migration, you should ensure that
authentication is required. Additionally, you should consider enabling
TLS to allow remote connections to be encrypted.
The following libvirt daemon configuration directives will allow for
@ -83,8 +83,8 @@ auth_tcp = "sasl"
If you want to require TLS encrypted remote connections, you will have
to obtain X.509 certificates and configure the libvirt daemon to use
them to use TLS. Details on this configuration are in the libvirt
daemon documentation. Once the certificates are configured, you should
set the following libvirt daemon configuration directives:
---- begin example libvirtd.conf snippet ----
@ -94,7 +94,7 @@ auth_tls = "none"
---- end example libvirtd.conf snippet ----
When using TLS, setting the "auth_tls" configuration directive to "none"
uses X.509 client certificates for authentication. You can additionally
require SASL authentication by setting the following libvirt daemon
configuration directives:
@ -105,7 +105,7 @@ auth_tls = "sasl"
---- end example libvirtd.conf snippet ----
When using TLS, it is also necessary to configure the OpenStack Compute
nodes to use a non-default URI for live migration. This is done by
setting the following configuration directive in /etc/nova/nova.conf:
---- begin example nova.conf snippet ----
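# Sketch only: the option name and value below assume a TLS-based migration
# URI; verify against the configuration reference for your release.
live_migration_uri = qemu+tls://%s/system
---- end example nova.conf snippet ----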
@ -126,11 +126,11 @@ at the following locations:
When configuring the libvirt daemon for authentication, it is also
important to configure authorization to restrict remote access to your
OpenStack Compute nodes. For example, if you don't configure
authorization, any X.509 client certificate that is signed by your
issuing CA will be allowed access. When using SASL/GSSAPI for Kerberos
authentication, any client with a valid TGT will be granted access.
Lack of authorization can allow unintended remote access. The libvirt
daemon documentation should be consulted for details on configuring
authorization.
@ -142,7 +142,7 @@ the Compute nodes.
The first thing that should be done is to restrict the network
interfaces that the libvirt daemon listens on for remote connections.
By default, the libvirt daemon listens on all interfaces when remote
access is enabled. This can be restricted by setting the following
configuration directive in /etc/libvirt/libvirtd.conf:
---- begin example libvirtd.conf snippet ----
@ -150,14 +150,14 @@ listen_addr = <IP address or hostname>
---- end example libvirtd.conf snippet ----
Migration in the libvirt daemon also uses a range of ephemeral ports by
default. The connections to these ephemeral ports are not authenticated
or encrypted. It is possible to tunnel the migration traffic over the
regular libvirt daemon remote access port, which will use the
authentication and encryption settings that you have defined for that
port. It is recommended that you do this for the additional security
that it provides. To enable tunneling of the migration traffic, you
must tell your OpenStack Compute nodes to set the VIR_MIGRATE_TUNNELLED
flag for live migration. This is done by setting the following
directive in /etc/nova/nova.conf:
---- begin example nova.conf snippet ----
@ -166,13 +166,13 @@ VIR_MIGRATE_TUNNELLED
---- end example nova.conf snippet ----
The tunneling of migration traffic described above does not apply to
live block migration. Live block migration is currently only possible
over ephemeral ports.
If you choose to use the ephemeral migration ports, there are a few
things that you should be aware of. Unfortunately, there is no way to
restrict the network interfaces that these ephemeral ports will listen
on in libvirt versions prior to 1.1.4. If you are running version 1.1.4
or later of the libvirt daemon, you can set the following directive in
/etc/libvirt/qemu.conf to specify what interfaces are used for the
ephemeral migration ports:
@ -183,11 +183,11 @@ migration_address = <IP address>
It is also recommended to configure the firewall on each OpenStack
Compute node to only allow other Compute nodes to access the ports that
are used for remote access to the libvirt daemon. By default, this is
port 16514 for TLS and 16509 for unencrypted TCP.
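As a sketch of one way to do this with iptables (the peer address is a placeholder, one rule is needed per Compute node, and the port should be adjusted if you use unencrypted TCP rather than TLS):

iptables -A INPUT -p tcp -s 192.0.2.11 --dport 16514 -j ACCEPT
iptables -A INPUT -p tcp --dport 16514 -j DROP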
Additionally, migration over ephemeral ports uses a port range of
49152-49215. You will need to allow your OpenStack Compute nodes to
communicate with each other over these ports if you choose not to tunnel
the migration traffic over the libvirt daemon remote access port.
@ -201,7 +201,7 @@ looking at the following configuration directives:
If you are running a version of the libvirt daemon older than 1.1.4 and
you want to perform live block migration, you will need to allow your
OpenStack Compute nodes to communicate over port range 5900-49151. If
you are running version 1.1.4 or later of the libvirt daemon, the
regular ephemeral migration port range is used for live block migration.


@ -14,24 +14,24 @@ Horizon, Nova, noVNC proxy, SPICE console, Grizzly, Havana
### Discussion ###
Currently with a single user token, no restrictions are enforced on the
number or frequency of noVNC or SPICE console sessions that may be
established. While a user can only access their own virtual machine
instances, resources can be exhausted on the console proxy host by
creating an excessive number of simultaneous console sessions. This can
result in timeouts for subsequent connection requests to instances using
the same console proxy. Not only would this prevent the user from
accessing their own instances, but other legitimate users would also be
deprived of console access. Further, other services running on the
noVNC proxy and Compute hosts may degrade in responsiveness.
By taking advantage of this lack of restrictions around noVNC or SPICE
console connections, a single user could cause the console proxy
endpoint to become unresponsive, resulting in a Denial Of Service (DoS)
style attack. It should be noted that there is no amplification effect.
### Recommended Actions ###
For current stable OpenStack releases (Grizzly, Havana), users need to
work around this vulnerability by using rate-limiting proxies to cover
access to the noVNC proxy service. Rate-limiting is a common mechanism
to prevent DoS and Brute-Force attacks.
For example, if you are using a proxy such as Repose, enable the rate


@ -3,7 +3,7 @@ Potential token revocation abuse via group membership
### Summary ###
Deletion of groups in Keystone causes token revocation for group
members. If group capabilities are delegated to users, they can abuse
those capabilities to maliciously revoke tokens for other users.
### Affected Services / Software ###
@ -11,20 +11,20 @@ Keystone, Grizzly, Havana, Icehouse
### Discussion ###
If a group is deleted from Keystone, all tokens for all users that are
members of that group are revoked. By adding users to a group without
those users' knowledge and then deleting that group, a group admin can
revoke all of the users' tokens. While the default policy file gives
the group admin role to global admin, an alternative policy could
delegate the "create_group", "add_user_to_group", and "delete_group"
capabilities to a set of users. In such a system, those users will also
get a token revocation capability. Only setups using a custom policy
file in Keystone are affected.
### Recommended Actions ###
Keystone's default policy.json file uses the "admin_required" rule for
the "create_group", "delete_group", and "add_user_to_group"
capabilities. It is recommended that you use this default configuration
if possible. Here is an example snippet of a properly configured
policy.json file:
---- begin example policy.json snippet ----
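    "identity:create_group": "rule:admin_required",
    "identity:delete_group": "rule:admin_required",
    "identity:add_user_to_group": "rule:admin_required",
---- end example policy.json snippet ----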
@ -35,7 +35,7 @@ policy.json file:
If you need to delegate the above capabilities to non-admin users, you
need to take into account that those users will be able to revoke
tokens for other users by performing group deletion operations. You
should take caution with who you delegate these capabilities to.
### Contacts / References ###


@ -4,7 +4,7 @@ Sample Keystone v3 policy exposes privilege escalation vulnerability
### Summary ###
The policy.v3cloudsample.json sample Keystone policy file combined with
the underlying mutability of the domain ID for user, group, and project
entities exposed a privilege escalation vulnerability. When this
sample policy is applied a domain administrator can elevate their
privileges to become a cloud administrator.
@ -15,12 +15,12 @@ Keystone, Havana
Changes to the Keystone v3 sample policy during the Havana release cycle
set an excessively broad domain administrator scope that allowed
creation of roles ("create_grant") on other domains (among other
actions). There was no check that the domain administrator had
authority to the domain they were attempting to grant a role on.
Combining the mutable state of the domain ID for user, group, and
project entities with the sample v3 policy resulted in a privilege
escalation vulnerability. A domain administrator could execute a series
of steps to escalate their access to that of a cloud administrator.
### Recommended Actions ###
@ -30,7 +30,7 @@ Icehouse release:
https://git.openstack.org/cgit/openstack/keystone/commit/?id=0496466821c1ff6e7d4209233b6c671f88aadc50
You should ensure that your Keystone deployment appropriately reflects
that update. Domain administrators should generally only be permitted
to perform actions against the domain for which they are an
administrator.


@ -3,9 +3,9 @@ Heat templates with invalid references allows unintended network access
### Summary ###
Orchestration templates can create security groups to define network
access rules. When creating these rules, it is possible to have a rule
grant incoming network access to instances belonging to another security
group. If a rule references a non-existent security group, it can
result in allowing incoming access to all hosts for that rule.
### Affected Services / Software ###
@ -15,17 +15,17 @@ Heat, nova-network, Havana
When defining security groups of the "AWS::EC2::SecurityGroup" type in a
CloudFormation-compatible format (CFN) orchestration template, it is
possible to use references to other security groups as the source for
ingress rules. When these rules are evaluated by Heat in the OpenStack
Havana release, a reference to a non-existent security group will be
silently ignored. This results in the rule using a "CidrIp" property of
"0.0.0.0/0". This will allow incoming access to any host for the
affected rule. This has the effect of allowing unintended network
access to instances.
This issue only occurs when Nova is used for networking (nova-network).
The Neutron networking service is not affected by this issue.
The OpenStack Icehouse release is not affected by this issue. In the
Icehouse release, Heat will check if a non-existent security group is
referenced in a template and return an error, causing the creation of
the security group to fail.
@ -34,16 +34,16 @@ the security group to fail.
If you are using Heat in the OpenStack Havana release with Nova for
networking (nova-network), you should review your orchestration
templates to ensure that all references to security groups in ingress
rules are valid. Specifically, you should look at the use of the
"SourceSecurityGroupName" property in your templates to ensure that
all referenced security groups exist.
One particular improper usage of security group references that you
should look for is the case where you define multiple security groups
in one template and use references between them. In this case, you
need to make sure that you are using the "Ref" intrinsic function to
indicate that you are referencing a security group that is defined in
the same template. Here is an example of a template with a valid
security group reference:
---- begin example correct template snippet ----


@ -25,23 +25,23 @@ functionality.
### Recommended Actions ###
It is recommended that you immediately update OpenSSL software on the
systems you use to run OpenStack services. In most cases, you will want
to upgrade to OpenSSL version 1.0.1g, though it is recommended that you
review the exact affected version details on the Heartbleed website
referenced above.
After upgrading your OpenSSL software, you will need to restart any
services that use the OpenSSL libraries. You can get a list of all
processes that have the old version of OpenSSL loaded by running the
following command:
lsof | grep ssl | grep DEL
Any processes shown by the above command will need to be restarted, or
you can choose to restart your entire system if desired. In an
OpenStack deployment, OpenSSL is commonly used to enable SSL/TLS
protection for OpenStack API endpoints, SSL terminators, databases,
message brokers, and Libvirt remote access. In addition to the native
OpenStack services, some commonly used software that may need to be
restarted includes:
@ -56,12 +56,12 @@ restarted includes:
Stud
It is also recommended that you treat your existing SSL/TLS keys as
compromised and generate new keys. This includes keys used to enable
SSL/TLS protection for OpenStack API endpoints, databases, message
brokers, and libvirt remote access.
In addition, any confidential data such as credentials that have been
sent over a SSL/TLS connection may have been compromised. It is
recommended that cloud administrators change any passwords, tokens, or
other credentials that may have been communicated over SSL/TLS.


@ -59,7 +59,7 @@ GPFS driver is the only driver in use, your Cinder system is not
vulnerable to this issue.
In the likely scenario that your system is vulnerable, you should limit
access to the Cinder host as much as possible. You should also explore
alternatives such as applying mandatory access control policies
(SELinux, AppArmor, etc) or using NFS uid squashing to control access
to the files in order to minimize the possible exposure.


@ -26,9 +26,9 @@ publicize images.
### Recommended Actions ###
It is recommended that the ability to publicize images in Glance be
restricted to trusted users, such as users with the "admin" role. This
can be done by modifying the "publicize_image" capability in Glance's
policy.json file. Here is an example of restricting this capability to
users with the "admin" role:
---- begin example policy.json snippet ----
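"publicize_image": "role:admin",
---- end example policy.json snippet ----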


@ -4,7 +4,7 @@ cookie sessions
### Summary ###
The default setting in Horizon is to use signed cookies to store
session state on the client side. This creates the possibility that if
an attacker is able to capture a user's cookie, they may perform all
actions as that user, even if the user has logged out.
@ -13,32 +13,32 @@ Horizon, Folsom, Grizzly, Havana, Icehouse
### Discussion ###
When configured to use client side sessions, the server isn't aware
of the user's login state. The OpenStack authorization tokens are
stored in the session ID in the cookie. If an attacker can steal the
cookie, they can perform all actions as the target user, even after the
user has logged out.
There are several ways attackers can steal the cookie. One example is
by intercepting it over the wire if Horizon is not configured to use
SSL. The attacker may also access the cookie from the filesystem if
they have access to the machine. There are also other ways to steal
cookies that are beyond the scope of this note.
By enabling a server side session tracking solution such as memcache,
the session is terminated when the user logs out. This prevents an
attacker from using cookies from terminated sessions.
It should be noted that Horizon does request that Keystone invalidate
the token upon user logout, but this has not been implemented for the
Identity API v3. Token invalidation may also fail if the Keystone
service is unavailable. Therefore, to ensure that sessions are not
usable after the user logs out, it is recommended to use server side
session tracking.
### Recommended Actions ###
It is recommended that you configure Horizon to use a different session
backend rather than signed cookies. One possible alternative is to use
memcache sessions. To check if you are using signed cookies, look for
this line in Horizon's local_settings.py
--- begin example local_settings.py snippet ---
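SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
--- end example local_settings.py snippet ---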
@ -47,7 +47,7 @@ this line in Horizon's local_settings.py
If the SESSION_ENGINE is set to a value other than
'django.contrib.sessions.backends.signed_cookies', this vulnerability
is not present. If SESSION_ENGINE is not set in local_settings.py,
check for it in settings.py.
Here are the steps to configure memcache sessions:
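A minimal sketch of one way to do this in local_settings.py, assuming a memcached instance running locally (the cache backend class and location are assumptions; adjust them for your deployment):

--- begin example local_settings.py snippet ---
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
--- end example local_settings.py snippet ---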


@ -42,7 +42,7 @@ Nova-controlled interfaces.
Using the sshd service as an example, the default configuration on most
systems is to bind to all interfaces and all local addresses
("ListenAddress :22" in sshd_config). In order to configure it only on
a specific interface, use "ListenAddress a.b.c.d:22" where a.b.c.d is
the address assigned to the chosen interface. Similar settings can be
found for most other services.


@ -13,7 +13,7 @@ Keystone, Grizzly, Havana, Icehouse, Juno
Tokens are used to authorize users when making API requests against
OpenStack services. This allows previously authenticated users to make
API requests without providing their authentication information again,
such as their username and password. This makes them very sensitive
when stored and transmitted on the wire. Ideally tokens should never be
stored on the disk to avoid the possibility of an attacker obtaining
local storage access and reusing them in the system.


@ -4,7 +4,7 @@ Unrestricted write permission to config files can allow code execution
### Summary ###
In numerous places throughout OpenStack projects, variables are read
directly from configuration files and used to construct statements
which are executed with the privileges of the OpenStack service. Since
configuration files are trusted, the input is not checked or sanitized.
If a malicious user is able to write to these files, they may be able
to execute arbitrary code as the OpenStack service.
@ -14,7 +14,7 @@ Nova / All versions, Trove / Juno, possibly others
### Discussion ###
Some OpenStack services rely on operating system commands to perform
certain actions. In some cases these commands are created by appending
input from configuration files to a specified command, and passing the
complete command directly to the operating system shell to execute.
For example:
@ -25,11 +25,11 @@ For example:
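--- begin example example.py snippet ---
# Illustrative sketch only: config.DIRECTORY is read, unsanitized, from a
# configuration file and appended to a command passed to the shell.
import subprocess

subprocess.call('ls -al ' + config.DIRECTORY, shell=True)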
--- end example example.py snippet ---
In this case, if config.DIRECTORY is set to something benign like
'/opt' the code behaves as expected. If, on the other hand, an
attacker is able to set config.DIRECTORY to something malicious such as
'/opt ; rm -rf /etc', the shell will execute both 'ls -al /opt' and 'rm
-rf /etc'. When called with shell=True, the shell will blindly execute
anything passed to it. Code with the potential for shell injection
vulnerabilities has been identified in the above mentioned services and
versions, but vulnerabilities are possible in other services as well.
@ -45,7 +45,7 @@ validated.
Additionally the principle of least privilege should always be observed
- files should be protected with the most restrictive permissions
possible. Other serious security issues, such as the exposure of
plaintext credentials, can result from permissions which allow
malicious users to view sensitive data (read access).


@ -29,7 +29,7 @@ It is possible to override the use of the compute node's SMBIOS data by
libvirt in /etc/libvirt/libvirtd.conf by setting the 'host_uuid'
parameter. This allows setting an arbitrary UUID for identification
purposes that doesn't leak any information about the real underlying
hardware. It is advised to make use of this override ability to prevent
potential exposure of information about the underlying compute node
hardware.
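For illustration, the override in /etc/libvirt/libvirtd.conf looks like the following, where the UUID shown is a placeholder; generate a unique value for each node, for example with the uuidgen command:

---- begin example libvirtd.conf snippet ----
host_uuid = "b8f12b2e-85f3-4f6f-a3de-4e3f5c1e9c2a"
---- end example libvirtd.conf snippet ----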


@ -26,7 +26,7 @@ Disable Django's GZIP Middleware:
https://docs.djangoproject.com/en/dev/ref/middleware/#module-django.middleware.gzip
Disable GZip compression in your web server's config. For Apache httpd,
you can do this by disabling mod_deflate:
http://httpd.apache.org/docs/2.2/mod/mod_deflate.html
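For example, on Debian-based systems that ship the standard Apache helper scripts, the module can typically be disabled with:

a2dismod deflate
service apache2 restart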


@ -60,10 +60,10 @@ forcing the negotiation of a weak cipher. In particular we need to
remove support for any export grade ciphers, which are especially weak.
The first step is to find what versions your TLS server currently
supports. Two useful solutions exist for this: SSL Server Test at
Qualys SSL Labs can be used to scan any web accessible endpoints, and
SSLScan is a command line tool which attempts a TLS connection to a
server with all possible cipher suites. Please see "tools" below for
links to both.
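For example, a scan of an endpoint with SSLScan might look like the following, where the host name is a placeholder:

sslscan api.example.com:443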
The specific steps required to configure which cipher suites are


@ -3,7 +3,7 @@ Keystone does not validate that identity providers match federation mappings
### Summary ###
Keystone's OS-FEDERATION extension does not enforce a link between an
identity provider and a federation mapping. This can lead to assertions
or claims from one identity provider being used with mappings intended
for use with another identity provider, which could result in users
obtaining access to resources that they are not intended to have.
@ -22,39 +22,39 @@ different forms.
In the Juno release of Keystone, there is no ability within Keystone
itself to enforce that assertions or claims from an identity provider
are actually being used against a mapping that is associated with that
same identity provider. A malicious user from one trusted identity
provider could access a Keystone federated authentication URL for a
different trusted identity provider. Depending on the content of the
assertions or claims and the mapping rules, this could result in a user
gaining access to resources that they are not intended to access.
Consider an example deployment where Keystone is configured to trust two
identity providers ('idp1' and 'idp2'). The federation mapping for
'idp1' might result in users of the 'devops' group having the 'admin'
role on a specific project. If a user with an assertion or claim from
'idp2' that says they are in the 'devops' group uses the authentication
URL that is associated with 'idp1', they could also be given the 'admin'
role just as if they were a 'devops' user from 'idp1'. This access
should not be allowed.
### Recommended Actions ###
Even though the Juno release of Keystone does not have the ability to
enforce that an identity provider and a mapping match, it is possible to
configure the frontend webserver that is used to deploy Keystone to
perform this enforcement. Each identity provider supported by Keystone
has its own authentication URL. It is recommended that the webserver
configuration configures its underlying federation plug-ins to
cryptographically enforce that an identity provider is only valid for
its associated authentication URL.
For example, the SAML protocol uses an asymmetric keypair to sign the
requests and responses that are transmitted between an identity provider
and a service provider (Keystone in our case). When using Apache HTTPD
as a webserver for Keystone, a separate 'Location' directive can be used
for each federated authentication URL. The directives that define the
certificate of the identity provider for the underlying HTTPD module
that is handling the SAML protocol can be defined within the identity
provider specific 'Location' directives. This will ensure that a signed
SAML assertion from one trusted identity provider will only be
successfully validated when used against the appropriate authentication
URL.
@ -80,29 +80,29 @@ Here is an example with the mod_auth_mellon HTTPD module:
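--- begin example httpd configuration snippet ---
# Illustrative sketch only: the identity provider names, endpoint paths, and
# metadata file locations are placeholders rather than a complete configuration.
<Location /v3/OS-FEDERATION/identity_providers/idp1/protocols/saml2/auth>
    MellonEnable "auth"
    MellonIdPMetadataFile /etc/httpd/mellon/idp1-metadata.xml
</Location>

<Location /v3/OS-FEDERATION/identity_providers/idp2/protocols/saml2/auth>
    MellonEnable "auth"
    MellonIdPMetadataFile /etc/httpd/mellon/idp2-metadata.xml
</Location>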
--- end example httpd configuration snippet ---
In the above example, we have two identity providers ('idp1' and
'idp2'). Each identity provider has their own 'Location' directive,
and the 'MellonIdPMetadataFile' directive that points to the metadata
that contains the certificate of the identity provider is specific to
each 'Location' directive. This configuration will not allow a signed
assertion from 'idp1' to be used against the authentication URL for
'idp2'. An attempt to do so would be rejected by mod_auth_mellon and
would never actually reach Keystone's OS-FEDERATION extension.
It is recommended to read the Keystone federation documentation as well
as the documentation for the HTTPD module that you are using for your
federation method of choice. Some useful links to this documentation
are provided in the references section of this note.
In the Kilo release of Keystone, it is also possible to have Keystone
enforce that an assertion actually comes from the identity provider that
is associated with the authentication URL. This is performed by
comparing an identity provider identifier value from the assertion or
claim with an identifier that is stored as a part of the identity
provider within Keystone.
To enable this functionality, you must set the 'remote_id_attribute'
setting in keystone.conf, which defines the environment variable that
contains the identity provider identifier. You then must add the
identifier value that the 'remote_id_attribute' will contain as one of
the 'remote_ids' values of the associated identity provider in Keystone.
This can be done using the Identity API directly, or via the 'openstack'


@ -50,7 +50,7 @@ attack to compromising Keystone directly or man-in-the-middle capture
of a separate token that has the authorization to create
trusts/delegate/assign roles.
3. Retain the default token lifespan of 1 hour. Many workloads require
a single token for the whole workload, and take more than one hour, so
installations have increased token lifespans back to the old value of
24 hours - increasing their exposure to this issue.


@ -3,7 +3,7 @@ Service accounts may have cloud admin privileges
### Summary ###
OpenStack services (for example Nova and Glance) typically use a
service account in Keystone to perform actions. In some cases this
service account has full admin privileges, may therefore perform any
action on your cloud, and should be protected appropriately.
@ -12,7 +12,7 @@ Most OpenStack services / all versions
### Discussion ###
In many cases, OpenStack services require an OpenStack account to
perform API actions such as validating Keystone tokens. Some
deployment tools grant administrative level access to these service
accounts, making these accounts very powerful.
@ -24,7 +24,7 @@ A service account with administrator access could be used to:
- log in to Horizon
### Recommended Actions ###
Service accounts can use the "service" role rather than admin. You
can check what role the service account has by performing the following
steps:
@ -37,7 +37,7 @@ steps:
openstack role assignment list --user <service_user>
3. Compare the ID listed in the "role" column from step 2 with the role
IDs listed in step 1. If the role is listed as "admin", the service
account has full admin privileges on the cloud.
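As an illustration, a check for a hypothetical 'glance' service user might
look like the following; the user name is a placeholder and the role names
in your deployment may differ:
---- begin example CLI snippet ----
# Step 1: list all roles and note the ID of the "admin" role.
openstack role list

# Step 2: show the role assignments held by the service user.
openstack role assignment list --user glance

# Step 3: if a role ID in the output of step 2 matches the "admin"
# role ID from step 1, the service account has full admin privileges.
---- end example CLI snippet ----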
It is possible to change the role to "service" for some accounts but
@ -46,7 +46,7 @@ Neutron, and is therefore not recommended for inexperienced admins.
If a service account does have admin, it's advisable to closely
monitor login events for that user to ensure that it is not used
unexpectedly. In particular, pay attention to unusual IPs using the
service account.
### Contacts / References ###

@ -22,8 +22,8 @@ an endless flood of 'image-create' requests.
### Recommended Actions ###
For current stable OpenStack releases, users can work around this
vulnerability by using rate-limiting proxies to cover access to the
Glance API. Rate limiting is a common mechanism to prevent DoS and
brute-force attacks. Rate limiting the API requests delays the
consequences of the attack, but does not prevent it.
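Any rate-limiting proxy placed in front of the Glance API can provide this
protection. As a rough sketch only, an nginx front end (used here as a
stand-in; Repose or another proxy works equally well) could limit request
rates along these lines, with the ports and rates being illustrative values:
---- begin example nginx rate-limit snippet ----
# In the http context: track clients by address, allow 10 requests/second.
limit_req_zone $binary_remote_addr zone=glance_api:10m rate=10r/s;

server {
    # Listen on the public Glance API port and proxy to the real service,
    # which is assumed to have been moved to a local-only port.
    listen 9292;
    location / {
        limit_req zone=glance_api burst=20;
        proxy_pass http://127.0.0.1:19292;
    }
}
---- end example nginx rate-limit snippet ----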
For example, if you are using a proxy such as Repose, enable the rate
@ -38,7 +38,7 @@ policy.json file.
"add_image": "role:admin",
Another preventative action would be to monitor the logs to identify
excessive image create requests. One example of such a log message
is as follows (single line, wrapped):
---- begin example glance-api.log snippet ----

@ -3,8 +3,8 @@ Glance configuration option can lead to privilege escalation
### Summary ###
Glance exposes a configuration option called `use_user_token` in the
configuration file `glance-api.conf`. It should be noted that the
default setting (`True`) is secure. If, however, the setting is
changed to `False` and valid admin credentials are supplied in the
following section (`admin_user` and `admin_password`), Glance API
commands will be executed with admin privileges regardless of the
@ -18,15 +18,15 @@ Glance, Juno, Kilo, Liberty
The `use_user_token` configuration option was created to enable
automatic re-authentication for tokens which are close to expiration,
thus preventing the tokens from expiring in the middle of
longer-lasting Glance commands. Unfortunately the implementation
enables privilege escalation attacks by automatically executing API
commands as an administrator level user.
By default `use_user_token` is set to `True` which is secure. If the
option is disabled (set to `False`) and valid admin credentials are
specified in the `glance-api.conf` file, API commands will be executed
as the supplied admin user regardless of the intended privileges of the
calling user. Glance API v2 configurations which don't enable the
registry service (`data_api = glance.db.registry.api`) aren't affected.
Enabling unauthenticated and lower privileged users to execute Glance
@ -38,7 +38,7 @@ expose risks including:
- denial of service attacks
### Recommended Actions ###
A comprehensive fix will be included in the Mitaka release. Meanwhile
it is recommended that all users ensure that `use_user_token` is left
at the default setting (`True`) or commented out.
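A minimal sketch of the relevant portion of `glance-api.conf` with the secure
default in place is shown below; the commented credential options are included
only to illustrate which settings are involved:
---- begin example glance-api.conf snippet ----
[DEFAULT]
# Secure default: Glance acts with the calling user's own token.
use_user_token = True

# If use_user_token were set to False and these credentials were filled in,
# API commands would run with this admin account regardless of the caller.
# admin_user = ...
# admin_password = ...
---- end example glance-api.conf snippet ----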

@ -6,7 +6,7 @@ An authorization token issued by the Identity service can be revoked,
which is designed to immediately make that token invalid for future use.
When the PKI or PKIZ token providers are used, it is possible for an
attacker to manipulate the token contents of a revoked token such that
the token will still be considered to be valid. This can allow
unauthorized access to cloud resources if a revoked token is intercepted
by an attacker.
@ -15,30 +15,30 @@ Keystone, Icehouse, Juno, Kilo, Liberty
### Discussion ###
Token revocation is used in OpenStack to invalidate a token for further
use. This token revocation takes place automatically in certain
situations, such as when a user logs out of the Dashboard. If a revoked
token is obtained by another party, it should no longer be possible to
use it to perform any actions within the cloud. Unfortunately, this is
not the case when the PKI or PKIZ token providers are used.
When a PKI or PKIZ token is validated, the Identity service checks it
by searching for a revocation that matches the entire token. It is possible for
an attacker to manipulate portions of an intercepted PKI or PKIZ token
that are not cryptographically protected, which will cause the
revocation check to improperly consider the token to be valid.
### Recommended Actions ###
We recommend that you do not use the PKI or PKIZ token providers. The
PKI and PKIZ token providers do not offer any significant benefit over
other token providers such as the UUID or Fernet.
If you are using the PKI or PKIZ token providers, it is recommended that
you switch to using another supported token provider such as the UUID
provider. This issue might be fixed in a future update of the PKI and
PKIZ token providers in the Identity service.
To check what token provider you are using, you must look in the
'keystone.conf' file for your Identity service. An example is provided
below:
---- begin keystone.conf sample snippet ----
@ -49,7 +49,7 @@ provider = keystone.token.providers.uuid.Provider
---- end keystone.conf sample snippet ----
In the Liberty release of the Identity service, the token provider
configuration is different than previous OpenStack releases. An
example from the Liberty release is provided below:
---- begin keystone.conf sample snippet ----
@ -59,7 +59,7 @@ example from the Libery release is provided below:
provider = uuid
---- end keystone.conf sample snippet ----
These configuration snippets are using the UUID token provider. If you
are using any of the commented out settings from these examples, your
cloud is vulnerable to this issue and you should switch to a different
token provider.

@ -8,14 +8,14 @@ A few sentences describing the issue at a high level.
A comma separated list of affected services and OpenStack releases.
### Discussion ###
A detailed discussion of the problem. This should have enough detail
that the person reading can determine if their deployment is affected,
when the problem was introduced, and what types of attacks/problems that
an affected deployment would be exposed to.
### Recommended Actions ###
A detailed description of what can be done to remediate the problem (if
possible). If the recommendation involves configuration changes,
example snippets of configuration files should be included here.
### Contacts / References ###