Merge "Moved Fuel File Reference section"

This commit is contained in:
Jenkins 2016-04-15 06:50:20 +00:00 committed by Gerrit Code Review
commit 3b5c0cf19e
10 changed files with 1260 additions and 0 deletions

View File

@ -19,5 +19,6 @@ Fuel User Guide
fuel-user-guide/configure-additional-components.rst
fuel-user-guide/verify-environment.rst
fuel-user-guide/maintain-environment.rst
fuel-user-guide/file-ref.rst
fuel-user-guide/cli.rst

View File

@ -0,0 +1,52 @@
.. _file-ref:
Fuel configuration files reference
==================================
You can modify some of the Fuel settings directly in the corresponding
configuration files.
.. warning:: For advanced OpenStack users only!
   Editing the Fuel configuration files
   may severely damage your OpenStack environment.
After you modify a ``yaml`` file, you will get a notification in
the Fuel web UI that some settings have been changed and, therefore,
some features may become inaccessible in the Fuel web UI.
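Since a bad edit can damage the environment, one practical precaution is to keep a backup copy of any configuration file before touching it. A minimal sketch (the file name here is a stand-in created for the example, not the real path such as */etc/fuel/astute.yaml*):

```python
import pathlib
import shutil

# Sketch: back up a Fuel configuration file before editing it.
# "astute.yaml" below is a local stand-in for the real file.
cfg = pathlib.Path("astute.yaml")
cfg.write_text('fuel_version: "6.1"\n')       # stand-in content
backup = cfg.with_suffix(".yaml.bak")
shutil.copy2(cfg, backup)                      # preserves timestamps and mode
assert backup.read_text() == cfg.read_text()   # backup verified
```

If a later edit breaks the environment, restoring the backup returns you to the last known good state.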
The following table provides descriptions of the Fuel configuration files.
+-------------------------------+-------------+------------------------------+
| File | Node | Description |
+===============================+=============+==============================+
| :ref:`astute-yaml-master-ref` | Fuel Master | Configuration attributes |
| | node | passed to Puppet |
+-------------------------------+-------------+------------------------------+
| :ref:`astute-yaml-target-ref` | Fuel Slave | Configuration attributes |
| | nodes | passed to Puppet |
+-------------------------------+-------------+------------------------------+
| :ref:`engine-yaml-ref` | Fuel Master | Provisioning engine (Cobbler)|
| | node | and basic configuration of |
| | | the Fuel Slave nodes |
+-------------------------------+-------------+------------------------------+
| :ref:`network-1-yaml-ref` | Fuel Master | Network group configuration |
| | node | |
+-------------------------------+-------------+------------------------------+
| :ref:`openstack-yaml-ref` | Fuel Master | Basic configuration of the |
| | node | Fuel Slave nodes |
+-------------------------------+-------------+------------------------------+
| :ref:`settings-yaml-ref` | Fuel Master | Fuel settings |
| | node | |
+-------------------------------+-------------+------------------------------+
This section includes the following topics:
.. toctree::
   :maxdepth: 1

   file-ref/astute-yaml-master.rst
   file-ref/astute-yaml-target.rst
   file-ref/engine-yaml.rst
   file-ref/network-1-yaml.rst
   file-ref/openstack-yaml.rst
   file-ref/settings-yaml.rst

View File

@ -0,0 +1,29 @@
.. raw:: pdf

   PageBreak
.. _astute-yaml-master-ref:
astute.yaml (Fuel Master node)
------------------------------
Fuel Master Node:
**/etc/fuel/astute.yaml**
Fuel uses the *astute.yaml* file to pass configuration attributes
to Puppet.
Usage
+++++
The */etc/fuel/astute.yaml* file is installed
on the Fuel Master node
and must not be deleted.
File Format
+++++++++++
The *astute.yaml* file <detailed-description>

View File

@ -0,0 +1,711 @@
.. raw:: pdf

   PageBreak
.. _astute-yaml-target-ref:
astute.yaml (Fuel Slave nodes)
------------------------------
Fuel Slave nodes:
**/etc/astute.yaml**
Fuel uses the *astute.yaml* file to pass configuration attributes
to Puppet.
Usage
+++++
The */etc/astute.yaml* file is placed
on each Fuel Slave node when it is deployed
by **mcollective** and must not be deleted.
The Facter extension reads data from this file
and uses it to create the `$::fuel_settings` data structure.
This structure contains all variables as a single hash
and supports embedding of other rich structures
such as nodes hash or arrays.
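The resulting structure is an ordinary nested hash. A rough Python stand-in for what Facter exposes, with keys and values abbreviated from the excerpts later in this section (this models the parsed data only, not the real Facter code):

```python
# Illustrative stand-in for the hash Facter builds from /etc/astute.yaml;
# keys and values are abbreviated from the excerpts in this section.
fuel_settings = {
    "libvirt_type": "qemu",
    "nova": {"db_password": "Ns08DOge", "state_path": "/var/lib/nova"},
    "nodes": [
        {"role": "primary-controller", "name": "node-9",
         "internal_address": "10.108.22.3"},
    ],
}

# Puppet manifests drill into this hash; the Python equivalent of
# $::fuel_settings['nodes'][0]['name']:
print(fuel_settings["nodes"][0]["name"])
```

Because everything lives in one hash, a manifest can reach any setting, including rich structures such as the nodes array, through a single variable.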
File Format
+++++++++++
The *astute.yaml* file <detailed-description>
Basic networking configuration
++++++++++++++++++++++++++++++
::
libvirt_type: qemu
disable_offload: true
network_scheme:
roles:
management: br-mgmt
private: br-prv
fw-admin: br-fw-admin
storage: br-storage
provider: ovs
version: "1.0"
interfaces:
eth4:
L2:
vlan_splinters: "off"
eth3:
L2:
vlan_splinters: "off"
eth2:
L2:
vlan_splinters: "off"
eth1:
L2:
vlan_splinters: "off"
eth0:
L2:
vlan_splinters: "off"
endpoints:
br-prv:
IP: none
br-mgmt:
other_nets: []
IP:
- 10.108.22.6/24
br-storage:
other_nets: []
IP:
- 10.108.24.5/24
br-fw-admin:
other_nets:
- 10.108.20.0/24
IP:
- 10.108.20.7/24
default_gateway: true
gateway: 10.108.20.2
transformations:
- action: add-br
name: br-eth0
- bridge: br-eth0
action: add-port
name: eth0
- action: add-br
name: br-eth1
- bridge: br-eth1
action: add-port
name: eth1
- action: add-br
name: br-eth2
- bridge: br-eth2
action: add-port
name: eth2
- action: add-br
name: br-eth3
- bridge: br-eth3
action: add-port
name: eth3
- action: add-br
name: br-eth4
- bridge: br-eth4
action: add-port
name: eth4
- action: add-br
name: br-mgmt
- action: add-br
name: br-storage
- action: add-br
name: br-fw-admin
- trunks:
- 0
action: add-patch
bridges:
- br-eth4
- br-storage
- trunks:
- 0
action: add-patch
bridges:
- br-eth2
- br-mgmt
- trunks:
- 0
action: add-patch
bridges:
- br-eth0
- br-fw-admin
- action: add-br
name: br-prv
- action: add-patch
bridges:
- br-eth3
- br-prv
Nova configuration
++++++++++++++++++
::
nova:
db_password: Ns08DOge
state_path: /var/lib/nova
user_password: z8sJBhvw
Swift configuration
+++++++++++++++++++
::
swift:
user_password: Li9DPL0d
mp configuration
++++++++++++++++
::
mp:
- point: "1"
weight: "1"
- point: "2"
weight: "2"
Glance configuration
++++++++++++++++++++
::
glance:
db_password: DgVvco7J
image_cache_max_size: "5368709120"
user_password: sRX4ksp6
role: primary-mongo
deployment_mode: ha_compact
Mellanox configuration
++++++++++++++++++++++
::
neutron_mellanox:
plugin: disabled
metadata:
label: Mellanox Neutron components
enabled: true
toggleable: false
weight: 50
vf_num: "16"
mongo:
enabled: false
auth_key: ""
NTP configuration
+++++++++++++++++
::
external_ntp:
ntp_list: 0.pool.ntp.org, 1.pool.ntp.org
metadata:
label: Upstream NTP
weight: 100
Zabbix configuration
++++++++++++++++++++
::
zabbix:
db_password: 7hQFiVYa
db_root_password: xB33AjUw
password: zabbix
metadata:
label: Zabbix Access
restrictions:
- condition: not ('experimental' in version:feature_groups)
action: hide
weight: 70
username: admin
Definition of puppet tasks
++++++++++++++++++++++++++
::
tasks:
- type: puppet
priority: 100
parameters:
puppet_modules: /etc/puppet/modules
cwd: /
timeout: 3600
puppet_manifest: /etc/puppet/manifests/site.pp
uids:
- "12"
auto_assign_floating_ip: false
.. _astute-ceilometer-config-ref:
Ceilometer configuration
++++++++++++++++++++++++
::
ceilometer:
db_password: ReBB1hdT
metering_secret: jzHL7r76
enabled: true
user_password: p0JVzpHv
Public networking configuration
+++++++++++++++++++++++++++++++
::
public_vip: 10.108.21.2
public_network_assignment:
assign_to_all_nodes: false
metadata:
label: Public network assignment
restrictions:
- condition: cluster:net_provider != 'neutron'
action: hide
weight: 50
Heat configuration
++++++++++++++++++
::
heat:
db_password: Vv6vslci
enabled: true
rabbit_password: TOYQuiwH
auth_encryption_key: 3775079699142c1bcd7bd8b814648b01
user_password: s54JsapR
Fuel version
++++++++++++
::
fuel_version: "6.1"
NSX configuration
+++++++++++++++++
::
nsx_plugin:
nsx_password: ""
nsx_username: admin
packages_url: ""
l3_gw_service_uuid: ""
transport_zone_uuid: ""
connector_type: stt
metadata:
label: VMware NSX
enabled: false
restrictions:
- condition: cluster:net_provider != 'neutron' or networking_parameters:net_l23_provider != 'nsx'
action: hide
weight: 20
replication_mode: true
nsx_controllers: ""
Controller nodes configuration
++++++++++++++++++++++++++++++
::
nodes:
- role: primary-controller
internal_netmask: 255.255.255.0
storage_netmask: 255.255.255.0
internal_address: 10.108.22.3
uid: "9"
swift_zone: "9"
public_netmask: 255.255.255.0
public_address: 10.108.21.3
name: node-9
storage_address: 10.108.24.2
fqdn: node-9.test.domain.local
- role: controller
internal_netmask: 255.255.255.0
storage_netmask: 255.255.255.0
internal_address: 10.108.22.4
uid: "10"
swift_zone: "10"
public_netmask: 255.255.255.0
public_address: 10.108.21.4
name: node-10
storage_address: 10.108.24.3
fqdn: node-10.test.domain.local
- role: controller
internal_netmask: 255.255.255.0
storage_netmask: 255.255.255.0
internal_address: 10.108.22.5
uid: "11"
swift_zone: "11"
public_netmask: 255.255.255.0
public_address: 10.108.21.5
name: node-11
storage_address: 10.108.24.4
fqdn: node-11.test.domain.local
.. _astute-mongodb-nodes-ref:
MongoDB nodes configuration
+++++++++++++++++++++++++++
Each OpenStack environment that uses Ceilometer
and MongoDB must have a definition for each MongoDB node
in the *astute.yaml* file. One node is designated as
the `primary-mongo` node and all other nodes have
`mongo` specified as a role.
Ideally, you should have one MongoDB node for each
controller node in an OpenStack environment.
You can use the Fuel web UI to deploy
as many MongoDB nodes as required
when you initially create your OpenStack environment.
You must edit this file and use the command line
to add MongoDB nodes to a deployed OpenStack environment.
The configuration for the primary MongoDB node is:
::
- role: primary-mongo
internal_netmask: 255.255.255.0
storage_netmask: 255.255.255.0
internal_address: 10.108.22.6
uid: "12"
swift_zone: "12"
name: node-12
storage_address: 10.108.24.5
fqdn: node-12.test.domain.local
The fields are:

:internal_netmask: Netmask used for the Internal logical network.
:storage_netmask: Netmask used for the Storage logical network.
:internal_address: IP address of the node on the Internal logical network.
:uid: Unique identifier that Fuel assigns to the node.
:swift_zone: Swift zone of the node; in this example, it matches the node uid.
:name: Name that Fuel assigns to the node.
:storage_address: IP address of the node on the Storage logical network.
:fqdn: Fully qualified domain name of the node.

The configuration for each non-primary MongoDB node
has the same fields.
The *astute.yaml* file includes one section like this
for each configured MongoDB node:
::
- role: mongo
internal_netmask: 255.255.255.0
storage_netmask: 255.255.255.0
internal_address: 10.108.22.7
uid: "13"
swift_zone: "13"
name: node-13
storage_address: 10.108.24.6
fqdn: node-13.test.domain.local
Sahara configuration
++++++++++++++++++++
::
sahara:
db_password: 0VDkceJQ
enabled: false
user_password: 4zs7JZaY
deployment_id: 9
Provisioning configuration
++++++++++++++++++++++++++
::
provision:
method: cobbler
metadata:
label: Provision
restrictions:
- condition: not ('experimental' in version:feature_groups)
action: hide
weight: 80
image_data:
/:
uri: http://10.108.20.2:8080/targetimages/ubuntu_1204_amd64.img.gz
format: ext4
container: gzip
/boot:
uri: http://10.108.20.2:8080/targetimages/ubuntu_1204_amd64-boot.img.gz
format: ext2
container: gzip
nova_quota: false
uid: "12"
repo_metadata:
2014.2-6.0: http://10.108.20.2:8080/2014.2-6.0/ubuntu/x86_64 precise main
Storage configuration
+++++++++++++++++++++
::
storage:
objects_ceph: false
pg_num: 128
vc_user: ""
iser: false
images_ceph: false
ephemeral_ceph: false
vc_datastore: ""
vc_password: ""
osd_pool_size: "2"
volumes_vmdk: false
metadata:
label: Storage
weight: 60
vc_host: ""
volumes_lvm: true
images_vcenter: false
vc_image_dir: /openstack_glance
volumes_ceph: false
vc_datacenter: ""
Keystone configuration
++++++++++++++++++++++
::
keystone:
db_password: rwTdR4Vd
admin_token: YXauBQbY
priority: 200
Cinder configuration
++++++++++++++++++++
::
cinder:
db_password: fv85YGzr
user_password: cIVtXdbp
Corosync configuration
++++++++++++++++++++++
::
corosync:
group: 226.94.1.1
verified: false
metadata:
label: Corosync
restrictions:
- condition: "true"
action: hide
weight: 50
port: "12000"
Miscellaneous configs to look at later
++++++++++++++++++++++++++++++++++++++
::
management_vip: 10.108.22.2
test_vm_image:
img_path: /usr/share/cirros-testvm/cirros-x86_64-disk.img
img_name: TestVM
min_ram: 64
public: "true"
glance_properties: "--property murano_image_info='{\"title\": \"Murano Demo\", \"type\": \"cirros.demo\"}'"
os_name: cirros
disk_format: qcow2
container_format: bare
quantum: true
cobbler:
profile: ubuntu_1204_x86_64
status: discover
management_network_range: 10.108.22.0/24
fail_if_error: true
puppet_modules_source: rsync://10.108.20.2:/puppet/2014.2-6.0/modules/
master_ip: 10.108.20.2
puppet_manifests_source: rsync://10.108.20.2:/puppet/2014.2-6.0/manifests/
resume_guests_state_on_host_boot: true
Syslog configuration
++++++++++++++++++++
::
syslog:
syslog_transport: tcp
syslog_port: "514"
metadata:
label: Syslog
weight: 50
syslog_server: ""
debug: false
online: true
metadata:
label: Common
weight: 30
access:
email: admin@localhost
user: admin
password: admin
metadata:
label: Access
weight: 10
tenant: admin
openstack_version_prev:
use_cow_images: true
last_controller: node-11
kernel_params:
kernel: console=ttyS0,9600 console=tty0 rootdelay=90 nomodeset
metadata:
label: Kernel parameters
weight: 40
mysql:
wsrep_password: 6JoYdvoz
root_password: ZtwW8gk8
external_dns:
dns_list: 8.8.8.8, 8.8.4.4
metadata:
label: Upstream DNS
weight: 90
rabbit:
password: GGcZVT4f
compute_scheduler_driver: nova.scheduler.filter_scheduler.FilterScheduler
openstack_version: 2014.2-6.0
External MongoDB configuration
++++++++++++++++++++++++++++++
::
external_mongo:
mongo_replset: ""
mongo_password: ceilometer
mongo_user: ceilometer
metadata:
label: External MongoDB
restrictions:
- condition: settings:additional_components.mongo.value == false
action: hide
weight: 20
hosts_ip: ""
mongo_db_name: ceilometer
Murano configuration
++++++++++++++++++++
::
murano:
db_password: 0PVsOHo9
enabled: false
rabbit_password: FGjWVooK
user_password: crpWYkaY
More miscellaneous configs
++++++++++++++++++++++++++
::
quantum_settings:
database:
passwd: yOL94I9n
L3:
use_namespaces: true
L2:
phys_nets:
physnet2:
vlan_range: 1000:1030
bridge: br-prv
base_mac: fa:16:3e:00:00:00
segmentation_type: vlan
predefined_networks:
admin_floating_net:
L2:
segment_id:
network_type: local
router_ext: true
physnet:
L3:
floating: 10.108.21.11:10.108.21.20
subnet: 10.108.21.0/24
enable_dhcp: false
gateway: 10.108.21.1
nameservers: []
tenant: admin
shared: false
admin_internal_net:
L2:
segment_id:
network_type: vlan
router_ext: false
physnet: physnet2
L3:
floating:
subnet: 192.168.111.0/24
enable_dhcp: true
gateway: 192.168.111.1
nameservers:
- 8.8.4.4
- 8.8.8.8
tenant: admin
shared: false
keystone:
admin_password: gqWPu2Vg
metadata:
metadata_proxy_shared_secret: qoEcTup3
fqdn: node-12.test.domain.local
storage_network_range: 10.108.24.0/24
vCenter configuration
+++++++++++++++++++++
::
vcenter:
datastore_regex: ""
host_ip: ""
vc_user: ""
vlan_interface: ""
vc_password: ""
cluster: ""
metadata:
label: vCenter
restrictions:
- condition: settings:common.libvirt_type.value != 'vcenter'
action: hide
weight: 20
use_vcenter: true
Syslog configuration
++++++++++++++++++++
::
base_syslog:
syslog_port: "514"
syslog_server: 10.108.20.2

View File

@ -0,0 +1,54 @@
.. raw:: pdf

   PageBreak
.. _xxx-ref:
xxx.yaml
--------
Fuel Master Node:
**/usr/lib/python2.6/site-packages/nailgun/fixtures**
The *xxx.yaml* file defines
the basic configuration of the target nodes
that Fuel deploys for the OpenStack environment.
Initially, it contains Fuel defaults;
these are adjusted in response to configuration choices
the user makes through the Fuel UI
and then fed to :ref:`Nailgun<nailgun-term>`.
Usage
~~~~~
#. Log into the nailgun :ref:`docker-term` container:
::
dockerctl shell nailgun
#. Edit the file.
#. Run the following commands to force Nailgun
   to reread its settings and restart:
::
manage.py dropdb && manage.py syncdb && manage.py loaddefault
killall nailgund
#. Exit the Nailgun docker container:
::
exit
#. Run the following command
   to sync the deployment tasks:
::
fuel rel --sync-deployment-tasks --dir /etc/puppet
File Format
~~~~~~~~~~~

View File

@ -0,0 +1,52 @@
.. raw:: pdf

   PageBreak
.. _engine-yaml-ref:
engine.yaml
-----------
Fuel Master Node:
**/root/provisioning_1**
The *engine.yaml* file defines
the basic configuration of the target nodes
that Fuel deploys for the OpenStack environment.
Initially, it contains Fuel defaults;
these are adjusted in response to configuration choices
the user makes through the Fuel UI
and then fed to Nailgun.
Usage
+++++
#. Dump provisioning information using the following
fuel command:
::
fuel --env 1 provisioning default
where ``--env 1`` should be set to the specific environment
(id=1 in this example).
#. Edit the file.
#. Upload the modified file:
::
fuel --env 1 provisioning upload
Description
+++++++++++
The *engine.yaml* file defines the provisioning engine
being used
along with the password and URLs used to access it. By default,
Cobbler is specified as the provisioning engine.

View File

@ -0,0 +1,41 @@
.. raw:: pdf

   PageBreak
.. _xxx-ref:
xxx.yaml
--------
Fuel Master Node:
**yyy**
The *xxx.yaml* file <brief functional description>
Usage
~~~~~
#. Dump provisioning information using this
:ref:`fuel CLI<fuel-cli-config>` command::
fuel --env 1 provisioning default
where ``--env 1`` points to the specific environment
(id=1 in this example).
#. Edit the file.
#. Upload the modified file:
::
fuel --env 1 provisioning upload
File Format
~~~~~~~~~~~
The *xxx.yaml* file <detailed-description>

View File

@ -0,0 +1,154 @@
.. raw:: pdf

   PageBreak
.. _network-1-yaml-ref:
network_1.yaml
--------------
Fuel Master Node:
**/root/network_1.yaml**
The *network_1.yaml* file contains information about the network configuration
for the OpenStack environment.
Usage
+++++
#. Dump network information using the following Fuel command:
::
fuel --env 1 network --download
where ``--env 1`` points to the specific environment
(id=1 in this example).
#. Edit the file and add information about the new Network Group(s).
#. Upload the modified file:
::
fuel --env 1 network --upload
If you make a mistake when populating this file,
the upload appears to succeed,
but no network data changes are applied;
if you then download the file again,
the unmodified file may overwrite
the modifications you made.
To protect yourself,
we recommend the following process:
- After you edit the file but before you upload it,
make a copy in another location.
- Upload the file.
- Download the file again.
- Compare the current file to the one you saved.
If they match, you successfully configured your networks.
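The steps above can be sketched as follows. The ``fuel --env 1 network --upload`` and ``--download`` calls from this section require a live Fuel Master node, so the sketch only models the local save-and-compare steps, with a stand-in file in place of your edited dump:

```python
import pathlib
import shutil

# Sketch of the verify-after-upload workflow described above.
dump = pathlib.Path("network_1.yaml")
dump.write_text("management_vip: 10.108.37.2\n")   # stand-in for the edited dump
saved = pathlib.Path("network_1.yaml.saved")
shutil.copy(dump, saved)                            # keep a copy before uploading
# ... run `fuel --env 1 network --upload`, then `--download` again ...
if dump.read_text() == saved.read_text():
    print("network configuration accepted")
else:
    print("upload was silently ignored; re-check the file")
```

A byte-for-byte match between the saved copy and the freshly downloaded file is the signal that Fuel actually applied your changes.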
If you configure your networking by editing this file,
you should create and configure the rest of your environment
using the Fuel CLI rather than the Web UI.
In particular, do not attempt to configure your networking
using the Web UI screens.
File Format
+++++++++++
The *network_1.yaml* file contains
**global settings** and the **networks** section.
Note that the *network_1.yaml* file is dumped in dictionary order,
so the sections may appear in a different order than
documented here.
Global settings
+++++++++++++++
Global settings are mostly at the beginning of the file,
but one (**public_vip**) is at the end of the file.
When configuring a new environment,
you must set values for the **management_vip**,
**floating_ranges**, and **public_vip** parameters.
::
management_vip: 10.108.37.2
networking_parameters:
base_mac: fa:16:3e:00:00:00
dns_nameservers:
- 8.8.4.4
- 8.8.8.8
floating_ranges:
- - 10.108.36.128
- 10.108.36.254
gre_id_range:
- 2
- 65535
internal_cidr: 192.168.111.0/24
internal_gateway: 192.168.111.1
net_l23_provider: ovs
segmentation_type: gre
vlan_range:
- 1000
- 1030
. . .
public_vip: 10.108.36.2
Networks
++++++++
The **networks** section contains the configurations
of each Network Group that has been created.
You must set values for
the **cidr**, **gateway**, and **ip_ranges** parameters
for each logical network in the group.
This is what the configuration of one logical network (**public**) looks like.
A similar section is provided for each of the
logical networks that belong to the Node Group.
::
networks:
- cidr: 10.108.36.0/24
gateway: 10.108.36.1
group_id: 1
id: 1
ip_ranges:
- - 10.108.36.2
- 10.108.36.127
meta:
assign_vip: true
cidr: 172.16.0.0/24
configurable: true
floating_range_var: floating_ranges
ip_range:
- 172.16.0.2
- 172.16.0.126
map_priority: 1
name: public
notation: ip_ranges
render_addr_mask: public
render_type: null
use_gateway: true
vlan_start: null
name: public
vlan_start: null
If you create additional Node Groups,
the file contains segments for each Node Group,
each identified by a unique **group_id**,
with configuration blocks for each
of the four logical networks associated with that Node Group.

View File

@ -0,0 +1,121 @@
.. raw:: pdf

   PageBreak
.. _openstack-yaml-ref:
openstack.yaml
--------------
Fuel Master Node:
**/usr/lib/python2.6/site-packages/nailgun/fixtures/openstack.yaml**
The *openstack.yaml* file defines
the basic configuration of the target nodes
that Fuel deploys for the OpenStack environment.
Initially, it contains Fuel defaults;
these are adjusted in response to configuration choices
the user makes through the Fuel UI
and then fed to Nailgun.
Usage
+++++
#. Log into the nailgun Docker container:
::
dockerctl shell nailgun
#. Edit the file.
#. Run the following commands to force Nailgun
   to reread its settings and restart:
::
manage.py dropdb && manage.py syncdb && manage.py loaddefault
killall nailgund
#. Exit the Nailgun docker container:
::
exit
#. Run the following command
   to sync the deployment tasks:
::
fuel rel --sync-deployment-tasks --dir /etc/puppet
File Format
+++++++++++
The *openstack.yaml* file contains a number of blocks,
each of which may contain multiple parameters.
The major ones are described here.
The file has two major sections:
- The first is for VirtualBox and other limited deployments.
- The second is for full bare-metal deployments.
Roles
+++++
Lists each of the roles available on the
:guilabel:`Assign Roles` screen
with descriptions.
Note that there are two `roles-metadata` sections in the file:
- The limited deployments section
lists only the Controller, Compute, and Cinder LVM roles.
- The "full_release" section
lists the Controller, Compute, Cinder LVM,
Ceph-OSD, MongoDB, and Zabbix Server roles.
Roles that should not be deployed on the same server
are identified with "conflicts" statements
such as the following, which prevents a Compute role
from being installed on a Controller node:
::
controller:
name: "Controller"
description: "The controller initiates orchestration activities..."
has_primary: true
conflicts:
- compute
The "has_primary" line was added in Release 6.0
to identify the Primary Controller.
In earlier releases,
Galera searched for the Controller node with the lowest node-id value
and made that the Primary Controller.
This created problems when a new controller that had a lower node-id value
was added to an existing Controller cluster
and became the Primary Controller,
which conflicted with the existing Primary Controller in the cluster.
Persisting the Primary role in the database solves this problem.
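The pre-6.0 behavior described above amounts to electing the controller with the lowest node-id. A sketch with illustrative node data (the real election lived in the Galera Puppet logic, not in Python):

```python
# Sketch of the pre-6.0 election: the controller with the lowest uid
# becomes the Primary Controller.
controllers = [
    {"uid": 10, "name": "node-10"},
    {"uid": 9, "name": "node-9"},
    {"uid": 11, "name": "node-11"},
]
primary = min(controllers, key=lambda n: n["uid"])
print(primary["name"])  # adding a node with a lower uid would steal the role
```

This makes the failure mode visible: any newly added controller with a smaller uid silently wins the election, which is exactly what persisting "has_primary" in the database prevents.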
If you delete the "conflicts:" and "compute" lines
and redeploy Nailgun,
you can create a bare-metal deployment
that runs on a single server.
.. warning:: Deploying Fuel on VirtualBox is a much better
   way to install Fuel on minimal hardware
   for demonstration purposes
   than using this procedure.
Be extremely careful when using this "all-in-one" deployment;
if you create too many VM instances,
they may consume all the available CPUs,
causing serious problems accessing the MySQL database.
Resource-intensive services
such as Ceilometer with MongoDB, Zabbix,
and Ceph are also apt to cause problems
when OpenStack is deployed on a single server.

View File

@ -0,0 +1,45 @@
.. raw:: pdf

   PageBreak
.. _settings-yaml-ref:
settings.yaml
-------------
Fuel Master Node:
**/root/settings_x.yaml**
The *settings.yaml* file contains
the current values for the information
on the Settings page of the Fuel UI.
Usage
+++++
#. Dump the settings information using the following
   Fuel command:
::
fuel --env 1 settings default
where ``--env 1`` points to the specific environment
(id=1 in this example).
#. Edit file.
#. Upload the modified file:
::
fuel --env 1 settings upload
File Format
+++++++++++
Modify the Fuel settings using the Fuel web UI.