From ff210912ea1d66390789b7d5b427bcaa77b69291 Mon Sep 17 00:00:00 2001 From: marios Date: Wed, 19 Aug 2015 17:30:04 +0300 Subject: [PATCH 1/7] Adds an example for retrieving node swift introspection data As part of https://bugzilla.redhat.com/show_bug.cgi?id=1255058, at least initially, the user will have to collect the mac address list and feed it into the heat stack create. This just adds an enhanced example on how to achieve that using the downloaded node swift data. Change-Id: I6ccc6c2aac794214d69c7bb33046a74b80455ff1 --- .../advanced_deployment/profile_matching.rst | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/doc/source/advanced_deployment/profile_matching.rst b/doc/source/advanced_deployment/profile_matching.rst index ce868669..82917a20 100644 --- a/doc/source/advanced_deployment/profile_matching.rst +++ b/doc/source/advanced_deployment/profile_matching.rst @@ -46,6 +46,23 @@ is ``ironic-inspector`` and can be modified in **/etc/ironic-discoverd/discoverd.conf**. Swift object name is stored under ``hardware_swift_object`` key in Ironic node extra field. +As an example, to download the swift data for all nodes to a local directory +and use that to collect a list of node mac addresses:: + + # You will need the discoverd user password + # from /etc/ironic-discoverd/discoverd.conf: + export IRONIC_DISCOVERD_PASSWORD= + + # Download the extra introspection data from swift: + for node in $(ironic node-list | grep -v UUID| awk '{print $2}'); + do swift -U service:ironic -K $IRONIC_DISCOVERD_PASSWORD download ironic-discoverd extra_hardware-$node; + done + + # Use jq to access the local data - for example gather macs: + for f in extra_hardware-*; + do cat $f | jq -r 'map(select(.[0]=="network" and .[2]=="serial"))'; + done + State file ---------- From 6cf4bdb9fc6595d51529e5fe52b120cb9b68934a Mon Sep 17 00:00:00 2001 From: James Slagle Date: Wed, 2 Sep 2015 17:34:22 -0400 Subject: [PATCH 2/7] Fix curl commands Need to use -o, not -O. Also fix an incorrect link to the latest delorean repo. --- doc/source/environments/virtual.rst | 2 +- doc/source/installation/installing.rst | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/source/environments/virtual.rst b/doc/source/environments/virtual.rst index eb78e96c..472212fc 100644 --- a/doc/source/environments/virtual.rst +++ b/doc/source/environments/virtual.rst @@ -115,7 +115,7 @@ Preparing the Virtual Environment (Automated) sudo yum install -y https://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm # Enable RDO Trunk - sudo curl http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo -O /etc/yum.repos.d/delorean.repo + sudo curl http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo -o /etc/yum.repos.d/delorean.repo The above Delorean repository is updated after a successful CI run. The following repo can be used instead if the newest packages are needed before a CI run has passed.
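Whichever trunk repository is chosen, it is worth confirming that the repository file actually landed and is visible to yum before installing packages. The check below is an illustrative sketch only; the exact repo id printed by yum may differ on your system::

    # Confirm the repo file was written where the curl command pointed
    cat /etc/yum.repos.d/delorean.repo

    # Confirm yum can see an enabled Delorean repository
    yum repolist enabled | grep -i delorean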
diff --git a/doc/source/installation/installing.rst b/doc/source/installation/installing.rst index 5cb0acc3..fc8d521a 100644 --- a/doc/source/installation/installing.rst +++ b/doc/source/installation/installing.rst @@ -58,13 +58,13 @@ Installing the Undercloud sudo yum install -y https://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm # Enable RDO Trunk - sudo curl http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo -O /etc/yum.repos.d/delorean.repo + sudo curl http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo -o /etc/yum.repos.d/delorean.repo The above Delorean repository is updated after a successful CI run. The following repo can be used instead if the newest packages are needed before a CI run has passed. :: - sudo curl -o /etc/yum.repos.d/rdo-management-trunk.repo http://trunk-mgt.rdoproject.org/centos-kilo/current/delorean-rdo-management.repo + sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7/current/delorean.repo #. Install the TripleO CLI, which will pull in all other necessary packages as dependencies:: sudo yum install -y python-rdomanager-oscplugin From 2e308f0cd16b6e3b570de44a5d054fa974f7a0b1 Mon Sep 17 00:00:00 2001 From: James Slagle Date: Thu, 3 Sep 2015 09:11:11 -0400 Subject: [PATCH 3/7] Factor out the common repositories info Moves the common steps for repository setup into a separate file, repositories.txt, and uses an include directive. Also adds a step to install the yum-plugin-priorities package, which is needed when combining Delorean with RDO. --- doc/source/environments/virtual.rst | 19 ++++--------------- doc/source/installation/installing.rst | 18 ++++-------------- doc/source/repositories.txt | 22 ++++++++++++++++++++++ 3 files changed, 30 insertions(+), 29 deletions(-) create mode 100644 doc/source/repositories.txt diff --git a/doc/source/environments/virtual.rst b/doc/source/environments/virtual.rst index 472212fc..f58d860c 100644 --- a/doc/source/environments/virtual.rst +++ b/doc/source/environments/virtual.rst @@ -109,22 +109,11 @@ Preparing the Virtual Environment (Automated) #. Enable needed repositories: - :: +.. include:: ../repositories.txt - # Enable RDO Kilo - sudo yum install -y https://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm - - # Enable RDO Trunk - sudo curl http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo -o /etc/yum.repos.d/delorean.repo - - The above Delorean repository is updated after a successful CI run. The following repo can be used instead if the newest packages are needed before a CI run has passed. - - :: - - sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7/current/delorean.repo - - -#. Install instack-undercloud:: +.. We need to manually continue our list numbering here since the above + "include" directive breaks the numbering. +5.
Install instack-undercloud:: sudo yum install -y instack-undercloud diff --git a/doc/source/installation/installing.rst b/doc/source/installation/installing.rst index fc8d521a..2fe213db 100644 --- a/doc/source/installation/installing.rst +++ b/doc/source/installation/installing.rst @@ -52,22 +52,12 @@ Installing the Undercloud sudo yum install -y yum-utils sudo yum-config-manager --enable rhelosp-rhel-7-server-opt - :: - - # Enable RDO Kilo - sudo yum install -y https://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm - - # Enable RDO Trunk - sudo curl http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo -o /etc/yum.repos.d/delorean.repo - - The above Delorean repository is updated after a successful CI run. The following repo can be used instead if the newest packages are needed before a CI run has passed. - - :: - - sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7/current/delorean.repo +.. include:: ../repositories.txt -#. Install the TripleO CLI, which will pull in all other necessary packages as dependencies:: +.. We need to manually continue our list numbering here since the above + "include" directive breaks the numbering. +3. Install the TripleO CLI, which will pull in all other necessary packages as dependencies:: sudo yum install -y python-rdomanager-oscplugin diff --git a/doc/source/repositories.txt b/doc/source/repositories.txt new file mode 100644 index 00000000..4170c28c --- /dev/null +++ b/doc/source/repositories.txt @@ -0,0 +1,22 @@ + + :: + + # Enable RDO Kilo + sudo yum install -y https://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm + + # Enable RDO Trunk + sudo curl http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo -o /etc/yum.repos.d/delorean.repo + + The above Delorean repository is updated after a successful CI run. The following repo can be used instead if the newest packages are needed before a CI run has passed. + + :: + + sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7/current/delorean.repo + + Install the yum-plugin-priorities package so that the Delorean repository + takes precedence over the main RDO repositories. + + :: + + sudo yum -y install yum-plugin-priorities + From 0a30d7376e9fe941f92c68663696ab708e3729c4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Martin=20Andr=C3=A9?= Date: Tue, 25 Aug 2015 15:41:06 +0200 Subject: [PATCH 4/7] Various fixes for architecture documentation More specifically: * fixed typos * use ASCII chars for quotes and double quotes * properly capitalize OpenStack * fixed list formatting Change-Id: Iaf94a98c3ab4df2da2f972ac3cee07e0f0b6bd60 --- doc/source/introduction/architecture.rst | 65 ++++++++++++------------ 1 file changed, 33 insertions(+), 32 deletions(-) diff --git a/doc/source/introduction/architecture.rst b/doc/source/introduction/architecture.rst index 95694a2e..03dff950 100644 --- a/doc/source/introduction/architecture.rst +++ b/doc/source/introduction/architecture.rst @@ -31,9 +31,9 @@ components. .. image:: ../_images/overview.png -With TripleO, you start by creating an “undercloud” (a deployment cloud) +With TripleO, you start by creating an "undercloud" (a deployment cloud) that will contain the necessary OpenStack components to deploy and manage an -“overcloud” (a workload cloud). The overcloud is the deployed solution +"overcloud" (a workload cloud). 
The overcloud is the deployed solution and can represent a cloud for any purpose (e.g. production, staging, test, etc). .. image:: ../_images/logical_view.png @@ -75,25 +75,25 @@ that granular control and validation of the deployment is possible Benefits -------- -Using |project|’s combination of OpenStack components, and their APIs, as the +Using |project|'s combination of OpenStack components, and their APIs, as the infrastructure to deploy and operate OpenStack itself delivers several benefits: -* |project|’s APIs are the OpenStack APIs. They’re well maintained, well +* |project|'s APIs are the OpenStack APIs. They're well maintained, well documented, and come with client libraries and command line tools. Users who - invest time in learning about |project|’s APIs are also learning about + invest time in learning about |project|'s APIs are also learning about OpenStack itself, and users who are already familiar with OpenStack will find a great deal in |project| that they already understand. * Using the OpenStack components allows more rapid feature development of |project| than might otherwise be the case; |project| automatically inherits all the new features which are added to Glance, Heat etc., even when - the developer of the new feature didn’t explicitly have |project| in mind. + the developer of the new feature didn't explicitly have |project| in mind. * The same applies to bug fixes and security updates. When OpenStack developers fix bugs in the common components, those fixes are inherited by |project|. -* Users’ can invest time in integrating their own scripts and utilities with - |project|’s APIs with some confidence. Those APIs are cooperatively - maintained and developed by the OpenStack community. They’re not at risk of +* Users' can invest time in integrating their own scripts and utilities with + |project|'s APIs with some confidence. Those APIs are cooperatively + maintained and developed by the OpenStack community. They're not at risk of being suddenly changed or retired by a single controlling vendor. -* For developers, tight integration with the openstack APIs provides a solid +* For developers, tight integration with the OpenStack APIs provides a solid architecture, which has gone through extensive community review. It should be noted that not everything in |project| is a reused OpenStack @@ -108,7 +108,7 @@ Deployment Workflow Overview #. Environment Preparation - * Prepare your environemnt (baremetal or virtual) + * Prepare your environment (baremetal or virtual) * Install undercloud @@ -163,7 +163,7 @@ Environment Preparation In the first place, you need to check that your environment is ready. |project| can deploy OpenStack into baremetal as well as virtual environments. You need to make sure that your environment satisfies minimum requirements for -given environemnt type and that networking is correctly set up. +given environment type and that networking is correctly set up. Next step is to install the undercloud. We install undercloud using `Instack `_'s script and it calls @@ -199,7 +199,7 @@ Nodes """"" Deploying the overcloud requires suitable hardware. The first task is to -register the available hardware with Ironic, OpenStack’s equivalent of a +register the available hardware with Ironic, OpenStack's equivalent of a hypervisor for managing baremetal servers. User can define the hardware attributes (such as number of CPUs, RAM, disk) manually or he can leave the fields out and run introspection of the nodes afterwards. 
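As a sketch of that manual path, a node can be registered and its attributes filled in directly with the Ironic client. The driver, IPMI address, credentials, MAC address and hardware sizes below are placeholders chosen for illustration, not values taken from this guide::

    # Register a node, supplying its power management details
    ironic node-create -d pxe_ipmitool \
        -i ipmi_address=192.0.2.10 \
        -i ipmi_username=admin \
        -i ipmi_password=secret

    # Record the hardware attributes by hand (skip if introspection will fill them in)
    ironic node-update [node UUID] add \
        properties/cpus=4 \
        properties/memory_mb=8192 \
        properties/local_gb=100 \
        properties/cpu_arch=x86_64

    # Register the MAC address of the provisioning NIC
    ironic port-create -n [node UUID] -a 52:54:00:aa:bb:cc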
@@ -217,7 +217,7 @@ The sequence of events is pictured below: * The discovery ramdisk probes the hardware on the node and gathers facts, including the number of CPU cores, the local disk size and the amount of RAM. * The ramdisk posts the facts to the discoverd API. -* All facts are passed and stored in the Ironic databse. +* All facts are passed and stored in the Ironic database. * There can be performed advanced role matching via the ''ahc-match'' tool, which simply adds an additional role categorization to Ironic based on discovered node facts and specified conditions. @@ -229,9 +229,9 @@ Flavors When users are creating virtual machines (VMs) in an OpenStack cloud, the flavor that they choose specifies the capacity of the VM which should be created. The flavor defines the CPU count, the amount of RAM, the amount of disk space etc. -As long as the cloud has enough capacity to grant the user’s wish, and the user -hasn’t reached their quota limit, the flavor acts as a set of instructions on -exactly what kind of VM to create on the user’s behalf. +As long as the cloud has enough capacity to grant the user's wish, and the user +hasn't reached their quota limit, the flavor acts as a set of instructions on +exactly what kind of VM to create on the user's behalf. In the undercloud, where the machines are usually physical rather than virtual (or, at least, pre-existing, rather than created on demand), flavors have a @@ -246,7 +246,7 @@ two different modes. The simpler PoC (Proof of Concept) mode is intended to enable new users to experiment, without worrying about matching hardware profiles. In this mode, -there’s one single, global flavor, and any hardware can match it. That +there's one single, global flavor, and any hardware can match it. That effectively removes flavor matching. Users can use whatever hardware they wish. For the second mode, named Scale because it is suited to larger scale overcloud @@ -278,11 +278,11 @@ Tuskar API. A role brings together following things: task -In the case of the “Compute” role: +In the case of the "Compute" role: * the image must contain all the required software to boot an OS and then run the KVM hypervisor and the Nova compute service -* the flavor (at least for a deployment which isn’t a simple proof of concept), +* the flavor (at least for a deployment which isn't a simple proof of concept), should specify that the machine has enough CPU capacity and RAM to host several VMs concurrently * the Heat templates will take care of ensuring that the Nova service is @@ -295,11 +295,12 @@ individual services cannot easily be scaled independently of the Controller role future release. Customizable things during deployment planning are: + * Number of nodes for each role * Service parameters configuration * Network configuration (NIC configuration options, isolated vs. single overlay) * Ceph rbd backend options and defaults -* Ways to pass in extra configuration, e.g site-specific customzations +* Ways to pass in extra configuration, e.g site-specific customizations Deployment @@ -312,12 +313,12 @@ To deploy the overcloud Tuskar needs gather all plan information it keeps and build a Heat templates which describe desired overcloud. This template is served to to Heat which will orchestrate the whole deployment -and it will create a stack. Stack is Heat’s own term for the applications that +and it will create a stack. Stack is Heat's own term for the applications that it creates. The overcloud, in Heat terms, is a particularly complex instance of a stack. 
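While that stack is being created, its progress can be followed directly with the Heat client. A minimal sketch, assuming the default stack name ``overcloud``::

    # List stacks and their overall status
    heat stack-list

    # Drill into the resources and events of the overcloud stack
    heat resource-list overcloud
    heat event-list overcloud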
In order to the stack to be deployed, Heat makes successive calls to Nova, -OpenStack’s compute service controller. Nova depends upon Ironic, which, as +OpenStack's compute service controller. Nova depends upon Ironic, which, as described above has acquired an inventory of discovered hardware by this stage in the process. @@ -329,10 +330,10 @@ nodes, ensuring that the selected nodes meets the hardware requirements. Once the target node has been selected, Ironic does the actual provisioning of the node, Ironic retrieves the OS image associated with the role from Glance, causes the node to boot a deployment ramdisk and then, in the typical case, -exports the node’s local disk over iSCSI so that the disk can be partitioned and +exports the node's local disk over iSCSI so that the disk can be partitioned and the have the OS image written onto it by the Ironic Conductor. -See Ironic’s `Understanding Baremetal Deployment `_ for further details. @@ -351,7 +352,7 @@ After the overcloud has been deployed, the initialization of OpenStack services scripts in the `tripleo-incubator `_ source repository and it uses bits from `os-cloud-config `_ which contains common code, -the seed initialisation logic, and the post heat completion initial +the seed initialization logic, and the post heat completion initial configuration of a cloud. There are three primary steps to completing the initialization: @@ -363,10 +364,10 @@ The first step initializes Keystone for use with normal authentication by creating the admin and service tenants, the admin and Member roles, the admin user, configure certificates and finally registers the initial identity endpoint. The next step registers image, orchestration, network and compute -services running on the default ports on the controlplane node. Finally, Neutron -is given a starting IP address, ending IP address, and a CIDR notation to -represent the subnet for the block of floating IP addresses that will be used -within the overcloud. +services running on the default ports on the control plane node. Finally, +Neutron is given a starting IP address, ending IP address, and a CIDR notation +to represent the subnet for the block of floating IP addresses that will be +used within the overcloud. @@ -392,7 +393,7 @@ Monitoring the Overcloud ^^^^^^^^^^^^^^^^^^^^^^^^ When the overcloud is deployed, Ceilometer can be configured to track a set of -OS metrics for each node (system load, CPU utiization, swap usage etc.) These +OS metrics for each node (system load, CPU utilization, swap usage etc.) These metrics are graphed in the GUI, both for individual nodes, and for groups of nodes, such as the collection of nodes which are all delivering a particular role. @@ -416,7 +417,7 @@ stages: * Making sure you have enough nodes to deploy on (or register new nodes as described in the "Undercloud Data Preparation" section above). -* Updating the plan managed by Tuskar, as described in the “Deployment Planning" +* Updating the plan managed by Tuskar, as described in the "Deployment Planning" section above. * Calling Heat to update the stack which will apply the set of changes to the overcloud. From 01ee4f59356b7d7df572d021931a4c7fa1e07cd9 Mon Sep 17 00:00:00 2001 From: Ryan Hefner Date: Thu, 16 Jul 2015 14:06:31 -0400 Subject: [PATCH 5/7] [Doc] Document Manila Deploying This doc shows show to deploy an overcloud with Manila installed and configured. This guides the user through deploying the overcloud, creating a share and accessing that share from a VM. 
Change-Id: Idd35cf5eb06f0f9ae5eaf768ce228e0bb156a75d --- .../advanced_deployment.rst | 1 + .../advanced_deployment/deploy_manila.rst | 129 ++++++++++++++++++ 2 files changed, 130 insertions(+) create mode 100644 doc/source/advanced_deployment/deploy_manila.rst diff --git a/doc/source/advanced_deployment/advanced_deployment.rst b/doc/source/advanced_deployment/advanced_deployment.rst index 27cee2dc..d9889156 100644 --- a/doc/source/advanced_deployment/advanced_deployment.rst +++ b/doc/source/advanced_deployment/advanced_deployment.rst @@ -13,6 +13,7 @@ In this chapter you will find advanced deployment of various |project| areas. Deploying with Heat Templates Network Isolation Managing Tuskar Plans and Roles + Deploying Manila .. diff --git a/doc/source/advanced_deployment/deploy_manila.rst b/doc/source/advanced_deployment/deploy_manila.rst new file mode 100644 index 00000000..572716dd --- /dev/null +++ b/doc/source/advanced_deployment/deploy_manila.rst @@ -0,0 +1,129 @@ +Deploying Manila in the Overcloud +================================= + +This guide assumes that your undercloud is already installed and ready to +deploy an overcloud with Manila enabled. + +Deploying the Overcloud +----------------------- +.. note:: + + The :doc:`template_deploy` doc has a more detailed explanation of the + following steps. + +#. Copy the Manila driver-specific configuration file to your home directory: + + - Generic driver:: + + sudo cp /usr/share/openstack-tripleo-heat-templates/environments/manila-generic-config.yaml ~ + + - NetApp driver:: + + sudo cp /usr/share/openstack-tripleo-heat-templates/environments/manila-netapp-config.yaml ~ + +#. Edit the permissions (user is typically ``stack``):: + + sudo chown $USER ~/manila-*-config.yaml + sudo chmod 755 ~/manila-*-config.yaml + + +#. Edit the parameters in this file to fit your requirements. + - If you're using the generic driver, ensure that the service image + details correspond to the service image you intend to load. + - Ensure that the following line is changed:: + + OS::TripleO::ControllerExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/controller/manila-[generic or netapp].yaml + + +#. Continue following the TripleO instructions for deploying an overcloud. + Before entering the command to deploy the overcloud, add the environment + file that you just configured as an argument:: + + openstack overcloud deploy --templates -e ~/manila-[generic or netapp]-config.yaml + +#. Wait for the completion of the overcloud deployment process. + + +Creating the Share +------------------ + +.. note:: + + The following steps will refer to running commands as an admin user or a + tenant user. Sourcing the ``overcloudrc`` file will authenticate you as + the admin user. You can then create a tenant user and use environment + files to switch between them. + +#. Upload a service image: + + .. note:: + + This step is only required for the generic driver. + + Download a Manila service image to be used for share servers and upload it + to Glance so that Manila can use it [tenant]:: + + glance image-create --name manila-service-image --disk-format qcow2 --container-format bare --file manila_service_image.qcow2 + +#. Create a share network to host the shares: + + - Create the overcloud networks. The + :doc:`../basic_deployment/basic_deployment` doc has a more detailed + explanation about creating the network and subnet. 
Note that you may also + need to perform the following steps to get Manila working:: + + neutron router-create router1 + neutron router-interface-add router1 [subnet id] + + - List the networks and subnets [tenant]:: + + neutron net-list && neutron subnet-list + + - Create a share network (typically using the private default-net net/subnet) + [tenant]:: + + manila share-network-create --neutron-net-id [net] --neutron-subnet-id [subnet] + +#. Create a new share type (yes/no is for specifying if the driver handles + share servers) [admin]:: + + manila type-create [name] [yes/no] + +#. Create the share [tenant]:: + + manila create --share-network [share net ID] --share-type [type name] [nfs/cifs] [size of share] + + +Accessing the Share +------------------- + +#. To access the share, create a new VM on the same Neutron network that was + used to create the share network:: + + nova boot --image [image ID] --flavor [flavor ID] --nic net-id=[network ID] [name] + +#. Allow access to the VM you just created:: + + manila access-allow [share ID] ip [IP address of VM] + +#. Run ``manila list`` and ensure that the share is available. + +#. Log into the VM:: + + ssh [user]@[IP] + +.. note:: + + You may need to configure Neutron security rules to access the + VM. That is not in the scope of this document, so it will not be covered + here. + +5. In the VM, execute:: + + sudo mount [export location] [folder to mount to] + +6. Ensure the share is mounted by looking at the bottom of the output of the + ``mount`` command. + +7. That's it - you're ready to start using Manila! + From ec5156b8df8e83cde418eae46e3d00a06ddc4d09 Mon Sep 17 00:00:00 2001 From: Ryan Hefner Date: Fri, 17 Jul 2015 07:44:15 -0400 Subject: [PATCH 6/7] [Doc] Configuring Cinder with NetApp Storage This document details how to configure an overcloud with Cinder on top of NetApp storage. Change-Id: I9b86f0c38bfbfb5cc837e4e5263f90df07cc9cc7 --- .../advanced_deployment.rst | 1 + .../advanced_deployment/cinder_netapp.rst | 60 +++++++++++++++++++ 2 files changed, 61 insertions(+) create mode 100644 doc/source/advanced_deployment/cinder_netapp.rst diff --git a/doc/source/advanced_deployment/advanced_deployment.rst b/doc/source/advanced_deployment/advanced_deployment.rst index d9889156..77918b96 100644 --- a/doc/source/advanced_deployment/advanced_deployment.rst +++ b/doc/source/advanced_deployment/advanced_deployment.rst @@ -14,6 +14,7 @@ In this chapter you will find advanced deployment of various |project| areas. Network Isolation Managing Tuskar Plans and Roles Deploying Manila + Configuring Cinder with a NetApp Backend .. diff --git a/doc/source/advanced_deployment/cinder_netapp.rst b/doc/source/advanced_deployment/cinder_netapp.rst new file mode 100644 index 00000000..468883d3 --- /dev/null +++ b/doc/source/advanced_deployment/cinder_netapp.rst @@ -0,0 +1,60 @@ +Configuring Cinder with a NetApp Backend +======================================== + +This guide assumes that your undercloud is already installed and ready to +deploy an overcloud. + +Deploying the Overcloud +----------------------- +.. note:: + + The :doc:`template_deploy` doc has a more detailed explanation of the + following steps. + +#. Copy the NetApp configuration file to your home directory:: + + sudo cp /usr/share/openstack-tripleo-heat-templates/environments/cinder-netapp-config.yaml ~ + +#. Edit the permissions (user is typically ``stack``):: + + sudo chown $USER ~/cinder-netapp-config.yaml + sudo chmod 755 ~/cinder-netapp-config.yaml + + +#. 
Edit the parameters in this file to fit your requirements. Ensure that the following line is changed:: + + OS::TripleO::ControllerExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/controller/cinder-netapp.yaml + + +#. Continue following the TripleO instructions for deploying an overcloud. + Before entering the command to deploy the overcloud, add the environment + file that you just configured as an argument:: + + openstack overcloud deploy --templates -e ~/cinder-netapp-config.yaml + +#. Wait for the completion of the overcloud deployment process. + + +Creating a NetApp Volume +------------------------ + +.. note:: + + The following steps will refer to running commands as an admin user or a + tenant user. Sourcing the ``overcloudrc`` file will authenticate you as + the admin user. You can then create a tenant user and use environment + files to switch between them. + +#. Create a new volume type that maps to the new NetApp backend [admin]:: + + cinder type-create [name] + cinder type-key [name] set volume_backend_name=tripleo_netapp + +#. Create the volume [admin]:: + + cinder create --volume-type [type name] [size of volume] + +#. Attach the volume to a server:: + + nova volume-attach + From 8fe6ada6d0530df9e565337c6dcfe4e67cc5bc80 Mon Sep 17 00:00:00 2001 From: James Slagle Date: Fri, 4 Sep 2015 07:46:27 -0400 Subject: [PATCH 7/7] Use correct trunk repo link --- doc/source/basic_deployment/basic_deployment_cli.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/source/basic_deployment/basic_deployment_cli.rst b/doc/source/basic_deployment/basic_deployment_cli.rst index ee5520fc..02cbb250 100644 --- a/doc/source/basic_deployment/basic_deployment_cli.rst +++ b/doc/source/basic_deployment/basic_deployment_cli.rst @@ -132,7 +132,7 @@ non-root user that was used to install the undercloud. :: - export DELOREAN_TRUNK_MGT_REPO="http://trunk-mgt.rdoproject.org/centos-kilo/current" + export DELOREAN_TRUNK_MGT_REPO="http://trunk.rdoproject.org/centos7/current/" ::
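After exporting the corrected value, a quick sanity check that the URL really serves a Delorean repo file can save a failed image build later. This check is illustrative and not part of the patch itself::

    export DELOREAN_TRUNK_MGT_REPO="http://trunk.rdoproject.org/centos7/current/"

    # A failing exit status here means the trunk repo URL is wrong or unreachable
    curl -sfI "${DELOREAN_TRUNK_MGT_REPO}delorean.repo" && echo "trunk repo reachable"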