Merge pull request #2 from slagle/tripleo
Adds an example for retrieving node swift introspection data
commit 3f04790f92
@@ -13,6 +13,8 @@ In this chapter you will find advanced deployment of various |project| areas.

   Deploying with Heat Templates <template_deploy>
   Network Isolation <network_isolation>
   Managing Tuskar Plans and Roles <managing_plans_and_roles>
+  Deploying Manila <deploy_manila>
+  Configuring Cinder with a NetApp Backend <cinder_netapp>

.. <MOVE THESE UNDER TOCTREE WHEN READY, KEEP LOGICAL WORKFLOW ORDER>
doc/source/advanced_deployment/cinder_netapp.rst (new file, 60 lines)
@@ -0,0 +1,60 @@

Configuring Cinder with a NetApp Backend
========================================

This guide assumes that your undercloud is already installed and ready to
deploy an overcloud.

Deploying the Overcloud
-----------------------
.. note::

   The :doc:`template_deploy` doc has a more detailed explanation of the
   following steps.

#. Copy the NetApp configuration file to your home directory::

     sudo cp /usr/share/openstack-tripleo-heat-templates/environments/cinder-netapp-config.yaml ~

#. Edit the permissions (user is typically ``stack``)::

     sudo chown $USER ~/cinder-netapp-config.yaml
     sudo chmod 755 ~/cinder-netapp-config.yaml

#. Edit the parameters in this file to fit your requirements. Ensure that the
   following line is changed (an illustrative example of the resulting file
   appears after this list)::

     OS::TripleO::ControllerExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/controller/cinder-netapp.yaml

#. Continue following the TripleO instructions for deploying an overcloud.
   Before entering the command to deploy the overcloud, add the environment
   file that you just configured as an argument::

     openstack overcloud deploy --templates -e ~/cinder-netapp-config.yaml

#. Wait for the completion of the overcloud deployment process.
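
As a point of reference, the edited environment file might end up looking
roughly like the sketch below. This is illustrative only: the exact parameter
names and defaults are defined in the ``cinder-netapp-config.yaml`` file
shipped with the templates, and the values shown here are placeholders::

    resource_registry:
      OS::TripleO::ControllerExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/controller/cinder-netapp.yaml

    parameter_defaults:
      # Placeholder values - substitute the details of your NetApp system
      # and keep only the parameters actually present in the shipped file.
      CinderNetappBackendName: tripleo_netapp
      CinderNetappLogin: admin
      CinderNetappPassword: secret
      CinderNetappServerHostname: netapp.example.com
      CinderNetappStorageFamily: ontap_cluster
      CinderNetappStorageProtocol: nfs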


Creating a NetApp Volume
------------------------

.. note::

   The following steps will refer to running commands as an admin user or a
   tenant user. Sourcing the ``overcloudrc`` file will authenticate you as
   the admin user. You can then create a tenant user and use environment
   files to switch between them.

#. Create a new volume type that maps to the new NetApp backend [admin]::

     cinder type-create [name]
     cinder type-key [name] set volume_backend_name=tripleo_netapp

#. Create the volume [admin]::

     cinder create --volume-type [type name] [size of volume]

#. Attach the volume to a server::

     nova volume-attach <server> <volume> <device>
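
Put together, a run through these steps with illustrative names (the type
name, volume name, 10 GB size and device path below are placeholders, not
required values) might look like::

    cinder type-create netapp
    cinder type-key netapp set volume_backend_name=tripleo_netapp
    cinder create --volume-type netapp --display-name test-vol 10
    nova volume-attach [server ID] [volume ID] /dev/vdb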

doc/source/advanced_deployment/deploy_manila.rst (new file, 129 lines)
@@ -0,0 +1,129 @@

Deploying Manila in the Overcloud
=================================

This guide assumes that your undercloud is already installed and ready to
deploy an overcloud with Manila enabled.

Deploying the Overcloud
-----------------------
.. note::

   The :doc:`template_deploy` doc has a more detailed explanation of the
   following steps.

#. Copy the Manila driver-specific configuration file to your home directory:

   - Generic driver::

       sudo cp /usr/share/openstack-tripleo-heat-templates/environments/manila-generic-config.yaml ~

   - NetApp driver::

       sudo cp /usr/share/openstack-tripleo-heat-templates/environments/manila-netapp-config.yaml ~

#. Edit the permissions (user is typically ``stack``)::

     sudo chown $USER ~/manila-*-config.yaml
     sudo chmod 755 ~/manila-*-config.yaml

#. Edit the parameters in this file to fit your requirements.

   - If you're using the generic driver, ensure that the service image
     details correspond to the service image you intend to load.
   - Ensure that the following line is changed (an illustrative example of
     the resulting file appears after this list)::

       OS::TripleO::ControllerExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/controller/manila-[generic or netapp].yaml

#. Continue following the TripleO instructions for deploying an overcloud.
   Before entering the command to deploy the overcloud, add the environment
   file that you just configured as an argument::

     openstack overcloud deploy --templates -e ~/manila-[generic or netapp]-config.yaml

#. Wait for the completion of the overcloud deployment process.
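
For orientation, a generic-driver environment file edited as described above
might look roughly like the sketch below. The parameter names and values here
are hypothetical placeholders; the authoritative list lives in the
``manila-generic-config.yaml`` file shipped with the templates::

    resource_registry:
      OS::TripleO::ControllerExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/controller/manila-generic.yaml

    parameter_defaults:
      # Hypothetical parameter names for illustration only - use the names
      # and defaults defined in the shipped environment file.
      ManilaServiceInstanceUser: manila
      ManilaServiceInstancePassword: secret
      ManilaServiceImageName: manila-service-image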


Creating the Share
------------------

.. note::

   The following steps will refer to running commands as an admin user or a
   tenant user. Sourcing the ``overcloudrc`` file will authenticate you as
   the admin user. You can then create a tenant user and use environment
   files to switch between them.

#. Upload a service image:

   .. note::

      This step is only required for the generic driver.

   Download a Manila service image to be used for share servers and upload it
   to Glance so that Manila can use it [tenant]::

     glance image-create --name manila-service-image --disk-format qcow2 --container-format bare --file manila_service_image.qcow2

#. Create a share network to host the shares:

   - Create the overcloud networks. The
     :doc:`../basic_deployment/basic_deployment` doc has a more detailed
     explanation about creating the network and subnet. Note that you may also
     need to perform the following steps to get Manila working::

       neutron router-create router1
       neutron router-interface-add router1 [subnet id]

   - List the networks and subnets [tenant]::

       neutron net-list && neutron subnet-list

   - Create a share network (typically using the private default-net net/subnet)
     [tenant]::

       manila share-network-create --neutron-net-id [net] --neutron-subnet-id [subnet]

#. Create a new share type (the yes/no argument specifies whether the driver
   handles share servers) [admin]::

     manila type-create [name] [yes/no]

#. Create the share [tenant]::

     manila create --share-network [share net ID] --share-type [type name] [nfs/cifs] [size of share]
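
As a concrete reference, a complete pass through the share-creation steps with
illustrative names and placeholder IDs might look like the following;
``manila show`` at the end reports the export location that is mounted in the
next section::

    manila share-network-create --neutron-net-id [net ID] --neutron-subnet-id [subnet ID]
    manila type-create default_share_type yes
    manila create --share-network [share net ID] --share-type default_share_type nfs 1
    manila list
    manila show [share ID]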


Accessing the Share
-------------------

#. To access the share, create a new VM on the same Neutron network that was
   used to create the share network::

     nova boot --image [image ID] --flavor [flavor ID] --nic net-id=[network ID] [name]

#. Allow access to the VM you just created::

     manila access-allow [share ID] ip [IP address of VM]

#. Run ``manila list`` and ensure that the share is available.

#. Log into the VM::

     ssh [user]@[IP]

.. note::

   You may need to configure Neutron security rules to access the
   VM. That is not in the scope of this document, so it will not be covered
   here.

5. In the VM, execute::

     sudo mount [export location] [folder to mount to]

6. Ensure the share is mounted by checking the last line of the output of the
   ``mount`` command.

7. That's it - you're ready to start using Manila!
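
For reference, with an illustrative NFS export location (the address and path
below are placeholders; use the export location reported by ``manila show``),
steps 5 and 6 might look like::

    sudo mkdir -p /mnt/myshare
    sudo mount 10.254.0.6:/shares/share-6a3c1bee /mnt/myshare
    mount | tail -n 1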

@@ -46,6 +46,23 @@ is ``ironic-inspector`` and can be modified in
**/etc/ironic-discoverd/discoverd.conf**. Swift object name is stored under
``hardware_swift_object`` key in Ironic node extra field.

As an example, to download the swift data for all nodes to a local directory
and use that to collect a list of node MAC addresses::

    # You will need the discoverd user password
    # from /etc/ironic-discoverd/discoverd.conf:
    export IRONIC_DISCOVERD_PASSWORD=

    # Download the extra introspection data from swift:
    for node in $(ironic node-list | grep -v UUID| awk '{print $2}');
    do swift -U service:ironic -K $IRONIC_DISCOVERD_PASSWORD download ironic-discoverd extra_hardware-$node;
    done

    # Use jq to access the local data - for example gather macs:
    for f in extra_hardware-*;
    do cat $f | jq -r 'map(select(.[0]=="network" and .[2]=="serial"))';
    done
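
If you only want the address strings themselves, the same data can be reduced
a little further. This assumes the four-element ``[category, interface, key,
value]`` layout of the entries selected above, so treat it as a sketch rather
than a guaranteed format::

    # Print just the value field (the MAC address) of each matching entry:
    for f in extra_hardware-*;
    do jq -r '.[] | select(.[0]=="network" and .[2]=="serial") | .[3]' $f;
    done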

State file
----------

@@ -132,7 +132,7 @@ non-root user that was used to install the undercloud.

::

-    export DELOREAN_TRUNK_MGT_REPO="http://trunk-mgt.rdoproject.org/centos-kilo/current"
+    export DELOREAN_TRUNK_MGT_REPO="http://trunk.rdoproject.org/centos7/current/"

::

@@ -109,22 +109,11 @@ Preparing the Virtual Environment (Automated)

#. Enable needed repositories:

-   ::
+   .. include:: ../repositories.txt
-
-      # Enable RDO Kilo
-      sudo yum install -y https://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm
-
-      # Enable RDO Trunk
-      sudo curl http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo -O /etc/yum.repos.d/delorean.repo
-
-   The above Delorean repository is updated after a successful CI run. The following repo can be used instead if the newest packages are needed before a CI run has passed.
-
-   ::
-
-      sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7/current/delorean.repo
-
-#. Install instack-undercloud::
+.. We need to manually continue our list numbering here since the above
+   "include" directive breaks the numbering.
+
+5. Install instack-undercloud::

    sudo yum install -y instack-undercloud

@@ -52,22 +52,12 @@ Installing the Undercloud

    sudo yum install -y yum-utils
    sudo yum-config-manager --enable rhelosp-rhel-7-server-opt

-::
-
-    # Enable RDO Kilo
-    sudo yum install -y https://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm
-
-    # Enable RDO Trunk
-    sudo curl http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo -O /etc/yum.repos.d/delorean.repo
-
-The above Delorean repository is updated after a successful CI run. The following repo can be used instead if the newest packages are needed before a CI run has passed.
-
-::
-
-    sudo curl -o /etc/yum.repos.d/rdo-management-trunk.repo http://trunk-mgt.rdoproject.org/centos-kilo/current/delorean-rdo-management.repo
+.. include:: ../repositories.txt


-#. Install the TripleO CLI, which will pull in all other necessary packages as dependencies::
+.. We need to manually continue our list numbering here since the above
+   "include" directive breaks the numbering.
+
+3. Install the TripleO CLI, which will pull in all other necessary packages as dependencies::

    sudo yum install -y python-rdomanager-oscplugin

@@ -31,9 +31,9 @@ components.

.. image:: ../_images/overview.png

-With TripleO, you start by creating an “undercloud” (a deployment cloud)
+With TripleO, you start by creating an "undercloud" (a deployment cloud)
that will contain the necessary OpenStack components to deploy and manage an
-“overcloud” (a workload cloud). The overcloud is the deployed solution
+"overcloud" (a workload cloud). The overcloud is the deployed solution
and can represent a cloud for any purpose (e.g. production, staging, test, etc).

.. image:: ../_images/logical_view.png

@@ -75,25 +75,25 @@ that granular control and validation of the deployment is possible

Benefits
--------

-Using |project|’s combination of OpenStack components, and their APIs, as the
+Using |project|'s combination of OpenStack components, and their APIs, as the
infrastructure to deploy and operate OpenStack itself delivers several benefits:

-* |project|’s APIs are the OpenStack APIs. They’re well maintained, well
+* |project|'s APIs are the OpenStack APIs. They're well maintained, well
  documented, and come with client libraries and command line tools. Users who
-  invest time in learning about |project|’s APIs are also learning about
+  invest time in learning about |project|'s APIs are also learning about
  OpenStack itself, and users who are already familiar with OpenStack will find
  a great deal in |project| that they already understand.
* Using the OpenStack components allows more rapid feature development of
  |project| than might otherwise be the case; |project| automatically
  inherits all the new features which are added to Glance, Heat etc., even when
-  the developer of the new feature didn’t explicitly have |project| in mind.
+  the developer of the new feature didn't explicitly have |project| in mind.
* The same applies to bug fixes and security updates. When OpenStack developers
  fix bugs in the common components, those fixes are inherited by |project|.
-* Users’ can invest time in integrating their own scripts and utilities with
-  |project|’s APIs with some confidence. Those APIs are cooperatively
-  maintained and developed by the OpenStack community. They’re not at risk of
+* Users can invest time in integrating their own scripts and utilities with
+  |project|'s APIs with some confidence. Those APIs are cooperatively
+  maintained and developed by the OpenStack community. They're not at risk of
  being suddenly changed or retired by a single controlling vendor.
-* For developers, tight integration with the openstack APIs provides a solid
+* For developers, tight integration with the OpenStack APIs provides a solid
  architecture, which has gone through extensive community review.

It should be noted that not everything in |project| is a reused OpenStack

@@ -108,7 +108,7 @@ Deployment Workflow Overview

#. Environment Preparation

-   * Prepare your environemnt (baremetal or virtual)
+   * Prepare your environment (baremetal or virtual)
   * Install undercloud


@@ -163,7 +163,7 @@ Environment Preparation

In the first place, you need to check that your environment is ready.
|project| can deploy OpenStack into baremetal as well as virtual environments.
You need to make sure that your environment satisfies minimum requirements for
-given environemnt type and that networking is correctly set up.
+given environment type and that networking is correctly set up.

Next step is to install the undercloud. We install undercloud using `Instack
<https://github.com/rdo-management/instack-undercloud>`_'s script and it calls

@@ -199,7 +199,7 @@ Nodes
"""""

Deploying the overcloud requires suitable hardware. The first task is to
-register the available hardware with Ironic, OpenStack’s equivalent of a
+register the available hardware with Ironic, OpenStack's equivalent of a
hypervisor for managing baremetal servers. User can define the hardware
attributes (such as number of CPUs, RAM, disk) manually or he can leave the
fields out and run introspection of the nodes afterwards.
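
In the TripleO tooling this registration is typically driven by a JSON
inventory file (``instackenv.json``). The entry below is only an illustrative
sketch with placeholder credentials and addresses; the exact set of accepted
fields is defined by the instack-undercloud tooling, not by this overview::

    {
      "nodes": [
        {
          "pm_type": "pxe_ipmitool",
          "pm_addr": "192.0.2.10",
          "pm_user": "admin",
          "pm_password": "password",
          "mac": ["52:54:00:aa:bb:cc"],
          "cpu": "4",
          "memory": "6144",
          "disk": "40",
          "arch": "x86_64"
        }
      ]
    }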

@@ -217,7 +217,7 @@ The sequence of events is pictured below:

* The discovery ramdisk probes the hardware on the node and gathers facts,
  including the number of CPU cores, the local disk size and the amount of RAM.
* The ramdisk posts the facts to the discoverd API.
-* All facts are passed and stored in the Ironic databse.
+* All facts are passed and stored in the Ironic database.
* There can be performed advanced role matching via the ''ahc-match'' tool,
  which simply adds an additional role categorization to Ironic based on
  discovered node facts and specified conditions.

@@ -229,9 +229,9 @@ Flavors

When users are creating virtual machines (VMs) in an OpenStack cloud, the flavor
that they choose specifies the capacity of the VM which should be created. The
flavor defines the CPU count, the amount of RAM, the amount of disk space etc.
-As long as the cloud has enough capacity to grant the user’s wish, and the user
-hasn’t reached their quota limit, the flavor acts as a set of instructions on
-exactly what kind of VM to create on the user’s behalf.
+As long as the cloud has enough capacity to grant the user's wish, and the user
+hasn't reached their quota limit, the flavor acts as a set of instructions on
+exactly what kind of VM to create on the user's behalf.

In the undercloud, where the machines are usually physical rather than virtual
(or, at least, pre-existing, rather than created on demand), flavors have a

@@ -246,7 +246,7 @@ two different modes.

The simpler PoC (Proof of Concept) mode is intended to enable new users to
experiment, without worrying about matching hardware profiles. In this mode,
-there’s one single, global flavor, and any hardware can match it. That
+there's one single, global flavor, and any hardware can match it. That
effectively removes flavor matching. Users can use whatever hardware they wish.

For the second mode, named Scale because it is suited to larger scale overcloud

@@ -278,11 +278,11 @@ Tuskar API. A role brings together following things:

  task


-In the case of the “Compute” role:
+In the case of the "Compute" role:

* the image must contain all the required software to boot an OS and then run
  the KVM hypervisor and the Nova compute service
-* the flavor (at least for a deployment which isn’t a simple proof of concept),
+* the flavor (at least for a deployment which isn't a simple proof of concept),
  should specify that the machine has enough CPU capacity and RAM to host
  several VMs concurrently
* the Heat templates will take care of ensuring that the Nova service is

@@ -295,11 +295,12 @@ individual services cannot easily be scaled independently of the Controller role
future release.

Customizable things during deployment planning are:

* Number of nodes for each role
* Service parameters configuration
* Network configuration (NIC configuration options, isolated vs. single overlay)
* Ceph rbd backend options and defaults
-* Ways to pass in extra configuration, e.g site-specific customzations
+* Ways to pass in extra configuration, e.g. site-specific customizations


Deployment

@@ -312,12 +313,12 @@ To deploy the overcloud Tuskar needs gather all plan information it keeps and
build a Heat templates which describe desired overcloud.

This template is served to to Heat which will orchestrate the whole deployment
-and it will create a stack. Stack is Heat’s own term for the applications that
+and it will create a stack. Stack is Heat's own term for the applications that
it creates. The overcloud, in Heat terms, is a particularly complex instance of
a stack.

In order to the stack to be deployed, Heat makes successive calls to Nova,
-OpenStack’s compute service controller. Nova depends upon Ironic, which, as
+OpenStack's compute service controller. Nova depends upon Ironic, which, as
described above has acquired an inventory of discovered hardware by this stage
in the process.

@@ -329,10 +330,10 @@ nodes, ensuring that the selected nodes meets the hardware requirements.

Once the target node has been selected, Ironic does the actual provisioning of
the node, Ironic retrieves the OS image associated with the role from Glance,
causes the node to boot a deployment ramdisk and then, in the typical case,
-exports the node’s local disk over iSCSI so that the disk can be partitioned and
+exports the node's local disk over iSCSI so that the disk can be partitioned and
the have the OS image written onto it by the Ironic Conductor.

-See Ironic’s `Understanding Baremetal Deployment <http://docs.openstack.org/
+See Ironic's `Understanding Baremetal Deployment <http://docs.openstack.org/
developer/ironic/deploy/user-guide.html#understanding-bare-metal-deployment>`_
for further details.

@@ -351,7 +352,7 @@ After the overcloud has been deployed, the initialization of OpenStack services
scripts in the `tripleo-incubator <https://github.com/openstack/
tripleo-incubator>`_ source repository and it uses bits from `os-cloud-config
<https://github.com/openstack/os-cloud-config>`_ which contains common code,
-the seed initialisation logic, and the post heat completion initial
+the seed initialization logic, and the post heat completion initial
configuration of a cloud. There are three primary steps to completing the
initialization:

@@ -363,10 +364,10 @@ The first step initializes Keystone for use with normal authentication by
creating the admin and service tenants, the admin and Member roles, the admin
user, configure certificates and finally registers the initial identity
endpoint. The next step registers image, orchestration, network and compute
-services running on the default ports on the controlplane node. Finally, Neutron
-is given a starting IP address, ending IP address, and a CIDR notation to
-represent the subnet for the block of floating IP addresses that will be used
-within the overcloud.
+services running on the default ports on the control plane node. Finally,
+Neutron is given a starting IP address, ending IP address, and a CIDR notation
+to represent the subnet for the block of floating IP addresses that will be
+used within the overcloud.
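
Expressed with the Neutron CLI of the time, that last step corresponds to
something like the following (the init scripts perform the equivalent calls
through the API; the network names and addresses here are placeholders)::

    neutron net-create ext-net --router:external=True
    neutron subnet-create ext-net 10.0.0.0/24 --name ext-subnet \
      --disable-dhcp --gateway 10.0.0.1 \
      --allocation-pool start=10.0.0.100,end=10.0.0.200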

@@ -392,7 +393,7 @@ Monitoring the Overcloud
^^^^^^^^^^^^^^^^^^^^^^^^

When the overcloud is deployed, Ceilometer can be configured to track a set of
-OS metrics for each node (system load, CPU utiization, swap usage etc.) These
+OS metrics for each node (system load, CPU utilization, swap usage etc.) These
metrics are graphed in the GUI, both for individual nodes, and for groups
of nodes, such as the collection of nodes which are all delivering a particular
role.

@@ -416,7 +417,7 @@ stages:

* Making sure you have enough nodes to deploy on (or register new nodes as
  described in the "Undercloud Data Preparation" section above).
-* Updating the plan managed by Tuskar, as described in the “Deployment Planning"
+* Updating the plan managed by Tuskar, as described in the "Deployment Planning"
  section above.
* Calling Heat to update the stack which will apply the set of changes to the
  overcloud.

doc/source/repositories.txt (new file, 22 lines)
@@ -0,0 +1,22 @@

::

    # Enable RDO Kilo
    sudo yum install -y https://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm

    # Enable RDO Trunk
    sudo curl http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo -o /etc/yum.repos.d/delorean.repo

The above Delorean repository is updated after a successful CI run. The following repo can be used instead if the newest packages are needed before a CI run has passed.

::

    sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7/current/delorean.repo

Install the yum-plugin-priorities package so that the Delorean repository
takes precedence over the main RDO repositories.

::

    sudo yum -y install yum-plugin-priorities
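
To check which repository wins when both provide a package, you can look for
an explicit ``priority`` option in the repo files (with yum-plugin-priorities,
lower numbers take precedence). A quick check, assuming the RDO release
package drops its configuration in ``rdo-release.repo``::

    grep -H '^priority' /etc/yum.repos.d/delorean.repo /etc/yum.repos.d/rdo-release.repo || echo "no explicit priorities set"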