Add configuration options for VNF workloads
Add config options for:

  reserved-host-memory: Memory reserved for host processes, and not used
    when calculating capacity for instance allocation.
  pci-passthrough-whitelist: List of PCI devices/vendors to use for direct
    passthrough to instances.
  vcpu-pin-set: CPU cores to reserve for host rather than instance usage.

These features are used for hugepage backed VMs, SR-IOV and CPU pinning in
NFV OpenStack deployments running VNF workloads.

Change-Id: I93b2a0e0c568e3129002d0f505d74cf3513a4930
parent 0a02e7e390
commit 577b3e8ef8

 README.md | 79
@@ -24,9 +24,88 @@ not be added.

Networking
==========

This charm supports nova-network (legacy) and Neutron networking.

Storage
=======

This charm supports a number of different storage backends depending on
your hypervisor type and storage relations.

NFV support
===========

This charm (in conjunction with the nova-cloud-controller and neutron-api charms)
supports use of nova-compute nodes configured for use in Telco NFV deployments;
specifically the following configuration options (yaml excerpt):

```yaml
nova-compute:
  hugepages: 60%
  vcpu-pin-set: "^0,^2"
  reserved-host-memory: 1024
  pci-passthrough-whitelist: {"vendor_id":"1137","product_id":"0071","address":"*:0a:00.*","physical_network":"physnet1"}
```

In this example, compute nodes will be configured with 60% of available RAM for
hugepage use (decreasing memory fragmentation in virtual machines, improving
performance), and Nova will be configured to reserve CPU cores 0 and 2 and
1024MB of RAM for host usage, and to use the supplied PCI device whitelist as
PCI devices that are consumable by virtual machines, including any mapping to
underlying provider network names (used for SR-IOV VF/PF port scheduling with
Nova and Neutron's SR-IOV support).
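
For reference, the nova.conf templates touched by this change render these
options more or less verbatim; a sketch of the resulting nova.conf excerpt for
the example above (illustrative only, section placement omitted) would be:

```
vcpu_pin_set = ^0,^2
reserved_host_memory = 1024
pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071","address":"*:0a:00.*","physical_network":"physnet1"}
```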

The vcpu-pin-set configuration option is a comma-separated list of physical
CPU numbers that virtual CPUs can be allocated to by default. Each element
should be either a single CPU number, a range of CPU numbers, or a caret
followed by a CPU number to be excluded from a previous range. For example:

```yaml
vcpu-pin-set: "4-12,^8,15"
```
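
In this example, virtual CPUs may be pinned to physical CPUs 4-7, 9-12 and 15
(CPU 8 is excluded from the 4-12 range). As the template changes below pass the
value through unmodified, the rendered nova.conf line would simply be
(illustrative):

```
vcpu_pin_set = 4-12,^8,15
```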

The pci-passthrough-whitelist configuration must be specified as follows:

A JSON dictionary which describes a whitelisted PCI device. It should take
the following format:

```
["vendor_id": "<id>",] ["product_id": "<id>",]
["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" |
 "devname": "PCI Device Name",]
{"tag": "<tag_value>",}
```

where '[' indicates zero or one occurrences, '{' indicates zero or multiple
occurrences, and '|' indicates mutually exclusive options. Note that any missing
fields are automatically wildcarded. Valid examples are:

```
pci-passthrough-whitelist: {"devname":"eth0", "physical_network":"physnet"}

pci-passthrough-whitelist: {"address":"*:0a:00.*"}

pci-passthrough-whitelist: {"address":":0a:00.", "physical_network":"physnet1"}

pci-passthrough-whitelist: {"vendor_id":"1137", "product_id":"0071"}

pci-passthrough-whitelist: {"vendor_id":"1137", "product_id":"0071", "address": "0000:0a:00.1", "physical_network":"physnet1"}
```

The following is invalid, as it specifies mutually exclusive options:

```
pci-passthrough-whitelist: {"devname":"eth0", "physical_network":"physnet", "address":"*:0a:00.*"}
```

A JSON list of JSON dictionaries corresponding to the above format. For
example:

```
pci-passthrough-whitelist: [{"product_id":"0001", "vendor_id":"8086"}, {"product_id":"0002", "vendor_id":"8086"}]
```

The [OpenStack advanced networking documentation](http://docs.openstack.org/mitaka/networking-guide/adv-config-sriov.html)
provides further details on whitelist configuration and how to create instances
with Neutron ports wired to SR-IOV devices.

 config.yaml | 18
@@ -147,6 +147,24 @@ options:
    type: string
    default: 'yes'
    description: Whether to run nova-api and nova-network on the compute nodes.
  pci-passthrough-whitelist:
    type: string
    default:
    description: |
      Sets the pci_passthrough_whitelist option in nova.conf, which is used to
      allow PCI passthrough of specific devices to VMs, for example for SR-IOV.
  reserved-host-memory:
    type: int
    default: 512
    description: |
      Amount of memory in MB to reserve for the host. Defaults to 512MB.
  vcpu-pin-set:
    type: string
    default:
    description: |
      Sets the vcpu_pin_set option in nova.conf, which defines which pCPUs
      instance vCPUs can or cannot use. For example, '^0,^2' to reserve two
      CPUs for the host.
  # Required if using FlatManager (nova-network)
  bridge-interface:
    type: string

@@ -140,6 +140,15 @@ class NovaComputeLibvirtContext(context.OSContextGenerator):
        else:
            ctxt['kvm_hugepages'] = 0

        if config('pci-passthrough-whitelist'):
            ctxt['pci_passthrough_whitelist'] = \
                config('pci-passthrough-whitelist')

        if config('vcpu-pin-set'):
            ctxt['vcpu_pin_set'] = config('vcpu-pin-set')

        ctxt['reserved_host_memory'] = config('reserved-host-memory')

        db = kv()
        if db.get('host_uuid'):
            ctxt['host_uuid'] = db.get('host_uuid')

@@ -113,6 +113,15 @@ instances_path = {{ instances_path }}
{% endfor -%}
{% endif -%}

{% if vcpu_pin_set -%}
vcpu_pin_set = {{ vcpu_pin_set }}
{% endif -%}
reserved_host_memory = {{ reserved_host_memory }}

{% if pci_passthrough_whitelist -%}
pci_passthrough_whitelist = {{ pci_passthrough_whitelist }}
{% endif -%}

{% include "section-zeromq" %}

{% if network_manager == 'neutron' and network_manager_config -%}

@@ -119,6 +119,15 @@ notify_on_state_change = {{ notify_on_state_change }}
{% endfor -%}
{% endif -%}

{% if vcpu_pin_set -%}
vcpu_pin_set = {{ vcpu_pin_set }}
{% endif -%}
reserved_host_memory = {{ reserved_host_memory }}

{% if pci_passthrough_whitelist -%}
pci_passthrough_whitelist = {{ pci_passthrough_whitelist }}
{% endif -%}

{% include "section-zeromq" %}

{% if network_manager == 'neutron' and network_manager_config -%}

@@ -119,6 +119,15 @@ notify_on_state_change = {{ notify_on_state_change }}
{% endfor -%}
{% endif -%}

{% if vcpu_pin_set -%}
vcpu_pin_set = {{ vcpu_pin_set }}
{% endif -%}
reserved_host_memory = {{ reserved_host_memory }}

{% if pci_passthrough_whitelist -%}
pci_passthrough_whitelist = {{ pci_passthrough_whitelist }}
{% endif -%}

{% include "section-zeromq" %}

{% if network_manager == 'neutron' and network_manager_config -%}

@@ -186,7 +186,8 @@ class NovaComputeContextTests(CharmTestCase):
             'arch': platform.machine(),
             'kvm_hugepages': 0,
             'listen_tls': 0,
             'host_uuid': self.host_uuid}, libvirt())
             'host_uuid': self.host_uuid,
             'reserved_host_memory': 512}, libvirt())

    def test_libvirt_bin_context_migration_tcp_listen(self):
        self.kv.return_value = FakeUnitdata(**{'host_uuid': self.host_uuid})

@@ -198,7 +199,8 @@ class NovaComputeContextTests(CharmTestCase):
             'arch': platform.machine(),
             'kvm_hugepages': 0,
             'listen_tls': 0,
             'host_uuid': self.host_uuid}, libvirt())
             'host_uuid': self.host_uuid,
             'reserved_host_memory': 512}, libvirt())

    def test_libvirt_disk_cachemodes(self):
        self.kv.return_value = FakeUnitdata(**{'host_uuid': self.host_uuid})

@@ -211,7 +213,8 @@ class NovaComputeContextTests(CharmTestCase):
             'arch': platform.machine(),
             'kvm_hugepages': 0,
             'listen_tls': 0,
             'host_uuid': self.host_uuid}, libvirt())
             'host_uuid': self.host_uuid,
             'reserved_host_memory': 512}, libvirt())

    def test_libvirt_hugepages(self):
        self.kv.return_value = FakeUnitdata(**{'host_uuid': self.host_uuid})

@@ -224,7 +227,8 @@ class NovaComputeContextTests(CharmTestCase):
             'hugepages': True,
             'kvm_hugepages': 1,
             'listen_tls': 0,
             'host_uuid': self.host_uuid}, libvirt())
             'host_uuid': self.host_uuid,
             'reserved_host_memory': 512}, libvirt())

    @patch.object(context.uuid, 'uuid4')
    def test_libvirt_new_uuid(self, mock_uuid):

@@ -255,6 +259,25 @@ class NovaComputeContextTests(CharmTestCase):
        self.assertEqual(libvirt()['cpu_mode'],
                         'host-passthrough')

    def test_libvirt_vnf_configs(self):
        self.kv.return_value = FakeUnitdata(**{'host_uuid': self.host_uuid})
        self.test_config.set('hugepages', '22')
        self.test_config.set('reserved-host-memory', 1024)
        self.test_config.set('vcpu-pin-set', '^0^2')
        self.test_config.set('pci-passthrough-whitelist', 'mypcidevices')
        libvirt = context.NovaComputeLibvirtContext()

        self.assertEqual(
            {'libvirtd_opts': '-d',
             'arch': platform.machine(),
             'hugepages': True,
             'kvm_hugepages': 1,
             'listen_tls': 0,
             'host_uuid': self.host_uuid,
             'reserved_host_memory': 1024,
             'vcpu_pin_set': '^0^2',
             'pci_passthrough_whitelist': 'mypcidevices'}, libvirt())

    @patch.object(context.uuid, 'uuid4')
    def test_libvirt_cpu_mode_default(self, mock_uuid):
        libvirt = context.NovaComputeLibvirtContext()