From d409296bdeb97861e8eada77f9ad754fc957c12e Mon Sep 17 00:00:00 2001 From: Stephen Finucane Date: Tue, 9 May 2023 15:57:21 +0100 Subject: [PATCH] docs: Deindent code blocks We had a number of code blocks that were being incorrectly rendered inside block quotes, which messed with formatting somewhat. Correct them. This was done using the following script: sphinx-build -W -b xml doc/source doc/build/xml files=$(find doc/build/xml -name '*.xml' -print) for file in $files; do if xmllint -xpath "//block_quote/literal_block" "$file" &>/dev/null; then echo "$file" fi done Note that this also highlighted a file using DOS line endings. This is corrected. Change-Id: If63f31bf13c76a185e2c6eebc9b85f9a1f3bbde8 Signed-off-by: Stephen Finucane --- ...-floating-ip-over-l2-segmented-network.rst | 398 ++++++----- doc/source/admin/config-logging.rst | 656 +++++++++--------- doc/source/admin/config-ovs-offload.rst | 3 +- doc/source/admin/config-qos.rst | 3 +- doc/source/admin/config-routed-networks.rst | 2 +- .../admin/ovn/routed_provider_networks.rst | 98 +-- doc/source/admin/ovn/troubleshooting.rst | 44 +- doc/source/configuration/metering-agent.rst | 52 +- .../internals/ovn/ovn_network_logging.rst | 108 +-- .../internals/ovn/port_forwarding.rst | 100 +-- 10 files changed, 728 insertions(+), 736 deletions(-) diff --git a/doc/source/admin/config-bgp-floating-ip-over-l2-segmented-network.rst b/doc/source/admin/config-bgp-floating-ip-over-l2-segmented-network.rst index 12fe3d534a1..dcedcf827a8 100644 --- a/doc/source/admin/config-bgp-floating-ip-over-l2-segmented-network.rst +++ b/doc/source/admin/config-bgp-floating-ip-over-l2-segmented-network.rst @@ -65,10 +65,10 @@ python3-neutron-dynamic-routing packages). On top of that, "segments" and "bgp" must be added to the list of plugins in service_plugins. For example in neutron.conf: - .. code-block:: ini +.. 
code-block:: ini - [DEFAULT] - service_plugins=router,metering,qos,trunk,segments,bgp + [DEFAULT] + service_plugins=router,metering,qos,trunk,segments,bgp The BGP agent @@ -89,36 +89,36 @@ associated to a dynamic-routing-agent (in our example, the dynamic-routing agents run on compute 1 and 4). Finally, the peer is added to the BGP speaker, so the speaker initiates a BGP session to the network equipment. - .. code-block:: console +.. code-block:: console - $ # Create a BGP peer to represent the switch 1, - $ # which runs FRR on 10.1.0.253 with AS 64601 - $ openstack bgp peer create \ - --peer-ip 10.1.0.253 \ - --remote-as 64601 \ - rack1-switch-1 + $ # Create a BGP peer to represent the switch 1, + $ # which runs FRR on 10.1.0.253 with AS 64601 + $ openstack bgp peer create \ + --peer-ip 10.1.0.253 \ + --remote-as 64601 \ + rack1-switch-1 - $ # Create a BGP speaker on compute-1 - $ BGP_SPEAKER_ID_COMPUTE_1=$(openstack bgp speaker create \ - --local-as 64999 --ip-version 4 mycloud-compute-1.example.com \ - --format value -c id) + $ # Create a BGP speaker on compute-1 + $ BGP_SPEAKER_ID_COMPUTE_1=$(openstack bgp speaker create \ + --local-as 64999 --ip-version 4 mycloud-compute-1.example.com \ + --format value -c id) - $ # Get the agent ID of the dragent running on compute 1 - $ BGP_AGENT_ID_COMPUTE_1=$(openstack network agent list \ - --host mycloud-compute-1.example.com --agent-type bgp \ - --format value -c ID) + $ # Get the agent ID of the dragent running on compute 1 + $ BGP_AGENT_ID_COMPUTE_1=$(openstack network agent list \ + --host mycloud-compute-1.example.com --agent-type bgp \ + --format value -c ID) - $ # Add the BGP speaker to the dragent of compute 1 - $ openstack bgp dragent add speaker \ - ${BGP_AGENT_ID_COMPUTE_1} ${BGP_SPEAKER_ID_COMPUTE_1} + $ # Add the BGP speaker to the dragent of compute 1 + $ openstack bgp dragent add speaker \ + ${BGP_AGENT_ID_COMPUTE_1} ${BGP_SPEAKER_ID_COMPUTE_1} - $ # Add the BGP peer to the speaker of compute 1 - $ 
openstack bgp speaker add peer \ - compute-1.example.com rack1-switch-1 + $ # Add the BGP peer to the speaker of compute 1 + $ openstack bgp speaker add peer \ + compute-1.example.com rack1-switch-1 - $ # Tell the speaker not to advertize tenant networks - $ openstack bgp speaker set \ - --no-advertise-tenant-networks mycloud-compute-1.example.com + $ # Tell the speaker not to advertize tenant networks + $ openstack bgp speaker set \ + --no-advertise-tenant-networks mycloud-compute-1.example.com It is possible to repeat this operation for a 2nd machine on the same rack, @@ -141,25 +141,23 @@ in each host, according to the rack names. On the compute or network nodes, this is done in /etc/neutron/plugins/ml2/openvswitch_agent.ini using the bridge_mappings directive: - .. code-block:: ini - - [ovs] - bridge_mappings = physnet-rack1:br-ex +.. code-block:: ini + [ovs] + bridge_mappings = physnet-rack1:br-ex All of the physical networks created this way must be added in the configuration of the neutron-server as well (ie: this is used by both neutron-api and neutron-rpc-server). For example, with 3 racks, here's how /etc/neutron/plugins/ml2/ml2_conf.ini should look like: - .. code-block:: ini +.. code-block:: ini - [ml2_type_flat] - flat_networks = physnet-rack1,physnet-rack2,physnet-rack3 - - [ml2_type_vlan] - network_vlan_ranges = physnet-rack1,physnet-rack2,physnet-rack3 + [ml2_type_flat] + flat_networks = physnet-rack1,physnet-rack2,physnet-rack3 + [ml2_type_vlan] + network_vlan_ranges = physnet-rack1,physnet-rack2,physnet-rack3 Once this is done, the provider network can be created, using physnet-rack1 as "physical network". @@ -171,40 +169,40 @@ Setting-up the provider network Everything that is in the provider network's scope will be advertised through BGP. Here is how to create the network scope: - .. code-block:: console +.. 
code-block:: console - $ # Create the address scope - $ openstack address scope create --share --ip-version 4 provider-addr-scope + $ # Create the address scope + $ openstack address scope create --share --ip-version 4 provider-addr-scope Then, the network can be ceated using the physical network name set above: - .. code-block:: console +.. code-block:: console - $ # Create the provider network that spawns over all racks - $ openstack network create --external --share \ - --provider-physical-network physnet-rack1 \ - --provider-network-type vlan \ - --provider-segment 11 \ - provider-network + $ # Create the provider network that spawns over all racks + $ openstack network create --external --share \ + --provider-physical-network physnet-rack1 \ + --provider-network-type vlan \ + --provider-segment 11 \ + provider-network This automatically creates a network AND a segment. Though by default, this segment has no name, which isn't convenient. This name can be changed though: - .. code-block:: console +.. 
code-block:: console - $ # Get the network ID: - $ PROVIDER_NETWORK_ID=$(openstack network show provider-network \ - --format value -c id) + $ # Get the network ID: + $ PROVIDER_NETWORK_ID=$(openstack network show provider-network \ + --format value -c id) - $ # Get the segment ID: - $ FIRST_SEGMENT_ID=$(openstack network segment list \ - --format csv -c ID -c Network | \ - q -H -d, "SELECT ID FROM - WHERE Network='${PROVIDER_NETWORK_ID}'") + $ # Get the segment ID: + $ FIRST_SEGMENT_ID=$(openstack network segment list \ + --format csv -c ID -c Network | \ + q -H -d, "SELECT ID FROM - WHERE Network='${PROVIDER_NETWORK_ID}'") - $ # Set the 1st segment name, matching the rack name - $ openstack network segment set --name segment-rack1 ${FIRST_SEGMENT_ID} + $ # Set the 1st segment name, matching the rack name + $ openstack network segment set --name segment-rack1 ${FIRST_SEGMENT_ID} Setting-up the 2nd segment @@ -213,15 +211,15 @@ Setting-up the 2nd segment The 2nd segment, which will be attached to our provider network, is created this way: - .. code-block:: console +.. code-block:: console - $ # Create the 2nd segment, matching the 2nd rack name - $ openstack network segment create \ - --physical-network physnet-rack2 \ - --network-type vlan \ - --segment 13 \ - --network provider-network \ - segment-rack2 + $ # Create the 2nd segment, matching the 2nd rack name + $ openstack network segment create \ + --physical-network physnet-rack2 \ + --network-type vlan \ + --segment 13 \ + --network provider-network \ + segment-rack2 Setting-up the provider subnets for the BGP next HOP routing @@ -232,45 +230,45 @@ network is in use in the machines. In order to use the address scope, subnet pools must be used. Here is how to create the subnet pool with the two ranges to use later when creating the subnets: - .. code-block:: console +.. 
code-block:: console - $ # Create the provider subnet pool which includes all ranges for all racks - $ openstack subnet pool create \ - --pool-prefix 10.1.0.0/24 \ - --pool-prefix 10.2.0.0/24 \ - --address-scope provider-addr-scope \ - --share \ - provider-subnet-pool + $ # Create the provider subnet pool which includes all ranges for all racks + $ openstack subnet pool create \ + --pool-prefix 10.1.0.0/24 \ + --pool-prefix 10.2.0.0/24 \ + --address-scope provider-addr-scope \ + --share \ + provider-subnet-pool Then, this is how to create the two subnets. In this example, we are keeping the addresses in .1 for the gateway, .2 for the DHCP server, and .253 +.254, as these addresses will be used by the switches for the BGP announcements: - .. code-block:: console +.. code-block:: console - $ # Create the subnet for the physnet-rack-1, using the segment-rack-1, and - $ # the subnet_service_type network:floatingip_agent_gateway - $ openstack subnet create \ - --service-type 'network:floatingip_agent_gateway' \ - --subnet-pool provider-subnet-pool \ - --subnet-range 10.1.0.0/24 \ - --allocation-pool start=10.1.0.3,end=10.1.0.252 \ - --gateway 10.1.0.1 \ - --network provider-network \ - --network-segment segment-rack1 \ - provider-subnet-rack1 + $ # Create the subnet for the physnet-rack-1, using the segment-rack-1, and + $ # the subnet_service_type network:floatingip_agent_gateway + $ openstack subnet create \ + --service-type 'network:floatingip_agent_gateway' \ + --subnet-pool provider-subnet-pool \ + --subnet-range 10.1.0.0/24 \ + --allocation-pool start=10.1.0.3,end=10.1.0.252 \ + --gateway 10.1.0.1 \ + --network provider-network \ + --network-segment segment-rack1 \ + provider-subnet-rack1 - $ # The same, for the 2nd rack - $ openstack subnet create \ - --service-type 'network:floatingip_agent_gateway' \ - --subnet-pool provider-subnet-pool \ - --subnet-range 10.2.0.0/24 \ - --allocation-pool start=10.2.0.3,end=10.2.0.252 \ - --gateway 10.2.0.1 \ - --network 
provider-network \ - --network-segment segment-rack2 \ - provider-subnet-rack2 + $ # The same, for the 2nd rack + $ openstack subnet create \ + --service-type 'network:floatingip_agent_gateway' \ + --subnet-pool provider-subnet-pool \ + --subnet-range 10.2.0.0/24 \ + --allocation-pool start=10.2.0.3,end=10.2.0.252 \ + --gateway 10.2.0.1 \ + --network provider-network \ + --network-segment segment-rack2 \ + provider-subnet-rack2 Note the service types. network:floatingip_agent_gateway makes sure that these @@ -285,21 +283,21 @@ This is to be repeated each time a new subnet must be created for floating IPs and router gateways. First, the range is added in the subnet pool, then the subnet itself is created: - .. code-block:: console +.. code-block:: console - $ # Add a new prefix in the subnet pool for the floating IPs: - $ openstack subnet pool set \ - --pool-prefix 203.0.113.0/24 \ - provider-subnet-pool + $ # Add a new prefix in the subnet pool for the floating IPs: + $ openstack subnet pool set \ + --pool-prefix 203.0.113.0/24 \ + provider-subnet-pool - $ # Create the floating IP subnet - $ openstack subnet create vm-fip \ - --service-type 'network:routed' \ - --service-type 'network:floatingip' \ - --service-type 'network:router_gateway' \ - --subnet-pool provider-subnet-pool \ - --subnet-range 203.0.113.0/24 \ - --network provider-network + $ # Create the floating IP subnet + $ openstack subnet create vm-fip \ + --service-type 'network:routed' \ + --service-type 'network:floatingip' \ + --service-type 'network:router_gateway' \ + --subnet-pool provider-subnet-pool \ + --subnet-range 203.0.113.0/24 \ + --network provider-network The service-type network:routed ensures we're using BGP through the provider network to advertize the IPs. network:floatingip and network:router_gateway @@ -312,13 +310,13 @@ The provider network needs to be added to each of the BGP speakers. 
This means each time a new rack is setup, the provider network must be added to the 2 BGP speakers of that rack. - .. code-block:: console +.. code-block:: console - $ # Add the provider network to the BGP speakers. - $ openstack bgp speaker add network \ - mycloud-compute-1.example.com provider-network - $ openstack bgp speaker add network \ - mycloud-compute-4.example.com provider-network + $ # Add the provider network to the BGP speakers. + $ openstack bgp speaker add network \ + mycloud-compute-1.example.com provider-network + $ openstack bgp speaker add network \ + mycloud-compute-4.example.com provider-network In this example, we've selected two compute nodes that are also running an @@ -332,68 +330,68 @@ This can be done by each customer. A subnet pool isn't mandatory, but it is nice to have. Typically, the customer network will not be advertized through BGP (but this can be done if needed). - .. code-block:: console +.. code-block:: console - $ # Create the tenant private network - $ openstack network create tenant-network + $ # Create the tenant private network + $ openstack network create tenant-network - $ # Self-service network pool: - $ openstack subnet pool create \ - --pool-prefix 192.168.130.0/23 \ - --share \ - tenant-subnet-pool + $ # Self-service network pool: + $ openstack subnet pool create \ + --pool-prefix 192.168.130.0/23 \ + --share \ + tenant-subnet-pool - $ # Self-service subnet: - $ openstack subnet create \ - --network tenant-network \ - --subnet-pool tenant-subnet-pool \ - --prefix-length 24 \ - tenant-subnet-1 + $ # Self-service subnet: + $ openstack subnet create \ + --network tenant-network \ + --subnet-pool tenant-subnet-pool \ + --prefix-length 24 \ + tenant-subnet-1 - $ # Create the router - $ openstack router create tenant-router + $ # Create the router + $ openstack router create tenant-router - $ # Add the tenant subnet to the tenant router - $ openstack router add subnet \ - tenant-router tenant-subnet-1 + $ # Add the tenant 
subnet to the tenant router + $ openstack router add subnet \ + tenant-router tenant-subnet-1 - $ # Set the router's default gateway. This will use one public IP. - $ openstack router set \ - --external-gateway provider-network tenant-router + $ # Set the router's default gateway. This will use one public IP. + $ openstack router set \ + --external-gateway provider-network tenant-router - $ # Create a first VM on the tenant subnet - $ openstack server create --image debian-10.5.0-openstack-amd64.qcow2 \ - --flavor cpu2-ram6-disk20 \ - --nic net-id=tenant-network \ - --key-name yubikey-zigo \ - test-server-1 + $ # Create a first VM on the tenant subnet + $ openstack server create --image debian-10.5.0-openstack-amd64.qcow2 \ + --flavor cpu2-ram6-disk20 \ + --nic net-id=tenant-network \ + --key-name yubikey-zigo \ + test-server-1 - $ # Eventually, add a floating IP - $ openstack floating ip create provider-network - +---------------------+--------------------------------------+ - | Field | Value | - +---------------------+--------------------------------------+ - | created_at | 2020-12-15T11:48:36Z | - | description | | - | dns_domain | None | - | dns_name | None | - | fixed_ip_address | None | - | floating_ip_address | 203.0.113.17 | - | floating_network_id | 859f5302-7b22-4c50-92f8-1f71d6f3f3f4 | - | id | 01de252b-4b78-4198-bc28-1328393bf084 | - | name | 203.0.113.17 | - | port_details | None | - | port_id | None | - | project_id | d71a5d98aef04386b57736a4ea4f3644 | - | qos_policy_id | None | - | revision_number | 0 | - | router_id | None | - | status | DOWN | - | subnet_id | None | - | tags | [] | - | updated_at | 2020-12-15T11:48:36Z | - +---------------------+--------------------------------------+ - $ openstack server add floating ip test-server-1 203.0.113.17 + $ # Eventually, add a floating IP + $ openstack floating ip create provider-network + +---------------------+--------------------------------------+ + | Field | Value | + 
+---------------------+--------------------------------------+ + | created_at | 2020-12-15T11:48:36Z | + | description | | + | dns_domain | None | + | dns_name | None | + | fixed_ip_address | None | + | floating_ip_address | 203.0.113.17 | + | floating_network_id | 859f5302-7b22-4c50-92f8-1f71d6f3f3f4 | + | id | 01de252b-4b78-4198-bc28-1328393bf084 | + | name | 203.0.113.17 | + | port_details | None | + | port_id | None | + | project_id | d71a5d98aef04386b57736a4ea4f3644 | + | qos_policy_id | None | + | revision_number | 0 | + | router_id | None | + | status | DOWN | + | subnet_id | None | + | tags | [] | + | updated_at | 2020-12-15T11:48:36Z | + +---------------------+--------------------------------------+ + $ openstack server add floating ip test-server-1 203.0.113.17 Cumulus switch configuration ---------------------------- @@ -409,38 +407,38 @@ that works (at least with Cumulus switches). Here's how. In /etc/network/switchd.conf we change this: - .. code-block:: ini +.. code-block:: ini - # configure a route instead of a neighbor with the same ip/mask - #route.route_preferred_over_neigh = FALSE - route.route_preferred_over_neigh = TRUE + # configure a route instead of a neighbor with the same ip/mask + #route.route_preferred_over_neigh = FALSE + route.route_preferred_over_neigh = TRUE and then simply restart switchd: - .. code-block:: console +.. code-block:: console - systemctl restart switchd + systemctl restart switchd This reboots the switch ASIC of the switch, so it may be a dangerous thing to do with no switch redundancy (so be careful when doing it). The completely safe procedure, if having 2 switches per rack, looks like this: - .. code-block:: console +.. code-block:: console - # save clagd priority - OLDPRIO=$(clagctl status | sed -r -n 's/.*Our.*Role: ([0-9]+) 0.*/\1/p') - # make sure that this switch is not the primary clag switch. otherwise the - # secondary switch will also shutdown all interfaces when loosing contact - # with the primary switch. 
- clagctl priority 16535 + # save clagd priority + OLDPRIO=$(clagctl status | sed -r -n 's/.*Our.*Role: ([0-9]+) 0.*/\1/p') + # make sure that this switch is not the primary clag switch. otherwise the + # secondary switch will also shutdown all interfaces when loosing contact + # with the primary switch. + clagctl priority 16535 - # tell neighbors to not route through this router - vtysh - vtysh# router bgp 64999 - vtysh# bgp graceful-shutdown - vtysh# exit - systemctl restart switchd - clagctl priority $OLDPRIO + # tell neighbors to not route through this router + vtysh + vtysh# router bgp 64999 + vtysh# bgp graceful-shutdown + vtysh# exit + systemctl restart switchd + clagctl priority $OLDPRIO Verification ------------ @@ -449,16 +447,16 @@ If everything goes well, the floating IPs are advertized over BGP through the provider network. Here is an example with 4 VMs deployed on 2 racks. Neutron is here picking-up IPs on the segmented network as Nexthop. - .. code-block:: console +.. code-block:: console - $ # Check the advertized routes: - $ openstack bgp speaker list advertised routes \ - mycloud-compute-4.example.com - +-----------------+-----------+ - | Destination | Nexthop | - +-----------------+-----------+ - | 203.0.113.17/32 | 10.1.0.48 | - | 203.0.113.20/32 | 10.1.0.65 | - | 203.0.113.40/32 | 10.2.0.23 | - | 203.0.113.55/32 | 10.2.0.35 | - +-----------------+-----------+ + $ # Check the advertized routes: + $ openstack bgp speaker list advertised routes \ + mycloud-compute-4.example.com + +-----------------+-----------+ + | Destination | Nexthop | + +-----------------+-----------+ + | 203.0.113.17/32 | 10.1.0.48 | + | 203.0.113.20/32 | 10.1.0.65 | + | 203.0.113.40/32 | 10.2.0.23 | + | 203.0.113.55/32 | 10.2.0.35 | + +-----------------+-----------+ diff --git a/doc/source/admin/config-logging.rst b/doc/source/admin/config-logging.rst index 6cd7ca2afec..ba1953738c7 100644 --- a/doc/source/admin/config-logging.rst +++ b/doc/source/admin/config-logging.rst @@ 
-1,328 +1,328 @@ -.. _config-logging: - -================================ -Neutron Packet Logging Framework -================================ - -Packet logging service is designed as a Neutron plug-in that captures network -packets for relevant resources (e.g. security group or firewall group) when the -registered events occur. - -.. image:: figures/logging-framework.png - :width: 100% - :alt: Packet Logging Framework - -Supported loggable resource types -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -From Rocky release, both of ``security_group`` and ``firewall_group`` are -supported as resource types in Neutron packet logging framework. - - -Service Configuration -~~~~~~~~~~~~~~~~~~~~~ - -To enable the logging service, follow the below steps. - -#. On Neutron controller node, add ``log`` to ``service_plugins`` setting in - ``/etc/neutron/neutron.conf`` file. For example: - - .. code-block:: none - - service_plugins = router,metering,log - -#. To enable logging service for ``security_group`` in Layer 2, add ``log`` to - option ``extensions`` in section ``[agent]`` in ``/etc/neutron/plugins/ml2/ml2_conf.ini`` - for controller node and in ``/etc/neutron/plugins/ml2/openvswitch_agent.ini`` - for compute/network nodes. For example: - - .. code-block:: ini - - [agent] - extensions = log - - .. note:: - - Fwaas v2 log is currently only supported by openvswitch, the firewall - logging driver of linuxbridge is not implemented. - -#. To enable logging service for ``firewall_group`` in Layer 3, add - ``fwaas_v2_log`` to option ``extensions`` in section ``[AGENT]`` in - ``/etc/neutron/l3_agent.ini`` for network nodes. For example: - - .. code-block:: ini - - [AGENT] - extensions = fwaas_v2,fwaas_v2_log - -#. On compute/network nodes, add configuration for logging service to - ``[network_log]`` in ``/etc/neutron/plugins/ml2/openvswitch_agent.ini`` and in - ``/etc/neutron/l3_agent.ini`` as shown bellow: - - .. 
code-block:: ini - - [network_log] - rate_limit = 100 - burst_limit = 25 - #local_output_log_base = - - In which, ``rate_limit`` is used to configure the maximum number of packets - to be logged per second (packets per second). When a high rate triggers - ``rate_limit``, logging queues packets to be logged. ``burst_limit`` is - used to configure the maximum of queued packets. And logged packets can be - stored anywhere by using ``local_output_log_base``. - - .. note:: - - - It requires at least ``100`` for ``rate_limit`` and at least ``25`` - for ``burst_limit``. - - If ``rate_limit`` is unset, logging will log unlimited. - - If we don't specify ``local_output_log_base``, logged packets will be - stored in system journal like ``/var/log/syslog`` by default. - -Trusted projects policy.yaml configuration ----------------------------------------------- - -With the default ``/etc/neutron/policy.yaml``, administrators must set up -resource logging on behalf of the cloud projects. - -If projects are trusted to administer their own loggable resources in their -cloud, neutron's policy file ``policy.yaml`` can be modified to allow this. - -Modify ``/etc/neutron/policy.yaml`` entries as follows: - -.. code-block:: none - - "get_loggable_resources": "rule:regular_user", - "create_log": "rule:regular_user", - "get_log": "rule:regular_user", - "get_logs": "rule:regular_user", - "update_log": "rule:regular_user", - "delete_log": "rule:regular_user", - -Service workflow for Operator -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -#. To check the loggable resources that are supported by framework: - - .. code-block:: console - - $ openstack network loggable resources list - +-----------------+ - | Supported types | - +-----------------+ - | security_group | - | firewall_group | - +-----------------+ - - .. note:: - - - In VM ports, logging for ``security_group`` in currently works with - ``openvswitch`` firewall driver only. ``linuxbridge`` is under - development. 
- - Logging for ``firewall_group`` works on internal router ports only. VM - ports would be supported in the future. - -#. Log creation: - - * Create a logging resource with an appropriate resource type - - .. code-block:: console - - $ openstack network log create --resource-type security_group \ - --description "Collecting all security events" \ - --event ALL Log_Created - +-----------------+------------------------------------------------+ - | Field | Value | - +-----------------+------------------------------------------------+ - | Description | Collecting all security events | - | Enabled | True | - | Event | ALL | - | ID | 8085c3e6-0fa2-4954-b5ce-ff6207931b6d | - | Name | Log_Created | - | Project | 02568bd62b414221956f15dbe9527d16 | - | Resource | None | - | Target | None | - | Type | security_group | - | created_at | 2017-07-05T02:56:43Z | - | revision_number | 0 | - | tenant_id | 02568bd62b414221956f15dbe9527d16 | - | updated_at | 2017-07-05T02:56:43Z | - +-----------------+------------------------------------------------+ - - .. warning:: - - In the case of ``--resource`` and ``--target`` are not specified from the - request, these arguments will be assigned to ``ALL`` by default. Hence, - there is an enormous range of log events will be created. - - * Create logging resource with a given resource (sg1 or fwg1) - - .. code-block:: console - - $ openstack network log create my-log --resource-type security_group --resource sg1 - $ openstack network log create my-log --resource-type firewall_group --resource fwg1 - - * Create logging resource with a given target (portA) - - .. code-block:: console - - $ openstack network log create my-log --resource-type security_group --target portA - - * Create logging resource for only the given target (portB) and the given - resource (sg1 or fwg1) - - .. 
code-block:: console - - $ openstack network log create my-log --resource-type security_group --target portB --resource sg1 - $ openstack network log create my-log --resource-type firewall_group --target portB --resource fwg1 - - .. note:: - - - The ``Enabled`` field is set to ``True`` by default. If enabled, logged - events are written to the destination if ``local_output_log_base`` is - configured or ``/var/log/syslog`` in default. - - The ``Event`` field will be set to ``ALL`` if ``--event`` is not specified - from log creation request. - -#. Enable/Disable log - - We can ``enable`` or ``disable`` logging objects at runtime. It means that - it will apply to all registered ports with the logging object immediately. - For example: - - .. code-block:: console - - $ openstack network log set --disable Log_Created - $ openstack network log show Log_Created - +-----------------+------------------------------------------------+ - | Field | Value | - +-----------------+------------------------------------------------+ - | Description | Collecting all security events | - | Enabled | False | - | Event | ALL | - | ID | 8085c3e6-0fa2-4954-b5ce-ff6207931b6d | - | Name | Log_Created | - | Project | 02568bd62b414221956f15dbe9527d16 | - | Resource | None | - | Target | None | - | Type | security_group | - | created_at | 2017-07-05T02:56:43Z | - | revision_number | 1 | - | tenant_id | 02568bd62b414221956f15dbe9527d16 | - | updated_at | 2017-07-05T03:12:01Z | - +-----------------+------------------------------------------------+ - -Logged events description -~~~~~~~~~~~~~~~~~~~~~~~~~ - -Currently, packet logging framework supports to collect ``ACCEPT`` or ``DROP`` -or both events related to registered resources. As mentioned above, Neutron -packet logging framework offers two loggable resources through the ``log`` -service plug-in: ``security_group`` and ``firewall_group``. 
- -The general characteristics of each event will be shown as the following: - -* Log every ``DROP`` event: Every ``DROP`` security events will be generated - when an incoming or outgoing session is blocked by the security groups or - firewall groups - -* Log an ``ACCEPT`` event: The ``ACCEPT`` security event will be generated only - for each ``NEW`` incoming or outgoing session that is allowed by security - groups or firewall groups. More details for the ``ACCEPT`` events are shown - as bellow: - - * North/South ``ACCEPT``: For a North/South session there would be a single - ``ACCEPT`` event irrespective of direction. - - * East/West ``ACCEPT``/``ACCEPT``: In an intra-project East/West session - where the originating port allows the session and the destination port - allows the session, i.e. the traffic is allowed, there would be two - ``ACCEPT`` security events generated, one from the perspective of the - originating port and one from the perspective of the destination port. - - * East/West ``ACCEPT``/``DROP``: In an intra-project East/West session - initiation where the originating port allows the session and the - destination port does not allow the session there would be ``ACCEPT`` - security events generated from the perspective of the originating port and - ``DROP`` security events generated from the perspective of the destination - port. - -#. The security events that are collected by security group should include: - - * A timestamp of the flow. - * A status of the flow ``ACCEPT``/``DROP``. - * An indication of the originator of the flow, e.g which project or log resource - generated the events. - * An identifier of the associated instance interface (neutron port id). - * A layer 2, 3 and 4 information (mac, address, port, protocol, etc). - - * Security event record format: - - * Logged data of an ``ACCEPT`` event would look like: - - .. 
code-block:: console - - May 5 09:05:07 action=ACCEPT project_id=736672c700cd43e1bd321aeaf940365c - log_resource_ids=['4522efdf-8d44-4e19-b237-64cafc49469b', '42332d89-df42-4588-a2bb-3ce50829ac51'] - vm_port=e0259ade-86de-482e-a717-f58258f7173f - ethernet(dst='fa:16:3e:ec:36:32',ethertype=2048,src='fa:16:3e:50:aa:b5'), - ipv4(csum=62071,dst='10.0.0.4',flags=2,header_length=5,identification=36638,offset=0, - option=None,proto=6,src='172.24.4.10',tos=0,total_length=60,ttl=63,version=4), - tcp(ack=0,bits=2,csum=15097,dst_port=80,offset=10,option=[TCPOptionMaximumSegmentSize(kind=2,length=4,max_seg_size=1460), - TCPOptionSACKPermitted(kind=4,length=2), TCPOptionTimestamps(kind=8,length=10,ts_ecr=0,ts_val=196418896), - TCPOptionNoOperation(kind=1,length=1), TCPOptionWindowScale(kind=3,length=3,shift_cnt=3)], - seq=3284890090,src_port=47825,urgent=0,window_size=14600) - - * Logged data of a ``DROP`` event: - - .. code-block:: console - - May 5 09:05:07 action=DROP project_id=736672c700cd43e1bd321aeaf940365c - log_resource_ids=['4522efdf-8d44-4e19-b237-64cafc49469b'] vm_port=e0259ade-86de-482e-a717-f58258f7173f - ethernet(dst='fa:16:3e:ec:36:32',ethertype=2048,src='fa:16:3e:50:aa:b5'), - ipv4(csum=62071,dst='10.0.0.4',flags=2,header_length=5,identification=36638,offset=0, - option=None,proto=6,src='172.24.4.10',tos=0,total_length=60,ttl=63,version=4), - tcp(ack=0,bits=2,csum=15097,dst_port=80,offset=10,option=[TCPOptionMaximumSegmentSize(kind=2,length=4,max_seg_size=1460), - TCPOptionSACKPermitted(kind=4,length=2), TCPOptionTimestamps(kind=8,length=10,ts_ecr=0,ts_val=196418896), - TCPOptionNoOperation(kind=1,length=1), TCPOptionWindowScale(kind=3,length=3,shift_cnt=3)], - seq=3284890090,src_port=47825,urgent=0,window_size=14600) - -#. The events that are collected by firewall group should include: - - * A timestamp of the flow. - * A status of the flow ``ACCEPT``/``DROP``. 
- * The identifier of log objects that are collecting this event - * An identifier of the associated instance interface (neutron port id). - * A layer 2, 3 and 4 information (mac, address, port, protocol, etc). - - * Security event record format: - - * Logged data of an ``ACCEPT`` event would look like: - - .. code-block:: console - - Jul 26 14:46:20: - action=ACCEPT, log_resource_ids=[u'2e030f3a-e93d-4a76-bc60-1d11c0f6561b'], port=9882c485-b808-4a34-a3fb-b537642c66b2 - pkt=ethernet(dst='fa:16:3e:8f:47:c5',ethertype=2048,src='fa:16:3e:1b:3e:67') - ipv4(csum=47423,dst='10.10.1.16',flags=2,header_length=5,identification=27969,offset=0,option=None,proto=1,src='10.10.0.5',tos=0,total_length=84,ttl=63,version=4) - icmp(code=0,csum=41376,data=echo(data='\xe5\xf2\xfej\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 - \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 - \x00\x00\x00\x00\x00\x00\x00',id=29185,seq=0),type=8) - - * Logged data of a ``DROP`` event: - - .. code-block:: console - - Jul 26 14:51:20: - action=DROP, log_resource_ids=[u'2e030f3a-e93d-4a76-bc60-1d11c0f6561b'], port=9882c485-b808-4a34-a3fb-b537642c66b2 - pkt=ethernet(dst='fa:16:3e:32:7d:ff',ethertype=2048,src='fa:16:3e:28:83:51') - ipv4(csum=17518,dst='10.10.0.5',flags=2,header_length=5,identification=57874,offset=0,option=None,proto=1,src='10.10.1.16',tos=0,total_length=84,ttl=63,version=4) - icmp(code=0,csum=23772,data=echo(data='\x8a\xa0\xac|\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 - \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 - \x00\x00\x00\x00\x00\x00\x00',id=25601,seq=5),type=8) - -.. note:: - - No other extraneous events are generated within the security event logs, - e.g. no debugging data, etc. +.. 
_config-logging:
+
+================================
+Neutron Packet Logging Framework
+================================
+
+The packet logging service is designed as a Neutron plug-in that captures
+network packets for relevant resources (e.g. a security group or a firewall
+group) when registered events occur.
+
+.. image:: figures/logging-framework.png
+   :width: 100%
+   :alt: Packet Logging Framework
+
+Supported loggable resource types
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Since the Rocky release, both ``security_group`` and ``firewall_group`` are
+supported as resource types in the Neutron packet logging framework.
+
+
+Service Configuration
+~~~~~~~~~~~~~~~~~~~~~
+
+To enable the logging service, follow the steps below.
+
+#. On the Neutron controller node, add ``log`` to the ``service_plugins``
+   setting in the ``/etc/neutron/neutron.conf`` file. For example:
+
+   .. code-block:: none
+
+      service_plugins = router,metering,log
+
+#. To enable the logging service for ``security_group`` in Layer 2, add
+   ``log`` to the ``extensions`` option in the ``[agent]`` section of
+   ``/etc/neutron/plugins/ml2/ml2_conf.ini`` on the controller node and of
+   ``/etc/neutron/plugins/ml2/openvswitch_agent.ini`` on compute/network
+   nodes. For example:
+
+   .. code-block:: ini
+
+      [agent]
+      extensions = log
+
+   .. note::
+
+      FWaaS v2 logging is currently supported only by ``openvswitch``; the
+      ``linuxbridge`` firewall logging driver is not implemented.
+
+#. To enable the logging service for ``firewall_group`` in Layer 3, add
+   ``fwaas_v2_log`` to the ``extensions`` option in the ``[AGENT]`` section
+   of ``/etc/neutron/l3_agent.ini`` on network nodes. For example:
+
+   .. code-block:: ini
+
+      [AGENT]
+      extensions = fwaas_v2,fwaas_v2_log
+
+#. On compute/network nodes, add the logging service configuration to the
+   ``[network_log]`` section in ``/etc/neutron/plugins/ml2/openvswitch_agent.ini``
+   and in ``/etc/neutron/l3_agent.ini`` as shown below:
+
+   .. code-block:: ini
+
+      [network_log]
+      rate_limit = 100
+      burst_limit = 25
+      #local_output_log_base = 
+
+   Here, ``rate_limit`` configures the maximum number of packets logged per
+   second. When traffic exceeds ``rate_limit``, the excess packets are queued
+   for logging, and ``burst_limit`` configures the maximum number of queued
+   packets. Logged packets can be stored anywhere by using
+   ``local_output_log_base``.
+
+   .. note::
+
+      - ``rate_limit`` must be at least ``100`` and ``burst_limit`` at least
+        ``25``.
+      - If ``rate_limit`` is unset, logging is not rate limited.
+      - If ``local_output_log_base`` is not specified, logged packets are
+        stored in the system journal, such as ``/var/log/syslog``, by default.
+
+Trusted projects policy.yaml configuration
+----------------------------------------------
+
+With the default ``/etc/neutron/policy.yaml``, administrators must set up
+resource logging on behalf of the cloud projects.
+
+If projects are trusted to administer their own loggable resources in their
+cloud, neutron's policy file ``policy.yaml`` can be modified to allow this.
+
+Modify ``/etc/neutron/policy.yaml`` entries as follows:
+
+.. code-block:: none
+
+   "get_loggable_resources": "rule:regular_user",
+   "create_log": "rule:regular_user",
+   "get_log": "rule:regular_user",
+   "get_logs": "rule:regular_user",
+   "update_log": "rule:regular_user",
+   "delete_log": "rule:regular_user",
+
+Service workflow for Operator
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. To check the loggable resources that are supported by the framework:
+
+   .. code-block:: console
+
+      $ openstack network loggable resources list
+      +-----------------+
+      | Supported types |
+      +-----------------+
+      | security_group  |
+      | firewall_group  |
+      +-----------------+
+
+   .. note::
+
+      - For VM ports, logging for ``security_group`` currently works with the
+        ``openvswitch`` firewall driver only; ``linuxbridge`` support is under
+        development.
+
+      - Logging for ``firewall_group`` works on internal router ports only. VM
+        ports may be supported in the future.
+
+#. Log creation:
+
+   * Create a logging resource with an appropriate resource type:
+
+     .. code-block:: console
+
+        $ openstack network log create --resource-type security_group \
+          --description "Collecting all security events" \
+          --event ALL Log_Created
+        +-----------------+------------------------------------------------+
+        | Field           | Value                                          |
+        +-----------------+------------------------------------------------+
+        | Description     | Collecting all security events                 |
+        | Enabled         | True                                           |
+        | Event           | ALL                                            |
+        | ID              | 8085c3e6-0fa2-4954-b5ce-ff6207931b6d           |
+        | Name            | Log_Created                                    |
+        | Project         | 02568bd62b414221956f15dbe9527d16               |
+        | Resource        | None                                           |
+        | Target          | None                                           |
+        | Type            | security_group                                 |
+        | created_at      | 2017-07-05T02:56:43Z                           |
+        | revision_number | 0                                              |
+        | tenant_id       | 02568bd62b414221956f15dbe9527d16               |
+        | updated_at      | 2017-07-05T02:56:43Z                           |
+        +-----------------+------------------------------------------------+
+
+     .. warning::
+
+        If ``--resource`` and ``--target`` are not specified in the request,
+        these arguments default to ``ALL``. Hence, an enormous number of log
+        events will be created.
+
+   * Create a logging resource with a given resource (sg1 or fwg1):
+
+     .. code-block:: console
+
+        $ openstack network log create my-log --resource-type security_group --resource sg1
+        $ openstack network log create my-log --resource-type firewall_group --resource fwg1
+
+   * Create a logging resource with a given target (portA):
+
+     .. code-block:: console
+
+        $ openstack network log create my-log --resource-type security_group --target portA
+
+   * Create a logging resource for only the given target (portB) and the given
+     resource (sg1 or fwg1):
+
+     .. code-block:: console
+
+        $ openstack network log create my-log --resource-type security_group --target portB --resource sg1
+        $ openstack network log create my-log --resource-type firewall_group --target portB --resource fwg1
+
+   .. note::
+
+      - The ``Enabled`` field is set to ``True`` by default. If enabled, logged
+        events are written to the destination given by ``local_output_log_base``
+        if configured, or to ``/var/log/syslog`` by default.
+      - The ``Event`` field is set to ``ALL`` if ``--event`` is not specified
+        in the log creation request.
+
+#. Enable/Disable log
+
+   Logging objects can be enabled or disabled at runtime. The change applies
+   immediately to all ports registered with the logging object. For example:
+
+   .. code-block:: console
+
+      $ openstack network log set --disable Log_Created
+      $ openstack network log show Log_Created
+      +-----------------+------------------------------------------------+
+      | Field           | Value                                          |
+      +-----------------+------------------------------------------------+
+      | Description     | Collecting all security events                 |
+      | Enabled         | False                                          |
+      | Event           | ALL                                            |
+      | ID              | 8085c3e6-0fa2-4954-b5ce-ff6207931b6d           |
+      | Name            | Log_Created                                    |
+      | Project         | 02568bd62b414221956f15dbe9527d16               |
+      | Resource        | None                                           |
+      | Target          | None                                           |
+      | Type            | security_group                                 |
+      | created_at      | 2017-07-05T02:56:43Z                           |
+      | revision_number | 1                                              |
+      | tenant_id       | 02568bd62b414221956f15dbe9527d16               |
+      | updated_at      | 2017-07-05T03:12:01Z                           |
+      +-----------------+------------------------------------------------+
+
+Logged events description
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Currently, the packet logging framework supports collecting ``ACCEPT`` events,
+``DROP`` events, or both for registered resources. As mentioned above, the
+Neutron packet logging framework offers two loggable resources through the
+``log`` service plug-in: ``security_group`` and ``firewall_group``.
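As a rough mental model of this filtering, the decision of whether a log
object reports a given flow can be sketched in a few lines of plain Python.
This is illustrative only, not Neutron code; the function name and arguments
are invented for the example:

```python
def should_log(log_event, traffic_action, enabled=True):
    """Decide whether a log object configured with ``log_event``
    ('ALL', 'ACCEPT' or 'DROP') reports a flow whose verdict is
    ``traffic_action`` ('ACCEPT' or 'DROP')."""
    # A disabled log object reports nothing for its registered ports.
    if not enabled:
        return False
    # --event ALL matches both verdicts; otherwise the verdicts must match.
    return log_event == "ALL" or log_event == traffic_action

# A log created with --event ALL reports both verdicts:
assert should_log("ALL", "ACCEPT") and should_log("ALL", "DROP")
# A log created with --event DROP ignores accepted sessions:
assert not should_log("DROP", "ACCEPT")
# Disabling a log silences it immediately:
assert not should_log("ALL", "DROP", enabled=False)
```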
+
+The general characteristics of each event are as follows:
+
+* Log every ``DROP`` event: A ``DROP`` security event is generated whenever
+  an incoming or outgoing session is blocked by the security groups or
+  firewall groups.
+
+* Log an ``ACCEPT`` event: An ``ACCEPT`` security event is generated only for
+  each ``NEW`` incoming or outgoing session that is allowed by security groups
+  or firewall groups. More details on the ``ACCEPT`` events are given below:
+
+  * North/South ``ACCEPT``: For a North/South session there would be a single
+    ``ACCEPT`` event irrespective of direction.
+
+  * East/West ``ACCEPT``/``ACCEPT``: In an intra-project East/West session
+    where the originating port allows the session and the destination port
+    allows the session, i.e. the traffic is allowed, there would be two
+    ``ACCEPT`` security events generated, one from the perspective of the
+    originating port and one from the perspective of the destination port.
+
+  * East/West ``ACCEPT``/``DROP``: In an intra-project East/West session
+    initiation where the originating port allows the session and the
+    destination port does not allow the session, there would be ``ACCEPT``
+    security events generated from the perspective of the originating port and
+    ``DROP`` security events generated from the perspective of the destination
+    port.
+
+#. The security events that are collected by a security group should include:
+
+   * A timestamp of the flow.
+   * A status of the flow, ``ACCEPT``/``DROP``.
+   * An indication of the originator of the flow, e.g. which project or log
+     resource generated the events.
+   * An identifier of the associated instance interface (neutron port id).
+   * Layer 2, 3 and 4 information (MAC, address, port, protocol, etc.).
+
+   * Security event record format:
+
+     Logged data of an ``ACCEPT`` event would look like:
+
+     .. code-block:: console
+
+        May 5 09:05:07 action=ACCEPT project_id=736672c700cd43e1bd321aeaf940365c
+        log_resource_ids=['4522efdf-8d44-4e19-b237-64cafc49469b', '42332d89-df42-4588-a2bb-3ce50829ac51']
+        vm_port=e0259ade-86de-482e-a717-f58258f7173f
+        ethernet(dst='fa:16:3e:ec:36:32',ethertype=2048,src='fa:16:3e:50:aa:b5'),
+        ipv4(csum=62071,dst='10.0.0.4',flags=2,header_length=5,identification=36638,offset=0,
+        option=None,proto=6,src='172.24.4.10',tos=0,total_length=60,ttl=63,version=4),
+        tcp(ack=0,bits=2,csum=15097,dst_port=80,offset=10,option=[TCPOptionMaximumSegmentSize(kind=2,length=4,max_seg_size=1460),
+        TCPOptionSACKPermitted(kind=4,length=2), TCPOptionTimestamps(kind=8,length=10,ts_ecr=0,ts_val=196418896),
+        TCPOptionNoOperation(kind=1,length=1), TCPOptionWindowScale(kind=3,length=3,shift_cnt=3)],
+        seq=3284890090,src_port=47825,urgent=0,window_size=14600)
+
+     Logged data of a ``DROP`` event:
+
+     .. code-block:: console
+
+        May 5 09:05:07 action=DROP project_id=736672c700cd43e1bd321aeaf940365c
+        log_resource_ids=['4522efdf-8d44-4e19-b237-64cafc49469b'] vm_port=e0259ade-86de-482e-a717-f58258f7173f
+        ethernet(dst='fa:16:3e:ec:36:32',ethertype=2048,src='fa:16:3e:50:aa:b5'),
+        ipv4(csum=62071,dst='10.0.0.4',flags=2,header_length=5,identification=36638,offset=0,
+        option=None,proto=6,src='172.24.4.10',tos=0,total_length=60,ttl=63,version=4),
+        tcp(ack=0,bits=2,csum=15097,dst_port=80,offset=10,option=[TCPOptionMaximumSegmentSize(kind=2,length=4,max_seg_size=1460),
+        TCPOptionSACKPermitted(kind=4,length=2), TCPOptionTimestamps(kind=8,length=10,ts_ecr=0,ts_val=196418896),
+        TCPOptionNoOperation(kind=1,length=1), TCPOptionWindowScale(kind=3,length=3,shift_cnt=3)],
+        seq=3284890090,src_port=47825,urgent=0,window_size=14600)
+
+#. The events that are collected by a firewall group should include:
+
+   * A timestamp of the flow.
+   * A status of the flow, ``ACCEPT``/``DROP``.
+ * The identifier of log objects that are collecting this event + * An identifier of the associated instance interface (neutron port id). + * A layer 2, 3 and 4 information (mac, address, port, protocol, etc). + + * Security event record format: + + Logged data of an ``ACCEPT`` event would look like: + + .. code-block:: console + + Jul 26 14:46:20: + action=ACCEPT, log_resource_ids=[u'2e030f3a-e93d-4a76-bc60-1d11c0f6561b'], port=9882c485-b808-4a34-a3fb-b537642c66b2 + pkt=ethernet(dst='fa:16:3e:8f:47:c5',ethertype=2048,src='fa:16:3e:1b:3e:67') + ipv4(csum=47423,dst='10.10.1.16',flags=2,header_length=5,identification=27969,offset=0,option=None,proto=1,src='10.10.0.5',tos=0,total_length=84,ttl=63,version=4) + icmp(code=0,csum=41376,data=echo(data='\xe5\xf2\xfej\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 + \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 + \x00\x00\x00\x00\x00\x00\x00',id=29185,seq=0),type=8) + + Logged data of a ``DROP`` event: + + .. code-block:: console + + Jul 26 14:51:20: + action=DROP, log_resource_ids=[u'2e030f3a-e93d-4a76-bc60-1d11c0f6561b'], port=9882c485-b808-4a34-a3fb-b537642c66b2 + pkt=ethernet(dst='fa:16:3e:32:7d:ff',ethertype=2048,src='fa:16:3e:28:83:51') + ipv4(csum=17518,dst='10.10.0.5',flags=2,header_length=5,identification=57874,offset=0,option=None,proto=1,src='10.10.1.16',tos=0,total_length=84,ttl=63,version=4) + icmp(code=0,csum=23772,data=echo(data='\x8a\xa0\xac|\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 + \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 + \x00\x00\x00\x00\x00\x00\x00',id=25601,seq=5),type=8) + +.. note:: + + No other extraneous events are generated within the security event logs, + e.g. no debugging data, etc. 
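For consumers of these records, the leading ``key=value`` fields (``action``,
``project_id``, ``port``, and so on) can be pulled out with a few lines of
scripting. A minimal parser sketch in plain Python, illustrative only and not
part of Neutron; the function name is invented for the example:

```python
import re

def parse_event_header(line):
    """Extract the key=value pairs from the header of a security event
    record such as the samples shown above. Values are taken up to the
    next whitespace or comma; the timestamp itself has no '=' and is
    therefore skipped."""
    return dict(re.findall(r"(\w+)=([^\s,]+)", line))

record = ("May 5 09:05:07 action=ACCEPT "
          "project_id=736672c700cd43e1bd321aeaf940365c")
fields = parse_event_header(record)
assert fields["action"] == "ACCEPT"
assert fields["project_id"] == "736672c700cd43e1bd321aeaf940365c"
```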
diff --git a/doc/source/admin/config-ovs-offload.rst b/doc/source/admin/config-ovs-offload.rst index ad48c921a14..c3709693741 100644 --- a/doc/source/admin/config-ovs-offload.rst +++ b/doc/source/admin/config-ovs-offload.rst @@ -166,7 +166,7 @@ network and has access to the private networks of all nodes. The PCI bus number of the PF (03:00.0) and VFs (03:00.2 .. 03:00.5) will be used later. - .. code-block::bash + .. code-block:: bash # lspci | grep Ethernet 03:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5] @@ -176,7 +176,6 @@ network and has access to the private networks of all nodes. 03:00.4 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] 03:00.5 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] - .. code-block:: bash # ip link show enp3s0f0 diff --git a/doc/source/admin/config-qos.rst b/doc/source/admin/config-qos.rst index b46273fc97c..97c7ca61d04 100644 --- a/doc/source/admin/config-qos.rst +++ b/doc/source/admin/config-qos.rst @@ -268,13 +268,12 @@ On the network and compute nodes: [agent] extensions = fip_qos, gateway_ip_qos - #. As rate limit doesn't work on Open vSwitch's ``internal`` ports, optionally, as a workaround, to make QoS bandwidth limit work on router's gateway ports, set ``ovs_use_veth`` to ``True`` in ``DEFAULT`` section in ``/etc/neutron/l3_agent.ini`` - .. code-block:: ini + .. code-block:: ini [DEFAULT] ovs_use_veth = True diff --git a/doc/source/admin/config-routed-networks.rst b/doc/source/admin/config-routed-networks.rst index ee07305c1f9..9b30f322f9a 100644 --- a/doc/source/admin/config-routed-networks.rst +++ b/doc/source/admin/config-routed-networks.rst @@ -634,7 +634,7 @@ creating multiple networks and/or increasing broadcast domain. example, the second segment uses the ``provider1`` physical network with VLAN ID 2020. - .. code-block:: console + .. 
code-block:: console $ openstack network segment create --physical-network provider1 \ --network-type vlan --segment 2020 --network multisegment1 segment1-2 diff --git a/doc/source/admin/ovn/routed_provider_networks.rst b/doc/source/admin/ovn/routed_provider_networks.rst index 26db350635c..cd534c78101 100644 --- a/doc/source/admin/ovn/routed_provider_networks.rst +++ b/doc/source/admin/ovn/routed_provider_networks.rst @@ -16,55 +16,55 @@ For example, in the OVN Northbound database, this is how a VLAN Provider Network with two segments (VLAN: 100, 200) is related to their ``Logical_Switch`` counterpart: - .. code-block:: bash +.. code-block:: bash - $ ovn-nbctl list logical_switch public - _uuid : 983719e5-4f32-4fb0-926d-46291457ca41 - acls : [] - dns_records : [] - external_ids : {"neutron:mtu"="1450", "neutron:network_name"=public, "neutron:revision_number"="3"} - forwarding_groups : [] - load_balancer : [] - name : neutron-6c8be12a-9ed0-4ac4-8130-cb8fad83cd46 - other_config : {mcast_flood_unregistered="false", mcast_snoop="true"} - ports : [81bce1ab-87f8-4ed5-8163-f16701499dfe, b23d0c2e-773b-4ecb-8306-53d117006a7b] - qos_rules : [] + $ ovn-nbctl list logical_switch public + _uuid : 983719e5-4f32-4fb0-926d-46291457ca41 + acls : [] + dns_records : [] + external_ids : {"neutron:mtu"="1450", "neutron:network_name"=public, "neutron:revision_number"="3"} + forwarding_groups : [] + load_balancer : [] + name : neutron-6c8be12a-9ed0-4ac4-8130-cb8fad83cd46 + other_config : {mcast_flood_unregistered="false", mcast_snoop="true"} + ports : [81bce1ab-87f8-4ed5-8163-f16701499dfe, b23d0c2e-773b-4ecb-8306-53d117006a7b] + qos_rules : [] - $ ovn-nbctl list logical_switch_port 81bce1ab-87f8-4ed5-8163-f16701499dfe - _uuid : 81bce1ab-87f8-4ed5-8163-f16701499dfe - addresses : [unknown] - dhcpv4_options : [] - dhcpv6_options : [] - dynamic_addresses : [] - enabled : [] - external_ids : {} - ha_chassis_group : [] - name : provnet-96f663af-19fa-4c7e-a1b8-1dfdc9cd9e82 - options : 
{network_name=phys-net-1} - parent_name : [] - port_security : [] - tag : 100 - tag_request : [] - type : localnet - up : false + $ ovn-nbctl list logical_switch_port 81bce1ab-87f8-4ed5-8163-f16701499dfe + _uuid : 81bce1ab-87f8-4ed5-8163-f16701499dfe + addresses : [unknown] + dhcpv4_options : [] + dhcpv6_options : [] + dynamic_addresses : [] + enabled : [] + external_ids : {} + ha_chassis_group : [] + name : provnet-96f663af-19fa-4c7e-a1b8-1dfdc9cd9e82 + options : {network_name=phys-net-1} + parent_name : [] + port_security : [] + tag : 100 + tag_request : [] + type : localnet + up : false - $ ovn-nbctl list logical_switch_port b23d0c2e-773b-4ecb-8306-53d117006a7b - _uuid : b23d0c2e-773b-4ecb-8306-53d117006a7b - addresses : [unknown] - dhcpv4_options : [] - dhcpv6_options : [] - dynamic_addresses : [] - enabled : [] - external_ids : {} - ha_chassis_group : [] - name : provnet-469cbc3d-8e06-4a8f-be3a-3fcdadfd398a - options : {network_name=phys-net-2} - parent_name : [] - port_security : [] - tag : 200 - tag_request : [] - type : localnet - up : false + $ ovn-nbctl list logical_switch_port b23d0c2e-773b-4ecb-8306-53d117006a7b + _uuid : b23d0c2e-773b-4ecb-8306-53d117006a7b + addresses : [unknown] + dhcpv4_options : [] + dhcpv6_options : [] + dynamic_addresses : [] + enabled : [] + external_ids : {} + ha_chassis_group : [] + name : provnet-469cbc3d-8e06-4a8f-be3a-3fcdadfd398a + options : {network_name=phys-net-2} + parent_name : [] + port_security : [] + tag : 200 + tag_request : [] + type : localnet + up : false As you can see, the two ``localnet`` ports are configured with a @@ -73,10 +73,10 @@ VLAN tag and are related to a single ``Logical_Switch`` entry. When node it's running on it will create a patch port to the provider bridge accordingly to the bridge mappings configuration. - .. code-block:: bash +.. 
code-block:: bash - compute-1: bridge-mappings = segment-1:br-provider1 - compute-2: bridge-mappings = segment-2:br-provider2 + compute-1: bridge-mappings = segment-1:br-provider1 + compute-2: bridge-mappings = segment-2:br-provider2 For example, when a port in the multisegment network gets bound to compute-1, ovn-controller will create a patch-port between br-int and diff --git a/doc/source/admin/ovn/troubleshooting.rst b/doc/source/admin/ovn/troubleshooting.rst index 29b10dcc168..dab9ca04c4c 100644 --- a/doc/source/admin/ovn/troubleshooting.rst +++ b/doc/source/admin/ovn/troubleshooting.rst @@ -54,7 +54,7 @@ the host to the OVN database by creating the corresponding "Chassis" and when the process is gracefully stopped, it deletes both registers. These registers are used by Neutron to control the OVN agents. - .. code-block:: console +.. code-block:: console $ openstack network agent list -c ID -c "Agent Type" -c Host -c Alive -c State +--------------------------------------+------------------------------+--------+-------+-------+ @@ -76,40 +76,36 @@ the other one will be down because the "Chassis_Private.nb_cfg_timestamp" is not updated. In this case, the administrator should manually delete from the OVN Southbound database the stale registers. For example: - * List the "Chassis" registers, filtering by hostname and name (OVS - "system-id"): +* List the "Chassis" registers, filtering by hostname and name (OVS + "system-id"): - .. code-block:: console + .. code-block:: console - $ sudo ovn-sbctl list Chassis | grep name - hostname : u20ovn - name : "a55c8d85-2071-4452-92cb-95d15c29bde7" - hostname : u20ovn - name : "ce9a1471-79c1-4472-adfc-9e5ce86eba07" + $ sudo ovn-sbctl list Chassis | grep name + hostname : u20ovn + name : "a55c8d85-2071-4452-92cb-95d15c29bde7" + hostname : u20ovn + name : "ce9a1471-79c1-4472-adfc-9e5ce86eba07" +* Delete the stale "Chassis" register: - * Delete the stale "Chassis" register: + .. code-block:: console - .. 
code-block:: console + $ sudo ovn-sbctl destroy Chassis ce9a1471-79c1-4472-adfc-9e5ce86eba07 - $ sudo ovn-sbctl destroy Chassis ce9a1471-79c1-4472-adfc-9e5ce86eba07 +* List the "Chassis_Private" registers, filtering by name: + .. code-block:: console - * List the "Chassis_Private" registers, filtering by name: + $ sudo ovn-sbctl list Chassis_Private | grep name + name : "a55c8d85-2071-4452-92cb-95d15c29bde7" + name : "ce9a1471-79c1-4472-adfc-9e5ce86eba07" - .. code-block:: console +* Delete the stale "Chassis_Private" register: - $ sudo ovn-sbctl list Chassis_Private | grep name - name : "a55c8d85-2071-4452-92cb-95d15c29bde7" - name : "ce9a1471-79c1-4472-adfc-9e5ce86eba07" - - - * Delete the stale "Chassis_Private" register: - - .. code-block:: console - - $ sudo ovn-sbctl destroy Chassis_Private ce9a1471-79c1-4472-adfc-9e5ce86eba07 + .. code-block:: console + $ sudo ovn-sbctl destroy Chassis_Private ce9a1471-79c1-4472-adfc-9e5ce86eba07 If the host name is also updated during the system upgrade, the Neutron agent list could present entries from different host names, but the older diff --git a/doc/source/configuration/metering-agent.rst b/doc/source/configuration/metering-agent.rst index 663ec186b2f..5bfa7523614 100644 --- a/doc/source/configuration/metering-agent.rst +++ b/doc/source/configuration/metering-agent.rst @@ -42,18 +42,18 @@ also called as legacy) have the following format; bear in mind that if labels are shared, then the counters are for all routers of all projects where the labels were applied. - .. code-block:: json +.. 
code-block:: json - { - "pkts": "", - "bytes": "", - "time": "", - "first_update": "timeutils.utcnow_ts() of the first collection", - "last_update": "timeutils.utcnow_ts() of the last collection", - "host": "", - "label_id": "", - "tenant_id": "" - } + { + "pkts": "", + "bytes": "", + "time": "", + "first_update": "timeutils.utcnow_ts() of the first collection", + "last_update": "timeutils.utcnow_ts() of the last collection", + "host": "", + "label_id": "", + "tenant_id": "" + } The ``first_update`` and ``last_update`` timestamps represent the moment when the first and last data collection happened within the report interval. @@ -129,21 +129,21 @@ legacy mode such as ``bytes``, ``pkts``, ``time``, ``first_update``, ``last_update``, and ``host``. As follows we present an example of JSON message with all of the possible attributes. - .. code-block:: json +.. code-block:: json - { - "resource_id": "router-f0f745d9a59c47fdbbdd187d718f9e41-label-00c714f1-49c8-462c-8f5d-f05f21e035c7", - "project_id": "f0f745d9a59c47fdbbdd187d718f9e41", - "first_update": 1591058790, - "bytes": 0, - "label_id": "00c714f1-49c8-462c-8f5d-f05f21e035c7", - "label_name": "test1", - "last_update": 1591059037, - "host": "", - "time": 247, - "pkts": 0, - "label_shared": true - } + { + "resource_id": "router-f0f745d9a59c47fdbbdd187d718f9e41-label-00c714f1-49c8-462c-8f5d-f05f21e035c7", + "project_id": "f0f745d9a59c47fdbbdd187d718f9e41", + "first_update": 1591058790, + "bytes": 0, + "label_id": "00c714f1-49c8-462c-8f5d-f05f21e035c7", + "label_name": "test1", + "last_update": 1591059037, + "host": "", + "time": 247, + "pkts": 0, + "label_shared": true + } The ``resource_id`` is a unique identified for the "resource" being monitored. Here we consider a resource to be any of the granularities that @@ -156,4 +156,4 @@ As follows we present all of the possible configuration one can use in the metering agent init file. .. 
show-options:: - :config-file: etc/oslo-config-generator/metering_agent.ini \ No newline at end of file + :config-file: etc/oslo-config-generator/metering_agent.ini diff --git a/doc/source/contributor/internals/ovn/ovn_network_logging.rst b/doc/source/contributor/internals/ovn/ovn_network_logging.rst index eca6c0c9532..ff3fe8ad708 100644 --- a/doc/source/contributor/internals/ovn/ovn_network_logging.rst +++ b/doc/source/contributor/internals/ovn/ovn_network_logging.rst @@ -10,10 +10,10 @@ manage affected security group rules. Thus, there is no need for an agent. It is good to keep in mind that Openstack Security Groups (SG) and their rules (SGR) map 1:1 into OVN's Port Groups (PG) and Access Control Lists (ACL): - .. code-block:: none +.. code-block:: none - Openstack Security Group <=> OVN Port Group - Openstack Security Group Rule <=> OVN ACL + Openstack Security Group <=> OVN Port Group + Openstack Security Group Rule <=> OVN ACL Just like SGs have a list of SGRs, PGs have a list of ACLs. PGs also have a list of logical ports, but that is not really relevant in this context. @@ -50,22 +50,22 @@ https://github.com/ovn-org/ovn/commit/880dca99eaf73db7e783999c29386d03c82093bf Below is an example of a meter configuration in OVN. You can locate the fair, unit, burst_size, and rate attributes: - .. code-block:: bash +.. 
code-block:: bash - $ ovn-nbctl list meter - _uuid : 70c76ba9-f303-471b-9d49-25dee299827f - bands : [f114c205-a170-4425-8ca6-4e71099d1955] - external_ids : {"neutron:device_owner"=logging-plugin} - fair : true - name : acl_log_meter - unit : pktps + $ ovn-nbctl list meter + _uuid : 70c76ba9-f303-471b-9d49-25dee299827f + bands : [f114c205-a170-4425-8ca6-4e71099d1955] + external_ids : {"neutron:device_owner"=logging-plugin} + fair : true + name : acl_log_meter + unit : pktps - $ ovn-nbctl list meter-band - _uuid : f114c205-a170-4425-8ca6-4e71099d1955 - action : drop - burst_size : 25 - external_ids : {} - rate : 100 + $ ovn-nbctl list meter-band + _uuid : f114c205-a170-4425-8ca6-4e71099d1955 + action : drop + burst_size : 25 + external_ids : {} + rate : 100 The burst_size and rate attributes are configurable through neutron.conf.services.logging.log_driver_opts. That is not new. @@ -78,39 +78,39 @@ Moreover, there are a few attributes in each ACL that makes it able to provide the networking logging feature. Let's use the example below to point out the relevant fields: - .. code-block:: none +.. 
code-block:: none - $ openstack network log create --resource-type security_group \ - --resource ${SG} --event ACCEPT logme -f value -c ID - 2e456c7f-154e-40a8-bb10-f88ba51b90b5 + $ openstack network log create --resource-type security_group \ + --resource ${SG} --event ACCEPT logme -f value -c ID + 2e456c7f-154e-40a8-bb10-f88ba51b90b5 - $ openstack security group show ${SG} -f json -c rules | jq '.rules | .[2]' | grep -v 'null' - { - "id": "de4ea1e4-c946-40ed-b5b6-53c59418dc0b", - "tenant_id": "2600067ea3a446dba332d20a30ed44fa", - "security_group_id": "c604e984-0789-4c9a-a297-3e7f62fa73fd", - "ethertype": "IPv4", - "direction": "egress", - "standard_attr_id": 48, - "tags": [], - "created_at": "2021-02-06T22:17:44Z", - "updated_at": "2021-02-06T22:17:44Z", - "revision_number": 0, - "project_id": "2600067ea3a446dba332d20a30ed44fa" - } + $ openstack security group show ${SG} -f json -c rules | jq '.rules | .[2]' | grep -v 'null' + { + "id": "de4ea1e4-c946-40ed-b5b6-53c59418dc0b", + "tenant_id": "2600067ea3a446dba332d20a30ed44fa", + "security_group_id": "c604e984-0789-4c9a-a297-3e7f62fa73fd", + "ethertype": "IPv4", + "direction": "egress", + "standard_attr_id": 48, + "tags": [], + "created_at": "2021-02-06T22:17:44Z", + "updated_at": "2021-02-06T22:17:44Z", + "revision_number": 0, + "project_id": "2600067ea3a446dba332d20a30ed44fa" + } - $ ovn-nbctl find acl \ - "external_ids:\"neutron:security_group_rule_id\""="de4ea1e4-c946-40ed-b5b6-53c59418dc0b" - _uuid : 791679e9-237d-4732-a31e-aa634496e02b - action : allow-related - direction : from-lport - external_ids : {"neutron:security_group_rule_id"="de4ea1e4-c946-40ed-b5b6-53c59418dc0b"} - log : true - match : "inport == @pg_c604e984_0789_4c9a_a297_3e7f62fa73fd && ip4" - meter : acl_log_meter - name : neutron-2e456c7f-154e-40a8-bb10-f88ba51b90b5 - priority : 1002 - severity : info + $ ovn-nbctl find acl \ + "external_ids:\"neutron:security_group_rule_id\""="de4ea1e4-c946-40ed-b5b6-53c59418dc0b" + _uuid : 
791679e9-237d-4732-a31e-aa634496e02b + action : allow-related + direction : from-lport + external_ids : {"neutron:security_group_rule_id"="de4ea1e4-c946-40ed-b5b6-53c59418dc0b"} + log : true + match : "inport == @pg_c604e984_0789_4c9a_a297_3e7f62fa73fd && ip4" + meter : acl_log_meter + name : neutron-2e456c7f-154e-40a8-bb10-f88ba51b90b5 + priority : 1002 + severity : info The first command creates a networking-log for a given SG. The second shows an SGR from that SG. The third shell command is where we can see how the ACL with the meter information gets populated. @@ -128,14 +128,14 @@ These are the attributes pertinent to network logging: If we poked the SGR with packets that match its criteria, the ovn-controller local to where the ACLs is enforced will log something that looks like this: - .. code-block:: none +.. code-block:: none - 2021-02-16T11:59:00.640Z|00045|acl_log(ovn_pinctrl0)|INFO| - name="neutron-2e456c7f-154e-40a8-bb10-f88ba51b90b5", - verdict=allow, severity=info: icmp,vlan_tci=0x0000,dl_src=fa:16:3e:24:dc:88, - dl_dst=fa:16:3e:15:6d:e0, - nw_src=10.0.0.12,nw_dst=10.0.0.11,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8, - icmp_code=0 + 2021-02-16T11:59:00.640Z|00045|acl_log(ovn_pinctrl0)|INFO| + name="neutron-2e456c7f-154e-40a8-bb10-f88ba51b90b5", + verdict=allow, severity=info: icmp,vlan_tci=0x0000,dl_src=fa:16:3e:24:dc:88, + dl_dst=fa:16:3e:15:6d:e0, + nw_src=10.0.0.12,nw_dst=10.0.0.11,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8, + icmp_code=0 It is beyond the scope of this document to talk about what happens after the logs are generated by ovn-controllers. 
The harvesting of files across compute nodes is something a project like diff --git a/doc/source/contributor/internals/ovn/port_forwarding.rst b/doc/source/contributor/internals/ovn/port_forwarding.rst index dd632736a0b..32dc2dccc90 100644 --- a/doc/source/contributor/internals/ovn/port_forwarding.rst +++ b/doc/source/contributor/internals/ovn/port_forwarding.rst @@ -14,58 +14,58 @@ load_balancer table for all mappings for a given FIP+protocol. All PFs for the same FIP+protocol are kept as Virtual IP (VIP) mappings inside a LB entry. See the diagram below for an example of how that looks like: - .. code-block:: none +.. code-block:: none - VIP:PORT = MEMBER1:MPORT1, MEMBER2:MPORT2 + VIP:PORT = MEMBER1:MPORT1, MEMBER2:MPORT2 - The same is extended for port forwarding as: + The same is extended for port forwarding as: - FIP:PORT = PRIVATE_IP:PRIV_PORT + FIP:PORT = PRIVATE_IP:PRIV_PORT - Neutron DB OVN Northbound DB + Neutron DB OVN Northbound DB - +---------------------+ +---------------------------------+ - | Floating IP AA | | Load Balancer AA UDP | - | | | | - | +-----------------+ | | | - | | Port Forwarding | | +----------->AA:portA => internal IP1:portX | - | | | | | | | - | | External PortA +-----+ +------->AA:portB => internal IP2:portX | - | | Fixed IP1 PortX | | | | | - | | Protocol: UDP | | | +---------------------------------+ - | +-----------------+ | | - | | | +---------------------------------+ - | +-----------------+ | | | Load Balancer AA TCP | - | | Port Forwarding | | | | | - | | | | | | | - | | External PortB +---------+ +--->AA:portC => internal IP3:portX | - | | Fixed IP2 PortX | | | | | - | | Protocol: UDP | | | +---------------------------------+ - | +-----------------+ | | - | | | - | +-----------------+ | | - | | Port Forwarding | | | - | | | | | +---------------------------------+ - | | External PortC | | | | Load Balancer BB TCP | - | | Fixed IP3 PortX +-------------+ | | - | | Protocol: TCP | | | | - | +-----------------+ | 
+---------->BB:portD => internal IP4:portX | - | | | | | - +---------------------+ | +---------------------------------+ - | - | +-------------------+ - | | Logical Router X1 | - +---------------------+ | | | - | Floating IP BB | | | Load Balancers: | - | | | | AA UDP, AA TCP | - | +-----------------+ | | +-------------------+ - | | Port Forwarding | | | - | | | | | +-------------------+ - | | External PortD | | | | Logical Router Z1 | - | | Fixed IP4 PortX +------+ | | - | | Protocol: TCP | | | Load Balancers: | - | +-----------------+ | | BB TCP | - +---------------------+ +-------------------+ + +---------------------+ +---------------------------------+ + | Floating IP AA | | Load Balancer AA UDP | + | | | | + | +-----------------+ | | | + | | Port Forwarding | | +----------->AA:portA => internal IP1:portX | + | | | | | | | + | | External PortA +-----+ +------->AA:portB => internal IP2:portX | + | | Fixed IP1 PortX | | | | | + | | Protocol: UDP | | | +---------------------------------+ + | +-----------------+ | | + | | | +---------------------------------+ + | +-----------------+ | | | Load Balancer AA TCP | + | | Port Forwarding | | | | | + | | | | | | | + | | External PortB +---------+ +--->AA:portC => internal IP3:portX | + | | Fixed IP2 PortX | | | | | + | | Protocol: UDP | | | +---------------------------------+ + | +-----------------+ | | + | | | + | +-----------------+ | | + | | Port Forwarding | | | + | | | | | +---------------------------------+ + | | External PortC | | | | Load Balancer BB TCP | + | | Fixed IP3 PortX +-------------+ | | + | | Protocol: TCP | | | | + | +-----------------+ | +---------->BB:portD => internal IP4:portX | + | | | | | + +---------------------+ | +---------------------------------+ + | + | +-------------------+ + | | Logical Router X1 | + +---------------------+ | | | + | Floating IP BB | | | Load Balancers: | + | | | | AA UDP, AA TCP | + | +-----------------+ | | +-------------------+ + | | Port Forwarding | | | + | | | | | 
+-------------------+
+ | | External PortD  | |    |         | Logical Router Z1 |
+ | | Fixed IP4 PortX +------+         |                   |
+ | | Protocol: TCP   | |              | Load Balancers:   |
+ | +-----------------+ |              |   BB TCP          |
+ +---------------------+              +-------------------+
 
 The OVN LB entries have names that include the id of the FIP and a protocol
 suffix. That protocol portion is needed because a single FIP can have multiple
@@ -73,7 +73,7 @@ UDP and TCP port forwarding entries while a given LB entry can either be one or
 the other protocol (not both). Based on that, the format used to specify an LB
 entry is:
 
-   .. code-block:: ini
+.. code-block:: ini
 
    pf-floatingip-<fip_id>-<protocol>
 
@@ -85,7 +85,7 @@ In order to differentiate a load balancer entry that was created by port
 forwarding vs load balancer entries maintained by ovn-octavia-provider, the
 external_ids field also has an owner value:
 
-   .. code-block:: python
+.. code-block:: python
 
    external_ids = {
        ovn_const.OVN_DEVICE_OWNER_EXT_ID_KEY: PORT_FORWARDING_PLUGIN,
@@ -97,7 +97,7 @@ external_ids field also has an owner value:
 The following registry (API) neutron events trigger the OVN backend to map
 port forwarding into LB:
 
-   .. code-block:: python
+.. code-block:: python
 
    @registry.receives(PORT_FORWARDING_PLUGIN, [events.AFTER_INIT])
    def register(self, resource, event, trigger, payload=None):
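As a rough illustration of the scheme the port_forwarding.rst hunks describe, the sketch below shows how several port forwardings on one FIP collapse into per-protocol load balancer entries, each holding `FIP:PORT => PRIVATE_IP:PRIV_PORT` VIP mappings. The helpers `pf_lb_name` and `build_lb_entries` are hypothetical, not the actual Neutron implementation; only the "pf-floatingip" name prefix with FIP id and protocol suffix, and the VIP mapping shape, come from the document. The FIP id and addresses are made up.

```python
# Hypothetical sketch (not the real Neutron code) of grouping port
# forwardings into per-protocol OVN load balancer entries.

def pf_lb_name(fip_id, protocol):
    # One LB entry per FIP+protocol. The protocol suffix is needed
    # because a FIP may carry both UDP and TCP forwardings, while a
    # single LB entry can hold only one protocol.
    return "pf-floatingip-{}-{}".format(fip_id, protocol.lower())

def build_lb_entries(fip_id, fip_ip, forwardings):
    # forwardings: (protocol, external_port, internal_ip, internal_port)
    # Each forwarding becomes one VIP mapping, FIP:PORT => IP:PORT,
    # inside the LB entry for its protocol.
    entries = {}
    for proto, ext_port, int_ip, int_port in forwardings:
        vips = entries.setdefault(pf_lb_name(fip_id, proto), {})
        vips["{}:{}".format(fip_ip, ext_port)] = "{}:{}".format(int_ip, int_port)
    return entries

entries = build_lb_entries(
    "f1a2",        # made-up FIP id
    "172.24.4.8",  # made-up FIP address
    [("udp", 2222, "10.0.0.10", 53),
     ("udp", 2223, "10.0.0.11", 53),
     ("tcp", 8080, "10.0.0.10", 80)],
)
# The two UDP forwardings share one "...-udp" entry, mirroring the
# "Load Balancer AA UDP" box in the diagram; the TCP one gets its own.
```

Here both UDP forwardings land as separate VIPs inside the same UDP entry, while the TCP forwarding goes to a distinct `...-tcp` entry, which is exactly why the protocol must be part of the LB name.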