doc: drop sphinxcontrib-nwdiag, sphinxcontrib-blockdiag usage

sphinxcontrib-nwdiag no longer appears to be maintained [1] and there
have been no releases in nearly 5 years. Statically generate the images
and include them that way instead. We can revert this change if the
maintainership issue resolves itself. sphinxcontrib-blockdiag has had
activity more recently [2], but even that is now nearly 3 years ago.
More importantly, we don't actually use it, so there's no reason to
keep it around.

[1] https://pypi.org/project/sphinxcontrib-nwdiag/#history
[2] https://pypi.org/project/sphinxcontrib-blockdiag/#history

Change-Id: Ic5244c792acd01f8aec5ff626e53303c1738aa69
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
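Because the PNGs are now committed statically, they must be regenerated by hand whenever a diagram changes. A minimal sketch of that regeneration step, assuming the `nwdiag` CLI (from the `nwdiag` package) is available and using a hypothetical `neutron-network-1.diag` source file whose body is the diagram removed from the docs below:

```shell
# Hypothetical workflow: keep the removed nwdiag source in a .diag file
# so the committed PNG can be re-rendered later if the topology changes.
cat > neutron-network-1.diag <<'EOF'
nwdiag {
   inet [ shape = cloud ];
   router;
   inet -- router;

   network hardware_network {
      address = "172.18.161.0/24"
      router [ address = "172.18.161.1" ];
      devstack-1 [ address = "172.18.161.6" ];
   }
}
EOF

# Render to PNG only if the nwdiag CLI happens to be installed.
if command -v nwdiag >/dev/null 2>&1; then
    nwdiag -T png -o neutron-network-1.png neutron-network-1.diag
fi
```

The `.diag` filename and its location are assumptions; the commit itself only adds the rendered PNGs.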
commit 6512f0140c
parent 03bc214525
@@ -4,8 +4,4 @@ Pygments
 docutils
 sphinx>=2.0.0,!=2.1.0 # BSD
 openstackdocstheme>=2.2.1 # Apache-2.0
-nwdiag
-blockdiag
-sphinxcontrib-blockdiag
-sphinxcontrib-nwdiag
 zuul-sphinx>=0.2.0
BIN  doc/source/assets/images/neutron-network-1.png  (new file, 10 KiB)
BIN  doc/source/assets/images/neutron-network-2.png  (new file, 11 KiB)
BIN  doc/source/assets/images/neutron-network-3.png  (new file, 13 KiB)
@@ -23,14 +23,14 @@

 # Add any Sphinx extension module names here, as strings. They can be extensions
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = [ 'sphinx.ext.autodoc',
-               'zuul_sphinx',
-               'openstackdocstheme',
-               'sphinxcontrib.blockdiag',
-               'sphinxcontrib.nwdiag' ]
+extensions = [
+    'sphinx.ext.autodoc',
+    'zuul_sphinx',
+    'openstackdocstheme',
+]

 # openstackdocstheme options
-openstackdocs_repo_name = 'openstack-dev/devstack'
+openstackdocs_repo_name = 'openstack/devstack'
 openstackdocs_pdf_link = True
 openstackdocs_bug_project = 'devstack'
 openstackdocs_bug_tag = ''
@@ -41,19 +41,8 @@ network and is on a shared subnet with other machines. The
 `local.conf` exhibited here assumes that 1500 is a reasonable MTU to
 use on that network.

-.. nwdiag::
-
-   nwdiag {
-      inet [ shape = cloud ];
-      router;
-      inet -- router;
-
-      network hardware_network {
-         address = "172.18.161.0/24"
-         router [ address = "172.18.161.1" ];
-         devstack-1 [ address = "172.18.161.6" ];
-      }
-   }
+.. image:: /assets/images/neutron-network-1.png
+   :alt: Network configuration for a single DevStack node


 DevStack Configuration
@@ -100,21 +89,8 @@ also want to do multinode testing and networking.
 Physical Network Setup
 ~~~~~~~~~~~~~~~~~~~~~~

-.. nwdiag::
-
-   nwdiag {
-      inet [ shape = cloud ];
-      router;
-      inet -- router;
-
-      network hardware_network {
-         address = "172.18.161.0/24"
-         router [ address = "172.18.161.1" ];
-         devstack-1 [ address = "172.18.161.6" ];
-         devstack-2 [ address = "172.18.161.7" ];
-      }
-   }
-
+.. image:: /assets/images/neutron-network-2.png
+   :alt: Network configuration for multiple DevStack nodes

 After DevStack installs and configures Neutron, traffic from guest VMs
 flows out of `devstack-2` (the compute node) and is encapsulated in a
@@ -222,8 +198,6 @@ connect OpenStack nodes (like `devstack-2`) together. This bridge is
 used so that project network traffic, using the VXLAN tunneling
 protocol, flows between each compute node where project instances run.

-
-
 DevStack Compute Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -268,30 +242,8 @@ to the neutron L3 service.
 Physical Network Setup
 ----------------------

-.. nwdiag::
-
-   nwdiag {
-      inet [ shape = cloud ];
-      router;
-      inet -- router;
-
-      network provider_net {
-         address = "203.0.113.0/24"
-         router [ address = "203.0.113.1" ];
-         controller;
-         compute1;
-         compute2;
-      }
-
-      network control_plane {
-         router [ address = "10.0.0.1" ]
-         address = "10.0.0.0/24"
-         controller [ address = "10.0.0.2" ]
-         compute1 [ address = "10.0.0.3" ]
-         compute2 [ address = "10.0.0.4" ]
-      }
-   }
-
+.. image:: /assets/images/neutron-network-3.png
+   :alt: Network configuration for provider networks

 On a compute node, the first interface, eth0 is used for the OpenStack
 management (API, message bus, etc) as well as for ssh for an