9f78bc667d
This commit added support for wait in the pods:
https://review.openstack.org/#/c/651380

However, when the ovs-dpdk vswitch type is enabled like this:
    system modify --vswitch_type ovs-dpdk
the wait causes armada to time out. This fix re-comments out the wait.

Note: this causes the armada logs to show:
    WARNING armada.handlers.wait [-] [chart=openvswitch]: "label_selector" not specified, waiting with no labels may cause unintended consequences.

This submission will get sanity to pass. A later submission by someone with ovs expertise can update the openvswitch.py helm code to add a meta_override to eliminate the warning logs.

Partial-Bug: 1824829
Change-Id: I1e08b2dd98d859d0b37612aba3de70d969653cda
Signed-off-by: Al Bailey <Al.Bailey@windriver.com>
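For context, the warning appears because the openvswitch chart document's "wait" section carries no labels. The sketch below shows the kind of wait stanza such a meta_override would need to produce in the armada/Chart/v1 document; the timeout and the release_group label value are assumptions for illustration, not the actual openvswitch chart settings.

# Sketch only: the label value below is an assumption and would have
# to match the labels actually applied to the openvswitch pods.
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: openvswitch
data:
  wait:
    timeout: 1800
    labels:
      release_group: osh-openstack-openvswitch

With labels set, armada waits only on the selected pods instead of waiting unlabeled, so the warning above no longer applies.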
The expected layout for this subdirectory is as follows:

kubernetes
|-- applications
|   `-- <application>
|       `-- <application>-helm RPM
|           `-- centos
|               `-- build_srpm.data
|               `-- <application>-helm.spec
|           `-- <application>-helm
|               `-- manifests
|                   `-- main-manifest.yaml
|                   `-- alt-manifest-1.yaml
|                   `-- ...
|                   `-- alt-manifest-N.yaml
|               `-- custom chart 1
|                   `-- Chart.yaml
|                   `-- ...
|               `-- ...
|               `-- custom chart N
|                   `-- Chart.yaml
|                   `-- ...
|-- helm-charts
|   `-- chart
|   `-- chart
`-- README

The idea is that all our custom helm charts that are common across applications would go under "helm-charts". Each chart would get a subdirectory.

Custom applications would generally consist of one or more armada manifests referencing multiple helm charts (both ours and upstream ones). The application is packaged as an RPM. These application RPMs are used to produce the build artifacts (helm tarballs + armada manifests) but are not installed on the system. These artifacts are extracted later for proper application packaging with additional required metadata (TBD). These applications would each get their own subdirectory under "applications". A hypothetical manifest skeleton is sketched below.
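As an illustration of how a manifest ties charts together, here is a hypothetical skeleton of a manifests/main-manifest.yaml; every name, namespace, and tarball location is a placeholder for illustration, not an actual StarlingX application.

---
# Placeholder chart document; the source location would point at a
# helm tarball produced by the application RPM build.
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: example-chart
data:
  chart_name: example-chart
  release: example-chart
  namespace: example
  source:
    type: tar
    location: http://helm-repo/example-chart-0.1.0.tgz
    subpath: example-chart
  upgrade:
    no_hooks: false
---
# Chart group referencing the chart document above by name.
schema: armada/ChartGroup/v1
metadata:
  schema: metadata/Document/v1
  name: example-chart-group
data:
  description: "Example chart group"
  sequenced: false
  chart_group:
    - example-chart
---
# Top-level manifest referencing the chart group by name.
schema: armada/Manifest/v1
metadata:
  schema: metadata/Document/v1
  name: main-manifest
data:
  release_prefix: example
  chart_groups:
    - example-chart-group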