Add a null override over the default anti-affinity rules for the vault
injector. The default rule allows only one vault injector pod at a
time. This is a problem because helm-override and application apply
try to schedule the new pod before the old pod is completely removed.
This change lets a new vault agent injector pod be scheduled without
issue.
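A minimal sketch of the override, assuming the upstream vault-helm key
layout (the exact key path for the injector affinity is an assumption):

```shell
# Hypothetical override file: a null value removes the chart's default
# podAntiAffinity rule for the injector, so the replacement pod can be
# scheduled while the old one is still terminating.
cat > injector-affinity-override.yaml <<'EOF'
injector:
  affinity: null
EOF
# It would then be applied through the platform helm API, for example:
#   system helm-override-update --values injector-affinity-override.yaml \
#       vault vault vault
grep -q 'affinity: null' injector-affinity-override.yaml && echo "override written"
```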
TEST PLAN:
- Test for AIO-SX
- Update helm-override so that vault-injector has a different image tag than default
- apply the new helm-override
- There should be no FailedScheduling error in the vault pods
- Sanity test for both AIO-SX and AIO-DX + 1 worker
Closes-bug: 2030901
Change-Id: I9814f502558ab1cbecad48cf37341639c964258f
Signed-off-by: Tae Park <tae.park@windriver.com>
Modify the vault-manager initialization logic so that it waits for the
number of active pods to equal the replica value only if the raft is
not yet initialized.
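The gating condition can be sketched in shell (names are hypothetical,
not the actual rendered init.sh code):

```shell
# Wait only while the raft is uninitialized AND fewer pods than the
# configured replica count are active; once the raft is initialized,
# vault-manager proceeds regardless of the pod count.
need_to_wait() {
  raft_initialized=$1   # "true" or "false"
  active_pods=$2
  replicas=$3
  [ "$raft_initialized" = "false" ] && [ "$active_pods" -lt "$replicas" ]
}

need_to_wait false 1 3 && echo "waiting for sva-vault statefulset pods"
need_to_wait true 1 3 || echo "raft initialized: not waiting"
```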
TEST PLAN:
- In a 2 controller, 1 worker setup,
- Upload and apply vault
- Lock the host that vault-manager is running on
- Vault manager should restart
- Within the logs, there should not be a repetition of " Waiting for sva-vault statefulset running pods..."
- Vault Sanity test in AIO-SX
- Bashate of rendered init.sh
Closes-bug: 2029375
Signed-off-by: Tae Park <tae.park@windriver.com>
Change-Id: I41990b87395a5d5364ef91c048f740d0f0675d6b
Change the setting under the vault overrides to disable the Vault web
UI.
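A sketch of the override, assuming the upstream vault-helm `ui.enabled`
value, which the chart renders as `ui = false` in the server config
map:

```shell
cat > ui-override.yaml <<'EOF'
# Disable the Vault web UI on port 8200.
ui:
  enabled: false
EOF
grep -q 'enabled: false' ui-override.yaml && echo "ui disabled in override"
```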
Test Plan:
PASS Port 8200 is unreachable from vault kubernetes pod
- kubectl port-forward --address=10.10.31.2 -n vault pod/sva-vault-0 23443:8200
PASS vault kubernetes pod settings show ui = false
- kubectl get configmaps -n vault sva-vault-config -o yaml
Story: 2010393
Task: 48381
Change-Id: Ib7915f3071c663b1375e80f04104f1f4fb872a1e
Signed-off-by: Tae Park <tae.park@windriver.com>
This commit adds support for core affinity labels for
vault. The label 'app.starlingx.io/component' tells
k8s whether to run the application pods on 'platform'
or 'application' cores.
The default value for the 'app.starlingx.io/component' label
is 'platform', but the label accepts the values
'application' and 'platform'. The override has to be
performed while vault is in the uploaded state, after
application remove or before the first apply. This
behavior is required to ensure that no vault pod is
restarted in an improper manner.
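A sketch of the override described above; the top-level key carrying
the pod labels is an assumption, only the label name and its values
come from this text:

```shell
cat > component-label-override.yaml <<'EOF'
labels:
  app.starlingx.io/component: application
EOF
# Apply while vault is in the uploaded state, e.g. after
# 'system application-remove vault' and before the next apply,
# so that no vault pod is restarted in an improper manner.
grep -q 'app.starlingx.io/component: application' component-label-override.yaml && echo "label override written"
```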
Test plan:
PASS: In an AIO-SX system upload and apply the vault app. When apply
is finished, run "kubectl -n vault describe po sva | grep
platform" and the output should be three instances of
"app.starlingx.io/component=platform", indicating that the
default configuration is applied for each pod.
PASS: In an AIO-SX, where the vault app is in the applied state, run
"system application-remove vault" and override
'app.starlingx.io/component' label with 'application' value by
helm api. After the override, apply vault and verify
'app.starlingx.io/component' label is 'application' on the
pods describe output, similar to the previous test.
PASS: In an AIO-SX, where the vault app is in the applied state, run
"system application-remove vault" and override the
'app.starlingx.io/component' label with any value other
than 'platform' or 'application'; after the apply, check that
the default value of 'platform' was used for the pod labels.
PASS: In a Standard configuration with one worker node, upload and
apply the vault app. When apply is finished, run 'kubectl -n
vault describe po sva | grep -b3 "app.starlingx.io/component"'
and check the output for the 'app.starlingx.io/component'
label is the default value of 'platform' for each pod, with
every vault server pod having the label.
PASS: In a Standard configuration with one worker node, remove vault
and override 'app.starlingx.io/component' label with any value,
valid or not, and after the override, apply vault. With vault
in the applied state, verify the replica count override is kept
and check the pods in a similar way to the previous test to
validate that the HA configuration is maintained. The number
of pods replicas should reflect the configuration.
Story: 2010612
Task: 48252
Change-Id: If729ab8bb8fecddf54824f5aa59326960b66942a
Signed-off-by: Alan Bandeira <Alan.PortelaBandeira@windriver.com>
It is observed that vault pods consistently show a readiness probe
warning when the application is applied or when a pod is recovering.
The probe runs "vault status", which returns failure while vault is
sealed. The probe failure is not impactful, but since there is a
certain delay before unseal completes, adjust initialDelaySeconds to
25 to account for the time required to unseal vault pods. With this
change a single recovering vault pod should usually no longer trigger
the readiness probe warning.
During testing it is observed that:
- Setting initialDelaySeconds to 15: a recovering pod shows the
readiness probe warning.
- Setting initialDelaySeconds to 18: a recovering pod does not show
the warning.
On application-apply, the first pod to be unsealed _may_ show the
readiness probe warning when initialDelaySeconds is 25. The other pods
are unsealed serially and will show the warning.
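The resulting override can be sketched as follows, assuming the
vault-helm server readiness probe value layout; only the value 25
comes from this text:

```shell
cat > readiness-override.yaml <<'EOF'
server:
  readinessProbe:
    # Delay the first "vault status" probe until unseal has had time
    # to complete, so a recovering pod does not log a probe warning.
    initialDelaySeconds: 25
EOF
grep -q 'initialDelaySeconds: 25' readiness-override.yaml && echo "probe delay set"
```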
Test Plan:
PASS Standard controller storage 2+2
PASS HA tests, log inspection
PASS Inspection of kubectl describe of pods with various values for
initialDelaySeconds
Story: 2010393
Task: 48237
Depends-On: https://review.opendev.org/c/starlingx/vault-armada-app/+/884553
Change-Id: I9ea6cca2b591c40bfe70737c0fb390b18b69f796
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
Options for vault-manager were introduced to delay unsealing of
recovering vault server pods until the active vault server pod would
start sending heartbeats to the recovering pod. The behavior of vault
server that prompted the change to vault-manager is no longer
observed with vault server version 1.13.1.
Remove the unsealWaitIntervals so that vault manager will unseal the
recovering server immediately.
Test Plan:
PASS HA tests, review pods logs, election status
PASS active server remains active when a pod recovers
PASS no evidence of election attempts in vault server logs
PASS tested also with statusCheckRate=.1 to minimize delay
(default 5s gives a random-ish delay of 0-5 seconds)
Story: 2010393
Task: 48236
Depends-On: https://review.opendev.org/c/starlingx/vault-armada-app/+/884553
Change-Id: Ifd73970658d6ef7a0e0ca5844b2db81d94bdde9f
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
Replace references of 0.19.0 with 0.24.1. Refresh the patches for
vault-manager and agent image reference. Update the image tags to match
new vault chart.
Test plan:
PASS AIO-sx and Standard 2+2
PASS vault aware and un-aware applications
PASS HA tests
PASS test image pulls from private registry with external network
restriction
Story: 2010393
Task: 48109
Change-Id: Ib6b4d0a6f7d3a54676563c59f60d93d129c81c1c
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
These have not been needed for a while and do not impact the build of
this application.
The Makefile remains as the necessary component of the build.
Test plan:
PASS compare chart files before/after to ensure no changes
PASS compare all of stx-vault-helm package before/after
Story: 2010393
Task: 47164
Change-Id: I97025ceee2875a6fc588d72436b55e7f5ac59062
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
This patch was part of the CentOS build, which was removed with
commit 20167fc54f
Whereas the patch was copied for debian here:
commit d96e143a34
Test Plan:
PASS: debian build unaffected (there is no CentOS build)
Story: 2010393
Task: 47232
Change-Id: If90017b58f6220bca82e554e2fb50bd655d240ec
Signed-off-by: Michel Thebeau <Michel.Thebeau@windriver.com>
Add yaml to the fluxcd manifest which is compatible with the platform's
image pull and service parameter registry override handling. The
platform will pull the image and populate registry.local, and the vault
injector agent will pull from registry.local.
Story: 2010393
Task: 47927
Test Plan:
PASS: sysinv.log shows that agentImage image is pulled when vault
server image is hardcoded differently
PASS: agent image pulls when public network is blocked
PASS: agent image pulls when it is different than vault server image
PASS: vault app test, including vault un-aware application
Change-Id: Idd1215744bb31881127a6be23cf570166c79fad8
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
The new image has updated packages for CVE fixes, no other changes.
Test Plan:
PASS - apply vault application (inspect vault-manager pod)
Story: 2010710
Task: 47905
Change-Id: I83848d12baf0558edc0a2e4cd9a964f781edec56
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
Updating the rsa ssh host key based on:
https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/
Note: In the future, StarlingX should have a zuul job and
secret setup for all repos so we do not need to do this
for every repo.
Needed to rename the secret, because zuul fails if like-named
secrets have different values in different branches of the same
repo.
Partial-Bug: #2015246
Change-Id: Ie65c51aabfa4b303b89634eb9e5c566669f5f5d9
Signed-off-by: Davlet Panech <davlet.panech@windriver.com>
Update the statefulset to use a prompt update strategy. The config map
is updated in previous commits, for which we want vault-manager to
restart.
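Assuming the strategy chosen is the standard RollingUpdate, the
statefulset fragment looks like this (standard k8s API fields):

```shell
cat > manager-updatestrategy.yaml <<'EOF'
spec:
  updateStrategy:
    # With RollingUpdate, pods are replaced automatically whenever the
    # pod template changes, so config map updates take effect promptly.
    type: RollingUpdate
EOF
grep -q 'type: RollingUpdate' manager-updatestrategy.yaml && echo "strategy set"
```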
Test Plan:
PASS - sw-patch upload/apply/install/remove
PASS - manual revert to original tarball (system application-update)
Story: 2010393
Task: 47731
Change-Id: Ib52d019170763d066c730d679067b91ed4d59bb5
Signed-off-by: Michel Thebeau <Michel.Thebeau@windriver.com>
Instruct the platform to automatically apply the application when
patching.
Test Plan:
PASS - sw-patch upload/apply/install/remove
PASS - manual revert to original tarball (system application-update)
Story: 2010393
Task: 47731
Change-Id: I478995b70bffd9c3568c9eb2a4861b9b90bd32da
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
This change causes vault-manager to not pause for long periods when a
configured vault server is not responsive.
Use curl --connect-timeout for queries to vault server /sys/health.
During HA recovery it is known that the server is non-responsive, so
vault-manager should not wait for the default connect timeout, which
is reported variously as 60s or 5m.
It is observed that vault-manager appears to hang for long periods
during HA recovery. Watching the $PVCDIR/pods.txt confirms that
vault-manager is inactive for minutes at a time. This changes the
default behavior to timeout within 2 seconds during the HA recovery
scenario.
In addition to not waiting, the vault-manager log will show the
'sealed' status as an empty string when the query times out.
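The effect of the bounded connect timeout can be demonstrated against
a non-routable address (10.255.255.1 is assumed unreachable; the
/sys/health path matches the queries described above):

```shell
# curl gives up within ~2 seconds instead of waiting for the system
# default connect timeout.
start=$(date +%s)
curl --connect-timeout 2 -s http://10.255.255.1:8200/v1/sys/health
rc=$?
elapsed=$(( $(date +%s) - start ))
echo "rc=$rc elapsed=${elapsed}s"   # non-zero rc, within a few seconds
```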
Test Plan:
PASS - vault ha 3 replicas
PASS - vault 1 replica
PASS - kubectl exec kill vault process
PASS - kubectl delete vault pod
PASS - short network downtime
PASS - long network downtime
PASS - rates including 1, 5
PASS - wait intervals including 0, 1, 3, 15
PASS - kubectl delete 2 vault pods
PASS - kubectl delete 3 (all) vault pods
Story: 2010393
Task: 47701
Change-Id: I4fd916033f6dd5210078126abb065393d25851cd
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
Run tox against the rendered init.sh from vault-init.yaml; fix most of
the reports except for some long lines from jsonpath templates.
Test Plan:
PASS - vault ha 3 replicas
PASS - vault 1 replica
PASS - kubectl exec kill vault process
PASS - kubectl delete vault pod
PASS - short network downtime
PASS - long network downtime
Story: 2010393
Task: 47700
Change-Id: I844c5de510e8a7a3724852d4e6500eec6c327aba
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
This change delays unsealing recovering vault servers for 15 seconds.
vault-manager automatically unseals vault servers in a perpetual loop
after initial configuration. This final loop is the HA recovery
procedure.
It is observed that vault-manager will unseal a recovering vault server
when the active server has not started to send heartbeats to the new
pod. The result is that the recovering server will timeout waiting for
heartbeats and start an election. Although the active and standby
server will reject the election, there being a leader already, the
recovering vault will increment 'term' and restart election until
heartbeats are received, or until it wins election.
Although the raft algorithm is resilient to this, the procedure is not
appropriate. It is better to unseal the vault server after the active
vault sends heartbeats to the new pod.
It is observed that the heartbeat rate drops promptly, from less than
1 second per heartbeat to ~10-12 seconds, for a failed vault server.
So it is reasonable for vault-manager to wait 12 seconds before
unsealing the recovering vault. This also assumes the vault-manager and
active vault server would receive updated pod information at about the
same time and the latest heartbeat was issued immediately prior to the
update.
The options are configurable in helm-overrides. The defaults, for
example:

  manager:
    statusCheckRate: 5
    unsealWaitIntervals: 3
statusCheckRate is the rate at which vault-manager will check pod
status, in seconds. unsealWaitIntervals is the number of intervals to
wait before unsealing the server.
Default is 5 s/interval * 3 intervals == 15 seconds
When unsealWaitIntervals is set to 0 there is no delay in unsealing
the recovering vault servers. This is equivalent to the previous
behaviour when statusCheckRate is also set to 5, the value that was
hard-coded before this change.
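The delay arithmetic from the defaults above:

```shell
# Delay before unsealing a recovering server = check rate * intervals.
statusCheckRate=5        # seconds per pod-status check (default)
unsealWaitIntervals=3    # intervals to wait before unsealing (default)
echo "$(( statusCheckRate * unsealWaitIntervals )) seconds"   # -> 15 seconds
```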
Test Plan:
PASS - vault ha 3 replicas
PASS - vault 1 replica
PASS - kubectl exec kill vault process
PASS - kubectl delete vault pod
PASS - short network downtime
PASS - long network downtime
PASS - rates including 1, 5
PASS - wait intervals including 0, 1, 3, 15
PASS - not reproduced with default values (many attempts)
Story: 2010393
Task: 47701
Change-Id: I763f6becee3e1a17e838a4f8ca59b2b0d33ba639
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
Add a chart override for the rate at which vault-manager checks vault
pod status. Leave the default at previously hard-coded 5s.
Move all of the hard-coded sleep values to variables so they are more
visible.
Test Plan:
PASS - vault ha 3 replicas
PASS - vault 1 replica
PASS - kubectl exec kill vault process
PASS - kubectl delete vault pod
PASS - short network downtime
PASS - long network downtime
PASS - rates including 1, 5
Story: 2010393
Task: 47701
Change-Id: I1de647760f6fe1806b0b1450c0e8f1117ad725ea
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
Track the seal status of vault server pods so that logs can be omitted
when there is no change.
The converted loop no longer ignores pods without IP addresses. Add an
explicit test for empty IP address field coming from getVaultPods().
Test Plan:
PASS - vault ha 3 replicas
PASS - vault 1 replica
PASS - kubectl exec kill vault process
PASS - kubectl delete vault pod
PASS - short network downtime
PASS - long network downtime
Story: 2010393
Task: 47700
Change-Id: Ic75c397046a3e183faf5ecc5b37dc8abefc7af64
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
Enhance debugging with dated logs for the vault-manager pod. This
allows correlating its logs with those of other pods.
Test Plan:
PASS - vault ha 3 replicas
PASS - vault 1 replica
PASS - kubectl exec kill vault process
PASS - kubectl delete vault pod
PASS - short network downtime
PASS - long network downtime
Story: 2010393
Task: 47700
Change-Id: I4a877b8c0fc8ddc2626aaccc15196c30b6fb4b82
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
This was probably supposed to be '&&' for AND logic and not an intention
to background the grep of pods.txt. The symptom of this mistake is a
somewhat random log output - sometimes the output is printed before and
sometimes after the "Sealed status is".
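The difference can be reproduced with stand-in commands (function
names are contrived, not the actual init.sh code):

```shell
match_pods() { echo "pods.txt match"; }
log_sealed() { echo "Sealed status is X"; }

# Buggy form: '&' backgrounds the first command, so the two lines race
# and may print in either order:
#   match_pods & log_sealed
# Fixed form: '&&' runs them strictly in sequence, the second only if
# the first succeeds:
match_pods && log_sealed
```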
Test Plan:
PASS - vault ha 3 replicas
PASS - vault 1 replica
PASS - kubectl exec kill vault process
PASS - kubectl delete vault pod
PASS - short network downtime
PASS - long network downtime
Story: 2010393
Task: 47700
Change-Id: Ia4358ca7ed7ca7af3b116934c4491a5887871853
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
Currently, the debian build system produces a debian package version
with the format: "1.0-1.stx.<revision>"
The rules file then parses this deb pkg version at build time to
produce the app tarball version, which currently always comes out as
"1.0-1" [1]. This commit changes the app tarball version calculation
so that the resulting tarball version will be "1.0-<revision>" [2].
This correction is necessary because the application framework
cannot update an app between instances with the same version.
This commit is part of a set of commits updating the app tarball
version calculation to all apps based off of [3].
[1]: /usr/local/share/applications/helm/<APPNAME>-1.0-1.tgz
[2]: /usr/local/share/applications/helm/<APPNAME>-1.0-<N>.tgz
[3]: https://review.opendev.org/c/starlingx/cert-manager-armada-app/+/872628
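The rewrite can be sketched with shell parameter expansion; the input
format comes from the text above, while revision 7 is a made-up
example:

```shell
deb_version="1.0-1.stx.7"        # format: 1.0-1.stx.<revision>
revision="${deb_version##*.}"    # strip through the last '.' -> "7"
base="${deb_version%%-*}"        # strip from the first '-'   -> "1.0"
echo "${base}-${revision}"       # -> 1.0-7
```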
Test Plan:
pass - build-pkg
pass - tarball version updated
Story: 2010542
Task: 47533
Signed-off-by: Leonardo Fagundes Luz Serrano <Leonardo.FagundesLuzSerrano@windriver.com>
Change-Id: Ifcfeab7e3308fa5776ec45b25cf437d0a87f3019
The Debian packaging meta_data file has been changed to reflect all
the latest git commits under the directory marked as usable, and to
improve package versioning by using the first commit as the starting
point for building packages.
This ensures that any new code submissions under those
directories will increment the versions.
Test Plan:
PASS: Verify package versions are updated as expected.
PASS: build-pkgs -c -p vault-helm,python3-k8sapp-vault,stx-vault-helm
Story: 2010550
Task: 47501
Signed-off-by: Manoel Benedito Neto <Manoel.BeneditoNeto@windriver.com>
Change-Id: I999b1d96146fb1e2ac931641620621f445cbda71
Using PKG_GITREVCOUNT combined with the plugin directory
being renamed resulted in the wheels version tracking being
reset to 1.0.1, which would result in future builds repeating
version numbers.
This commit fixes that by making it so that the version
tracks the repo, not individual directories.
As a bonus, plugin and tarball will have the same version number.
Test Plan:
PASS - Built pkgs have the fixed version
Story: 2010542
Task: 47195
Signed-off-by: Leonardo Fagundes Luz Serrano <Leonardo.FagundesLuzSerrano@windriver.com>
Change-Id: Id9f051cc3f1dc6cc53baa541c236c9e00b8b2c1f
Restore the image build for stx-vault-manager. Reference the removal of
CentOS based image: commit 20167fc5. It is effectively the same but for
Debian, a newer kubectl version, and the 'slim' image size.
Vault manager uses kubectl and miscellaneous tools including: curl, jq,
awk, bash, grep, sed and coreutils.
Test Plan:
PASS - Standard, dedicated storage 2+2+2
PASS - apply vault application with helm override to use the new image
PASS - observe pod logs, vault-manager operation
PASS - test vault HA, observe vault-manager operation
PASS - configure vault, configure secret, read secret
PASS - apply helloworld app, confirm read secret
Story: 2010393
Task: 46869
Change-Id: I9ca31d2f1047e704f961936ba8f04a7abdab56f5
Signed-off-by: Michel Thebeau <Michel.Thebeau@windriver.com>
Fixed versioning and adjusted some build files
to bring them as close to a standard as possible.
- Removed centos files
- Added version tracking via PKG_GITREVCOUNT
- Fixed mismatch in plugin name, set to python3-k8sapp-<app>
- Standardized plugin debian files (rules, *.install)
- Plugin wheels saved to /plugin instead of /plugin/<app>
Test Plan:
PASS - Build-pkgs -a
PASS - Build-image
PASS - Install, bootstrap, unlock
PASS - app tarball contains wheel file
PASS - wheel versioning updated properly
PASS - app applied (SX vm)
Signed-off-by: Leonardo Fagundes Luz Serrano <Leonardo.FagundesLuzSerrano@windriver.com>
Change-Id: I9f5e8b1afc65de3b7d7ee05744ad39f5acc834de
This change will allow this repo to pass zuul now
that this has merged:
https://review.opendev.org/c/zuul/zuul-jobs/+/866943
Tox 4 deprecated whitelist_externals.
Replace whitelist_externals with allowlist_externals
Partial-Bug: #2000399
Signed-off-by: Al Bailey <al.bailey@windriver.com>
Change-Id: I017d1b9c379ce60fa75b13609465f4256efc15df
Set reconciliation interval for all flux helm resources to 1m
to allow it to manage resources by itself in a reasonable time
interval.
Test Plan (tested as part of [1]):
PASS: bootstrap
PASS: unlocked enabled available
PASS: apps applied
PASS: inspect flux pod logs for errors
PASS: re-test known trigger for 1996747 and 1995748
PASS: re-test known trigger 1997368
[1] https://review.opendev.org/c/starlingx/config/+/866862
Partial-Bug: 1999032
Signed-off-by: Leonardo Fagundes Luz Serrano <Leonardo.FagundesLuzSerrano@windriver.com>
Change-Id: I1c95ca1d90e38a9f6c672e4dc28823b964cad334
Move the packages of "vault-armada-app" from stx-std.lst
to debian_iso_image.inc.
A subsequent commit will be sent to clean up stx-std.lst.
Test Plan:
Pass: build-pkgs -c -a
Pass: build-image
Pass: boot
Story: 2008862
Task: 46938
Signed-off-by: Yue Tao <yue.tao@windriver.com>
Change-Id: I2c979a18c6aeadec696b828b8caf59fd97977190
Disables the packages for CentOS build, as well as the vault manager
image. Conversion of docker image to Debian will happen at a later
date (task 46869).
Test Plan:
PASS: centos build
PASS: debian build
Story: 2010393
Task: 46868
Change-Id: I827352122460976b07b436fb022741f7d89e5548
Signed-off-by: Michel Thebeau <michel.thebeau@windriver.com>
The ubuntu-jammy nodeset gets selected by default
and is causing problems during setup.
Collecting cffi>=1.1
Failed to build cffi
ubuntu-bionic works if using stein upper-constraints.
ubuntu-focal works if using the debian upper-constraints.
Need to update a file that is monitored by zuul
in order to trigger the failing zuul jobs.
Partial-Bug: 1994843
Signed-off-by: Al Bailey <al.bailey@windriver.com>
Change-Id: Idd435815d995c8ed7b9d538d7abb7bd79066af20
Upstream has deprecated 'node-role.kubernetes.io/master'
to use 'node-role.kubernetes.io/control-plane' in k8s 1.24.
Platform and applications need to be updated to use 'control-plane'
with nodeSelector/Tolerations so we may upgrade from 'master'.
This updates pod nodeSelector to use
'node-role.kubernetes.io/control-plane' instead of
'node-role.kubernetes.io/master'.
This updates pod Tolerations to support both:
- 'node-role.kubernetes.io/master'
- 'node-role.kubernetes.io/control-plane'
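The resulting pod spec fields can be sketched as follows (standard k8s
scheduling fields; tolerating both taints lets the pods schedule on
either label generation):

```shell
cat > control-plane-scheduling.yaml <<'EOF'
nodeSelector:
  node-role.kubernetes.io/control-plane: ""
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
EOF
grep -c 'node-role.kubernetes.io' control-plane-scheduling.yaml
```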
Note: The vault app stopped applying properly when stx moved to k8s
1.22/1.23 due to the use of deprecated APIs:
https://github.com/hashicorp/vault-helm/blob/v0.6.0/templates/injector-mutating-webhook.yaml#L2
https://github.com/hashicorp/vault-helm/blob/v0.6.0/templates/server-clusterrolebinding.yaml#L4
The changes will still be merged to ensure that the files are
updated when vault is updated to a new version which is compatible
with k8s 1.24+
Story: 2010301
Task: 46667
Signed-off-by: Saba Touheed Mujawar <sabatouheed.mujawar@windriver.com>
Change-Id: I01e031a94755be01cd3e5d942e96d1b42c977329
Removal of armada app package causes build-iso step to fail
due to package still being listed in centos_iso_image.inc
TEST PLAN:
PASS: CentOS build-iso success
Closes-bug: 1987535
Signed-off-by: Lucas Cavalcante <lucasmedeiros.cavalcante@windriver.com>
Change-Id: I01fb53d119912eb5625d16743497e9a131b207bf
As part of Armada deprecation we need to remove all Armada application
builds for all applications that have been migrated to FluxCD.
This patch removes the armada app build from centos and debian.
TEST PLAN:
PASS: Build centos
PASS: Build debian
PASS: deb package does not have armada tarball
PASS: rpm package does not have armada tarball
PASS: FluxCD tarball is unchanged
Story: 2009138
Task: 46047
Signed-off-by: Lucas Cavalcante <lucasmedeiros.cavalcante@windriver.com>
Change-Id: I7ca4347c0918e6e46330b7c2e13ed1e5f8cf7d57
The pip team is planning to deprecate the
'--use-deprecated legacy-resolver' install option.
While a firm date has yet to be set, we need to be
ready for the change.
Test Plan*:
PASS: Verified that tox runs without issues
PASS: Generate Vault package.
PASS: Upload and apply Vault package and check if pod
is running.
Note(*): Tested on kubernetes 1.21.8
Closes-bug: 1966076
Signed-off-by: Karla Felix <karla.karolinenogueirafelix@windriver.com>
Change-Id: I582378ff198430c182fd9197976e9041f2910680
Add the fluxcd app for vault to the debian build.
Test Cases:
PASS: Check deb install and application upload + apply on debian
PASS: Check application remove and delete on debian
PASS: Simple functional test by creating and retrieving a secret
Logs: https://paste.opendev.org/show/bhJ5eN9nyWlsqN7VMxtL/
Story: 2009138
Task: 44486
Signed-off-by: Rei Oliveira <Reinildes.JoseMateusOliveira@windriver.com>
Change-Id: I814219a52b44c082ec337db60fb62c31eb68d269
This commit switches vault to use the fluxcd app by default and also
preserves the armada app on the build for future tests.
TEST PLAN
PASS Build iso and verify apps
PASS Upload
PASS Apply
PASS Verify resources
PASS Remove
PASS Delete
Logs: https://paste.opendev.org/show/bafCSCzSQ7XH01MUEQcw/
Story: 2009138
Task: 45346
Signed-off-by: Thiago Brito <thiago.brito@windriver.com>
Change-Id: If898fd914ef85c6ca049c1eb1e0266cd6e85d58b