Comment out the dh_auto_install step that installs the generated
Distributed Cloud application tarball under
/usr/local/share/applications/helm. The tarball is still built,
but it is not shipped in the .deb package.
Additionally, the following changes were made:
- Update the nginx image to a fixed tag.
- Update README with the DC image build and pull/push instructions for
the nginx image.
- Add a service account and default-registry-key secret for the
dc-vault-nginx pod.
Test Plan:
PASS: Downloader and build-pkgs run successfully.
PASS: Verify that the distributed cloud tarball is not present in
/usr/local/share/applications/helm after the installation.
Closes-bug: 2137062
Change-Id: Ia6c3a026e7445c003b071e7a477f592d319415d3
Signed-off-by: Enzo Candotti <Enzo.Candotti@windriver.com>
app-distributed-cloud (Prototype)
This tutorial provides a step-by-step guide on containerizing DC Services using the app-distributed-cloud prototype.
Note: Not all dcmanager operations are fully tested or operational.
This app is not part of the build, because its installation into /usr/local/share/applications/helm is commented out. To ship the tarball in the applications directory, uncomment the relevant lines in stx-distributed-cloud-helm/debian/all/deb_folder/rules and stx-distributed-cloud-helm/debian/all/deb_folder/stx-distributed-cloud-helm.install.
Distributed Cloud Application Deployment (development)
Build Docker image
# Reference: https://docs.starlingx.io/developer_resources/build_docker_image.html#image-directives-files
# build PY3 tarball
cd $MY_REPO/build-tools/build-wheels
./build-wheel-tarball.sh --keep-image --cache
# build the container
docker pull starlingx/stx-debian:master-stable-latest
cd $MY_REPO/build-tools/build-docker-images
./build-stx-images.sh \
--base starlingx/stx-debian:master-stable-latest \
--no-pull-base \
--wheels $MY_WORKSPACE/std/build-wheels-debian-stable/stx-debian-stable-wheels.tar \
--only stx-distributed-cloud \
--cache
# Publish the image to a Docker registry, or export it to a tarball. Example:
DOCKER_IMAGE=registry.local:9001/docker.io/starlingx/stx-distributed-cloud:master-debian-stable-latest
docker image tag <IMAGE_ID> ${DOCKER_IMAGE}
docker save -o dc-image.tar ${DOCKER_IMAGE}
On the target controller:
DOCKER_IMAGE=registry.local:9001/docker.io/starlingx/stx-distributed-cloud:master-debian-stable-latest
sudo docker login registry.local:9001
sudo docker load -i dc-image.tar
sudo docker image push ${DOCKER_IMAGE}
# Pull the nginx image and push it to the local registry
sudo docker pull nginx:1.27-alpine
sudo docker tag nginx:1.27-alpine registry.local:9001/docker.io/nginx:1.27-alpine
sudo docker push registry.local:9001/docker.io/nginx:1.27-alpine
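The pull/tag/push step above generalizes to any upstream image. A minimal sketch, assuming the registry.local:9001/docker.io prefix used throughout this guide (the helper names are made up for illustration):

```shell
# Hypothetical helper: map an upstream image name to its local-registry name.
# registry.local:9001 is the local registry used throughout this guide.
local_name() {
  echo "registry.local:9001/docker.io/${1}"
}

# Hypothetical helper: mirror an image into the local registry (pull, retag, push).
mirror_image() {
  img="$1"
  sudo docker pull "${img}"
  sudo docker tag "${img}" "$(local_name "${img}")"
  sudo docker push "$(local_name "${img}")"
}

# Example of the name mapping:
local_name nginx:1.27-alpine
```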
# Upload the prototype
system application-upload /usr/local/share/applications/helm/distributed-cloud-*-0.tgz
Disable Service Management
Disable the distributed-cloud services (dcmanager, dcorch, dcdbsync, dcagent) managed by SM on the platform:
source /etc/platform/openrc
sudo sm-unmanage service dcmanager-manager
sudo sm-unmanage service dcmanager-api
sudo sm-unmanage service dcmanager-audit
sudo sm-unmanage service dcmanager-audit-worker
sudo sm-unmanage service dcmanager-orchestrator
sudo sm-unmanage service dcmanager-state
sudo sm-unmanage service dcorch-engine
sudo sm-unmanage service dcorch-engine-worker
sudo sm-unmanage service dcorch-sysinv-api-proxy
sudo sm-unmanage service dcorch-usm-api-proxy
sudo sm-unmanage service dcorch-identity-api-proxy
sudo sm-unmanage service dcdbsync-api
sudo sm-unmanage service dcagent-api
sudo pkill -f "/bin/dcmanager"
sudo pkill -f "/bin/dcorch"
sudo pkill -f "/bin/dcdbsync"
sudo pkill -f "/bin/dcagent"
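The per-service commands above can be collapsed into two loops. This sketch only echoes each command so it can be reviewed first; drop the echo to execute on the controller:

```shell
# Service names taken from the sm-unmanage list above.
dc_services="dcmanager-manager dcmanager-api dcmanager-audit dcmanager-audit-worker
dcmanager-orchestrator dcmanager-state dcorch-engine dcorch-engine-worker
dcorch-sysinv-api-proxy dcorch-usm-api-proxy dcorch-identity-api-proxy
dcdbsync-api dcagent-api"

for svc in ${dc_services}; do
  echo sudo sm-unmanage service "${svc}"   # drop "echo" to execute
done

for proc in dcmanager dcorch dcdbsync dcagent; do
  echo sudo pkill -f "/bin/${proc}"        # drop "echo" to execute
done
```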
Platform Setup
system host-label-assign controller-0 starlingx.io/distributed-cloud=enabled
system host-label-assign controller-1 starlingx.io/distributed-cloud=enabled
Note: If you have issues downloading the nginx image for dc-vault-nginx, assign the distributed-cloud label only to controller-0.
Create the namespace and root-ca secret
# Create distributed-cloud namespace
kubectl create namespace distributed-cloud
# Create the default-registry-key secret (when using registry.local:9001)
kubectl create secret docker-registry default-registry-key \
--docker-server=registry.local:9001 \
--docker-username=admin \
--docker-password=${OS_PASSWORD} \
--namespace=distributed-cloud
# Create ca-cert secret to allow SSL
sudo cp /etc/ssl/certs/ca-certificates.crt /home/sysadmin
sudo chown sysadmin:sys_protected /home/sysadmin/ca-certificates.crt
kubectl -n distributed-cloud create secret generic root-ca --from-file=ca.crt=/home/sysadmin/ca-certificates.crt
# Set Password Variables
ADMIN_KS_PASSWORD=$(keyring get CGCS admin)
RABBITMQ_PASSWORD=$(keyring get amqp rabbit)
DCMANAGER_DB_PASSWORD=$(keyring get dcmanager database)
DCMANAGER_KS_PASSWORD=$(keyring get dcmanager services)
DCORCH_DB_PASSWORD=$(keyring get dcorch database)
DCORCH_KS_PASSWORD=$(keyring get dcorch services)
DCDBSYNC_KS_PASSWORD=$(keyring get dcdbsync services)
KEYSTONE_DB_PASSWORD=$(keyring get keystone database)
DCAGENT_KS_PASSWORD=$(keyring get dcagent services)
DOCKER_IMAGE=registry.local:9001/docker.io/starlingx/stx-distributed-cloud:master-debian-stable-latest
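A failed keyring lookup can leave one of the variables above empty, which would silently produce broken overrides. A small guard sketch (the require_vars helper is made up for illustration):

```shell
# Hypothetical guard: report any empty variables before building the overrides.
require_vars() {
  missing=0
  for name in "$@"; do
    val=$(eval "printf '%s' \"\${${name}}\"")
    if [ -z "${val}" ]; then
      echo "ERROR: ${name} is empty" >&2
      missing=1
    fi
  done
  return "${missing}"
}

# Example with a dummy value; on the controller, pass all the variable names above:
ADMIN_KS_PASSWORD=example
require_vars ADMIN_KS_PASSWORD && echo "all set"
```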
# Create dcmanager and dcorch overrides
cat <<EOF > dcmanager.yaml
images:
  tags:
    dcmanager: ${DOCKER_IMAGE}
    ks_user: ${DOCKER_IMAGE}
    ks_service: ${DOCKER_IMAGE}
    ks_endpoints: ${DOCKER_IMAGE}
    dcmanager_db_sync: ${DOCKER_IMAGE}
    db_init: ${DOCKER_IMAGE}
    db_drop: ${DOCKER_IMAGE}
  pullPolicy: Always
pod:
  image_pull_secrets:
    default:
      - name: default-registry-key
  tolerations:
    dcmanager:
      enabled: true
conf:
  dcmanager:
    DEFAULT:
      log_config_append: /etc/dcmanager/logging.conf
      transport_url: rabbit://guest:${RABBITMQ_PASSWORD}@controller.internal:5672
      auth_strategy: keystone
      playbook_timeout: 3600
      use_usm: False
      workers: 1
      orch_workers: 1
      state_workers: 1
      audit_workers: 1
      audit_worker_workers: 1
    cache:
      auth_uri: http://controller.internal:5000/v3
      admin_tenant: admin
      admin_username: admin
      admin_password: ${ADMIN_KS_PASSWORD}
    endpoint_cache:
      auth_uri: http://controller.internal:5000/v3
      auth_plugin: password
      username: dcmanager
      password: ${DCMANAGER_KS_PASSWORD}
      project_name: services
      user_domain_name: Default
      project_domain_name: Default
      http_connect_timeout: 15
    database:
      connection_recycle_time: 3600
      max_pool_size: 105
      max_overflow: 100
    keystone_authtoken:
      auth_version: v3
      auth_type: password
  ansible:
    defaults:
      remote_tmp: /tmp/.ansible-${USER}/tmp
      log_path: ~/ansible.log
dependencies:
  static:
    api:
      jobs:
        - dcmanager-ks-user
        - dcmanager-ks-service
        - dcmanager-ks-endpoints
    ks_endpoints:
      jobs:
        - dcmanager-ks-user
        - dcmanager-ks-service
endpoints:
  cluster_domain_suffix: cluster.local
  oslo_db:
    auth:
      admin:
        username: admin-dcmanager
        password: ${DCMANAGER_DB_PASSWORD}
      dcmanager:
        username: admin-dcmanager
        password: ${DCMANAGER_DB_PASSWORD}
    hosts:
      default: postgresql
    host_fqdn_override:
      default: controller.internal
    port:
      postgresql:
        default: 5432
    path: /dcmanager
    scheme: postgresql+psycopg2
  oslo_messaging:
    auth:
      admin:
        username: guest
        password: ${RABBITMQ_PASSWORD}
      dcmanager:
        username: guest
        password: ${RABBITMQ_PASSWORD}
    hosts:
      default: rabbitmq
    host_fqdn_override:
      default: controller.internal
    path: /
    scheme: rabbit
    port:
      amqp:
        default: 5672
      http:
        default: 15672
  identity:
    name: keystone
    auth:
      admin:
        username: admin
        password: ${ADMIN_KS_PASSWORD}
        region_name: ${OS_REGION_NAME}
        project_name: admin
        user_domain_name: Default
        project_domain_name: Default
      dcmanager:
        role: admin
        username: dcmanager
        password: ${DCMANAGER_KS_PASSWORD}
        region_name: ${OS_REGION_NAME}
        project_name: services
        user_domain_name: Default
        project_domain_name: Default
    hosts:
      default: keystone-api
      public: keystone
    host_fqdn_override:
      default: controller.internal
    path:
      default: /v3
    scheme:
      default: http
    port:
      api:
        default: 80
        internal: 5000
  dcmanager:
    name: dcmanager
    hosts:
      default: dcmanager-api
      public: dcmanager
    host_fqdn_override:
      default: null
    path:
      default: /v1.0
    scheme:
      default: 'http'
    port:
      api:
        default: 8119
        public: 80
EOF
cat <<EOF > dcorch.yaml
images:
  tags:
    dcorch: ${DOCKER_IMAGE}
    ks_user: ${DOCKER_IMAGE}
    ks_service: ${DOCKER_IMAGE}
    ks_endpoints: ${DOCKER_IMAGE}
    db_init: ${DOCKER_IMAGE}
    db_drop: ${DOCKER_IMAGE}
  pullPolicy: Always
pod:
  image_pull_secrets:
    default:
      - name: default-registry-key
  tolerations:
    dcorch:
      enabled: true
  replicas:
    dcorch_engine_worker: 1
    dcorch_sysinv_api_proxy: 1
    keystone_api_proxy: 1
    dcorch_patch_api_proxy: 1
    dcorch_usm_api_proxy: 1
conf:
  dcorch:
    DEFAULT:
      log_config_append: /etc/dcorch/logging.conf
      transport_url: rabbit://guest:${RABBITMQ_PASSWORD}@controller.internal:5672
      auth_strategy: keystone
      playbook_timeout: 3600
      use_usm: False
    endpoint_cache:
      password: ${DCMANAGER_KS_PASSWORD}
    database:
      connection_recycle_time: 3600
      max_pool_size: 105
      max_overflow: 100
    keystone_authtoken:
      auth_version: v3
      auth_type: password
dependencies:
  static:
    api:
      jobs:
        - dcorch-ks-user
        - dcorch-ks-service
    ks_endpoints:
      jobs:
        - dcorch-ks-user
        - dcorch-ks-service
endpoints:
  cluster_domain_suffix: cluster.local
  oslo_db:
    auth:
      admin:
        username: admin-dcorch
        password: ${DCORCH_DB_PASSWORD}
      dcorch:
        username: admin-dcorch
        password: ${DCORCH_DB_PASSWORD}
      dcmanager:
        username: admin-dcmanager
        password: ${DCMANAGER_DB_PASSWORD}
    hosts:
      default: postgresql
    host_fqdn_override:
      default: controller.internal
    port:
      postgresql:
        default: 5432
    path: /dcorch
    scheme: postgresql+psycopg2
  oslo_messaging:
    auth:
      admin:
        username: guest
        password: ${RABBITMQ_PASSWORD}
      dcmanager:
        username: guest
        password: ${RABBITMQ_PASSWORD}
    hosts:
      default: rabbitmq
    host_fqdn_override:
      default: controller.internal
    path: /
    scheme: rabbit
    port:
      amqp:
        default: 5672
      http:
        default: 15672
  identity:
    name: keystone
    auth:
      admin:
        username: admin
        password: ${ADMIN_KS_PASSWORD}
        region_name: ${OS_REGION_NAME}
        project_name: admin
        user_domain_name: Default
        project_domain_name: Default
      dcorch:
        role: admin
        username: dcorch
        password: ${DCORCH_KS_PASSWORD}
        region_name: ${OS_REGION_NAME}
        project_name: services
        user_domain_name: Default
        project_domain_name: Default
    hosts:
      default: keystone-api
      public: keystone
    host_fqdn_override:
      default: controller.internal
    path:
      default: /v3
    scheme:
      default: http
    port:
      api:
        default: 80
        internal: 5000
  dcorch:
    name: dcorch
    hosts:
      default: dcorch-api
      public: dcorch
    host_fqdn_override:
      default: null
    path:
      default: /v1.0
    scheme:
      default: 'http'
    port:
      api:
        default: 8118
        public: 80
EOF
cat <<EOF > dcdbsync.yaml
images:
  tags:
    dcdbsync: ${DOCKER_IMAGE}
    ks_user: ${DOCKER_IMAGE}
    ks_service: ${DOCKER_IMAGE}
    ks_endpoints: ${DOCKER_IMAGE}
conf:
  dcdbsync:
    keystone_authtoken:
      region_name: ${OS_REGION_NAME}
      password: ${DCDBSYNC_KS_PASSWORD}
    endpoint_cache:
      region_name: ${OS_REGION_NAME}
      password: ${DCDBSYNC_KS_PASSWORD}
endpoints:
  cluster_domain_suffix: cluster.local
  sql_alchemy:
    auth:
      keystone:
        password: ${KEYSTONE_DB_PASSWORD}
  identity:
    name: keystone
    auth:
      admin:
        username: admin
        password: ${ADMIN_KS_PASSWORD}
        region_name: ${OS_REGION_NAME}
        project_name: admin
        user_domain_name: Default
        project_domain_name: Default
      dcdbsync:
        role: admin
        username: dcdbsync
        password: ${DCDBSYNC_KS_PASSWORD}
        region_name: ${OS_REGION_NAME}
        project_name: services
        user_domain_name: Default
        project_domain_name: Default
    hosts:
      default: keystone-api
      public: keystone
    host_fqdn_override:
      default: controller.internal
    path:
      default: /v3
    scheme:
      default: http
    port:
      api:
        default: 80
        internal: 5000
EOF
cat <<EOF > dcagent.yaml
images:
  tags:
    dcagent: ${DOCKER_IMAGE}
    ks_user: ${DOCKER_IMAGE}
    ks_service: ${DOCKER_IMAGE}
    ks_endpoints: ${DOCKER_IMAGE}
  pullPolicy: Always
pod:
  image_pull_secrets:
    default:
      - name: default-registry-key
  tolerations:
    dcagent:
      enabled: true
conf:
  dcagent:
    DEFAULT:
      log_config_append: /etc/dcagent/logging.conf
      auth_strategy: keystone
      workers: 1
    keystone_authtoken:
      auth_uri: http://controller.internal:5000
      auth_url: http://controller.internal:5000
      auth_type: password
      region_name: ${OS_REGION_NAME}
      username: dcagent
      password: ${DCAGENT_KS_PASSWORD}
      project_name: services
      user_domain_name: Default
      project_domain_name: Default
    endpoint_cache:
      auth_uri: http://controller.internal:5000/v3
      auth_plugin: password
      region_name: ${OS_REGION_NAME}
      username: dcagent
      password: ${DCAGENT_KS_PASSWORD}
      user_domain_name: Default
      project_name: services
      project_domain_name: Default
      http_connect_timeout: 15
dependencies:
  static:
    api:
      jobs:
        - dcagent-ks-user
        - dcagent-ks-service
        - dcagent-ks-endpoints
    ks_endpoints:
      jobs:
        - dcagent-ks-user
        - dcagent-ks-service
endpoints:
  cluster_domain_suffix: cluster.local
  identity:
    name: keystone
    auth:
      admin:
        username: admin
        password: ${ADMIN_KS_PASSWORD}
        region_name: ${OS_REGION_NAME}
        project_name: admin
        user_domain_name: Default
        project_domain_name: Default
      dcagent:
        role: admin
        username: dcagent
        password: ${DCAGENT_KS_PASSWORD}
        region_name: ${OS_REGION_NAME}
        project_name: services
        user_domain_name: Default
        project_domain_name: Default
    hosts:
      default: keystone-api
      public: keystone
    host_fqdn_override:
      default: controller.internal
    path:
      default: /v3
    scheme:
      default: http
    port:
      api:
        default: 80
        internal: 5000
  dcagent:
    name: dcagent
    hosts:
      default: dcagent-api
      public: dcagent
    host_fqdn_override:
      default: null
    path:
      default: /v1
    scheme:
      default: 'http'
    port:
      api:
        default: 8325
        public: 80
EOF
system helm-override-update distributed-cloud dcmanager distributed-cloud --values dcmanager.yaml
system helm-override-update distributed-cloud dcorch distributed-cloud --values dcorch.yaml
system helm-override-update distributed-cloud dcdbsync distributed-cloud --values dcdbsync.yaml
system helm-override-update distributed-cloud dcagent distributed-cloud --values dcagent.yaml
system helm-override-show distributed-cloud dcmanager distributed-cloud
system helm-override-show distributed-cloud dcorch distributed-cloud
system helm-override-show distributed-cloud dcdbsync distributed-cloud
system helm-override-show distributed-cloud dcagent distributed-cloud
Apply app-distributed-cloud
system application-apply distributed-cloud
system application-show distributed-cloud
To remove
system application-remove distributed-cloud
system application-delete distributed-cloud
Check dcmanager endpoints
openstack endpoint list | grep dcmanager
Check if dcmanager-api endpoint works
kubectl get svc dcmanager-api -n distributed-cloud
kubectl get endpoints dcmanager-api -n distributed-cloud
# Get a token and query the API
TOKEN=$(openstack token issue -f value -c id)
curl -i http://<endpoint>/v1.0/subclouds -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: ${TOKEN}"
Configure dcmanager-client
Edit /usr/lib/python3/dist-packages/dcmanagerclient/api/v1/client.py so the client always targets the in-cluster API:
_DEFAULT_DCMANAGER_URL = (
    "http://dcmanager-api.distributed-cloud.svc.cluster.local:8119/v1.0"
)
# Delete the "if not dcmanager_url:" guard so the default is always used
dcmanager_url = _DEFAULT_DCMANAGER_URL
Check that dcmanager-manager is working
dcmanager subcloud-group add --name test
dcmanager subcloud update --group 2 subcloud2-stx-latest