KubeVirt/CDI introduction

Initial draft guide for KubeVirt introduction
Conditionalize version
Implement patchset 2 review changes
Implement patchset 3 review changes
Resolve merge conflict
Implement patchset 4 review changes

Story: 2010466

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: I57a16fca9b78992b249a1aa04e6b12893c94fe9f
This commit is contained in:
Ron Stone 2022-12-12 06:31:24 -05:00
parent a200f9c624
commit cf755b146c
13 changed files with 1138 additions and 126 deletions


@ -0,0 +1 @@


@ -146,6 +146,16 @@ Updates
updates/index-updates-e3b970bb69ce
--------
KubeVirt
--------
.. toctree::
:maxdepth: 2
kube-virt/index-kubevirt-f1bfd2a21152
-----------
Admin tasks
-----------


@ -0,0 +1,160 @@
.. _create-a-windows-vm-82957181df02:
===================
Create a Windows VM
===================
.. rubric:: |context|
This section provides an example of deploying a Windows Server-based |VM| with
KubeVirt.
The example uses:
* A Windows Server 2019 image pre-installed in a qcow2 image.
- See
https://superuser.openstack.org/articles/how-to-deploy-windows-on-openstack/
for information on how to create such an image with VirtualBox, starting
from a Windows Server 2019 ISO image and the Fedora VirtIO drivers.
* To simplify later steps, when building this image be sure to:
- configure a well-known Administrator password,
- enable Remote Desktop, and
- enable Cloud-Init.
* The |CDI| Upload Proxy service to upload the pre-installed Windows Server 2019
qcow2 image into a DataVolume/|PVC| for the root disk.
Note that this image is larger than the Ubuntu image in the previous example,
so it will take longer to upload.
* An explicit resource request for 4x CPUs and 8G of memory.
* Multus and |SRIOV| CNIs in order to add an additional |SRIOV|-based interface.
These allow the |VM| to be assigned a unique IP Address from the IP Subnet
attached to the |SRIOV|-based interface.
* A graphical console connection via ``virtctl`` in order to extend the root
disk and configure the IP interface on the |SRIOV|-based interface.
* Remote Desktop (RDP) from a remote workstation to the Windows |VM|'s unique IP
Address on the IP Subnet attached to the |SRIOV|-based interface.
This example assumes that the same infrastructure changes made in the previous
Ubuntu |VM| example are already in place; that is, |SRIOV| interfaces connecting
to a 10.10.186.0/24 network on vlan-id=20 have been configured on all hosts, and
a ``NetworkAttachmentDefinition``, ``186-subnet``, has been created for this
network.
From a remote workstation on which you have configured ``kubectl``, ``virtctl``
and ``virt-viewer``, follow the procedure below to create the Windows |VM|, log
in to the graphical console and configure the |VM|'s interface on the
10.10.186.0/24 network. Finally, RDP to the |VM| from a remote workstation.
.. rubric:: |proc|
#. Use ``virtctl`` and the |CDI| Upload Proxy service to load the Windows Server
2019 qcow2 image into a new DataVolume of size 100G, in the stx-lab
namespace.
.. code-block::
$ virtctl image-upload dv stx-lab-winserv-test-disk --namespace stx-lab --insecure \
--access-mode ReadWriteOnce --size 100Gi --image-path \
/home/sysadmin/admin/kubevirt/images/winserv2019.qcow2 \
--uploadproxy-url https://admin.starlingx.abc.com:32111
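Because the Windows image is large, the upload and subsequent import can take
several minutes. Optionally, you can monitor the DataVolume status until it
reports Succeeded; the following is a sketch using the DataVolume name and
namespace from the command above:
.. code-block:: none
$ kubectl get dv stx-lab-winserv-test-disk -n stx-lab
$ kubectl describe dv stx-lab-winserv-test-disk -n stx-lab | grep -i phase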
#. Create the ``yaml`` file defining the ``VirtualMachine`` |CRD| instance.
.. code-block::
$ cat <<EOF > stx-lab-winserv-test-vm.yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: stx-lab-winserv-test
name: stx-lab-winserv-test
namespace: stx-lab
spec:
running: true
template:
metadata:
labels:
kubevirt.io/vm: stx-lab-winserv-test
spec:
domain:
devices:
disks:
- disk:
bus: virtio
name: myrootdisk
interfaces:
- masquerade: {}
name: default
- name: 186-subnet
sriov: {}
machine:
type: q35
resources:
requests:
cpu: 4
memory: 8G
terminationGracePeriodSeconds: 0
networks:
- name: default
pod: {}
- multus:
networkName: stx-lab/186-subnet
name: 186-subnet
volumes:
- name: myrootdisk
dataVolume:
name: stx-lab-winserv-test-disk
EOF
#. Apply the configuration.
.. code-block::
$ kubectl apply -f stx-lab-winserv-test-vm.yaml
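Optionally, before connecting to the console, confirm that the
``VirtualMachine`` and its ``VirtualMachineInstance`` have started in the
stx-lab namespace:
.. code-block:: none
$ kubectl get vm -n stx-lab
$ kubectl get vmi -n stx-lab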
#. Connect to the graphical console, extend the root disk, and configure the
|VM|'s interface on the 10.10.186.0/24 network.
.. code-block::
$ virtctl -n stx-lab vnc --kubeconfig="/home/jdoe/.kube/config" stx-lab-winserv-test
This command launches the Windows graphical console.
#. Log in with the well-known Administrator password set when the Windows Server
2019 qcow2 image was created.
#. Extend the root volume to use all of the available space on the root disk.
**Computer Management** > **Storage** > **Disk Management** > **Extend
Volume** (on the C: drive)
#. Configure the second Ethernet adapter (the |SRIOV|-based interface).
For example:
- with a static IP address in the 10.10.186.0/24 subnet,
- with the gateway IP address, and
- with the DNS address (10.10.186.130).
#. Log out of the graphical console.
.. rubric:: |result|
You can now RDP to the Windows |VM| using the 10.10.186.<nnn> IP Address.
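As a quick connectivity check from a Linux workstation, assuming the
``xfreerdp`` client is installed, you can open the RDP session from the command
line, substituting the IP address configured on the |SRIOV|-based interface:
.. code-block:: none
$ xfreerdp /u:Administrator /v:10.10.186.<nnn>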


@ -0,0 +1,355 @@
.. _create-an-ubuntu-vm-fafb82ec424b:
===================
Create an Ubuntu VM
===================
.. rubric:: |context|
This section provides a more complex, but more realistic, example of deploying a
Linux-based |VM| with KubeVirt.
The example uses:
* An Ubuntu 22.04 (jammy) cloud image
https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
* The |CDI| Upload Proxy service to upload the Ubuntu jammy image into a
DataVolume/|PVC| for the root disk; that is, a production Linux distribution
on a DataVolume/|PVC| providing persistent storage.
* A ``cloudInitNoCloud`` volume type, used to pass in cloud-init formatted
'userData' that creates a user and password for the initial login.
* An explicit resource request for 8x CPUs and 16G of memory.
* Multus and |SRIOV| CNIs in order to add an additional |SRIOV|-based interface.
This allows the |VM| to be assigned a unique IP Address from the IP Subnet
attached to the |SRIOV|-based interface.
* A serial console connection via ``virtctl`` in order to configure the IP
interface on the |SRIOV|-based interface.
* SSH from a remote workstation to the |VM|'s unique IP Address on the IP Subnet
attached to the |SRIOV|-based interface.
.. rubric:: |proc|
#. Make the infrastructure changes to support |SRIOV|-based interfaces on
containers (including KubeVirt |VM| containers):
#. Create a new Data Network of type 'vlan' for the |SRIOV| interfaces to be
used by your KubeVirt |VMs|.
.. code-block::
~(keystone_admin)$ system datanetwork-add kube-sriov vlan
#. Create pci-sriov type interfaces.
For every AIO-Controller and Worker/Compute node, create a pci-sriov type
interface and attach the new data network to this interface. For example, for
compute-1's enp24s0f0 Ethernet interface:
.. code-block::
~(keystone_admin)$ system host-if-modify -n sriov0 -c pci-sriov -N 64 --vf-driver vfio compute-1 enp24s0f0
~(keystone_admin)$ system interface-datanetwork-assign compute-1 sriov0 kube-sriov
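Optionally, confirm the interface configuration and the data network
assignment before proceeding, for example on compute-1:
.. code-block:: none
~(keystone_admin)$ system host-if-show compute-1 sriov0
~(keystone_admin)$ system interface-datanetwork-list compute-1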
#. Create a ``NetworkAttachmentDefinition`` in Kubernetes to point to the new
Data Network and specify a particular vlan-id to use within that Data
Network.
In the example below, a ``NetworkAttachmentDefinition`` is created in the
stx-lab namespace for vlan-id=20 on interfaces attached to the kube-sriov
data network. In this example, vlan-id=20 is attached to a router which
has this interface configured as the 10.10.186.0/24 IP Subnet.
Note that the ``k8s.v1.cni.cncf.io/resourceName`` annotation is used to
reference the specific Data Network, and has a structure of
``intel.com/pci_sriov_net_<data_network_name>`` (with all dashes in the
data network name, if any, converted to underscores).
.. code-block:: yaml
$ cat <<EOF > 186-subnet.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: 186-subnet
namespace: stx-lab
annotations:
k8s.v1.cni.cncf.io/resourceName: intel.com/pci_sriov_net_kube_sriov
spec:
config: '{
"cniVersion": "0.3.0",
"type": "sriov",
"vlan": 20
}'
EOF
#. Apply the configuration.
.. code-block::
$ kubectl apply -f 186-subnet.yaml
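To verify that the ``NetworkAttachmentDefinition`` was created, and that the
|SRIOV| device plugin is advertising the corresponding resource on the hosts,
checks such as the following can be used (compute-1 is an example host name):
.. code-block:: none
$ kubectl get net-attach-def -n stx-lab
$ kubectl get node compute-1 -o jsonpath='{.status.allocatable}' | grep pci_sriov_net_kube_sriov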
#. Create the |VM|, log in to the console and configure the |VM|'s interface on
the 10.10.186.0/24 network. Then SSH to the |VM| from a remote workstation.
#. Use ``virtctl`` and the |CDI| Upload Proxy service to load the Ubuntu jammy
cloud image into a new DataVolume of size 500G, in the stx-lab namespace.
.. code-block::
$ virtctl image-upload dv stx-lab-ubuntu-test-disk -n stx-lab --insecure \
--access-mode ReadWriteOnce --size 500Gi \
--image-path /home/sysadmin/admin/kubevirt/images/jammy-server-cloudimg-amd64.img \
--uploadproxy-url https://admin.starlingx.abc.com:32111
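After the upload completes, you can optionally confirm that the DataVolume and
its backing |PVC| were created in the stx-lab namespace:
.. code-block:: none
$ kubectl get dv -n stx-lab
$ kubectl get pvc -n stx-lab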
#. Create the ``yaml`` file defining the ``VirtualMachine`` |CRD| instance.
.. code-block:: yaml
$ cat <<EOF > stx-lab-ubuntu-test-vm.yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: stx-lab-ubuntu-test
name: stx-lab-ubuntu-test
namespace: stx-lab
spec:
running: true
template:
metadata:
labels:
kubevirt.io/vm: stx-lab-ubuntu-test
spec:
domain:
devices:
disks:
- disk:
bus: virtio
name: myrootdisk
- disk:
bus: virtio
name: cloudinitdisk
interfaces:
- masquerade: {}
name: default
- name: 186-subnet
sriov: {}
machine:
type: ""
resources:
requests:
cpu: 8
memory: 16Gi
networks:
- name: default
pod: {}
- multus:
networkName: stx-lab/186-subnet
name: 186-subnet
terminationGracePeriodSeconds: 0
volumes:
- name: myrootdisk
dataVolume:
name: stx-lab-ubuntu-test-disk
- cloudInitNoCloud:
userData: |-
#cloud-config
user: jenkins
password: myP@ssw0rd
chpasswd: { expire: False }
ssh_pwauth: True
name: cloudinitdisk
EOF
#. Apply the configuration.
.. code-block::
$ kubectl apply -f stx-lab-ubuntu-test-vm.yaml
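The |VM| may take a short time to schedule and boot. Optionally, watch the
``VirtualMachineInstance`` until it reports Running, and note which host it was
scheduled on:
.. code-block:: none
$ kubectl get vmi -n stx-lab -o wide -w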
#. Connect to the console and configure the |VM|'s interface on the
10.10.186.0/24 network.
.. code-block::
$ virtctl -n stx-lab console stx-lab-ubuntu-test
Successfully connected to stx-lab-ubuntu-test console. The escape sequence is ^]
stx-lab-ubuntu-test login: jenkins
Password:
Welcome to Ubuntu 22.04 LTS (GNU/Linux 5.15.0-39-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Thu Dec 8 16:55:12 UTC 2022
System load: 0.2587890625 Processes: 178
Usage of /: 0.3% of 476.62GB Users logged in: 0
Memory usage: 1% IPv4 address for enp1s0: 10.0.2.2
Swap usage: 0%
0 updates can be applied immediately.
...
#. Still in the |VM| console, list the interfaces.
Note that this |VM| has 2x interfaces.
* enp1s0 is the default container |CNI| interface
* enp6s0 is the |SRIOV| interface
.. code-block::
jenkins@stx-lab-ubuntu-test:~$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group
default qlen 1000
link/ether 52:54:00:41:84:a0 brd ff:ff:ff:ff:ff:ff
3: enp6s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 46:c5:53:3b:b3:b3 brd ff:ff:ff:ff:ff:ff
jenkins@stx-lab-ubuntu-test:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:41:84:a0 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.2/24 metric 100 brd 10.0.2.255 scope global dynamic enp1s0
valid_lft 86313505sec preferred_lft 86313505sec
inet6 fe80::5054:ff:fe41:84a0/64 scope link
valid_lft forever preferred_lft forever
3: enp6s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 46:c5:53:3b:b3:b3 brd ff:ff:ff:ff:ff:ff
jenkins@stx-lab-ubuntu-test:~$
#. Still in the |VM| console, using the default |CNI| interface (which has
connectivity out the |OAM| interface), update and upgrade the Ubuntu
deployment in the |VM|.
.. code-block:: none
jenkins@stx-lab-ubuntu-test:~$ sudo apt-get update
jenkins@stx-lab-ubuntu-test:~$ sudo apt-get -y upgrade
#. Still in the |VM| console, configure networking persistently with netplan
and reboot.
Specifically, disable the default |CNI| interface (enp1s0), and configure the
|SRIOV| interface (enp6s0).
.. code-block:: bash
$ sudo su -
$ cat <<EOF > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
network: {config: disabled}
EOF
# Update /etc/netplan/50-cloud-init.yaml as shown below.
$ vi /etc/netplan/50-cloud-init.yaml
network:
ethernets:
# enp1s0:
# dhcp4: true
enp6s0:
dhcp4: no
addresses:
- 10.10.186.97/24
nameservers:
addresses: [10.10.186.130]
routes:
- to: default
via: 10.10.186.1
version: 2
# Apply the updates
$ netplan apply
# Restart the system
$ /sbin/reboot
< LOGS FROM BOOTING ON CONSOLE >
[ OK ] Finished Execute cloud user/final scripts.
[ OK ] Reached target Cloud-init target.
Ubuntu 22.04.1 LTS stx-lab-ubuntu-test ttyS0
stx-lab-ubuntu-test login: jenkins
Password:
Last login: Thu Dec 8 16:55:13 UTC 2022 on ttyS0
jenkins@stx-lab-ubuntu-test:~$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 52:54:00:41:84:a0 brd ff:ff:ff:ff:ff:ff
3: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 46:c5:53:3b:b3:b3 brd ff:ff:ff:ff:ff:ff
jenkins@stx-lab-ubuntu-test:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 52:54:00:41:84:a0 brd ff:ff:ff:ff:ff:ff
3: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 46:c5:53:3b:b3:b3 brd ff:ff:ff:ff:ff:ff
inet 10.10.186.97/24 brd 10.10.186.255 scope global enp6s0
valid_lft forever preferred_lft forever
inet6 2620:10a:a001:a106:44c5:53ff:fe3b:b3b3/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 2591972sec preferred_lft 604772sec
inet6 fe80::44c5:53ff:fe3b:b3b3/64 scope link
valid_lft forever preferred_lft forever
jenkins@stx-lab-ubuntu-test:~$ ip route
default via 10.10.186.1 dev enp6s0 proto static
10.10.186.0/24 dev enp6s0 proto kernel scope link src 10.10.186.97
#. Connect from a remote workstation.
.. code-block:: bash
$ ssh jenkins@10.10.186.97
password:
Last login: Thu Dec 8 18:14:18 2022
jenkins@stx-lab-ubuntu-test:~$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 52:54:00:41:84:a0 brd ff:ff:ff:ff:ff:ff
3: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 46:c5:53:3b:b3:b3 brd ff:ff:ff:ff:ff:ff
.. rubric:: |result|
You can now SSH to the Ubuntu |VM| from a remote workstation using its IP
Address on the IP Subnet attached to the |SRIOV|-based interface, 10.10.186.97
in this example.


@ -0,0 +1,196 @@
.. _hello-world-kubevirt-vm-05503659173c:
=======================
Hello World KubeVirt VM
=======================
.. rubric:: |context|
This section provides a 'hello world' example of creating, running and attaching
to a |VM| with KubeVirt. The example uses:
* A CirrOS image packaged as a Container Image and available from
docker.io/kubevirt/cirros-container-disk-demo:latest
* A 'containerDisk' volume type
- The containerDisk feature provides the ability to store and distribute |VM|
disks in a container image registry.
- containerDisks can be assigned to |VMs| in the disks section of the
VirtualMachine spec.
- containerDisks are ephemeral storage devices, so they can only be used by
applications that do NOT require persistent data.
* A ``cloudInitNoCloud`` volume type, which allows attaching
``cloudInitNoCloud`` data-sources to the |VM|. If the |VM| contains a proper
cloud-init setup, it will pick up the disk as a user-data source.
* No additional network interfaces other than the default |CNI| interface of the
container running the |VM|.
* A connection to the |VM| via either the serial console interface using
``virtctl``, or SSH through a NodePort service on the |prod| Floating OAM IP
Address.
.. rubric:: |proc|
Complete the procedure below to create the |VM|, start the |VM| and log in to
the |VM| via the console:
#. Create the ``yaml`` file defining the ``VirtualMachine`` |CRD| instance.
.. code-block:: yaml
$ cat <<EOF > vm-cirros-containerdisk.yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: vm-cirros
name: vm-cirros
spec:
running: false
template:
metadata:
labels:
kubevirt.io/vm: vm-cirros
spec:
domain:
devices:
disks:
- disk:
bus: virtio
name: containerdisk
- disk:
bus: virtio
name: cloudinitdisk
machine:
type: ""
resources:
requests:
memory: 64M
terminationGracePeriodSeconds: 0
volumes:
- name: containerdisk
containerDisk:
image: kubevirt/cirros-container-disk-demo:latest
- cloudInitNoCloud:
userDataBase64: IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK
name: cloudinitdisk
EOF
#. Apply the ``yaml`` file to create the |VM| in a stopped state.
.. code-block:: none
$ kubectl apply -f vm-cirros-containerdisk.yaml
virtualmachine.kubevirt.io/vm-cirros created
$ kubectl get vm
NAME AGE STATUS READY
vm-cirros 17s Stopped False
$ kubectl get vmi
No resources found in default namespace.
#. Start the |VM| with the ``virtctl`` tool.
.. code-block:: none
$ virtctl start vm-cirros
VM vm-cirros was scheduled to start
$ kubectl get vm
NAME AGE STATUS READY
vm-cirros 87s Running True
$ kubectl get vmi
NAME AGE PHASE IP NODENAME READY
vm-cirros 17s Running 172.16.225.72 compute-2 True
#. Connect to and log in to the |VM| console using the ``virtctl`` tool.
.. code-block:: bash
$ virtctl console vm-cirros
Successfully connected to vm-cirros console. The escape sequence is ^]
# Log in as the 'cirros' user. Default password: 'gocubsgo'. Use 'sudo' for root.
vm-cirros login: cirros
Password:
$ hostname
vm-cirros
$ ls /
bin home lib64 mnt root tmp
boot init linuxrc old-root run usr
dev initrd.img lost+found opt sbin var
etc lib media proc sys vmlinuz
$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether a6:77:37:4c:ee:10 brd ff:ff:ff:ff:ff:ff
# List Interfaces
# Notice how the VM has a single eth0 interface, the default CNI interface.
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether a6:77:37:4c:ee:10 brd ff:ff:ff:ff:ff:ff
inet 172.16.225.72/32 brd 172.16.255.255 scope global eth0
valid_lft forever preferred_lft forever inet6 fe80::a477:37ff:fe4c:ee10/64 scope link valid_lft forever preferred_lft forever
# Exit/escape from the VM Console with ctrl+']'
$ ^]
#. Expose the SSH port of vm-cirros via a NodePort.
.. code-block:: bash
$ virtctl expose vmi vm-cirros --port=22 --name vm-cirros-ssh --type=NodePort
Service vm-cirros-ssh successfully exposed for vmi vm-cirros
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kuard-nodeport NodePort 10.96.155.165 <none> 80:31118/TCP 92d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 188d
nodeinfo ClusterIP 10.96.189.47 <none> 1080/TCP 92d
vm-cirros-ssh NodePort 10.99.91.228 <none> 22:31562/TCP 9s
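The assigned NodePort (31562 in this example) can also be retrieved
programmatically, which is convenient for scripting the SSH step that follows:
.. code-block:: none
$ kubectl get service vm-cirros-ssh -o jsonpath='{.spec.ports[0].nodePort}'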
#. Connect from a remote workstation.
.. parsed-literal::
$ ssh -p 31562 cirros@<Floating-OAM-IP-Address-of-|prod|>
password:
$ hostname
vm-cirros
# List Interfaces
# Notice how the VM has a single eth0 interface, the default CNI interface.
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether a6:77:37:4c:ee:10 brd ff:ff:ff:ff:ff:ff
inet 172.16.225.72/32 brd 172.16.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a477:37ff:fe4c:ee10/64 scope link
valid_lft forever preferred_lft forever
$ exit


@ -0,0 +1,26 @@
.. include:: /_includes/toc-title-kubevirt.rest
.. _index-kubevirt-f1bfd2a21152:
========
KubeVirt
========
.. toctree::
:maxdepth: 1
introduction-bb3a04279bf5
installation-66477d7646db
removal-97cc897941bc
Usage Examples
==============
.. toctree::
:maxdepth: 1
hello-world-kubevirt-vm-05503659173c
set-up-cdi-proxy-ad165d884417
create-an-ubuntu-vm-fafb82ec424b
set-up-remote-management-of-vms-a082461d660e
create-a-windows-vm-82957181df02


@ -0,0 +1,63 @@
.. _installation-66477d7646db:
============
Installation
============
.. rubric:: |proc|
Complete the following steps to install KubeVirt.
#. Upload the KubeVirt system application tarball and check the KubeVirt
application status:
.. code-block:: none
~(keystone_admin)$ system application-upload /usr/local/share/application/helm/kubevirt-app-1.0-1.tgz
~(keystone_admin)$ system application-list
#. Apply the KubeVirt system application and check the KubeVirt and |CDI|
status:
.. code-block:: none
~(keystone_admin)$ system application-apply kubevirt-app
Wait for the kubevirt-app application apply to complete.
.. code-block:: bash
$ watch -n 5 system application-list
# Wait for all pods in kubevirt namespace to be Running
$ watch -n 5 kubectl get pods -n kubevirt
# Wait for all pods in cdi namespace to be Running
$ watch -n 5 kubectl get pods -n cdi
#. Set up the ``virtctl`` client executable to be accessible from sysadmin's PATH.
.. code-block:: bash
# Create /home/sysadmin/bin directory, if it doesn't exist already
$ mkdir -p /home/sysadmin/bin
# Create symbolic link in /home/sysadmin/bin to virtctl client executable installed on host in step 2)
$ cd /home/sysadmin/bin
$ ln -s /var/opt/kubevirt/virtctl-v0.53.1-linux-amd64 virtctl
# Logout and log back in to ensure that /home/sysadmin/bin gets added to your PATH variable.
$ exit
login: sysadmin
password:
$ which virtctl
/home/sysadmin/bin/virtctl
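As an optional final check, confirm that the ``virtctl`` client can reach the
cluster and that the KubeVirt and |CDI| pods are all Running:
.. code-block:: bash
$ virtctl version
$ kubectl get pods -n kubevirt
$ kubectl get pods -n cdi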
.. rubric:: |result|
KubeVirt has been installed on the system.


@ -0,0 +1,29 @@
.. _introduction-bb3a04279bf5:
============
Introduction
============
The KubeVirt system application in StarlingX includes: KubeVirt, Containerized
Data Importer (|CDI|), and the Virtctl client tool.
KubeVirt is an open source project that allows |VMs| to be run and managed as
pods inside a Kubernetes cluster. This is a particularly important innovation as
traditional |VM| workloads can be moved into Kubernetes alongside already
containerized workloads, thereby taking advantage of Kubernetes as an
orchestration engine. Specifically, KubeVirt |VM| workloads and containerized
workloads can exist on the same node/host; an advantage over the OpenStack
solution in |prod|, where workloads can exist on the same system/cluster but not
on the same node/host.
The |CDI| is an open source project that provides facilities for enabling |PVCs|
to be used as disks for KubeVirt |VMs| by way of DataVolumes.
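For illustration only, a minimal DataVolume manifest looks like the following
sketch; the name, namespace, source URL and size are placeholders, and later
sections show the actual DataVolumes used in the examples:
.. code-block:: none
$ cat <<EOF > example-datavolume.yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
name: example-root-disk
namespace: default
spec:
source:
http:
url: "http://example.com/images/example-cloud-image.img"
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
EOF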
The Virtctl client tool is an open source tool distributed with KubeVirt and
required to use advanced features such as serial and graphical console access.
It also provides convenience commands for starting/stopping |VMs|, live
migrating |VMs|, canceling live migrations and uploading |VM| disk images.
See https://kubevirt.io/user-guide for more details.


@ -0,0 +1,38 @@
.. _removal-97cc897941bc:
=======
Removal
=======
.. rubric:: |proc|
#. Remove all VirtualMachines, VirtualMachineInstances and DataVolumes.
.. code-block:: none
$ kubectl get vm -A
$ kubectl delete vm <vm-name> -n <namespace>
$ kubectl get vmi -A
$ kubectl delete vmi <vmi-name> -n <namespace>
$ kubectl get dv -A
$ kubectl delete dv <dv-name> -n <namespace>
#. Remove the KubeVirt and |CDI| pods and resources, and the ``virtctl`` command:
.. code-block:: none
~(keystone_admin)$ system application-remove kubevirt-app
$ rm /home/sysadmin/bin/virtctl
#. Remove the KubeVirt and |CDI| Helm charts and application:
.. code-block:: none
~(keystone_admin)$ system application-delete kubevirt-app
.. rubric:: |result|
KubeVirt has been removed from the system.


@ -0,0 +1,75 @@
.. _set-up-cdi-proxy-ad165d884417:
================
Set up CDI Proxy
================
.. rubric:: |context|
The Containerized Data Importer (|CDI|) project provides facilities for enabling
|PVCs| to be used as disks for KubeVirt |VMs| by way of DataVolumes.
The |CDI| service is installed as part of uploading and applying the KubeVirt
system application. The most common use case for |CDI| is 'uploading a disk
image to a DataVolume'. To use the |CDI| service, your Kubernetes cluster-admin
should make the ``cdi-uploadproxy`` service accessible from outside the cluster.
This can be done via a NodePort service or an Ingress service. Configuring the
NodePort service option is shown below.
.. rubric:: |proc|
#. Create the |CDI| proxy yaml configuration.
.. code-block:: yaml
$ cat <<EOF > cdi-uploadproxy-nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
name: cdi-uploadproxy-nodeport
namespace: cdi
labels:
cdi.kubevirt.io: "cdi-uploadproxy"
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 32111 # Use unused nodeport in 31,500 to 32,767 range
protocol: TCP
selector:
cdi.kubevirt.io: cdi-uploadproxy
EOF
#. Apply the configuration.
.. code-block:: none
$ kubectl apply -f cdi-uploadproxy-nodeport-service.yaml
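You can optionally confirm that the NodePort service was created and is
exposing the chosen port:
.. code-block:: none
$ kubectl get service -n cdi cdi-uploadproxy-nodeport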
.. rubric:: |result|
Now the ``virtctl`` command can be used to upload a |VM| image file into an
existing or new DataVolume (|PVC|).
See the example (:ref:`create-an-ubuntu-vm-fafb82ec424b`) that uploads an Ubuntu
cloud image (``jammy-server-cloudimg-amd64.img`` from
https://cloud-images.ubuntu.com/jammy/current/) into a new 500G DataVolume
named stx-lab-jenkins-disk:
.. code-block::
$ virtctl image-upload dv stx-lab-jenkins-disk -n jenkins-ns --insecure \
--access-mode ReadWriteOnce --size 500Gi --image-path \
/home/sysadmin/admin/kubevirt/images/jammy-server-cloudimg-amd64.img \
--uploadproxy-url https://admin.starlingx.abc.com:32111
See :ref:`create-an-ubuntu-vm-fafb82ec424b` and
:ref:`create-a-windows-vm-82957181df02` for details on how to then use this
DataVolume to create/launch a |VM|.
See https://kubevirt.io/user-guide/operations/containerized_data_importer/ for
more details and other use cases.


@ -0,0 +1,48 @@
.. _set-up-remote-management-of-vms-a082461d660e:
===============================
Set up remote management of VMs
===============================
.. rubric:: |context|
Configure the ``kubectl``, ``virtctl`` and ``virt-viewer`` clients on a remote
workstation in order to manage KubeVirt |VMs| remotely. Note that the graphical
console of a |VM| can only be accessed remotely from a workstation with
X Windows (for example, a graphical Ubuntu desktop) and with ``kubectl``,
``virtctl`` and ``virt-viewer`` installed.
.. rubric:: |proc|
Configure kubectl and helm
==========================
.. include:: /shared/_includes/install-kubectl-and-helm-clients-directly-on-a-host-6383c5f2b484.rest
:start-after: begin-install-proc
:end-before: end-install-proc
Configure virtctl
=================
On the remote workstation, install the ``virtctl`` client tool.
.. code-block::
$ export VERSION=v0.53.1
$ wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64
$ chmod a+x ./virtctl-${VERSION}-linux-amd64
$ sudo cp ./virtctl-${VERSION}-linux-amd64 /usr/bin/virtctl
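Optionally verify the client installation; ``virtctl version`` prints the
client version and, if the kubeconfig set up in the previous section is
working, the server-side KubeVirt version as well:
.. code-block::
$ virtctl version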
Configure virt-viewer
=====================
On the remote workstation, install ``virt-viewer`` in order to enable use of
the graphical console.
.. code-block::
$ sudo apt -y install virt-viewer


@ -28,132 +28,9 @@ configuration is required in order to use :command:`helm`.
.. _security-install-kubectl-and-helm-clients-directly-on-a-host-steps-f54-qqd-tkb:
#. On the controller, if an **admin-user** service account is not already available, create one.
#. Create the **admin-user** service account in **kube-system**
namespace and bind the **cluster-admin** ClusterRoleBinding to this user.
.. code-block:: none
% cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubernetes-admin
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-admin
namespace: kube-system
EOF
% kubectl apply -f admin-login.yaml
#. Retrieve the secret token.
.. code-block:: none
~(keystone_admin)]$ TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-admin | awk '{print $1}') | grep "token:" | awk '{print $2}')
#. On a remote workstation, install the :command:`kubectl` client. Go to the
following link: `https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
<https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/>`__.
#. Install the :command:`kubectl` client CLI (for example, an Ubuntu host).
.. code-block:: none
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
sudo apt-key add -
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
#. Set up the local configuration and context.
.. note::
In order for your remote host to trust the certificate used by
the |prod-long| K8S API, you must ensure that the
``k8s_root_ca_cert`` specified at install time is a trusted
|CA| certificate by your host. Follow the instructions for adding
a trusted |CA| certificate for the operating system distribution
of your particular host.
If you did not specify a ``k8s_root_ca_cert`` at install
time, then specify ``--insecure-skip-tls-verify``, as shown below.
The following example configures the default ~/.kube/config. See the
following reference:
`https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
<https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/>`__.
You need to obtain a floating |OAM| IP.
.. code-block:: none
% kubectl config set-cluster mycluster --server=https://${OAM_IP}:6443 \
--insecure-skip-tls-verify
% kubectl config set-credentials kubernetes-admin@mycluster --token=$TOKEN_DATA
% kubectl config set-context kubernetes-admin@mycluster --cluster=mycluster \
--user kubernetes-admin@mycluster --namespace=default
% kubectl config use-context kubernetes-admin@mycluster
``$TOKEN_DATA`` is the token retrieved in step 1.
#. Test remote :command:`kubectl` access.
.. code-block:: none
% kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
compute-0 Ready <none> 9d v1.24.4 192.168.204.69 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-6-amd64 containerd://1.4.12
compute-1 Ready <none> 9d v1.24.4 192.168.204.7 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-6-amd64 containerd://1.4.12
controller-0 Ready control-plane,master 9d v1.24.4 192.168.204.3 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-6-amd64 containerd://1.4.12
controller-1 Ready control-plane,master 9d v1.24.4 192.168.204.4 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-6-amd64 containerd://1.4.12
%
#. On the workstation, install the :command:`helm` client on an Ubuntu
host by taking the following actions on the remote Ubuntu system.
#. Install :command:`helm`. See the following reference:
`https://helm.sh/docs/intro/install/
<https://helm.sh/docs/intro/install/>`__. Helm accesses the Kubernetes
cluster as configured in the previous step, using the default ~/.kube/config.
.. code-block:: none
% wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
% tar xvf helm-v3.2.1-linux-amd64.tar.gz
% sudo cp linux-amd64/helm /usr/local/bin
#. Verify that :command:`helm` installed correctly.
.. code-block:: none
% helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
#. Run the following commands:
.. code-block:: none
% helm repo add bitnami https://charts.bitnami.com/bitnami
% helm repo update
% helm repo list
% helm search repo
% helm install wordpress bitnami/wordpress
.. include:: /shared/_includes/install-kubectl-and-helm-clients-directly-on-a-host-6383c5f2b484.rest
:start-after: begin-install-proc
:end-before: end-install-proc
.. seealso::


@ -0,0 +1,134 @@
.. begin-install-proc
#. On the controller, if an **admin-user** service account is not already available, create one.
#. Create the **admin-user** service account in **kube-system**
namespace and bind the **cluster-admin** ClusterRoleBinding to this user.
.. code-block:: none
% cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubernetes-admin
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-admin
namespace: kube-system
EOF
% kubectl apply -f admin-login.yaml
#. Retrieve the secret token.
.. code-block:: none
~(keystone_admin)]$ TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-admin | awk '{print $1}') | grep "token:" | awk '{print $2}')
#. On a remote workstation, install the :command:`kubectl` client. Go to the
following link: `https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
<https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/>`__.
#. Install the :command:`kubectl` client CLI (for example, an Ubuntu host).
.. code-block:: none
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
sudo apt-key add -
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
#. Set up the local configuration and context.
.. note::
In order for your remote host to trust the certificate used by
the |prod-long| K8S API, you must ensure that the
``k8s_root_ca_cert`` specified at install time is a trusted
|CA| certificate by your host. Follow the instructions for adding
a trusted |CA| certificate for the operating system distribution
of your particular host.
If you did not specify a ``k8s_root_ca_cert`` at install
time, then specify ``--insecure-skip-tls-verify``, as shown below.
The following example configures the default ~/.kube/config. See the
following reference:
`https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
<https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/>`__.
You need to obtain a floating |OAM| IP.
.. code-block:: none
% kubectl config set-cluster mycluster --server=https://${OAM_IP}:6443 \
--insecure-skip-tls-verify
% kubectl config set-credentials kubernetes-admin@mycluster --token=$TOKEN_DATA
% kubectl config set-context kubernetes-admin@mycluster --cluster=mycluster \
--user kubernetes-admin@mycluster --namespace=default
% kubectl config use-context kubernetes-admin@mycluster
``$TOKEN_DATA`` is the token retrieved in step 1.
#. Test remote :command:`kubectl` access.
.. code-block:: none
% kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE ...
controller-0 Ready master 15h v1.12.3 192.168.204.3 <none> CentOS L ...
controller-1 Ready master 129m v1.12.3 192.168.204.4 <none> CentOS L ...
worker-0 Ready <none> 99m v1.12.3 192.168.204.201 <none> CentOS L ...
worker-1 Ready <none> 99m v1.12.3 192.168.204.202 <none> CentOS L ...
%
#. On the workstation, install the :command:`helm` client on an Ubuntu
host by taking the following actions on the remote Ubuntu system.
#. Install :command:`helm`. See the following reference:
`https://helm.sh/docs/intro/install/
<https://helm.sh/docs/intro/install/>`__. Helm accesses the Kubernetes
cluster as configured in the previous step, using the default ~/.kube/config.
.. code-block:: none
% wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
% tar xvf helm-v3.2.1-linux-amd64.tar.gz
% sudo cp linux-amd64/helm /usr/local/bin
#. Verify that :command:`helm` installed correctly.
.. code-block:: none
% helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
#. Run the following commands:
.. code-block:: none
% helm repo add bitnami https://charts.bitnami.com/bitnami
% helm repo update
% helm repo list
% helm search repo
% helm install wordpress bitnami/wordpress
.. end-install-proc