# Overview

The purpose of the contained Ansible playbook and roles is to deploy an O-RAN-compliant O-Cloud instance. Currently supported Kubernetes platforms and infrastructure targets are:

## Platform

- [OKD](https://www.okd.io/)

## Infrastructure

- KVM/libvirt virtual machine

# Prerequisites

The following prerequisites must be satisfied on the host where the playbook will be run (localhost, by default):

## DNS

To enable network access to cluster services, DNS address records must be defined for the following endpoints:

* `api.<cluster_name>.<domain_name>` (e.g. api.ocloud.example.com)
* `api-int.<cluster_name>.<domain_name>` (e.g. api-int.ocloud.example.com)
* `*.apps.<cluster_name>.<domain_name>` (e.g. *.apps.ocloud.example.com)

In the case of all-in-one topology clusters, all addresses must resolve to the machine network IP assigned to the node.

## Ansible

Install Ansible per the [Installing Ansible on specific operating systems](https://docs.ansible.com/ansible/latest/installation_guide/installation_distros.html) documentation.

## libvirt/KVM

If deploying the O-Cloud as a virtual machine, the host must be configured as a libvirt/KVM host. Instructions for doing so vary by Linux distribution, for example:

- [Fedora](https://docs.fedoraproject.org/en-US/quick-docs/virtualization-getting-started/)
- [Ubuntu](https://ubuntu.com/server/docs/virtualization-libvirt)

Ensure that the 'libvirt-devel' package is installed, as it is a dependency of the 'libvirt-python' module.

## Python Modules

Install the required Python modules via the package manager (e.g. yum, dnf, apt) or by running:

```
pip install -r requirements.txt
```

## Ansible Collections

Install the required Ansible collections by running:

```
ansible-galaxy collection install -r requirements.yml
```

## Ansible Variables

### General

#### Optional

The following variables can be set to override deployment defaults:

- ocloud_infra [default="vm"]: infrastructure target
- ocloud_platform [default="okd"]: platform target
- ocloud_topology [default="aio"]: O-Cloud cluster topology
- ocloud_cluster_name [default="ocloud-{{ ocloud_infra }}-{{ ocloud_platform }}-{{ ocloud_topology }}"]: O-Cloud cluster name
- ocloud_domain_name [default="example.com"]: O-Cloud domain name
- ocloud_net_cidr [default="192.168.123.0/24"]: O-Cloud machine network CIDR

### Infrastructure / VM

#### Optional

The following variables can be set to override defaults for deploying to a VM infrastructure target:

- ocloud_infra_vm_cpus [default=8]: number of vCPUs to allocate to the VM
- ocloud_infra_vm_mem_gb [default=24]: amount of RAM to allocate to the VM, in GB
- ocloud_infra_vm_disk_gb [default=120]: amount of disk space to allocate to the VM, in GB
- ocloud_infra_vm_disk_dir [default="/var/lib/libvirt/images"]: directory where VM images are stored
- ocloud_net_name [default="ocloud"]: virtual network name
- ocloud_net_bridge [default="ocloud-br"]: virtual network bridge name
- ocloud_net_mac_prefix [default="52:54:00:01:23"]: virtual network MAC prefix

### Platform / OKD

#### Required

The following Ansible variables must be defined in group_vars/all.yml:

- ocloud_platform_okd_ssh_pubkey: the SSH public key that will be embedded in the OKD install image and used to access deployed nodes

#### Optional

Optionally, the following variables can be set to override default settings:

- ocloud_platform_okd_release [default=4.14.0-0.okd-2024-01-26-175629]: OKD release, as defined in [OKD releases](https://github.com/okd-project/okd/releases)
- ocloud_platform_okd_pull_secret [default=None]: pull secret for
use with non-public image registries

# Installation

Execute the playbook from the base directory as follows:

```
ansible-playbook -i inventory playbooks/ocloud.yml
```

This will deploy the O-Cloud up through the bootstrap phase. Continue to monitor the cluster deployment through completion per the Validation section below.

# Validation

## OKD

Set the KUBECONFIG variable to point to the config generated by the agent-based installer, for example:

```
export KUBECONFIG=/tmp/ansible.6u4ydu5n/cfg/auth/kubeconfig
```

Monitor the progress of the installation by running the 'oc get nodes', 'oc get clusteroperators', and 'oc get clusterversion' commands until all nodes are ready and all cluster operators are available, for example:

```
$ oc get nodes
NAME       STATUS   ROLES                         AGE    VERSION
master-0   Ready    control-plane,master,worker   105m   v1.27.9+e36e183

$ oc get clusteroperators
NAME                                       VERSION                          AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.14.0-0.okd-2024-01-26-175629   True        False         False      87m
baremetal                                  4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
cloud-controller-manager                   4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
cloud-credential                           4.14.0-0.okd-2024-01-26-175629   True        False         False      116m
cluster-autoscaler                         4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
config-operator                            4.14.0-0.okd-2024-01-26-175629   True        False         False      92m
console                                    4.14.0-0.okd-2024-01-26-175629   True        False         False      88m
control-plane-machine-set                  4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
csi-snapshot-controller                    4.14.0-0.okd-2024-01-26-175629   True        False         False      96m
dns                                        4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
etcd                                       4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
image-registry                             4.14.0-0.okd-2024-01-26-175629   True        False         False      89m
ingress                                    4.14.0-0.okd-2024-01-26-175629   True        False         False      96m
insights                                   4.14.0-0.okd-2024-01-26-175629   True        False         False      91m
kube-apiserver                             4.14.0-0.okd-2024-01-26-175629   True        False         False      92m
kube-controller-manager                    4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
kube-scheduler                             4.14.0-0.okd-2024-01-26-175629   True        False         False      91m
kube-storage-version-migrator              4.14.0-0.okd-2024-01-26-175629   True        False         False      97m
machine-api                                4.14.0-0.okd-2024-01-26-175629   True        False         False      91m
machine-approver                           4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
machine-config                             4.14.0-0.okd-2024-01-26-175629   True        False         False      96m
marketplace                                4.14.0-0.okd-2024-01-26-175629   True        False         False      96m
monitoring                                 4.14.0-0.okd-2024-01-26-175629   True        False         False      85m
network                                    4.14.0-0.okd-2024-01-26-175629   True        False         False      98m
node-tuning                                4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
openshift-apiserver                        4.14.0-0.okd-2024-01-26-175629   True        False         False      89m
openshift-controller-manager               4.14.0-0.okd-2024-01-26-175629   True        False         False      90m
openshift-samples                          4.14.0-0.okd-2024-01-26-175629   True        False         False      90m
operator-lifecycle-manager                 4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
operator-lifecycle-manager-catalog         4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
operator-lifecycle-manager-packageserver   4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
service-ca                                 4.14.0-0.okd-2024-01-26-175629   True        False         False      97m
storage                                    4.14.0-0.okd-2024-01-26-175629   True        False         False      92m

$ oc get clusterversion
NAME      VERSION                          AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.14.0-0.okd-2024-01-26-175629   True        False         83m     Cluster version is 4.14.0-0.okd-2024-01-26-175629
```

# Troubleshooting

## OKD

Refer to [Troubleshooting installation issues](https://docs.okd.io/4.14/installing/installing-troubleshooting.html) for information on diagnosing OKD deployment failures.

# Cleanup

## VM

To clean up a VM-based deployment after a failure, or to prepare to redeploy, execute the following as root on the libvirt/KVM host:

1. Shut down and remove the virtual machine (note that the VM name may differ if the default is overridden):

   ```
   virsh destroy master-0
   virsh undefine master-0
   ```

2. Disable and remove the virtual network (note that the network name may differ if the default is overridden):

   ```
   virsh net-destroy ocloud
   virsh net-undefine ocloud
   ```

3. Remove the virtual disk and boot media:

   ```
   rm /var/lib/libvirt/images/master-0*.{qcow2,iso}
   ```
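The manual steps above can also be captured in a small helper that prints the cleanup commands so they can be reviewed before being run as root. This is a sketch, not part of the playbook: the 'cleanup_commands' function and the VM_NAME/NET_NAME/DISK_DIR environment-variable overrides are assumptions; the defaults match the names used in this README.

```shell
#!/bin/sh
# Hypothetical cleanup helper for a VM-based deployment (not part of the
# playbook). Prints the commands for review; to execute them, pipe the
# output to a root shell, e.g.:  ./cleanup.sh | sudo sh -x

# Emit the cleanup commands for the given VM, network, and image directory.
cleanup_commands() {
    vm="$1"; net="$2"; dir="$3"
    printf '%s\n' \
        "virsh destroy $vm" \
        "virsh undefine $vm" \
        "virsh net-destroy $net" \
        "virsh net-undefine $net" \
        "rm -f $dir/$vm*.qcow2 $dir/$vm*.iso"
}

# Defaults follow this README; override via VM_NAME, NET_NAME, DISK_DIR.
cleanup_commands \
    "${VM_NAME:-master-0}" \
    "${NET_NAME:-ocloud}" \
    "${DISK_DIR:-/var/lib/libvirt/images}"
```

Printing the commands instead of executing them directly keeps the script safe to run unprivileged and makes it easy to drop individual steps (for example, keeping the virtual network while redeploying the VM).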