The purpose of the contained Ansible playbook and roles is to deploy an O-RAN-compliant O-Cloud instance.

Currently supported Kubernetes platforms and infrastructure targets are:
- [OKD](https://www.okd.io/)
- KVM/libvirtd virtual machine
The following prerequisites must be installed on the host where the playbook will be run (localhost, by default):
To enable network access to cluster services, DNS address records must be defined for the following endpoints:

- `api.<cluster>.<domain>` (e.g. `api.ocloud.example.com`)
- `api-int.<cluster>.<domain>` (e.g. `api-int.ocloud.example.com`)
- `*.apps.<cluster>.<domain>` (e.g. `*.apps.ocloud.example.com`)
For all-in-one ("aio") topology clusters, all three records must resolve to the machine network IP address assigned to the node.
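With dnsmasq, for example, records for the example hostnames above could be served as follows (the node IP shown is an assumption within the default machine network):

```
# /etc/dnsmasq.d/ocloud.conf -- illustrative; 192.168.123.10 is an assumed node IP
address=/api.ocloud.example.com/192.168.123.10
address=/api-int.ocloud.example.com/192.168.123.10
address=/apps.ocloud.example.com/192.168.123.10
```

A dnsmasq `address=` entry also matches all subdomains of the given name, so the last line satisfies the `*.apps` wildcard record.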
Install Ansible per the [Installing Ansible on specific operating systems](https://docs.ansible.com/ansible/latest/installation_guide/installation_distros.html) documentation.
If deploying the O-Cloud as a virtual machine, the host must be configured as a libvirt/KVM host.
Instructions for doing so vary by Linux distribution, for example:

- [Fedora](https://docs.fedoraproject.org/en-US/quick-docs/virtualization-getting-started/)
- [Ubuntu](https://ubuntu.com/server/docs/virtualization-libvirt)
Ensure that the `libvirt-devel` package is installed, as it is a dependency of the `libvirt-python` module.
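On common distributions, the packages might be installed as follows (package names are given as assumptions and may differ by distribution and release):

```
# Fedora / CentOS Stream
sudo dnf install -y libvirt-devel python3-libvirt

# Debian / Ubuntu (the equivalent header package is named libvirt-dev)
sudo apt install -y libvirt-dev python3-libvirt
```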
## Python Modules

Install the required Python modules via the package manager (e.g. yum, dnf, apt) or by running:

```
pip install -r requirements.txt
```
## Ansible Collections
Install required Ansible collections by running:

```
ansible-galaxy collection install -r requirements.yml
```
## Variables

The following variables can be set to override deployment defaults:

- `ocloud_infra` [default="vm"]: infrastructure target
- `ocloud_platform` [default="okd"]: platform target
- `ocloud_topology` [default="aio"]: O-Cloud cluster topology
- `ocloud_cluster_name` [default="ocloud-{{ ocloud_infra }}-{{ ocloud_platform }}-{{ ocloud_topology }}"]: O-Cloud cluster name
- `ocloud_domain_name` [default="example.com"]: O-Cloud domain name
- `ocloud_net_cidr` [default="192.168.123.0/24"]: O-Cloud machine network CIDR
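As a sketch, a subset of these defaults might be overridden in `group_vars/all.yml` (all values below are illustrative, not project defaults):

```yaml
# group_vars/all.yml -- illustrative overrides
ocloud_topology: aio
ocloud_domain_name: lab.example.net   # assumed domain
ocloud_net_cidr: 192.168.130.0/24     # assumed unused local subnet
```

Note that `ocloud_cluster_name` is derived from `ocloud_infra`, `ocloud_platform`, and `ocloud_topology` unless explicitly set.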
### Infrastructure / VM
The following variables can be set to override defaults for deploying to a VM infrastructure target:

- `ocloud_infra_vm_cpus` [default=8]: number of vCPUs to allocate to the VM
- `ocloud_infra_vm_mem_gb` [default=24]: amount of RAM to allocate to the VM, in GB
- `ocloud_infra_vm_disk_gb` [default=120]: amount of disk space to allocate to the VM, in GB
- `ocloud_infra_vm_disk_dir` [default="/var/lib/libvirt/images"]: directory where VM images are stored
- `ocloud_net_name` [default="ocloud"]: virtual network name
- `ocloud_net_bridge` [default="ocloud-br"]: virtual network bridge name
- `ocloud_net_mac_prefix` [default="52:54:00:01:23"]: virtual network MAC address prefix
### Platform / OKD

The following Ansible variables must be defined in `group_vars/all.yml`:

- `ocloud_platform_okd_ssh_pubkey`: the SSH public key that will be embedded in the OKD install image and used to access the deployed nodes
Optionally, the following variables can be set to override default settings:

- `ocloud_platform_okd_release` [default=4.14.0-0.okd-2024-01-26-175629]: OKD release, as defined in [OKD releases](https://github.com/okd-project/okd/releases)
- `ocloud_platform_okd_pull_secret` [default=None]: pull secret for use with non-public image registries
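Taken together, a minimal OKD platform configuration in `group_vars/all.yml` might look like the following (the key material is a placeholder, not a usable key):

```yaml
# group_vars/all.yml -- minimal OKD settings; the pubkey below is a placeholder
ocloud_platform_okd_ssh_pubkey: "ssh-ed25519 AAAAC3...placeholder user@workstation"
# Optional: pin a different release from the OKD releases page
#ocloud_platform_okd_release: 4.14.0-0.okd-2024-01-26-175629
```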
## Execution

Execute the playbook from the base directory as follows:

```
ansible-playbook -i inventory playbooks/ocloud.yml
```
This will deploy the O-Cloud up through the bootstrap phase.
Continue to monitor the cluster deployment through completion per the Validation section below.
## Validation

Set the `KUBECONFIG` environment variable to point to the config generated by the agent-based installer, for example:

```
export KUBECONFIG=/tmp/ansible.6u4ydu5n/cfg/auth/kubeconfig
```
Monitor the progress of the installation by running the `oc get nodes`, `oc get clusteroperators`, and
`oc get clusterversion` commands until all nodes are ready and all cluster operators are available, for example:
```
$ oc get nodes
NAME       STATUS   ROLES                         AGE    VERSION
master-0   Ready    control-plane,master,worker   105m   v1.27.9+e36e183

$ oc get clusteroperators
NAME                                       VERSION                          AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.14.0-0.okd-2024-01-26-175629   True        False         False      87m
baremetal                                  4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
cloud-controller-manager                   4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
cloud-credential                           4.14.0-0.okd-2024-01-26-175629   True        False         False      116m
cluster-autoscaler                         4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
config-operator                            4.14.0-0.okd-2024-01-26-175629   True        False         False      92m
console                                    4.14.0-0.okd-2024-01-26-175629   True        False         False      88m
control-plane-machine-set                  4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
csi-snapshot-controller                    4.14.0-0.okd-2024-01-26-175629   True        False         False      96m
dns                                        4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
etcd                                       4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
image-registry                             4.14.0-0.okd-2024-01-26-175629   True        False         False      89m
ingress                                    4.14.0-0.okd-2024-01-26-175629   True        False         False      96m
insights                                   4.14.0-0.okd-2024-01-26-175629   True        False         False      91m
kube-apiserver                             4.14.0-0.okd-2024-01-26-175629   True        False         False      92m
kube-controller-manager                    4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
kube-scheduler                             4.14.0-0.okd-2024-01-26-175629   True        False         False      91m
kube-storage-version-migrator              4.14.0-0.okd-2024-01-26-175629   True        False         False      97m
machine-api                                4.14.0-0.okd-2024-01-26-175629   True        False         False      91m
machine-approver                           4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
machine-config                             4.14.0-0.okd-2024-01-26-175629   True        False         False      96m
marketplace                                4.14.0-0.okd-2024-01-26-175629   True        False         False      96m
monitoring                                 4.14.0-0.okd-2024-01-26-175629   True        False         False      85m
network                                    4.14.0-0.okd-2024-01-26-175629   True        False         False      98m
node-tuning                                4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
openshift-apiserver                        4.14.0-0.okd-2024-01-26-175629   True        False         False      89m
openshift-controller-manager               4.14.0-0.okd-2024-01-26-175629   True        False         False      90m
openshift-samples                          4.14.0-0.okd-2024-01-26-175629   True        False         False      90m
operator-lifecycle-manager                 4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
operator-lifecycle-manager-catalog         4.14.0-0.okd-2024-01-26-175629   True        False         False      94m
operator-lifecycle-manager-packageserver   4.14.0-0.okd-2024-01-26-175629   True        False         False      93m
service-ca                                 4.14.0-0.okd-2024-01-26-175629   True        False         False      97m
storage                                    4.14.0-0.okd-2024-01-26-175629   True        False         False      92m

$ oc get clusterversion
NAME      VERSION                          AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.14.0-0.okd-2024-01-26-175629   True        False         83m     Cluster version is 4.14.0-0.okd-2024-01-26-175629
```
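Instead of re-running the `oc get` commands manually, `oc wait` can block until the cluster settles; a sketch (the timeout is an arbitrary choice):

```
oc wait nodes --all --for=condition=Ready --timeout=45m
oc wait clusteroperators --all --for=condition=Available=True --timeout=45m
```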
## Troubleshooting

Refer to [Troubleshooting installation issues](https://docs.okd.io/4.14/installing/installing-troubleshooting.html) for information
on diagnosing OKD deployment failures.
## Cleanup

To clean up a VM-based deployment after a failure, or to prepare for a redeploy, execute the following as root on the libvirt/KVM host:
1. Shut down and remove the virtual machine (note that the VM name may differ if the default is overridden):

   ```
   virsh destroy master-0
   virsh undefine master-0
   ```
2. Disable and remove the virtual network (note that the network name may differ if the default is overridden):

   ```
   virsh net-destroy ocloud
   virsh net-undefine ocloud
   ```
3. Remove the virtual disk and boot media:

   ```
   rm /var/lib/libvirt/images/master-0*.{qcow2,iso}
   ```