# Kubernetes cluster and plugins deployment instructions

This document shows how to deploy a Kubernetes cluster and its plugins in two example scenarios:
  * One node (All-in-one)
  * Three nodes (one master, two worker nodes)

> Please refer to installation.md for how to install and configure the O-RAN INF platform.

## 1. One node (All-in-one) deployment example

### 1.1 Change the hostname

```
# Assuming the hostname is oran-aio and the ip address is <aio_host_ip>
# please DO NOT copy and paste, use your actual hostname and ip address
root@intel-x86-64:~# echo oran-aio > /etc/hostname
root@intel-x86-64:~# export AIO_HOST_IP="<aio_host_ip>"
root@intel-x86-64:~# echo "$AIO_HOST_IP oran-aio" >> /etc/hosts
```
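
* The new hostname takes effect after the reboot in step 1.4. If you also want the running system updated immediately, you can set it at runtime:

```
root@intel-x86-64:~# hostname oran-aio
```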

### 1.2 Disable swap for Kubernetes

```
root@intel-x86-64:~# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Mask the swap unit; adjust dev-sda4 to match your actual swap partition
root@intel-x86-64:~# systemctl mask dev-sda4.swap
```
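
To see what the sed expression does, here is a quick demo on a scratch copy of a typical swap entry (the file path and entry are made up for illustration):

```shell
# demo of the swap-disabling sed from above, on a scratch file
printf '/dev/sda4 none swap sw 0 0\n' > /tmp/fstab.demo
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo
# → #/dev/sda4 none swap sw 0 0
```

Any line containing ` swap ` gets a `#` prepended, so the entry is simply commented out rather than deleted.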

### 1.3 Set the proxy for docker (Optional)

* If you are behind a firewall, you may need to set a proxy for docker to pull images
```
root@intel-x86-64:~# HTTP_PROXY="http://<your_proxy_server_ip>:<port>"
root@intel-x86-64:~# mkdir -p /etc/systemd/system/docker.service.d/
root@intel-x86-64:~# cat << EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=$HTTP_PROXY" "NO_PROXY=localhost,127.0.0.1,localaddress,.localdomain.com,$AIO_HOST_IP,10.244.0.0/16"
EOF
```
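
* The reboot in step 1.4 will restart docker with the new settings; to apply them without waiting, reload systemd and restart docker, then check that the environment was picked up:

```
root@intel-x86-64:~# systemctl daemon-reload
root@intel-x86-64:~# systemctl restart docker
root@intel-x86-64:~# systemctl show --property=Environment docker
```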

### 1.4 Reboot the target

### 1.5 Initialize the kubernetes cluster master

```
root@oran-aio:~# kubeadm init --kubernetes-version v1.15.2 --pod-network-cidr=10.244.0.0/16
root@oran-aio:~# mkdir -p $HOME/.kube
root@oran-aio:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@oran-aio:~# chown $(id -u):$(id -g) $HOME/.kube/config
```

### 1.6 Make the master also work as a worker node

```
root@oran-aio:~# kubectl taint nodes oran-aio node-role.kubernetes.io/master-
```
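
* To confirm the master taint was removed, check the node's Taints field; the `node-role.kubernetes.io/master` entry should no longer be listed:

```
root@oran-aio:~# kubectl describe node oran-aio | grep Taints
```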

### 1.7 Deploy flannel

```
root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/flannel/kube-flannel.yml
```

* Check that the aio node is ready after flannel is successfully deployed and running

```
root@oran-aio:~# kubectl get pods --all-namespaces |grep flannel
kube-system   kube-flannel-ds-amd64-bwt52        1/1     Running   0          3m24s

root@oran-aio:~# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
oran-aio   Ready    master   3m17s   v1.15.2-dirty
```

### 1.8 Deploy kubernetes dashboard

```
root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/kubernetes-dashboard/kubernetes-dashboard-admin.rbac.yaml
root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/kubernetes-dashboard/kubernetes-dashboard.yaml
```

* Verify that the dashboard is up and running

```
# Check the pod for dashboard
root@oran-aio:~# kubectl get pods --all-namespaces |grep dashboard
kube-system   kubernetes-dashboard-5b67bf4d5f-ghg4f   1/1     Running   0          64s
```

* Access the dashboard UI in a web browser at the url: https://<aio_host_ip>:30443

* For detailed usage, please refer to the [doc for dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/)

### 1.9 Deploy Multus-CNI

```
root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/multus-cni/multus-daemonset.yml
```

* Verify that the multus-cni is up and running

```
root@oran-aio:~# kubectl get pods --all-namespaces | grep -i multus
kube-system   kube-multus-ds-amd64-hjpk4              1/1     Running   0          7m34s
```

* For further validation, please refer to the [multus quick start](https://github.com/intel/multus-cni/blob/master/doc/quickstart.md)
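
As a quick local check, you can create a simple macvlan NetworkAttachmentDefinition, adapted from the multus quickstart. The `eth0` master interface and the subnet are assumptions; use an interface and address range that exist on your node:

```shell
# write the example attachment definition (adapted from the multus quickstart)
cat << 'EOF' > /tmp/macvlan-conf.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }'
EOF
```

Apply it with `kubectl apply -f /tmp/macvlan-conf.yaml`, then a pod can request the extra interface through the `k8s.v1.cni.cncf.io/networks: macvlan-conf` annotation.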

### 1.10 Deploy NFD (node-feature-discovery)

```
root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/node-feature-discovery/nfd-master.yaml
root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/node-feature-discovery/nfd-worker-daemonset.yaml
```

* Verify that nfd-master and nfd-worker are up and running

```
root@oran-aio:~# kubectl get pods --all-namespaces |grep nfd
default       nfd-master-7v75k                        1/1     Running   0          91s
default       nfd-worker-xn797                        1/1     Running   0          24s
```

* Verify that the node is labeled by nfd:

```
root@oran-aio:~# kubectl describe nodes|grep feature.node.kubernetes
                   feature.node.kubernetes.io/cpu-cpuid.AESNI=true
                   feature.node.kubernetes.io/cpu-cpuid.AVX=true
                   feature.node.kubernetes.io/cpu-cpuid.AVX2=true
                   (...snip...)
```
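
These labels can be used in a pod spec's `nodeSelector` to pin workloads to nodes with a given feature. A sketch pod pinned to AESNI-capable nodes (the pod name and busybox image are placeholders):

```shell
# sketch: a pod that only schedules on nodes labeled with AESNI support
cat << 'EOF' > /tmp/nfd-test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfd-test-pod
spec:
  nodeSelector:
    feature.node.kubernetes.io/cpu-cpuid.AESNI: "true"
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
EOF
```

Apply it with `kubectl apply -f /tmp/nfd-test-pod.yaml` and confirm with `kubectl get pod nfd-test-pod -o wide` that it landed on the labeled node.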

### 1.11 Deploy CMK (CPU-Manager-for-Kubernetes)

* Build the CMK docker image

```
root@oran-aio:~# cd /opt/kubernetes_plugins/cpu-manager-for-kubernetes/
root@oran-aio:/opt/kubernetes_plugins/cpu-manager-for-kubernetes# make
```

* Verify that the cmk docker image is built successfully

```
root@oran-aio:/opt/kubernetes_plugins/cpu-manager-for-kubernetes# docker images|grep cmk
cmk          v1.3.1              3fec5f753b05        44 minutes ago      765MB
```

* Edit the template yaml file for your deployment:
  * The template file is: /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-cluster-init-pod-template.yaml
  * The options you may need to change:
```
    # You can change the value for the following env:
    env:
    - name: HOST_LIST
      # Change this to modify the host list to be initialized
      value: "oran-aio"
    - name: NUM_EXCLUSIVE_CORES
      # Change this to modify the value passed to `--num-exclusive-cores` flag
      value: "4"
    - name: NUM_SHARED_CORES
      # Change this to modify the value passed to `--num-shared-cores` flag
      value: "1"
    - name: CMK_IMG
      # Change this ONLY if you built the docker image with a different tag name
      value: "cmk:v1.3.1"
```
  * Or refer to the [CMK operator manual](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/operator.md)

* Apply the rbac rules, the service account and the cluster init pod:

```
root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-rbac-rules.yaml
root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-serviceaccount.yaml
root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-cluster-init-pod-template.yaml
```

* Verify that the cmk cluster init has completed and that the pods for nodereport and the webhook deployment are up and running

```
root@oran-aio:/opt/kubernetes_plugins/cpu-manager-for-kubernetes# kubectl get pods --all-namespaces |grep cmk
default       cmk-cluster-init-pod                         0/1     Completed   0          11m
default       cmk-init-install-discover-pod-oran-aio       0/2     Completed   0          10m
default       cmk-reconcile-nodereport-ds-oran-aio-qbdqb   2/2     Running     0          10m
default       cmk-webhook-deployment-6f9dd7dfb6-2lj2p      1/1     Running     0          10m
```

* For detailed usage, please refer to the [CMK user manual](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/user.md)

## 2. Three nodes deployment example

TBD