.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0
.. Copyright (C) 2019 Wind River Systems, Inc.
This document describes how to install the O-RAN INF image, gives example
configuration for better real-time performance, and walks through an example
deployment of a Kubernetes cluster and its plugins.

The audience of this document is assumed to have basic knowledge of
Yocto/OpenEmbedded Linux and container technology.
+--------------------+--------------------+--------------------+--------------------+
| **Date**           | **Ver.**           | **Author**         | **Comment**        |
+--------------------+--------------------+--------------------+--------------------+
| 2019-11-02         | 1.0.0              | Jackie Huang       | Initial version    |
+--------------------+--------------------+--------------------+--------------------+
Before starting the installation and deployment of O-RAN INF, download the ISO image or build it from source as described in the developer guide.
The following minimum hardware requirements must be met to install the O-RAN INF image:

+--------------------+----------------------------------------------------+
| **HW Aspect**      | **Requirement**                                    |
+--------------------+----------------------------------------------------+
| **# of servers**   | 1                                                  |
+--------------------+----------------------------------------------------+
Software Installation and Deployment
------------------------------------

1. Installation from the O-RAN INF ISO image
````````````````````````````````````````````

- Please see the README.md file for how to build the image.
- The image is a live ISO image with a CLI installer: oran-image-inf-host-intel-x86-64.iso
1.1 Burn the image to the USB device
''''''''''''''''''''''''''''''''''''

- Assume the USB device is /dev/sdX here

::

  $ sudo dd if=/path/to/oran-image-inf-host-intel-x86-64.iso of=/dev/sdX bs=1M
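A bad write is hard to diagnose later, so it can help to verify the device against the source image after dd completes. A minimal sketch, assuming GNU cmp and placeholder paths:

```shell
# Verify the ISO was written correctly by comparing the device contents
# against the source image (paths here are examples/placeholders).
verify_iso_write() {
    iso="$1"   # path to the source ISO
    dev="$2"   # block device (or file) the image was written to
    size=$(wc -c < "$iso")
    # Compare only the first $size bytes: the device is larger than the image.
    if cmp -n "$size" "$iso" "$dev" >/dev/null 2>&1; then
        echo "write verified"
    else
        echo "write FAILED" >&2
        return 1
    fi
}
```

Usage would be, e.g., `verify_iso_write /path/to/oran-image-inf-host-intel-x86-64.iso /dev/sdX`.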
1.2 Insert the USB device in the target to be booted.
'''''''''''''''''''''''''''''''''''''''''''''''''''''

1.3 Reboot the target from the USB device.
''''''''''''''''''''''''''''''''''''''''''

1.4 Select "Graphics console install" or "Serial console install" and press ENTER
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

1.5 Select the hard disk and press ENTER
''''''''''''''''''''''''''''''''''''''''
Note: the installer only asks which hard disk to install to; the whole disk
will be used and partitioned automatically.

- e.g. enter "sda" and press ENTER

1.6 Remove the USB device and press ENTER to reboot
'''''''''''''''''''''''''''''''''''''''''''''''''''
2. Configuration for better real time performance
`````````````````````````````````````````````````

Note: some of the tuning options are machine specific or depend on the use
case, e.g. the hugepages, isolcpus, rcu_nocbs, kthread_cpus, irqaffinity and
nohz_full settings, so please do not just copy and paste.
- Edit the grub.cfg with the following example tuning options

::

  # Note: the grub.cfg file path differs between legacy and UEFI mode
  # For legacy mode: /boot/grub/grub.cfg
  # For UEFI mode:   /boot/EFI/BOOT/grub.cfg
  grub_cfg="/boot/grub/grub.cfg"
  #grub_cfg="/boot/EFI/BOOT/grub.cfg"

  # In this example, cores 1-16 are isolated for real time processes
  root@intel-x86-64:~# rt_tuning="crashkernel=auto biosdevname=0 iommu=pt usbcore.autosuspend=-1 nmi_watchdog=0 softlockup_panic=0 intel_iommu=on cgroup_enable=memory skew_tick=1 hugepagesz=1G hugepages=4 default_hugepagesz=1G isolcpus=1-16 rcu_nocbs=1-16 kthread_cpus=0 irqaffinity=0 nohz=on nohz_full=1-16 intel_idle.max_cstate=0 processor.max_cstate=1 intel_pstate=disable nosoftlockup idle=poll mce=ignore_ce"

  # optionally add the console setting
  root@intel-x86-64:~# console="console=ttyS0,115200"

  root@intel-x86-64:~# sed -i "/linux / s/$/ $console $rt_tuning/" $grub_cfg

  root@intel-x86-64:~# reboot
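After the reboot, the applied options can be confirmed from the running kernel's command line. A small sketch that checks a few of the example options above (the file argument exists only so the check can also be exercised against a saved copy of /proc/cmdline):

```shell
# Check that the expected tuning options made it onto the kernel command line.
check_cmdline() {
    cmdline_file="${1:-/proc/cmdline}"
    for opt in isolcpus=1-16 rcu_nocbs=1-16 nohz_full=1-16 hugepagesz=1G; do
        if ! grep -qw -- "$opt" "$cmdline_file"; then
            echo "missing: $opt" >&2
            return 1
        fi
    done
    echo "all tuning options present"
}
```

Running `check_cmdline` with no argument inspects the live /proc/cmdline.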
3. Kubernetes cluster and plugins deployment instructions (All-in-one)
``````````````````````````````````````````````````````````````````````

This section shows how to deploy a Kubernetes cluster and its plugins in an
all-in-one example scenario after the installation above.
3.1 Change the hostname (Optional)
''''''''''''''''''''''''''''''''''

::

  # Assuming the hostname is oran-aio and the ip address is <aio_host_ip>;
  # please DO NOT copy and paste, use your actual hostname and ip address
  root@intel-x86-64:~# echo oran-aio > /etc/hostname
  root@intel-x86-64:~# export AIO_HOST_IP="<aio_host_ip>"
  root@intel-x86-64:~# echo "$AIO_HOST_IP oran-aio" >> /etc/hosts
3.2 Disable swap for Kubernetes
'''''''''''''''''''''''''''''''

::

  root@intel-x86-64:~# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
  root@intel-x86-64:~# systemctl mask dev-sda4.swap
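The sed expression above simply comments out every fstab line that mounts swap; it can be rehearsed on a copy of the file before touching the real /etc/fstab. A sketch:

```shell
# Comment out all swap entries in the given fstab-format file,
# then confirm no uncommented swap line remains.
disable_swap_in() {
    fstab="$1"
    sed -i '/ swap / s/^\(.*\)$/#\1/g' "$fstab"
    ! grep -q '^[^#].* swap ' "$fstab"
}
```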
3.3 Set the proxy for docker (Optional)
'''''''''''''''''''''''''''''''''''''''

- If you are behind a firewall, you may need to set a proxy for docker to pull images

::

  root@intel-x86-64:~# HTTP_PROXY="http://<your_proxy_server_ip>:<port>"
  root@intel-x86-64:~# mkdir /etc/systemd/system/docker.service.d/
  root@intel-x86-64:~# cat << EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
  [Service]
  Environment="HTTP_PROXY=$HTTP_PROXY" "NO_PROXY=localhost,127.0.0.1,localaddress,.localdomain.com,$AIO_HOST_IP,10.244.0.0/16"
  EOF
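The drop-in only takes effect after systemd reloads and docker restarts (`systemctl daemon-reload; systemctl restart docker`). The generation step can be sketched as a function that writes into a reviewable directory first; the directory parameter is for illustration only, the real path is /etc/systemd/system/docker.service.d:

```shell
# Write a docker systemd proxy drop-in under a configurable root
# so the file can be reviewed before installing it for real.
write_docker_proxy() {
    dir="$1"; proxy="$2"; no_proxy="$3"
    mkdir -p "$dir"
    cat <<EOF > "$dir/http-proxy.conf"
[Service]
Environment="HTTP_PROXY=$proxy" "NO_PROXY=$no_proxy"
EOF
}
# After installing the real file:
#   systemctl daemon-reload && systemctl restart docker
```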
3.4 Reboot the target
'''''''''''''''''''''

::

  root@intel-x86-64:~# reboot
3.5 Initialize kubernetes cluster master
''''''''''''''''''''''''''''''''''''''''

::

  root@oran-aio:~# kubeadm init --kubernetes-version v1.16.2 --pod-network-cidr=10.244.0.0/16
  root@oran-aio:~# mkdir -p $HOME/.kube
  root@oran-aio:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  root@oran-aio:~# chown $(id -u):$(id -g) $HOME/.kube/config
3.6 Make the master also work as a worker node
''''''''''''''''''''''''''''''''''''''''''''''

::

  root@oran-aio:~# kubectl taint nodes oran-aio node-role.kubernetes.io/master-
3.7 Deploy the flannel pod network
''''''''''''''''''''''''''''''''''

::

  root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/flannel/kube-flannel.yml

Check that the aio node is ready after flannel is successfully deployed and running

::

  root@oran-aio:~# kubectl get pods --all-namespaces |grep flannel
  kube-system   kube-flannel-ds-amd64-bwt52   1/1   Running   0   3m24s

  root@oran-aio:~# kubectl get nodes
  NAME       STATUS   ROLES    AGE     VERSION
  oran-aio   Ready    master   3m17s   v1.15.2-dirty

- For details about the flannel pod network, please refer to `Flannel`_
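Right after flannel is applied the node may briefly stay NotReady; a small polling helper can wait for it. A sketch (the node name and retry count are examples; it relies on kubectl being configured as in step 3.5):

```shell
# Poll `kubectl get nodes` until the named node reports Ready.
wait_node_ready() {
    node="$1"; tries="${2:-30}"
    while [ "$tries" -gt 0 ]; do
        if kubectl get nodes --no-headers | \
           awk -v n="$node" '$1 == n && $2 == "Ready" { found=1 } END { exit !found }'; then
            echo "$node is Ready"
            return 0
        fi
        tries=$((tries - 1))
        sleep 2
    done
    echo "$node not Ready in time" >&2
    return 1
}
```

Usage: `wait_node_ready oran-aio`.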
3.8 Deploy kubernetes dashboard
'''''''''''''''''''''''''''''''

Deploy the kubernetes dashboard

::

  root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/kubernetes-dashboard/kubernetes-dashboard-admin.rbac.yaml
  root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/kubernetes-dashboard/kubernetes-dashboard.yaml

Verify that the dashboard is up and running

::

  # Check the pod for the dashboard
  root@oran-aio:~# kubectl get pods --all-namespaces |grep dashboard
  kube-system   kubernetes-dashboard-5b67bf4d5f-ghg4f   1/1   Running   0   64s

Access the dashboard UI in a web browser via the https url; the port number is 30443.

- For detailed usage, please refer to the `Doc for dashboard`_

.. _`Doc for dashboard`: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
3.9 Deploy Multus-CNI
'''''''''''''''''''''

::

  root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/multus-cni/multus-daemonset.yml

Verify that multus-cni is up and running

::

  root@oran-aio:~# kubectl get pods --all-namespaces | grep -i multus
  kube-system   kube-multus-ds-amd64-hjpk4   1/1   Running   0   7m34s

- For further validation, please refer to the `Multus-CNI quick start`_

.. _`Multus-CNI quick start`: https://github.com/intel/multus-cni/blob/master/doc/quickstart.md
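As an additional check, the Multus daemonset is expected to install a CNI configuration file on the host. A sketch that looks for it; the default directory /etc/cni/net.d is an assumption here, parameterized so the check can be exercised elsewhere:

```shell
# Check for a multus CNI conf file in the given CNI config directory.
multus_conf_present() {
    dir="${1:-/etc/cni/net.d}"
    ls "$dir" | grep -q 'multus' && echo "multus CNI conf found in $dir"
}
```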
3.10 Deploy NFD (node-feature-discovery)
''''''''''''''''''''''''''''''''''''''''

::

  root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/node-feature-discovery/nfd-master.yaml
  root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/node-feature-discovery/nfd-worker-daemonset.yaml

Verify that nfd-master and nfd-worker are up and running

::

  root@oran-aio:~# kubectl get pods --all-namespaces |grep nfd
  default   nfd-master-7v75k   1/1   Running   0   91s
  default   nfd-worker-xn797   1/1   Running   0   24s

Verify that the node is labeled by nfd:

::

  root@oran-aio:~# kubectl describe nodes|grep feature.node.kubernetes
  feature.node.kubernetes.io/cpu-cpuid.AESNI=true
  feature.node.kubernetes.io/cpu-cpuid.AVX=true
  feature.node.kubernetes.io/cpu-cpuid.AVX2=true
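Once the labels are in place they can be used as node selectors. For example, a sketch that lists nodes advertising AVX2 (label name taken from the output above):

```shell
# List node names carrying the NFD AVX2 label.
avx2_nodes() {
    kubectl get nodes -l feature.node.kubernetes.io/cpu-cpuid.AVX2=true \
        -o custom-columns=NAME:.metadata.name --no-headers
}
```

The same label can be used as a `nodeSelector` in a pod spec to pin AVX2-dependent workloads.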
3.11 Deploy SRIOV CNI
'''''''''''''''''''''

Provision VF drivers and devices

::

  root@oran-aio:~/dpdk-18.08/usertools# lspci -D |grep 82599
  0000:04:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
  0000:04:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

Correlate the PF devices to their eth interfaces and bring them up

::

  root@oran-aio:~# ethtool -i eth4 |grep bus-info
  bus-info: 0000:04:00.0
  root@oran-aio:~# ethtool -i eth5 |grep bus-info
  bus-info: 0000:04:00.1
  root@oran-aio:~# ifconfig eth4 up
  root@oran-aio:~# ifconfig eth5 up

Load the VF driver modules

::

  root@oran-aio:~# modprobe ixgbevf
  root@oran-aio:~# modprobe uio
  root@oran-aio:~# modprobe igb-uio
  root@oran-aio:~# modprobe vfio
  root@oran-aio:~# modprobe vfio-pci
  root@oran-aio:~# lsmod |grep ixgbevf
  root@oran-aio:~# lsmod |grep vfio
  vfio_virqfd            16384  1 vfio_pci
  vfio_iommu_type1       24576  0
  vfio                   24576  2 vfio_iommu_type1,vfio_pci
  irqbypass              16384  2 vfio_pci,kvm
Bind VF drivers to VF devices

::

  root@oran-aio:~# cat /sys/bus/pci/devices/0000\:04\:00.0/sriov_totalvfs
  root@oran-aio:~# cat /sys/bus/pci/devices/0000\:04\:00.1/sriov_totalvfs
  root@oran-aio:~# cat /sys/bus/pci/devices/0000\:04\:00.0/sriov_numvfs
  root@oran-aio:~# cat /sys/bus/pci/devices/0000\:04\:00.1/sriov_numvfs
  root@oran-aio:~# echo 8 > /sys/bus/pci/devices/0000\:04\:00.0/sriov_numvfs
  root@oran-aio:~# echo 8 > /sys/bus/pci/devices/0000\:04\:00.1/sriov_numvfs

  root@oran-aio:~# lspci -D |grep 82599
  0000:04:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
  0000:04:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
  0000:04:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:11.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:11.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:11.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:11.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:11.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:11.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:11.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  0000:04:11.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

  root@oran-aio:~# dpdk-devbind -b vfio-pci 0000:04:11.0 0000:04:11.1 0000:04:11.2 0000:04:11.3 0000:04:11.4 0000:04:11.5 0000:04:11.6 0000:04:11.7

  root@oran-aio:~# dpdk-devbind --status-dev net

  Network devices using DPDK-compatible driver
  ============================================
  0000:04:11.0 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
  0000:04:11.1 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
  0000:04:11.2 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
  0000:04:11.3 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
  0000:04:11.4 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
  0000:04:11.5 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
  0000:04:11.6 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio
  0000:04:11.7 '82599 Ethernet Controller Virtual Function 10ed' drv=vfio-pci unused=ixgbevf,igb_uio

  Network devices using kernel driver
  ===================================
  0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eth4 drv=ixgbe unused=igb_uio,vfio-pci
  0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eth5 drv=ixgbe unused=igb_uio,vfio-pci
  0000:04:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=eth6 drv=ixgbevf unused=igb_uio,vfio-pci
  0000:04:10.1 '82599 Ethernet Controller Virtual Function 10ed' if=eth14 drv=ixgbevf unused=igb_uio,vfio-pci
  0000:04:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=eth7 drv=ixgbevf unused=igb_uio,vfio-pci
  0000:04:10.3 '82599 Ethernet Controller Virtual Function 10ed' if=eth15 drv=ixgbevf unused=igb_uio,vfio-pci
  0000:04:10.4 '82599 Ethernet Controller Virtual Function 10ed' if=eth8 drv=ixgbevf unused=igb_uio,vfio-pci
  0000:04:10.5 '82599 Ethernet Controller Virtual Function 10ed' if=eth16 drv=ixgbevf unused=igb_uio,vfio-pci
  0000:04:10.6 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=igb_uio,vfio-pci
  0000:04:10.7 '82599 Ethernet Controller Virtual Function 10ed' if=eth17 drv=ixgbevf unused=igb_uio,vfio-pci
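The VF provisioning steps above can be condensed into a small helper that checks sriov_totalvfs before writing sriov_numvfs. A sketch; the sysfs root is parameterized only so the logic can be exercised outside a real host:

```shell
# Create the requested number of VFs on a PF, refusing requests
# beyond what the device reports in sriov_totalvfs.
create_vfs() {
    pf="$1"; want="$2"; sys="${3:-/sys}"
    total=$(cat "$sys/bus/pci/devices/$pf/sriov_totalvfs")
    if [ "$want" -gt "$total" ]; then
        echo "$pf supports only $total VFs" >&2
        return 1
    fi
    echo "$want" > "$sys/bus/pci/devices/$pf/sriov_numvfs"
}
```

Usage on a real host would be, e.g., `create_vfs 0000:04:00.0 8`.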
Build the SRIOV CNI plugin and the SRIOV network device plugin

::

  root@oran-aio:~# HTTP_PROXY="http://<your_proxy_server_ip>:<port>"

  root@oran-aio:~# wget https://dl.google.com/go/go1.14.1.linux-amd64.tar.gz
  root@oran-aio:~# tar -zxvf go1.14.1.linux-amd64.tar.gz
  root@oran-aio:~# PATH=$PATH:/root/go/bin/
  root@oran-aio:~# git clone https://github.com/intel/sriov-cni
  root@oran-aio:~# cd sriov-cni
  root@oran-aio:~/sriov-cni# make
  root@oran-aio:~/sriov-cni# cp build/sriov /opt/cni/bin

  root@oran-aio:~# cd ~/
  root@oran-aio:~# git clone https://github.com/intel/sriov-network-device-plugin
  root@oran-aio:~# cd sriov-network-device-plugin
  root@oran-aio:~/sriov-network-device-plugin# git fetch origin pull/196/head:fpgadp
  root@oran-aio:~/sriov-network-device-plugin# git checkout fpgadp
  root@oran-aio:~/sriov-network-device-plugin# make image
  root@oran-aio:~/sriov-network-device-plugin# docker images |grep sriov-device-plugin
  nfvpe/sriov-device-plugin   latest   f4e6bbefad67   5 minutes ago   25.5MB
Create the configMap for the SRIOV device plugin

::

  root@oran-aio:~/sriov-network-device-plugin# cat <<EOF> deployments/sriovdp_configMap.yaml
  namespace: kube-system
  "resourceName": "intel_sriov_netdevice",
  "devices": ["154c", "10ed"],
  "drivers": ["i40evf", "ixgbevf"]
  "resourceName": "intel_sriov_dpdk",
  "devices": ["154c", "10ed"],
  "drivers": ["vfio-pci"]
  "resourceName": "mlnx_sriov_rdma",
  "drivers": ["mlx5_ib"]
Deploy the device plugin and verify that it is up and running

::

  root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/sriovdp_configMap.yaml
  root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/k8s-v1.16/sriovdp-daemonset.yaml

  root@oran-aio:~/sriov-network-device-plugin# kubectl get pods --all-namespaces |grep kube-sriov-device-plugin
  kube-system   kube-sriov-device-plugin-amd64-6lm8n   1/1   Running   0   12m

  root@oran-aio:~/sriov-network-device-plugin# kubectl -n kube-system logs kube-sriov-device-plugin-amd64-6lm8n
  I0327 02:14:46.488409   14488 manager.go:115] Creating new ResourcePool: intel_sriov_netdevice
  I0327 02:14:46.488427   14488 factory.go:144] device added: [pciAddr: 0000:04:10.0, vendor: 8086, device: 10ed, driver: ixgbevf]
  I0327 02:14:46.488439   14488 factory.go:144] device added: [pciAddr: 0000:04:10.1, vendor: 8086, device: 10ed, driver: ixgbevf]
  I0327 02:14:46.488446   14488 factory.go:144] device added: [pciAddr: 0000:04:10.2, vendor: 8086, device: 10ed, driver: ixgbevf]
  I0327 02:14:46.488459   14488 factory.go:144] device added: [pciAddr: 0000:04:10.3, vendor: 8086, device: 10ed, driver: ixgbevf]
  I0327 02:14:46.488467   14488 factory.go:144] device added: [pciAddr: 0000:04:10.4, vendor: 8086, device: 10ed, driver: ixgbevf]
  I0327 02:14:46.488473   14488 factory.go:144] device added: [pciAddr: 0000:04:10.5, vendor: 8086, device: 10ed, driver: ixgbevf]
  I0327 02:14:46.488479   14488 factory.go:144] device added: [pciAddr: 0000:04:10.6, vendor: 8086, device: 10ed, driver: ixgbevf]
  I0327 02:14:46.488485   14488 factory.go:144] device added: [pciAddr: 0000:04:10.7, vendor: 8086, device: 10ed, driver: ixgbevf]
  I0327 02:14:46.488502   14488 manager.go:128] New resource server is created for intel_sriov_netdevice ResourcePool
  I0327 02:14:46.488511   14488 manager.go:114]
  I0327 02:14:46.488516   14488 manager.go:115] Creating new ResourcePool: intel_sriov_dpdk
  I0327 02:14:46.488529   14488 factory.go:144] device added: [pciAddr: 0000:04:11.0, vendor: 8086, device: 10ed, driver: vfio-pci]
  I0327 02:14:46.488538   14488 factory.go:144] device added: [pciAddr: 0000:04:11.1, vendor: 8086, device: 10ed, driver: vfio-pci]
  I0327 02:14:46.488545   14488 factory.go:144] device added: [pciAddr: 0000:04:11.2, vendor: 8086, device: 10ed, driver: vfio-pci]
  I0327 02:14:46.488551   14488 factory.go:144] device added: [pciAddr: 0000:04:11.3, vendor: 8086, device: 10ed, driver: vfio-pci]
  I0327 02:14:46.488562   14488 factory.go:144] device added: [pciAddr: 0000:04:11.4, vendor: 8086, device: 10ed, driver: vfio-pci]
  I0327 02:14:46.488569   14488 factory.go:144] device added: [pciAddr: 0000:04:11.5, vendor: 8086, device: 10ed, driver: vfio-pci]
  I0327 02:14:46.488575   14488 factory.go:144] device added: [pciAddr: 0000:04:11.6, vendor: 8086, device: 10ed, driver: vfio-pci]
  I0327 02:14:46.488581   14488 factory.go:144] device added: [pciAddr: 0000:04:11.7, vendor: 8086, device: 10ed, driver: vfio-pci]
  I0327 02:14:46.488591   14488 manager.go:128] New resource server is created for intel_sriov_dpdk ResourcePool
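Once the daemonset has registered the pools, the node should also advertise them as allocatable resources. A quick-check sketch (resource names taken from the configMap above):

```shell
# Show the SR-IOV resource pools advertised by the node.
sriov_allocatable() {
    node="$1"
    kubectl get node "$node" -o yaml | \
        grep -E 'intel.com/(intel_sriov_netdevice|intel_sriov_dpdk)'
}
```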
Test intel_sriov_netdevice

::

  root@oran-aio:~/sriov-network-device-plugin# cat <<EOF> deployments/sriov-crd.yaml
  apiVersion: "k8s.cni.cncf.io/v1"
  kind: NetworkAttachmentDefinition
  k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
  "cniVersion": "0.3.1",
  "name": "sriov-network",
  "type": "host-local",
  "subnet": "10.56.217.0/24",
  "gateway": "10.56.217.1"

  root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/sriov-crd.yaml
  root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/pod-tc1.yaml
  root@oran-aio:~/sriov-network-device-plugin# kubectl get pods |grep testpod1
  root@oran-aio:~/sriov-network-device-plugin# ip link |grep 'vlan 100'
      vf 3 MAC a6:01:0a:34:39:e1, vlan 100, spoof checking on, link-state auto, trust off, query_rss off

  root@oran-aio:~/sriov-network-device-plugin# kubectl exec -it testpod1 -- ip addr show |grep a6:01:0a:34:39:e1 -C 2
      valid_lft forever preferred_lft forever
  21: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
      link/ether a6:01:0a:34:39:e1 brd ff:ff:ff:ff:ff:ff
      inet 10.56.217.3/24 brd 10.56.217.255 scope global net1
      valid_lft forever preferred_lft forever
Test intel_sriov_dpdk

::

  root@oran-aio:~/sriov-network-device-plugin# cat <<EOF> deployments/sriovdpdk-crd.yaml
  apiVersion: "k8s.cni.cncf.io/v1"
  kind: NetworkAttachmentDefinition
  k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_dpdk
  "cniVersion": "0.3.1",
  "name": "sriov1-vfio"

  root@oran-aio:~/sriov-network-device-plugin# cat <<EOF> deployments/dpdk-1g.yaml
  k8s.v1.cni.cncf.io/networks: '[
    {"name": "sriov1-vfio"},
    {"name": "sriov1-vfio"}
  imagePullPolicy: IfNotPresent
  - mountPath: /mnt/huge-2048
    mountPath: /lib/modules
  command: ["/bin/bash", "-ec", "sleep infinity"]
  intel.com/intel_sriov_dpdk: '2'
  intel.com/intel_sriov_dpdk: '2'
  - name: admin-registry-secret
  - name: admin-registry-secret

  root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/sriovdpdk-crd.yaml
  root@oran-aio:~/sriov-network-device-plugin# kubectl create -f deployments/dpdk-1g.yaml

  root@oran-aio:~/sriov-network-device-plugin# kubectl get pods | grep dpdk
  dpdk-1g   1/1   Running   0   13s

  root@oran-aio:~/sriov-network-device-plugin# ip link |grep 101
      vf 7 MAC 00:00:00:00:00:00, vlan 101, spoof checking on, link-state auto, trust off, query_rss off
      vf 6 MAC 00:00:00:00:00:00, vlan 101, spoof checking on, link-state auto, trust off, query_rss off
Build and run the DPDK helloworld example in the pod

::

  # Build the following packages and copy them to the target server:
  #   bitbake bison; bitbake kernel-devsrc
  root@oran-aio:~/sriov-network-device-plugin# rpm -ivh ~/bison-3.0.4-r0.corei7_64.rpm
  root@oran-aio:~/sriov-network-device-plugin# rpm -ivh ~/kernel-devsrc-1.0-r0.intel_x86_64.rpm

  root@oran-aio:~/sriov-network-device-plugin# kubectl exec -it $(kubectl get pods -o wide | grep dpdk | awk '{ print $1 }') -- /bin/bash
  [root@dpdk-1g /]# export |grep INTEL
  declare -x PCIDEVICE_INTEL_COM_INTEL_SRIOV_DPDK="0000:04:11.6,0000:04:11.5"

  [root@dpdk-1g /]# yum -y install wget ncurses-devel unzip libpcap-devel libedit-devel pciutils lua-devel

  [root@dpdk-1g /]# cd /opt
  [root@dpdk-1g /]# wget https://fast.dpdk.org/rel/dpdk-18.08.tar.xz
  [root@dpdk-1g /]# tar xf dpdk-18.08.tar.xz
  [root@dpdk-1g /]# cd dpdk-18.08/
  [root@dpdk-1g /]# sed -i 's/CONFIG_RTE_EAL_IGB_UIO=y/CONFIG_RTE_EAL_IGB_UIO=n/g' config/common_linuxapp
  [root@dpdk-1g /]# sed -i 's/CONFIG_RTE_LIBRTE_KNI=y/CONFIG_RTE_LIBRTE_KNI=n/g' config/common_linuxapp
  [root@dpdk-1g /]# sed -i 's/CONFIG_RTE_KNI_KMOD=y/CONFIG_RTE_KNI_KMOD=n/g' config/common_linuxapp
  [root@dpdk-1g /]# export RTE_SDK=/opt/dpdk-18.08
  [root@dpdk-1g /]# export RTE_TARGET=x86_64-native-linuxapp-gcc
  [root@dpdk-1g /]# export RTE_BIND=$RTE_SDK/usertools/dpdk-devbind.py
  [root@dpdk-1g /]# make install T=$RTE_TARGET
  [root@dpdk-1g /]# cd examples/helloworld
  [root@dpdk-1g /]# make
  [root@dpdk-1g /]# NR_hugepages=2
  [root@dpdk-1g /]# ./build/helloworld -l 1-4 -n 4 -m $NR_hugepages
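helloworld will fail at EAL initialization if no free hugepages are available inside the pod, so a pre-check of HugePages_Free can save a debugging round. A sketch; the meminfo path is parameterized so the helper can also be tested against a saved copy:

```shell
# Print the number of free hugepages reported by meminfo.
free_hugepages() {
    meminfo="${1:-/proc/meminfo}"
    awk '/^HugePages_Free:/ { print $2 }' "$meminfo"
}
```

E.g. run `[ "$(free_hugepages)" -ge 2 ]` before launching the app.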
3.12 Deploy CMK (CPU-Manager-for-Kubernetes)
''''''''''''''''''''''''''''''''''''''''''''

Build the CMK docker image

::

  root@oran-aio:~# cd /opt/kubernetes_plugins/cpu-manager-for-kubernetes/
  root@oran-aio:/opt/kubernetes_plugins/cpu-manager-for-kubernetes# make

Verify that the cmk docker image is built successfully

::

  root@oran-aio:/opt/kubernetes_plugins/cpu-manager-for-kubernetes# docker images|grep cmk
  cmk   v1.3.1   3fec5f753b05   44 minutes ago   765MB

Edit the template yaml file for your deployment:

- The template file is: /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-cluster-init-pod-template.yaml
- The options you may need to change:

::

  # You can change the value for the following env:
  # Change this to modify the host list to be initialized
  - name: NUM_EXCLUSIVE_CORES
  # Change this to modify the value passed to the `--num-exclusive-cores` flag
  - name: NUM_SHARED_CORES
  # Change this to modify the value passed to the `--num-shared-cores` flag
  # Change this ONLY if you built the docker image with a different tag name

Or you can also refer to the `CMK operator manual`_

.. _`CMK operator manual`: https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/operator.md

Deploy CMK from the yaml files

::

  root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-rbac-rules.yaml
  root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-serviceaccount.yaml
  root@oran-aio:~# kubectl apply -f /etc/kubernetes/plugins/cpu-manager-for-kubernetes/cmk-cluster-init-pod-template.yaml

Verify that the cmk cluster init has completed and that the pods for the nodereport and webhook deployments are up and running

::

  root@oran-aio:/opt/kubernetes_plugins/cpu-manager-for-kubernetes# kubectl get pods --all-namespaces |grep cmk
  default   cmk-cluster-init-pod                         0/1   Completed   0   11m
  default   cmk-init-install-discover-pod-oran-aio       0/2   Completed   0   10m
  default   cmk-reconcile-nodereport-ds-oran-aio-qbdqb   2/2   Running     0   10m
  default   cmk-webhook-deployment-6f9dd7dfb6-2lj2p      1/1   Running     0   10m

- For detailed usage, please refer to the `CMK user manual`_

.. _`CMK user manual`: https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/user.md
- `Doc for dashboard`_
- `Multus-CNI quick start`_
- `CMK operator manual`_

.. _`Flannel`: https://github.com/coreos/flannel/blob/master/README.md