.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0
.. Copyright (C) 2021-2022 Wind River Systems, Inc.
INF O2 Service User Guide
=========================
This guide introduces the process that makes the INF O2 interface work.

- Assume you have an O2 service running in an INF platform environment.
.. code-block:: bash

   export OAM_IP=<INF_OAM_IP>
- Discover the INF platform inventory

- INF platform auto discovery
After the INF O2 service is installed, it automatically discovers the
INF platform through the parameters given in
“*o2service-override.yaml*”.

The command below gets the INF platform information as an O-Cloud:
.. code-block:: bash

   curl -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/" \
     -H 'accept: application/json'
- Resource pool

One INF platform has one resource pool; all the resources that belong
to the INF platform are organized into this resource pool.

Get the resource pool information through this interface:
.. code-block:: bash

   curl -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/resourcePools" \
     -H 'accept: application/json'

   # export the resource pool id
   export resourcePoolId=$(curl -X 'GET' "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/resourcePools" -H 'accept: application/json' -H 'X-Fields: resourcePoolId' 2>/dev/null | jq .[].resourcePoolId | xargs echo)

   echo ${resourcePoolId} # check the exported resource pool id
- Resource type

A resource type defines what kind a specified resource is, such as a
physical machine, memory, or CPU.

Show all resource types:
.. code-block:: bash

   curl -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/resourceTypes" \
     -H 'accept: application/json'
- Resource

Get the list of all resources; the value of *resourcePoolId* comes
from the result of the resource pool interface:
.. code-block:: bash

   curl -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/resourcePools/${resourcePoolId}/resources" \
     -H 'accept: application/json'
Get the details of one resource; the specific resource id you want to
check needs to be exported first:
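This guide does not show the export itself; one way to pick a resource id, mirroring the *resourcePoolId* export above, is the sketch below. The *resourceId* field name is an assumption based on the inventory API's naming convention:

```shell
# Sketch: take the first resource id from the resource list
# (assumes entries expose a "resourceId" field, per the API naming convention)
export resourceId=$(curl -s -X 'GET' \
  "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/resourcePools/${resourcePoolId}/resources" \
  -H 'accept: application/json' | jq --raw-output '.[0].resourceId')

echo ${resourceId} # check the exported resource id
```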
.. code-block:: bash

   curl -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/resourcePools/${resourcePoolId}/resources/${resourceId}" \
     -H 'accept: application/json'
- Deployment manager services endpoint

Use the API below to check the Deployment Manager Service (DMS)
information related to this IMS:
.. code-block:: bash

   curl -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers" \
     -H 'accept: application/json'
- Provisioning the INF platform with the SMO endpoint configuration

Assume you have an SMO; configure the INF platform with the SMO
endpoint address. This provisioning of the INF O2 service makes a
request from the INF O2 service to the SMO, so that the SMO knows the
O2 service is working.

It requires the SMO to have an API like
“*http(s)://SMO_HOST:SMO_PORT/registration*” which can accept
JSON-format data.
.. code-block:: bash

   curl -X 'POST' \
     'http://'${OAM_IP}':30205/provision/v1/smo-endpoint' \
     -H 'accept: application/json' \
     -H 'Content-Type: application/json' \
     -d '{
       "endpoint": "http://<SMO_HOST>:<SMO_PORT>/registration"
     }'
- Subscribe to INF platform resource change notifications

Assume you have an SMO, and the SMO has an API that can receive
callback requests.

- Create a subscription in the INF O2 IMS:
.. code-block:: bash

   curl -X 'POST' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/subscriptions" \
     -H 'accept: application/json' \
     -H 'Content-Type: application/json' \
     -d '{
       "callback": "http://SMO/address/to/callback",
       "consumerSubscriptionId": "<ConsumerIdHelpSmoToIdentify>",
       "filter": "<ResourceTypeNameSplitByComma,EmptyToGetAll>"
     }'
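To confirm the subscription was created, the subscriptions can be listed again. This is a sketch, assuming a standard GET on the same endpoint and a *subscriptionId* field in each entry (neither is confirmed by this guide):

```shell
# Sketch: list current subscriptions and print their ids
# (assumes each entry carries a "subscriptionId" field)
subscriptions=$(curl -s -X 'GET' \
  "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/subscriptions" \
  -H 'accept: application/json')

echo "${subscriptions}" | jq --raw-output '.[].subscriptionId'
```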
- Handle resource change notifications

When the SMO callback API receives a notification that a resource of
the INF platform has changed, it can use the URL carried in the
notification to get the latest resource information and update its
own database.
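As a sketch of that flow: extract the reference to the changed object from the notification payload, then query the IMS with it. The *objectRef* field name and payload shape here are illustrative assumptions, not confirmed by this guide:

```shell
# Sketch: pull the changed object's reference out of a notification payload
# NOTE: "objectRef" and the payload shape are assumptions for illustration
NOTIFICATION='{"objectRef": "/o2ims_infrastructureInventory/v1/resourcePools/pool-1/resources/res-1"}'

objectRef=$(echo "${NOTIFICATION}" | jq --raw-output '.objectRef')
echo "fetch latest state from: http://${OAM_IP}:30205${objectRef}"

# then query the latest resource information with the reference, e.g.:
# curl -s -X 'GET' "http://${OAM_IP}:30205${objectRef}" -H 'accept: application/json'
```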
- Orchestrate a CNF with a helm chart

In this sample, we prepare a firewall chart to test CNF orchestration.

Some preparation is needed to make the helm repo work and to include
our firewall chart in the repository.

Get the DMS id from the INF O2 service, and set it into a bash
environment variable:
.. code-block:: bash

   curl --location --request GET "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers"

   export dmsId=$(curl --location --request GET "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers" 2>/dev/null | jq .[].deploymentManagerId | xargs echo)

   echo ${dmsId} # check the exported DMS id
Use helm to deploy a chartmuseum to the INF platform:
.. code-block:: bash

   helm repo add chartmuseum https://chartmuseum.github.io/charts

   helm pull chartmuseum/chartmuseum # download chartmuseum-3.4.0.tgz to local
   tar zxvf chartmuseum-3.4.0.tgz

   # expose chartmuseum on NodePort 30330 and enable the API so charts can be pushed
   cat <<EOF>chartmuseum-override.yaml
   env:
     open:
       DISABLE_API: false
   service:
     type: NodePort
     nodePort: 30330
   EOF

   helm install chartmuseumrepo chartmuseum/chartmuseum -f chartmuseum-override.yaml
Update the helm repo and add the chartmuseum into the repositories:

.. code-block:: bash

   helm repo add o2imsrepo http://${NODE_IP}:30330
   helm repo update
Download the firewall chart and push it into the repository:

.. code-block:: bash

   git clone https://github.com/biny993/firewall-host-netdevice.git
   tar -zcvf firewall-host-netdevice-1.0.0.tgz firewall-host-netdevice/
   helm plugin install https://github.com/chartmuseum/helm-push.git
   helm cm-push firewall-host-netdevice-1.0.0.tgz o2imsrepo

   helm repo update
   helm search repo firewall
Set up the host net devices on the INF node:

.. code-block:: bash

   ssh sysadmin@<INF OAM IP>
   sudo ip link add name veth11 type veth peer name veth12
   sudo ip link add name veth21 type veth peer name veth22
   sudo ip link | grep veth
- Create a NfDeploymentDescriptor on the INF O2 DMS

.. code-block:: bash

   curl --location --request POST "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeploymentDescriptor" \
   --header 'Content-Type: application/json' \
   --data-raw '{
     "description": "demo nf deployment descriptor",
     "artifactRepoUrl": "http://'${NODE_IP}':30330",
     "artifactName": "firewall-host-netdevice",
     "inputParams":
     "{\n  \"image\": {\n  \"repository\": \"ubuntu\",\n  \"tag\": 18.04,\n  \"pullPolicy\": \"IfNotPresent\"\n  },\n  \"resources\": {\n  \"cpu\": 2,\n  \"memory\": \"2Gi\",\n  \"hugepage\": \"0Mi\",\n  \"unprotectedNetPortVpg\": \"veth11\",\n  \"unprotectedNetPortVfw\": \"veth12\",\n  \"unprotectedNetCidr\": \"10.10.1.0/24\",\n  \"unprotectedNetGwIp\": \"10.10.1.1\",\n  \"protectedNetPortVfw\": \"veth21\",\n  \"protectedNetPortVsn\": \"veth22\",\n  \"protectedNetCidr\": \"10.10.2.0/24\",\n  \"protectedNetGwIp\": \"10.10.2.1\",\n  \"vfwPrivateIp0\": \"10.10.1.1\",\n  \"vfwPrivateIp1\": \"10.10.2.1\",\n  \"vpgPrivateIp0\": \"10.10.1.2\",\n  \"vsnPrivateIp0\": \"10.10.2.2\"\n  }\n}",
     "outputParams": "{\"output1\": 100}"
   }'

   curl --location --request GET "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeploymentDescriptor"

   export descId=$(curl -X 'GET' "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeploymentDescriptor" -H 'accept: application/json' -H 'X-Fields: id' 2>/dev/null | jq .[].id | xargs echo)

   echo ${descId} # check the exported descriptor id
- Create a NfDeployment on the INF O2 DMS

When you have a deployment descriptor, you can create a NfDeployment.
It triggers an event inside of the IMS/DMS, which uses the K8S API to
create a real pod of the firewall sample.
.. code-block:: bash

   curl --location --request POST "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment" \
   --header 'Content-Type: application/json' \
   --data-raw '{
     "description": "demo nf deployment",
     "descriptorId": "'${descId}'",
     "parentDeploymentId": ""
   }'

   curl --location --request GET "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment"
- Check the pods of the firewall sample
- Delete the deployment we just created

.. code-block:: bash

   export NfDeploymentId=$(curl --location --request GET "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment" 2>/dev/null | jq .[].id | xargs echo)

   echo ${NfDeploymentId} # check the exported deployment id

   curl --location --request DELETE "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment/${NfDeploymentId}"
- Use the Kubernetes control client through the O2 DMS profile

Assume you have the kubectl command-line tool installed on your Linux
environment.

Also install the ‘jq’ command for your Linux bash terminal. If you
use Ubuntu, you can follow the commands below to install it:
.. code-block:: bash

   # install the 'jq' command
   sudo apt-get install -y jq

   # install the 'kubectl' command
   sudo apt-get install -y apt-transport-https
   echo "deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main" | \
   sudo tee -a /etc/apt/sources.list.d/kubernetes.list
   gpg --keyserver keyserver.ubuntu.com --recv-keys 836F4BEB
   gpg --export --armor 836F4BEB | sudo apt-key add -
   sudo apt-get update
   sudo apt-get install -y kubectl
We need to get the Kubernetes profile to set up the kubectl command
tool.

Get the DMS id from the INF O2 service, and set it into a bash
environment variable:
.. code-block:: bash

   # Get all the DMS ids, and print them
   dmsIDs=$(curl -s -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers" \
     -H 'accept: application/json' | jq --raw-output '.[]["deploymentManagerId"]')
   for i in $dmsIDs; do echo ${i}; done;

   # Choose one DMS and set it in the bash environment; here we take the first one
   export dmsID=$(curl -s -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers" \
     -H 'accept: application/json' | jq --raw-output '.[0]["deploymentManagerId"]')

   echo ${dmsID} # check the exported DMS id
The ‘kubectl’ profile needs the cluster name; here we assume it is
set to “o2dmsk8s1”.

It also needs the server endpoint address, the username and
authority, and, for an environment with Certificate Authority
validation, the CA data.
.. code-block:: bash

   CLUSTER_NAME="o2dmsk8s1" # set the cluster name

   K8S_SERVER=$(curl -s -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=sol018" \
     -H 'accept: application/json' | jq --raw-output '.["profileData"]["cluster_api_endpoint"]')
   K8S_CA_DATA=$(curl -s -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=sol018" \
     -H 'accept: application/json' | jq --raw-output '.["profileData"]["cluster_ca_cert"]')

   K8S_USER_NAME=$(curl -s -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=sol018" \
     -H 'accept: application/json' | jq --raw-output '.["profileData"]["admin_user"]')
   K8S_USER_CLIENT_CERT_DATA=$(curl -s -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=sol018" \
     -H 'accept: application/json' | jq --raw-output '.["profileData"]["admin_client_cert"]')
   K8S_USER_CLIENT_KEY_DATA=$(curl -s -X 'GET' \
     "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=sol018" \
     -H 'accept: application/json' | jq --raw-output '.["profileData"]["admin_client_key"]')
.. code-block:: bash

   # If you do not want to set up the CA data, you can execute the following
   # command instead, skipping the certificate check:
   # kubectl config set-cluster ${CLUSTER_NAME} --server=${K8S_SERVER} --insecure-skip-tls-verify

   kubectl config set-cluster ${CLUSTER_NAME} --server=${K8S_SERVER}
   kubectl config set clusters.${CLUSTER_NAME}.certificate-authority-data ${K8S_CA_DATA}

   kubectl config set-credentials ${K8S_USER_NAME}
   kubectl config set users.${K8S_USER_NAME}.client-certificate-data ${K8S_USER_CLIENT_CERT_DATA}
   kubectl config set users.${K8S_USER_NAME}.client-key-data ${K8S_USER_CLIENT_KEY_DATA}

   # set the context and use it
   kubectl config set-context ${K8S_USER_NAME}@${CLUSTER_NAME} --cluster=${CLUSTER_NAME} --user ${K8S_USER_NAME}
   kubectl config use-context ${K8S_USER_NAME}@${CLUSTER_NAME}

   kubectl get ns # check that the command works with this context
Now you can use “kubectl”, which means the Kubernetes client is set
up successfully. However, it uses the default admin user, so we
recommend that you create an account of your own.

Create a new user and service account for K8S with the
“cluster-admin” role, and set this user’s token into the bash
environment:
.. code-block:: bash

   NAMESPACE="kube-system"
   USER="admin-user" # the service account name; pick any name you like

   cat <<EOF > admin-login.yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: ${USER}
     namespace: kube-system
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: ${USER}
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: cluster-admin
   subjects:
   - kind: ServiceAccount
     name: ${USER}
     namespace: kube-system
   EOF

   kubectl apply -f admin-login.yaml
   TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep ${USER} | awk '{print $1}') | grep "token:" | awk '{print $2}')
Set the new user in ‘kubectl’ to replace the original user, and set
the default namespace into the context:

.. code-block:: bash

   TOKEN_DATA=<TOKEN_DATA from INF>

   NAMESPACE="kube-system"
   USER="admin-user" # must match the service account created above
   CLUSTER_NAME="o2dmsk8s1"

   kubectl config set-credentials ${USER} --token=$TOKEN_DATA
   kubectl config set-context ${USER}@inf-cluster --cluster=${CLUSTER_NAME} --user ${USER} --namespace=${NAMESPACE}
   kubectl config use-context ${USER}@inf-cluster