-O-Cloud O2 Service User Guide
-=============================
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. SPDX-License-Identifier: CC-BY-4.0
+.. Copyright (C) 2021-2022 Wind River Systems, Inc.
-This guide will introduce the process that make O2 interface work with
-SMO.
+INF O2 Service User Guide
+=========================
-- Assume you have an O-Cloud O2 environment
+This guide introduces the process that makes the INF O2 interface work
+with the SMO.
+
+- Assume you have an O2 service with INF platform environment
.. code:: bash
export OAM_IP=<INF_OAM_IP>
-- Discover O-Cloud inventory
+- Discover INF platform inventory
- - O-Cloud auto discovery
+ - INF platform auto discovery
- After you installed the O-Cloud service, it will automatically
+ After you have installed the INF O2 service, it will automatically
discover the INF platform through the parameters that you provided in
“*o2service-override.yaml*”
- Below command can get the O-Cloud information
+ The command below gets the INF platform information as O-Cloud
.. code:: shell
- Resource pool
- One O-Cloud have one resource pool, all the resources that belong
- to this O-Cloud will be organized into this resource pool
+ One INF platform has one resource pool; all the resources that
+ belong to this INF platform are organized into this resource
+ pool
Get the resource pool information through this interface
curl -X 'GET' \
  "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers" \
  -H 'accept: application/json'
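To sanity-check the `jq` extraction used throughout this guide without a live service, you can run it against a sample response. The payload below is an assumed response shape (the id and name values are made up); the `deploymentManagerId` field name is taken from the `jq` filters used later in this guide.

```shell
# Sample response body (shape assumed; the deploymentManagerId field
# name matches the jq filters used later in this guide)
cat > /tmp/dms-sample.json <<'EOF'
[
  {
    "deploymentManagerId": "c765516a-a84e-30c9-b954-9c3031bf71c8",
    "name": "kubernetes",
    "oCloudId": "f078a1d3-56df-46c2-88a2-dd659aa3f6bd"
  }
]
EOF

# Extract the first deployment manager id, as the later steps do
dmsID=$(jq --raw-output '.[0]["deploymentManagerId"]' /tmp/dms-sample.json)
echo "${dmsID}"
```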
-- Provisioning O-Cloud with SMO endpoint configuration
+- Provisioning INF platform with SMO endpoint configuration
- Assume you have an SMO, then configure O-Cloud with SMO endpoint
- address. This provisioning of O-Cloud will make a request from
- O-Cloud to SMO, that make SMO know the O2 service is working.
+ Assume you have an SMO; configure the INF platform with the SMO
+ endpoint address. This provisioning of the INF O2 service makes a
+ request from the INF O2 service to the SMO, so the SMO knows the O2
+ service is working.
This requires the SMO to have an API like
“*http(s)://SMO_HOST:SMO_PORT/registration*”, which can accept JSON
"endpoint": "http://<SMO_HOST>:<SMO_PORT>/registration"
}'
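Rather than hand-writing JSON, the registration body can be assembled with `jq`. A minimal sketch, where `SMO_HOST` and `SMO_PORT` are placeholder values for your SMO address and the `endpoint` field name comes from the JSON fragment above:

```shell
SMO_HOST=smo.example.org   # placeholder, replace with your SMO host
SMO_PORT=8080              # placeholder, replace with your SMO port

# Build the registration body; the "endpoint" field name comes from
# the JSON fragment shown above
payload=$(jq -n --arg ep "http://${SMO_HOST}:${SMO_PORT}/registration" \
  '{endpoint: $ep}')
echo "${payload}"
```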
-- Subscribe to the O-Cloud resource change notification
+- Subscribe to the INF platform resource change notification
Assume you have an SMO, and the SMO has an API that can receive
callback requests
- - Create subscription in O-Cloud IMS
+ - Create subscription in the INF O2 IMS
.. code:: bash
- Handle resource change notification
When the SMO callback API gets a notification that a resource
- of O-Cloud changing, use the URL to get the latest resource
+ of the INF platform has changed, use the URL to get the latest resource
information to update its database
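A subscription request body can be built the same way. Note that the field names below (`callback`, `consumerSubscriptionId`, `filter`) are assumptions based on the O2 IMS inventory subscription API; verify them against your INF O2 service's API documentation.

```shell
# Sketch of a subscription request body (field names are assumptions
# based on the O2 IMS inventory subscription API; verify against your
# INF O2 service's API documentation)
SMO_CALLBACK="http://<SMO_HOST>:<SMO_PORT>/callback"   # your SMO callback URL

payload=$(jq -n --arg cb "${SMO_CALLBACK}" \
  '{callback: $cb, consumerSubscriptionId: "", filter: ""}')
echo "${payload}"

# The body would then be POSTed to the subscriptions endpoint, e.g.:
# curl -X POST "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/subscriptions" \
#   -H 'Content-Type: application/json' --data "${payload}"
```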
- Orchestrate CNF in helm chart
We need to do some preparation to make the helm repo work and to
include our firewall chart in the repository.
- Get the DMS Id in the O-Cloud, and set it into bash environment
+ Get the DMS Id in the INF O2 service, and set it into bash
+ environment
.. code:: bash
echo ${dmsId} # check the exported DMS id
- Using helm to deploy a chartmuseum to the INF
+ Using helm to deploy a chartmuseum to the INF platform
.. code:: bash
helm repo update
helm search repo firewall
- Setup host net device over INF
+ Setup host net device over INF node
.. code:: bash
sudo ip link |grep veth
exit
- - Create NfDeploymentDescriptor
+ - Create NfDeploymentDescriptor on the INF O2 DMS
.. code:: bash
- curl --location --request POST "http://${OAM_IP}:30205/o2dms/${dmsId}/O2dms_DeploymentLifecycle/NfDeploymentDescriptor" \
+ curl --location --request POST "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeploymentDescriptor" \
--header 'Content-Type: application/json' \
--data-raw '{
"name": "cfwdesc1",
"outputParams": "{\"output1\": 100}"
}'
- curl --location --request GET "http://${OAM_IP}:30205/o2dms/${dmsId}/O2dms_DeploymentLifecycle/NfDeploymentDescriptor"
+ curl --location --request GET "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeploymentDescriptor"
- export descId=` curl -X 'GET' "http://${OAM_IP}:30205/o2dms/${dmsId}/O2dms_DeploymentLifecycle/NfDeploymentDescriptor" -H 'accept: application/json' -H 'X-Fields: id' 2>/dev/null | jq .[].id | xargs echo`
+ export descId=` curl -X 'GET' "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeploymentDescriptor" -H 'accept: application/json' -H 'X-Fields: id' 2>/dev/null | jq .[].id | xargs echo`
echo ${descId} # check the exported descriptor id
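The `jq .[].id | xargs echo` pipeline above can be checked locally against a sample response (the id value here is made up):

```shell
# Sample NfDeploymentDescriptor list (id value is made up)
sample='[{"id": "a1b2c3d4", "name": "cfwdesc1"}]'

# Same pipeline as above: jq prints the quoted id, xargs strips the quotes
descId=$(echo "${sample}" | jq .[].id | xargs echo)
echo "${descId}"
```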
- - Create NfDeployment
+ - Create NfDeployment on the INF O2 DMS
When you have a deployment descriptor, you can create an
NfDeployment; it will trigger an event inside of the IMS/DMS, and
.. code:: bash
- curl --location --request POST "http://${OAM_IP}:30205/o2dms/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment" \
+ curl --location --request POST "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment" \
--header 'Content-Type: application/json' \
--data-raw '{
"name": "cfw100",
"parentDeploymentId": ""
}'
- curl --location --request GET "http://${OAM_IP}:30205/o2dms/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment"
+ curl --location --request GET "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment"
- Check pods of the firewall sample
.. code:: shell
- export NfDeploymentId=`curl --location --request GET "http://${OAM_IP}:30205/o2dms/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment" 2>/dev/null | jq .[].id | xargs echo`
+ export NfDeploymentId=`curl --location --request GET "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment" 2>/dev/null | jq .[].id | xargs echo`
echo ${NfDeploymentId} # Check the exported deployment id
- curl --location --request DELETE "http://${OAM_IP}:30205/o2dms/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment/${NfDeploymentId}"
+ curl --location --request DELETE "http://${OAM_IP}:30205/o2dms/v1/${dmsId}/O2dms_DeploymentLifecycle/NfDeployment/${NfDeploymentId}"
+
+- Use Kubernetes Control Client through O2 DMS profile
+
+ Assume you have the kubectl command-line tool installed in your
+ Linux environment.
+
+ Also install the ‘jq’ command for your Linux bash terminal. If you
+ use Ubuntu, you can follow the commands below to install it.
+
+ .. code:: bash
+
+ # install the 'jq' command
+ sudo apt-get install -y jq
+
+ # install 'kubectl' command
+ sudo apt-get install -y apt-transport-https
+ echo "deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main" | \
+ sudo tee -a /etc/apt/sources.list.d/kubernetes.list
+ gpg --keyserver keyserver.ubuntu.com --recv-keys 836F4BEB
+ gpg --export --armor 836F4BEB | sudo apt-key add -
+ sudo apt-get update
+ sudo apt-get install -y kubectl
+
+ We need to get the Kubernetes profile to set up the kubectl command
+ tool.
+
+ Get the DMS Id in the INF O2 service, and set it into bash
+ environment.
+
+ .. code:: bash
+
+ # Get all DMS ID, and print them with command
+ dmsIDs=$(curl -s -X 'GET' \
+ "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers" \
+ -H 'accept: application/json' | jq --raw-output '.[]["deploymentManagerId"]')
+ for i in $dmsIDs;do echo ${i};done;
+
+ # Choose one DMS and set it to bash environment, here I set the first one
+ export dmsID=$(curl -s -X 'GET' \
+ "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers" \
+ -H 'accept: application/json' | jq --raw-output '.[0]["deploymentManagerId"]')
+
+ echo ${dmsID} # check the exported DMS Id
+
+ The ‘kubectl’ profile needs the cluster name; I assume it is set
+ to “o2dmsk8s1”.
+
+ It also needs the server endpoint address, username, and credentials,
+ and for an environment that has Certificate Authority validation, it
+ needs the CA data to be set up.
+
+ .. code:: bash
+
+ CLUSTER_NAME="o2dmsk8s1" # set the cluster name
+
+ K8S_SERVER=$(curl -s -X 'GET' \
+ "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=sol018" \
+ -H 'accept: application/json' | jq --raw-output '.["profileData"]["cluster_api_endpoint"]')
+ K8S_CA_DATA=$(curl -s -X 'GET' \
+ "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=sol018" \
+ -H 'accept: application/json' | jq --raw-output '.["profileData"]["cluster_ca_cert"]')
+
+ K8S_USER_NAME=$(curl -s -X 'GET' \
+ "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=sol018" \
+ -H 'accept: application/json' | jq --raw-output '.["profileData"]["admin_user"]')
+ K8S_USER_CLIENT_CERT_DATA=$(curl -s -X 'GET' \
+ "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=sol018" \
+ -H 'accept: application/json' | jq --raw-output '.["profileData"]["admin_client_cert"]')
+ K8S_USER_CLIENT_KEY_DATA=$(curl -s -X 'GET' \
+ "http://${OAM_IP}:30205/o2ims_infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=sol018" \
+ -H 'accept: application/json' | jq --raw-output '.["profileData"]["admin_client_key"]')
+
+
+ # If you do not want to set up the CA data, you can execute the following command and skip the security check
+ # kubectl config set-cluster ${CLUSTER_NAME} --server=${K8S_SERVER} --insecure-skip-tls-verify
+
+ kubectl config set-cluster ${CLUSTER_NAME} --server=${K8S_SERVER}
+ kubectl config set clusters.${CLUSTER_NAME}.certificate-authority-data ${K8S_CA_DATA}
+
+ kubectl config set-credentials ${K8S_USER_NAME}
+ kubectl config set users.${K8S_USER_NAME}.client-certificate-data ${K8S_USER_CLIENT_CERT_DATA}
+ kubectl config set users.${K8S_USER_NAME}.client-key-data ${K8S_USER_CLIENT_KEY_DATA}
+
+ # set the context and use it
+ kubectl config set-context ${K8S_USER_NAME}@${CLUSTER_NAME} --cluster=${CLUSTER_NAME} --user ${K8S_USER_NAME}
+ kubectl config use-context ${K8S_USER_NAME}@${CLUSTER_NAME}
+
+ kubectl get ns # check the command working with this context
+
+
+ Now you can use “kubectl”, which means you have set up the
+ Kubernetes client successfully. However, it uses the default admin
+ user, so I recommend you create an account for yourself.
+
+ Create a new user and service account for K8S with the
+ “cluster-admin” role, and set this user’s token in the bash
+ environment.
+
+ .. code:: bash
+
+ USER="admin-user"
+ NAMESPACE="kube-system"
+
+ cat <<EOF > admin-login.yaml
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: ${USER}
+ namespace: kube-system
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ name: ${USER}
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: cluster-admin
+ subjects:
+ - kind: ServiceAccount
+ name: ${USER}
+ namespace: kube-system
+ EOF
+
+ kubectl apply -f admin-login.yaml
+ TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep ${USER} | awk '{print $1}') | grep "token:" | awk '{print $2}')
+ echo $TOKEN_DATA
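The grep/awk pipeline above can be verified locally against sample `kubectl describe secret` output (the token value here is made up):

```shell
# Sample 'kubectl describe secret' output (token value is made up)
sample_describe='Name:         admin-user-token-abcde
Namespace:    kube-system
Type:         kubernetes.io/service-account-token

token:        eyJhbGciOiJSUzI1NiJ9.sample.token'

# Same pipeline as above: find the token line and print its value
TOKEN=$(echo "${sample_describe}" | grep "token:" | awk '{print $2}')
echo "${TOKEN}"
```

Note: on Kubernetes 1.24 and later a token Secret is no longer created automatically for a ServiceAccount, so the `get secret | grep` lookup above may find nothing; on such clusters a token can be minted with `kubectl -n kube-system create token admin-user` instead.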
+
+ Set the new user in ‘kubectl’ to replace the original user, and set
+ the default namespace in the context.
+
+ .. code:: bash
+
+ NAMESPACE=default
+ TOKEN_DATA=<TOKEN_DATA from INF>
+
+ USER="admin-user"
+ CLUSTER_NAME="o2dmsk8s1"
+
+ kubectl config set-credentials ${USER} --token=$TOKEN_DATA
+ kubectl config set-context ${USER}@inf-cluster --cluster=${CLUSTER_NAME} --user ${USER} --namespace=${NAMESPACE}
+ kubectl config use-context ${USER}@inf-cluster