From 0f71e33316e10b5e7472fdbec156eb0f1c9e616f Mon Sep 17 00:00:00 2001 From: Zhe Huang Date: Mon, 27 Jan 2020 13:34:15 -0500 Subject: [PATCH] Update the installation documents to incorporate the it/dep refactor effort Signed-off-by: Zhe Huang Change-Id: Ib7fdfd304546812de168846f89fed706b734a6d7 --- docs/installation-aux.rst | 77 +++++++++++++++++--- docs/installation-guides.rst | 28 +++++-- docs/installation-k8s1node.rst | 14 ++-- docs/installation-ric.rst | 113 ++++++++++++++++------------- docs/installation-virtualbox.rst | 10 +-- docs/installation-xapps.rst | 63 ++++++++-------- ric-aux/RECIPE_EXAMPLE/example_recipe.yaml | 2 +- 7 files changed, 193 insertions(+), 114 deletions(-) diff --git a/docs/installation-aux.rst b/docs/installation-aux.rst index f3befca3..3727b01d 100644 --- a/docs/installation-aux.rst +++ b/docs/installation-aux.rst @@ -29,16 +29,42 @@ Run the following commands in a root shell: .. code:: bash - git clone http://gerrit.o-ran-sc.org/r/it/dep - cd RECIPE_EXAMPLE + git clone https://gerrit.o-ran-sc.org/r/it/dep + cd dep + git submodule update --init --recursive --remote -Edit the recipe files RIC_INFRA_RECIPE_EXAMPLE and RIC_PLATFORM_RECIPE_EXAMPLE. -In particular the following values often need adaptation to local deployments: -#. Docker registry URL -#. Docker registry credential -#. Helm repo credential -#. Component docker container image tags. +Modify the deployment recipe +---------------------------- + +Edit the recipe file ./RECIPE_EXAMPLE/AUX/example_recipe.yaml. + +- Specify the IP addresses used by the RIC and AUX cluster ingress controller (e.g., the main interface IP) in the following section. + If you are only testing the AUX cluster, you can put down any private IPs (e.g., 10.0.2.1 and 10.0.2.2). + +.. code:: bash + + extsvcplt: + ricip: "" + auxip: "" + +- To specify which version of the RIC platform components will be deployed, update the RIC platform component container tags in their corresponding section. 
+- You can specify which docker registry will be used for each component. If the docker registry requires a login credential, you can add the credential in the following section.
+  Note that the installation script has already included credentials for O-RAN Linux Foundation docker registries. Please do not create duplicate entries.
+
+.. code:: bash
+
+  docker-credential:
+    enabled: true
+    credential:
+      SOME_KEY_NAME:
+        registry: ""
+        credential:
+          user: ""
+          password: ""
+          email: ""
+
+For more advanced recipe configuration options, refer to the recipe configuration guideline.
 
 
 Deploying the Aux Group
 -----------------------
 
@@ -49,8 +75,7 @@ After the recipes are edited, the AUX group is ready to be deployed.
 
 .. code:: bash
 
   cd dep/bin
-  ./deploy-ric-infra ../RECIPE_EXAMPLE/RIC_INFRA_AUX_RECIPE_EXAMPLE
-  ./deploy-ric-aux ../RECIPE_EXAMPLE/RIC_INFRA_RECIPE_EXAMPLE
+  ./deploy-ric-aux ../RECIPE_EXAMPLE/AUX/example_recipe.yaml
 
 
 Checking the Deployment Status
 ------------------------------
 
 Now check the deployment status; results similar to the below indicate a complete deployment.
 
 .. 
code:: - TBD + # helm list + NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE + r3-aaf 1 Mon Jan 27 13:24:59 2020 DEPLOYED aaf-5.0.0 onap + r3-dashboard 1 Mon Jan 27 13:22:52 2020 DEPLOYED dashboard-1.2.2 1.0 ricaux + r3-infrastructure 1 Mon Jan 27 13:22:44 2020 DEPLOYED infrastructure-3.0.0 1.0 ricaux + r3-mc-stack 1 Mon Jan 27 13:23:37 2020 DEPLOYED mc-stack-0.0.1 1 ricaux + r3-message-router 1 Mon Jan 27 13:23:09 2020 DEPLOYED message-router-1.1.0 ricaux + r3-mrsub 1 Mon Jan 27 13:23:24 2020 DEPLOYED mrsub-0.1.0 1.0 ricaux + r3-portal 1 Mon Jan 27 13:24:12 2020 DEPLOYED portal-5.0.0 ricaux + r3-ves 1 Mon Jan 27 13:23:01 2020 DEPLOYED ves-1.1.1 1.0 ricaux + # kubectl get pods -n ricaux + NAME READY STATUS RESTARTS AGE + deployment-ricaux-dashboard-f78d7b556-m5nbw 1/1 Running 0 6m30s + deployment-ricaux-ves-69db8c797-v9457 1/1 Running 0 6m24s + elasticsearch-master-0 1/1 Running 0 5m36s + r3-infrastructure-kong-7697bccc78-nsln7 2/2 Running 3 6m40s + r3-mc-stack-kibana-78f648bdc8-nfw48 1/1 Running 0 5m37s + r3-mc-stack-logstash-0 1/1 Running 0 5m36s + r3-message-router-message-router-0 1/1 Running 3 6m11s + r3-message-router-message-router-kafka-0 1/1 Running 1 6m11s + r3-message-router-message-router-kafka-1 1/1 Running 2 6m11s + r3-message-router-message-router-kafka-2 1/1 Running 1 6m11s + r3-message-router-message-router-zookeeper-0 1/1 Running 0 6m11s + r3-message-router-message-router-zookeeper-1 1/1 Running 0 6m11s + r3-message-router-message-router-zookeeper-2 1/1 Running 0 6m11s + r3-mrsub-5c94f5b8dd-wxcw5 1/1 Running 0 5m58s + r3-portal-portal-app-8445f7f457-dj4z8 2/2 Running 0 4m53s + r3-portal-portal-cassandra-79cf998f69-xhpqg 1/1 Running 0 4m53s + r3-portal-portal-db-755b7dc667-kjg5p 1/1 Running 0 4m53s + r3-portal-portal-db-config-bfjnc 2/2 Running 0 4m53s + r3-portal-portal-zookeeper-5f8f77cfcc-t6z7w 1/1 Running 0 4m53s diff --git a/docs/installation-guides.rst b/docs/installation-guides.rst index 6f734a0f..f8717548 100644 --- 
a/docs/installation-guides.rst +++ b/docs/installation-guides.rst @@ -33,15 +33,19 @@ Version history | **Date** | **Ver.** | **Author** | **Comment** | | | | | | +--------------------+--------------------+--------------------+--------------------+ -| 2019-11-25 | 0.1.0 |Lusheng Ji | First draft | +| 2019-11-25 | 0.1.0 |Lusheng Ji | Amber | | | | | | +--------------------+--------------------+--------------------+--------------------+ +| 2020-01-23 | 0.2.0 |Zhe Huang | Bronze RC | +| | | | | ++--------------------+--------------------+--------------------+--------------------+ + Overview ======== -The installation of Amber Near Realtime RAN Intelligent Controller is spread onto two separate +The installation of Near Realtime RAN Intelligent Controller is spread onto two separate Kubernetes clusters. The first cluster is used for deploying the Near Realtime RIC (platform and applications), and the other is for deploying other auxiliary functions. They are referred to as RIC cluster and AUX cluster respectively. @@ -52,7 +56,7 @@ The following diagram depicts the installation architecture. :width: 600 Within the RIC cluster, Kubernetes resources are deployed using three name spaces: ricinfra, ricplt, -and ricxapp. Similarly, within the AUX cluster, Kubernetes resources are deployed using two name spaces: +and ricxapp by default. Similarly, within the AUX cluster, Kubernetes resources are deployed using two name spaces: ricinfra, and ricaux. For each cluster, there is a Kong ingress controller that proxies incoming API calls into the cluster. @@ -67,10 +71,22 @@ together to realize cross-cluster communication. :width: 600 + +Prerequisites +============= + +Both RIC and AUX clusters need to fulfill the following prerequisites. + +- Kubernetes v.1.16.0 or above +- helm v2.12.3 or above +- Read-write access to directory /mnt + +The following two sections show two example methods to create an environment for installing RIC. 
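The version floors in the prerequisites list above can be checked mechanically before deploying. A minimal sketch, assuming a GNU userland (`sort -V`); the `version_ge` helper is hypothetical and not part of it/dep:

```bash
# Hypothetical helper (not part of it/dep): succeeds when version $1 >= $2,
# using GNU sort's version ordering (sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Compare against the documented minimums (Kubernetes v1.16.0, helm v2.12.3).
version_ge "1.16.3" "1.16.0" && echo "kubernetes ok"
version_ge "2.12.3" "2.12.3" && echo "helm ok"
```

In practice the left-hand versions would be parsed out of `kubectl version` and `helm version` output on the target cluster, and writability of /mnt can be probed with a simple `touch /mnt/.probe && rm /mnt/.probe`.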
+ VirtualBox VMs as Installation Hosts -==================================== +------------------------------------ -The deployment of Amber Near Realtime RIC can be done on a wide range of hosts, including +The deployment of Near Realtime RIC can be done on a wide range of hosts, including bare metal servers, OpenStack VMs, and VirtualBox VMs. This section provides detailed instructions for setting up Oracle VirtualBox VMs to be used as installation hosts. @@ -78,7 +94,7 @@ for setting up Oracle VirtualBox VMs to be used as installation hosts. One-Node Kubernetes Cluster -=========================== +--------------------------- This section describes how to set up a one-node Kubernetes cluster onto a VM installation host. diff --git a/docs/installation-k8s1node.rst b/docs/installation-k8s1node.rst index 32abe2b2..ad1a82cd 100644 --- a/docs/installation-k8s1node.rst +++ b/docs/installation-k8s1node.rst @@ -16,12 +16,12 @@ .. ===============LICENSE_END========================================================= -Script for Setting Up 1-Node Kubernetes Cluster ------------------------------------------------ +Script for Setting Up 1-node Kubernetes Cluster +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The it/dep repo can be used for generating a simple script that can help setting up a one-node Kubernetes cluster for dev and testing purposes. Related files are under the -**ric-infra/00-Kubernetes** directory. Clone the it/dep git repository on the target VM. +**tools/k8s/bin** directory. Clone the repository on the target VM: :: @@ -29,7 +29,7 @@ one-node Kubernetes cluster for dev and testing purposes. 
Related files are und
 
 Configurations
---------------
+^^^^^^^^^^^^^^
 
 The generation of the script reads in the parameters from the following files:
 
@@ -46,7 +46,7 @@ The generation of the script reads in the parameters from the following files:
 
 Generating Set-up Script
-------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^
 
 After the configurations are updated, the following steps will create a script file that can
 be used for setting up a one-node Kubernetes cluster. You must run this command on a Linux machine
@@ -54,14 +54,14 @@ with the 'envsubst' command installed.
 
 ::
 
-  % cd bin
+  % cd tools/k8s/bin
   % ./gen-cloud-init.sh
 
 A file named **k8s-1node-cloud-init.sh** would now appear under the bin directory.
 
 
 Setting up Kubernetes Cluster
------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 The new **k8s-1node-cloud-init.sh** file is now ready for setting up the Kubernetes cluster.
diff --git a/docs/installation-ric.rst b/docs/installation-ric.rst
index 54f7e09d..61454ec9 100644
--- a/docs/installation-ric.rst
+++ b/docs/installation-ric.rst
@@ -29,39 +29,51 @@ Clone the it/dep git repository that has deployment scripts and support files on
 Check out the appropriate branch of the repository with the release you want to deploy.
 For example:
 
-::
+.. code:: bash
+
+  git clone https://gerrit.o-ran-sc.org/r/it/dep
+  cd dep
+  git submodule update --init --recursive --remote
+
+Modify the deployment recipe
+----------------------------
+
+Edit the recipe file ./RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml.
+
+- Specify the IP addresses used by the RIC and AUX cluster ingress controller (e.g., the main interface IP) in the following section. If you do not plan to set up an AUX cluster, you can put down any private IPs (e.g., 10.0.2.1 and 10.0.2.2).
 
-  % git checkout Amber
+.. code:: bash
+
+  extsvcplt:
+    ricip: ""
+    auxip: ""
 
-In the RECIPE_EXAMPLE directory, edit the recipe files RIC_INFRA_RECIPE_EXAMPLE and
-RIC_PLATFORM_RECIPE_EXAMPLE. 
In particular the following values often need adaptation
-to local deployments:
+- To specify which version of the RIC platform components will be deployed, update the RIC platform component container tags in their corresponding section.
+- You can specify which docker registry will be used for each component. If the docker registry requires a login credential, you can add the credential in the following section. Please note that the installation suite has already included credentials for O-RAN Linux Foundation docker registries. Please do not create duplicate entries.
+
+.. code:: bash
 
-#. Docker registry URL (property "repository"). This is the default source for
-   container images. For example,
-   nexus3.o-ran-sc.org:10004/o-ran-sc is the staging registry and has freshly built images;
-   nexus3.o-ran-sc.org:10002/o-ran-sc is the release registry and has stable images.
-#. Docker registry credential. This is a name of a Kubernetes credential. Some registries
-   allow anonymous read access, including nexus3.o-ran-sc.org.
-#. Helm repo and credential. The xApp Manager deploys xApps from charts in this repo.
-   No changes are required here for basic dev testing of platform components.
-#. Component docker container image repository override and tag. The recipes specify
-   the docker image to use in terms of name and tag. These entries also allow override
-   of the default docker registry URL (see above); for example, the default might be the
-   releases registry and then a component under test is deployed from the staging registry.
+  docker-credential:
+    enabled: true
+    credential:
+      SOME_KEY_NAME:
+        registry: ""
+        credential:
+          user: ""
+          password: ""
+          email: ""
+
+For more advanced recipe configuration options, please refer to the recipe configuration guideline.
 
 
 Deploying the Infrastructure and Platform Groups
 ------------------------------------------------
 
-After the recipes are edited, the Near Realtime RIC is ready to be deployed. 
-Perform the following steps in a root shell. +After the recipes are edited, the Near Realtime RIC platform is ready to be deployed. .. code:: bash - % sudo -i - # cd dep/bin - # ./deploy-ric-platform ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml + cd dep/bin + ./deploy-ric-platform ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml + Checking the Deployment Status ------------------------------ @@ -74,38 +86,37 @@ STATUS column from both kubectl outputs to ensure that all are either .. code:: # helm list - NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE - r3-a1mediator 1 Tue Jan 28 20:11:39 2020 DEPLOYED a1mediator-3.0.0 1.0 ricplt - r3-appmgr 1 Tue Jan 28 20:10:52 2020 DEPLOYED appmgr-3.0.0 1.0 ricplt - r3-dbaas1 1 Tue Jan 28 20:11:13 2020 DEPLOYED dbaas1-3.0.0 1.0 ricplt - r3-e2mgr 1 Tue Jan 28 20:11:23 2020 DEPLOYED e2mgr-3.0.0 1.0 ricplt - r3-e2term 1 Tue Jan 28 20:11:31 2020 DEPLOYED e2term-3.0.0 1.0 ricplt - r3-infrastructure 1 Tue Jan 28 20:10:39 2020 DEPLOYED infrastructure-3.0.0 1.0 ricplt - r3-jaegeradapter 1 Tue Jan 28 20:12:14 2020 DEPLOYED jaegeradapter-3.0.0 1.0 ricplt - r3-rsm 1 Tue Jan 28 20:12:04 2020 DEPLOYED rsm-3.0.0 1.0 ricplt - r3-rtmgr 1 Tue Jan 28 20:11:02 2020 DEPLOYED rtmgr-3.0.0 1.0 ricplt - r3-submgr 1 Tue Jan 28 20:11:48 2020 DEPLOYED submgr-3.0.0 1.0 ricplt - r3-vespamgr 1 Tue Jan 28 20:11:56 2020 DEPLOYED vespamgr-3.0.0 1.0 ricplt - - # kubectl get pods -n ricinfra - NAME READY STATUS RESTARTS AGE - deployment-tiller-ricxapp-d4f98ff65-xxpbb 1/1 Running 0 2m46s - tiller-secret-generator-76b5t 0/1 Completed 0 2m46s + NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE + r3-a1mediator 1 Thu Jan 23 14:29:12 2020 DEPLOYED a1mediator-3.0.0 1.0 ricplt + r3-appmgr 1 Thu Jan 23 14:28:14 2020 DEPLOYED appmgr-3.0.0 1.0 ricplt + r3-dbaas1 1 Thu Jan 23 14:28:40 2020 DEPLOYED dbaas1-3.0.0 1.0 ricplt + r3-e2mgr 1 Thu Jan 23 14:28:52 2020 DEPLOYED e2mgr-3.0.0 1.0 ricplt + r3-e2term 1 Thu Jan 23 14:29:04 2020 DEPLOYED e2term-3.0.0 1.0 
ricplt + r3-infrastructure 1 Thu Jan 23 14:28:02 2020 DEPLOYED infrastructure-3.0.0 1.0 ricplt + r3-jaegeradapter 1 Thu Jan 23 14:29:47 2020 DEPLOYED jaegeradapter-3.0.0 1.0 ricplt + r3-rsm 1 Thu Jan 23 14:29:39 2020 DEPLOYED rsm-3.0.0 1.0 ricplt + r3-rtmgr 1 Thu Jan 23 14:28:27 2020 DEPLOYED rtmgr-3.0.0 1.0 ricplt + r3-submgr 1 Thu Jan 23 14:29:23 2020 DEPLOYED submgr-3.0.0 1.0 ricplt + r3-vespamgr 1 Thu Jan 23 14:29:31 2020 DEPLOYED vespamgr-3.0.0 1.0 ricplt # kubectl get pods -n ricplt - NAME READY STATUS RESTARTS AGE - deployment-ricplt-a1mediator-69f6d68fb4-ndkdv 1/1 Running 0 95s - deployment-ricplt-appmgr-845d85c989-4z7t5 2/2 Running 0 2m22s - deployment-ricplt-dbaas-7c44fb4697-6lbqq 1/1 Running 0 2m1s - deployment-ricplt-e2mgr-569fb7588b-fqfqn 1/1 Running 0 111s - deployment-ricplt-e2term-alpha-db949d978-nsjds 1/1 Running 0 103s - deployment-ricplt-jaegeradapter-585b4f8d69-gvmdf 1/1 Running 0 60s - deployment-ricplt-rsm-755f7c5c85-wdn46 0/1 ErrImagePull 0 69s - deployment-ricplt-rtmgr-c7cdb5b58-lsqw4 1/1 Running 0 2m12s - deployment-ricplt-submgr-5b4864dcd7-5k26s 1/1 Running 0 86s - deployment-ricplt-vespamgr-864f95c9c9-lj74h 1/1 Running 0 78s - r3-infrastructure-kong-79b6d8b95b-4lg58 2/2 Running 1 2m33s + NAME READY STATUS RESTARTS AGE + deployment-ricplt-a1mediator-69f6d68fb4-7trcl 1/1 Running 0 159m + deployment-ricplt-appmgr-845d85c989-qxd98 2/2 Running 0 160m + deployment-ricplt-dbaas-7c44fb4697-flplq 1/1 Running 0 159m + deployment-ricplt-e2mgr-569fb7588b-wrxrd 1/1 Running 0 159m + deployment-ricplt-e2term-alpha-db949d978-rnd2r 1/1 Running 0 159m + deployment-ricplt-jaegeradapter-585b4f8d69-tmx7c 1/1 Running 0 158m + deployment-ricplt-rsm-755f7c5c85-j7fgf 1/1 Running 0 158m + deployment-ricplt-rtmgr-c7cdb5b58-2tk4z 1/1 Running 0 160m + deployment-ricplt-submgr-5b4864dcd7-zwknw 1/1 Running 0 159m + deployment-ricplt-vespamgr-864f95c9c9-5wth4 1/1 Running 0 158m + r3-infrastructure-kong-68f5fd46dd-lpwvd 2/2 Running 3 160m + # kubectl get pods -n ricinfra 
+ NAME READY STATUS RESTARTS AGE + deployment-tiller-ricxapp-d4f98ff65-9q6nb 1/1 Running 0 163m + tiller-secret-generator-plpbf 0/1 Completed 0 163m Checking Container Health ------------------------- diff --git a/docs/installation-virtualbox.rst b/docs/installation-virtualbox.rst index 2f824cb4..98bc2bb1 100644 --- a/docs/installation-virtualbox.rst +++ b/docs/installation-virtualbox.rst @@ -18,7 +18,7 @@ Networking ----------- +^^^^^^^^^^ The set up requires two VMs connected by a private network. With VirtualBox, this can be done by going under its "Preferences" menu and setting up a private NAT network. @@ -37,7 +37,7 @@ done by going under its "Preferences" menu and setting up a private NAT network. Creating VMs ------------- +^^^^^^^^^^^^ Create a VirtualBox VM: @@ -57,14 +57,14 @@ Repeat the process and create the second VM named **myaux**. Booting VM and OS Installation ------------------------------- +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Follow the OS installation steps to install OS to the VM virtual disk media. During the setup you must configure static IP addresses as discussed next. And make sure to install openssh server. VM Network Configuration ------------------------- +^^^^^^^^^^^^^^^^^^^^^^^^ Depending on the version of the OS, the networking may be configured during the OS installation or after. The network interface is configured with a static IP address: @@ -76,7 +76,7 @@ The network interface is configured with a static IP address: Accessing the VMs ------------------ +^^^^^^^^^^^^^^^^^ Because of the port forwarding configurations, the VMs are accessible from the VirtualBox host via ssh. diff --git a/docs/installation-xapps.rst b/docs/installation-xapps.rst index d5af024a..3d94fe9b 100644 --- a/docs/installation-xapps.rst +++ b/docs/installation-xapps.rst @@ -18,64 +18,61 @@ Loading xApp Helm Charts ------------------------ -The RIC Platform App Manager deploys RIC applications (a.k.a. xApps) using Helm charts stored in a private Helm repo. 
-In the dev testing deployment described in this documentation, this private Helm repo is the Chart Museum pod that is deployed within the ric infrastructure group into the RIC cluster.
+The RIC Platform App Manager deploys RIC applications (a.k.a. xApps) using Helm charts stored in a private local Helm repo.
+The Helm local repo is deployed as a sidecar of the App Manager pod, and its APIs are exposed using an ingress controller with TLS enabled.
+You can use both HTTP and HTTPS to access it.
 
-The Helm repo location and credential for access the repo are specified in both the infrastructure and platform recipe files.
+Before any xApp can be deployed, its Helm chart must be loaded into this private Helm repo.
+The example below shows the command sequences that upload and delete the xApp Helm charts:
 
-Before any xApp can be deployed, its Helm chart must be loaded into this private Helm repo before the App manager can deploy them.
-The example below show a command sequence that completes:
-
-#. Add the Helm repo at the Helm client running on the RIC cluster host VM (via Kong Ingress Controller);
-#. Load the xApp Helm chart into the Helm repo;
-#. Update the local cache for the Helm repo and check the Helm chart is loaded;
-#. Calling App Manager to deploy the xApp;
-#. Calling App Manager to delete the xApp;
+#. Load the xApp Helm charts into the Helm repo;
+#. Verify the xApp Helm charts;
+#. Call App Manager to deploy the xApp;
+#. Call App Manager to delete the xApp;
 #. Delete the xApp helm chart from the private Helm repo.
 
-.. code:: bash
+In the following example, the <IP> is the IP address that the RIC cluster ingress controller is listening to.
+If you are using a VM, it is the IP address of the main interface.
+If you are using REC clusters, it is the DANM network IP address you assigned in the recipe.
+If the commands are executed inside the host machine, you can use "localhost" as the <IP>.
 
-  # add the Chart Museum as repo cm
-  helm repo add cm http://10.0.2.100:32080/helm
-  # load admin-xapp Helm chart to the Chart Museum
-  curl -L -u helm:helm --data-binary "@admin-xapp-1.0.7.tgz" \
-    http://10.0.2.100:32080/helm/api/charts
+.. code:: bash
+
+  # load admin-xapp Helm chart to the Helm repo
+  curl -L --data-binary "@admin-xapp-1.0.7.tgz" http://<IP>:32080/helmrepo
 
-  # check the local cache of repo cm
-  helm repo update cm
-  # verify that the Helm chart is loaded and accessible
-  helm search cm/
-  # the new admin-app chart should show up here.
+  # verify the xApp Helm charts
+  curl -L http://<IP>:32080/helmrepo/index.yaml
 
   # test App Manager health check API
-  curl -v http://10.0.2.100:32080/appmgr/ric/v1/health/ready
+  curl -v http://<IP>:32080/appmgr/ric/v1/health/ready
   # expecting a 200 response
 
   # list deployed xApps
-  curl http://10.0.2.100:32080/appmgr/ric/v1/xapps
+  curl http://<IP>:32080/appmgr/ric/v1/xapps
   # expecting a []
 
-  # deploy xApp
-  curl -X POST http://10.0.2.100:32080/appmgr/ric/v1/xapps -d '{"name": "admin-xapp"}'
+  # deploy xApp; the xApp name has to be the same as the xApp Helm chart name
+  curl -X POST http://<IP>:32080/appmgr/ric/v1/xapps -d '{"name": "admin-xapp"}'
   # expecting: {"name":"admin-app","status":"deployed","version":"1.0","instances":null}
 
   # check the deployed xApps again
-  curl http://10.0.2.10:32080/appmgr/ric/v1/xapps
+  curl http://<IP>:32080/appmgr/ric/v1/xapps
   # expecting a JSON array with an entry for admin-app
 
   # check pods using kubectl
   kubectl get pods --all-namespaces
   # expecting the admin-xapp pod showing up
 
   # undeploy the xapp
-  curl -X DELETE http://10.0.2.100:32080/appmgr/ric/v1/xapps/admin-xapp
+  curl -X DELETE http://<IP>:32080/appmgr/ric/v1/xapps/admin-xapp
 
   # check pods using kubectl
   kubectl get pods --all-namespaces
   # expecting the admin-xapp pod gone or shown as terminating
 
   # to delete a chart
-  curl -L -X DELETE -u helm:helm 
http://<IP>:32080/api/charts/admin-xapp/0.0.5
 
 For more xApp deployment and usage examples, please see the documentation for the it/test repository.
diff --git a/ric-aux/RECIPE_EXAMPLE/example_recipe.yaml b/ric-aux/RECIPE_EXAMPLE/example_recipe.yaml
index e380e5ae..81c9d3a1 100644
--- a/ric-aux/RECIPE_EXAMPLE/example_recipe.yaml
+++ b/ric-aux/RECIPE_EXAMPLE/example_recipe.yaml
@@ -43,7 +43,7 @@ extsvcaux:
 # Specify the docker registry credential using the following
 # The release and staging LF repos' credentials have already been included.
-# Please do not create duplicated entries
+# Please do not create duplicate entries
 #docker-credential:
 #  enabled: true
 #  credential:
-- 
2.16.6
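The curl sequence in the updated installation-xapps.rst can be collected into a single dry-run script for review. A minimal sketch, assuming only the ingress port 32080 documented in the patch; `xapp_cmds` is a hypothetical helper that prints the commands rather than executing them:

```bash
# Hypothetical helper: print (without executing) the xApp lifecycle calls
# from docs/installation-xapps.rst for a given ingress IP.
xapp_cmds() {
  base="http://$1:32080"
  printf '%s\n' \
    "curl -L --data-binary @admin-xapp-1.0.7.tgz ${base}/helmrepo" \
    "curl -v ${base}/appmgr/ric/v1/health/ready" \
    "curl -X POST ${base}/appmgr/ric/v1/xapps -d '{\"name\": \"admin-xapp\"}'" \
    "curl -X DELETE ${base}/appmgr/ric/v1/xapps/admin-xapp"
}

# As the docs note, "localhost" works when run on the cluster host itself.
xapp_cmds localhost
```

Reviewing the printed commands first, then re-running them with the real ingress IP substituted, avoids firing requests at a half-deployed cluster.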