[OAM-264] Fix renamed paths in README.md for smo-deployment 50/8350/3
author demskeq8 <alexander.dehn@highstreet-technologies.com>
Thu, 19 May 2022 12:40:58 +0000 (14:40 +0200)
committer demx8as6 <martin.skorupski@highstreet-technologies.com>
Thu, 26 May 2022 09:18:52 +0000 (11:18 +0200)
Issue-ID: OAM-264
Signed-off-by: demskeq8 <alexander.dehn@highstreet-technologies.com>
Change-Id: I72f9c45da5e23d8f4034fbfa5d7b9a24d9848cb4

smo-install/README.md

index 3cce915..9291a9f 100644
@@ -1,15 +1,15 @@
-# ORAN SMO Package
+# O-RAN SMO Package
 
 This project uses different helm charts from different Linux Foundation projects and integrates them into a unique SMO deployment.
 <p>The ONAP and ORAN project helm charts are built and then configured by using "helm override" so that they represent a valid ORAN SMO installation.</p>
-<p>It contains also provisioning scripts that can be used to bootstrap the platform and execute test usecases, network simulators, a1 simulators, cnf network simulators, etc ...</p>
+<p>It also contains provisioning scripts that can be used to bootstrap the platform and execute test use cases, network simulators, a1 simulators, cnf network simulators, etc ...</p>
 
 <strong>Note:</strong>
 The CNF part is still a "work in progress" and therefore not well documented; it is a DU/RU/topology server deployment done by ONAP SO instantiation.
-It has been created out of the ONAP vfirewall usecase.
+It has been created out of the ONAP vfirewall use case.
 
 ## Quick Installation on blank node
-* Setup a VM with 40GB Memory, 6VCPU, 60GB of diskspace. 
+* Set up a VM with 40GB Memory, 6VCPU, 60GB of disk space (a quick resource check is sketched at the end of this section).
 * Install an Ubuntu live server 20.04 LTS (https://releases.ubuntu.com/20.04/ubuntu-20.04.3-live-server-amd64.iso)
 * Install snap and restart the shell session: sudo apt-get install snapd -y
 * Execute the following commands while logged in as root:
@@ -17,45 +17,53 @@ It has been created out of the ONAP vfirewall usecase.
 
        ```git clone --recursive https://github.com/sebdet/oran-deployment.git```
 
-       ```./oran-deployment/scripts/layer-0/0-setup-microk8s.sh```
+       ```./dep/smo-install/scripts/layer-0/0-setup-microk8s.sh```
 
-       ```./oran-deployment/scripts/layer-0/0-setup-charts-museum.sh```
-       
-       ```./oran-deployment/scripts/layer-0/0-setup-helm3.sh```
-       
-       ```./oran-deployment/scripts/layer-1/1-build-all-charts.sh```
+       ```./dep/smo-install/scripts/layer-0/0-setup-charts-museum.sh```
 
-       ```./oran-deployment/scripts/layer-2/2-install-oran.sh```
+       ```./dep/smo-install/scripts/layer-0/0-setup-helm3.sh```
+
+       ```./dep/smo-install/scripts/layer-1/1-build-all-charts.sh```
+
+       ```./dep/smo-install/scripts/layer-2/2-install-oran.sh```
 
        Verify pods:
 
        ```kubectl get pods -n onap && kubectl get pods -n nonrtric```
-       
+
        When all pods in "onap" and "nonrtric" namespaces are well up & running:
-       
-       ```./oran-deployment/scripts/layer-2/2-install-simulators.sh```
+
+       ```./dep/smo-install/scripts/layer-2/2-install-simulators.sh```
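+
+       As a quick sanity check, the VM resources and the pod readiness can be verified as follows (a minimal sketch only; the timeout value is an arbitrary example):
+
+       ```free -h && nproc && df -h /```
+
+       ```kubectl wait --for=condition=Ready pods --all -n onap --timeout=1800s```
+
+       ```kubectl wait --for=condition=Ready pods --all -n nonrtric --timeout=1800s```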
 
 ## Quick Installation on existing kubernetes
-* Ensure you have at least 20GB Memory, 6VCPU, 60GB of diskspace. 
+* Ensure you have at least 20GB Memory, 6VCPU, 60GB of disk space.
 * Execute the following commands while logged in as root:
 
-       ```git clone --recursive git@github.com:gmngueko/oran-deployment.git```
+        ```git clone --recursive "https://gerrit.o-ran-sc.org/r/it/dep"```
+
+        ```./dep/smo-install/scripts/layer-0/0-setup-charts-museum.sh```
+
+        ```./dep/smo-install/scripts/layer-0/0-setup-helm3.sh```
 
-       ```./oran-deployment/scripts/layer-0/0-setup-charts-museum.sh```
-       
-       ```./oran-deployment/scripts/layer-0/0-setup-helm3.sh```
-       
-       ```./oran-deployment/scripts/layer-1/1-build-all-charts.sh```
+        ```./dep/smo-install/scripts/layer-1/1-build-all-charts.sh```
 
-       ```./oran-deployment/scripts/layer-2/2-install-oran.sh```
+        ```./dep/smo-install/scripts/layer-2/2-install-oran.sh```
+
+        Verify pods:
+
+        ```kubectl get pods -n onap && kubectl get pods -n nonrtric```
+
+        When all pods in "onap" and "nonrtric" namespaces are well up & running:
+
+        ```./smo-install/scripts/layer-2/2-install-simulators.sh```
 
        Verify pods:
 
        ```kubectl get pods -n onap && kubectl get pods -n nonrtric```
-       
+
        When all pods in "onap" and "nonrtric" namespaces are well up & running:
-       
-       ```./oran-deployment/scripts/layer-2/2-install-simulators.sh```
+
+       ```./dep/smo-install/scripts/layer-2/2-install-simulators.sh```
 
 
 ## Structure
@@ -107,7 +115,7 @@ The user entry point is located in the <strong>scripts</strong> folder
 │   ├── layer-2                              <--- Scripts to install SMO package
 │   │   ├── 2-install-nonrtric-only.sh           <--- Install SMO NONRTRIC k8s namespace only
 │   │   ├── 2-install-oran-cnf.sh                <--- Install SMO full with ONAP CNF features
-│   │   ├── 2-install-oran.sh                    <--- Install SMO minimal 
+│   │   ├── 2-install-oran.sh                    <--- Install SMO minimal
 │   │   └── 2-install-simulators.sh              <--- Install Network simulator (RU/DU/Topology Server)
 │   │   └── 2-upgrade-simulators.sh              <--- Upgrade the simulators installation at runtime when changes are made to the override files
 │   ├── sub-scripts                  <--- Sub-Scripts used by the main layer-0, layer-1, layer-2
@@ -147,20 +155,20 @@ Use git clone to get it on your server (github ssh key config is required):
 
 
 <strong>Note:</strong> The current repository has multiple git submodules; therefore the <strong>--recursive</strong> flag is absolutely <strong>REQUIRED</strong>
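+
+If the repository was already cloned without that flag, the submodules can usually still be fetched afterwards (standard git usage, not specific to this repository):
+
+       ```git submodule update --init --recursive```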
-  
+
 ## Requirements:
 * K8S node setup with Helm 3 and Kubectl properly configured (tested with <strong>K8S v1.21.5</strong> and <strong>HELM v3.5.4</strong>); a quick version check is sketched after the options below.
   For K8S installation, multiple options are available:
        - MicroK8S standalone deployment:
 
-               ```./oran-deployment/scripts/layer-0/0-setup-microk8s.sh```
+               ```./dep/smo-install/scripts/layer-0/0-setup-microk8s.sh```
 
                OR this wiki can help to set it up (<strong>Sections 1, 2 and 3</strong>): https://wiki.onap.org/display/DW/Deploy+OOM+and+SDC+%28or+ONAP%29+on+a+single+VM+with+microk8s+-+Honolulu+Setup
 
-       - KubeSpray using ONAP multicloud KUD (https://git.onap.org/multicloud/k8s/tree/kud) installation by executing(this is required for ONAP CNF deployments): 
-            
-           ```./oran-deployment/scripts/layer-0/0-setup-kud-node.sh```
-    
+       - KubeSpray installation using ONAP multicloud KUD (https://git.onap.org/multicloud/k8s/tree/kud), by executing (this is required for ONAP CNF deployments):
+
+           ```./dep/smo-install/scripts/layer-0/0-setup-kud-node.sh```
+
 
        - Use an existing K8S installation (Cloud, etc ...).
        - ....
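+
+  Whichever option is used, the installed client versions can be compared with the tested ones above (a simple sketch; the output format differs between versions):
+
+       ```kubectl version --short```
+
+       ```helm version --short```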
@@ -168,58 +176,58 @@ Use git clone to get it on your server (github ssh key config is required):
 * ChartMuseum to store the HELM charts on the server, multiple options are available:
        - Execute the install script:
 
-               ```./oran-deployment/scripts/layer-0/0-setup-charts-museum.sh```
-               
-               ```./oran-deployment/scripts/layer-0/0-setup-helm3.sh```
+               ```./dep/smo-install/scripts/layer-0/0-setup-charts-museum.sh```
+
+               ```./dep/smo-install/scripts/layer-0/0-setup-helm3.sh```
 
        - Install chartmuseum manually on port 18080 (https://chartmuseum.com/#Instructions, https://github.com/helm/chartmuseum)
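+
+               For this manual option, a container-based run is one possibility (a hypothetical sketch; image tag, flags and storage path should be adapted to your environment):
+
+               ```docker run -d -p 18080:8080 -e STORAGE=local -e STORAGE_LOCAL_ROOTDIR=/charts -v $(pwd)/chartstorage:/charts ghcr.io/helm/chartmuseum:v0.14.0```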
-    
+
 ## Configuration:
-In the ./helm-override/ folder the helm config that are used by the SMO installation. 
+The ./helm-override/ folder contains the helm configurations that are used by the SMO installation.
 <p>Different flavors are preconfigured, and should NOT be changed unless you intentionally want to update some configurations.</p>
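+
+<p>To see which flavors are available, the folder content can simply be listed (path given relative to the repository root; adjust it to your checkout):</p>
+
+       ```ls ./dep/smo-install/helm-override/```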
 
 ## Installation:
-* Build ONAP/ORAN charts 
+* Build ONAP/ORAN charts
 
-       ```./oran-deployment/scripts/layer-1/1-build-all-charts.sh```
+       ```./dep/smo-install/scripts/layer-1/1-build-all-charts.sh```
 
 * Choose the installation:
-       - ONAP + ORAN "nonrtric" <strong>(RECOMMENDED ONE)</strong>:  
-       
-               ```./oran-deployment/scripts/layer-2/2-install-oran.sh```
-       - ORAN "nonrtric" par only: 
-       
-               ```./oran-deployment/scripts/layer-2/2-install-nonrtric-only.sh```
+       - ONAP + ORAN "nonrtric" <strong>(RECOMMENDED ONE)</strong>:
 
-       - ONAP CNF + ORAN "nonrtric" (This must still be documented properly): 
+               ```./dep/smo-install/scripts/layer-2/2-install-oran.sh```
+       - ORAN "nonrtric" part only:
 
-               ```./oran-deployment/scripts/layer-2/2-install-oran-cnf.sh```
+               ```./dep/smo-install/scripts/layer-2/2-install-nonrtric-only.sh```
+
+       - ONAP CNF + ORAN "nonrtric" (This must still be documented properly):
+
+               ```./dep/smo-install/scripts/layer-2/2-install-oran-cnf.sh```
 
 
 
 * Install the network simulators (DU/RU/Topo):
        - When all pods in "onap" and "nonrtric" namespaces are well up & running:
-               
+
                ```kubectl get pods -n onap && kubectl get pods -n nonrtric```
 
        - Execute the install script:
-               
-               ```./oran-deployment/scripts/layer-2/2-install-simulators.sh```
+
+               ```./dep/smo-install/scripts/layer-2/2-install-simulators.sh```
 
        - Check the simulators status:
 
                ```kubectl get pods -n network```
 
-       Note: The simulators topology can be customized in the file ./oran-deployment/helm-override/network-simulators-topology-override.yaml
+       Note: The simulators topology can be customized in the file ./smo-install/helm-override/network-simulators-topology-override.yaml
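+
+       If that override file is modified after the simulators have been installed, the upgrade script listed in the structure above can be re-run to apply the changes (path assumed to follow the same layout as the other scripts):
+
+               ```./dep/smo-install/scripts/layer-2/2-upgrade-simulators.sh```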
 
 ## Platform access points:
-* SDNR WEB: 
+* SDNR WEB:
        https://<K8SServerIP>:30205/odlux/index.html
-* NONRTRIC Dashboard: 
+* NONRTRIC Dashboard:
        http://<K8SServerIP>:30091/
   More to come ...
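+
+  A quick reachability check for these endpoints (a sketch only; replace <K8SServerIP>, and note that the SDNR port uses a self-signed certificate):
+
+       ```curl -k -s -o /dev/null -w "%{http_code}\n" https://<K8SServerIP>:30205/odlux/index.html```
+
+       ```curl -s -o /dev/null -w "%{http_code}\n" http://<K8SServerIP>:30091/```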
 
 ## Uninstallation:
-* Execute 
-       
-       ```./oran-deployment/scripts/uninstall-all.sh```
+* Execute
+
+       ```./dep/smo-install/scripts/uninstall-all.sh```