Upgrade ONAP to Jakarta + Fix Jenkins startup + Add NFS scripts for K8S multi-node + CL tests (disabled for now)
Issue-ID: NONRTRIC-669
Signed-off-by: sebdet <sebastien.determe@intl.att.com>
Change-Id: I650814f19bc9b9932864c9d15fa8fc1cb5af2dbb
The CNF part is still a work in progress and therefore not yet well documented; it is a DU/RU/topology server deployment done through ONAP SO instantiation.
It has been created out of the ONAP vFirewall use case.
-## Quick Installation
-* Setup a VM with 20GB Memory, 8VCPU, 60GB of diskspace.
+## Quick Installation on a blank node
+* Set up a VM with 40GB Memory, 6 VCPU, 60GB of diskspace.
* Install an Ubuntu live server 20.04 LTS (https://releases.ubuntu.com/20.04/ubuntu-20.04.3-live-server-amd64.iso)
+* Install snap and restart the shell session: ```sudo apt-get install snapd -y```
* Execute the following commands while logged in as root:
- ```git clone --recursive "https://gerrit.o-ran-sc.org/r/it/dep"```
- ```./dep/smo-install/scripts/layer-0/0-setup-microk8s.sh```
+ ```git clone --recursive https://github.com/sebdet/oran-deployment.git```
- ```./dep/smo-install/scripts/layer-0/0-setup-charts-museum.sh```
+ ```./oran-deployment/scripts/layer-0/0-setup-microk8s.sh```
- ```./dep/smo-install/scripts/layer-0/0-setup-helm3.sh```
-
- ```./dep/smo-install/scripts/layer-1/1-build-all-charts.sh```
+ ```./oran-deployment/scripts/layer-0/0-setup-charts-museum.sh```
+
+ ```./oran-deployment/scripts/layer-0/0-setup-helm3.sh```
+
+ ```./oran-deployment/scripts/layer-1/1-build-all-charts.sh```
- ```./dep/smo-install/scripts/layer-2/2-install-oran.sh```
+ ```./oran-deployment/scripts/layer-2/2-install-oran.sh```
Verify pods:
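+ ```kubectl get pods -n onap && kubectl get pods -n nonrtric```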
When all pods in "onap" and "nonrtric" namespaces are up and running:
- ```./dep/smo-install/scripts/layer-2/2-install-simulators.sh```
+ ```./oran-deployment/scripts/layer-2/2-install-simulators.sh```
## Quick Installation on existing kubernetes
-* Ensure you have at least 20GB Memory, 6VCPU, 60GB of diskspace.
+* Ensure you have at least 20GB Memory, 6 VCPU, 60GB of diskspace.
* Execute the following commands while logged in as root:
- ```git clone --recursive "https://gerrit.o-ran-sc.org/r/it/dep"```
-
- ```./oran-deployment/scripts/layer-0/0-setup-charts-museum.sh```
-
- ```./oran-deployment/scripts/layer-0/0-setup-helm3.sh```
-
- ```./oran-deployment/scripts/layer-1/1-build-all-charts.sh```
+ ```git clone --recursive git@github.com:gmngueko/oran-deployment.git```
- ```./oran-deployment/scripts/layer-2/2-install-oran.sh```
-
- Verify pods:
-
- ```kubectl get pods -n onap && kubectl get pods -n nonrtric```
-
- When all pods in "onap" and "nonrtric" namespaces are well up & running:
+ ```./oran-deployment/scripts/layer-0/0-setup-charts-museum.sh```
+
+ ```./oran-deployment/scripts/layer-0/0-setup-helm3.sh```
+
+ ```./oran-deployment/scripts/layer-1/1-build-all-charts.sh```
- ```./oran-deployment/scripts/layer-2/2-install-simulators.sh```
+ ```./oran-deployment/scripts/layer-2/2-install-oran.sh```
+ Verify pods:
+ ```kubectl get pods -n onap && kubectl get pods -n nonrtric```
+
+ When all pods in "onap" and "nonrtric" namespaces are up and running:
+
+ ```./oran-deployment/scripts/layer-2/2-install-simulators.sh```
## Structure
│ │ ├── 0-setup-kud-node.sh <--- Setup K8S node with ONAP Multicloud KUD installation
│ │ ├── 0-setup-microk8s.sh <--- Setup K8S node with MicroK8S installation
│ │ └── 0-setup-helm3.sh <--- Setup HELM3
-│ │ └── 0-setup-tests-env.sh <--- Setup Python SDK tools
│ ├── layer-1 <--- Scripts to prepare for the SMO installation
│ │ └── 1-build-all-charts.sh <--- Build all HELM charts and upload them to ChartMuseum
│ ├── layer-2 <--- Scripts to install SMO package
│ │ ├── 2-install-oran-cnf.sh <--- Install SMO full with ONAP CNF features
│ │ ├── 2-install-oran.sh <--- Install SMO minimal
│ │ ├── 2-install-simulators.sh <--- Install Network simulator (RU/DU/Topology Server)
-│ │ └── 2-upgrade-simulators.sh <--- Upgrade the simulators install at runtime when changes are done on override files
+│ │ └── 2-upgrade-simulators.sh <--- Upgrade the simulators installation at runtime when changes are made to the override files
│ ├── sub-scripts <--- Sub-Scripts used by the main layer-0, layer-1, layer-2
│ │ ├── clean-up.sh
│ │ ├── install-nonrtric.sh
│ ├── apex-policy-test.sh
│ └── data
├── enable-sim-fault-report <--- Enable the fault reporting of the network simulators by SDNC
- │ ├── data
- │ └── enable-network-sim-fault-reporting.sh
- └── pythonsdk <--- Test based on ONAP Python SDK to validate O1 and A1
+ │ ├── data
+ │ └── enable-network-sim-fault-reporting.sh
+ └── pythonsdk <--- Test based on ONAP Python SDK to validate O1 and A1
├── oran-tests.xml
├── Pipfile.lock
├── README.md
├── tox.ini
└── unit-tests
-
```
## Download:
-Use git clone to get it on your server:
+Use git clone to get it on your server (a GitHub SSH key configuration is required):
-```git clone --recursive "https://gerrit.o-ran-sc.org/r/it/dep"```
+```git clone --recursive git@github.com:gmngueko/oran-deployment.git```
<strong>Note:</strong> The current repository has multiple git submodules, therefore the <strong>--recursive</strong> flag is absolutely <strong>REQUIRED</strong>
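+If the repository was already cloned without that flag, the submodules can still be fetched afterwards with a standard git command:
+
+ ```git submodule update --init --recursive```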
For K8S installation, multiple options are available:
- MicroK8S standalone deployment:
- ```./dep/smo-install/scripts/layer-0/0-setup-microk8s.sh```
+ ```./oran-deployment/scripts/layer-0/0-setup-microk8s.sh```
OR this wiki can help to set it up (<strong>Sections 1, 2 and 3</strong>): https://wiki.onap.org/display/DW/Deploy+OOM+and+SDC+%28or+ONAP%29+on+a+single+VM+with+microk8s+-+Honolulu+Setup
- KubeSpray using ONAP multicloud KUD (https://git.onap.org/multicloud/k8s/tree/kud) installation by executing (this is required for ONAP CNF deployments):
- ```./dep/smo-install/scripts/layer-0/0-setup-kud-node.sh```
+ ```./oran-deployment/scripts/layer-0/0-setup-kud-node.sh```
- Use an existing K8S installation (Cloud, etc ...).
* ChartMuseum to store the HELM charts on the server; multiple options are available:
- Execute the install script:
- ```./dep/smo-install/scripts/layer-0/0-setup-charts-museum.sh```
-
+ ```./oran-deployment/scripts/layer-0/0-setup-charts-museum.sh```
+
```./oran-deployment/scripts/layer-0/0-setup-helm3.sh```
- Install chartmuseum manually on port 18080 (https://chartmuseum.com/#Instructions, https://github.com/helm/chartmuseum)
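+ A minimal sketch of the manual option, assuming Docker is available (the image tag and chart storage path are only examples):
+
+ ```docker run -d -p 18080:8080 -e STORAGE=local -e STORAGE_LOCAL_ROOTDIR=/charts -e DISABLE_API=false -v $(pwd)/charts:/charts ghcr.io/helm/chartmuseum:v0.13.1```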
## Configuration:
-In the ./helm-override/ folder the helm config that are used by the SMO installation.
+The ./helm-override/ folder contains the helm configurations used by the SMO installation.
<p>Different flavors are preconfigured, and should NOT be changed unless you intentionally want to update some configurations.
## Installation:
* Build ONAP/ORAN charts
- ```./dep/smo-install/scripts/layer-1/1-build-all-charts.sh```
+ ```./oran-deployment/scripts/layer-1/1-build-all-charts.sh```
* Choose the installation:
- ONAP + ORAN "nonrtric" <strong>(RECOMMENDED ONE)</strong>:
- ```./dep/smo-install/scripts/layer-2/2-install-oran.sh```
+ ```./oran-deployment/scripts/layer-2/2-install-oran.sh```
- ORAN "nonrtric" par only:
- ```./dep/smo-install/scripts/layer-2/2-install-nonrtric-only.sh```
+ ```./oran-deployment/scripts/layer-2/2-install-nonrtric-only.sh```
- ONAP CNF + ORAN "nonrtric" (This must still be documented properly):
-
- ```./dep/smo-install/scripts/layer-2/2-install-oran-cnf.sh```
+
+ ```./oran-deployment/scripts/layer-2/2-install-oran-cnf.sh```
- Execute the install script:
- ```./dep/smo-install/scripts/layer-2/2-install-simulators.sh```
+ ```./oran-deployment/scripts/layer-2/2-install-simulators.sh```
- Check the simulators status:
```kubectl get pods -n network```
Note: The simulators topology can be customized in the file ./oran-deployment/helm-override/network-simulators-topology-override.yaml
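+ After editing that override file, the change can be applied to the running simulators with the upgrade script listed in the Structure section above:
+
+ ```./oran-deployment/scripts/layer-2/2-upgrade-simulators.sh```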
-
+
## Platform access points:
* SDNR WEB:
- https://K8SServerIP:30205/odlux/index.html
+ https://<K8SServerIP>:30205/odlux/index.html
* NONRTRIC Dashboard:
- http://K8SServerIP:30091/
+ http://<K8SServerIP>:30091/
More to come ...
## Uninstallation:
* Execute
- ```./dep/smo-install/scripts/uninstall-all.sh```
+ ```./oran-deployment/scripts/uninstall-all.sh```
enabled: false
platform:
enabled: true
-policy:
+policy:
enabled: true
policy-api:
enabled: true
policy-distribution:
enabled: false
policy-clamp-be:
- enabled: false
- policy-clamp-fe:
- enabled: false
- policy-clamp-cl-runtime:
- enabled: false
- policy-clamp-cl-k8s-ppnt:
- enabled: false
+ enabled: true
+ policy-clamp-runtime-acm:
+ enabled: true
+ policy-clamp-ac-k8s-ppnt:
+ enabled: true
policy-gui:
- enabled: false
+ enabled: true
policy-nexus:
enabled: false
- policy-clamp-cl-pf-ppnt:
- enabled: false
- policy-clamp-cl-http-ppnt:
- enabled: false
+ policy-clamp-ac-pf-ppnt:
+ enabled: true
+ policy-clamp-ac-http-ppnt:
+ enabled: true
pomba:
enabled: false
portal:
jenkins: true
tests: false
-oran-tests:
- oranTests:
- name: orantests1
- flag: true
- commitId: 83be1833161166e663098ab09f56551fc83b84c0
-
github:
username: "username"
password: "token_api"
password: "token_api"
jenkins:
+ persistence:
+ enabled: false
controller:
jenkinsUrl: "http://smo-jenkins:32080"
+ JCasC:
+ securityRealm: |-
+ local:
+ allowsSignup: false
+ enableCaptcha: false
+ users:
+ - id: "test"
+ name: "Jenkins Admin"
+ password: "testos"
+
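+ # A quick sanity check of the account configured above (hypothetical example;
+ # run from a host or pod that can resolve smo-jenkins):
+ # curl -u test:testos http://smo-jenkins:32080/api/json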
enabled: true
dcae-ves-mapper:
enabled: false
+ dcae-ves-openapi-manager:
+ enabled: false
+
dcaemod:
enabled: false
holmes:
enabled: false
policy-clamp-be:
enabled: true
- image: onap/policy-clamp-backend:6.2-SNAPSHOT-latest
- policy-clamp-fe:
- enabled: true
- policy-clamp-cl-runtime:
+ policy-clamp-runtime-acm:
enabled: true
- policy-clamp-cl-k8s-ppnt:
+ policy-clamp-ac-k8s-ppnt:
enabled: true
policy-gui:
enabled: true
+ image: onap/policy-gui:2.2.1
policy-nexus:
enabled: false
- policy-clamp-cl-pf-ppnt:
+ policy-clamp-ac-pf-ppnt:
enabled: true
- policy-clamp-cl-http-ppnt:
+ policy-clamp-ac-http-ppnt:
enabled: true
pomba:
-# Copyright © 2021 AT&T
+# Copyright © 2021-2022 AT&T Intellectual Property
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# See the License for the specific language governing permissions and
# limitations under the License.
-dependencies:
- - name: common
- version: ~9.x-0
- repository: '@local'
- - name: certInitializer
- version: ~9.x-0
- repository: '@local'
- - name: repositoryGenerator
- version: ~9.x-0
- repository: '@local'
- - name: serviceAccount
- version: ~9.x-0
- repository: '@local'
+# Static Defaults
+testsSuite:
+ jenkins: false
+ tests: true
+
+oran-tests:
+ oranTests:
+ name: orantests1
+ commitId: origin/main
+
enabled: true
dcae-ves-mapper:
enabled: false
+ dcae-ves-openapi-manager:
+ enabled: false
+
dcaemod:
enabled: false
holmes:
enabled: false
platform:
enabled: true
-policy:
+policy:
enabled: true
policy-api:
enabled: true
enabled: false
policy-clamp-be:
enabled: true
- image: onap/policy-clamp-backend:6.2-SNAPSHOT-latest
- policy-clamp-fe:
- enabled: true
- policy-clamp-cl-runtime:
+ policy-clamp-runtime-acm:
enabled: true
- policy-clamp-cl-k8s-ppnt:
+ policy-clamp-ac-k8s-ppnt:
enabled: true
policy-gui:
enabled: true
policy-nexus:
enabled: false
- policy-clamp-cl-pf-ppnt:
+ policy-clamp-ac-pf-ppnt:
enabled: true
- policy-clamp-cl-http-ppnt:
+ policy-clamp-ac-http-ppnt:
enabled: true
pomba:
installRappcatalogueservice: true
installNonrtricgateway: true
installKong: true
- installORUApp: true
+ installORUApp: false
installTopology: true
installDmaapadapterservice: true
installDmaapmediatorservice: true
} else {
sh 'tox'
}
+ currentBuild.result = 'SUCCESS'
}
catch(exec) {
echo 'TOX tests crashed'
+ currentBuild.result = 'FAILURE'
}
}
}
} else {
sh 'tox'
}
+ currentBuild.result = 'SUCCESS'
}
catch(exec) {
echo 'TOX tests crashed'
+ currentBuild.result = 'FAILURE'
}
}
}
-Subproject commit e1661bdc0f82fe98a11a723de54e9b6b13418b43
+Subproject commit c45c81f10b54f55f0eb3a7d43c2474e28cf05dd2
@make package-$@
submod-%:
- @make $*/requirements.yaml
+ @make $*/Chart.yaml
-%/requirements.yaml:
+%/Chart.yaml:
$(error Submodule $* needs to be retrieved from gerrit. See https://wiki.onap.org/display/DW/OOM+-+Development+workflow+after+code+transfer+to+tech+teams ); fi
print_helm_bin:
@if [ -f $*/Makefile ]; then make -C $*; fi
dep-%: make-%
- @if [ -f $*/requirements.yaml ]; then $(HELM_BIN) dep up $*; fi
+ @if [ -f $*/Chart.yaml ]; then $(HELM_BIN) dep up $*; fi
lint-%: dep-%
@if [ -f $*/Chart.yaml ]; then $(HELM_LINT_CMD) $*; fi
endif
clean:
- @rm -f */requirements.lock
+ @rm -f */Chart.lock
@find . -type f -name '*.tgz' -delete
@rm -rf $(PACKAGE_DIR)/*
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
appVersion: "2.0.0"
description: A Helm chart for nonrtric a1controller
name: a1controller
version: 2.0.0
+
+dependencies:
+ - name: nonrtric-common
+ version: ^2.0.0
+ repository: "@local"
\ No newline at end of file
+++ /dev/null
-################################################################################
-# Copyright (c) 2020 Nordix Foundation. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"); #
-# you may not use this file except in compliance with the License. #
-# You may obtain a copy of the License at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# Unless required by applicable law or agreed to in writing, software #
-# distributed under the License is distributed on an "AS IS" BASIS, #
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
-# See the License for the specific language governing permissions and #
-# limitations under the License. #
-################################################################################
-
-dependencies:
- - name: nonrtric-common
- version: ^2.0.0
- repository: "@local"
-apiVersion: v1
+apiVersion: v2
appVersion: "2.0.0"
description: A Helm chart for A1 simulator
name: a1simulator
version: 2.1.0
+
+dependencies:
+ - name: nonrtric-common
+ version: ^2.0.0
+ repository: "@local"
\ No newline at end of file
+++ /dev/null
-################################################################################
-# Copyright (c) 2020 Nordix Foundation. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"); #
-# you may not use this file except in compliance with the License. #
-# You may obtain a copy of the License at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# Unless required by applicable law or agreed to in writing, software #
-# distributed under the License is distributed on an "AS IS" BASIS, #
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
-# See the License for the specific language governing permissions and #
-# limitations under the License. #
-################################################################################
-
-dependencies:
- - name: nonrtric-common
- version: ^2.0.0
- repository: "@local"
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
description: Common templates for inclusion in other charts
name: aux-common
version: 3.0.0
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
appVersion: "2.0.0"
description: A Helm chart for nonrtric controlpanel
name: controlpanel
version: 2.0.0
+
+dependencies:
+ - name: nonrtric-common
+ version: ^2.0.0
+ repository: "@local"
+++ /dev/null
-################################################################################
-# Copyright (c) 2020 Nordix Foundation. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"); #
-# you may not use this file except in compliance with the License. #
-# You may obtain a copy of the License at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# Unless required by applicable law or agreed to in writing, software #
-# distributed under the License is distributed on an "AS IS" BASIS, #
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
-# See the License for the specific language governing permissions and #
-# limitations under the License. #
-################################################################################
-
-dependencies:
- - name: nonrtric-common
- version: ^2.0.0
- repository: "@local"
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
appVersion: "1.0.0"
description: A Helm chart for Dmaap Adapter Service
name: dmaapadapterservice
version: 1.0.0
+
+dependencies:
+ - name: nonrtric-common
+ version: ^2.0.0
+ repository: "@local"
\ No newline at end of file
+++ /dev/null
-################################################################################
-# Copyright (c) 2021 Nordix Foundation. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"); #
-# you may not use this file except in compliance with the License. #
-# You may obtain a copy of the License at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# Unless required by applicable law or agreed to in writing, software #
-# distributed under the License is distributed on an "AS IS" BASIS, #
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
-# See the License for the specific language governing permissions and #
-# limitations under the License. #
-################################################################################
-
-dependencies:
- - name: nonrtric-common
- version: ^2.0.0
- repository: "@local"
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
appVersion: "1.0.0"
description: A Helm chart for Dmaap Mediator Service
name: dmaapmediatorservice
version: 1.0.0
+
+dependencies:
+ - name: nonrtric-common
+ version: ^2.0.0
+ repository: "@local"
+++ /dev/null
-################################################################################
-# Copyright (c) 2020 Nordix Foundation. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"); #
-# you may not use this file except in compliance with the License. #
-# You may obtain a copy of the License at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# Unless required by applicable law or agreed to in writing, software #
-# distributed under the License is distributed on an "AS IS" BASIS, #
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
-# See the License for the specific language governing permissions and #
-# limitations under the License. #
-################################################################################
-
-dependencies:
- - name: nonrtric-common
- version: ^2.0.0
- repository: "@local"
# See the License for the specific language governing permissions and #
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
appVersion: "1.0.0"
description: A Helm chart for Helm Manager
name: helmmanager
version: 1.0.0
+
+dependencies:
+ - name: nonrtric-common
+ version: ^2.0.0
+ repository: "@local"
\ No newline at end of file
+++ /dev/null
-################################################################################
-# Copyright (c) 2021 Nordix Foundation. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"); #
-# you may not use this file except in compliance with the License. #
-# You may obtain a copy of the License at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# Unless required by applicable law or agreed to in writing, software #
-# distributed under the License is distributed on an "AS IS" BASIS, #
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
-# See the License for the specific language governing permissions and #
-# limitations under the License. #
-################################################################################
-dependencies:
- - name: nonrtric-common
- version: ^2.0.0
- repository: "@local"
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
appVersion: "1.0.0"
description: A Helm chart for Information Coordinator Service
name: informationservice
version: 1.0.0
+
+dependencies:
+ - name: nonrtric-common
+ version: ^2.0.0
+ repository: "@local"
\ No newline at end of file
+++ /dev/null
-################################################################################
-# Copyright (c) 2020 Nordix Foundation. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"); #
-# you may not use this file except in compliance with the License. #
-# You may obtain a copy of the License at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# Unless required by applicable law or agreed to in writing, software #
-# distributed under the License is distributed on an "AS IS" BASIS, #
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
-# See the License for the specific language governing permissions and #
-# limitations under the License. #
-################################################################################
-
-dependencies:
- - name: nonrtric-common
- version: ^2.0.0
- repository: "@local"
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
description: NONRTRIC Common templates for inclusion in other charts
name: nonrtric-common
version: 2.0.0
\ No newline at end of file
-apiVersion: v1
+apiVersion: v2
name: nonrtric
version: 1.0.0
appVersion: test
sources:
- https://gerrit.o-ran-sc.org/r/#/admin/projects/
kubeVersion: ">=1.19.0-0"
+
+dependencies:
+ - name: a1controller
+ version: ~2.0.0
+ repository: "@local"
+ condition: nonrtric.installA1controller
+
+ - name: a1simulator
+ version: ~2.1.0
+ repository: "@local"
+ condition: nonrtric.installA1simulator
+
+ - name: controlpanel
+ version: ~2.0.0
+ repository: "@local"
+ condition: nonrtric.installControlpanel
+
+ - name: policymanagementservice
+ version: ~2.0.0
+ repository: "@local"
+ condition: nonrtric.installPms
+
+ - name: informationservice
+ version: ~1.0.0
+ repository: "@local"
+ condition: nonrtric.installInformationservice
+
+ - name: nonrtric-common
+ version: ^2.0.0
+ repository: "@local"
+
+ - name: rappcatalogueservice
+ version: ~1.0.0
+ repository: "@local"
+ condition: nonrtric.installRappcatalogueservice
+
+ - name: nonrtricgateway
+ version: ~1.0.0
+ repository: "@local"
+ condition: nonrtric.installNonrtricgateway
+
+ - name: oru-app
+ version: ~1.0.0
+ repository: "@local"
+ condition: nonrtric.installORUApp
+
+ - name: topology
+ version: ~1.0.0
+ repository: "@local"
+ condition: nonrtric.installTopology
+
+ - name: dmaapmediatorservice
+ version: ~1.0.0
+ repository: "@local"
+ condition: nonrtric.installDmaapmediatorservice
+
+ - name: helmmanager
+ version: ~1.0.0
+ repository: "@local"
+ condition: nonrtric.installHelmmanager
+
+ - name: kong
+ version: ~2.4.0
+ repository: https://nexus3.o-ran-sc.org/repository/helm-konghq/
+ condition: nonrtric.installKong
+
+ - name: dmaapadapterservice
+ version: ~1.0.0
+ repository: "@local"
+ condition: nonrtric.installDmaapadapterservice
+
+ - name: cert-wrapper
+ version: ~10.x-0
+ repository: '@local'
+ condition: cert-wrapper.enabled
+++ /dev/null
-################################################################################
-# Copyright (c) 2020 Nordix Foundation. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"); #
-# you may not use this file except in compliance with the License. #
-# You may obtain a copy of the License at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# Unless required by applicable law or agreed to in writing, software #
-# distributed under the License is distributed on an "AS IS" BASIS, #
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
-# See the License for the specific language governing permissions and #
-# limitations under the License. #
-################################################################################
-
-dependencies:
- - name: a1controller
- version: ~2.0.0
- repository: "@local"
- condition: nonrtric.installA1controller
-
- - name: a1simulator
- version: ~2.1.0
- repository: "@local"
- condition: nonrtric.installA1simulator
-
- - name: controlpanel
- version: ~2.0.0
- repository: "@local"
- condition: nonrtric.installControlpanel
-
- - name: policymanagementservice
- version: ~2.0.0
- repository: "@local"
- condition: nonrtric.installPms
-
- - name: informationservice
- version: ~1.0.0
- repository: "@local"
- condition: nonrtric.installInformationservice
-
- - name: nonrtric-common
- version: ^2.0.0
- repository: "@local"
-
- - name: rappcatalogueservice
- version: ~1.0.0
- repository: "@local"
- condition: nonrtric.installRappcatalogueservice
-
- - name: nonrtricgateway
- version: ~1.0.0
- repository: "@local"
- condition: nonrtric.installNonrtricgateway
-
- - name: oru-app
- version: ~1.0.0
- repository: "@local"
- condition: nonrtric.installORUApp
-
- - name: topology
- version: ~1.0.0
- repository: "@local"
- condition: nonrtric.installTopology
-
- - name: dmaapmediatorservice
- version: ~1.0.0
- repository: "@local"
- condition: nonrtric.installDmaapmediatorservice
-
- - name: helmmanager
- version: ~1.0.0
- repository: "@local"
- condition: nonrtric.installHelmmanager
-
- - name: kong
- version: ~2.4.0
- repository: https://nexus3.o-ran-sc.org/repository/helm-konghq/
- condition: nonrtric.installKong
-
- - name: dmaapadapterservice
- version: ~1.0.0
- repository: "@local"
- condition: nonrtric.installDmaapadapterservice
-
- - name: cert-wrapper
- version: ~9.x-0
- repository: '@local'
- condition: cert-wrapper.enabled
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
appVersion: "0.0.1"
description: A Helm chart for Nonrtric Gateway
name: nonrtricgateway
version: 1.0.0
+
+dependencies:
+ - name: nonrtric-common
+ version: ^2.0.0
+ repository: "@local"
\ No newline at end of file
+++ /dev/null
-################################################################################
-# Copyright (c) 2021 Nordix Foundation. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"); #
-# you may not use this file except in compliance with the License. #
-# You may obtain a copy of the License at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# Unless required by applicable law or agreed to in writing, software #
-# distributed under the License is distributed on an "AS IS" BASIS, #
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
-# See the License for the specific language governing permissions and #
-# limitations under the License. #
-################################################################################
-
-dependencies:
- - name: nonrtric-common
- version: ^2.0.0
- repository: "@local"
-apiVersion: v1
+apiVersion: v2
appVersion: "1.0.0"
description: A Helm chart to deploy oru-app
name: oru-app
version: 1.0.0
+
+dependencies:
+ - name: common
+ version: ~10.x-0
+ repository: '@local'
+ - name: certInitializer
+ version: ~10.x-0
+ repository: '@local'
+ - name: repositoryGenerator
+ version: ~10.x-0
+ repository: '@local'
+ - name: serviceAccount
+ version: ~10.x-0
+ repository: '@local'
\ No newline at end of file
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
appVersion: "2.0.0"
description: A Helm chart for Policy Management Service
name: policymanagementservice
version: 2.0.0
+
+dependencies:
+ - name: nonrtric-common
+ version: ^2.0.0
+ repository: "@local"
\ No newline at end of file
+++ /dev/null
-################################################################################
-# Copyright (c) 2020 Nordix Foundation. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"); #
-# you may not use this file except in compliance with the License. #
-# You may obtain a copy of the License at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# Unless required by applicable law or agreed to in writing, software #
-# distributed under the License is distributed on an "AS IS" BASIS, #
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
-# See the License for the specific language governing permissions and #
-# limitations under the License. #
-################################################################################
-
-dependencies:
- - name: nonrtric-common
- version: ^2.0.0
- repository: "@local"
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
appVersion: "2.0.0"
description: A Helm chart for rAPP Catalogue Service
name: rappcatalogueservice
version: 1.0.0
+
+dependencies:
+ - name: nonrtric-common
+ version: ^2.0.0
+ repository: "@local"
\ No newline at end of file
+++ /dev/null
-################################################################################
-# Copyright (c) 2020 Nordix Foundation. #
-# #
-# Licensed under the Apache License, Version 2.0 (the "License"); #
-# you may not use this file except in compliance with the License. #
-# You may obtain a copy of the License at #
-# #
-# http://www.apache.org/licenses/LICENSE-2.0 #
-# #
-# Unless required by applicable law or agreed to in writing, software #
-# distributed under the License is distributed on an "AS IS" BASIS, #
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
-# See the License for the specific language governing permissions and #
-# limitations under the License. #
-################################################################################
-
-dependencies:
- - name: nonrtric-common
- version: ^2.0.0
- repository: "@local"
# limitations under the License. #
################################################################################
-apiVersion: v1
+apiVersion: v2
description: Common templates for inclusion in other charts
name: ric-common
version: 3.3.2
# See the License for the specific language governing permissions and
# limitations under the License.
-apiVersion: v1
+apiVersion: v2
appVersion: "1.0.0"
description: A Helm chart to deploy topology
name: topology
--- /dev/null
+#!/bin/sh
+
+usage () {
+ echo "Usage:"
+ echo " ./$(basename $0) node1_ip node2_ip ... nodeN_ip"
+ exit 1
+}
+
+if [ "$#" -lt 1 ]; then
+ echo "Missing NFS slave nodes"
+ usage
+fi
+
+#Install the NFS kernel server
+sudo apt-get update
+sudo apt-get install -y nfs-kernel-server
+
+#Create /dockerdata-nfs and set permissions
+sudo mkdir -p /dockerdata-nfs
+sudo chmod 777 -R /dockerdata-nfs
+sudo chown nobody:nogroup /dockerdata-nfs/
+
+#Update the /etc/exports
+NFS_EXP=""
+for i in "$@"; do
+ NFS_EXP="${NFS_EXP}$i(rw,sync,no_root_squash,no_subtree_check) "
+done
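+# Example resulting /etc/exports entry (hypothetical node IPs 10.0.0.2 and 10.0.0.3):
+# /dockerdata-nfs 10.0.0.2(rw,sync,no_root_squash,no_subtree_check) 10.0.0.3(rw,sync,no_root_squash,no_subtree_check)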
+echo "/dockerdata-nfs "$NFS_EXP | sudo tee -a /etc/exports
+
+#Restart the NFS service
+sudo exportfs -a
+sudo systemctl restart nfs-kernel-server
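+
+# Example invocation (hypothetical script name and node IPs):
+# sudo ./0-setup-nfs-master.sh 10.0.0.2 10.0.0.3
+# The active exports can then be checked with:
+# sudo exportfs -v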
###
## Microk8S part
+sudo apt-get update
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
snap remove microk8s
ufw default allow routed
## Enable required features for K8S
-microk8s enable dns storage
+microk8s enable dns storage prometheus
## Setup kubectl
cd
-mkdir .kube
+mkdir -p .kube
cd .kube
sudo microk8s.config > config
chmod 700 config
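+
+## Optionally verify that the node is ready before building and installing the charts
+microk8s status --wait-ready
+kubectl get nodes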
--- /dev/null
+#!/bin/sh
+
+usage () {
+ echo "Usage:"
+ echo " ./$(basename $0) nfs_master_ip"
+ exit 1
+}
+
+if [ "$#" -ne 1 ]; then
+ echo "Missing NFS mater node"
+ usage
+fi
+
+MASTER_IP=$1
+
+#Install NFS common
+sudo apt-get update
+sudo apt-get install -y nfs-common
+
+#Create NFS directory
+sudo mkdir -p /dockerdata-nfs
+
+#Mount the remote NFS directory to the local one
+sudo mount $MASTER_IP:/dockerdata-nfs /dockerdata-nfs/
+echo "$MASTER_IP:/dockerdata-nfs /dockerdata-nfs nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0" | sudo tee -a /etc/fstab
sudo apt-get install maven -y
sudo apt-get install python3.8 -y
sudo apt-get install python3-pip -y
+sudo apt-get install curl -y
pip3 install tox
--- /dev/null
+#!/bin/bash
+
+###
+# ============LICENSE_START=======================================================
+# ORAN SMO Package
+# ================================================================================
+# Copyright (C) 2021 AT&T Intellectual Property. All rights
+# reserved.
+# ================================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END============================================
+# ===================================================================
+#
+###
+
+SCRIPT=$(readlink -f "$0")
+SCRIPT_PATH=$(dirname "$SCRIPT")
+cd $SCRIPT_PATH
+
+FLAVOUR=$1
+if [ -z "$1" ]
+ then
+ echo "No helm override flavour supplied, going to default"
+ FLAVOUR="default"
+fi
+
+
+echo "Starting CICD in 'tests' namespace..."
+
+../sub-scripts/install-cicd.sh ../../helm-override/$FLAVOUR/tests-override.yaml
+
+kubectl get pods -n tests
+kubectl get namespaces
POLICY_API_URL = "https://localhost:6969"
SDNC_URL = "http://localhost:8282"
EMS_URL = "http://localhost:8083"
+CLAMP_URL = "http://localhost:8084"
#
###
"""Onap Policy Clamp Tosca Template module."""
-
+import time
from typing import Dict
from onapsdk.clamp.clamp_element import Clamp
+from onapsdk.exceptions import RequestError
class ClampToscaTemplate(Clamp):
"""Onap Policy Clamp Tosca Template class."""
Returns:
the tosca template instance
"""
- url = f"{self.base_url()}/toscaControlLoop/getToscaInstantiation"
+ url = f"{self.base_url()}/acm/getToscaInstantiation"
template_instance = self.send_message_json('GET',
'Get tosca template instance',
url,
the response of the uploading action
"""
- url = f"{self.base_url()}/toscaControlLoop/commissionToscaTemplate"
+ url = f"{self.base_url()}/acm/commissionToscaTemplate"
response = self.send_message_json('POST',
'Upload Tosca to commissioning',
url,
Returns:
the response of the creation action
"""
- url = f"{self.base_url()}/toscaControlLoop/postToscaInstanceProperties"
+ url = f"{self.base_url()}/acm/postToscaInstanceProperties"
response = self.send_message_json('POST',
'Create Tosca instance',
url,
Returns:
the template instance
"""
- url = f"{self.base_url()}/toscaControlLoop/getInstantiationOrderState?name={name}&version={version}"
+ url = f"{self.base_url()}/acm/getInstantiationOrderState?name={name}&version={version}"
template_instance = self.send_message_json('GET',
'Get tosca template instance',
url,
return template_instance
- def change_instance_status(self, new_status, name, version) -> dict:
+ def change_instance_status(self, new_status, name, version) -> str:
"""
Update tosca instance status.
the updated template instance
"""
body = '{"orderedState":"' + new_status + '","controlLoopIdentifierList":[{"name":"' + name + '","version":"' + version + '"}]}'
- url = f"{self.base_url()}/toscaControlLoop/putToscaInstantiationStateChange"
- response = self.send_message_json('PUT',
- 'Update tosca instance status',
- url,
- data=body,
- headers=self.header,
- basic_auth=self.basic_auth)
- return response
+ url = f"{self.base_url()}/acm/putToscaInstantiationStateChange"
+ try:
+ response = self.send_message_json('PUT',
+ 'Update tosca instance status',
+ url,
+ data=body,
+ headers=self.header,
+ basic_auth=self.basic_auth)
+ except RequestError:
+ self._logger.error("Change Instance Status request returned failed. Will query the instance status to double check whether the request is successful or not.")
+
+ # There's a bug in the Clamp code: it sometimes returns 500 even though the status change actually succeeded
+ # Thus we verify the status to determine whether it was successful or not
+ time.sleep(2)
+ response = self.get_template_instance()
+ return response["automationCompositionList"][0]["orderedState"]
+
+ def verify_instance_status(self, new_status):
+ """
+ Verify whether the instance changed to the new status.
+
+ Args:
+ new_status : the new status of the instance
+ Returns:
+ the boolean value indicating whether status changed successfully
+ """
+ response = self.get_template_instance()
+ if response["automationCompositionList"][0]["state"] == new_status:
+ return True
+ return False
def delete_template_instance(self, name: str, version: str) -> dict:
"""
Returns:
the response of the deletion action
"""
- url = f"{self.base_url()}/toscaControlLoop/deleteToscaInstanceProperties?name={name}&version={version}"
+ url = f"{self.base_url()}/acm/deleteToscaInstanceProperties?name={name}&version={version}"
response = self.send_message_json('DELETE',
'Delete the tosca instance',
url,
Returns:
the response of the decommission action
"""
- url = f"{self.base_url()}/toscaControlLoop/decommissionToscaTemplate?name={name}&version={version}"
+ url = f"{self.base_url()}/acm/decommissionToscaTemplate?name={name}&version={version}"
response = self.send_message_json('DELETE',
'Decommission the tosca template',
url,
"DMaaPConsumer": {
"carrierTechnologyParameters": {
"parameters": {
- "url": "http://message-router:3904/events/unauthenticated.SEC_FAULT_OUTPUT/users/link-monitor-nonrtric?timeout=15000&limit=100"
+ "url": "http://message-router:3904/events/unauthenticated.SEC_FAULT_OUTPUT/{{dmaapGroup}}/{{dmaapUser}}?timeout=15000&limit=100"
},
"carrierTechnology": "RESTCLIENT",
"parameterClassName": "org.onap.policy.apex.plugins.event.carrier.restclient.RestClientCarrierTechnologyParameters"
"version": "1.0.1"
}
}
-}
\ No newline at end of file
+}
--- /dev/null
+{
+ "data_types": {
+ "onap.datatypes.ToscaConceptIdentifier": {
+ "derived_from": "tosca.datatypes.Root",
+ "properties": {
+ "version": {
+ "required": true,
+ "type": "string"
+ },
+ "name": {
+ "required": true,
+ "type": "string"
+ }
+ }
+ }
+ },
+ "topology_template": {
+ "node_templates": {
+ "org.onap.k8s.controlloop.K8SControlLoopParticipant": {
+ "properties": {
+ "provider": "ONAP"
+ },
+ "description": "Participant for k8s",
+ "version": "2.3.4",
+ "type_version": "1.0.1",
+ "type": "org.onap.policy.clamp.controlloop.Participant"
+ },
+ "org.onap.domain.linkmonitor.LinkMonitorControlLoopDefinition1": {
+ "properties": {
+ "elements": [
+ {
+ "version": "1.2.3",
+ "name": "org.onap.domain.linkmonitor.OruAppK8SMicroserviceControlLoopElement"
+ }],
+ "provider": "Ericsson"
+ },
+ "description": "Control loop for Link Monitor",
+ "version": "1.2.3",
+ "type_version": "1.0.1",
+ "type": "org.onap.policy.clamp.controlloop.ControlLoop"
+ },
+ "org.onap.domain.linkmonitor.OruAppK8SMicroserviceControlLoopElement": {
+ "properties": {
+ "participantType": {
+ "version": "2.3.4",
+ "name": "org.onap.k8s.controlloop.K8SControlLoopParticipant"
+ },
+ "participant_id": {
+ "version": "1.0.0",
+ "name": "K8sParticipant0"
+ },
+ "provider": "ONAP",
+ "chart": {
+ "repository": {
+ "address": "{{chartmuseumIp}}",
+ "repoName": "chartmuseum",
+ "port": {{chartmuseumPort}},
+ "protocol": "http"
+ },
+ "namespace": "nonrtric",
+ "chartId": {
+ "version": "{{chartVersion}}",
+ "name": "{{chartName}}"
+ },
+ "releaseName": "{{releaseName}}"
+ }
+ },
+ "description": "Deploy {{chartName}}",
+ "version": "1.2.3",
+ "type_version": "1.0.1",
+ "type": "org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement"
+ }
+ }
+ },
+ "tosca_definitions_version": "tosca_simple_yaml_1_1_0",
+ "node_types": {
+ "org.onap.policy.clamp.controlloop.ControlLoop": {
+ "derived_from": "tosca.nodetypes.Root",
+ "properties": {
+ "elements": {
+ "required": true,
+ "entry_schema": {
+ "type": "onap.datatypes.ToscaConceptIdentifier"
+ },
+ "type": "list"
+ },
+ "provider": {
+ "required": false,
+ "type": "string"
+ }
+ },
+ "version": "1.0.1"
+ },
+ "org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement": {
+ "derived_from": "org.onap.policy.clamp.controlloop.ControlLoopElement",
+ "properties": {
+ "templates": {
+ "required": false,
+ "entry_schema": null,
+ "type": "list"
+ },
+ "requirements": {
+ "required": false,
+ "type": "string"
+ },
+ "values": {
+ "required": true,
+ "type": "string"
+ },
+ "configs": {
+ "required": false,
+ "type": "list"
+ },
+ "chart": {
+ "required": true,
+ "type": "string"
+ }
+ },
+ "version": "1.0.1"
+ },
+ "org.onap.policy.clamp.controlloop.Participant": {
+ "derived_from": "tosca.nodetypes.Root",
+ "properties": {
+ "provider": {
+ "required": false,
+ "type": "string"
+ }
+ },
+ "version": "1.0.1"
+ },
+ "org.onap.policy.clamp.controlloop.ControlLoopElement": {
+ "derived_from": "tosca.nodetypes.Root",
+ "properties": {
+ "participant_id": {
+ "required": true,
+ "type": "onap.datatypes.ToscaConceptIdentifier"
+ },
+ "provider": {
+ "required": false,
+ "type": "string"
+ }
+ },
+ "version": "1.0.1"
+ }
+ }
+}
\ No newline at end of file
SO_AUTH = "Basic SW5mcmFQb3J0YWxDbGllbnQ6cGFzc3dvcmQxJA=="
VID_URL = "https://vid.api.simpledemo.onap.org:30200"
VID_API_VERSION = "/vid"
-CLAMP_URL = "https://clamp.api.simpledemo.onap.org:30258"
CLAMP_AUTH = "Basic ZGVtb0BwZW9wbGUub3NhYWYub3JnOmRlbW8xMjM0NTYh"
VES_URL = "http://ves.api.simpledemo.onap.org:30417"
DMAAP_URL = "http://192.168.1.39:3904"
POLICY_PAP_URL = "https://"+subprocess.run("kubectl get services policy-pap -n onap |grep policy-pap | awk '{print $3}'", shell=True, check=True, stdout=subprocess.PIPE).stdout.decode('utf-8').strip()+":6969"
POLICY_API_URL = "https://"+subprocess.run("kubectl get services policy-api -n onap |grep policy-api | awk '{print $3}'", shell=True, check=True, stdout=subprocess.PIPE).stdout.decode('utf-8').strip()+":6969"
SDNC_URL = "http://"+subprocess.run("kubectl get services sdnc-oam -n onap |grep sdnc-oam | awk '{print $3}'", shell=True, check=True, stdout=subprocess.PIPE).stdout.decode('utf-8').strip()+":8282"
+CLAMP_URL = "https://"+subprocess.run("kubectl get services policy-clamp-be -n onap |grep policy-clamp-be | awk '{print $3}'", shell=True, check=True, stdout=subprocess.PIPE).stdout.decode('utf-8').strip()+":8443"
### Network simulators topology
NETWORK_SIMULATORS_RU_LIST = ["o-ru-11211","o-ru-11221","o-ru-11222","o-ru-11223"]
NETWORK_SIMULATORS_LIST = NETWORK_SIMULATORS_DU_RU_LIST + NETWORK_SIMULATORS_TOPOLOGY_SERVER
DMAAP_GROUP = "o1test"
DMAAP_USER = "o1test"
+DMAAP_CL_GROUP = "cltest"
+DMAAP_CL_USER = "cltest"
DMAAP_TOPIC_PNFREG = "unauthenticated.VES_PNFREG_OUTPUT"
DMAAP_TOPIC_PNFREG_JSON = '{"topicName": "' + DMAAP_TOPIC_PNFREG + '"}'
DMAAP_TOPIC_FAULT = "unauthenticated.SEC_FAULT_OUTPUT"
SDNC_CHECK_TIMEOUT = 900
POLICY_CHECK_RETRY = 30
POLICY_CHECK_TIMEOUT = 900
+CLAMP_CHECK_RETRY = 30
+CLAMP_CHECK_TIMEOUT = 900
+NETWORK_SIMULATOR_CHECK_RETRY = 30
+NETWORK_SIMULATOR_CHECK_TIMEOUT = 900
from waiting import wait
from urllib3.exceptions import NewConnectionError
from oransdk.dmaap.dmaap import OranDmaap
+from oransdk.policy.clamp import ClampToscaTemplate
from oransdk.policy.policy import OranPolicy
from oransdk.sdnc.sdnc import OranSdnc
from smo.smo import Smo
network_sims = NetworkSimulators("./resources")
smo = Smo()
+clamp = ClampToscaTemplate(settings.CLAMP_BASICAUTH)
dmaap = OranDmaap()
sdnc = OranSdnc()
policy = OranPolicy()
return False
return response.status_code == 200
+def clamp_component_ready():
+ """Check if Clamp component is ready."""
+ logger.info("Verify Clamp component is ready")
+ try:
+ response = clamp.get_template_instance()
+ except (RequestException, NewConnectionError, ConnectionFailed, APIError) as e:
+ logger.error(e)
+ return False
+ return response["automationCompositionList"] is not None
+
###### Entry points of PYTEST Session
def pytest_sessionstart():
"""Pytest calls it when starting a test session."""
logger.info("Check and for for SDNC to be running")
wait(lambda: policy_component_ready(), sleep_seconds=settings.POLICY_CHECK_RETRY, timeout_seconds=settings.POLICY_CHECK_TIMEOUT, waiting_for="Policy to be ready")
wait(lambda: sdnc_component_ready(), sleep_seconds=settings.SDNC_CHECK_RETRY, timeout_seconds=settings.SDNC_CHECK_TIMEOUT, waiting_for="SDNC to be ready")
+ # disabled for now, until the policy/clamp issue has been fixed
+ ##wait(lambda: clamp_component_ready(), sleep_seconds=settings.CLAMP_CHECK_RETRY, timeout_seconds=settings.CLAMP_CHECK_TIMEOUT, waiting_for="Clamp to be ready")
+
## Just kill any simulators that could already be running
network_sims.stop_network_simulators()
###### END of FIRST start, now we can start the sims for the real tests.
--- /dev/null
+#!/usr/bin/env python3
+###
+# ============LICENSE_START=======================================================
+# ORAN SMO PACKAGE - PYTHONSDK TESTS
+# ================================================================================
+# Copyright (C) 2022 AT&T Intellectual Property. All rights
+# reserved.
+# ================================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END============================================
+# ===================================================================
+#
+###
+"""Closed Loop Apex usecase tests module."""
+
+import time
+
+import logging
+import logging.config
+import os
+import pytest
+from onapsdk.configuration import settings
+from onapsdk.exceptions import ResourceNotFound
+from waiting import wait
+from oransdk.dmaap.dmaap import OranDmaap
+from oransdk.policy.clamp import ClampToscaTemplate
+from oransdk.policy.policy import OranPolicy, PolicyType
+from oransdk.sdnc.sdnc import OranSdnc
+from oransdk.utils.jinja import jinja_env
+from smo.network_simulators import NetworkSimulators
+from smo.dmaap import DmaapUtils
+from smo.cl_usecase import ClCommissioningUtils
+
+
+# Set working dir as python script location
+abspath = os.path.abspath(__file__)
+dname = os.path.dirname(abspath)
+os.chdir(dname)
+
+logging.config.dictConfig(settings.LOG_CONFIG)
+logger = logging.getLogger("test Control Loops for O-RU Fronthaul Recovery usecase - Apex policy")
+dmaap = OranDmaap()
+dmaap_utils = DmaapUtils()
+clcommissioning_utils = ClCommissioningUtils()
+network_simulators = NetworkSimulators("./resources")
+clamp = ClampToscaTemplate(settings.CLAMP_BASICAUTH)
+policy = OranPolicy()
+
+@pytest.fixture(scope="module", autouse=True)
+def setup_simulators():
+ """Prepare the test environment before the executing the tests."""
+ logger.info("Test class setup for Closed Loop Apex test")
+
+ dmaap_utils.clean_dmaap(settings.DMAAP_CL_GROUP, settings.DMAAP_CL_USER)
+
+ network_simulators.start_and_wait_network_simulators()
+
+ # Wait enough time to have at least the SDNR notifications sent
+ logger.info("Waiting 10s that SDNR sends all registration events to VES...")
+ time.sleep(10)
+ logger.info("Test Session setup completed successfully")
+
+ ### Cleanup code
+ yield
+ # Finish and delete the cl instance
+ clcommissioning_utils.clean_instance()
+
+ try:
+ policy.undeploy_policy("operational.apex.linkmonitor", "1.0.0", settings.POLICY_BASICAUTH)
+ except ResourceNotFound:
+ logger.info("Policy already undeployed")
+ try:
+ policy.delete_policy(PolicyType(type="onap.policies.controlloop.operational.common.Apex", version="1.0.0"), "operational.apex.linkmonitor", "1.0.0", settings.POLICY_BASICAUTH)
+ except ResourceNotFound:
+ logger.info("Policy already deleted")
+
+ network_simulators.stop_network_simulators()
+ time.sleep(10)
+ logger.info("Test Session cleanup done")
+
+def verify_apex_policy_created():
+ """
+ Verify whether the Apex policy has deployed successfully.
+
+ Returns:
+ the boolean value indicating whether policy deployed successfully
+ """
+ logger.info("Verify Apex policy is deployed")
+ policy_status_list = policy.get_policy_status(settings.POLICY_BASICAUTH)
+
+ for status in policy_status_list:
+ logger.info("the status %s,%s,%s,%s:", status["policy"]["name"], status["policy"]["version"], status["deploy"], status["state"])
+ if (status["policy"]["name"] == "operational.apex.linkmonitor" and status["policy"]["version"] == "1.0.0" and status["deploy"] and status["state"] == "SUCCESS"):
+ logger.info("Policy deployement OK")
+ return True
+
+ logger.info("Failed to deploy Apex policy")
+ return False
+
+def send_dmaap_event():
+ """Send a event to Dmaap that should trigger the apex policy."""
+ event = jinja_env().get_template("LinkFailureEvent.json.j2").render()
+ dmaap.send_link_failure_event(event)
+
+def test_cl_apex():
+ """The Closed Loop O-RU Fronthaul Recovery usecase Apex version."""
+ logger.info("Upload tosca to commissioning")
+ tosca_template = jinja_env().get_template("commission_apex.json.j2").render(dmaapGroup=settings.DMAAP_CL_GROUP, dmaapUser=settings.DMAAP_CL_USER)
+ assert clcommissioning_utils.create_instance(tosca_template) is True
+
+ sdnc = OranSdnc()
+ status = sdnc.get_odu_oru_status("o-du-1122", "rrm-pol-2", settings.SDNC_BASICAUTH)
+ assert status["o-ran-sc-du-hello-world:radio-resource-management-policy-ratio"][0]["administrative-state"] == "locked"
+
+ send_dmaap_event()
+
+ wait(lambda: verify_apex_policy_created(), sleep_seconds=10, timeout_seconds=60, waiting_for="Policy Deployment to be OK")
+
+ time.sleep(20)
+ logger.info("Check O-du/O-ru status again")
+ status = sdnc.get_odu_oru_status("o-du-1122", "rrm-pol-2", settings.SDNC_BASICAUTH)
+ assert status["o-ran-sc-du-hello-world:radio-resource-management-policy-ratio"][0]["administrative-state"] == "unlocked"
--- /dev/null
+#!/usr/bin/env python3
+###
+# ============LICENSE_START=======================================================
+# ORAN SMO PACKAGE - PYTHONSDK TESTS
+# ================================================================================
+# Copyright (C) 2022 AT&T Intellectual Property. All rights
+# reserved.
+# ================================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END============================================
+# ===================================================================
+#
+###
+"""Closed Loop Apex usecase tests module."""
+# This usecase has limitations due to a Clamp issue.
+# 1. make sure to use the policy-clamp-be version 6.2.0-snapshot-latest at the moment
+import time
+import logging.config
+import subprocess
+import os
+from subprocess import check_output
+import pytest
+from waiting import wait
+from onapsdk.configuration import settings
+from oransdk.utils.jinja import jinja_env
+from oransdk.policy.clamp import ClampToscaTemplate
+from smo.cl_usecase import ClCommissioningUtils
+
+# Set working dir as python script location
+abspath = os.path.abspath(__file__)
+dname = os.path.dirname(abspath)
+os.chdir(dname)
+
+logging.config.dictConfig(settings.LOG_CONFIG)
+logger = logging.getLogger("test Control Loops for O-RU Fronthaul Recovery usecase - Clamp K8S usecase")
+clcommissioning_utils = ClCommissioningUtils()
+clamp = ClampToscaTemplate(settings.CLAMP_BASICAUTH)
+
+chartmuseum_ip = subprocess.run("kubectl get services -n test | grep test-chartmuseum | awk '{print $3}'", shell=True, check=True, stdout=subprocess.PIPE).stdout.decode('utf-8').strip()+":8080"
+chartmuseum_port = "8080"
+chart_version = "1.0.0"
+chart_name = "oru-app"
+release_name = "oru-app"
+
+@pytest.fixture(scope="module", autouse=True)
+def setup_simulators():
+ """Prepare the test environment before the executing the tests."""
+ logger.info("Test class setup for Closed Loop tests")
+
+ deploy_chartmuseum()
+
+ # Add the remote repo to Clamp k8s pod
+ logger.info("Add the remote repo to Clamp k8s pod")
+ k8s_pod = subprocess.run("kubectl get pods -n onap | grep k8s | awk '{print $1}'", shell=True, check=True, stdout=subprocess.PIPE).stdout.decode('utf-8').strip()
+
+ repo_url = subprocess.run("kubectl get services -n test | grep test-chartmuseum | awk '{print $3}'", shell=True, check=True, stdout=subprocess.PIPE).stdout.decode('utf-8').strip()+":8080"
+ logger.info("k8s: %s, repo_url:%s", k8s_pod, repo_url)
+ cmd = f"kubectl exec -it -n onap {k8s_pod} -- sh -c \"helm repo add chartmuseum http://{repo_url}\""
+ check_output(cmd, shell=True).decode('utf-8')
+ cmd = f"kubectl exec -it -n onap {k8s_pod} -- sh -c \"helm repo update\""
+ check_output(cmd, shell=True).decode('utf-8')
+ cmd = f"kubectl exec -it -n onap {k8s_pod} -- sh -c \"helm search repo -l oru-app\""
+ result = check_output(cmd, shell=True).decode('utf-8')
+ if result == '':
+ logger.info("Failed to update the K8s pod repo")
+ logger.info("Test Session setup completed successfully")
+
+ ### Cleanup code
+ yield
+ # Finish and delete the cl instance
+ clcommissioning_utils.clean_instance()
+ wait(lambda: is_oru_app_down(), sleep_seconds=5, timeout_seconds=60, waiting_for="Oru app is down")
+ # Remove the remote repo to Clamp k8s pod
+ cmd = f"kubectl exec -it -n onap {k8s_pod} -- sh -c \"helm repo remove chartmuseum\""
+ check_output(cmd, shell=True).decode('utf-8')
+ cmd = "kubectl delete namespace test"
+ check_output(cmd, shell=True).decode('utf-8')
+ cmd = "helm repo remove test"
+ check_output(cmd, shell=True).decode('utf-8')
+ time.sleep(10)
+ logger.info("Test Session cleanup done")
+
+
+def deploy_chartmuseum():
+ """Start chartmuseum pod and populate with the nedded helm chart."""
+ logger.info("Start to deploy chartmuseum")
+ cmd = "helm repo add test https://chartmuseum.github.io/charts"
+ check_output(cmd, shell=True).decode('utf-8')
+ cmd = "kubectl create namespace test"
+ check_output(cmd, shell=True).decode('utf-8')
+
+ cmd = "helm install test test/chartmuseum --version 3.1.0 --namespace test --set env.open.DISABLE_API=false"
+ check_output(cmd, shell=True).decode('utf-8')
+ wait(lambda: is_chartmuseum_up(), sleep_seconds=10, timeout_seconds=60, waiting_for="chartmuseum to be ready")
+
+ chartmuseum_url = subprocess.run("kubectl get services -n test | grep test-chartmuseum | awk '{print $3}'", shell=True, check=True, stdout=subprocess.PIPE).stdout.decode('utf-8').strip()+":8080"
+ cmd = f"curl --noproxy '*' -X POST --data-binary @{dname}/resources/cl-test-helm-chart/oru-app-1.0.0.tgz http://{chartmuseum_url}/api/charts"
+ check_output(cmd, shell=True).decode('utf-8')
+
+
+def is_chartmuseum_up() -> bool:
+ """Check if the chartmuseum is up."""
+ cmd = "kubectl get pods --field-selector status.phase=Running -n test"
+ result = check_output(cmd, shell=True).decode('utf-8')
+ logger.info("Checking if chartmuseum is UP: %s", result)
+ if result == '':
+ logger.info("chartmuseum is Down")
+ return False
+ logger.info("chartmuseum is Up")
+ return True
+
+
+def is_oru_app_up() -> bool:
+ """Check if the oru-app is up."""
+ cmd = "kubectl get pods -n nonrtric | grep oru-app | wc -l"
+ result = check_output(cmd, shell=True).decode('utf-8')
+ logger.info("Checking if oru-app is up :%s", result)
+ if int(result) == 1:
+ logger.info("ORU-APP is Up")
+ return True
+ logger.info("ORU-APP is Down")
+ return False
+
+def is_oru_app_down() -> bool:
+ """Check if the oru-app is down."""
+ cmd = "kubectl get pods -n nonrtric | grep oru-app | wc -l"
+ result = check_output(cmd, shell=True).decode('utf-8')
+ logger.info("Checking if oru-app is down :%s", result)
+ if int(result) == 0:
+ logger.info("ORU-APP is Down")
+ return True
+ logger.info("ORU-APP is Up")
+ return False
+
+def test_cl_oru_app_deploy():
+ """The Closed Loop O-RU Fronthaul Recovery usecase Apex version."""
+ logger.info("Upload tosca to commissioning")
+ tosca_template = jinja_env().get_template("commission_k8s.json.j2").render(chartmuseumIp=chartmuseum_ip, chartmuseumPort=chartmuseum_port, chartVersion=chart_version, chartName=chart_name, releaseName=release_name)
+ assert clcommissioning_utils.create_instance(tosca_template) is True
+
+ logger.info("Check if oru-app is up")
+ wait(lambda: is_oru_app_up(), sleep_seconds=5, timeout_seconds=60, waiting_for="Oru app to be up")
+++ /dev/null
-#!/usr/bin/env python3
-###
-# ============LICENSE_START=======================================================
-# ORAN SMO PACKAGE - PYTHONSDK TESTS
-# ================================================================================
-# Copyright (C) 2022 AT&T Intellectual Property. All rights
-# reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END============================================
-# ===================================================================
-#
-###
-"""Closed Loop Apex usecase tests module."""
-# This usecase has limitations due to Clamp issue.
-# 1. manually change clamp-be settings before running the test:
-# - run command "kubectl -n onap edit cm onap-policy-clamp-be-configmap"
-# find variable clamp.config.controlloop.runtime.url and change http into https
-# - run command "kubectl rollout restart deployment onap-policy-clamp-be -n onap"
-# and wait until policy-clamp-be pod restarted successfully
-# 2. make sure using the policy-clamp-be version 6.2.0-snapshot-latest at this the moment
-
-import time
-import logging
-import logging.config
-from onapsdk.configuration import settings
-from onapsdk.exceptions import RequestError
-from waiting import wait
-from oransdk.dmaap.dmaap import OranDmaap
-from oransdk.policy.policy import OranPolicy
-from oransdk.policy.clamp import ClampToscaTemplate
-from oransdk.sdnc.sdnc import OranSdnc
-from oransdk.utils.jinja import jinja_env
-
-logging.config.dictConfig(settings.LOG_CONFIG)
-logger = logging.getLogger("test Control Loops for O-RU Fronthaul Recovery usecase - Apex policy")
-dmaap = OranDmaap()
-clamp = ClampToscaTemplate(settings.CLAMP_BASICAUTH)
-
-def create_topic():
- """Create the topic in Dmaap."""
- logger.info("Create new topic")
- topic = '{ "topicName": "unauthenticated.SEC_FAULT_OUTPUT", "topicDescription": "test topic", "partitionCount": 1, "replicationCnCount": 1, "transactionEnabled": "false"}'
- response = dmaap.create_topic(topic)
- logger.info("response is: %s", response)
-
-def verify_topic_created():
- """Verify whether needed topic created."""
- logger.info("Verify topic created")
- topiclist = dmaap.get_all_topics({})
- topic_created = False
- for topic in topiclist:
- if topic["topicName"] == "unauthenticated.SEC_FAULT_OUTPUT":
- topic_created = True
- break
-
- if topic_created:
- logger.info("Topic created successfully")
- else:
- logger.info("Topic creation failed")
-
-def upload_commission(tosca_template):
- """
- Upload the tosca to commissioning.
-
- Args:
- tosca_template : the tosca template to upload in json format
- Returns:
- the response from the upload action
- """
- logger.info("Upload tosca to commissioning")
- return clamp.upload_commission(tosca_template)
-
-def create_instance(tosca_template):
- """
- Create a instance.
-
- Args:
- tosca_template : the tosca template to create in json format
- Returns:
- the response from the creation action
- """
- logger.info("Create Instance")
- return clamp.create_instance(tosca_template)
-
-def change_instance_status(new_status) -> str:
- """
- Change the instance statue.
-
- Args:
- new_status : the new instance to be changed to
- Returns:
- the new status to be changed to
- """
- logger.info("Change Instance Status to %s", new_status)
- try:
- clamp.change_instance_status(new_status, "PMSH_Instance1", "1.2.3")
- except RequestError:
- logger.info("Change Instance Status request returned failed. Will query the instance status to double check whether the request is successful or not.")
-
- # There's a bug in Clamp code, sometimes it returned 500, but actually the status has been changed successfully
- # Thus we verify the status to determine whether it was successful or not
- time.sleep(2)
- response = clamp.get_template_instance()
- return response["controlLoopList"][0]["orderedState"]
-
-def verify_instance_status(new_status):
- """
- Verify whether the instance changed to the new status.
-
- Args:
- new_status : the new status of the instance
- Returns:
- the boolean value indicating whether status changed successfully
- """
- logger.info("Verify the Instance Status is updated to the expected status %s", new_status)
- response = clamp.get_template_instance()
- if response["controlLoopList"][0]["state"] == new_status:
- return True
- return False
-
-def verify_apex_policy_created():
- """
- Verify whether the Apex policy has deployed successfully.
-
- Returns:
- the boolean value indicating whether policy deployed successfully
- """
- logger.info("Verify Apex policy is deployed")
- policy = OranPolicy()
- policy_status_list = policy.get_policy_status(settings.POLICY_BASICAUTH)
-
- for status in policy_status_list:
- logger.info("the status %s,%s,%s:", status["policy"]["name"], status["policy"]["version"], status["deploy"])
- if (status["policy"]["name"] == "operational.apex.linkmonitor" and status["policy"]["version"] == "1.0.0" and status["deploy"]):
- return True
-
- logger.info("Failed to deploy Apex policy")
- return False
-
-def delete_template_instance():
- """
- Delete the template instance.
-
- Returns:
- the response from the deletion action
- """
- logger.info("Delete Instance")
- return clamp.delete_template_instance("PMSH_Instance1", "1.2.3")
-
-def decommission_tosca():
- """
- Decommission the tosca template.
-
- Returns:
- the response from the decommission action
- """
- logger.info("Decommission tosca")
- return clamp.decommission_template("ToscaServiceTemplateSimple", "1.0.0")
-
-def send_dmaap_event():
- """Send a event to Dmaap that should trigger the apex policy."""
- event = jinja_env().get_template("LinkFailureEvent.json.j2").render()
- dmaap.send_link_failure_event(event)
-
-def test_cl_oru_recovery():
- """The Closed Loop O-RU Fronthaul Recovery usecase Apex version."""
- create_topic()
- verify_topic_created()
-
- tosca_template = jinja_env().get_template("commission_apex.json.j2").render()
-
- response = upload_commission(tosca_template)
- assert response["errorDetails"] is None
-
- response = create_instance(tosca_template)
- assert response["errorDetails"] is None
-
- response = change_instance_status("PASSIVE")
- assert response == "PASSIVE"
- wait(lambda: verify_instance_status("PASSIVE"), sleep_seconds=5, timeout_seconds=60, waiting_for="Clamp instance switches to PASSIVE")
-
- response = change_instance_status("RUNNING")
- assert response == "RUNNING"
- wait(lambda: verify_instance_status("RUNNING"), sleep_seconds=5, timeout_seconds=60, waiting_for="Clamp instance switches to RUNNING")
-
- sdnc = OranSdnc()
- status = sdnc.get_odu_oru_status("o-du-1122", "rrm-pol-2", settings.SDNC_BASICAUTH)
- assert status["o-ran-sc-du-hello-world:radio-resource-management-policy-ratio"][0]["administrative-state"] == "locked"
-
- send_dmaap_event()
-
- assert verify_apex_policy_created()
-
- time.sleep(20)
- logger.info("Check O-du/O-ru status again")
- status = sdnc.get_odu_oru_status("o-du-1122", "rrm-pol-2", settings.SDNC_BASICAUTH)
- assert status["o-ran-sc-du-hello-world:radio-resource-management-policy-ratio"][0]["administrative-state"] == "unlocked"
-
- response = change_instance_status("PASSIVE")
- assert response == "PASSIVE"
- wait(lambda: verify_instance_status("PASSIVE"), sleep_seconds=5, timeout_seconds=60, waiting_for="Clamp instance switches to PASSIVE")
-
- response = change_instance_status("UNINITIALISED")
- assert response == "UNINITIALISED"
- wait(lambda: verify_instance_status("UNINITIALISED"), sleep_seconds=5, timeout_seconds=60, waiting_for="Clamp instance switches to UNINITIALISED")
-
- response = delete_template_instance()
- assert response["errorDetails"] is None
-
- response = decommission_tosca()
- assert response["errorDetails"] is None
--- /dev/null
+#!/usr/bin/env python3
+###
+# ============LICENSE_START=======================================================
+# ORAN SMO PACKAGE - PYTHONSDK TESTS
+# ================================================================================
+# Copyright (C) 2021-2022 AT&T Intellectual Property. All rights
+# reserved.
+# ================================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END============================================
+# ===================================================================
+#
+###
+
+"""Cl usecase utils module."""
+
+import logging.config
+from waiting import wait
+from onapsdk.configuration import settings
+from oransdk.policy.clamp import ClampToscaTemplate
+
+logging.config.dictConfig(settings.LOG_CONFIG)
+logger = logging.getLogger("Cl usecae utils")
+clamp = ClampToscaTemplate(settings.CLAMP_BASICAUTH)
+
+
+class ClCommissioningUtils():
+ """Can be used to have cl usecase utils methods."""
+
+ @classmethod
+ def clean_instance(cls):
+ """Clean template instance."""
+ clamp.change_instance_status("PASSIVE", "PMSH_Instance1", "1.2.3")
+ wait(lambda: clamp.verify_instance_status("PASSIVE"), sleep_seconds=5, timeout_seconds=60,
+ waiting_for="Clamp instance switches to PASSIVE")
+ clamp.change_instance_status("UNINITIALISED", "PMSH_Instance1", "1.2.3")
+ wait(lambda: clamp.verify_instance_status("UNINITIALISED"), sleep_seconds=5, timeout_seconds=60,
+ waiting_for="Clamp instance switches to UNINITIALISED")
+
+ logger.info("Delete Instance")
+ clamp.delete_template_instance("PMSH_Instance1", "1.2.3")
+ logger.info("Decommission tosca")
+ clamp.decommission_template("ToscaServiceTemplateSimple", "1.0.0")
+
+ @classmethod
+ def create_instance(cls, tosca_template) -> bool:
+ """Create template instance."""
+ response = clamp.upload_commission(tosca_template)
+ if response["errorDetails"] is not None:
+ return False
+
+ logger.info("Create Instance")
+ response = clamp.create_instance(tosca_template)
+ if response["errorDetails"] is not None:
+ return False
+
+ logger.info("Change Instance Status to PASSIVE")
+ clamp.change_instance_status("PASSIVE", "PMSH_Instance1", "1.2.3")
+ wait(lambda: clamp.verify_instance_status("PASSIVE"), sleep_seconds=5, timeout_seconds=60,
+ waiting_for="Clamp instance switches to PASSIVE")
+
+ logger.info("Change Instance Status to RUNNING")
+ clamp.change_instance_status("RUNNING", "PMSH_Instance1", "1.2.3")
+ wait(lambda: clamp.verify_instance_status("RUNNING"), sleep_seconds=5, timeout_seconds=60,
+ waiting_for="Clamp instance switches to RUNNING")
+
+ return True
--- /dev/null
+#!/usr/bin/env python3
+###
+# ============LICENSE_START=======================================================
+# ORAN SMO PACKAGE - PYTHONSDK TESTS
+# ================================================================================
+# Copyright (C) 2021-2022 AT&T Intellectual Property. All rights
+# reserved.
+# ================================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END============================================
+# ===================================================================
+#
+###
+
+"""Dmaap utils module."""
+import logging
+import logging.config
+from onapsdk.configuration import settings
+from waiting import wait
+from oransdk.dmaap.dmaap import OranDmaap
+
+logging.config.dictConfig(settings.LOG_CONFIG)
+logger = logging.getLogger("DMaap utils")
+dmaap = OranDmaap()
+
+class DmaapUtils():
+ """Can be used to have dmaap utils methods."""
+
+ @classmethod
+ def clean_dmaap(cls, dmaap_group, dmaap_user):
+ """Clean DMAAP useful topics."""
+ dmaap.create_topic(settings.DMAAP_TOPIC_FAULT_JSON)
+ dmaap.create_topic(settings.DMAAP_TOPIC_PNFREG_JSON)
+        # Purge the FAULT and PNFREG topics by polling until they return empty
+ wait(lambda: (dmaap.get_message_from_topic(settings.DMAAP_TOPIC_FAULT, 5000, dmaap_group, dmaap_user).json() == []), sleep_seconds=10, timeout_seconds=60, waiting_for="DMaap topic SEC_FAULT_OUTPUT to be empty")
+ wait(lambda: (dmaap.get_message_from_topic(settings.DMAAP_TOPIC_PNFREG, 5000, dmaap_group, dmaap_user).json() == []), sleep_seconds=10, timeout_seconds=60, waiting_for="DMaap topic unauthenticated.VES_PNFREG_OUTPUT to be empty")
cmd = f"helm install --debug oran-simulator local/ru-du-simulators --namespace network -f {self.resources_path}/network-simulators-topology/network-simulators-override.yaml -f {self.resources_path}/network-simulators-topology/network-simulators-topology-override.yaml"
check_output(cmd, shell=True).decode('utf-8')
+ def start_and_wait_network_simulators(self):
+ """Start and wait for all simulators defined in resources_path."""
+ logger.info("Start the network simulators")
+ self.start_network_simulators()
+ NetworkSimulators.wait_for_network_simulators_to_be_running()
+
@staticmethod
def get_all_simulators():
"""Retrieve all simulators defined in k8s services."""
@staticmethod
def wait_for_network_simulators_to_be_running():
"""Check and wait for the network sims to be running."""
- wait(lambda: NetworkSimulators.is_network_simulators_up(), sleep_seconds=10, timeout_seconds=60, waiting_for="Network simulators to be ready")
+ wait(lambda: NetworkSimulators.is_network_simulators_up(), sleep_seconds=settings.NETWORK_SIMULATOR_CHECK_RETRY, timeout_seconds=settings.NETWORK_SIMULATOR_CHECK_TIMEOUT, waiting_for="Network simulators to be ready")
from oransdk.policy.policy import OranPolicy, PolicyType
from oransdk.sdnc.sdnc import OranSdnc
from oransdk.utils.jinja import jinja_env
+from smo.dmaap import DmaapUtils
from smo.network_simulators import NetworkSimulators
# Set working dir as python script location
logging.config.dictConfig(settings.LOG_CONFIG)
logger = logging.getLogger("test APEX policy")
dmaap = OranDmaap()
+dmaap_utils = DmaapUtils()
policy = OranPolicy()
network_simulators = NetworkSimulators("./resources")
def setup_simulators():
"""Setup the simulators before the executing the tests."""
logger.info("Test class setup for Apex tests")
-
- dmaap.create_topic(settings.DMAAP_TOPIC_FAULT_JSON)
- dmaap.create_topic(settings.DMAAP_TOPIC_PNFREG_JSON)
- # Purge the FAULT TOPIC
- wait(lambda: (dmaap.get_message_from_topic(settings.DMAAP_TOPIC_FAULT, 5000, settings.DMAAP_GROUP, settings.DMAAP_USER).json() == []), sleep_seconds=10, timeout_seconds=60, waiting_for="DMaap topic SEC_FAULT_OUTPUT to be empty")
- wait(lambda: (dmaap.get_message_from_topic(settings.DMAAP_TOPIC_PNFREG, 5000, settings.DMAAP_GROUP, settings.DMAAP_USER).json() == []), sleep_seconds=10, timeout_seconds=60, waiting_for="DMaap topic unauthenticated.VES_PNFREG_OUTPUT to be empty")
-
- network_simulators.start_network_simulators()
- network_simulators.wait_for_network_simulators_to_be_running()
+ dmaap_utils.clean_dmaap(settings.DMAAP_GROUP, settings.DMAAP_USER)
+ network_simulators.start_and_wait_network_simulators()
# Wait enough time to have at least the SDNR notifications sent
+
logger.info("Waiting 10s that SDNR sends all registration events to VES...")
time.sleep(10)
logger.info("Test Session setup completed successfully")
network_simulators.stop_network_simulators()
policy.undeploy_policy(policy_id, policy_version, settings.POLICY_BASICAUTH)
policy.delete_policy(policy_type, policy_id, policy_version, settings.POLICY_BASICAUTH)
+ time.sleep(10)
logger.info("Test Session cleanup done")
def create_policy():
import time
import pytest
from onapsdk.configuration import settings
-from waiting import wait
from smo.network_simulators import NetworkSimulators
+from smo.dmaap import DmaapUtils
from oransdk.dmaap.dmaap import OranDmaap
from oransdk.sdnc.sdnc import OranSdnc
network_simulators = NetworkSimulators("./resources")
dmaap = OranDmaap()
-test_session_timestamp = datetime.datetime.now()
+dmaap_utils = DmaapUtils()
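+# Timezone-aware UTC timestamp, so it can be compared with the timezone-aware fault timestamps parsed below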
+test_session_timestamp = datetime.datetime.now(datetime.timezone.utc)
@pytest.fixture(scope="module", autouse=True)
# Do a first get to register the o1test/o1test user in DMAAP
# all registration messages will then be stored for the registration tests.
# If it exists already it clears all cached events.
- dmaap.create_topic(settings.DMAAP_TOPIC_PNFREG_JSON)
- dmaap.create_topic(settings.DMAAP_TOPIC_FAULT_JSON)
- wait(lambda: (dmaap.get_message_from_topic(settings.DMAAP_TOPIC_PNFREG, 5000, settings.DMAAP_GROUP, settings.DMAAP_USER).json() == []), sleep_seconds=10, timeout_seconds=60, waiting_for="DMaap topic unauthenticated.VES_PNFREG_OUTPUT to be empty")
- wait(lambda: (dmaap.get_message_from_topic(settings.DMAAP_TOPIC_FAULT, 5000, settings.DMAAP_GROUP, settings.DMAAP_USER).json() == []), sleep_seconds=10, timeout_seconds=60, waiting_for="DMaap topic unauthenticated.SEC_FAULT_OUTPUT to be empty")
- network_simulators.start_network_simulators()
- network_simulators.wait_for_network_simulators_to_be_running()
+
+ dmaap_utils.clean_dmaap(settings.DMAAP_GROUP, settings.DMAAP_USER)
+
+ network_simulators.start_and_wait_network_simulators()
# ADD DU RESTART just in case
# Wait enough time to have at least the SDNR notifications sent
logger.info("Waiting 20s that SDNR sends all registration events to VES...")
# time.sleep(20)
# Preparing the DMaap to cache all the events for the fault topics.
# If it exists already it clears all cached events.
- logger.info("Waiting 120s to have registration and faults events in DMaap")
- time.sleep(120)
+ logger.info("Waiting 300s to have registration and faults events in DMaap")
+ time.sleep(300)
logger.info("Test Session setup completed successfully")
### Cleanup code
yield
network_simulators.stop_network_simulators()
+ time.sleep(10)
logger.info("Test Session cleanup done")
def create_registration_structure(events):
"""Extract only the faults returned by SDNC that have been raised during this test."""
valid_faults = []
for fault in faults['data-provider:output']['data']:
- converted_fault_timestamp = datetime.datetime.strptime(fault['timestamp'], "%Y-%m-%dT%H:%M:%S.%fZ")
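+        # SDNC fault timestamps may or may not include fractional seconds; try both formats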
+ try:
+ converted_fault_timestamp = datetime.datetime.strptime(fault['timestamp'], "%Y-%m-%dT%H:%M:%S.%f%z")
+ except ValueError:
+ converted_fault_timestamp = datetime.datetime.strptime(fault['timestamp'], "%Y-%m-%dT%H:%M:%S%z")
logger.info("Comparing fault timestamp %s (%s) to session test timestamp %s", converted_fault_timestamp, fault['timestamp'], test_session_timestamp)
if converted_fault_timestamp > test_session_timestamp:
valid_faults.append(fault)
[tox]
-#envlist = pylint,pydocstyle,unit-tests,oran-tests
+envlist = pylint,pydocstyle,unit-tests,oran-tests
skipsdist=True
[testenv]
--- /dev/null
+#!/usr/bin/env python3
+# SPDX-License-Identifier: Apache-2.0
+"""Test Clamp module."""
+from unittest import mock
+from oransdk.policy.clamp import ClampToscaTemplate
+
+
+HEADER = {"Accept": "application/json", "Content-Type": "application/json"}
+BASIC_AUTH = {'username': 'dcae@dcae.onap.org', 'password': 'demo123456!'}
+BASE_URL = "http://localhost:8084"
+CLAMP = ClampToscaTemplate(BASIC_AUTH)
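+# Each test mocks ClampToscaTemplate.send_message and verifies the HTTP verb, action description and URL used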
+
+
+def test_initialization():
+ """Class initialization test."""
+ clamp = ClampToscaTemplate(BASIC_AUTH)
+ assert isinstance(clamp, ClampToscaTemplate)
+
+
+@mock.patch.object(ClampToscaTemplate, 'send_message')
+def test_get_template_instance(mock_send_message):
+ """Test Clamp's class method."""
+ ClampToscaTemplate.get_template_instance(CLAMP)
+ url = f"{CLAMP.base_url()}/acm/getToscaInstantiation"
+ mock_send_message.assert_called_with('GET',
+ 'Get tosca template instance',
+ url,
+ basic_auth=BASIC_AUTH)
+
+
+@mock.patch.object(ClampToscaTemplate, 'send_message')
+def test_upload_commission(mock_send_message):
+ """Test Clamp's class method."""
+ tosca_template = {}
+ ClampToscaTemplate.upload_commission(CLAMP, tosca_template)
+ url = f"{CLAMP.base_url()}/acm/commissionToscaTemplate"
+ mock_send_message.assert_called_with('POST',
+ 'Upload Tosca to commissioning',
+ url,
+ data=tosca_template,
+ headers=HEADER,
+ basic_auth=BASIC_AUTH)
+
+
+@mock.patch.object(ClampToscaTemplate, 'send_message')
+def test_create_instance(mock_send_message):
+ """Test Clamp's class method."""
+ tosca_instance_properties = {}
+ ClampToscaTemplate.create_instance(CLAMP, tosca_instance_properties)
+ url = f"{CLAMP.base_url()}/acm/postToscaInstanceProperties"
+ mock_send_message.assert_called_once_with('POST', 'Create Tosca instance', url, data=tosca_instance_properties,
+ headers=HEADER, basic_auth=BASIC_AUTH)
+
+
+@mock.patch.object(ClampToscaTemplate, 'send_message')
+def test_get_template_instance_status(mock_send_message):
+ """Test Clamp's class method."""
+ name = ""
+ version = ""
+ ClampToscaTemplate.get_template_instance_status(CLAMP, name, version)
+ url = f"{CLAMP.base_url()}/acm/getInstantiationOrderState?name={name}&version={version}"
+ mock_send_message.assert_called_with('GET',
+ 'Get tosca template instance',
+ url,
+ basic_auth=BASIC_AUTH)
+
+
+@mock.patch.object(ClampToscaTemplate, 'send_message')
+def test_delete_template_instance(mock_send_message):
+    """Test Clamp's class method."""
+ name = ""
+ version = ""
+ ClampToscaTemplate.delete_template_instance(CLAMP, name, version)
+ url = f"{CLAMP.base_url()}/acm/deleteToscaInstanceProperties?name={name}&version={version}"
+ mock_send_message.assert_called_with('DELETE', 'Delete the tosca instance', url, headers=HEADER,
+ basic_auth=BASIC_AUTH)
+
+
+@mock.patch.object(ClampToscaTemplate, 'send_message')
+def test_decommission_template(mock_send_message):
+    """Test Clamp's class method."""
+ name = ""
+ version = ""
+ ClampToscaTemplate.decommission_template(CLAMP, name, version)
+ url = f"{CLAMP.base_url()}/acm/decommissionToscaTemplate?name={name}&version={version}"
+ mock_send_message.assert_called_with('DELETE', 'Decommission the tosca template', url, headers=HEADER,
+ basic_auth=BASIC_AUTH)
# Static Defaults
image:
- repository: 'nexus3.o-ran-sc.org:10001/o-ran-sc'
+ repository: 'nexus3.o-ran-sc.org:10004/o-ran-sc'
name: nts-ng-o-ran-du
- tag: 1.4.3
+ tag: 1.4.5
pullPolicy: IfNotPresent
service:
dependencies:
- name: jenkins
- version: ~3.11.3
+ version: 3.12.2
repository: https://charts.jenkins.io
condition: testsSuite.jenkins
- name: oran-tests
# Static Defaults
image:
- repository: 'nexus3.o-ran-sc.org:10001/o-ran-sc'
+ repository: 'nexus3.o-ran-sc.org:10004/o-ran-sc'
name: nts-ng-o-ran-ru-fh
- tag: 1.4.3
+ tag: 1.4.5
pullPolicy: IfNotPresent
service: