This document describes the supported software and hardware configurations for the reference component and provides guidelines for installing and configuring the reference system.
-The audience of this document is assumed to have good knowledge in RAN network nd Linux system.
+The audience of this document is assumed to have good knowledge of RAN networks and Linux systems.
Hardware Requirements
---------------------

Below are the minimum requirements for installing the AIMLFW:
-1. OS: Ubuntu 18.04 server
-2. 8 cpu cores
-3. 16 GB RAM
-4. 60 GB harddisk
+#. OS: Ubuntu 18.04 server
+#. 8 CPU cores
+#. 16 GB RAM
+#. 60 GB hard disk
Software Installation and Deployment
------------------------------------
git clone "https://gerrit.o-ran-sc.org/r/aiml-fw/aimlfw-dep"
cd aimlfw-dep
-Update recipe file “RECIPE_EXAMPLE/example_recipe_latest_stable.yaml” which includes update of VM IP and datalake details
-Note: In case the Influx DB datalake is not available, this can be skipped at this stage and can be updated after installing datalake.
+Update the recipe file :file:`RECIPE_EXAMPLE/example_recipe_latest_stable.yaml` with the VM IP and datalake details.
+
+**Note**: If the Influx DB datalake is not yet available, this step can be skipped and the details can be updated after installing the datalake.
.. code:: bash

   http://localhost:32005/
-In case Influx DB datalake not available, it can be installed using the steps mentioned in section “Install influx db as datalake”. Once installed the access details of the datalake can be updated in RECIPE_EXAMPLE/example_recipe_latest_stable.yaml . Once updated, follow the below steps for reinstall of some components:
+If the Influx DB datalake is not available, it can be installed using the steps in section :ref:`install-influx-db-as-datalake`. Once installed, the access details of the datalake can be updated in :file:`RECIPE_EXAMPLE/example_recipe_latest_stable.yaml`. Once updated, follow the below steps to reinstall some components:
.. code:: bash

   bin/uninstall_traininghost.sh
+.. _install-influx-db-as-datalake:
+
Install Influx DB as datalake
-----------------------------
.. code:: bash

+   helm repo add bitnami https://charts.bitnami.com/bitnami
   helm install my-release bitnami/influxdb
   kubectl exec -it <pod name> -- bash
Use these tokens in the below configurations and in the recipe file.
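As a rough illustration of the record format the qoe data ends up in, the sketch below builds InfluxDB line-protocol strings using only the standard library. The measurement, tag, and field names here are assumptions; the actual :file:`insert.py` in the ``ric-app/qp`` repository writes via the ``influxdb_client`` package using the token obtained above.

```python
# Sketch only: builds InfluxDB line-protocol records for qoe-style data.
# Measurement/tag/field names below are assumptions, not the repo's schema.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render one InfluxDB line-protocol record:
    <measurement>,<tag set> <field set> <nanosecond timestamp>."""
    tag_part = ",".join(f"{k}={v}" for k, v in tags.items())
    field_part = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"

record = to_line_protocol(
    "qoe",                       # measurement name (assumed)
    {"cell": "c1"},              # tag set
    {"throughput": 42.0},        # field set
    1690000000000000000,         # nanosecond timestamp
)
print(record)  # qoe,cell=c1 throughput=42.0 1690000000000000000
```

In practice the token, org, and bucket are passed to the client library rather than assembled by hand; this sketch only shows how one row of data maps onto a line-protocol record.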
-Following are the steps to add qoe data to influx DB
+Following are the steps to add qoe data to Influx DB.
-Execute below from inside influx Db container to create a bucket:
+Execute below from inside Influx DB container to create a bucket:
.. code:: bash

   sudo pip3 install influxdb_client
-Use the insert.py in ric-app/qp repository to upload the qoe data in influx DB
+Use the :file:`insert.py` script in the ``ric-app/qp`` repository to upload the qoe data to Influx DB.
.. code:: bash

   git clone https://gerrit.o-ran-sc.org/r/ric-app/qp
   cd qp/qp
-update insert.py file with the following content:
+Update the :file:`insert.py` file with the following content:
-.. code:: bash
+.. code-block:: python

   import pandas as pd
   from influxdb_client import InfluxDBClient
   populatedb()
-Update <token> in insert.py file
+Update ``<token>`` in the :file:`insert.py` file.
Follow the below command to port forward to access Influx DB, then run the script:

.. code:: bash

   python3 insert.py
-To check inserted data in Influx DB , execute below command inside the influx DB container:
+To check the inserted data in Influx DB, execute the below command inside the Influx DB container:
.. code:: bash

   kubectl create namespace kserve-test
-Create qoe.yaml file with below contents
+Create a :file:`qoe.yaml` file with the below contents:
-.. code:: bash
+.. code-block:: yaml

   apiVersion: "serving.kserve.io/v1beta1"
   kind: "InferenceService"
   memory: 0.5Gi
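As a hedged sketch of the overall shape the truncated manifest above follows, the resource can also be assembled programmatically. The metadata name, the ``tensorflow`` predictor key, and the resource limits below are assumptions for illustration; ``<Model URL>`` stays a placeholder to be filled in before deploying.

```python
# Sketch: assemble a minimal KServe InferenceService manifest as a dict.
# The name, predictor framework key, and resource limits are assumptions;
# "<Model URL>" is a placeholder filled in before deployment.
inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "qoe-model", "namespace": "kserve-test"},
    "spec": {
        "predictor": {
            "tensorflow": {
                "storageUri": "<Model URL>",
                "resources": {"limits": {"memory": "0.5Gi"}},
            }
        }
    },
}
print(inference_service["kind"])  # InferenceService
```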
-To deploy model update the Model URL in the qoe.yaml file and execute below command to deploy model
+To deploy the model, update the Model URL in the :file:`qoe.yaml` file and execute the below command:
.. code:: bash

   kubectl get svc istio-ingressgateway -n istio-system
Obtain the nodeport corresponding to port 80.
-In the below example, the port is 31206
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-istio-ingressgateway LoadBalancer 10.105.222.242 <pending> 15021:31423/TCP,80:31206/TCP,443:32145/TCP,31400:32338/TCP,15443:31846/TCP 4h15m
+In the below example, the port is 31206.
+
+.. code::
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ istio-ingressgateway LoadBalancer 10.105.222.242 <pending> 15021:31423/TCP,80:31206/TCP,443:32145/TCP,31400:32338/TCP,15443:31846/TCP 4h15m
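The nodeport lookup above can also be done programmatically. The sketch below assumes the ``PORT(S)`` column format shown in the example output and extracts the nodeport mapped to a given container port.

```python
# Sketch: extract the nodeport mapped to a target port from the PORT(S)
# column of `kubectl get svc istio-ingressgateway -n istio-system`.

def nodeport_for(ports_column, target_port):
    """ports_column looks like '15021:31423/TCP,80:31206/TCP,...'."""
    for mapping in ports_column.split(","):
        port, _, rest = mapping.partition(":")
        if int(port) == target_port:
            # rest is e.g. '31206/TCP'; keep only the nodeport number
            return int(rest.split("/")[0])
    raise ValueError(f"no mapping found for port {target_port}")

ports = "15021:31423/TCP,80:31206/TCP,443:32145/TCP,31400:32338/TCP,15443:31846/TCP"
print(nodeport_for(ports, 80))  # 31206
```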
Create a :file:`predict.sh` file with the following contents:
.. code:: bash

   model_name=qoe-model
   curl -v -H "Host: $model_name.kserve-test.example.com" http://<IP of where Kserve is deployed>:<ingress port for Kserve>/v1/models/$model_name:predict -d @./input_qoe.json
-Update the IP of host where Kserve is deployed and ingress port of Kserve obtained using above method.
+Update the IP of the host where Kserve is deployed and the ingress port of Kserve obtained using the above method.
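The :file:`predict.sh` request boils down to a URL and a ``Host`` header derived from the model name. As a sketch, they can be assembled like this; the host IP and ingress port below are placeholder values, not a real deployment.

```python
# Sketch: assemble the KServe prediction URL and Host header that
# predict.sh sends. Host IP and ingress port are placeholder values.

def predict_request(model_name, host_ip, ingress_port):
    url = f"http://{host_ip}:{ingress_port}/v1/models/{model_name}:predict"
    headers = {"Host": f"{model_name}.kserve-test.example.com"}
    return url, headers

url, headers = predict_request("qoe-model", "192.0.2.10", 31206)
print(url)              # http://192.0.2.10:31206/v1/models/qoe-model:predict
print(headers["Host"])  # qoe-model.kserve-test.example.com
```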
-Create sample data for predictions in file input_qoe.json. Add the following content in input_qoe.json file.
+Create sample data for predictions in the :file:`input_qoe.json` file with the following content:
.. code:: bash