TEIV: add docker-compose README 57/13357/1
authorJvD_Ericsson <jeff.van.dam@est.tech>
Tue, 17 Sep 2024 09:26:51 +0000 (10:26 +0100)
committerJvD_Ericsson <jeff.van.dam@est.tech>
Tue, 17 Sep 2024 09:26:51 +0000 (10:26 +0100)
Change-Id: I7ab75bd3679439619fc1a507e2b6fb6ec8f2f3fc
Signed-off-by: JvD_Ericsson <jeff.van.dam@est.tech>
docker-compose/README.md [new file with mode: 0644]
docker-compose/docker-compose.yml
pgsql-schema-generator/README.md

diff --git a/docker-compose/README.md b/docker-compose/README.md
new file mode 100644 (file)
index 0000000..8133e63
--- /dev/null
@@ -0,0 +1,54 @@
+# TEIV docker-compose
+
+All the files in this directory should provide everything you need to run TEIV with the default SQL schemas or with
+your own custom schemas generated from your own YANG files.
+
+The docker-compose.yml file in its default state is set up to run TEIV with the default SQL schemas generated from the
+[default YANG files](../teiv/src/main/resources/models). When ``docker-compose up`` is run, the following containers
+may start and run:
+
+| Container Name               | Image Name                                                                                                |
+|------------------------------|-----------------------------------------------------------------------------------------------------------|
+| kafka-producer               | confluentinc/cp-kafka:7.6.1 (optional - automatically populates TEIV)                                     |
+| pgsql-schema-generator       | o-ran-sc/smo-teiv-pgsql-schema-generator:latest (optional - if you want to generate your own sql schemas) |
+| kafka                        | confluentinc/cp-kafka:7.6.1                                                                               |
+| kafka2                       | confluentinc/cp-kafka:7.6.1                                                                               |
+| kafka3                       | confluentinc/cp-kafka:7.6.1                                                                               |
+| topology-ingestion-inventory | o-ran-sc/smo-teiv-ingestion:latest                                                                        |
+| topology-exposure-inventory  | o-ran-sc/smo-teiv-exposure:latest                                                                         |
+| zookeeper                    | confluentinc/cp-zookeeper:6.2.1                                                                           |
+| dbpostgresql                 | postgis/postgis:13-3.4-alpine                                                                             |
+
+- **topology-ingestion-inventory** - consumes Kafka messages carrying CloudEvents to populate TEIV
+- **topology-exposure-inventory** - allows the user to query TEIV and its data
+- **kafka**, **kafka2**, **kafka3** - the three Kafka brokers
+- **zookeeper** - coordinates and manages the Kafka brokers
+- **kafka-producer** - an optional service that will populate TEIV automatically with data present in
+  [./cloudEventProducer/events](./cloudEventProducer/events)
+- **dbpostgresql** - stores TEIV's data
+- **pgsql-schema-generator** - an optional service that lets the user supply their own YANG files to generate their
+  own SQL schemas
+
+## Running with the default SQL schemas
+
+To run with the default SQL schemas provided by TEIV, just run ``docker-compose up`` without any modifications.
+This brings up TEIV populated with data, so you can start testing it immediately.
+[Sample queries can be found here](https://lf-o-ran-sc.atlassian.net/wiki/spaces/SMO/pages/13567704/Sample+TEIV+Queries).
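+
+A minimal smoke test after startup might look like the following sketch. The exposure service port and API path used
+here are assumptions, not values taken from this repository; check docker-compose.yml and the sample queries page for
+the actual ones:
+
+```shell
+# Start the full stack in the background
+docker-compose up -d
+
+# Check that all containers came up
+docker-compose ps
+
+# Example query against topology-exposure-inventory; the port and path
+# are placeholders -- see docker-compose.yml and the sample queries page
+curl "http://localhost:8080/topology-inventory/<api-version>/domains"
+```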
+
+## Running with your own YANG models
+
+**NB:** pgsql-schema-generator might fail due to a permission error; running ``sudo chmod -R 777 sql_scripts/``
+fixes this.
+
+To run with your own YANG models, TEIV must first generate SQL schemas from those YANG files. The provided
+pgsql-schema-generator service does this. Follow these steps:
+
+1. Delete the current sql schemas from [./sql_scripts](./sql_scripts)
+2. Provide your own YANG files in [./generate-defaults](./generate-defaults)
+3. Uncomment pgsql-schema-generator in [./docker-compose.yml](./docker-compose.yml) **AND** the depends_on entry in
+   dbpostgresql
+4. Comment out kafka-producer **OR** provide your own event that will work with the generated sql schema
+
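+Steps 1 and 2 above can be scripted as follows (the YANG file path is a placeholder; steps 3 and 4 are manual edits
+to docker-compose.yml):
+
+```shell
+# Step 1: delete the current SQL schemas
+rm -f sql_scripts/*
+
+# Step 2: provide your own YANG files (placeholder path)
+cp /path/to/your-model.yang generate-defaults/
+```
+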
+You should now be able to run ``docker-compose up`` with your own YANG models.
+[Sample queries can be found here](https://lf-o-ran-sc.atlassian.net/wiki/spaces/SMO/pages/13567704/Sample+TEIV+Queries).
\ No newline at end of file
index 2d35294..3976d79 100644 (file)
@@ -22,6 +22,9 @@ services:
 
 #
 # uncomment if you have your own YANG files in docker-compose/generate-defaults
+# NB: also uncomment the following depends_on lines in the dbpostgresql service
+#    depends_on:
+#      - pgsql-schema-generator
 #
 #  pgsql-schema-generator:
 #    container_name: pgsql-schema-generator
@@ -68,8 +71,8 @@ services:
       POSTGRES_USER: ${DB_USERNAME:-topology_exposure_user}
       POSTGRES_PASSWORD: ${DB_PASSWORD:-dbpassword}
     restart: always
-    depends_on:
-      - pgsql-schema-generator
+#    depends_on:
+#      - pgsql-schema-generator
     deploy:
       resources:
         reservations:
index 9fb9323..f898035 100644 (file)
@@ -365,6 +365,7 @@ The SQL entries for consumer data include
 - **module_reference:** Module names related to the provided classifiers or decorators are retrieved from the model
   service, then extracted and stored in the module_reference table, where they are used during
   execution.
+
   | Column name | Type                  | Description                                                                                                                   |
   |-------------|-----------------------|-------------------------------------------------------------------------------------------------------------------------------|
   | name        | TEXT PRIMARY KEY      | The module name                                                                                                               |
@@ -376,6 +377,7 @@ The SQL entries for consumer data include
 
 - **decorators:** There will be the ability for Administrators to decorate topology entities and relationships.
   We will be storing the schemas for the decorators in this table.
+
   | Column name                                                                                                   | Type             | Description                                                                      |
   |---------------------------------------------------------------------------------------------------------------|------------------|----------------------------------------------------------------------------------|
   | name                                                                                                          | TEXT PRIMARY KEY | The key of the decorator.                                                        |
@@ -386,6 +388,7 @@ The SQL entries for consumer data include
 - **classifiers:** There will be the ability for client applications to apply user-defined keywords/tags (classifiers) to
   topology entities and relationships.
   We will be storing the schemas for the classifiers in this table.
+
   | Column name                                                                                                   | Type             | Description                                                                       |
   |---------------------------------------------------------------------------------------------------------------|------------------|-----------------------------------------------------------------------------------|
   | name                                                                                                          | TEXT PRIMARY KEY | The actual classifier.                                                            |
@@ -473,3 +476,8 @@ To run testsuite:
 
 - In your terminal navigate into pgsql-schema-generator directory and run 'mvn clean install'
 - In your terminal navigate into pgsql-schema-generator directory and run 'mvn -Dtest=`<Test Name>` test'
+
+## Local Use with docker-compose
+
+For local use with docker-compose, follow the how-to guide at
+[How to Run TEIV with Your Models](https://lf-o-ran-sc.atlassian.net/wiki/spaces/SMO/pages/62324740/Release+J+-+How+to+Run+TEIV+with+Your+Models).