.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0
.. Copyright (C) 2019 highstreet technologies GmbH and others

sim/o1-interface Overview
*************************

Network Topology Simulator (NTS) | next generation
--------------------------------------------------

The Network Topology Simulator is a framework that allows simulating Network Functions (NFs) that expose a management interface via NETCONF/YANG.

The NETCONF/YANG management interface is simulated, and any YANG models can be loaded by the framework to be exposed. Random data is generated based on the specific models, such that each simulated NF presents different data on its management interface.

The NTS framework is based on several open-source projects:

- `cJSON <https://github.com/DaveGamble/cJSON>`_
- `libcurl <https://curl.haxx.se>`_
- `libyang <https://github.com/CESNET/libyang>`_
- `sysrepo <https://github.com/sysrepo/sysrepo>`_
- `libnetconf2 <https://github.com/CESNET/libnetconf2>`_
- `Netopeer2 <https://github.com/CESNET/Netopeer2>`_

The NTS Manager can be used to specify the simulation details and to manage the simulation environment at runtime.

Each simulated NF is represented as a docker container in which the NETCONF server runs. The creation and deletion of the docker containers associated with simulated NFs is handled by the NTS Manager. The NTS Manager itself also runs as a docker container and exposes a proprietary NETCONF/YANG interface to control the simulation.

The purpose of the NTS Manager is to ease the utilization of the NTS framework. It enables the user to interact with the simulation framework through a NETCONF/YANG interface, to modify the simulation parameters at runtime, and to see the current state of the NTS. The NETCONF/YANG interface is detailed below.

The YANG model of the NTS Manager is presented below:

::

    module: nts-manager
      +--rw simulation
         +--ro available-images
         |  +--ro network-function-image* []
         |     +--ro function-type?        identityref
         |     +--ro docker-image-name     string
         |     +--ro docker-version-tag    string
         |     +--ro docker-repository     string
         +--rw network-functions!
         |  +--rw network-function* [function-type]
         |     +--rw function-type                    identityref
         |     +--rw started-instances                uint16
         |     +--rw mounted-instances                uint16
         |     +--rw mount-point-addressing-method?   enumeration
         |     +--rw docker-instance-name             string
         |     +--rw docker-version-tag               string
         |     +--rw docker-repository                string
         |     +--rw fault-generation!
         |     |  +--rw fault-delay-list* [index]
         |     |  |  +--rw index           uint16
         |     |  |  +--rw delay-period?   uint16
         |     |  +--ro fault-count {faults-status}?
         |     |     +--ro normal?     uint32
         |     |     +--ro warning?    uint32
         |     |     +--ro minor?      uint32
         |     |     +--ro major?      uint32
         |     |     +--ro critical?   uint32
         |     +--rw netconf!
         |     |  +--rw faults-enabled?   boolean
         |     |  +--rw call-home?        boolean
         |     +--rw ves!
         |     |  +--rw faults-enabled?     boolean
         |     |  +--rw pnf-registration?   boolean
         |     |  +--rw heartbeat-period?   uint16
         |     +--ro instance* [name]
         |        +--ro name                             string
         |        +--ro mount-point-addressing-method?   enumeration
         |        +--ro is-mounted?                      boolean
         |        +--ro networking
         |           +--ro docker-ip?    inet:ip-address
         |           +--ro docker-ports* [port]
         |           |  +--ro port        inet:port-number
         |           |  +--ro protocol?   identityref
         |           +--ro host-ip?      inet:ip-address
         |           +--ro host-ports* [port]
         |              +--ro port        inet:port-number
         |              +--ro protocol?   identityref
         +--rw sdn-controller
         |  +--rw controller-protocol?                 enumeration
         |  +--rw controller-ip?                       inet:ip-address
         |  +--rw controller-port?                     inet:port-number
         |  +--rw controller-netconf-call-home-ip?     inet:ip-address
         |  +--rw controller-netconf-call-home-port?   inet:port-number
         |  +--rw controller-username?                 string
         |  +--rw controller-password?                 string
         +--rw ves-endpoint
         |  +--rw ves-endpoint-protocol?      enumeration
         |  +--rw ves-endpoint-ip?            inet:ip-address
         |  +--rw ves-endpoint-port?          inet:port-number
         |  +--rw ves-endpoint-auth-method?   authentication-method-type
         |  +--rw ves-endpoint-username?      string
         |  +--rw ves-endpoint-password?      string
         |  +--rw ves-endpoint-certificate?   string
         +--ro ports
         |  +--ro netconf-ssh-port?      inet:port-number
         |  +--ro netconf-tls-port?      inet:port-number
         |  +--ro transport-ftp-port?    inet:port-number
         |  +--ro transport-sftp-port?   inet:port-number
         +--ro ssh-connections?          uint8
         +--ro tls-connections?          uint8
         +--ro cpu-usage?                percent
         +--ro mem-usage?                uint32
         +--ro last-operation-status?    string

      notifications:
        +---n instance-changed
        |  +--ro change-status    string
        |  +--ro function-type    identityref
        |  +--ro name             string
        |  +--ro is-mounted?      boolean
        |  +--ro networking
        |     +--ro docker-ip?    inet:ip-address
        |     +--ro docker-ports* [port]
        |     |  +--ro port        inet:port-number
        |     |  +--ro protocol?   identityref
        |     +--ro host-ip?      inet:ip-address
        |     +--ro host-ports* [port]
        |        +--ro port        inet:port-number
        |        +--ro protocol?   identityref
        +---n operation-status-changed
           +--ro operation-status   string
           +--ro error-message?     string

Detailed information about the YANG attributes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Under **simulation** there are three configuration containers and a couple of statistics leafs:

- **network-functions** - represents the simulation data, described in detail below
- **sdn-controller** - this container groups the configuration related to the ODL based SDN controller that the simulated devices can connect to:

    - **controller-protocol** - SDN controller protocol (http/https)
    - **controller-ip** - the IP address of the ODL based SDN controller where the simulated devices can be mounted. Both IPv4 and IPv6 are supported
    - **controller-port** - the port of the ODL based SDN controller
    - **controller-netconf-call-home-ip** - the IP address of the ODL based SDN controller where the simulated devices can call home via the NETCONF Call Home feature
    - **controller-netconf-call-home-port** - the NETCONF Call Home port of the ODL based SDN controller
    - **controller-username** - the username to be used when connecting to the ODL based SDN controller
    - **controller-password** - the password to be used when connecting to the ODL based SDN controller

- **ves-endpoint** - this container groups the configuration related to the VES endpoint where the VES messages are targeted:

    - **ves-endpoint-protocol** - the protocol of the VES endpoint where VES messages are targeted (http/https)
    - **ves-endpoint-ip** - the IP address of the VES endpoint where VES messages are targeted
    - **ves-endpoint-port** - the port of the VES endpoint where VES messages are targeted
    - **ves-endpoint-auth-method** - the authentication method to be used when sending the VES message to the VES endpoint. Possible values are:

        + *no-auth* - no authentication
        + *cert-only* - certificate-only authentication; in this case the certificate to be used for the communication must be configured
        + *basic-auth* - classic username/password authentication; in this case both the username and the password need to be configured
        + *cert-basic-auth* - authentication that uses both username/password and a certificate; all three values need to be configured in this case

    - **ves-endpoint-username** - the username to be used when authenticating to the VES endpoint
    - **ves-endpoint-password** - the password to be used when authenticating to the VES endpoint
    - **ves-endpoint-certificate** - the certificate to be used when authenticating to the VES endpoint

- **ports** - the base ports used for the simulated devices. If any port types share the same base number, allocation is done in the order: netconf-ssh (all ports), netconf-tls (all ports), ftp (1 port), sftp (1 port):

    - **netconf-ssh-port** - base port for NETCONF SSH
    - **netconf-tls-port** - base port for NETCONF TLS
    - **transport-ftp-port** - base port for FTP
    - **transport-sftp-port** - base port for SFTP

- **ssh-connections** - status node indicating the number of SSH endpoints each network function instance exposes
- **tls-connections** - status node indicating the number of TLS endpoints each network function instance exposes
- **cpu-usage** - status node indicating the **total** CPU usage of the simulation
- **mem-usage** - status node indicating the **total** memory usage of the simulation
- **last-operation-status** - indicates the status of the last operation run by the manager
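
The base-port sharing rule described for **ports** can be pictured with a short sketch. This is illustrative only: the function name and the exact allocation logic are assumptions based on the description above, not the manager's actual code.

```python
def allocate_host_ports(base, ssh_count, tls_count):
    """Assign host ports for one simulated NF instance when all base
    ports share the same number, following the documented order:
    netconf-ssh (all ports), netconf-tls (all ports), ftp (1), sftp (1)."""
    nxt = base
    ports = {"netconf-ssh": list(range(nxt, nxt + ssh_count))}
    nxt += ssh_count
    ports["netconf-tls"] = list(range(nxt, nxt + tls_count))
    nxt += tls_count
    ports["ftp"] = nxt       # a single FTP port
    ports["sftp"] = nxt + 1  # a single SFTP port
    return ports

# With base port 50000, 1 SSH and 2 TLS connections per instance:
print(allocate_host_ports(50000, 1, 2))
```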

Under **network-functions** there is the **network-function** list. This list is automatically populated at start time by the NTS Manager with the available network functions. No changes to the list itself (adding or removing elements) are allowed; only changes to the properties of its elements have effect. The structure of an element of this list is described below:

- **function-type** - the function type
- **started-instances** - represents the number of simulated devices. The default value is 0, meaning that when the NTS is started there are no simulated devices. When this value is increased to **n**, the NTS Manager starts docker containers until it reaches **n** simulated devices. If the value is decreased to **k**, the NTS Manager removes docker containers in a LIFO manner until the number of simulated devices reaches **k**
- **mounted-instances** - represents the number of devices to be mounted to an ODL based SDN controller. The same philosophy as for the previous leaf applies: if this number is increased, the number of ODL mountpoints increases; otherwise, simulated devices are unmounted from ODL. The number of mounted devices cannot exceed the number of started devices. The details of the ODL controller where devices are mounted/unmounted are given by the **sdn-controller** container
- **mount-point-addressing-method** - addressing method of the mount point. Possible values are:

    + *docker-mapping* - [default value] future started simulated devices will be mapped on the Docker container
    + *host-mapping* - future started simulated devices will be mapped on the host's IP address, with a port based on *base-port*

- **docker-instance-name** - the prefix for future simulated devices (a dash and an increasing number are appended to this name)
- **docker-version-tag** - a specific version tag for the Docker container to be run; if empty, the latest version is used
- **docker-repository** - the prefix containing the Docker repository information; if the local repository is used, the value can be either blank or *local*
- **fault-generation** - container which groups the fault generation features, explained later
- **netconf** - container with settings for enabling or disabling NETCONF features:

    - **faults-enabled** - enable or disable faults over NETCONF
    - **call-home** - enable the NETCONF Call Home feature. If set to 'true', each simulated device will try to call home to the SDN controller when booting up

- **ves** - container with settings for enabling or disabling VES features:

    - **faults-enabled** - enable or disable faults over VES
    - **pnf-registration** - enable PNF registration on start
    - **heartbeat-period** - the number of seconds between VES heartbeat messages
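
The scale-up/scale-down behaviour of **started-instances** can be sketched as follows. This is a hypothetical helper for illustration; the container naming scheme and list-based state are assumptions, not the manager's implementation.

```python
def scale_instances(running, prefix, target):
    """Grow by appending <prefix>-<n> instances; shrink in LIFO order
    (the most recently started container is removed first)."""
    running = list(running)
    while len(running) < target:            # scale up to n
        running.append(f"{prefix}-{len(running) + 1}")
    while len(running) > target:            # scale down to k, LIFO
        running.pop()
    return running

# Increase started-instances from 0 to 3, then decrease to 1:
instances = scale_instances([], "nts-ng-o-ran-du", 3)
print(instances)
print(scale_instances(instances, "nts-ng-o-ran-du", 1))
```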

The **available-images** container has a list of the available (installed) simulations. The list corresponds to the **network-function** list inside **simulation** (it has the same name and specific leafs), and the description is the same. It is populated by the Manager at runtime, after checking which Docker images are pulled, including multiple versions (both in tag and repository). To be clear: each entry of this list is a possible simulation, and the list contains all the possible simulations. This allows the user to know the simulation capabilities of the Manager.

There are two defined **notifications**:

- **instance-changed** - sent by the manager whenever a change is done to any of the network functions. It contains data about the change:

    - **change-status** - a string with the structure "operation STATUS - info". *operation* can be *start*, *stop*, *mount*, *unmount*, *config* or *reconfig*; *STATUS* can be SUCCESS or FAILED; *info* may or may not be present, depending on what further information is available about the change
    - **function-type** - the function-type of the instance
    - **name** - the name of the instance that changed
    - **networking** - when starting and configuring an instance, this container holds all the necessary networking data, such as IPs and ports

- **operation-status-changed** - sent by the manager at the end of an operation:

    - **status** - returns the status of the operation: SUCCESS/FAILED. This status can also be read statically from the operational datastore under *nts-manager:simulation/last-operation-status*
    - **error-message** - an error message with details of the error (if any)
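
As an illustration, an **instance-changed** notification might look like the sketch below. The module namespace, the function-type identity and all the values are hypothetical placeholders, not taken from the actual model; only the NETCONF notification envelope namespace is standard.

```xml
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
  <eventTime>2021-03-17T13:45:00Z</eventTime>
  <!-- the namespace below is a placeholder, not the real module namespace -->
  <instance-changed xmlns="urn:example:nts-manager">
    <change-status>start SUCCESS</change-status>
    <function-type>NTS_FUNCTION_TYPE_EXAMPLE</function-type>
    <name>nts-ng-example-1</name>
    <networking>
      <docker-ip>172.17.0.3</docker-ip>
      <docker-ports>
        <port>830</port>
      </docker-ports>
    </networking>
  </instance-changed>
</notification>
```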

Manager datastore changes mode of operation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Changing any value in the **sdn-controller** or **ves-endpoint** containers is propagated to all running simulated network functions, and all new ones will use the values set here. In the same manner, any change to the **fault-generation**, **netconf** and **ves** settings of a network function element in the *network-function* list automatically propagates to all running network functions of the same *function-type*. However, changes to the *docker-\** leafs of the *network-function* are not propagated, as they are only used as settings for starting new network functions.

NTS network function
--------------------

The NTS network function represents the actual simulated device.

Its YANG model is presented below:

::

    module: nts-network-function
      +--ro info
      |  +--ro build-time?         yang:date-and-time
      |  +--ro version?            string
      |  +--ro started-features?   ntsc:feature-type
      +--rw network-function
      |  +--rw function-type?                   string
      |  +--rw mount-point-addressing-method?   enumeration
      |  +--rw fault-generation!
      |  |  +--rw fault-delay-list* [index]
      |  |  |  +--rw index           uint16
      |  |  |  +--rw delay-period?   uint16
      |  |  +--ro fault-count {faults-status}?
      |  |     +--ro normal?     uint32
      |  |     +--ro warning?    uint32
      |  |     +--ro minor?      uint32
      |  |     +--ro major?      uint32
      |  |     +--ro critical?   uint32
      |  +--rw netconf!
      |  |  +--rw faults-enabled?   boolean
      |  |  +--rw call-home?        boolean
      |  +--rw ves!
      |     +--rw faults-enabled?     boolean
      |     +--rw pnf-registration?   boolean
      |     +--rw heartbeat-period?   uint16
      +--rw sdn-controller
      |  +--rw controller-ip?                       inet:ip-address
      |  +--rw controller-port?                     inet:port-number
      |  +--rw controller-netconf-call-home-ip?     inet:ip-address
      |  +--rw controller-netconf-call-home-port?   inet:port-number
      |  +--rw controller-username?                 string
      |  +--rw controller-password?                 string
      +--rw ves-endpoint
         +--rw ves-endpoint-protocol?      enumeration
         +--rw ves-endpoint-ip?            inet:ip-address
         +--rw ves-endpoint-port?          inet:port-number
         +--rw ves-endpoint-auth-method?   authentication-method-type
         +--rw ves-endpoint-username?      string
         +--rw ves-endpoint-password?      string
         +--rw ves-endpoint-certificate?   string

      rpcs:
        +---x datastore-populate
        |  +--ro output
        |     +--ro status    enumeration
        +---x feature-control
        |  +---w input
        |  |  +---w start-features?   ntsc:feature-type
        |  |  +---w stop-features?    ntsc:feature-type
        |  +--ro output
        |     +--ro status    enumeration
        +---x invoke-notification
        |  +---w input
        |  |  +---w notification-format   enumeration
        |  |  +---w notification-object   string
        |  +--ro output
        |     +--ro status    enumeration
        +---x invoke-ves-pm-file-ready
        |  +---w input
        |  |  +---w file-location    string
        |  +--ro output
        |     +--ro status    enumeration
        +---x clear-fault-counters
           +--ro output
              +--ro status    enumeration

Detailed information about the YANG attributes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

All the details and mechanisms of the **network-function** container are explained in the **NTS Manager** section. Besides this container, there are also a couple of RPCs defined:

- **datastore-populate** - calling this RPC triggers the network function to populate all its datastores with data, based on the rules defined in *config.json*
- **feature-control** - calling this RPC starts or stops the selected features. The currently available features are listed below (features marked with * cannot be stopped once started):

    - **ves-file-ready** - enables VES file ready, and starts an FTP and an SFTP server on the network function
    - **ves-heartbeat** - enables the VES heartbeat feature
    - **ves-pnf-registration*** - enables VES PNF registration
    - **manual-notification-generation** - enables the manual notification generation feature
    - **netconf-call-home*** - enables the NETCONF Call Home feature
    - **web-cut-through** - enables web cut-through, adding the info to the ietf-system module

- **invoke-notification** - this RPC is used to force a simulated device to send a NETCONF notification, as defined by the user:

    - The **input** needed by the RPC:

        - **notification-format** - can be either *json* or *xml*
        - **notification-object** - a string containing the notification object to be sent by the simulated device, in JSON format. **Please note that the user has the responsibility to ensure that the JSON object is valid, according to the definition of the notification in the YANG module.** There is no way to see what was wrong when trying to send an incorrect notification; the RPC will only respond with an "ERROR" status in that case, without further information. An example of a JSON containing a notification object of type ***otdr-scan-result***, defined in the ***org-openroadm-device*** YANG module: ***{"org-openroadm-device:otdr-scan-result":{"status":"Successful","status-message":"Scan result was successful","result-file":"/home/result-file.txt"}}***. **Please note that the notification object also contains the name of the YANG model defining it, as a namespace, as seen in the example.**

    - The **output** returned by the RPC:

        - **status** - if the notification was sent successfully by the simulated device, the RPC returns a **SUCCESS** value; otherwise it returns an **ERROR** value.
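
For illustration, a NETCONF call of this RPC could look as below, reusing the JSON example from above. The module namespace is a placeholder, not the real one; only the input leaf names and the JSON payload come from this document.

```xml
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <!-- the namespace below is a placeholder, not the real module namespace -->
  <invoke-notification xmlns="urn:example:nts-network-function">
    <notification-format>json</notification-format>
    <notification-object>{"org-openroadm-device:otdr-scan-result":{"status":"Successful","status-message":"Scan result was successful","result-file":"/home/result-file.txt"}}</notification-object>
  </invoke-notification>
</rpc>
```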

- **invoke-ves-pm-file-ready** - as the name implies, it invokes a file ready VES request, with the specified *file-location*
- **clear-fault-counters** - clears all counters of the fault generation system; see **Fault generation** below

It is worth mentioning that the *NTS Manager* also populates the `function-type` leaf of its own *nts-network-function* module with the value `NTS_FUNCTION_TYPE_MANAGER`. This helps users who are connected to a NETCONF server get the data from *nts-network-function* and immediately see what they are connected to.

Network function operation
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Under usual operation, the network functions are managed by the manager, which performs the operations listed below. However, if users choose to, they can manually start up a network function and manage it via NETCONF (datastore and RPCs) or via the environment (see below).

1. Create and start the Docker container
2. Set the VES and SDN controller data via NETCONF
3. Invoke the **datastore-populate** RPC to populate the datastore
4. Invoke **feature-control**, enabling **ALL** the features

Network function standalone operation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The network function can run in standalone mode when the **NTS_NF_STANDALONE_START_FEATURES** environment variable is not blank. The value found there determines the standalone operation, and it can combine two kinds of values:

- *datastore-populate*, which populates the datastore based on the rules
- any bits of the *feature-type* YANG typedef (defined in *nts-common.yang*), which enable the respective features

Other than this, the network function operates just as it would when started by the manager, and it can be controlled through the **nts-network-function.yang** interface.

The default mount point addressing method is "docker-mapping". This behaviour can be changed by setting the **NTS_NF_MOUNT_POINT_ADDRESSING_METHOD** environment variable to "host-mapping". When "host-mapping" is chosen, all the host ports must be forwarded from Docker by the user when running the network function, and the **NTS_HOST_IP** and **NTS_HOST_xxxx_PORT** environment variables should be set so that the network function knows how to perform its tasks.
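
The standalone start decision described above can be summarized in a short sketch. This is illustrative; the whitespace-separated parsing is an assumption, and only the variable names and defaults come from this document.

```python
def standalone_config(env):
    """Decide standalone behaviour from the environment, mirroring the
    documented defaults (parsing details are assumptions)."""
    features = env.get("NTS_NF_STANDALONE_START_FEATURES", "")
    return {
        "standalone": features.strip() != "",      # blank means: not standalone
        "start_features": features.split(),        # e.g. datastore-populate + feature bits
        "addressing": env.get("NTS_NF_MOUNT_POINT_ADDRESSING_METHOD",
                              "docker-mapping"),   # documented default
    }

print(standalone_config({"NTS_NF_STANDALONE_START_FEATURES": "datastore-populate"}))
```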

Datastore random population
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The datastore is populated with random values on each of its leafs. However, there is some control over the population itself, configured in *config.json* and explained by the comments below. Please note that the nodes below must be main (top-level) nodes in *config.json*:

::

    "datastore-random-generation-rules" : {
        "excluded-modules": [               //modules to be excluded from populating
            "sysrepo-monitoring",
            "ietf-netconf-monitoring",
            "ietf-netconf-server"
        ],

        "debug-max-string-size" : 50,       //max size of strings; if not set, the default is 255
        "default-list-instances": 1,        //default number of instances a list or a leaf-list is populated with
        "custom-list-instances" : [         //custom number of list instances; each key is a schema name of a list or a leaf-list
            {"/ietf-interfaces:interfaces/interface": 2}    //2 instances of this; if 0, the list is excluded from populating
        ],
        "restrict-schema" : [               //restrict certain schema nodes to a set of values (no random values here)
            {"/ietf-interfaces:interfaces/interface/type" : ["iana-if-type:ethernetCsmacd", "other-value"]},
            {"/ietf-interfaces:interfaces/interface/name" : ["name1", "name2"]}
        ]
    },

    "datastore-populate-rules": {
        "random-generation-enabled": true,  //whether to generate random data (use false only if you want to load pre-generated data and nothing more)
        "pre-generated-operational-data": [ //paths to files containing NETCONF data, either JSON or XML
        ],
        "pre-generated-running-data": [     //paths to files containing NETCONF data, either JSON or XML
        ]
    }

NOTE: pre-generated data must be in either JSON or XML format; be careful how the file is named, because the simulator can only discover the format based on the file name (case-sensitive ".json" or ".xml").
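
The case-sensitive discovery rule amounts to something like the sketch below (illustrative only, not the simulator's code):

```python
def discover_format(filename):
    """Infer the data format from the file name only, case-sensitively."""
    if filename.endswith(".json"):
        return "json"
    if filename.endswith(".xml"):
        return "xml"
    return None  # e.g. "data.JSON" or "data.XML" would NOT be recognized

print(discover_format("ietf-interfaces.json"))  # json
print(discover_format("ietf-interfaces.XML"))   # None
```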

NOTE: when generating random data, the pre-generated data is loaded first, and any module affected by the pre-generated data is automatically excluded from random population. The order in which data is added to the datastore is:

1. pre-generated data
2. randomly generated data

NOTE: the order in which the datastores are populated:

1. the RUNNING datastore
2. the OPERATIONAL datastore

Fault generation
^^^^^^^^^^^^^^^^

Fault generation is controlled using a combination of JSON and YANG settings. From the JSON perspective, the settings look as below:

::

    "fault-rules" : {
        "yang-notif-template" : "<xml ... %%severity%% $$time$$ %%custom1%%>",
        "choosing-method" : "random | linear",
        "faults" : [
            {
                //ves mandatory fields
                "severity" : "",
                "date-time" : "$$time$$",
                "specific-problem" : "",

                //template custom fields
                "custom1" : ""
            }
        ]
    }

- The **fault-rules** node must be a main node in *config.json* for the respective network function in order for fault generation to be enabled
- **yang-notif-template** - template of the YANG notification model in the current network function. Can be "" to disable notifications; must always be present
- **choosing-method** - method of choosing the next fault. Can be either *linear* or *random*, and must always be present
- **faults** - list of faults to choose from by the *choosing-method*. Each fault can contain any number of custom fields, along with the mandatory VES fields presented below:

    - **severity** - should correspond to the VES-defined severities: NORMAL, WARNING, MINOR, MAJOR, CRITICAL (case sensitive)
    - **date-time**
    - **specific-problem**

On the **yang-notif-template** and on any of the fields, there are two options for creating "dynamic" content (also see the example above):

- **variables** - any field name put between %% is replaced with that field's value
- **functions** - function names are put between $$. The available functions are:

    - **time** - returns the current timestamp in YANG date-time format
    - **uint8_counter** - a unique 8-bit counter, starting from 0; each time this function is found, the counter is automatically increased; when going above the maximum value, it resets to 0
    - **uint16_counter** - a unique 16-bit counter, behaving in the same way
    - **uint32_counter** - a unique 32-bit counter, behaving in the same way

It is worth mentioning that replacement is performed in any field, including within the values of other fields. This means that nested fields and functions are possible.
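
For clarity, a possible re-implementation of these substitution rules is sketched below. This is hypothetical code: the fixed timestamp and the regular-expression parsing are assumptions; only the %%...%% / $$...$$ syntax and the counter wrap-around come from this document.

```python
import re

counters = {"uint8_counter": -1}  # shared state: the counters are unique

def call_function(name):
    """Resolve a $$function$$ name to its value."""
    if name == "time":
        return "2021-03-17T13:45:00Z"  # stand-in for the real current timestamp
    if name in counters:
        counters[name] = (counters[name] + 1) % 256  # 8-bit wrap-around
        return str(counters[name])
    raise ValueError(f"unknown function: {name}")

def render(template, fields):
    """Expand %%variables%% (possibly nested), then $$functions$$."""
    prev = None
    out = template
    while out != prev:  # repeat: variable values may reference other variables
        prev = out
        out = re.sub(r"%%(\w+)%%", lambda m: fields[m.group(1)], out)
    # expand functions, including those brought in through variables
    return re.sub(r"\$\$(\w+)\$\$", lambda m: call_function(m.group(1)), out)

print(render("<xml %%severity%% $$time$$>", {"severity": "MAJOR"}))
print(render("fault $$uint8_counter$$", {}))
print(render("fault $$uint8_counter$$", {}))  # counter increased by one
```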

From the YANG perspective, one can control whether faults are enabled or disabled independently via NETCONF and/or VES, through the respective containers described in the sections above. The YANG **fault-generation** container contains:

- **fault-delay-list** - a list whose elements consist of an *index* (unimportant, but it needs to be unique) and a *delay-period*, representing the number of seconds between the current fault and the next one. Please note that the fault itself is chosen based on the settings established in *config.json*
- **fault-count** - the status of the faults encountered by the network function; it is not present in the manager's schema

In order to clear the **fault-count** counters, the **network-function** module provides a **clear-fault-counters** RPC, which can be called via NETCONF.

NTS Application
---------------

The two main functionalities (*manager* and *network-function*) are implemented by the same binary application. Another functionality, added in v1.0.8, implements supervisor capabilities for governing the Docker container. Besides these functionalities, the application can also perform some utility functions, available when the application is run from the CLI (command line interface) with certain parameters.

The parameters are described below:

- --help - shows the help (also described here)
- --version - describes the ntsng version and build time

- main modes:

    - --container-init - automatically used by Docker when building the images, to install modules and enable features (described in the next chapter). **Do not run manually**
    - --supervisor - runs in supervisor mode (configuration is done via config.json)
    - --manager - runs in manager mode
    - --network-function - runs in network function mode
    - --generate - generates data based on the current settings and datastores, without committing the data (saves it to file)
    - --test-mode - test mode for automated tests. **Do not use**

- global settings changer:

    - --fixed-rand - used in testing; specifies a fixed seed for the randomness
    - --verbose - sets the verbosity level, ranging from 0 (errors only) to 2 (verbose); default is 1 (info)
    - --workspace - sets the current working workspace. The workspace **MUST** be writeable and should contain a *config/config.json* file, otherwise a blank JSON file will be created

- tools:

    - --ls - lists all modules in the datastore with their attributes
    - --schema - lists the schema of an xpath given as parameter

Environment variables
^^^^^^^^^^^^^^^^^^^^^

All the available environment variables are listed below. Please note that if a variable is not defined, its default behaviour applies:

- **NTS_MANUAL** - when defined, the SUPERVISOR will not start any tasks marked as "nomanual"
- **NTS_BUILD_VERSION** - defines the build version, set by the Dockerfile
- **NTS_BUILD_DATE** - defines the build date, set by the Dockerfile
- **NTS_NF_STANDALONE_START_FEATURES** - when the value is not blank, it allows the network function to run in standalone mode; see the "Network function standalone operation" sub-chapter
- **NTS_NF_MOUNT_POINT_ADDRESSING_METHOD** - either "docker-mapping" or "host-mapping"; available only when running a network function in STANDALONE mode

- **DOCKER_ENGINE_VERSION** - Docker engine version; defaults to 1.40 if not set
- **HOSTNAME** - container hostname
- **IPv6_ENABLED** - true/false, whether IPv6 is enabled (default false)
- **SSH_CONNECTIONS** - number of NETCONF SSH connections that should be enabled (default 1)
- **TLS_CONNECTIONS** - number of NETCONF TLS connections that should be enabled (default 0)

- **NTS_HOST_IP** - Docker host IP address
- **NTS_HOST_BASE_PORT** - see the "Starting the NTS Manager" sub-chapter
- **NTS_HOST_NETCONF_SSH_BASE_PORT** - see the "Starting the NTS Manager" sub-chapter
- **NTS_HOST_NETCONF_TLS_BASE_PORT** - see the "Starting the NTS Manager" sub-chapter
- **NTS_HOST_TRANSFER_FTP_BASE_PORT** - see the "Starting the NTS Manager" sub-chapter
- **NTS_HOST_TRANSFER_SFTP_BASE_PORT** - see the "Starting the NTS Manager" sub-chapter

- **SDN_CONTROLLER_PROTOCOL** - protocol used for communication with the SDN controller (http or https; defaults to https)
- **SDN_CONTROLLER_IP** - SDN controller IP address
- **SDN_CONTROLLER_PORT** - SDN controller port
- **SDN_CONTROLLER_CALLHOME_IP** - SDN controller IP address for NETCONF Call Home
- **SDN_CONTROLLER_CALLHOME_PORT** - SDN controller port for NETCONF Call Home
- **SDN_CONTROLLER_USERNAME** - SDN controller username
- **SDN_CONTROLLER_PASSWORD** - SDN controller password

- **VES_COMMON_HEADER_VERSION** - VES protocol version to report (defaults to 7.2)
- **VES_ENDPOINT_PROTOCOL** - protocol used for communication with the VES endpoint (http or https; defaults to https)
- **VES_ENDPOINT_IP** - VES endpoint IP address
- **VES_ENDPOINT_PORT** - VES endpoint port
- **VES_ENDPOINT_AUTH_METHOD** - VES endpoint auth method; see the YANG definition for possible values
- **VES_ENDPOINT_USERNAME** - VES endpoint username
- **VES_ENDPOINT_PASSWORD** - VES endpoint password
- **VES_ENDPOINT_CERTIFICATE** - VES endpoint certificate; not implemented at the moment of writing
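
For instance, the documented defaults amount to behaviour like this (illustrative sketch; only the variable names and default values come from the list above):

```python
def read_settings(env):
    """Apply the documented defaults when a variable is not defined."""
    return {
        "ssh_connections": int(env.get("SSH_CONNECTIONS", "1")),          # default 1
        "tls_connections": int(env.get("TLS_CONNECTIONS", "0")),          # default 0
        "ipv6_enabled": env.get("IPv6_ENABLED", "false") == "true",       # default false
        "ves_header_version": env.get("VES_COMMON_HEADER_VERSION", "7.2"),
        "docker_engine_version": env.get("DOCKER_ENGINE_VERSION", "1.40"),
    }

print(read_settings({}))  # everything falls back to the documented defaults
```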

Supervisor functionality and configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The NTS app tries to depend as little as possible on other tools. Until v1.0.8 one of those tools was *supervisord*; its functionality is now embedded in the NTS app, and the Docker image runs the NTS app with the --supervisor parameter to start the supervisor. When running as supervisor, the other main modes and their options are unavailable for that instance (the supervisor spawns other instances for the main functionalities). The supervisor functionality is configured via config.json:

::

    "supervisor-rules": {
        "netopeer": {
            "path": "/usr/local/bin/netopeer2-server",
            "args": ["-d", "-v2"],
            "stdout": "log/netopeer-stdout.log",
            "stderr": "log/netopeer-stderr.log"
        },

        "sshd": {
            "path": "/usr/sbin/sshd",
            "stdout": "log/sshd-stdout.log",
            "stderr": "log/sshd-stderr.log"
        },

        "ntsim-network-function": {
            "path": "/opt/dev/ntsim-ng/ntsim-ng",
            "args": ["-w/opt/dev/ntsim-ng", "-f"]
        }
    }

The example above is the default for a network function. The *supervisor-rules* object contains a list of tasks to run, each with its own settings. Below is a description of all the parameters:

- **path** - *mandatory field*; the full path to the binary
- **args** - a list of arguments to be passed to the binary; default is no arguments
- **autorestart** - true or false; whether to automatically restart the application on exit/kill; default is false
- **nomanual** - when true, the task **won't** be automatically run when the **NTS_MANUAL** environment variable is present; default is false. Using this is usually good for debugging
- **stdout** and **stderr** - paths to redirect stdout or stderr to; if **blank**, the output is discarded (**/dev/null**). If either field is not present in the configuration, the default is used (the actual stdout/stderr)
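
The autorestart semantics can be illustrated with a toy supervisor loop. This is hypothetical code, not the NTS implementation; a real supervisor would restart indefinitely rather than use a *max_restarts* bound.

```python
import subprocess
import sys

def run_task(path, args=None, autorestart=False, max_restarts=3):
    """Spawn a task; when autorestart is set, restart it each time it exits.
    Output is discarded here, like the documented blank stdout/stderr case."""
    argv = [path] + (args or [])
    restarts = 0
    while True:
        rc = subprocess.call(argv, stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL)
        if not autorestart or restarts >= max_restarts:
            return rc, restarts
        restarts += 1

# Run a trivial task with autorestart; it is restarted max_restarts times.
print(run_task(sys.executable, ["-c", "pass"], autorestart=True, max_restarts=2))
```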

Docker container initialization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The NTS app is responsible for initializing the Docker container at build time. What it actually does is described below:

1. Installs the modules located in the *deploy/yang/* folder, recursively

   - note that if a module requires startup (mandatory) data, this can be achieved by placing an **XML** or a **JSON** file with this data next to the YANG file. For example, if *ietf-interfaces.yang* required startup data, an *ietf-interfaces.xml* or *ietf-interfaces.json* file would need to be located in the same folder

2. Enables all YANG features of the modules, unless specifically excluded

If the initialization fails, the result is returned to the Docker builder, so the build fails and the user can see the output. Docker initialization can be customized from the *config.json* file, as described below. The example is self-explanatory; note that the *container-rules* node must be a main node of *config.json*:

::

    "container-rules": {
        "excluded-modules": [       //modules excluded from installing
        ],
        "excluded-features": [      //features excluded from enabling
        ]
    }

Building the images locally
---------------------------

The `nts_build.sh` script should be used for building the docker images needed by the NTS on the local machine. It creates docker images for the Manager and for each type of simulated network function.

The user can also directly use the already built docker images that are pushed to the nexus3 docker repository by the LF Jenkins job, e.g. *nexus3.o-ran-sc.org:10004/o-ran-sc/nts-ng-o-ran-du:1.2.0*.

Starting the NTS Manager
------------------------

The **nts-manager-ng** can be started using the docker-compose file in this repo. The file assumes that the docker images were previously built locally.

::

    docker-compose up -d ntsim-ng

Before starting, the user should set the environment variables defined in the docker-compose file according to their needs:

- **NTS_HOST_IP** - an IP address of the host, which should be used by systems outside the local machine to address the simulators
- **NTS_HOST_BASE_PORT** - the port from which the allocation for the simulated network functions starts, if not specified otherwise separately (see below); any port type not defined will automatically be assigned starting from this base port; **NOTE** that in order for a port to be eligible, it must be greater than or equal to **1000**:

    - **NTS_HOST_NETCONF_SSH_BASE_PORT**
    - **NTS_HOST_NETCONF_TLS_BASE_PORT**
    - **NTS_HOST_TRANSFER_FTP_BASE_PORT**
    - **NTS_HOST_TRANSFER_SFTP_BASE_PORT**

- **IPv6_ENABLED** - should be set to `true` if IPv6 is enabled in the docker daemon and the user wants to use IPv6 to address the simulated network functions

In each simulated network function, the **docker-repository** leaf must be set accordingly (to the value *o-ran-sc/*), because all the docker images that are built locally have this prefix.

Starting standalone NFs
-----------------------

One can start one instance of a simulated O-RU-FH and one instance of a simulated O-DU by running the `nts-start.sh` script. Pre-configured values can be set in the `.env` file.