| 2025-06-30 | 12.0.0 | Jackie Huang | L Release |
| | | | |
+--------------------+--------------------+--------------------+--------------------+
+| 2025-12-31 | 13.0.0 | Jackie Huang | M Release |
+| | | | |
++--------------------+--------------------+--------------------+--------------------+
+
+Version 13.0.0, 2025-12-31
+--------------------------
+#. 13th version (M release)
+#. INF Multi O-Cloud and Multi OS support:
+
+ * StarlingX 10.0
+
+ * Supported OS:
+
+       * Debian 11 (bullseye)
+
+ * OKD 4.20
+
+ * Supported OS:
+
+ * CentOS Stream CoreOS 9
+
+#. INF MultiArch support for StarlingX O-Cloud:
+
+   * Add support for the ARM64 architecture.
+   * See the developer-guide for how to build an image for ARM64.
+   * No prebuilt ARM64 image is provided.
+
+#. Support four deployment modes on the Debian-based image for StarlingX O-Cloud:
+
+ * AIO simplex mode
+   * AIO duplex mode (2 servers with High Availability)
+   * AIO duplex mode (2 servers with High Availability) with additional worker node
+ * Distributed Cloud
+
+#. Support four deployment modes on the CentOS-based image for StarlingX O-Cloud:
+
+ * AIO simplex mode
+   * AIO duplex mode (2 servers with High Availability)
+   * AIO duplex mode (2 servers with High Availability) with additional worker node
+ * Distributed Cloud
+
+#. Support three deployment modes on the Yocto-based image for StarlingX O-Cloud:
+
+ * AIO simplex mode
+   * AIO duplex mode (2 servers with High Availability)
+   * AIO duplex mode (2 servers with High Availability) with additional worker node
+
+#. Support two deployment modes on OKD O-Cloud:
+
+ * Single-Node OKD (SNO)
+ * Multi-Node OKD (3 control plane nodes minimum)
+
+#. Support automated VM and bare-metal deployment for OKD O-Cloud
+#. Support automated integration of Stolostron, multi-cluster-observability,
+ cluster-group-upgrades, and oran-o2ims operators into OKD O-Cloud
+#. Includes supporting playbooks/roles for automating O2 compliance testing
+ and sample workload deployment for OKD O-Cloud
Version 12.0.0, 2025-06-30
--------------------------
## Infrastructure
- KVM/libvirtd virtual machine
-- Bare metal (x86_64 architecture); see [Requirements for installing OpenShift on a single node](https://docs.okd.io/4.16/installing/installing_sno/install-sno-preparing-to-install-sno.html#install-sno-requirements-for-installing-on-a-single-node_install-sno-preparing) for hardware minimum resource requirements
+- Bare metal (x86_64 architecture); see [Requirements for installing OpenShift on a single node](https://docs.okd.io/latest/installing/installing_sno/install-sno-preparing-to-install-sno.html#install-sno-requirements-for-installing-on-a-single-node_install-sno-preparing) for minimum hardware resource requirements
# Prerequisites
The following prerequisites must be installed on the host where the playbook will be run (localhost, by default):
- libvirt development headers/libraries
The following are examples of how to install these packages on common distributions:
-```
-Fedora Linux
+**Fedora Linux**
+
```
dnf install https://dl.fedoraproject.org/pub/epel/epel{,-next}-release-latest-9.noarch.rpm
dnf group install "Development Tools"
dnf install python3-devel python3-libvirt python3-netaddr ansible pip pkgconfig libvirt-devel python-lxml nmstate wget make
```
-Ubuntu Linux
+**Ubuntu Linux**
+
```
apt-get install libpython3-dev python3-libvirt python3-netaddr ansible python3-pip wget make
+```
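
Once installed on either distribution, a quick sanity check can confirm the basics landed on PATH. This is an illustrative sketch, not part of the playbooks; the tool names are taken from the package lists above:

```shell
#!/bin/sh
# Report whether each expected prerequisite tool (from the lists above) is on PATH
for tool in ansible wget make python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```

Any `missing:` line means the corresponding package (or its PATH entry) still needs attention before running the playbook.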
## Ansible
#### Optional
Optionally, the following variables can be set to override default settings:
-- ocloud_platform_okd_release [default=4.19.0-okd-scos.0]: OKD release, as defined in [OKD releases](https://github.com/okd-project/okd/releases)
+- ocloud_platform_okd_release [default=4.20.0-okd-scos.12]: OKD release, as defined in [OKD releases](https://github.com/okd-project/okd/releases)
- ocloud_platform_okd_pull_secret [default=None]: pull secret for use with non-public image registries
- ocloud_platform_okd_api_vips [default=None]: list of virtual IPs to use for OKD API access (required if deploying a multi-node cluster)
- ocloud_platform_okd_ingress_vips [default=None]: list of virtual IPs to use for ingress (required if deploying a multi-node cluster)
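
As a sketch of overriding these settings (the playbook name and VIP values below are illustrative assumptions, not taken from this repo), extra vars can be passed on the `ansible-playbook` command line; a small guard can also catch a malformed release string early:

```shell
#!/bin/sh
# Illustrative value; format follows the OKD release naming, e.g. 4.20.0-okd-scos.12
release="4.20.0-okd-scos.12"

# Guard: warn early if the string does not look like <x.y.z>-okd-scos.<n>
case "$release" in
  [0-9]*.[0-9]*.[0-9]*-okd-scos.*) echo "release format ok: $release" ;;
  *) echo "unexpected release format: $release" >&2 ;;
esac

# Hypothetical invocation -- the playbook path and VIP addresses are assumptions:
# ansible-playbook deploy.yml \
#   -e ocloud_platform_okd_release="$release" \
#   -e '{"ocloud_platform_okd_api_vips": ["192.168.122.10"],
#        "ocloud_platform_okd_ingress_vips": ["192.168.122.11"]}'
```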