+### Separate admin and proxy nodes
+
+*Note: although this section is titled "Separate admin and proxy nodes", this
+split release technique is generally applicable to any deployment with
+different types of Kong nodes. Separating Admin API and proxy nodes is one of
+the more common use cases for splitting across multiple releases, but you can
+also use separate releases for proxy and Developer Portal nodes, for multiple
+groups of proxy nodes with separate listen configurations (for network
+segmentation), and so on.
+However, it does not apply to hybrid mode, as only the control plane release
+interacts with the database.*
+
+Users may wish to split their Kong deployment into multiple instances that only
+run some of Kong's services (i.e. you run `helm install` once for every
+instance type you wish to create).
+
+To disable Kong services on an instance, you should set `SVC.enabled`,
+`SVC.http.enabled`, `SVC.tls.enabled`, and `SVC.ingress.enabled` all to
+`false`, where `SVC` is `proxy`, `admin`, `manager`, `portal`, or `portalapi`.
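+
+For example, a proxy-only instance would disable the admin service entirely. A
+minimal values.yaml fragment (repeat the same pattern for `manager`, `portal`,
+and `portalapi`):
+
+```yaml
+admin:
+  enabled: false
+  http:
+    enabled: false
+  tls:
+    enabled: false
+  ingress:
+    enabled: false
+```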
+
+The standard chart upgrade automation process assumes that there is only a
+single Kong release in the Kong cluster, and runs both `migrations up` and
+`migrations finish` jobs. To handle clusters split across multiple releases,
+you should:
+1. Upgrade one of the releases with `helm upgrade RELEASENAME -f values.yaml
+ --set migrations.preUpgrade=true --set migrations.postUpgrade=false`.
+2. Upgrade all but one of the remaining releases with `helm upgrade RELEASENAME
+ -f values.yaml --set migrations.preUpgrade=false --set
+ migrations.postUpgrade=false`.
+3. Upgrade the final release with `helm upgrade RELEASENAME -f values.yaml
+ --set migrations.preUpgrade=false --set migrations.postUpgrade=true`.
+
+This ensures that all instances are using the new Kong package before running
+`kong migrations finish`.
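+
+For example, a sketch of this sequence for a cluster split across three
+hypothetical releases (`kong-admin`, `kong-proxy-a`, and `kong-proxy-b`),
+installed from the `kong/kong` chart:
+
+```bash
+# Step 1: run "migrations up" from one release.
+helm upgrade kong-admin kong/kong -f values.yaml \
+  --set migrations.preUpgrade=true --set migrations.postUpgrade=false
+
+# Step 2: upgrade all but one of the remaining releases, with no migration jobs.
+helm upgrade kong-proxy-a kong/kong -f values.yaml \
+  --set migrations.preUpgrade=false --set migrations.postUpgrade=false
+
+# Step 3: run "migrations finish" from the final release.
+helm upgrade kong-proxy-b kong/kong -f values.yaml \
+  --set migrations.preUpgrade=false --set migrations.postUpgrade=true
+```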
+
+Users should note that Helm supports supplying multiple values.yaml files,
+allowing you to separate shared configuration from instance-specific
+configuration. For example, you may have a shared values.yaml that contains
+environment variables and other common settings, and then several
+instance-specific values.yamls that contain service configuration only. You can
+then create releases with:
+
+```bash
+helm install proxy-only -f shared-values.yaml -f only-proxy.yaml kong/kong
+helm install admin-only -f shared-values.yaml -f only-admin.yaml kong/kong
+```
+
+### Standalone controller nodes
+
+The chart can deploy releases that contain the controller only, with no Kong
+container, by setting `deployment.kong.enabled: false` in values.yaml. There
+are several controller settings that must be populated manually in this
+scenario and several settings that are useful when using multiple controllers:
+
+* `ingressController.env.kong_admin_url` must be set to the Kong Admin API URL.
+ If the Admin API is exposed by a service in the cluster, this should look
+  something like `https://my-release-kong-admin.kong-namespace.svc:8444`.
+* `ingressController.env.publish_service` must be set to the Kong proxy
+ service, e.g. `namespace/my-release-kong-proxy`.
+* `ingressController.ingressClass` should be set to a different value for each
+ instance of the controller.
+* `ingressController.env.kong_admin_filter_tag` should be set to a different value
+ for each instance of the controller.
+* If using Kong Enterprise, `ingressController.env.kong_workspace` can
+ optionally create configuration in a workspace other than `default`.
+
+Standalone controllers require a database-backed Kong instance, as DB-less mode
+requires that a single controller generate a complete Kong configuration.
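+
+A sketch of a controller-only values.yaml combining these settings (the
+release name, namespace, ingress class, and filter tag below are hypothetical):
+
+```yaml
+deployment:
+  kong:
+    enabled: false # deploy the controller without a Kong container
+
+ingressController:
+  enabled: true
+  # Use a distinct class and filter tag for each controller instance.
+  ingressClass: kong-internal
+  env:
+    kong_admin_url: https://my-release-kong-admin.kong-namespace.svc:8444
+    publish_service: kong-namespace/my-release-kong-proxy
+    kong_admin_filter_tag: managed-by-kong-internal
+```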
+
+### Hybrid mode
+
+Kong supports [hybrid mode
+deployments](https://docs.konghq.com/2.0.x/hybrid-mode/) as of Kong 2.0.0 and
+[Kong Enterprise 2.1.0](https://docs.konghq.com/enterprise/2.1.x/deployment/hybrid-mode/).
+These deployments split Kong nodes into control plane (CP) nodes, which provide
+the admin API and interact with the database, and data plane (DP) nodes, which
+provide the proxy and receive configuration from control plane nodes.
+
+You can deploy hybrid mode Kong clusters by [creating separate releases for each node
+type](#separate-admin-and-proxy-nodes), i.e. use separate control and data
+plane values.yamls that are then installed separately. The [control
+plane](#control-plane-node-configuration) and [data
+plane](#data-plane-node-configuration) configuration sections below cover the
+values.yaml specifics for each.
+
+Cluster certificates are not generated automatically. You must [create a
+certificate and key pair](#certificates) for intra-cluster communication.
+
+When upgrading the Kong version, you must [upgrade the control plane release
+first and then upgrade the data plane release](https://docs.konghq.com/gateway/latest/plan-and-deploy/hybrid-mode/#version-compatibility).
+
+#### Certificates
+
+> This example shows how to use Kong Hybrid mode with `cluster_mtls: shared`.
+> For an example of `cluster_mtls: pki` see the [hybrid-cert-manager example](https://github.com/Kong/charts/blob/main/charts/kong/example-values/hybrid-cert-manager/)
+
+Hybrid mode uses TLS to secure the CP/DP node communication channel, and
+requires certificates for it. You can generate these either using `kong hybrid
+gen_cert` on a local Kong installation or using OpenSSL:
+
+```bash
+openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp384r1) \
+ -keyout /tmp/cluster.key -out /tmp/cluster.crt \
+ -days 1095 -subj "/CN=kong_clustering"
+```
+
+You must then place these certificates in a Secret:
+
+```bash
+kubectl create secret tls kong-cluster-cert --cert=/tmp/cluster.crt --key=/tmp/cluster.key
+```
+
+#### Control plane node configuration
+
+You must configure the control plane nodes to mount the certificate secret on
+the container filesystem and serve it from the cluster listen. In values.yaml:
+
+```yaml
+secretVolumes:
+- kong-cluster-cert
+```
+
+```yaml
+env:
+ role: control_plane
+ cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
+ cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
+```
+
+Furthermore, you must enable the cluster listen and Kubernetes Service, and
+should typically disable the proxy:
+
+```yaml
+cluster:
+ enabled: true
+ tls:
+ enabled: true
+ servicePort: 8005
+ containerPort: 8005
+
+proxy:
+ enabled: false
+```
+
+Enterprise users with Vitals enabled must also enable the cluster telemetry
+service:
+
+```yaml
+clustertelemetry:
+ enabled: true
+ tls:
+ enabled: true
+ servicePort: 8006
+ containerPort: 8006
+```
+
+If using the ingress controller, you must also specify the DP proxy service as
+its publish target to keep Ingress status information up to date:
+
+```yaml
+ingressController:
+ env:
+ publish_service: hybrid/example-release-data-kong-proxy
+```
+
+Replace `hybrid` with your DP nodes' namespace and `example-release-data` with
+the name of the DP release.
+
+#### Data plane node configuration
+
+Data plane configuration also requires the certificate and `role`
+configuration, and the database should always be set to `off`. You must also
+trust the cluster certificate and indicate what hostname/port Kong should use
+to find control plane nodes.
+
+Though not strictly required, you should disable the admin service (it will not
+work on DP nodes anyway, but should be disabled to avoid creating an invalid
+Service resource).
+
+```yaml
+secretVolumes:
+- kong-cluster-cert
+```
+
+```yaml
+admin:
+ enabled: false
+```
+
+```yaml
+env:
+ role: data_plane
+ database: "off"
+ cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
+ cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
+ lua_ssl_trusted_certificate: /etc/secrets/kong-cluster-cert/tls.crt
+ cluster_control_plane: control-plane-release-name-kong-cluster.hybrid.svc.cluster.local:8005
+ cluster_telemetry_endpoint: control-plane-release-name-kong-clustertelemetry.hybrid.svc.cluster.local:8006 # Enterprise-only
+```
+
+Note that the `cluster_control_plane` value will differ depending on your
+environment: `control-plane-release-name` will change to your CP release name,
+and `hybrid` to whatever namespace the CP release resides in. See [Kubernetes'
+documentation on Service
+DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)
+for more detail.
+
+If you use multiple Helm releases to manage different data plane configurations
+attached to the same control plane, setting the `deployment.hostname` field
+will help you keep track of which is which in the `/clustering/data-plane`
+endpoint.
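+
+For example (the hostname value is arbitrary):
+
+```yaml
+deployment:
+  hostname: data-plane-group-a
+```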
+
+### Cert Manager Integration
+
+By default, Kong will create self-signed certificates on start for its TLS
+listens if you do not provide your own. The chart can create
+[cert-manager](https://cert-manager.io/docs/) Certificates for its Services and
+configure them for you. To use this integration, install cert-manager, create
+an issuer, set `certificates.enabled: true` in values.yaml, and set your issuer
+name in `certificates.issuer` or `certificates.clusterIssuer` depending on the
+issuer type.
+
+If you do not have an issuer available, you can install the example [self-signed ClusterIssuer](https://cert-manager.io/docs/configuration/selfsigned/#bootstrapping-ca-issuers)
+and set `certificates.clusterIssuer: selfsigned-issuer` for testing. You
+should, however, migrate to an issuer using a CA your clients trust for actual
+usage.
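+
+A minimal values.yaml sketch using that example issuer (assuming you installed
+it under the name `selfsigned-issuer`):
+
+```yaml
+certificates:
+  enabled: true
+  clusterIssuer: selfsigned-issuer
+```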
+
+The `proxy`, `admin`, `portal`, and `cluster` subsections under `certificates`
+let you choose hostnames, override issuers, and set `subject` or `privateKey`
+on a per-certificate basis for the proxy, admin API and Manager, Portal and
+Portal API, and hybrid mode mTLS services, respectively.
+
+To use hybrid mode, the control and data plane releases must use the same
+issuer for their cluster certificates.
+
+### CRD management
+
+Earlier versions of this chart (<2.0) created CRDs associated with the ingress
+controller as part of the release. This raised two challenges:
+
+- Multiple releases of the chart would conflict with one another, as each would
+ attempt to create its own set of CRDs.
+- Because deleting a CRD also deletes any custom resources associated with it,
+ deleting a release of the chart could destroy user configuration without
+ providing any means to restore it.
+
+Helm 3 introduced a simplified CRD management method that is safer, but
+requires some manual work when a chart adds or modifies CRDs: CRDs are created
+on install if they are not already present, but are not modified during
+release upgrades or deletes. Our chart release upgrade instructions call out
+when manual action is necessary to update CRDs. This CRD handling strategy is
+recommended for most users.
+
+Some users may wish to manage their CRDs automatically. If you manage your CRDs
+this way, we _strongly_ recommend that you back up all associated custom
+resources in the event you need to recover from unintended CRD deletion.
+
+While Helm 3's CRD management system is recommended, there is no simple means
+of migrating away from release-managed CRDs if you previously installed your
+release with the old system (you would need to back up your existing custom
+resources, delete your release, reinstall, and restore your custom resources
+after). As such, the chart detects if you currently use release-managed CRDs
+and continues to use the old CRD templates when using chart version 2.0+. If
+you do (your resources will have a `meta.helm.sh/release-name` annotation), we
+_strongly_ recommend that you back up all associated custom resources in the
+event you need to recover from unintended CRD deletion.
+
+### InitContainers
+
+The chart is able to deploy initContainers along with Kong. This can be
+useful when custom initialization is required before Kong starts. The
+`deployment.initContainers` field in values.yaml takes an array of objects that
+get appended as-is to the existing `spec.template.spec.initContainers` array in
+the Kong deployment resource.
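+
+For example, a sketch that blocks startup until a database answers (the image
+and service name are hypothetical):
+
+```yaml
+deployment:
+  initContainers:
+  - name: wait-for-postgres
+    image: postgres:16
+    # Loop until the (hypothetical) database Service accepts connections.
+    command: ['sh', '-c', 'until pg_isready -h my-postgres.default.svc -p 5432; do sleep 2; done']
+```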
+
+### HostAliases
+
+The chart is able to inject host aliases into containers. This can be useful
+when you need to resolve additional domain names that can't be looked up
+directly from the DNS server. The `deployment.hostAliases` field in values.yaml
+takes an array of objects that is set as the `spec.template.spec.hostAliases`
+field in the Kong deployment resource.
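+
+For example (the IP and hostname are placeholders):
+
+```yaml
+deployment:
+  hostAliases:
+  - ip: "10.0.0.10"
+    hostnames:
+    - "auth.internal.example.com"
+```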
+
+### Sidecar Containers
+
+The chart can deploy additional containers along with the Kong and Ingress
+Controller containers, sometimes referred to as "sidecar containers". This can
+be useful to include network proxies or logging services along with Kong. The
+`deployment.sidecarContainers` field in values.yaml takes an array of objects
+that get appended as-is to the existing `spec.template.spec.containers` array
+in the Kong deployment resource.
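+
+For example, a sketch of a logging sidecar (the image and command are
+placeholders for a real log forwarding agent):
+
+```yaml
+deployment:
+  sidecarContainers:
+  - name: log-shipper
+    image: busybox:1.36
+    # Placeholder long-running process standing in for a real log agent.
+    command: ['sh', '-c', 'while true; do sleep 3600; done']
+```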
+
+### Migration Sidecar Containers
+
+In the same way sidecar containers are attached to the Kong and Ingress
+Controller containers, the chart can add sidecars to the containers that run
+the migrations. The
+`migrations.sidecarContainers` field in values.yaml takes an array of objects
+that get appended as-is to the existing `spec.template.spec.containers` array
+in the pre-upgrade-migrations, post-upgrade-migrations, and migrations resources.
+Keep in mind that these containers should be finite and should terminate along
+with the migration containers; otherwise the migration Job will never be marked
+as finished and the chart deployment will reach its timeout.
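+
+A sketch (note that the command exits on its own, so the Job can complete):
+
+```yaml
+migrations:
+  sidecarContainers:
+  - name: migration-helper
+    image: busybox:1.36
+    # Must terminate on its own; a long-running command here would keep the
+    # migration Job from ever completing.
+    command: ['sh', '-c', 'echo "performing hypothetical setup work" && exit 0']
+```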
+
+### User Defined Volumes
+
+The chart can deploy additional volumes along with Kong. This can be useful to
+include additional volumes that are required during the initialization phase
+(InitContainer). The `deployment.userDefinedVolumes` field in values.yaml
+takes an array of objects that get appended as-is to the existing
+`spec.template.spec.volumes` array in the Kong deployment resource.
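+
+For example (the ConfigMap name is hypothetical):
+
+```yaml
+deployment:
+  userDefinedVolumes:
+  - name: extra-config
+    configMap:
+      name: my-extra-config
+```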
+
+### User Defined Volume Mounts
+
+The chart can mount user-defined volumes. The
+`deployment.userDefinedVolumeMounts` and
+`ingressController.userDefinedVolumeMounts` fields in values.yaml take an array
+of objects that get appended as-is to the existing
+`spec.template.spec.containers[].volumeMounts` and
+`spec.template.spec.initContainers[].volumeMounts` arrays in the Kong
+deployment resource.
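+
+For example, mounting the hypothetical `extra-config` volume from the previous
+section:
+
+```yaml
+deployment:
+  userDefinedVolumeMounts:
+  - name: extra-config
+    mountPath: /opt/extra-config
+    readOnly: true
+```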
+
+### Removing cluster-scoped permissions
+
+You can limit the controller's access to allow it to only watch specific
+namespaces for namespaced resources. By default, the controller watches all
+namespaces. Limiting access requires several configuration changes (combined in
+the sketch after this list):
+
+- Set `ingressController.watchNamespaces` to a list of namespaces you want to
+ watch. The chart will automatically generate roles for each namespace and
+ assign them to the controller's service account.
+- Optionally set `ingressController.installCRDs=false` if your user role (the
+ role you use when running `helm install`, not the controller service
+ account's role) does not have access to get CRDs. By default, the chart
+ attempts to look up the controller CRDs for [a legacy behavior
+ check](#crd-management).
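+
+A sketch combining both settings (the namespace names are examples):
+
+```yaml
+ingressController:
+  watchNamespaces:
+  - default
+  - team-a
+  # Optional: skip the CRD lookup if your Helm user cannot get CRDs.
+  installCRDs: false
+```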
+
+### Using a DaemonSet
+
+Setting `deployment.daemonset: true` deploys Kong using a [DaemonSet
+controller](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
+instead of a Deployment controller. This runs a Kong Pod on every kubelet in
+the Kubernetes cluster. For such configurations it may be desirable to have
+Pods use the network of the host they run on instead of a dedicated network
+namespace. The benefit of this approach is that Kong can bind ports directly
+to the Kubernetes nodes' network interfaces, without the extra network
+translation imposed by NodePort Services. This can be achieved by setting
+`deployment.hostNetwork: true`.
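+
+A sketch of the combined settings:
+
+```yaml
+deployment:
+  daemonset: true
+  hostNetwork: true
+```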
+
+### Using dnsPolicy and dnsConfig
+
+The chart is able to inject custom DNS configuration into containers. This can be useful when you have an EKS cluster with [NodeLocal DNSCache](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/) configured and attach AWS security groups directly to pods using the [security groups for pods feature](https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html).
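+
+A sketch of typical values, assuming the chart's top-level `dnsPolicy` and
+`dnsConfig` settings (the nameserver shown is the conventional NodeLocal
+DNSCache link-local address; adjust all values for your cluster):
+
+```yaml
+dnsPolicy: "None"
+dnsConfig:
+  nameservers:
+  - 169.254.20.10
+  searches:
+  - default.svc.cluster.local
+  - svc.cluster.local
+  - cluster.local
+  options:
+  - name: ndots
+    value: "5"
+```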
+
+### Example configurations
+
+Several example values.yaml are available in the
+[example-values](https://github.com/Kong/charts/blob/main/charts/kong/example-values/)
+directory.
+