[Kong for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller)
is an open-source Ingress Controller for Kubernetes that offers
API management capabilities with a plugin architecture.

This chart bootstraps all the components needed to run Kong on a
[Kubernetes](http://kubernetes.io) cluster using the
[Helm](https://helm.sh) package manager.
```console
helm repo add kong https://charts.konghq.com
helm install kong/kong --generate-name
```
## Table of contents

- [Prerequisites](#prerequisites)
- [Install](#install)
- [Uninstall](#uninstall)
- [FAQs](#faqs)
- [Kong Enterprise](#kong-enterprise)
- [Deployment Options](#deployment-options)
  - [Database](#database)
    - [DB-less deployment](#db-less-deployment)
    - [Using the Postgres sub-chart](#using-the-postgres-sub-chart)
      - [Postgres sub-chart considerations for OpenShift](#postgres-sub-chart-considerations-for-openshift)
  - [Runtime package](#runtime-package)
  - [Configuration method](#configuration-method)
  - [Separate admin and proxy nodes](#separate-admin-and-proxy-nodes)
  - [Standalone controller nodes](#standalone-controller-nodes)
  - [Hybrid mode](#hybrid-mode)
    - [Certificates](#certificates)
    - [Control plane node configuration](#control-plane-node-configuration)
    - [Data plane node configuration](#data-plane-node-configuration)
  - [Cert Manager Integration](#cert-manager-integration)
  - [CRD management](#crd-management)
  - [InitContainers](#initcontainers)
  - [HostAliases](#hostaliases)
  - [Sidecar Containers](#sidecar-containers)
  - [Migration Sidecar Containers](#migration-sidecar-containers)
  - [User Defined Volumes](#user-defined-volumes)
  - [User Defined Volume Mounts](#user-defined-volume-mounts)
  - [Removing cluster-scoped permissions](#removing-cluster-scoped-permissions)
  - [Using a DaemonSet](#using-a-daemonset)
  - [Using dnsPolicy and dnsConfig](#using-dnspolicy-and-dnsconfig)
  - [Example configurations](#example-configurations)
- [Configuration](#configuration)
  - [Kong parameters](#kong-parameters)
    - [Kong Service Parameters](#kong-service-parameters)
    - [Admin Service mTLS](#admin-service-mtls)
    - [Stream listens](#stream-listens)
  - [Ingress Controller Parameters](#ingress-controller-parameters)
    - [The `env` section](#the-env-section)
    - [The `customEnv` section](#the-customenv-section)
  - [General Parameters](#general-parameters)
    - [The `env` section](#the-env-section-1)
    - [The `customEnv` section](#the-customenv-section-1)
    - [The `extraLabels` section](#the-extralabels-section)
  - [Kong Enterprise Parameters](#kong-enterprise-parameters)
    - [Overview](#overview)
    - [Prerequisites](#prerequisites-1)
      - [Kong Enterprise License](#kong-enterprise-license)
      - [Kong Enterprise Docker registry access](#kong-enterprise-docker-registry-access)
    - [Service location hints](#service-location-hints)
    - [RBAC](#rbac)
    - [Sessions](#sessions)
    - [Email/SMTP](#emailsmtp)
- [Prometheus Operator integration](#prometheus-operator-integration)
- [Argo CD considerations](#argo-cd-considerations)
- [Changelog](https://github.com/Kong/charts/blob/main/charts/kong/CHANGELOG.md)
- [Upgrading](https://github.com/Kong/charts/blob/main/charts/kong/UPGRADE.md)
- [Seeking help](#seeking-help)
## Prerequisites

- Kubernetes 1.17+. Older chart releases support older Kubernetes versions.
  Refer to the [supported version matrix](https://docs.konghq.com/kubernetes-ingress-controller/latest/references/version-compatibility/#kubernetes)
  and the [chart changelog](https://github.com/Kong/charts/blob/main/charts/kong/CHANGELOG.md)
  for information about the default chart controller versions and the
  Kubernetes versions supported by controller releases.
- PV provisioner support in the underlying infrastructure if persistence
  is needed for the Kong datastore.
## Install

```console
helm repo add kong https://charts.konghq.com
helm install kong/kong --generate-name
```
## Uninstall

To uninstall/delete a Helm release `my-release`:

```console
helm delete my-release
```

The command removes all the Kubernetes components associated with the
chart and deletes the release.

> **Tip**: List all releases using `helm list`
## FAQs

See the
[FAQs](https://github.com/Kong/charts/blob/main/charts/kong/FAQs.md)
## Kong Enterprise

If using Kong Enterprise, several additional steps are necessary before
installing the chart:

- Set `enterprise.enabled` to `true` in your `values.yaml` file.
- Update values.yaml to use a Kong Enterprise image.
- Satisfy the two prerequisites below for
  [Enterprise License](#kong-enterprise-license) and
  [Enterprise Docker Registry](#kong-enterprise-docker-registry-access).
- (Optional) [set a `password` environment variable](#rbac) to create the
  initial super-admin. Though not required, this is recommended for users that
  wish to use RBAC, as it cannot be done after initial setup.

Once you have these set, it is possible to install Kong Enterprise.

Please read through the
[Kong Enterprise considerations](#kong-enterprise-parameters)
to understand all enterprise-specific settings.
## Deployment Options

Kong is a highly configurable piece of software that can be deployed
in a number of different ways, depending on your use case.

All combinations of various runtimes, databases, and configuration methods
are supported by this Helm chart.
The recommended approach is to use the Ingress Controller based configuration
along with DB-less mode.

The following sections detail the various high-level architecture options
available:
### Database

Kong can run with or without a database (DB-less). By default, this chart
installs Kong without a database.

You can set the database via the `env.database` parameter. For more details,
please read the [env](#the-env-section) section.
#### DB-less deployment

When deploying Kong in DB-less mode (`env.database: "off"`)
and without the Ingress Controller (`ingressController.enabled: false`),
you have to provide a [declarative configuration](https://docs.konghq.com/gateway-oss/latest/db-less-and-declarative-config/#the-declarative-configuration-format) for Kong to run.
You can provide an existing ConfigMap
(`dblessConfig.configMap`) or Secret (`dblessConfig.secret`), or place the
whole configuration into the `values.yaml` (`dblessConfig.config`) parameter.
See the example configuration in the default values.yaml for more details.
You can use `--set-file dblessConfig.config=/path/to/declarative-config.yaml`
in Helm commands to substitute in a complete declarative config file.

Note that externally supplied ConfigMaps are not hashed or tracked in
deployment annotations. Subsequent ConfigMap updates require user-initiated
deployment rollouts to apply the new configuration. You should run
`kubectl rollout restart deploy` after updating externally supplied ConfigMap
content.
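For illustration, a minimal declarative configuration inlined via
`dblessConfig.config` might look like the following sketch. The service and
route names, the upstream URL, and the `_format_version` value are assumptions;
match the format version to your Kong release.

```yaml
dblessConfig:
  config: |
    _format_version: "3.0"
    services:
    - name: example-service        # hypothetical upstream service
      url: http://httpbin.example.com
      routes:
      - name: example-route
        paths:
        - /example
```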
#### Using the Postgres sub-chart

The chart can optionally spawn a Postgres instance using [Bitnami's Postgres
chart](https://github.com/bitnami/charts/blob/master/bitnami/postgresql/README.md)
as a sub-chart. Set `postgresql.enabled=true` to enable the sub-chart. Enabling
this will auto-populate Postgres connection settings in Kong's environment.

The Postgres sub-chart is best used to quickly provision temporary environments
without installing and configuring your database separately. For longer-lived
environments, we recommend you manage your database outside the Kong Helm
release.
##### Postgres sub-chart considerations for OpenShift

Due to the default `securityContexts` in the Postgres sub-chart, you will need
to add the following values to the `postgresql` section to get Postgres running
on OpenShift:

```yaml
containerSecurityContext:
```
### Runtime package

There are three different packages of Kong that are available:

- **Kong Gateway**\
  This is the [Open-Source](https://github.com/kong/kong) offering. It is a
  full-blown API Gateway and Ingress solution with a wide array of
  functionality. When Kong Gateway is combined with the Ingress based
  configuration method, you get Kong for Kubernetes. This is the default
  deployment for this Helm chart.
- **Kong Enterprise K8S**\
  This package builds on top of the Open-Source Gateway and bundles in all
  the Enterprise-only plugins as well.
  When Kong Enterprise K8S is combined with the Ingress based
  configuration method, you get Kong for Kubernetes Enterprise.
  This package also comes with 24x7 support from Kong Inc.
- **Kong Enterprise**\
  This is the full-blown Enterprise package which packs with itself all the
  Enterprise functionality like Manager, Portal, Vitals, etc.
  This package can't be run in DB-less mode.

The package to run can be changed via the `image.repository` and `image.tag`
parameters. If you would like to run the Enterprise package, please read
the [Kong Enterprise Parameters](#kong-enterprise-parameters) section.
### Configuration method

Kong can be configured via two methods:

- **Ingress and CRDs**\
  The configuration for Kong is done via `kubectl` and Kubernetes-native APIs.
  This is also known as Kong Ingress Controller or Kong for Kubernetes and is
  the default deployment pattern for this Helm chart. The configuration
  for Kong is managed via Ingress and a few
  [Custom Resources](https://docs.konghq.com/kubernetes-ingress-controller/latest/concepts/custom-resources).
  For more details, please read the
  [documentation](https://docs.konghq.com/kubernetes-ingress-controller/)
  on Kong Ingress Controller.
  To configure and fine-tune the controller, please read the
  [Ingress Controller Parameters](#ingress-controller-parameters) section.
- **Admin API**\
  This is the traditional method of running and configuring Kong.
  By default, the Admin API of Kong is not exposed as a Service. This
  can be controlled via the `admin.enabled` and `env.admin_listen` parameters.
### Separate admin and proxy nodes

*Note: although this section is titled "Separate admin and proxy nodes", this
split release technique is generally applicable to any deployment with
different types of Kong nodes. Separating Admin API and proxy nodes is one of
the more common use cases for splitting across multiple releases, but you can
also split releases for split proxy and Developer Portal nodes, multiple groups
of proxy nodes with separate listen configurations for network segmentation,
etc. However, it does not apply to hybrid mode, as only the control plane
release interacts with the database.*

Users may wish to split their Kong deployment into multiple instances that only
run some of Kong's services (i.e. you run `helm install` once for every
instance type you wish to create).

To disable Kong services on an instance, you should set `SVC.enabled`,
`SVC.http.enabled`, `SVC.tls.enabled`, and `SVC.ingress.enabled` all to
`false`, where `SVC` is `proxy`, `admin`, `manager`, `portal`, or `portalapi`.
The standard chart upgrade automation process assumes that there is only a
single Kong release in the Kong cluster, and runs both `migrations up` and
`migrations finish` jobs. To handle clusters split across multiple releases:

1. Upgrade one of the releases with `helm upgrade RELEASENAME -f values.yaml
   --set migrations.preUpgrade=true --set migrations.postUpgrade=false`.
2. Upgrade all but one of the remaining releases with `helm upgrade RELEASENAME
   -f values.yaml --set migrations.preUpgrade=false --set
   migrations.postUpgrade=false`.
3. Upgrade the final release with `helm upgrade RELEASENAME -f values.yaml
   --set migrations.preUpgrade=false --set migrations.postUpgrade=true`.

This ensures that all instances are using the new Kong package before running
`kong migrations finish`.
Users should note that Helm supports supplying multiple values.yaml files,
allowing you to separate shared configuration from instance-specific
configuration. For example, you may have a shared values.yaml that contains
environment variables and other common settings, and then several
instance-specific values.yamls that contain service configuration only. You can
then create releases with:

```console
helm install proxy-only -f shared-values.yaml -f only-proxy.yaml kong/kong
helm install admin-only -f shared-values.yaml -f only-admin.yaml kong/kong
```
### Standalone controller nodes

The chart can deploy releases that contain the controller only, with no Kong
container, by setting `deployment.kong.enabled: false` in values.yaml. There
are several controller settings that must be populated manually in this
scenario and several settings that are useful when using multiple controllers:

* `ingressController.env.kong_admin_url` must be set to the Kong Admin API URL.
  If the Admin API is exposed by a service in the cluster, this should look
  something like `https://my-release-kong-admin.kong-namespace.svc:8444`.
* `ingressController.env.publish_service` must be set to the Kong proxy
  service, e.g. `namespace/my-release-kong-proxy`.
* `ingressController.ingressClass` should be set to a different value for each
  instance of the controller.
* `ingressController.env.kong_admin_filter_tag` should be set to a different
  value for each instance of the controller.
* If using Kong Enterprise, `ingressController.env.kong_workspace` can
  optionally create configuration in a workspace other than `default`.

Standalone controllers require a database-backed Kong instance, as DB-less mode
requires that a single controller generate a complete Kong configuration.
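Putting the settings above together, a values.yaml for a second controller
instance might look like the following sketch. The release name, namespace,
ingress class, and filter tag are placeholders for your environment.

```yaml
deployment:
  kong:
    enabled: false
ingressController:
  enabled: true
  ingressClass: kong-internal           # placeholder class name
  env:
    kong_admin_url: https://my-release-kong-admin.kong-namespace.svc:8444
    publish_service: kong-namespace/my-release-kong-proxy
    kong_admin_filter_tag: internal     # placeholder tag
```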
### Hybrid mode

Kong supports [hybrid mode
deployments](https://docs.konghq.com/2.0.x/hybrid-mode/) as of Kong 2.0.0 and
[Kong Enterprise 2.1.0](https://docs.konghq.com/enterprise/2.1.x/deployment/hybrid-mode/).
These deployments split Kong nodes into control plane (CP) nodes, which provide
the admin API and interact with the database, and data plane (DP) nodes, which
provide the proxy and receive configuration from control plane nodes.

You can deploy hybrid mode Kong clusters by [creating separate releases for
each node type](#separate-admin-and-proxy-nodes), i.e. use separate control and
data plane values.yamls that are then installed separately. The [control
plane](#control-plane-node-configuration) and [data
plane](#data-plane-node-configuration) configuration sections below cover the
values.yaml specifics for each.

Cluster certificates are not generated automatically. You must [create a
certificate and key pair](#certificates) for intra-cluster communication.

When upgrading the Kong version, you must [upgrade the control plane release
first and then upgrade the data plane release](https://docs.konghq.com/gateway/latest/plan-and-deploy/hybrid-mode/#version-compatibility).
#### Certificates

> This example shows how to use Kong Hybrid mode with `cluster_mtls: shared`.
> For an example of `cluster_mtls: pki`, see the [hybrid-cert-manager example](https://github.com/Kong/charts/blob/main/charts/kong/example-values/hybrid-cert-manager/).

Hybrid mode uses TLS to secure the CP/DP node communication channel, and
requires certificates for it. You can generate these either using `kong hybrid
gen_cert` on a local Kong installation or using OpenSSL:

```console
openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp384r1) \
  -keyout /tmp/cluster.key -out /tmp/cluster.crt \
  -days 1095 -subj "/CN=kong_clustering"
```

You must then place these certificates in a Secret:

```console
kubectl create secret tls kong-cluster-cert --cert=/tmp/cluster.crt --key=/tmp/cluster.key
```
#### Control plane node configuration

You must configure the control plane nodes to mount the certificate secret on
the container filesystem and serve it from the cluster listen. In values.yaml:

```yaml
secretVolumes:
- kong-cluster-cert
env:
  role: control_plane
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
```
Furthermore, you must enable the cluster listen and Kubernetes Service, and
should typically disable the proxy:
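A sketch of that listen configuration follows; port 8005 matches the default
cluster listen used by the endpoint examples below, but verify the ports
against your chart version's values.yaml.

```yaml
cluster:
  enabled: true
  tls:
    enabled: true
    servicePort: 8005
    containerPort: 8005
proxy:
  enabled: false
```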
Enterprise users with Vitals enabled must also enable the cluster telemetry
service.
If using the ingress controller, you must also specify the DP proxy service as
its publish target to keep Ingress status information up to date:

```yaml
ingressController:
  env:
    publish_service: hybrid/example-release-data-kong-proxy
```

Replace `hybrid` with your DP nodes' namespace and `example-release-data` with
the name of the DP release.
#### Data plane node configuration

Data plane configuration also requires the certificate and `role`
configuration, and the database should always be set to `off`. You must also
trust the cluster certificate and indicate what hostname/port Kong should use
to find control plane nodes.

Though not strictly required, you should disable the admin service (it will not
work on DP nodes anyway, but should be disabled to avoid creating an invalid
Service resource). In values.yaml:

```yaml
env:
  role: data_plane
  database: "off"
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
  lua_ssl_trusted_certificate: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_control_plane: control-plane-release-name-kong-cluster.hybrid.svc.cluster.local:8005
  cluster_telemetry_endpoint: control-plane-release-name-kong-clustertelemetry.hybrid.svc.cluster.local:8006 # Enterprise-only
```
Note that the `cluster_control_plane` value will differ depending on your
environment. `control-plane-release-name` will change to your CP release name,
and `hybrid` will change to whatever namespace it resides in. See [Kubernetes'
documentation on Service
DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)
for more detail.

If you use multiple Helm releases to manage different data plane configurations
attached to the same control plane, setting the `deployment.hostname` field
will help you keep track of which is which in the `/clustering/data-planes`
endpoint.
### Cert Manager Integration

By default, Kong will create self-signed certificates on start for its TLS
listens if you do not provide your own. The chart can create
[cert-manager](https://cert-manager.io/docs/) Certificates for its Services and
configure them for you. To use this integration, install cert-manager, create
an issuer, set `certificates.enabled: true` in values.yaml, and set your issuer
name in `certificates.issuer` or `certificates.clusterIssuer`, depending on the
issuer's scope.

If you do not have an issuer available, you can install the example [self-signed ClusterIssuer](https://cert-manager.io/docs/configuration/selfsigned/#bootstrapping-ca-issuers)
and set `certificates.clusterIssuer: selfsigned-issuer` for testing. You
should, however, migrate to an issuer using a CA your clients trust for actual
use.

The `proxy`, `admin`, `portal`, and `cluster` subsections under `certificates`
let you choose hostnames, override issuers, and set `subject` or `privateKey`
on a per-certificate basis for the proxy, admin API and Manager, Portal and
Portal API, and hybrid mode mTLS services, respectively.

To use hybrid mode, the control and data plane releases must use the same
issuer for their cluster certificates.
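As a minimal sketch, enabling the integration against the example self-signed
ClusterIssuer mentioned above would look like this (swap in your own issuer for
real use):

```yaml
certificates:
  enabled: true
  clusterIssuer: selfsigned-issuer
```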
### CRD management

Earlier versions of this chart (<2.0) created CRDs associated with the ingress
controller as part of the release. This raised two challenges:

- Multiple releases of the chart would conflict with one another, as each would
  attempt to create its own set of CRDs.
- Because deleting a CRD also deletes any custom resources associated with it,
  deleting a release of the chart could destroy user configuration without
  providing any means to restore it.

Helm 3 introduced a simplified CRD management method that is safer, but
requires some manual work when a chart adds or modifies CRDs: CRDs are created
on install if they are not already present, but are not modified during
release upgrades or deletes. Our chart release upgrade instructions call out
when manual action is necessary to update CRDs. This CRD handling strategy is
recommended for most users.

Some users may wish to manage their CRDs automatically. If you manage your CRDs
this way, we _strongly_ recommend that you back up all associated custom
resources in the event you need to recover from unintended CRD deletion.

While Helm 3's CRD management system is recommended, there is no simple means
of migrating away from release-managed CRDs if you previously installed your
release with the old system (you would need to back up your existing custom
resources, delete your release, reinstall, and restore your custom resources
after). As such, the chart detects if you currently use release-managed CRDs
and continues to use the old CRD templates when using chart version 2.0+. If
you do (your resources will have a `meta.helm.sh/release-name` annotation), we
_strongly_ recommend that you back up all associated custom resources in the
event you need to recover from unintended CRD deletion.
### InitContainers

The chart is able to deploy initContainers along with Kong. This can be very
useful when there's a requirement for custom initialization. The
`deployment.initContainers` field in values.yaml takes an array of objects that
get appended as-is to the existing `spec.template.spec.initContainers` array in
the Kong deployment resource.
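For example, an init container that blocks startup until a database answers
might look like the following sketch; the container name, image, and database
host are placeholders.

```yaml
deployment:
  initContainers:
  - name: wait-for-postgres        # hypothetical init container
    image: busybox:1.36
    # Poll a placeholder database host until its port accepts connections.
    command: ["sh", "-c", "until nc -z my-postgres 5432; do sleep 2; done"]
```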
### HostAliases

The chart is able to inject host aliases into containers. This can be very
useful when you need to resolve additional domain names that can't be looked up
directly from the DNS server. The `deployment.hostAliases` field in values.yaml
takes an array of objects that is set as the `spec.template.spec.hostAliases`
field in the Kong deployment resource.
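A sketch of such an entry follows; the IP address and hostname are
placeholders.

```yaml
deployment:
  hostAliases:
  - ip: "10.0.0.10"                # placeholder address
    hostnames:
    - "internal-api.example.com"   # placeholder hostname
```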
### Sidecar Containers

The chart can deploy additional containers along with the Kong and Ingress
Controller containers, sometimes referred to as "sidecar containers". This can
be useful to include network proxies or logging services along with Kong. The
`deployment.sidecarContainers` field in values.yaml takes an array of objects
that get appended as-is to the existing `spec.template.spec.containers` array
in the Kong deployment resource.
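For example, a hypothetical logging sidecar could be added as follows; the
name, image, and log path are placeholders.

```yaml
deployment:
  sidecarContainers:
  - name: log-shipper              # hypothetical sidecar
    image: busybox:1.36
    # Tail a placeholder log path; real log shippers would forward it somewhere.
    command: ["sh", "-c", "tail -F /var/log/kong/access.log"]
```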
### Migration Sidecar Containers

In the same way sidecar containers are attached to the Kong and Ingress
Controller containers, the chart can add sidecars to the containers that run
migrations. The
`migrations.sidecarContainers` field in values.yaml takes an array of objects
that get appended as-is to the existing `spec.template.spec.containers` array
in the pre-upgrade-migrations, post-upgrade-migrations, and migrations
resources. Keep in mind the containers should be finite and should terminate
with the migration containers; otherwise the migration may never be marked
as finished and the deployment of the chart will reach its timeout.
### User Defined Volumes

The chart can deploy additional volumes along with Kong. This can be useful to
include additional volumes which are required during the initialization phase
(initContainers). The `deployment.userDefinedVolumes` field in values.yaml
takes an array of objects that get appended as-is to the existing
`spec.template.spec.volumes` array in the Kong deployment resource.
### User Defined Volume Mounts

The chart can mount user-defined volumes. The
`deployment.userDefinedVolumeMounts` and
`ingressController.userDefinedVolumeMounts` fields in values.yaml take an array
of objects that get appended as-is to the existing
`spec.template.spec.containers[].volumeMounts` and
`spec.template.spec.initContainers[].volumeMounts` arrays in the Kong
deployment resource.
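Combining the two fields, a hypothetical scratch volume shared with the Kong
container could be declared as follows; the volume name and mount path are
placeholders.

```yaml
deployment:
  userDefinedVolumes:
  - name: scratch                  # hypothetical volume
    emptyDir: {}
  userDefinedVolumeMounts:
  - name: scratch
    mountPath: /opt/scratch        # placeholder mount path
```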
### Removing cluster-scoped permissions

You can limit the controller's access to allow it to only watch specific
namespaces for namespaced resources. By default, the controller watches all
namespaces. Limiting access requires several changes to configuration:

- Set `ingressController.watchNamespaces` to a list of namespaces you want to
  watch. The chart will automatically generate roles for each namespace and
  assign them to the controller's service account.
- Optionally set `ingressController.installCRDs=false` if your user role (the
  role you use when running `helm install`, not the controller service
  account's role) does not have access to get CRDs. By default, the chart
  attempts to look up the controller CRDs for [a legacy behavior
  check](#crd-management).
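For example, restricting the controller to two namespaces would look like the
following sketch; the namespace names are placeholders.

```yaml
ingressController:
  watchNamespaces:
  - default                        # placeholder namespace
  - kong                           # placeholder namespace
```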
### Using a DaemonSet

Setting `deployment.daemonset: true` deploys Kong using a [DaemonSet
controller](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
instead of a Deployment controller. This runs a Kong Pod on every kubelet in
the Kubernetes cluster. For such a configuration, it may be desirable to
configure Pods to use the network of the host they run on instead of a
dedicated network namespace. The benefit of this approach is that Kong can bind
ports directly to the Kubernetes nodes' network interfaces, without the extra
network translation imposed by NodePort Services. This can be achieved by
setting `deployment.hostNetwork: true`.
### Using dnsPolicy and dnsConfig

The chart is able to inject custom DNS configuration into containers. This can
be useful when you have an EKS cluster with [NodeLocal DNSCache](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/)
configured and attach AWS security groups directly to pods using the
[security groups for pods feature](https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html).
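For illustration, assuming top-level `dnsPolicy` and `dnsConfig` keys as in
recent chart versions, a configuration pointing Pods at a NodeLocal DNSCache
address might look like this; the nameserver address and search domains are
placeholders for your cluster.

```yaml
dnsPolicy: "None"
dnsConfig:
  nameservers:
  - "169.254.20.10"                # placeholder NodeLocal DNSCache address
  searches:
  - default.svc.cluster.local
  - svc.cluster.local
  - cluster.local
  options:
  - name: ndots
    value: "5"
```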
### Example configurations

Several example values.yaml files are available in the
[example-values](https://github.com/Kong/charts/blob/main/charts/kong/example-values/)
directory.
## Configuration

### Kong parameters

| Parameter | Description | Default |
| ---------------------------------- | ------------------------------------------------------------------------------------- | ------------------- |
| image.repository | Kong image | `kong` |
| image.tag | Kong image version | `3.5` |
| image.effectiveSemver | Semantic version to use for version-dependent features (if `tag` is not a semver) | |
| image.pullPolicy | Image pull policy | `IfNotPresent` |
| image.pullSecrets | Image pull secrets | `null` |
| replicaCount | Kong instance count. It has no effect when `autoscaling.enabled` is set to true | `1` |
| plugins | Install custom plugins into Kong via ConfigMaps or Secrets | `{}` |
| env | Additional [Kong configurations](https://getkong.org/docs/latest/configuration/) | |
| customEnv | Custom environment variables without the `KONG_` prefix | |
| envFrom | Populate environment variables from ConfigMap or Secret keys | |
| migrations.preUpgrade | Run "kong migrations up" jobs | `true` |
| migrations.postUpgrade | Run "kong migrations finish" jobs | `true` |
| migrations.annotations | Annotations for migration job pods | `{"sidecar.istio.io/inject": "false"}` |
| migrations.jobAnnotations | Additional annotations for migration jobs | `{}` |
| migrations.backoffLimit | Override the system backoffLimit | `{}` |
| waitImage.enabled | Spawn init containers that wait for the database before starting Kong | `true` |
| waitImage.repository | Image used to wait for database to become ready. Uses the Kong image if none set | |
| waitImage.tag | Tag for image used to wait for database to become ready | |
| waitImage.pullPolicy | Wait image pull policy | `IfNotPresent` |
| postgresql.enabled | Spin up a new postgres instance for Kong | `false` |
| dblessConfig.configMap | Name of an existing ConfigMap containing the `kong.yml` file. This must have the key `kong.yml`.| `` |
| dblessConfig.config | Yaml configuration file for the dbless (declarative) configuration of Kong | see in `values.yaml` |
#### Kong Service Parameters

The various `SVC.*` parameters below are common to the various Kong services
(the admin API, proxy, Kong Manager, the Developer Portal, and the Developer
Portal API) and define their listener configuration, K8S Service properties,
and K8S Ingress properties. Defaults are listed only if consistent across the
individual services: see values.yaml for their individual default values.

`SVC` below can be substituted with each of:

* `proxy`
* `udpProxy`
* `admin`
* `manager`
* `portal`
* `portalapi`
* `cluster`
* `clustertelemetry`
* `status`

`status` is intended for internal use within the cluster. Unlike other
services it cannot be exposed externally, and cannot create a Kubernetes
service or ingress. It supports the settings under `SVC.http` and `SVC.tls`
only.

`cluster` is used on hybrid mode control plane nodes. It does not support the
`SVC.http.*` settings (cluster communications must be TLS-only) or the
`SVC.ingress.*` settings (cluster communication requires TLS client
authentication, which cannot pass through an ingress proxy). `clustertelemetry`
is similar, and used when Vitals is enabled on Kong Enterprise control plane
nodes.

`udpProxy` is used for UDP stream listens (Kubernetes does not yet support
mixed TCP/UDP LoadBalancer Services). It _does not_ support the `http`, `tls`,
or `ingress` sections, as it is used only for stream listens.
| Parameter | Description | Default |
|-----------------------------------|-------------------------------------------------------------------------------------------|--------------------------|
| SVC.enabled | Create Service resource for SVC (admin, proxy, manager, etc.) | |
| SVC.http.enabled | Enables http on the service | |
| SVC.http.servicePort | Service port to use for http | |
| SVC.http.containerPort | Container port to use for http | |
| SVC.http.nodePort | Node port to use for http | |
| SVC.http.hostPort | Host port to use for http | |
| SVC.http.parameters | Array of additional listen parameters | `[]` |
| SVC.http.appProtocol | `appProtocol` to be set in a Service's port. If left empty, no `appProtocol` will be set. | |
| SVC.tls.enabled | Enables TLS on the service | |
| SVC.tls.containerPort | Container port to use for TLS | |
| SVC.tls.servicePort | Service port to use for TLS | |
| SVC.tls.nodePort | Node port to use for TLS | |
| SVC.tls.hostPort | Host port to use for TLS | |
| SVC.tls.overrideServiceTargetPort | Override service port to use for TLS without touching Kong containerPort | |
| SVC.tls.parameters | Array of additional listen parameters | `["http2"]` |
| SVC.tls.appProtocol | `appProtocol` to be set in a Service's port. If left empty, no `appProtocol` will be set. | |
| SVC.type | k8s service type. Options: NodePort, ClusterIP, LoadBalancer | |
| SVC.clusterIP | k8s service clusterIP | |
| SVC.loadBalancerClass | loadBalancerClass to use for LoadBalancer provisioning | |
| SVC.loadBalancerSourceRanges | Limit service access to CIDRs if set and service type is `LoadBalancer` | `[]` |
| SVC.loadBalancerIP | Reuse an existing ingress static IP for the service | |
| SVC.externalIPs | IPs for which nodes in the cluster will also accept traffic for the service | `[]` |
| SVC.externalTrafficPolicy | k8s service's externalTrafficPolicy. Options: Cluster, Local | |
| SVC.ingress.enabled | Enable ingress resource creation (works with SVC.type=ClusterIP) | `false` |
| SVC.ingress.ingressClassName | Set the ingressClassName to associate this Ingress with an IngressClass | |
| SVC.ingress.hostname | Ingress hostname | `""` |
| SVC.ingress.path | Ingress path. | `/` |
| SVC.ingress.pathType | Ingress pathType. One of `ImplementationSpecific`, `Exact` or `Prefix` | `ImplementationSpecific` |
| SVC.ingress.hosts | Slice of hosts configurations, including `hostname`, `path` and `pathType` keys | `[]` |
| SVC.ingress.tls | Name of secret resource or slice of `secretName` and `hosts` keys | |
| SVC.ingress.annotations | Ingress annotations. See documentation for your ingress controller for details | `{}` |
| SVC.ingress.labels | Ingress labels. Additional custom labels to add to the ingress. | `{}` |
| SVC.annotations | Service annotations | `{}` |
| SVC.labels | Service labels | `{}` |

#### Admin Service mTLS

On top of the common parameters listed above, the `admin` service supports parameters for mTLS client verification.
If either `admin.tls.client.caBundle` or `admin.tls.client.secretName` is set, the admin service will be configured to
require mTLS client verification. If both are set, `admin.tls.client.caBundle` will take precedence.

| Parameter | Description | Default |
|-----------------------------|---------------------------------------------------------------------------------------------|---------|
| admin.tls.client.caBundle | CA certificate to use for TLS verification of the Admin API client (PEM-encoded). | `""` |
| admin.tls.client.secretName | CA certificate secret name - must contain a `tls.crt` key with the PEM-encoded certificate. | `""` |
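
For example, the CA bundle can be supplied inline in `values.yaml` (a minimal sketch; the certificate content is a placeholder):

```yaml
admin:
  tls:
    client:
      # Placeholder certificate; substitute the PEM-encoded CA that signs
      # your Admin API clients' certificates.
      caBundle: |
        -----BEGIN CERTIFICATE-----
        ...PEM-encoded CA certificate...
        -----END CERTIFICATE-----
```

Alternatively, store the certificate in a Secret and reference it via `admin.tls.client.secretName`.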

The proxy configuration additionally supports creating stream listens. These
are configured using an array of objects under `proxy.stream` and `udpProxy.stream`:

| Parameter | Description | Default |
| ---------------------------------- | ------------------------------------------------------------------------------------- | ------------------- |
| protocol | The listen protocol, either "TCP" or "UDP" | |
| containerPort | Container port to use for a stream listen | |
| servicePort | Service port to use for a stream listen | |
| nodePort | Node port to use for a stream listen | |
| hostPort | Host port to use for a stream listen | |
| parameters | Array of additional listen parameters | `[]` |
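
A TCP stream listen combining these parameters might look like the following (port numbers are illustrative):

```yaml
proxy:
  stream:
    # Expose a TCP stream listen on port 9000 (illustrative port choice)
    - protocol: TCP
      containerPort: 9000
      servicePort: 9000
      parameters: []
```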

### Ingress Controller Parameters

All of the following properties are nested under the `ingressController`
section of the `values.yaml` file:

| Parameter | Description | Default |
|--------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------|
| enabled | Deploy the ingress controller, RBAC and CRDs | true |
| image.repository | Docker image with the ingress controller | kong/kubernetes-ingress-controller |
| image.tag | Version of the ingress controller | `3.0` |
| image.effectiveSemver | Version of the ingress controller used for version-specific features when image.tag is not a valid semantic version | |
| readinessProbe | Kong Ingress Controller readiness probe | |
| livenessProbe | Kong Ingress Controller liveness probe | |
| installCRDs | Legacy toggle for Helm 2-style CRD management. Should not be set [unless necessary due to cluster permissions](#removing-cluster-scoped-permissions). | false |
| env | Specify Kong Ingress Controller configuration via environment variables | |
| customEnv | Specify custom environment variables (without the CONTROLLER_ prefix) | |
| envFrom | Populate environment variables from ConfigMap or Secret keys | |
| ingressClass | The name of this controller's ingressClass | kong |
| ingressClassAnnotations | The ingress-class value for controller | kong |
| args | List of ingress-controller cli arguments | [] |
| watchNamespaces | List of namespaces to watch. Watches all namespaces if empty | [] |
| admissionWebhook.enabled | Whether to enable the validating admission webhook | true |
| admissionWebhook.failurePolicy | How unrecognized errors from the admission endpoint are handled (Ignore or Fail) | Ignore |
| admissionWebhook.port | The port the ingress controller will listen on for admission webhooks | 8080 |
| admissionWebhook.address | The address the ingress controller will listen on for admission webhooks, if not 0.0.0.0 | |
| admissionWebhook.annotations | Annotations for the Validation Webhook Configuration | |
| admissionWebhook.certificate.provided | Use a provided certificate. When set to false, the chart will automatically generate a certificate. | false |
| admissionWebhook.certificate.secretName | Name of the TLS secret for the provided webhook certificate | |
| admissionWebhook.certificate.caBundle | PEM encoded CA bundle which will be used to validate the provided webhook certificate | |
| admissionWebhook.namespaceSelector | Add namespaceSelector to the webhook. Please go to [Kubernetes doc for the specs](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-namespaceselector) | |
| admissionWebhook.timeoutSeconds | Kubernetes `apiserver`'s timeout when running this webhook. Default: 10 seconds. | |
| userDefinedVolumes | Create volumes. Please go to Kubernetes doc for the spec of the volumes | |
| userDefinedVolumeMounts | Create volumeMounts. Please go to Kubernetes doc for the spec of the volumeMounts | |
| terminationGracePeriodSeconds | Sets the [termination grace period](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution) for Deployment pods | 30 |
| gatewayDiscovery.enabled | Enables Kong instance service discovery (for more details see [gatewayDiscovery section][gd_section]) | false |
| gatewayDiscovery.generateAdminApiService | Generate the admin API service name based on the release name (for more details see [gatewayDiscovery section][gd_section]) | false |
| gatewayDiscovery.adminApiService.namespace | The namespace of the Kong admin API service (for more details see [gatewayDiscovery section][gd_section]) | `.Release.Namespace` |
| gatewayDiscovery.adminApiService.name | The name of the Kong admin API service (for more details see [gatewayDiscovery section][gd_section]) | "" |
| konnect.enabled | Enable synchronisation of data plane configuration with Konnect Runtime Group | false |
| konnect.runtimeGroupID | Konnect Runtime Group's unique identifier | |
| konnect.apiHostname | Konnect API hostname. Defaults to the production US region. | us.kic.api.konghq.com |
| konnect.tlsClientCertSecretName | Name of the secret that contains Konnect Runtime Group's client TLS certificate | konnect-client-tls |
| konnect.license.enabled | Enable automatic license provisioning for Gateways managed by Ingress Controller in Konnect mode | false |
| adminApi.tls.client.enabled | Enable TLS client verification for the Admin API. By default, Helm will generate certificates automatically. | false |
| adminApi.tls.client.certProvided | Use user-provided certificates. If set to false, Helm will generate certificates. | false |
| adminApi.tls.client.secretName | Client TLS certificate/key pair secret name. Can also be set when `certProvided` is false to enforce a generated secret's name. | "" |
| adminApi.tls.client.caSecretName | CA TLS certificate/key pair secret name. Can also be set when `certProvided` is false to enforce a generated secret's name. | "" |
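
To synchronise data plane configuration with a Konnect Runtime Group, the `konnect.*` parameters above might be combined as follows (the runtime group ID is a placeholder; use your own group's identifier):

```yaml
ingressController:
  konnect:
    enabled: true
    runtimeGroupID: "<runtime-group-uuid>"   # placeholder; copy from Konnect
    apiHostname: us.kic.api.konghq.com
    # Secret holding the Runtime Group's client TLS certificate
    tlsClientCertSecretName: konnect-client-tls
```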

[gd_section]: #the-gatewaydiscovery-section

#### The `env` section

For a complete list of all configuration values you can set in the
`env` section, please read the Kong Ingress Controller's
[configuration document](https://docs.konghq.com/kubernetes-ingress-controller/latest/reference/cli-arguments/).

#### The `customEnv` section

The `customEnv` section can be used to configure all environment variables other than Ingress Controller configuration.
Any key-value pair put under this section translates to an environment variable.
Every key is upper-cased before setting the environment variable.
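
For example, the following (the timezone value is purely illustrative) sets `TZ` in the controller container:

```yaml
ingressController:
  customEnv:
    # Rendered as the environment variable TZ=Europe/Berlin
    TZ: "Europe/Berlin"
```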

#### The `gatewayDiscovery` section

Kong Ingress Controller v2.9 introduced gateway discovery, which allows
the controller to discover Gateway instances that it should configure using
an Admin API Kubernetes service.

Using this feature requires a split release installation of Gateways and Ingress Controller.
For exemplar `values.yaml` files which use this feature, please see the [examples README.md](./example-values/README.md),
or use the [`ingress` chart](../ingress/README.md), which can handle this for you.

You'll be able to configure this feature through the configuration section under
`ingressController.gatewayDiscovery`:

- If `ingressController.gatewayDiscovery.enabled` is set to `false`: the ingress controller
  will control a pre-determined set of Gateway instances based on Admin API URLs
  (provided under the hood via the `CONTROLLER_KONG_ADMIN_URL` environment variable).

- If `ingressController.gatewayDiscovery.enabled` is set to `true`: the ingress controller
  will dynamically locate Gateway instances by watching the specified Kubernetes service
  (provided under the hood via the `CONTROLLER_KONG_ADMIN_SVC` environment variable).

The following admin API Service flags have to be present in order for gateway
discovery to work:

- `ingressController.gatewayDiscovery.adminApiService.name`
- `ingressController.gatewayDiscovery.adminApiService.namespace`

If you set `ingressController.gatewayDiscovery.generateAdminApiService` to `true`,
the chart will generate values for `name` and `namespace` based on the current release name and
namespace. This is useful when consuming the `kong` chart as a subchart.
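
Putting these flags together, a controller release might enable gateway discovery like this (the Service name and namespace are hypothetical; point them at the admin API Service of your Gateway release):

```yaml
ingressController:
  gatewayDiscovery:
    enabled: true
    adminApiService:
      name: kong-gateway-admin   # hypothetical admin API Service name
      namespace: kong            # hypothetical namespace of the Gateway release
```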

Additionally, you can control the addresses that are generated for your Gateways
via the `--gateway-discovery-dns-strategy` CLI flag that can be set on the Ingress Controller
(or an equivalent environment variable: `CONTROLLER_GATEWAY_DISCOVERY_DNS_STRATEGY`).
It accepts 3 values which change the way that Gateway addresses are generated:

- `service` - for service-scoped pod DNS names: `pod-ip-address.service-name.my-namespace.svc.cluster-domain.example`
- `pod` - for namespace-scoped pod DNS names: `pod-ip-address.my-namespace.pod.cluster-domain.example`
- `ip` (default, retains behavior introduced in v2.9) - for regular IP addresses

When using `gatewayDiscovery`, you should consider configuring the Admin service to use mTLS client verification to make
this interface secure.
Without that, anyone who can access the Admin API from inside the cluster can configure the Gateway instances.

On the controller release side, that can be achieved by setting `ingressController.adminApi.tls.client.enabled` to `true`.
By default, Helm will generate a certificate Secret named `<release name>-admin-api-keypair` and
a CA Secret named `<release name>-admin-api-ca-keypair` for you.

To provide your own cert, set `ingressController.adminApi.tls.client.certProvided` to
`true`, `ingressController.adminApi.tls.client.secretName` to the name of the Secret containing your client cert,
and `ingressController.adminApi.tls.client.caSecretName` to the name of the Secret containing your CA cert.

On the Gateway release side, set either `admin.tls.client.secretName` to the name of your CA Secret or set `admin.tls.client.caBundle` to the CA certificate string.
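
In a split installation with user-provided certificates, the two releases might be configured as follows (Secret names are hypothetical):

```yaml
# Controller release values:
ingressController:
  adminApi:
    tls:
      client:
        enabled: true
        certProvided: true
        secretName: admin-api-client-cert   # hypothetical client cert/key Secret
        caSecretName: admin-api-ca-cert     # hypothetical CA cert Secret
```

```yaml
# Gateway release values, referencing the same CA Secret:
admin:
  tls:
    client:
      secretName: admin-api-ca-cert
```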

### General Parameters

| Parameter | Description | Default |
| ---------------------------------- | ------------------------------------------------------------------------------------- | ------------------- |
| namespace | Namespace to deploy chart resources | |
| deployment.kong.enabled | Enable or disable deploying Kong | `true` |
| deployment.minReadySeconds | Minimum number of seconds for which newly created pods should be ready, without any of their containers crashing, to be considered available | |
| deployment.initContainers | Create initContainers. Please go to Kubernetes doc for the spec of the initContainers | |
| deployment.daemonset | Use a DaemonSet instead of a Deployment | `false` |
| deployment.hostname | Set the Deployment's `.spec.template.hostname`. Kong reports this as its hostname. | |
| deployment.hostNetwork | Enable hostNetwork, which binds the ports to the host | `false` |
| deployment.userDefinedVolumes | Create volumes. Please go to Kubernetes doc for the spec of the volumes | |
| deployment.userDefinedVolumeMounts | Create volumeMounts. Please go to Kubernetes doc for the spec of the volumeMounts | |
| deployment.serviceAccount.create | Create Service Account for the Deployment / DaemonSet and the migrations | `true` |
| deployment.serviceAccount.automountServiceAccountToken | Enable ServiceAccount token automount in Kong deployment | `false` |
| deployment.serviceAccount.name | Name of the Service Account; a default one will be generated if left blank | "" |
| deployment.serviceAccount.annotations | Annotations for the Service Account | {} |
| deployment.test.enabled | Enable creation of test resources for use with "helm test" | `false` |
| autoscaling.enabled | Set this to `true` to enable autoscaling | `false` |
| autoscaling.minReplicas | Set minimum number of replicas | `2` |
| autoscaling.maxReplicas | Set maximum number of replicas | `5` |
| autoscaling.behavior | Sets the [behavior for scaling up and down](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configurable-scaling-behavior) | `{}` |
| autoscaling.targetCPUUtilizationPercentage | Target percentage at which autoscaling takes effect. Only used if cluster does not support `autoscaling/v2` or `autoscaling/v2beta2` | `80` |
| autoscaling.metrics | Metrics used for autoscaling on clusters that support `autoscaling/v2` or `autoscaling/v2beta2` | See [values.yaml](values.yaml) |
| updateStrategy | Update strategy for deployment | `{}` |
| readinessProbe | Kong readiness probe | |
| livenessProbe | Kong liveness probe | |
| startupProbe | Kong startup probe | |
| lifecycle | Proxy container lifecycle hooks | see `values.yaml` |
| terminationGracePeriodSeconds | Sets the [termination grace period](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution) for Deployment pods | 30 |
| affinity | Node/pod affinities | |
| topologySpreadConstraints | Control how Pods are spread across the cluster among failure-domains | |
| nodeSelector | Node labels for pod assignment | `{}` |
| deploymentAnnotations | Annotations to add to deployment | see `values.yaml` |
| podAnnotations | Annotations to add to each pod | see `values.yaml` |
| podLabels | Labels to add to each pod | `{}` |
| resources | Pod resource requests & limits | `{}` |
| tolerations | List of node taints to tolerate | `[]` |
| dnsPolicy | Pod dnsPolicy | |
| dnsConfig | Pod dnsConfig | |
| podDisruptionBudget.enabled | Enable PodDisruptionBudget for Kong | `false` |
| podDisruptionBudget.maxUnavailable | Represents the maximum number of Pods that can be unavailable (integer or percentage) | `50%` |
| podDisruptionBudget.minAvailable | Represents the number of Pods that must be available (integer or percentage) | |
| podSecurityPolicy.enabled | Enable podSecurityPolicy for Kong | `false` |
| podSecurityPolicy.labels | Labels to add to podSecurityPolicy for Kong | `{}` |
| podSecurityPolicy.annotations | Annotations to add to podSecurityPolicy for Kong | `{}` |
| podSecurityPolicy.spec | Collection of [PodSecurityPolicy settings](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#what-is-a-pod-security-policy) | |
| priorityClassName | Set pod scheduling priority class for Kong pods | `""` |
| secretVolumes | Mount given secrets as a volume in Kong container to override default certs and keys | `[]` |
| securityContext | Set the securityContext for Kong Pods | `{}` |
| containerSecurityContext | Set the securityContext for Containers | See values.yaml |
| serviceMonitor.enabled | Create ServiceMonitor for Prometheus Operator | `false` |
| serviceMonitor.interval | Scraping interval | `30s` |
| serviceMonitor.namespace | Where to create ServiceMonitor | |
| serviceMonitor.labels | ServiceMonitor labels | `{}` |
| serviceMonitor.targetLabels | ServiceMonitor targetLabels | `{}` |
| serviceMonitor.honorLabels | ServiceMonitor honorLabels | `{}` |
| serviceMonitor.metricRelabelings | ServiceMonitor metricRelabelings | `{}` |
| extraConfigMaps | ConfigMaps to add to mounted volumes | `[]` |
| extraSecrets | Secrets to add to mounted volumes | `[]` |
| nameOverride | Replaces "kong" in resource names, like "RELEASENAME-nameOverride" instead of "RELEASENAME-kong" | `""` |
| fullnameOverride | Overrides the entire resource name string | `""` |
| extraObjects | Create additional k8s resources | `[]` |

**Note:** If you are using `deployment.hostNetwork` to bind to lower ports (< 1024), which may be the desired option (ports 80 and 443), you also
need to tweak the `containerSecurityContext` configuration as in the example:

```yaml
containerSecurityContext: # run as root to bind to lower ports
  capabilities:
    add: [NET_BIND_SERVICE]
```

**Note:** The default `podAnnotations` values disable inbound proxying for Kuma
and Istio. This is appropriate when using Kong as a gateway for external
traffic inbound into the cluster.

If you want to use Kong as an internal proxy within the cluster network, you
should enable the inbound mesh proxies:

```yaml
# Enable inbound mesh proxying for Kuma and Istio
podAnnotations:
  kuma.io/gateway: disabled
  traffic.sidecar.istio.io/includeInboundPorts: "*"
```

#### The `env` section

The `env` section can be used to configure all properties of Kong.
Any key-value pair put under this section translates to environment variables
used to control Kong's configuration. Every key is prefixed with `KONG_`
and upper-cased before setting the environment variable.

Furthermore, all `kong.env` parameters can also accept a mapping instead of a
value to ensure the parameters can be set through ConfigMaps and Secrets.

An example:

```yaml
env: # load PG password from a secret dynamically
  pg_password:
    valueFrom:
      secretKeyRef:
        name: postgres   # hypothetical Secret name and key
        key: password
  nginx_worker_processes: "2"
```

For a complete list of Kong configurations, please check the
[Kong configuration docs](https://docs.konghq.com/latest/configuration).

> **Tip**: You can use the default [values.yaml](values.yaml) as a reference
> for the available settings.

#### The `customEnv` section

The `customEnv` section can be used to configure custom properties other than Kong configuration.
Any key-value pair put under this section translates to environment variables
that can be used in Kong's plugin configurations. Every key is upper-cased before setting the environment variable.

An example:

```yaml
customEnv:
  client_name: testClient
```

This renders a `CLIENT_NAME=testClient` environment variable in the Kong container.

#### The `extraLabels` section

The `extraLabels` section can be used to configure some extra labels that will be added to each Kubernetes object generated.

For example, you can add the `acme.com/some-key: some-value` label to each Kubernetes object by putting the following in your Helm values:

```yaml
extraLabels:
  acme.com/some-key: some-value
```

## Kong Enterprise Parameters

Kong Enterprise requires some additional configuration not needed when using
Kong Open-Source. To use Kong Enterprise, at the minimum,
you need to do the following:

- Set `enterprise.enabled` to `true` in the `values.yaml` file.
- Update values.yaml to use a Kong Enterprise image.
- Satisfy the two prerequisites below for the Enterprise License and
  Enterprise Docker Registry.
- (Optional) [Set a `password` environment variable](#rbac) to create the
  initial super-admin. Though not required, this is recommended for users that
  wish to use RBAC, as it cannot be done after initial setup.

Once you have these set, it is possible to install Kong Enterprise,
but please make sure to review the below sections for other settings that
you should consider configuring before installing Kong.

Some of the more important configuration is grouped in sections
under the `.enterprise` key in values.yaml, though most enterprise-specific
configuration can be placed under the `.env` key.

#### Kong Enterprise License

Kong Enterprise 2.3+ can run with or without a license. If you wish to run 2.3+
without a license, you can skip this step and leave `enterprise.license_secret`
unset. In this case only a limited subset of features will be available.
Earlier versions require a license.

If you have paid for a license, but you do not have a copy of yours, please
contact Kong Support. Once you have it, you will need to store it in a Secret:

```bash
kubectl create secret generic kong-enterprise-license --from-file=license=./license.json
```

Set the secret name in `values.yaml`, in the `.enterprise.license_secret` key.
Please ensure the above secret is created in the same namespace in which
Kong is going to be deployed.
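
Putting the pieces together, a minimal Enterprise enablement in `values.yaml` might look like the following sketch (the image tag is illustrative; pin the Enterprise version you actually run):

```yaml
image:
  repository: kong/kong-gateway
  tag: "3.5"   # illustrative Enterprise image tag
enterprise:
  enabled: true
  # Name of the Secret created with `kubectl create secret generic` above
  license_secret: kong-enterprise-license
```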

#### Kong Enterprise Docker registry access

Kong Enterprise versions 2.2 and earlier use a private Docker registry and
require a pull secret. **If you use 2.3 or newer, you can skip this step.**

You should have received credentials to log into Docker Hub after
purchasing Kong Enterprise. After logging in, you can retrieve your API key
from \<your username\> \> Edit Profile \> API Key. Use this to create registry
secrets:

```bash
kubectl create secret docker-registry kong-enterprise-edition-docker \
    --docker-server=hub.docker.io \
    --docker-username=<username-provided-to-you> \
    --docker-password=<password-provided-to-you>
secret/kong-enterprise-edition-docker created
```

Set the secret names in `values.yaml` in the `image.pullSecrets` section.
Again, please ensure the above secret is created in the same namespace in which
Kong is going to be deployed.
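
Referencing the pull secret created above in `values.yaml` is then a one-liner:

```yaml
image:
  pullSecrets:
    # Name of the docker-registry Secret created earlier
    - kong-enterprise-edition-docker
```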

### Service location hints

Kong Enterprise adds two GUIs, Kong Manager and the Kong Developer Portal, that
must know where other Kong services (namely the admin and files APIs) can be
accessed in order to function properly. Kong's default behavior for attempting
to locate these services, absent configuration, is unlikely to work in common Kubernetes
environments. Because of this, you should set each of `admin_gui_url`,
`admin_gui_api_url`, `proxy_url`, `portal_api_url`, `portal_gui_host`, and
`portal_gui_protocol` under the `.env` key in values.yaml to locations where
each of their respective services can be accessed to ensure that Kong services
can locate one another and properly set CORS headers. See the
[Property Reference documentation](https://docs.konghq.com/enterprise/latest/property-reference/)
for more details on these settings.
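
As a sketch, with hypothetical hostnames exposed through your ingresses or load balancers:

```yaml
env:
  # All hostnames below are hypothetical; use the addresses at which
  # each service is actually reachable in your environment.
  admin_gui_url: https://manager.example.com
  admin_gui_api_url: https://admin.example.com
  proxy_url: https://proxy.example.com
  portal_api_url: https://portal-api.example.com
  portal_gui_host: portal.example.com
  portal_gui_protocol: https
```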

### RBAC

You can create a default RBAC superuser when initially running `helm install`
by setting a `password` environment variable under `env` in values.yaml. It
should be a reference to a secret key containing your desired password. This
will create a `kong_admin` admin whose token and basic-auth password match the
value in the secret. For example:

```yaml
env:
  password:
    valueFrom:
      secretKeyRef:
        name: kong-enterprise-superuser-password
        key: password
```

If using the ingress controller, it needs access to the token as well, by
specifying `kong_admin_token` in its environment variables:

```yaml
ingressController:
  env:
    kong_admin_token:
      valueFrom:
        secretKeyRef:
          name: kong-enterprise-superuser-password
          key: password
```

Although the above examples both use the initial super-admin, we recommend
[creating a less-privileged RBAC user](https://docs.konghq.com/enterprise/latest/kong-manager/administration/rbac/add-user/)
for the controller after installing. It needs at least workspace admin
privileges in its workspace (`default` by default, settable by adding a
`workspace` variable under `ingressController.env`). Once you create the
controller user, add its token to a secret and update your `kong_admin_token`
variable to use it. Remove the `password` variable from Kong's environment
variables and the secret containing the super-admin token after.

Login sessions for Kong Manager and the Developer Portal make use of
[the Kong Sessions plugin](https://docs.konghq.com/enterprise/latest/kong-manager/authentication/sessions).
When configured via values.yaml, their configuration must be stored in Secrets,
as it contains an HMAC key.

Kong Manager's session configuration must be configured via values.yaml,
whereas this is optional for the Developer Portal on versions 0.36+. Providing
Portal session configuration in values.yaml provides the default session
configuration, which can be overridden on a per-workspace basis.

```bash
$ cat admin_gui_session_conf
{"cookie_name":"admin_session","cookie_samesite":"off","secret":"admin-secret-CHANGEME","cookie_secure":true,"storage":"kong"}
$ cat portal_session_conf
{"cookie_name":"portal_session","cookie_samesite":"off","secret":"portal-secret-CHANGEME","cookie_secure":true,"storage":"kong"}
$ kubectl create secret generic kong-session-config --from-file=admin_gui_session_conf --from-file=portal_session_conf
secret/kong-session-config created
```

The exact plugin settings may vary in your environment. The `secret` should
always be changed for both configurations.

After creating your secret, set its name in values.yaml, in
`.enterprise.rbac.session_conf_secret`. If you create a Portal configuration,
add it at `env.portal_session_conf` using a secretKeyRef.
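
Continuing the `kong-session-config` Secret example above, the resulting values might look like this sketch (the key names match the files used to create the Secret):

```yaml
enterprise:
  rbac:
    session_conf_secret: kong-session-config
env:
  portal_session_conf:
    valueFrom:
      secretKeyRef:
        name: kong-session-config
        key: portal_session_conf
```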

Email is used to send invitations for
[Kong Admins](https://docs.konghq.com/enterprise/latest/kong-manager/networking/email)
and [Developers](https://docs.konghq.com/enterprise/latest/developer-portal/configuration/smtp).

Email invitations rely on setting a number of SMTP settings at once. For
convenience, these are grouped under the `.enterprise.smtp` key in values.yaml.
Setting `.enterprise.smtp.disabled: true` will set `KONG_SMTP_MOCK=on` and
allow Admin/Developer invites to proceed without sending email. Note, however,
that these have limited functionality without sending email.

If your SMTP server requires authentication, you must provide the `username`
and `smtp_password_secret` keys under `.enterprise.smtp.auth`.
`smtp_password_secret` must be a Secret containing an `smtp_password` key whose
value is your SMTP password.

By default, SMTP uses `AUTH` `PLAIN` when you provide credentials. If your provider requires `AUTH LOGIN`, set `smtp_auth_type: login`.
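
For example, the authentication keys described above might be set as follows (the username and Secret name are hypothetical):

```yaml
enterprise:
  smtp:
    disabled: false
    auth:
      username: smtp-user                      # hypothetical SMTP username
      # Secret containing an `smtp_password` key with your SMTP password
      smtp_password_secret: kong-smtp-password
```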

## Prometheus Operator integration

The chart can configure a ServiceMonitor resource to instruct the [Prometheus
Operator](https://github.com/prometheus-operator/prometheus-operator) to
collect metrics from Kong Pods. To enable this, set
`serviceMonitor.enabled=true` in `values.yaml`.

Kong exposes memory usage and connection counts by default. You can enable
traffic metrics for routes and services by configuring the [Prometheus
plugin](https://docs.konghq.com/hub/kong-inc/prometheus/).
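
For example, a global `KongClusterPlugin` can enable the Prometheus plugin for all services (a sketch; adjust the ingress class annotation to match your controller's `ingressClass`):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: prometheus
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"   # apply the plugin to all services
plugin: prometheus
```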

The ServiceMonitor requires an `enable-metrics: "true"` label on one of the
chart's Services to collect data. By default, this label is set on the proxy
Service. It should only be set on a single chart Service to avoid duplicate
data. If you disable the proxy Service (e.g. on a hybrid control plane instance
or Portal-only instance) and still wish to collect memory usage metrics, add
this label to another Service, e.g. on the admin API Service:

```yaml
admin:
  labels:
    enable-metrics: "true"
```

## Argo CD Considerations

The built-in database subchart (`postgresql.enabled` in values) is not
supported when installing the chart via Argo CD.

Argo CD does not support the full Helm lifecycle. There is no distinction
between the initial install and upgrades. Both operations are a "sync" in Argo
terms. This affects when migration Jobs execute in database-backed Kong
deployments.

The chart sets the `Sync` and `BeforeHookCreation` deletion
[hook policies](https://argo-cd.readthedocs.io/en/stable/user-guide/resource_hooks/)
on the `init-migrations` and `pre-upgrade-migrations` Jobs.

The `pre-upgrade-migrations` Job normally uses Helm's `pre-upgrade` policy. Argo
translates this to its `PreSync` policy, which would create the Job before all
sync phase resources. Doing this before various sync phase resources (such as
the ServiceAccount) are in place would prevent the Job from running
successfully. Overriding this with Argo's `Sync` policy starts the Job at the
same time as the upgraded Deployment Pods. The new Pods may fail to start
temporarily, but will eventually start normally once migrations complete.

If you run into an issue, bug or have a question, please reach out to the Kong
community via [Kong Nation](https://discuss.konghq.com).
Please do not open issues in [this](https://github.com/helm/charts) repository
as the maintainers will not be notified and won't respond.