# Upgrade considerations

New versions of the Kong chart may add significant new functionality or
deprecate/entirely remove old functionality. This document covers how and why
users should update their chart configuration to take advantage of new features
or migrate away from deprecated features.

In general, breaking changes deprecate their old features before removing them
entirely. While support for the old functionality remains, the chart will show
a warning about the outdated configuration when running `helm
install/status/upgrade`.

Note that not all versions contain breaking changes. If a version is not
present in the table of contents, it requires no version-specific changes when
upgrading from a previous version.
- [Upgrade considerations for all versions](#upgrade-considerations-for-all-versions)
## Upgrade considerations for all versions

The chart automates the
[upgrade migration process](https://github.com/Kong/kong/blob/master/UPGRADE.md).
When running `helm upgrade`, the chart spawns an initial job to run `kong
migrations up` and then spawns new Kong pods with the updated version. Once
these pods become ready, they begin processing traffic and old pods are
terminated. Once this is complete, the chart spawns another job to run `kong
migrations finish`.

If you split your Kong deployment across multiple Helm releases (to create
proxy-only and admin-only nodes, for example), you must
[set which migration jobs run based on your upgrade order](https://github.com/Kong/charts/blob/main/charts/kong/README.md#separate-admin-and-proxy-nodes).
However, this does not apply to hybrid mode, which can run both migrations but
requires [upgrading the control plane version
first](https://docs.konghq.com/gateway/latest/plan-and-deploy/hybrid-mode/#version-compatibility).

While the migrations themselves are automated, the chart does not automatically
ensure that you follow the recommended upgrade path. If you are upgrading from
more than one minor Kong version back, check the [upgrade path
recommendations for Kong open source](https://github.com/Kong/kong/blob/master/UPGRADE.md#3-suggested-upgrade-path)
or [Kong Enterprise](https://docs.konghq.com/enterprise/latest/deployment/migrations/).
Although not required, users should upgrade their chart version and Kong
version independently. In the event of any issues, this will help clarify
whether the issue stems from changes in Kubernetes resources or changes in
Kong.
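For example, pinning both image tags in values.yaml (the versions shown here
are illustrative) ensures that upgrading to a newer chart does not also pull
newer Kong or controller images:

```yaml
image:
  repository: kong
  tag: "3.4"    # pinned: a chart upgrade will not change the Kong version
ingressController:
  image:
    repository: kong/kubernetes-ingress-controller
    tag: "2.12" # pinned controller version
```

With the tags pinned, you can later bump each version in a separate,
deliberate `helm upgrade`.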
Users may encounter an error when upgrading which displays a large block of
text ending with `field is immutable`. This is typically due to a bug with the
`init-migrations` job, which was not removed automatically prior to 1.5.0.
If you encounter this error, deleting any existing `init-migrations` jobs will
clear it.
Helm installs CRDs at initial install but [does not update them
after](https://github.com/helm/community/blob/main/hips/hip-0011.md). Some
chart releases include updates to CRDs that must be applied to successfully
upgrade. Because Helm does not handle these updates, you must manually apply
them before upgrading your release:

```
kubectl apply -f https://raw.githubusercontent.com/Kong/charts/kong-<version>/charts/kong/crds/custom-resource-definitions.yaml
```

For example, if your release is 2.6.4, you would apply
`https://raw.githubusercontent.com/Kong/charts/kong-2.6.4/charts/kong/crds/custom-resource-definitions.yaml`.
If you are using controller version 2.10 or lower and proxy version 3.3 or
higher in separate Deployments (such as when using the `ingress` chart), proxy
Pods will not become ready unless you override the default readiness endpoint:
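The override is a standard readiness probe pointing at the older `/status`
endpoint; a minimal sketch, assuming the chart's default `status` port name:

```yaml
readinessProbe:
  httpGet:
    path: /status
    port: status
    scheme: HTTP
```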
This section goes under the `gateway` section when using the `ingress` chart.
2.26 changes the default proxy readiness endpoint to the `/status/ready`
endpoint introduced in Kong 3.3. This endpoint reports true when Kong has
configuration available, whereas the previous `/status` endpoint returned true
immediately after start, and could result in proxy instances attempting to
serve requests before they had configuration.

The chart has logic to fall back to the older endpoint if the proxy and
controller versions do not work well with the new endpoint. However, the chart
detection cannot determine the controller version when the controller is in a
separate Deployment, and will always use the new endpoint if the Kong image
version is 3.3 or higher.

Kong recommends Kong 3.3 and higher users update to controller 2.11 at their
earliest convenience to take advantage of the improved readiness behavior.
2.19 sets a default [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
that declares a read-only root filesystem for Kong containers. The base Kong
and KIC images are compatible with this setting. The chart mounts temporary
writeable emptyDir filesystems for locations that require writeable files
(`/tmp` and `/kong_prefix/`).
This setting limits attack surface and should be compatible with most
installations. However, if you use custom plugins that write to disk, you must
either mount a writeable emptyDir for them or override the new defaults by
adding the following to your values.yaml:

```
containerSecurityContext:
  readOnlyRootFilesystem: false
```
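Alternatively, you can keep the read-only root filesystem and mount a writable
emptyDir only where your plugin needs it. A sketch using the chart's
user-defined volume settings (the volume name and mount path are illustrative):

```yaml
deployment:
  userDefinedVolumes:
  - name: my-plugin-scratch
    emptyDir: {}
  userDefinedVolumeMounts:
  - name: my-plugin-scratch
    mountPath: /opt/my-plugin/cache  # hypothetical path your plugin writes to
```

This keeps the hardened default for the rest of the filesystem.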
2.13.0 includes updated CRDs. You must [apply these manually](#updates-to-crds)
before upgrading an existing release.

2.13 changes the default Kong tag to 3.0 and the default KIC tag to 2.6. We
recommend that you set these versions (`image.tag` and
`ingressController.image.tag`) in your values.yaml to allow updating the chart
without also updating the container versions. If you do update to these
container image versions, you should first review the Kong 3.0 breaking changes
(see the [open
source](https://github.com/Kong/kong/blob/master/CHANGELOG.md#300) and
[Enterprise](https://docs.konghq.com/gateway/changelog/#3000) Kong changelogs)
and the [ingress controller upgrade guide for Kong
3.x](https://docs.konghq.com/kubernetes-ingress-controller/2.6.x/guides/upgrade-kong-3x).
Kong 3.0 requires KIC version 2.6 at minimum. It will not work with any
previous versions. Furthermore, changes to regular expression path handling in
Kong 3.x require changes to Ingresses that use regular expression paths in
rules.
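For illustration (resource and service names are hypothetical), Kong 3.x
expects router paths that are regular expressions to carry an explicit `~`
prefix, so an ImplementationSpecific regex path needs rewriting along these
lines; see the linked upgrade guide for the authoritative rules:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-regex
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - http:
      paths:
      - path: /~/foo/\d+   # Kong 3.x: regex paths need the /~ prefix
        pathType: ImplementationSpecific
        backend:
          service:
            name: echo
            port:
              number: 80
```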
### IngressClass controller name change requires manual delete

2.8 updates the chart-managed IngressClass's controller name to match the
controller name used elsewhere in Kong's documentation. Controller names are
immutable, so Helm cannot actually update existing IngressClass resources.

Prior to your upgrade, you must delete the existing IngressClass. Helm will
create a new IngressClass with the new controller name during the upgrade:

```
kubectl delete ingressclass <class name, "kong" by default>
helm upgrade RELEASE_NAME kong/kong ...
```
Removing the IngressClass will not affect configuration: the controller
IngressClass implementation is still in progress, and it will still ingest
resources whose `ingress.class` annotation or `ingressClassName` value matches
the `CONTROLLER_INGRESS_CLASS` value in the controller environment even if
no matching IngressClass exists.
### Postgres subchart version update

2.8 updates the Postgres subchart version from 8.6.8 to 11.1.15. This changes
a number of values.yaml keys and the default Postgres version. The previous
default Postgres version was [11.7.0-debian-10-r37](https://github.com/bitnami/charts/blob/590c6b0f4e07161614453b12efe71f22e0c00a46/bitnami/postgresql/values.yaml#L18).

To use the new version on an existing install, you should follow Bitnami's
instructions for updating values.yaml keys and upgrading their chart, as well
as [the Postgres upgrade instructions](https://www.postgresql.org/docs/current/upgrading.html).
You can alternatively use the new chart without upgrading Postgres by setting
`postgresql.image.tag=11.7.0-debian-10-r37` or use the old version of the
chart. Helm documentation is unclear on whether ignoring a subchart version
change for a release is possible, so we recommend [dumping the
database](https://www.postgresql.org/docs/current/backup-dump.html) and
creating a separate release if you wish to continue using 8.6.8:

```
helm install my-release -f values.yaml --version 8.6.8 bitnami/postgresql
```
Afterward, you will upgrade your Kong chart release with
`postgresql.enabled=false` and `env.pg_host` and `env.pg_password` set to the
appropriate hostname and Secret reference for your new release (these are set
automatically when the subchart is enabled, but will not be set automatically
with a separate release).
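A sketch of those values (the release hostname and Secret name are
illustrative; check the names your separate Postgres release actually creates):

```yaml
postgresql:
  enabled: false
env:
  pg_host: my-release-postgresql.default.svc.cluster.local
  pg_password:
    valueFrom:
      secretKeyRef:
        name: my-release-postgresql   # Secret created by the new release
        key: postgresql-password
```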
2.7 updates CRDs to the version released in KIC 2.1.0. Helm does not upgrade
CRDs automatically; you must `kubectl apply -f https://raw.githubusercontent.com/Kong/charts/kong-2.7.0/charts/kong/crds/custom-resource-definitions.yaml`
manually before upgrading.

You should not apply the updated CRDs until you are prepared to upgrade to KIC
2.1 or higher, and [must have first upgraded to 2.0](https://github.com/Kong/kubernetes-ingress-controller/blob/v2.1.1/CHANGELOG.md#breaking-changes)
and applied the [previous version of the CRDs](https://raw.githubusercontent.com/Kong/charts/kong-2.6.4/charts/kong/crds/custom-resource-definitions.yaml).
### Disable ingress controller prior to 2.x upgrade when using PostgreSQL

Chart version 2.4 is the first Kong chart version that defaults to the 2.x
series of ingress controller releases. 2.x uses a different leader election
system than 1.x. If both versions are running simultaneously, both controller
versions will attempt to interact with the admin API, potentially setting
inconsistent configuration in the database when PostgreSQL is the backend.

If you are configured with the following:

- `ingressController.enabled=true`
- `postgresql.enabled=true`

and do not override the ingress controller version, you must perform the
upgrade in multiple steps:
First, pin the controller version and upgrade to chart 2.4.0:

```
helm upgrade --wait \
  --set ingressController.image.tag=<CURRENT_CONTROLLER_VERSION> \
  --version 2.4.0 \
  --namespace <YOUR_RELEASE_NAMESPACE> \
  <YOUR_RELEASE_NAME> kong/kong
```
Second, temporarily disable the ingress controller:

```
helm upgrade --wait \
  --set ingressController.enabled=false \
  --set deployment.serviceaccount.create=true \
  --version 2.4.0 \
  --namespace <YOUR_RELEASE_NAMESPACE> \
  <YOUR_RELEASE_NAME> kong/kong
```
Finally, re-enable the ingress controller at the new version:

```
helm upgrade --wait \
  --set ingressController.enabled=true \
  --set ingressController.image.tag=<NEW_CONTROLLER_VERSION> \
  --version 2.4.0 \
  --namespace <YOUR_RELEASE_NAMESPACE> \
  <YOUR_RELEASE_NAME> kong/kong
```
While the controller is disabled, changes to Kubernetes configuration (Ingress
resources, KongPlugin resources, Service Endpoints, etc.) will not update Kong
proxy configuration. We recommend that you establish an active maintenance
window for this upgrade and inform users and stakeholders to avoid unexpected
disruption.
### Changed ServiceAccount configuration location

2.4.0 moved ServiceAccount configuration from
`ingressController.serviceAccount` to `deployment.serviceAccount` to
accommodate configurations that required a ServiceAccount but did not use the
controller.
The chart now creates a ServiceAccount by default. When enabled, upgrade
migration hooks require the ServiceAccount, but Helm will not create it before
the hooks run, so the migration jobs will fail. To avoid this, first perform
an initial chart upgrade that does not update the Kong image version and sets
`migrations.preUpgrade=false` and `migrations.postUpgrade=false`. This will
create the account for future upgrades, and you can re-enable migrations and
upgrade your Kong version afterward.
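For that first upgrade, the equivalent values.yaml settings are:

```yaml
migrations:
  preUpgrade: false   # skip the pre-upgrade migration hook this one time
  postUpgrade: false  # skip the post-upgrade migration hook this one time
```

Revert both to `true` once the ServiceAccount exists and before upgrading the
Kong version.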
If you disable ServiceAccount creation or override its name, you must move your
configuration under `deployment.serviceAccount`. The chart will warn you if it
detects non-default configuration in the original location when you upgrade.
You can use `helm upgrade --dry-run` to see if you are affected before actually
upgrading.
### Updated CRDs and CRD API version

2.3.0 adds new and updated CRDs for KIC 2.x. These CRDs are also compatible
with KIC 1.x. The CRD API version is now v1, replacing the deprecated v1beta1,
to support Kubernetes 1.22 and onward. API version v1 requires Kubernetes 1.16
or newer.
Helm 2-style CRD management will upgrade CRDs automatically. You can check to
see if you are using Helm 2-style management by running:

```
kubectl get crd kongconsumers.configuration.konghq.com -o yaml | grep "meta.helm.sh/release-name"
```

If you see output, you are using Helm 2-style CRD management.
Helm 3-style CRD management (the default) does not upgrade CRDs automatically.
You must apply the changes manually by running:

```
kubectl apply -f https://raw.githubusercontent.com/Kong/charts/kong-2.2.0/charts/kong/crds/custom-resource-definitions.yaml
```
Although not recommended, you can remain on an older Kubernetes version and not
upgrade your CRDs if you are using Helm 3-style CRD management. However, you
will not be able to run KIC 2.x, and these configurations are considered
unsupported.
### Ingress controller feature detection

2.3.0 includes some features that are enabled by default, but require KIC 2.x.
KIC 2.x is not yet the default ingress controller version because there are
currently only preview releases for it. To maintain compatibility with KIC 1.x,
the chart automatically detects the KIC image version and disables incompatible
features. This feature detection requires a semver image tag, and the chart
cannot render successfully if the image tag is not semver-compliant.

Standard KIC images do use semver-compliant tags, and you do not need to make
any configuration changes if you use one. If you use a non-semver tag, such as
`next`, you must set the new `ingressController.image.effectiveSemver` field to
your approximate semver version. For example, if your `next` tag is for an
unreleased `2.1.0` KIC version, you should set `effectiveSemver: 2.1.0`.
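In values.yaml terms, that example looks like this (the repository is shown
for context only):

```yaml
ingressController:
  image:
    repository: kong/kubernetes-ingress-controller
    tag: next              # non-semver tag
    effectiveSemver: 2.1.0 # version the chart should assume for detection
```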
### Changes to pod disruption budget defaults

Prior to 2.2.0, the default values.yaml included
`podDisruptionBudget.maxUnavailable: 50%`. This prevented setting
`podDisruptionBudget.minAvailable` at all. To allow use of
`podDisruptionBudget.minAvailable`, we have removed the
`podDisruptionBudget.maxUnavailable` default. If you previously relied on this
default (you set `podDisruptionBudget.enabled: true` but did not set
`podDisruptionBudget.maxUnavailable`), you must now explicitly set
`podDisruptionBudget.maxUnavailable: 50%` in your values.yaml.
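That is, to keep the pre-2.2.0 behavior:

```yaml
podDisruptionBudget:
  enabled: true
  maxUnavailable: "50%"  # the old chart default, now explicit
```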
### Migration off Bintray

Bintray, the Docker registry previously used for several images used by this
chart, is [sunsetting May 1,
2021](https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocenter-and-chartcenter/).

The chart default `values.yaml` now uses the new Docker Hub repositories for
all affected images. You should check your release `values.yaml` files to
confirm that they do not still reference Bintray repositories. If they do,
update them to use the Docker Hub repositories now in the default
`values.yaml`.
### Support for Helm 2 dropped

2.0.0 takes advantage of template functionality that is only available in Helm
3 and reworks values defaults to target Helm 3 CRD handling, and as such
requires Helm 3. If you are not already using Helm 3, you must [migrate to
it](https://helm.sh/docs/topics/v2_v3_migration/) before updating to 2.0.0 or
later.

If desired, you can migrate your Kong chart releases without migrating your
other charts' releases.
### Support for deprecated 1.x features removed

Several previous 1.x chart releases reworked sections of values.yaml while
maintaining support for the older version of those settings. 2.x drops support
for the older versions of these settings entirely:

* [Portal auth settings](#removal-of-dedicated-portal-authentication-configuration-parameters)
* [The `runMigrations` setting](#changes-to-migration-job-configuration)
* [Single-stack admin API Service configuration](#changes-to-kong-service-configuration)
* [Multi-host proxy configuration](#removal-of-multi-host-proxy-ingress)

Each deprecated setting is accompanied by a warning that appears at the end of
`helm upgrade` output on a 1.x release:

```
WARNING: You are currently using legacy ...
```

If you do not see any such warnings when upgrading a release using chart
1.15.0, you are not using deprecated configuration and are ready to upgrade to
2.0.0. If you do see these warnings, follow the linked instructions to migrate
to the current settings format.
### Removal of multi-host proxy Ingress

Most of the chart's Ingress templates support a single hostname and TLS Secret.
The proxy Ingress template originally differed, and allowed multiple hostnames
and TLS configurations. As of chart 1.14.0, we have deprecated the unique proxy
Ingress configuration; it is now identical to all other Kong services. If you
do not need to configure multiple Ingress rules for your proxy, you only need
to move your hostname and TLS Secret settings to the new single-host keys:

```
tls: example-tls-secret
hostname: proxy.kong.example
```

replacing the old multi-host style (`hosts: ["proxy.kong.example"]` with a TLS
block referencing `secretName: example-tls-secret`).
We plan to remove support for the multi-host configuration entirely in version
2.0 of the chart. If you currently use multiple hosts, we recommend that you
do one of the following:

- Define Ingresses for each application, e.g. if you proxy applicationA at
  `foo.kong.example` and applicationB at `bar.kong.example`, you deploy those
  applications with their own Ingress resources that target the proxy.
- Define a multi-host Ingress manually. Before upgrading, save your current
  proxy Ingress, delete labels from the saved copy, and set
  `proxy.ingress.enabled=false`. After upgrading, create your Ingress from the
  saved copy and edit it directly to add new rules.

We expect that most users do not need a built-in multi-host proxy Ingress or
even a proxy Ingress at all: the old configuration predates the Kong Ingress
Controller and is most useful if you place Kong behind some other controller.
If you are interested in preserving this functionality, please [discuss your
use case with us](https://github.com/Kong/charts/issues/73). If there is
sufficient interest, we will explore options for continuing to support the
original proxy Ingress configuration format.
### Default custom server block replaced with status listen

Earlier versions of the chart included [a custom server block](https://github.com/Kong/charts/blob/kong-1.13.0/charts/kong/templates/config-custom-server-blocks.yaml)
to provide `/status` and `/metrics` endpoints. This server block simplified
RBAC-enabled Enterprise deployments by providing access to these endpoints
outside the (protected) admin API.

Current versions (Kong 1.4.0+ and Kong Enterprise 1.5.0+) have a built-in
status listen that provides the same functionality, and chart 1.14.0 uses it
for readiness/liveness probes and the Prometheus service monitor.

If you are using a version that supports the new status endpoint, you do not
need to make any changes to your values unless you include `readinessProbe` and
`livenessProbe` in them. If you do, you must change the port from `metrics` to
`status`.
If you are using an older version that does not support the status listen, you
should:

- Create the server block ConfigMap independent of the chart. You will need to
  set the ConfigMap name and namespace manually and remove the labels block.
- Add an `extraConfigMaps` values entry for your ConfigMap.
- Set `env.nginx_http_include` to `/path/to/your/mount/servers.conf`.
- Add the [old readiness/liveness probe blocks](https://github.com/Kong/charts/blob/kong-1.13.0/charts/kong/values.yaml#L437-L458)
  to your values.yaml.
- If you use the Prometheus service monitor, edit it after installing the chart
  and set `targetPort` to `9542`. This cannot be set from values.yaml, but Helm
  3 will preserve the change on subsequent upgrades.
### `KongCredential` custom resources no longer supported

1.11.0 updates the default Kong Ingress Controller version to 1.0. Controller
1.0 removes support for the deprecated KongCredential resource. Before
upgrading to chart 1.11.0, you must convert existing KongCredential resources
to [credential Secrets](https://github.com/Kong/kubernetes-ingress-controller/blob/next/docs/guides/using-consumer-credential-resource.md#provision-a-consumer).
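A sketch of the Secret format (the names and key value are illustrative; see
the linked guide for the authoritative format):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-consumer-keyauth
stringData:
  kongCredType: key-auth  # plugin type the credential belongs to
  key: example-api-key    # key-auth credential field
```

The Secret is then listed under `credentials` in the corresponding
KongConsumer resource.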
Custom resource management varies depending on your exact chart configuration.
By default, Helm 3 only creates CRDs in the `crds` directory if they are not
already present, and does not modify or remove them after. If you use this
management method, you should create a manifest file that contains [only the
KongCredential CRD](https://github.com/Kong/charts/blob/kong-1.10.0/charts/kong/crds/custom-resource-definitions.yaml#L35-L68)
and then [delete it](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#delete-a-customresourcedefinition).
Helm 2 and Helm 3 both allow managing CRDs via the chart. In Helm 2, this is
required; in Helm 3, it is optional. When using this method, only a single
release will actually manage the CRD. If you have multiple releases, check to
see which release has `ingressController.installCRDs: true` to determine which
does so. When using this management method, upgrading a release to chart 1.11.0
will delete the KongCredential CRD during the upgrade, which will
_delete any existing KongCredential resources_. To avoid losing configuration,
check to see if your CRD is managed:

```
kubectl get crd kongcredentials.configuration.konghq.com -o yaml | grep "app.kubernetes.io/managed-by: Helm"
```

If that command returns output, your CRD is managed and you must convert to
credential Secrets before upgrading (you should do so regardless, but are not
at risk of losing data, and can downgrade to an older chart version if you have
not yet converted).
Controller 1.0 [introduces a status field](https://github.com/Kong/kubernetes-ingress-controller/blob/main/CHANGELOG.md#added)
for its custom resources. By default, Helm 3 does not apply updates to custom
resource definitions if those definitions are already present on the Kubernetes
API server (and they will be if you are upgrading a release from a previous
chart version). To update your custom resources:

```
kubectl apply -f https://raw.githubusercontent.com/Kong/charts/main/charts/kong/crds/custom-resource-definitions.yaml
```
### Deprecated controller flags/environment variables and annotations removed

Kong Ingress Controller 0.x versions had a number of deprecated
flags/environment variables and annotations. Version 1.0 removes support for
these, and you must update your configuration to use their modern equivalents
before upgrading to chart 1.11.0.

The [controller changelog](https://github.com/Kong/kubernetes-ingress-controller/blob/master/CHANGELOG.md#breaking-changes)
provides links to lists of deprecated configuration and their replacements.
### `KongClusterPlugin` replaces global `KongPlugin`s

Kong Ingress Controller 0.10.0 no longer supports `KongPlugin`s with a
`global: true` label. See the [KIC changelog for 0.10.0](https://github.com/Kong/kubernetes-ingress-controller/blob/main/CHANGELOG.md#0100---20200915)
for migration hints.
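The replacement is the cluster-scoped `KongClusterPlugin` resource carrying the
same label. A sketch (the plugin name and configuration are illustrative):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: global-rate-limit
  labels:
    global: "true"  # applies the plugin to all requests
  annotations:
    kubernetes.io/ingress.class: kong
plugin: rate-limiting
config:
  minute: 5
  policy: local
```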
### Dropping support for resources not specifying an ingress class

Kong Ingress Controller 0.10.0 drops support for certain kinds of resources
without a `kubernetes.io/ingress.class` annotation. See the [KIC changelog for
0.10.0](https://github.com/Kong/kubernetes-ingress-controller/blob/main/CHANGELOG.md#0100---20200915)
for the exact list of those kinds, and for possible migration paths.
### New image for Enterprise controller-managed DB-less deployments

As of Kong Enterprise 2.1.3.0, there is no longer a separate image
(`kong-enterprise-k8s`) for controller-managed DB-less deployments. All Kong
Enterprise deployments now use the `kong-enterprise-edition` image.

Existing users of the `kong-enterprise-k8s` image can use the latest
`kong-enterprise-edition` image as a drop-in replacement. You will also need
to [create a Docker registry
secret](https://github.com/Kong/charts/blob/main/charts/kong/README.md#kong-enterprise-docker-registry-access)
for the `kong-enterprise-edition` registry and add it to `image.pullSecrets` in
values.yaml if you do not have one already.
### Changes to wait-for-postgres image

Prior to 1.9.0, the chart launched a busybox initContainer for migration Pods
to check Postgres' reachability [using
netcat](https://github.com/Kong/charts/blob/kong-1.8.0/charts/kong/templates/_helpers.tpl#L626).

As of 1.9.0, the chart uses a [bash
script](https://github.com/Kong/charts/blob/kong-1.9.0/charts/kong/templates/wait-for-postgres-script.yaml)
to perform the same connectivity check. The default `waitImage.repository`
value is now `bash` rather than `busybox`. Double-check your values.yaml to
confirm that you do not set `waitImage.repository` and `waitImage.tag` to the
old defaults: if you do, remove that configuration before upgrading.
The Helm upgrade cycle requires this script be available for upgrade jobs. On
existing installations, you must first perform an initial `helm upgrade --set
migrations.preUpgrade=false --set migrations.postUpgrade=false` to chart 1.9.0.
Perform this initial upgrade without making changes to your Kong image version:
if you are upgrading Kong along with the chart, perform a separate upgrade
after with the migration jobs re-enabled.
If you do not override `waitImage.repository` in your releases, you do not need
to make any other configuration changes when upgrading to 1.9.0.

If you do override `waitImage.repository` to use a custom image, you must
switch to a custom image that provides a `bash` executable. Note that busybox
images, or images derived from them, do _not_ include a `bash` executable. We
recommend switching to an image derived from the public bash Docker image or a
base operating system image that provides a `bash` executable.
### Changes to Custom Resource Definitions

The KongPlugin and KongClusterPlugin resources have changed. Helm 3's CRD
management system does not modify CRDs during `helm upgrade`, so these changes
must be applied manually:

```
kubectl apply -f https://raw.githubusercontent.com/Kong/charts/kong-1.6.0/charts/kong/crds/custom-resource-definitions.yaml
```

Existing plugin resources do not require changes; the CRD update only adds new
fields.
### Removal of default security context UID setting

Versions of Kong prior to 2.0 and Kong Enterprise prior to 1.3 use Docker
images that required setting a UID via Kubernetes in some environments
(primarily OpenShift). This is no longer necessary with modern Docker images
and can cause issues depending on other environment settings, so it was
removed.

Most users should not need to take any action, but if you encounter permissions
errors when upgrading (`kubectl describe pod PODNAME` should contain any), you
can restore the old default by adding the following to your values.yaml:
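A sketch of the previous default (the UID value reflects older chart versions;
adjust it if your environment used a different one):

```yaml
securityContext:
  runAsUser: 1000
```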
### PodSecurityPolicy defaults to read-only root filesystem

1.5.0 defaults to using a read-only root container filesystem if
`podSecurityPolicy.enabled: true` is set in values.yaml. This improves
security, but is incompatible with Kong Enterprise versions prior to 1.5. If
you use an older version and enable PodSecurityPolicy, you must set
`podSecurityPolicy.spec.readOnlyRootFilesystem: false`.

Kong open-source and Kong for Kubernetes Enterprise are compatible with a
read-only root filesystem on all versions.
### Changes to migration job configuration

Previously, all migration jobs were enabled/disabled through a single
`runMigrations` setting. 1.5.0 splits these into toggles for each of the
individual upgrade migrations:
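A sketch of the new toggles (consult the latest values.yaml for the complete
block and any additional options):

```yaml
migrations:
  preUpgrade: true   # run "kong migrations up" before new pods start
  postUpgrade: true  # run "kong migrations finish" after they become ready
```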
Initial migration jobs are now only run during `helm install` and are deleted
automatically when users first run `helm upgrade`.

Users should replace `runMigrations` with the equivalent block from the latest
values.yaml.
The new format addresses several needs:
* The initial migrations job is only created during the initial install,
  preventing [conflicts on upgrades](https://github.com/Kong/charts/blob/main/charts/kong/FAQs.md#running-helm-upgrade-fails-because-of-old-init-migrations-job).
* The upgrade migrations jobs can be disabled as needed for managing
  [multi-release clusters](https://github.com/Kong/charts/blob/main/charts/kong/README.md#separate-admin-and-proxy-nodes).
  This enables management of clusters that have nodes with different roles,
  e.g. nodes that only run the proxy and nodes that only run the admin API.
* Migration jobs now allow specifying annotations, and provide a default set
  of annotations that disable some service mesh sidecars. Because sidecar
  containers do not terminate, they [prevent the jobs from completing](https://github.com/kubernetes/kubernetes/issues/25908).
### Changes to default Postgres permissions

The [Postgres sub-chart](https://github.com/bitnami/charts/tree/master/bitnami/postgresql)
used by this chart has modified the way their chart handles file permissions.
This is not an issue for new installations, but prevents Postgres from starting
if its PVC was created with an older version. If affected, your Postgres pod
will fail to start with log output similar to:

```
postgresql 19:16:04.03 INFO ==> ** Starting PostgreSQL **
2020-03-27 19:16:04.053 GMT [1] FATAL: data directory "/bitnami/postgresql/data" has group or world access
2020-03-27 19:16:04.053 GMT [1] DETAIL: Permissions should be u=rwx (0700).
```
You can restore the old permission handling behavior by adding two settings to
the `postgresql` block in values.yaml:

```
postgresqlDataDir: /bitnami/postgresql/data
volumePermissions:
  enabled: true
```

For background, see https://github.com/helm/charts/issues/13651
### `strip_path` now defaults to `false` for controller-managed routes

1.4.0 defaults to version 0.8 of the ingress controller, which changes the
default value of the `strip_path` route setting from `true` to `false`. To
understand how this works in practice, compare the upstream path for these
requests when `strip_path` is toggled:

| Ingress path | `strip_path` | Request path | Upstream path |
|--------------|--------------|--------------|---------------|
| /foo/bar     | true         | /foo/bar/baz | /baz          |
| /foo/bar     | false        | /foo/bar/baz | /foo/bar/baz  |

This change brings the controller in line with the Kubernetes Ingress
specification, which expects that controllers will not modify the request
before passing it upstream unless explicitly configured to do so.
To preserve your existing route handling, you should add this annotation to
your Ingress resources:

```
konghq.com/strip-path: "true"
```

This is a new annotation that is equivalent to the `route.strip_path` setting
in KongIngress resources. Note that if you have already set this to `false`,
you should leave it as-is and not add an annotation to the Ingress.
### Changes to Kong service configuration

1.4.0 reworks the templates and configuration used to generate Kong
configuration and Kubernetes resources for Kong's services (the admin API,
proxy, Developer Portal, etc.). For the admin API, this requires breaking
changes to the configuration format in values.yaml. Prior to 1.4.0, the admin
API allowed a single listen only, which could be toggled between HTTPS and
HTTP:

```
admin:
  enabled: false # create Service
  useTLS: true
```

In 1.4.0+, the admin API allows enabling or disabling the HTTP and TLS listens
independently. The equivalent of the above configuration is:

```
admin:
  enabled: false # create Service
  http:
    enabled: false # create HTTP listen
  tls:
    enabled: true # create HTTPS listen
```
All Kong services now support `SERVICE.enabled` parameters: these allow
disabling the creation of a Kubernetes Service resource for that Kong service,
which is useful in configurations where nodes have different roles, e.g. where
some nodes only handle proxy traffic and some only handle admin API traffic. To
disable a Kong service completely, you should also set `SERVICE.http.enabled:
false` and `SERVICE.tls.enabled: false`. Disabling creation of the Service
resource alone leaves the Kong service enabled, but accessible only within its
pod. The admin API is configured with only Service creation disabled to allow
the ingress controller to access it without allowing access from other pods.

Services now also include a new `parameters` section that allows setting
additional listen options, e.g. the `reuseport` and `backlog=16384` parameters
from the [default 2.0.0 proxy
listen](https://github.com/Kong/kong/blob/2.0.0/kong.conf.default#L186). For
compatibility with older Kong versions, the chart defaults do not enable most
of the newer parameters, only HTTP/2 support. Users of Kong versions 1.3.0 and
newer can safely add the new parameters.
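For example, adding the newer options to the proxy TLS listen might look like
this (a sketch; verify that your Kong version accepts each parameter):

```yaml
proxy:
  tls:
    parameters:
    - http2          # chart default
    - reuseport      # listen option for newer Kong versions
    - backlog=16384  # listen option for newer Kong versions
```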
### Removal of dedicated Portal authentication configuration parameters

1.3.0 deprecates the `enterprise.portal.portal_auth` and
`enterprise.portal.session_conf_secret` settings in values.yaml in favor of
placing equivalent configuration under `env`. These settings are less important
in Kong Enterprise 0.36+, as they can both be set per workspace in Kong
Manager.
These settings provide the default settings for Portal instances: when the
"Authentication plugin" and "Session Config" dropdowns at
https://manager.kong.example/WORKSPACE/portal/settings/ are set to "Default",
the settings from `KONG_PORTAL_AUTH` and `KONG_PORTAL_SESSION_CONF` are used.
If these environment variables are not set, the defaults are to use
`basic-auth` and `{}` (which applies the [session plugin default
configuration](https://docs.konghq.com/hub/kong-inc/session/)).
If you set nonstandard defaults and wish to keep using these settings, or use
Kong Enterprise 0.35 (which did not provide a means to set per-workspace
session configuration), you should convert them to environment variables. For
example, if you currently have:

```
enterprise:
  portal:
    portal_auth: basic-auth
    session_conf_secret: portal-session
```
You should remove the `portal_auth` and `session_conf_secret` entries and
replace them with their equivalents under the `env` block:

```
env:
  portal_auth: basic-auth
  portal_session_conf:
    valueFrom:
      secretKeyRef:
        name: portal-session
        key: portal_session_conf
```