# Upgrading Kubernetes in Kubespray

Kubespray handles upgrades the same way it handles initial deployment. That is to
say that each component is laid down in a fixed order.

You can also individually control versions of components by explicitly defining their
versions. Here are all version vars for each component:

* docker_version
* docker_containerd_version (relevant when `container_manager` == `docker`)
* containerd_version (relevant when `container_manager` == `containerd`)
* kube_version
* etcd_version
* calico_version
* calico_cni_version
* weave_version
* flannel_version
* kubedns_version

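For example, to pin specific versions at deploy time, you can pass these as extra vars (the version numbers here are purely illustrative, not recommendations):

```ShellSession
ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.19.7 -e etcd_version=v3.4.13
```
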
:warning: [Attempting to upgrade from an older release straight to the latest release is unsupported and likely to break something](https://github.com/kubernetes-sigs/kubespray/issues/3849#issuecomment-451386515) :warning:

See [Multiple Upgrades](#multiple-upgrades) for how to upgrade from an older Kubespray release to the latest release.

## Unsafe upgrade example

If you wanted to upgrade just kube_version from v1.18.10 to v1.19.7, you could
deploy the following way:

```ShellSession
ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.18.10 -e upgrade_cluster_setup=true
```

And then repeat with v1.19.7 as kube_version:

```ShellSession
ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.19.7 -e upgrade_cluster_setup=true
```

The var `-e upgrade_cluster_setup=true` needs to be set in order to immediately migrate the deployments of e.g. kube-apiserver inside the cluster, which is otherwise only done during a graceful upgrade. (Refer to [#4139](https://github.com/kubernetes-sigs/kubespray/issues/4139) and [#4736](https://github.com/kubernetes-sigs/kubespray/issues/4736))

## Graceful upgrade

Kubespray also supports cordon, drain and uncordoning of nodes when performing
a cluster upgrade. There is a separate playbook used for this purpose. It is
important to note that upgrade-cluster.yml can only be used for upgrading an
existing cluster. That means there must be at least 1 kube_control_plane already
deployed.

```ShellSession
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.19.7
```

After a successful upgrade, the Server Version should be updated:

```ShellSession
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
```

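Note that `kubectl version` reports the API server version; to confirm that the kubelet on each node was upgraded as well, check the node list (as the later examples in this document do):

```ShellSession
kubectl get nodes
```
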
### Pausing the upgrade

If you want to manually control the upgrade procedure, you can set some variables to pause the upgrade playbook. Pausing *before* upgrading each node may be useful for inspecting pods running on that node, or for performing manual actions on the node:

* `upgrade_node_confirm: true` - This will pause the playbook execution prior to upgrading each node. The play will resume when manually approved by typing "yes" at the terminal.
* `upgrade_node_pause_seconds: 60` - This will pause the playbook execution for 60 seconds prior to upgrading each node. The play will resume automatically after 60 seconds.

Pausing *after* upgrading each node may be useful for rebooting the node to apply kernel updates, or for testing the still-cordoned node:

* `upgrade_node_post_upgrade_confirm: true` - This will pause the playbook execution after upgrading each node, but before the node is uncordoned. The play will resume when manually approved by typing "yes" at the terminal.
* `upgrade_node_post_upgrade_pause_seconds: 60` - This will pause the playbook execution for 60 seconds after upgrading each node, but before the node is uncordoned. The play will resume automatically after 60 seconds.

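For example, a graceful upgrade that stops for manual confirmation before each node (a sketch; these variables can equally be set in your inventory's group_vars instead of on the command line):

```ShellSession
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.19.7 -e upgrade_node_confirm=true
```
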
## Node-based upgrade

If you don't want to upgrade all nodes in one run, you can use `--limit` [patterns](https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html#patterns-and-ansible-playbook-flags).

Before using `--limit`, run the `facts.yml` playbook without the limit to refresh the facts cache for all nodes:

```ShellSession
ansible-playbook facts.yml -b -i inventory/sample/hosts.ini
```

After this, upgrade the control plane and etcd groups [#5147](https://github.com/kubernetes-sigs/kubespray/issues/5147):

```ShellSession
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.20.7 --limit "kube_control_plane:etcd"
```

Now you can upgrade other nodes in any order and quantity:

```ShellSession
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.20.7 --limit "node4:node6:node7:node12"
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.20.7 --limit "node5*"
```

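If you are unsure what a limit pattern will match, you can preview the affected hosts first with Ansible's standard `--list-hosts` option, which matches hosts without executing anything:

```ShellSession
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini --limit "node5*" --list-hosts
```
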
## Multiple upgrades

:warning: [Do not skip releases when upgrading; upgrade by one tag at a time.](https://github.com/kubernetes-sigs/kubespray/issues/3849#issuecomment-451386515) :warning:

For instance, if you're on v2.6.0, then check out v2.7.0, run the upgrade, check out the next tag, and run the next upgrade, etc.

Assuming you don't explicitly define a Kubernetes version in your k8s_cluster.yml, you can simply check out the next tag and run the upgrade-cluster.yml playbook.

* If you do define a Kubernetes version in your inventory (e.g. group_vars/k8s_cluster.yml), then either make sure to update it before running upgrade-cluster, or specify the new version you're upgrading to: `ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml -e kube_version=v1.11.3`

  Otherwise, the upgrade will leave your cluster at the same k8s version defined in your inventory vars.

The example below shows taking a cluster that was set up for v2.6.0 up to v2.10.0.

```ShellSession
$ kubectl get node
NAME      STATUS   ROLES         AGE   VERSION
apollo    Ready    master,node   1h    v1.10.4
boomer    Ready    master,node   42m   v1.10.4
caprica   Ready    master,node   42m   v1.10.4

$ git describe --tags
v2.6.0
```

```ShellSession
$ git checkout v2.7.0
Previous HEAD position was 8b3ce6e4 bump upgrade tests to v2.5.0 commit (#3087)
HEAD is now at 05dabb7e Fix Bionic networking restart error #3430 (#3431)

# NOTE: May need to `pip3 install -r requirements.txt` when upgrading.

ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
```

```ShellSession
$ kubectl get node
NAME      STATUS   ROLES         AGE   VERSION
apollo    Ready    master,node   1h    v1.11.3
boomer    Ready    master,node   1h    v1.11.3
caprica   Ready    master,node   1h    v1.11.3

$ git checkout v2.8.0
Previous HEAD position was 05dabb7e Fix Bionic networking restart error #3430 (#3431)
HEAD is now at 9051aa52 Fix ubuntu-contiv test failed (#3808)
```

:info: NOTE: Review changes between the sample inventory and your inventory when upgrading versions. :info:

There are some deprecations between versions that mean you can't just upgrade straight from 2.7.0 to 2.8.0 if you started with the sample inventory.

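One way to review those changes is to diff the sample inventory between the two tags before reconciling your own copy (a suggested technique, not part of the original walkthrough):

```ShellSession
git diff v2.7.0 v2.8.0 -- inventory/sample/
```
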
In this case, I set "kubeadm_enabled" to false, knowing that it was deprecated and would be removed by 2.9.0, to delay converting the cluster to kubeadm for as long as I could.

```ShellSession
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
...

$ kubectl get node
NAME      STATUS   ROLES         AGE    VERSION
apollo    Ready    master,node   114m   v1.12.3
boomer    Ready    master,node   114m   v1.12.3
caprica   Ready    master,node   114m   v1.12.3
```

```ShellSession
$ git checkout v2.8.1
Previous HEAD position was 9051aa52 Fix ubuntu-contiv test failed (#3808)
HEAD is now at 2ac1c756 More Feature/2.8 backports for 2.8.1 (#3911)

$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
...

$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   2h36m   v1.12.4
boomer    Ready    master,node   2h36m   v1.12.4
caprica   Ready    master,node   2h36m   v1.12.4
```

```ShellSession
$ git checkout v2.8.2
Previous HEAD position was 2ac1c756 More Feature/2.8 backports for 2.8.1 (#3911)
HEAD is now at 4167807f Upgrade to 1.12.5 (#4066)

$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
...

$ kubectl get node
NAME      STATUS   ROLES         AGE    VERSION
apollo    Ready    master,node   3h3m   v1.12.5
boomer    Ready    master,node   3h3m   v1.12.5
caprica   Ready    master,node   3h3m   v1.12.5
```

```ShellSession
$ git checkout v2.8.3
Previous HEAD position was 4167807f Upgrade to 1.12.5 (#4066)
HEAD is now at ea41fc5e backport cve-2019-5736 to release-2.8 (#4234)

$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
...

$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   5h18m   v1.12.5
boomer    Ready    master,node   5h18m   v1.12.5
caprica   Ready    master,node   5h18m   v1.12.5
```

```ShellSession
$ git checkout v2.8.4
Previous HEAD position was ea41fc5e backport cve-2019-5736 to release-2.8 (#4234)
HEAD is now at 3901480b go to k8s 1.12.7 (#4400)

$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
...

$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   5h37m   v1.12.7
boomer    Ready    master,node   5h37m   v1.12.7
caprica   Ready    master,node   5h37m   v1.12.7
```

```ShellSession
$ git checkout v2.8.5
Previous HEAD position was 3901480b go to k8s 1.12.7 (#4400)
HEAD is now at 6f97687d Release 2.8 robust san handling (#4478)

$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
...

$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   5h45m   v1.12.7
boomer    Ready    master,node   5h45m   v1.12.7
caprica   Ready    master,node   5h45m   v1.12.7
```

```ShellSession
$ git checkout v2.9.0
Previous HEAD position was 6f97687d Release 2.8 robust san handling (#4478)
HEAD is now at a4e65c7c Upgrade to Ansible >2.7.0 (#4471)
```

:warning: IMPORTANT: Some of the variable formats changed in the k8s_cluster.yml between 2.8.5 and 2.9.0 :warning:

If you do not keep your inventory copy up to date, **your upgrade will fail** and your first master will be left non-functional until fixed and re-run.

It is at this point that the cluster was upgraded from non-kubeadm to kubeadm, as per the deprecation warning.

```ShellSession
ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...

$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   6h54m   v1.13.5
boomer    Ready    master,node   6h55m   v1.13.5
caprica   Ready    master,node   6h54m   v1.13.5
```

```ShellSession
# Watch out: 2.10.0 is hiding between 2.1.2 and 2.2.0
$ git tag
...
v2.1.2
v2.10.0
v2.2.0
...

$ git checkout v2.10.0
Previous HEAD position was a4e65c7c Upgrade to Ansible >2.7.0 (#4471)
HEAD is now at dcd9c950 Add etcd role dependency on kube user to avoid etcd role failure when running scale.yml with a fresh node. (#3240) (#4479)

ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...

$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   7h40m   v1.14.1
boomer    Ready    master,node   7h40m   v1.14.1
caprica   Ready    master,node   7h40m   v1.14.1
```

## Upgrading to v2.19

`etcd_kubeadm_enabled` is deprecated as of v2.19. The same functionality is achievable by setting `etcd_deployment_type` to `kubeadm`.
Deploying etcd with kubeadm is experimental and is only available for new deployments, or for deployments where `etcd_kubeadm_enabled` was set to `true` when the cluster was deployed.

From v2.19 onward, the `etcd_deployment_type` variable is placed in `group_vars/all/etcd.yml` instead of `group_vars/etcd.yml`, due to scoping issues.
The placement of the variable only matters for `etcd_deployment_type: kubeadm` right now. However, since this might change in future updates, it is recommended to move the variable.

Upgrading is straightforward; no changes are required if `etcd_kubeadm_enabled` was not set to `true` when deploying.

If you have a cluster where `etcd` was deployed using `kubeadm`, you will need to remove the `etcd_kubeadm_enabled` variable, move the `etcd_deployment_type` variable from `group_vars/etcd.yml` to `group_vars/all/etcd.yml`, and set `etcd_deployment_type` to `kubeadm`.

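As a sketch, assuming the `inventory/mycluster/` layout used in the examples above, that migration could look like this:

```ShellSession
# Remove the deprecated flag from the old location.
sed -i '/^etcd_kubeadm_enabled/d' inventory/mycluster/group_vars/etcd.yml

# Set the replacement variable in its new, wider-scoped location.
echo 'etcd_deployment_type: kubeadm' >> inventory/mycluster/group_vars/all/etcd.yml
```
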
## Upgrade order

As mentioned above, components are upgraded in the order in which they were
installed in the Ansible playbook. The order of component installation is as
follows:

* Docker
* Containerd
* etcd
* kubelet and kube-proxy
* network_plugin (such as Calico or Weave)
* kube-apiserver, kube-scheduler, and kube-controller-manager
* Add-ons (such as KubeDNS)

### Component-based upgrades

A deployer may want to upgrade specific components in order to minimize risk
or save time. This strategy is not covered by CI as of this writing, so it is
not guaranteed to work.

These commands are useful only for upgrading fully-deployed, healthy, existing
hosts. This will definitely not work for undeployed or partially deployed
hosts.

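To see which tags are available to target, you can use Ansible's standard `--list-tags` option (not specific to Kubespray), which lists tags without executing anything:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --list-tags
```
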
Upgrade docker:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=docker
```

Upgrade etcd:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=etcd
```

Upgrade etcd without rotating etcd certs:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=etcd --limit=etcd --skip-tags=etcd-secrets
```

Upgrade kubelet:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=node --skip-tags=k8s-gen-certs,k8s-gen-tokens
```

Upgrade Kubernetes master components:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=master
```

Upgrade network plugins:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=network
```

Upgrade all add-ons:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=apps
```

Upgrade just helm (assuming `helm_enabled` is true):

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=helm
```

## Migrate from Docker to Containerd

Please note that **migrating container engines is not officially supported by Kubespray**. While this procedure can be used to migrate your cluster, it applies to one particular scenario and will likely evolve over time. At the moment, these steps are intended as an additional resource to provide insight into how they can eventually be officially integrated into the Kubespray playbooks.

As of Kubespray 2.18.0, containerd is already the default container engine. If you have the chance, it is advisable and safer to reset and redeploy the entire cluster with a new container engine.

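A sketch of that reset-and-redeploy route (destructive: `reset.yml` tears down the cluster and its workloads, so back up anything you need first):

```ShellSession
# Tear down the existing cluster.
ansible-playbook reset.yml -b -i inventory/sample/hosts.ini

# Redeploy from scratch with containerd as the container engine.
ansible-playbook cluster.yml -b -i inventory/sample/hosts.ini -e container_manager=containerd
```
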
* [Migrating from Docker to Containerd](upgrades/migrate_docker2containerd.md)