Upgrading AWS EKS Kubernetes Version

Overview

This page will guide you through how to update your AWS EKS Kubernetes version for your Cinchy v5 platform.

Prerequisites

  • Update your Cinchy platform to the latest version.
  • Note the latest supported EKS version for your Cinchy version in the table below. We recommend running the latest supported EKS version for each Cinchy upgrade. You can also find the configured version in cinchy.devops.automations\aws.json as "cluster_version": "1.xx".
| Cinchy Version | Latest Supported EKS Version |
| --- | --- |
| 5.11 | 1.30 |
| 5.10 | 1.27 |
| 5.9 | 1.27 |
| 5.8 | 1.27 |
| 5.7 | 1.27 |
| 5.6 | 1.24 |
| 5.5 | 1.22 |
| 5.4 | 1.22 |
| 5.3 | 1.22 |
| 5.2 | 1.22 |
| 5.1 | 1.22 |
| 5.0 | 1.22 |
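As a quick check, the configured version can be pulled out of `aws.json` with standard shell tools. A minimal sketch (the inline `aws.json` below is a stand-in containing only the relevant key; the real file has many more settings):

```shell
# Create a minimal stand-in for cinchy.devops.automations\aws.json.
cat > /tmp/aws.json <<'EOF'
{
  "cluster_version": "1.30"
}
EOF

# Extract the value of the cluster_version key.
version=$(sed -n 's/.*"cluster_version": *"\([^"]*\)".*/\1/p' /tmp/aws.json)
echo "$version"   # prints 1.30
```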

Considerations

  • You must upgrade your EKS cluster sequentially, one minor version at a time. For example, if you are on EKS cluster version 1.22 and wish to upgrade to 1.24, you must upgrade from 1.22 > 1.23 > 1.24.
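Because of this rule, an upgrade across several minor versions is a repeated pass through the instructions below, once per intermediate version. A small sketch that enumerates the required steps for a given current and target version:

```shell
# Enumerate each intermediate EKS minor version between current and target.
current=22; target=24                      # i.e. upgrading from 1.22 to 1.24
steps=$(seq $((current + 1)) "$target" | sed 's/^/1./')
echo "$steps"                              # prints 1.23 and 1.24, one per line
```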

Instructions

  1. Navigate to your cinchy.devops.automations\aws-deployment.json file.
  2. Change the cluster_version key value to the EKS version you wish to upgrade to (for example, "1.24").
  3. Open a shell/terminal and navigate to the cinchy.devops.automations directory.
  4. Execute the following command:
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  5. Commit the changes in all of the repositories (cinchy.argocd, cinchy.kubernetes, cinchy.terraform, and cinchy.devops.automations).
  6. Open a new shell/terminal and navigate to the cinchy.terraform\aws\eks_cluster\CLUSTER_NAME directory.
  7. Execute the following command:
bash create.sh
  8. Verify that the proposed changes are as expected, and then accept them.
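Taken together, the steps above amount to the following shell session (a sketch; the directory names follow the repository layout described in this guide, and CLUSTER_NAME stands in for your cluster's directory):

```shell
# Regenerate the deployment manifests with the new cluster_version.
cd cinchy.devops.automations
dotnet Cinchy.DevOps.Automations.dll "deployment.json"

# Commit the generated changes in cinchy.argocd, cinchy.kubernetes,
# cinchy.terraform, and cinchy.devops.automations before proceeding.

# Apply the Terraform changes for the cluster.
cd ../cinchy.terraform/aws/eks_cluster/CLUSTER_NAME
bash create.sh   # review the proposed plan, then accept it
```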

This process first upgrades the managed master (control plane) nodes and then the worker nodes. During the upgrade, existing pods are automatically migrated to the new, upgraded worker nodes.

You can use the following two commands to verify that all pods have been migrated to the new worker nodes.

  • To show both the old and new nodes:
kubectl get nodes
  • To show all pods on both the old and new worker nodes:
kubectl get pods --all-namespaces -o wide
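To narrow the output to a single node while the migration runs, `kubectl` supports a field selector on the pod's node name. A sketch; replace `OLD_NODE_NAME` with a node name taken from `kubectl get nodes`:

```shell
# List any pods still scheduled on a specific (old) worker node.
# An empty result means that node has been fully drained.
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=OLD_NODE_NAME
```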

Reinstall the metrics server

On EKS version 1.24, the metrics server can enter a CrashLoopBackOff status. Should you encounter this during your upgrade, reinstalling the metrics server will fix it.
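Before reinstalling, you can confirm the symptom by checking the metrics server pod's status (a sketch; `k8s-app=metrics-server` is the label used by the standard metrics-server manifests):

```shell
# Check the metrics server pod in the kube-system namespace.
# A pod stuck in CrashLoopBackOff confirms the issue described above.
kubectl get pods -n kube-system -l k8s-app=metrics-server
```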

  1. In a code editor, open the cinchy.terraform\aws\eks_cluster\CLUSTER_NAME\new-vpc.tf or existing-VPC.tf file.
  2. Find the enable_metrics_server key and set its value to false.
  3. Open a new shell/terminal and navigate to the cinchy.terraform\aws\eks_cluster\CLUSTER_NAME directory.
  4. Run the following command to remove the metrics server:
terraform apply
  5. Revert the enable_metrics_server key value from step 2 to true.
  6. Run the following command within the same shell/terminal as step 3 to redeploy the metrics server:
terraform apply