5.17 (Kubernetes)

Release Notes

This release incorporates the following infrastructure and dependency updates:

  • Kafka Cluster: Upgraded to version 4.0.0
  • Strimzi Kafka Operator: Upgraded to version 0.49.0
  • Amazon Machine Image (AMI): AL2023_x86_64_STANDARD support updated for EKS 1.33
  • HashiCorp AWS Provider: Upgraded to version ~> 5.0
  • HashiCorp Helm Provider: Resolved a breaking-change compatibility issue with version 3.0 (affects versions <= 2.17.0)

Breaking Changes

warning

The upgrade to Kafka version 4.0.0 introduces breaking changes that require complete cluster redeployment. You must remove the existing Kafka operator and cluster components before proceeding with the upgrade process.

Supported Kubernetes Versions

This release supports the following Kubernetes versions:

  • Amazon Elastic Kubernetes Service (EKS): 1.33 and 1.34
  • Azure Kubernetes Service (AKS): 1.33.5 and 1.34.1
    • Note: Patch versions may vary based on Azure release schedule
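
Before planning the upgrade, you can confirm which version your cluster currently runs. A minimal check, assuming kubectl is already configured against the target cluster:

    # Print the client and server (cluster) Kubernetes versions
    kubectl version
    # List nodes with their kubelet versions; these should be within one
    # minor version of the control plane
    kubectl get nodes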

Prerequisites

info

If you have made custom changes to your deployment file structure, please contact your Support team before upgrading your environments.

  • Download the latest Cinchy artifacts from the Cinchy Releases Table > Kubernetes Artifacts column. For this upgrade, download the Cinchy Kubernetes Deployment Template v5.17.2.zip file.

Depending on your current version, you may need to run one or both of the following upgrade scripts:

Considerations Table

| Current Version | Run the 5.2 Upgrade Script | Run the 5.5 Upgrade Script |
| --------------- | -------------------------- | -------------------------- |
| 5.0             | Yes                        | Yes                        |
| 5.1             | Yes                        | Yes                        |
| 5.2             | X                          | Yes                        |
| 5.3             | X                          | Yes                        |
| 5.4             | X                          | Yes                        |
| 5.5             | X                          | X                          |
| 5.6             | X                          | X                          |
| 5.7             | X                          | X                          |
| 5.8             | X                          | X                          |
| 5.9             | X                          | X                          |
| 5.10            | X                          | X                          |
| 5.11            | X                          | X                          |
| 5.12            | X                          | X                          |
| 5.13            | X                          | X                          |
| 5.14            | X                          | X                          |
| 5.15            | X                          | X                          |
| 5.16            | X                          | X                          |

An X indicates that the script is not required for that version.

Configure to the Newest Version

Clean Existing Repositories

  1. Navigate to your cinchy.argocd repository and remove all existing folder structures except for the .git directory and any custom modifications you have implemented.
  2. Navigate to your cinchy.kubernetes repository and remove all existing folder structures except for the .git directory.
caution

If the cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file exists and is not commented out, do not modify it. Modifying this file will delete your ArgoCD namespace, requiring a complete removal of all Kubernetes resources and full redeployment.

  3. Navigate to your cinchy.terraform repository and remove all existing folder structures except for the .git directory.
  4. Navigate to your cinchy.devops.automations repository and remove all existing folder structures except for the .git directory and your deployment.json configuration file. A cleanup sketch follows this list.
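
The following is a minimal sketch of the cleanup, assuming the repository root is the working directory; extend the exclusion list with any custom files you need to keep (deployment.json is shown as an example for cinchy.devops.automations):

    # Remove every top-level entry except .git and any files you must preserve
    find . -mindepth 1 -maxdepth 1 \
      ! -name '.git' \
      ! -name 'deployment.json' \
      -exec rm -rf {} +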

Download Kubernetes Template

  1. Extract the contents of the Cinchy Kubernetes Deployment Template v5.17.2.zip file and copy the files into their respective repositories: cinchy.kubernetes, cinchy.argocd, cinchy.terraform, and cinchy.devops.automations.
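
A hedged sketch of this step, assuming the archive extracts to one folder per repository and that all four repositories are checked out side by side (adjust paths to your layout):

    unzip "Cinchy Kubernetes Deployment Template v5.17.2.zip" -d template
    for repo in cinchy.kubernetes cinchy.argocd cinchy.terraform cinchy.devops.automations; do
      cp -R "template/$repo/." "$repo/"
    done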

Configure Secrets

  1. If your existing environments were not previously configured with Azure Key Vault or AWS Secrets Manager on EKS/AKS, and you have enabled them during this upgrade, ensure the required secrets are created. Otherwise, no action is required.
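
If you do need to create secrets, the following is a generic, hypothetical sketch; the actual secret names, keys, and namespaces depend on your Key Vault/Secrets Manager configuration and are not specified here:

    # Hypothetical placeholder values; substitute the names your configuration expects
    kubectl create secret generic <secret-name> \
      --from-literal=<key>=<value> \
      -n <namespace>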

Update Deployment Configuration

  1. Review the new aws.json/azure.json configuration files and compare them with your current deployment.json file. Incorporate all additional fields from the new aws.json/azure.json files into your existing deployment.json (a comparison sketch follows this list).

  2. Update the Kubernetes version in your deployment.json file. To upgrade AKS/EKS to version 1.33 or 1.34, you must follow the sequential upgrade path, progressing through each minor version incrementally. For example, upgrading from version 1.31 requires sequential upgrades through 1.32, 1.33, and finally to 1.34.

  3. Open a shell/terminal session from the cinchy.devops.automations directory and execute the following command:

    dotnet Cinchy.DevOps.Automations.dll "deployment.json"
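
For step 1, a quick way to spot fields present in the new template but missing from your file is to compare top-level keys. A sketch assuming jq is available and aws.json is the template file for your platform (use azure.json for AKS); nested fields still need a manual review:

    # Compare sorted top-level keys of the new template against your deployment.json
    diff <(jq -S 'keys' aws.json) <(jq -S 'keys' deployment.json)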

Remove Existing Kafka Components

warning

Due to breaking changes introduced in Kafka version 4.0.0, you must remove the existing Kafka operator and cluster components before proceeding with the upgrade.

Important: This process requires downtime. Removing the Strimzi Kafka operator and Kafka cluster will temporarily interrupt Cinchy. Plan your upgrade accordingly and schedule this operation during a maintenance window.

  1. Remove the Kafka operator by executing the following command:

    kubectl delete app strimzi-kafka-operator -n argocd

    Expected output:

    application.argoproj.io "strimzi-kafka-operator" deleted
  2. Remove the Kafka cluster by executing the following command:

    kubectl delete app kafka-cluster -n argocd

    Expected output:

    application.argoproj.io "kafka-cluster" deleted
  3. Verify the successful removal of both strimzi-kafka-operator and kafka-cluster from ArgoCD:

    kubectl get app -n argocd

    Confirm that neither strimzi-kafka-operator nor kafka-cluster appears in the command output.
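
As an additional check, you can confirm that no Kafka pods remain. A sketch with the namespace left as a placeholder, since it depends on your deployment:

    # Replace <kafka-namespace> with the namespace your Kafka cluster ran in
    kubectl get pods -n <kafka-namespace>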

Commit Changes

  1. Commit all changes to Git for the relevant repositories (a scripted example follows this list).
  2. Refresh applications in the ArgoCD console as required.
  3. All users must log out and log back in to the Cinchy environment for the changes to take effect.
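
A scripted example of the commit step, assuming the four repositories are checked out side by side and a generic commit message is acceptable:

    # git commit exits non-zero for a repository with no changes; that is safe to ignore
    for repo in cinchy.argocd cinchy.kubernetes cinchy.terraform cinchy.devops.automations; do
      (cd "$repo" && git add -A && git commit -m "Upgrade to Cinchy v5.17" && git push)
    done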

Deploy Cinchy Instance

  1. If changes were made to your cinchy.argocd repository, you may need to redeploy ArgoCD. Launch a shell/terminal session with the working directory set to the root of the cinchy.argocd repository.

  2. (Optional) Execute the following command to deploy ArgoCD:

    bash deploy_argocd.sh
  3. Verify that all ArgoCD pods are running successfully (see the check after this list).

  4. Deploy or update cluster components by executing the following command. This will redeploy the upgraded Kafka operator and cluster:

    bash deploy_cluster_components.sh

    Expected output confirming the creation of kafka-cluster and strimzi-kafka-operator:

    application.argoproj.io/kafka-cluster created
    ...
    application.argoproj.io/strimzi-kafka-operator created
  5. (Optional) Deploy or update Cinchy components by executing the following command:

    bash deploy_cinchy_components.sh
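
For step 3, the ArgoCD pods can be inspected directly; the argocd namespace below matches the one used by the kubectl commands earlier in this guide:

    # All pods should report STATUS Running with a full READY count
    kubectl get pods -n argocd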

Upgrade AWS EKS and Azure AKS

Use the following procedure to upgrade AWS EKS or Azure AKS to version 1.33 or 1.34.

Prerequisites

  • For AWS: Export the required credentials before proceeding.
  • For Azure: Execute the az login command to authenticate, if required.
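
For example (hypothetical placeholder values; AWS_SESSION_TOKEN is only required for temporary credentials):

    # AWS: export credentials for Terraform
    export AWS_ACCESS_KEY_ID="<access-key-id>"
    export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
    export AWS_SESSION_TOKEN="<session-token>"
    # Azure: authenticate interactively
    az login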

Procedure

To perform Terraform operations, ensure the cluster directory is set as the working directory during execution:

  • AWS Deployment: The deployment updates a folder named eks_cluster located in the Terraform > AWS directory. Within that folder is a subdirectory with the same name as the created cluster.
  • Azure Deployment: The deployment updates a folder named aks_cluster located in the Terraform > Azure directory. Within that folder is a subdirectory with the same name as the created cluster.

Steps:

  1. Navigate to your cinchy.devops.automations repository and update the AKS/EKS version in the deployment.json file (or <cluster name>.json) within the same directory.

  2. From a shell/terminal session, navigate to the cinchy.devops.automations directory and execute the following command:

    dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  3. Execute the following command to initiate the upgrade process. Review the proposed changes carefully before confirming with yes to proceed. This operation performs an in-place deployment to update the Kubernetes version without deleting or destroying data.

    bash create.sh
warning

Before accepting the proposed changes, verify that they meet your expectations and ensure the protection of your database and other critical resources. This command may create, update, or destroy virtual networks, subnets, AKS/EKS clusters, and node groups. Review all changes thoroughly before proceeding.
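
If you prefer to review the Terraform plan yourself before running create.sh, the following is a sketch that assumes create.sh wraps standard Terraform commands and that your cluster directory follows the layout described above (adjust paths and casing to your repository):

    # EKS example; for AKS use the aks_cluster path instead
    cd cinchy.terraform/aws/eks_cluster/<cluster-name>
    terraform init
    terraform plan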