
5.17 (Kubernetes)

Release Notes

This release incorporates the following infrastructure and dependency updates:

  • Kafka Cluster: Upgraded to version 4.0.0
  • Strimzi Kafka Operator: Upgraded to version 0.49.0
  • Amazon Machine Image (AMI): AL2023_x86_64_STANDARD support updated for EKS 1.33
  • HashiCorp AWS Provider: Upgraded to version ~> 5.0
  • HashiCorp Helm Provider: Resolved a breaking-change incompatibility with version 3.0 (affects deployments on versions <= 2.17.0)

Breaking Changes

warning

The upgrade to Kafka 4.0.0 introduces breaking changes that require a complete cluster redeployment. You must remove the existing Kafka operator and cluster components before proceeding — see Remove Existing Kafka Components.

Supported Kubernetes Versions

  • Amazon Elastic Kubernetes Service (EKS): 1.33 and 1.34
  • Azure Kubernetes Service (AKS): 1.33.5 and 1.34.1
note

The patch version (the third number — e.g., .5 in 1.33.5) for AKS may differ from what is listed above. Azure controls when patch releases are made available, and the exact version in your region may be slightly ahead or behind. Use the closest available patch for your target minor version.

Prerequisites

info

If you have made custom changes to your deployment file structure, contact Support before upgrading your environments.

  • Download the latest Cinchy artifacts from the Cinchy Releases Table > Kubernetes Artifacts column. For this upgrade, download Cinchy Kubernetes Deployment Template v5.17.2.zip.

Depending on your current version, you may also need to run one or both of the following upgrade scripts:

Current Version | Run the 5.2 Upgrade Script | Run the 5.5 Upgrade Script
5.0             | Yes                        | Yes
5.1             | Yes                        | Yes
5.2             | X                          | Yes
5.3             | X                          | Yes
5.4             | X                          | Yes
5.5             | X                          | X
5.6             | X                          | X
5.7             | X                          | X
5.8             | X                          | X
5.9             | X                          | X
5.10            | X                          | X
5.11            | X                          | X
5.12            | X                          | X
5.13            | X                          | X
5.14            | X                          | X
5.15            | X                          | X
5.16            | X                          | X
5.17            | X                          | X

(X indicates the script is not required for that starting version.)

Configure for the Latest Version

Clean Existing Repositories

  1. Navigate to your cinchy.argocd repository and remove all existing folder structures except for the .git folder and any custom modifications you have made.

  2. Navigate to your cinchy.kubernetes repository and remove all existing folder structures except for the .git folder.

    caution

    If cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml exists and is not commented out, do not modify it. Doing so will delete your ArgoCD namespace, requiring a full removal of all Kubernetes resources and redeployment.

  3. Navigate to your cinchy.terraform repository and remove all existing folder structures except for the .git folder.

  4. Navigate to your cinchy.devops.automations repository and remove all existing folder structures except for the .git folder and your deployment.json configuration file.
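
If a repository contains no custom files that must be preserved, the cleanup above can be scripted. The sketch below is only an illustration: run it from the root of the repository in question, and do not use it where custom modifications (or the argocd-ns.yaml flagged in the caution above) need to survive.

    # Remove every top-level entry except the .git folder.
    # Confirm first that nothing else in this repository needs to be kept.
    find . -mindepth 1 -maxdepth 1 ! -name '.git' -exec rm -rf {} +
    # For cinchy.devops.automations, also keep the deployment configuration:
    # find . -mindepth 1 -maxdepth 1 ! -name '.git' ! -name 'deployment.json' -exec rm -rf {} +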

Extract Kubernetes Template

  1. Extract Cinchy Kubernetes Deployment Template v5.17.2.zip and copy the contents into their respective repositories: cinchy.kubernetes, cinchy.argocd, cinchy.terraform, and cinchy.devops.automations.

Configure Secrets

  1. If your environments were not previously configured with Azure Key Vault or AWS Secrets Manager and you are enabling them during this upgrade, ensure the required secrets are created. Otherwise, no action is needed.

Update Deployment Configuration

note

Repeat these steps each time you perform an AKS/EKS version upgrade using the procedure in Upgrade AWS EKS and Azure AKS below.

  1. Compare the new aws.json / azure.json configuration files with your existing deployment.json and incorporate any additional fields (a key-comparison sketch follows these steps).

  2. Update the Kubernetes version in your deployment.json. To upgrade AKS/EKS to version 1.33 or 1.34, follow the sequential upgrade path, advancing through each minor version incrementally. For example, upgrading from 1.31 requires sequential upgrades through 1.32, 1.33, and then 1.34.

  3. Open a terminal from the cinchy.devops.automations directory and run:

    dotnet Cinchy.DevOps.Automations.dll "deployment.json"
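
For step 1, one way to spot fields that exist in the new template file but not in your current configuration is a sorted key comparison. This is only a sketch: it assumes jq is installed and that azure.json (or aws.json) and deployment.json are reachable from the working directory; adjust the paths to your layout.

    # List every scalar key path in each file, sorted.
    jq -r 'paths(scalars) | map(tostring) | join(".")' azure.json | sort > template_keys.txt
    jq -r 'paths(scalars) | map(tostring) | join(".")' deployment.json | sort > current_keys.txt
    # Show key paths that appear in the template but are missing from deployment.json.
    comm -23 template_keys.txt current_keys.txt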

Remove Existing Kafka Components

warning

Due to breaking changes in Kafka 4.0.0, you must remove the existing Kafka operator and cluster before proceeding.

This process requires downtime. Plan your upgrade accordingly and perform this step during a maintenance window.

info

This is a one-time step. You only need to remove the Kafka components once — during your first upgrade to v5.17, regardless of how many Kubernetes version hops you perform.

For example, if you are upgrading your Kubernetes cluster from 1.32 to 1.33 and then to 1.34, perform this Kafka removal only when upgrading to 1.33. When you subsequently upgrade to 1.34, skip this section entirely.

  1. Remove the Kafka operator:

    kubectl delete app strimzi-kafka-operator -n argocd

    Expected output:

    application.argoproj.io "strimzi-kafka-operator" deleted
  2. Remove the Kafka cluster:

    kubectl delete app kafka-cluster -n argocd

    Expected output:

    application.argoproj.io "kafka-cluster" deleted
  3. Verify that both strimzi-kafka-operator and kafka-cluster have been removed from ArgoCD:

    kubectl get app -n argocd

    Confirm that neither application appears in the output.
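
As a scripted alternative to scanning the full list, you can filter for the two application names directly; this assumes the default names used in the steps above.

    # Should produce no output once both applications have been removed.
    kubectl get app -n argocd | grep -E 'strimzi-kafka-operator|kafka-cluster'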

Commit and Deploy

  1. Commit all changes to Git across the relevant repositories.

  2. If changes were made to your cinchy.argocd repository, redeploy ArgoCD. Launch a terminal from the root of the cinchy.argocd repository and run:

    bash deploy_argocd.sh
  3. Verify that all ArgoCD pods are running successfully (see the check after this list).

  4. Deploy or update cluster components. This will redeploy the upgraded Kafka operator and cluster:

    bash deploy_cluster_components.sh

    Expected output:

    application.argoproj.io/kafka-cluster created
    ...
    application.argoproj.io/strimzi-kafka-operator created
  5. (Optional) Deploy or update Cinchy components:

    bash deploy_cinchy_components.sh
  6. Refresh applications in the ArgoCD console as needed.

  7. All users must log out and log back in to the Cinchy environment for changes to take effect.
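
For step 3, a quick way to confirm the ArgoCD pods are healthy is shown below; this is a sketch that assumes the default argocd namespace used elsewhere in this guide.

    kubectl get pods -n argocd
    # Optionally block until every pod reports Ready (times out after five minutes).
    kubectl wait --for=condition=Ready pod --all -n argocd --timeout=300s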

Post-Upgrade Verification

After completing the Kafka upgrade, verify your deployment and resolve any common issues.

Verify Kafka Topics

  1. Confirm that all Kafka topics were created successfully:

    kubectl get kafkatopic -A

    Expected output — all topics in a Ready state:

    NAMESPACE   NAME                                             CLUSTER         PARTITIONS   REPLICATION FACTOR   READY
    kafka       cinchy-dev-cinchyautomationsjobqueue             kafka-cluster   1            3                    True
    kafka       cinchy-dev-connectionsjobcancellationrequests    kafka-cluster   100          3                    True
    kafka       cinchy-dev-connectionsjobqueue                   kafka-cluster   100          3                    True
    kafka       cinchy-dev-datachangenotifications               kafka-cluster   100          3                    True
    kafka       cinchy-dev-realtimedatasynctopic                 kafka-cluster   100          3                    True
  2. If topics show Ready: False or are stuck in a pending state, ArgoCD may have failed to create them due to timing or configuration mismatches between Kafka versions (a sketch for filtering these topics follows this list). Resolve the issue as follows:

  3. Identify the problematic topic from the output above.

  4. Delete the topic resource:

    kubectl delete kafkatopic <topic-name> -n kafka

    Alternatively, delete it via the ArgoCD UI: navigate to the dashboard, locate the Kafka topic for the affected environment, and delete it. ArgoCD will automatically recreate it with the correct configuration.
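
To list only the topics that are not yet Ready (step 2), you can filter the same kubectl output. This is a minimal sketch that assumes the READY value is the last column of the default output, as shown above.

    # Print the header plus any topic whose READY column is not "True".
    kubectl get kafkatopic -A | awk 'NR==1 || $NF != "True"'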

Kafka UI Connectivity Issues

After upgrading, the Kafka UI pod may retain stale connections to the previous cluster configuration, resulting in connection failures or missing topics and messages.

  1. Verify the status of all Kafka pods:

    kubectl get pods -n kafka

    Expected output:

    NAME                                             READY   STATUS    RESTARTS   AGE
    kafka-cluster-entity-operator-5c74c7dccb-vzh5r   3/3     Running   0          10m
    kafka-cluster-kafka-0                            1/1     Running   0          10m
    kafka-cluster-kafka-1                            1/1     Running   0          10m
    kafka-cluster-kafka-2                            1/1     Running   0          10m
    kafka-cluster-zookeeper-0                        1/1     Running   0          10m
    kafka-cluster-zookeeper-1                        1/1     Running   0          10m
    kafka-cluster-zookeeper-2                        1/1     Running   0          10m
    kafka-ui-5f64bc974c-q4sj2                        2/2     Running   0          10m
    strimzi-cluster-operator-66cc6c8dbc-xzp8p        2/2     Running   0          10m
  2. If the Kafka UI cannot connect to the upgraded cluster, restart the pod to establish a fresh connection:

    kubectl delete pod -n kafka -l app.kubernetes.io/name=kafka-ui
  3. Kubernetes will automatically recreate the pod. Verify it is running:

    kubectl get pods -n kafka -l app.kubernetes.io/name=kafka-ui
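
If the UI still cannot connect after the restart, the pod logs usually show which bootstrap connection is failing. A quick way to view them, assuming the same label selector as above:

    kubectl logs -n kafka -l app.kubernetes.io/name=kafka-ui --tail=50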

Upgrade AWS EKS and Azure AKS

The following procedure upgrades AWS EKS or Azure AKS to version 1.33 or 1.34.

Prerequisites

  • AWS: Export the required credentials before proceeding (see the example after this list).
  • Azure: Run az login to authenticate, if required.
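
For AWS, exporting credentials typically means setting the standard environment variables in the shell session where you will run the upgrade; the values below are placeholders.

    export AWS_ACCESS_KEY_ID="<access-key-id>"
    export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
    export AWS_SESSION_TOKEN="<session-token>"   # only needed for temporary credentials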

Procedure

Terraform operations must be run from the cluster directory:

  • AWS: eks_cluster folder under Terraform > AWS, in the subdirectory named after the cluster.
  • Azure: aks_cluster folder under Terraform > Azure, in the subdirectory named after the cluster.
  1. Follow the Update Deployment Configuration steps to update and apply your deployment.json for the target Kubernetes version.

  2. Initiate the upgrade. Review the proposed changes carefully, then confirm with yes to proceed. This performs an in-place update of the Kubernetes version without deleting or destroying data. A combined example appears after the warning below.

    bash create.sh
warning

Before confirming, verify that the proposed changes meet your expectations and protect your database and other critical resources. This command may create, update, or destroy virtual networks, subnets, AKS/EKS clusters, and node groups. Review all changes thoroughly before proceeding.
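
Putting the directory requirement and the upgrade command together, a minimal AWS walk-through might look like the following. The repository layout and folder names are assumptions based on the description above; adjust them to match your cinchy.terraform structure.

    # Run from the Terraform cluster directory (AWS example; the path is illustrative).
    cd cinchy.terraform/aws/eks_cluster/<cluster-name>
    bash create.sh
    # Review the Terraform plan carefully, then type "yes" to apply the in-place upgrade.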