5.17 (Kubernetes)
Release Notes
This release incorporates the following infrastructure and dependency updates:
- Kafka Cluster: Upgraded to version 4.0.0
- Strimzi Kafka Operator: Upgraded to version 0.49.0
- Amazon Machine Image (AMI): AL2023_x86_64_STANDARD support updated for EKS 1.33
- HashiCorp AWS Provider: Upgraded to version ~> 5.0
- HashiCorp Helm Provider: Resolved a breaking-change incompatibility with version 3.0 (applies to versions <= 2.17.0)
Breaking Changes
The upgrade to Kafka 4.0.0 introduces breaking changes that require a complete cluster redeployment. You must remove the existing Kafka operator and cluster components before proceeding — see Remove Existing Kafka Components.
Supported Kubernetes Versions
- Amazon Elastic Kubernetes Service (EKS): 1.33 and 1.34
- Azure Kubernetes Service (AKS): 1.33.5 and 1.34.1
The patch version (the third number — e.g., .5 in 1.33.5) for AKS may differ from what is listed above. Azure controls when patch releases are made available, and the exact version in your region may be slightly ahead or behind. Use the closest available patch for your target minor version.
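If you want to script the patch selection, the available versions can be listed with `az aks get-versions --location <region> --output table` and the closest patch for your target minor picked from that list. A minimal sketch, assuming the version list is already on stdin (`closest_patch` and the sample versions are illustrative, not part of the release):

```shell
#!/bin/sh
# Pick the highest available patch release for a target minor version.
# In practice the version list would come from:
#   az aks get-versions --location <region> --output table
# Here it is read from stdin, one version per line.
closest_patch() {
  target_minor="$1"                      # e.g. "1.33"
  grep "^${target_minor}\." | sort -t . -k 3 -n | tail -n 1
}

# Example with a hypothetical set of versions offered in a region:
printf '1.32.7\n1.33.2\n1.33.5\n1.34.1\n' | closest_patch "1.33"
# prints: 1.33.5
```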
Prerequisites
If you have made custom changes to your deployment file structure, contact Support before upgrading your environments.
- Download the latest Cinchy artifacts from the Cinchy Releases Table > Kubernetes Artifacts column. For this upgrade, download `Cinchy Kubernetes Deployment Template v5.17.2.zip`.
Depending on your current version, you may also need to run one or both of the following upgrade scripts:
| Current Version | Run the 5.2 Upgrade Script | Run the 5.5 Upgrade Script |
|---|---|---|
| 5.0 | Yes | Yes |
| 5.1 | Yes | Yes |
| 5.2 | No | Yes |
| 5.3 | No | Yes |
| 5.4 | No | Yes |
| 5.5 | No | No |
| 5.6 | No | No |
| 5.7 | No | No |
| 5.8 | No | No |
| 5.9 | No | No |
| 5.10 | No | No |
| 5.11 | No | No |
| 5.12 | No | No |
| 5.13 | No | No |
| 5.14 | No | No |
| 5.15 | No | No |
| 5.16 | No | No |
| 5.17 | No | No |
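The table collapses to a simple rule: versions before 5.2 need both scripts, versions 5.2 through 5.4 need only the 5.5 script, and 5.5 or later needs neither. A sketch of that logic (the labels `5.2-upgrade` and `5.5-upgrade` are placeholders, not the actual script filenames shipped with the artifacts):

```shell
#!/bin/sh
# Which upgrade scripts the table above prescribes for a starting
# version. The labels stand in for the real script names.
scripts_for() {
  case "$1" in
    5.0|5.1)     echo "5.2-upgrade 5.5-upgrade" ;;
    5.2|5.3|5.4) echo "5.5-upgrade" ;;
    *)           echo "none" ;;
  esac
}

scripts_for "5.1"   # prints: 5.2-upgrade 5.5-upgrade
scripts_for "5.4"   # prints: 5.5-upgrade
scripts_for "5.12"  # prints: none
```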
Configure for the Latest Version
Clean Existing Repositories
- Navigate to your cinchy.argocd repository and remove all existing folder structures except for the .git folder and any custom modifications you have made.
- Navigate to your cinchy.kubernetes repository and remove all existing folder structures except for the .git folder.

  Caution: If `cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml` exists and is not commented out, do not modify it. Doing so will delete your ArgoCD namespace, requiring a full removal of all Kubernetes resources and redeployment.

- Navigate to your cinchy.terraform repository and remove all existing folder structures except for the .git folder.
- Navigate to your cinchy.devops.automation repository and remove all existing folder structures except for the .git folder and your deployment.json configuration file.
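The four cleanup steps above follow one pattern: delete everything at the top level of the repository except `.git` and any files you must keep. One way to sketch that, assuming keep-list entries contain no spaces (`clean_repo` is a helper name invented here, not a shipped tool):

```shell
#!/bin/sh
# Remove everything at the top level of a repository except .git and
# any explicitly kept paths. Keep-list entries must not contain spaces.
clean_repo() {
  repo="$1"; shift
  keep_args=""
  for k in "$@"; do
    keep_args="$keep_args ! -name $k"
  done
  # Word splitting of $keep_args is intentional here.
  find "$repo" -mindepth 1 -maxdepth 1 ! -name .git $keep_args -exec rm -rf {} +
}

# Demo on a throwaway directory shaped like cinchy.devops.automation:
tmp=$(mktemp -d)
mkdir -p "$tmp/.git" "$tmp/old_components"
touch "$tmp/deployment.json" "$tmp/stale.yaml"
clean_repo "$tmp" deployment.json
ls -A "$tmp"   # only .git and deployment.json remain
```

Review what the `find` would match (swap `-exec rm -rf {} +` for `-print`) before running it against a real repository.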
Extract Kubernetes Template
- Extract `Kubernetes Deployment Template v5.17.2.zip` and copy the contents into their respective repositories: cinchy.kubernetes, cinchy.argocd, cinchy.terraform, and cinchy.devops.automation.
Configure Secrets
- If your environments were not previously configured with Azure Key Vault or AWS Secrets Manager and you are enabling them during this upgrade, ensure the required secrets are created. Otherwise, no action is needed.
Update Deployment Configuration
Repeat these steps each time you perform an AKS/EKS version upgrade.
- Compare the new `aws.json`/`azure.json` configuration files with your existing `deployment.json` and incorporate any additional fields.
- Update the Kubernetes version in your `deployment.json`. To upgrade AKS/EKS to version 1.33 or 1.34, follow the sequential upgrade path, advancing through each minor version incrementally. For example, upgrading from 1.31 requires sequential upgrades through 1.32, 1.33, and then 1.34.
- Open a terminal from the cinchy.devops.automations directory and run:

  ```
  dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  ```
Remove Existing Kafka Components
Due to breaking changes in Kafka 4.0.0, you must remove the existing Kafka operator and cluster before proceeding.
This process requires downtime. Plan your upgrade accordingly and perform this step during a maintenance window.
This is a one-time step. You only need to remove the Kafka components once — during your first upgrade to v5.17, regardless of how many Kubernetes version hops you perform.
For example, if you are upgrading your Kubernetes cluster from 1.32 → 1.33 → 1.34, perform this Kafka removal only when upgrading to 1.33. When you subsequently upgrade to 1.34, skip this section entirely.
- Remove the Kafka operator:

  ```
  kubectl delete app strimzi-kafka-operator -n argocd
  ```

  Expected output:

  ```
  application.argoproj.io "strimzi-kafka-operator" deleted
  ```

- Remove the Kafka cluster:

  ```
  kubectl delete app kafka-cluster -n argocd
  ```

  Expected output:

  ```
  application.argoproj.io "kafka-cluster" deleted
  ```

- Verify that both `strimzi-kafka-operator` and `kafka-cluster` have been removed from ArgoCD:

  ```
  kubectl get app -n argocd
  ```

  Confirm that neither application appears in the output.
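The verification in the last step can be automated by piping the app list through a small check. A sketch (`kafka_apps_removed` is an illustrative helper, not a shipped script):

```shell
#!/bin/sh
# Succeeds only if neither Kafka application remains in the ArgoCD app
# list. Real usage: kubectl get app -n argocd | kafka_apps_removed
kafka_apps_removed() {
  ! grep -E -q '^(strimzi-kafka-operator|kafka-cluster) '
}

# Demo with sample output in which both apps are already gone:
printf 'NAME    SYNC STATUS\ncinchy  Synced\n' | kafka_apps_removed && echo "removed"
# prints: removed
```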
Commit and Deploy
- Commit all changes to Git across the relevant repositories.
- If changes were made to your cinchy.argocd repository, redeploy ArgoCD. Launch a terminal from the root of the cinchy.argocd repository and run:

  ```
  bash deploy_argocd.sh
  ```

- Verify that all ArgoCD pods are running successfully.
- Deploy or update cluster components. This will redeploy the upgraded Kafka operator and cluster:

  ```
  bash deploy_cluster_components.sh
  ```

  Expected output:

  ```
  application.argoproj.io/kafka-cluster created
  ...
  application.argoproj.io/strimzi-kafka-operator created
  ```

- (Optional) Deploy or update Cinchy components:

  ```
  bash deploy_cinchy_components.sh
  ```

- Refresh applications in the ArgoCD console as needed.
- All users must log out and log back in to the Cinchy environment for changes to take effect.
Post-Upgrade Verification
After completing the Kafka upgrade, verify your deployment and resolve any common issues.
Verify Kafka Topics
- Confirm that all Kafka topics were created successfully:

  ```
  kubectl get kafkatopic -A
  ```

  Expected output (all topics in a `Ready` state):

  ```
  NAMESPACE   NAME                                            CLUSTER         PARTITIONS   REPLICATION FACTOR   READY
  kafka       cinchy-dev-cinchyautomationsjobqueue            kafka-cluster   1            3                    True
  kafka       cinchy-dev-connectionsjobcancellationrequests   kafka-cluster   100          3                    True
  kafka       cinchy-dev-connectionsjobqueue                  kafka-cluster   100          3                    True
  kafka       cinchy-dev-datachangenotifications              kafka-cluster   100          3                    True
  kafka       cinchy-dev-realtimedatasynctopic                kafka-cluster   100          3                    True
  ```

- If topics show `Ready: False` or are stuck in a pending state, ArgoCD may have failed to create them due to timing or configuration mismatches between Kafka versions. To resolve:

  - Identify the problematic topic from the output above.
  - Delete the topic resource:

    ```
    kubectl delete kafkatopic <topic-name> -n kafka
    ```

    Alternatively, delete it via the ArgoCD UI: navigate to the dashboard, locate the Kafka topic for the affected environment, and delete it. ArgoCD will automatically recreate it with the correct configuration.
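To find every stuck topic at once rather than scanning the output by eye, the `kubectl get kafkatopic -A` listing can be filtered on its READY column. A sketch (`stuck_topics` is a helper invented here):

```shell
#!/bin/sh
# Print "<namespace> <name>" for each topic whose READY column is not
# True. Real usage: kubectl get kafkatopic -A | stuck_topics
stuck_topics() {
  awk 'NR > 1 && $NF != "True" { print $1, $2 }'
}

# Demo with a sample listing; the second topic is stuck:
printf '%s\n' \
  'NAMESPACE  NAME     CLUSTER        PARTITIONS  REPLICATION FACTOR  READY' \
  'kafka      topic-a  kafka-cluster  100         3                   True' \
  'kafka      topic-b  kafka-cluster  100         3                   False' | stuck_topics
# prints: kafka topic-b
```

Each printed pair can then be fed to `kubectl delete kafkatopic <name> -n <namespace>`.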
Kafka UI Connectivity Issues
After upgrading, the Kafka UI pod may retain stale connections to the previous cluster configuration, resulting in connection failures or missing topics and messages.
- Verify the status of all Kafka pods:

  ```
  kubectl get pods -n kafka
  ```

  Expected output:

  ```
  NAME                                             READY   STATUS    RESTARTS   AGE
  kafka-cluster-entity-operator-5c74c7dccb-vzh5r   3/3     Running   0          10m
  kafka-cluster-kafka-0                            1/1     Running   0          10m
  kafka-cluster-kafka-1                            1/1     Running   0          10m
  kafka-cluster-kafka-2                            1/1     Running   0          10m
  kafka-cluster-zookeeper-0                        1/1     Running   0          10m
  kafka-cluster-zookeeper-1                        1/1     Running   0          10m
  kafka-cluster-zookeeper-2                        1/1     Running   0          10m
  kafka-ui-5f64bc974c-q4sj2                        2/2     Running   0          10m
  strimzi-cluster-operator-66cc6c8dbc-xzp8p        2/2     Running   0          10m
  ```

- If the Kafka UI cannot connect to the upgraded cluster, restart the pod to establish a fresh connection:

  ```
  kubectl delete pod -n kafka -l app.kubernetes.io/name=kafka-ui
  ```

- Kubernetes will automatically recreate the pod. Verify it is running:

  ```
  kubectl get pods -n kafka -l app.kubernetes.io/name=kafka-ui
  ```
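The same kind of check works for the pod listing: flag any pod whose READY count is short or whose STATUS is not Running. A sketch (`not_ready_pods` is a name chosen here, not a shipped tool):

```shell
#!/bin/sh
# Print the name of each pod that is not fully ready and Running.
# Real usage: kubectl get pods -n kafka | not_ready_pods
not_ready_pods() {
  awk 'NR > 1 { split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") print $1 }'
}

# Demo with a sample listing; the broker pod is crash-looping:
printf '%s\n' \
  'NAME                        READY  STATUS            RESTARTS  AGE' \
  'kafka-ui-5f64bc974c-q4sj2   2/2    Running           0         10m' \
  'kafka-cluster-kafka-0       0/1    CrashLoopBackOff  3         10m' | not_ready_pods
# prints: kafka-cluster-kafka-0
```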
Upgrade AWS EKS and Azure AKS
The following procedure upgrades AWS EKS or Azure AKS to version 1.33 or 1.34.
Prerequisites
- AWS: Export the required credentials before proceeding.
- Azure: Run `az login` to authenticate, if required.
Procedure
Terraform operations must be run from the cluster directory:
- AWS: the `eks_cluster` folder under Terraform > AWS, in the subdirectory named after the cluster.
- Azure: the `aks_cluster` folder under Terraform > Azure, in the subdirectory named after the cluster.
- Follow the Update Deployment Configuration steps to update and apply your `deployment.json` for the target Kubernetes version.
- Initiate the upgrade:

  ```
  bash create.sh
  ```

  Review the proposed changes carefully, then confirm with `yes` to proceed. This performs an in-place update of the Kubernetes version without deleting or destroying data.

  Before confirming, verify that the proposed changes meet your expectations and protect your database and other critical resources. This command may create, update, or destroy virtual networks, subnets, AKS/EKS clusters, and node groups. Review all changes thoroughly before proceeding.