5.17 (Kubernetes)
Release Notes
This release incorporates the following infrastructure and dependency updates:
- Kafka Cluster: Upgraded to version 4.0.0
- Strimzi Kafka Operator: Upgraded to version 0.49.0
- Amazon Machine Image (AMI): AL2023_x86_64_STANDARD support updated for EKS 1.33
- Hashicorp AWS Provider: Upgraded to version ~> 5.0
- Hashicorp Helm Provider: Resolved a breaking-change compatibility issue with version 3.0 (applies to versions <= 2.17.0)
Breaking Changes
The upgrade to Kafka version 4.0.0 introduces breaking changes that require complete cluster redeployment. You must remove the existing Kafka operator and cluster components before proceeding with the upgrade process.
Supported Kubernetes Versions
This release supports the following Kubernetes versions:
- Amazon Elastic Kubernetes Service (EKS): 1.33 and 1.34
- Azure Kubernetes Service (AKS): 1.33.5 and 1.34.1
- Note: Patch versions may vary based on the Azure release schedule
Prerequisites
If you have made custom changes to your deployment file structure, please contact your Support team before upgrading your environments.
- Download the latest Cinchy artifacts from the Cinchy Releases Table > Kubernetes Artifacts column. For this upgrade, download the Cinchy Kubernetes Deployment Template v5.17.2.zip file.
Depending on your current version, you may be required to execute the following upgrade scripts:
Considerations Table
| Current Version | Run the 5.2 Upgrade Script | Run the 5.5 Upgrade Script |
|---|---|---|
| 5.0 | Yes | Yes |
| 5.1 | Yes | Yes |
| 5.2 | No | Yes |
| 5.3 | No | Yes |
| 5.4 | No | Yes |
| 5.5 | No | No |
| 5.6 | No | No |
| 5.7 | No | No |
| 5.8 | No | No |
| 5.9 | No | No |
| 5.10 | No | No |
| 5.11 | No | No |
| 5.12 | No | No |
| 5.13 | No | No |
| 5.14 | No | No |
| 5.15 | No | No |
| 5.16 | No | No |
Configure to the Newest Version
Clean Existing Repositories
- Navigate to your cinchy.argocd repository and remove all existing folder structures except for the `.git` folder and any custom modifications you have implemented.
- Navigate to your cinchy.kubernetes repository and remove all existing folder structures except for the `.git` folder.
  If the `cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml` file exists and is not commented out, do not modify it. Modifying this file will delete your ArgoCD namespace, requiring a complete removal of all Kubernetes resources and a full redeployment.
- Navigate to your cinchy.terraform repository and remove all existing folder structures except for the `.git` folder.
- Navigate to your cinchy.devops.automations repository and remove all existing folder structures except for the `.git` folder and your `deployment.json` configuration file.
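The clean-up steps above can be sketched as a small shell helper. This is illustrative only: `clean_repo` is a hypothetical function name, the layout of your repositories may differ, and `rm -rf` is destructive, so verify against your own structure before running anything like it.

```shell
#!/bin/sh
# Illustrative sketch only: clean a repository directory, keeping the .git
# folder plus any extra entries passed as arguments (e.g. deployment.json).
clean_repo() {
  repo_dir=$1; shift
  for entry in "$repo_dir"/* "$repo_dir"/.*; do
    [ -e "$entry" ] || [ -L "$entry" ] || continue   # skip unmatched globs
    name=$(basename "$entry")
    case "$name" in .|..|.git) continue ;; esac      # always keep .git
    keep=0
    for k in "$@"; do                                # caller-specified keeps
      [ "$name" = "$k" ] && keep=1
    done
    [ "$keep" -eq 1 ] || rm -rf "$entry"
  done
}

# Example: clean cinchy.devops.automations but keep its deployment.json
# clean_repo ./cinchy.devops.automations deployment.json
```

Note that custom modifications you want to preserve (per the cinchy.argocd step) would need to be passed as additional keep-names in this sketch.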
Download Kubernetes Template
- Extract the contents of `Kubernetes Deployment Template v5.17.2.zip` and copy the files into their respective repositories: cinchy.kubernetes, cinchy.argocd, cinchy.terraform, and cinchy.devops.automations.
Configure Secrets
- If your existing environments were not previously configured with Azure Key Vault or AWS Secrets Manager on EKS/AKS, and you have enabled them during this upgrade, ensure the required secrets are created. Otherwise, no action is required.
Update Deployment Configuration
- Review the new `aws.json`/`azure.json` configuration files and compare them with your current `deployment.json` file. Incorporate all additional fields from the new `aws.json`/`azure.json` files into your existing `deployment.json`.
- Update the Kubernetes version in your `deployment.json` file. To upgrade AKS/EKS to version 1.33 or 1.34, you must follow the sequential upgrade path, progressing through each minor version incrementally. For example, upgrading from version `1.31` requires sequential upgrades through `1.32`, `1.33`, and finally to `1.34`.
- Open a shell/terminal session from the cinchy.devops.automations directory and execute the following command:
  `dotnet Cinchy.DevOps.Automations.dll "deployment.json"`
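The sequential upgrade requirement can be illustrated with a small helper that lists the intermediate minor versions to step through. `upgrade_path` is a hypothetical name, and the logic assumes plain `1.N` version strings:

```shell
# Illustrative only: print each sequential 1.x minor version between the
# current and target Kubernetes versions (assumes plain "1.N" version strings).
upgrade_path() {
  cur=${1#1.}; target=${2#1.}
  while [ "$cur" -lt "$target" ]; do
    cur=$((cur + 1))
    printf '1.%s\n' "$cur"
  done
}

upgrade_path 1.31 1.34   # prints 1.32, then 1.33, then 1.34
```

Each printed version corresponds to one full pass through the upgrade procedure described in this document.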
Remove Existing Kafka Components
Due to breaking changes introduced in Kafka version 4.0.0, you must remove the existing Kafka operator and cluster components before proceeding with the upgrade.
Important: This process requires downtime. Removing the Strimzi Kafka operator and Kafka cluster will temporarily interrupt Cinchy. Plan your upgrade accordingly and schedule this operation during a maintenance window.
- Remove the Kafka operator by executing the following command:
  `kubectl delete app strimzi-kafka-operator -n argocd`
  Expected output:
  `application.argoproj.io "strimzi-kafka-operator" deleted`
- Remove the Kafka cluster by executing the following command:
  `kubectl delete app kafka-cluster -n argocd`
  Expected output:
  `application.argoproj.io "kafka-cluster" deleted`
- Verify the successful removal of both `strimzi-kafka-operator` and `kafka-cluster` from ArgoCD:
  `kubectl get app -n argocd`
  Confirm that neither `strimzi-kafka-operator` nor `kafka-cluster` appears in the command output.
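As a sketch, the verification can be scripted. `kafka_apps_present` is a hypothetical helper that counts matching application names in whatever `kubectl get app -n argocd` output you pass in; it assumes application names appear at the start of each line, as in the default tabular output:

```shell
# Illustrative only: count Kafka-related ArgoCD applications in the output of
# `kubectl get app -n argocd`. A result of 0 means the removal succeeded.
kafka_apps_present() {
  printf '%s\n' "$1" \
    | grep -E -c '^(strimzi-kafka-operator|kafka-cluster)([[:space:]]|$)' \
    || true   # grep exits non-zero on zero matches; still prints the count
}

# Usage sketch (assumes kubectl access to the cluster):
# kafka_apps_present "$(kubectl get app -n argocd --no-headers)"
```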
Commit Changes
- Commit all changes to Git for the relevant repositories.
- Refresh applications in the ArgoCD console as required.
- All users must log out and log back in to the Cinchy environment for the changes to take effect properly.
Deploy Cinchy Instance
- If changes were made to your cinchy.argocd repository, you may need to redeploy ArgoCD. Launch a shell/terminal session with the working directory set to the root of the cinchy.argocd repository.
- (Optional) Execute the following command to deploy ArgoCD:
  `bash deploy_argocd.sh`
- Verify that all ArgoCD pods are running successfully.
- Deploy or update cluster components by executing the following command. This will redeploy the upgraded Kafka operator and cluster:
  `bash deploy_cluster_components.sh`
  Expected output confirming the creation of `kafka-cluster` and `strimzi-kafka-operator`:
  `application.argoproj.io/kafka-cluster created`
  `...`
  `application.argoproj.io/strimzi-kafka-operator created`
- (Optional) Deploy or update Cinchy components by executing the following command:
  `bash deploy_cinchy_components.sh`
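The "all pods running" check can be sketched as a small parser over the `kubectl get pods` output. `pods_not_ready` is a hypothetical helper name, and it assumes the default output layout where STATUS is the third column:

```shell
# Illustrative only: given `kubectl get pods -n argocd` output, count pods that
# are not yet Running or Completed (column 3 is STATUS in the default output).
pods_not_ready() {
  printf '%s\n' "$1" \
    | awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { n++ } END { print n + 0 }'
}

# Usage sketch (assumes kubectl access):
# pods_not_ready "$(kubectl get pods -n argocd)"   # expect 0 before proceeding
```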
Upgrade AWS EKS and Azure AKS
The following procedures can be used to upgrade AWS EKS and Azure AKS to version 1.33 or 1.34.
Prerequisites
- For AWS: Export the required credentials before proceeding.
- For Azure: Execute the `az login` command to authenticate, if required.
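For AWS, the exported credentials are typically the standard AWS environment variables. The values below are placeholders only; your organization's credential source (SSO, assumed role, etc.) may differ:

```shell
# Placeholder values only -- substitute your own credentials and region.
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_SESSION_TOKEN="<your-session-token>"   # only for temporary credentials
export AWS_REGION="<your-cluster-region>"
```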
Procedure
To perform Terraform operations, ensure the cluster directory is set as the working directory during execution:
- AWS Deployment: The deployment updates a folder named `eks_cluster` located in the Terraform > AWS directory. Within that folder is a subdirectory with the same name as the created cluster.
- Azure Deployment: The deployment updates a folder named `aks_cluster` located in the Terraform > Azure directory. Within that folder is a subdirectory with the same name as the created cluster.
Steps:
- Navigate to your cinchy.devops.automations repository and update the AKS/EKS version in the `deployment.json` file (or `<cluster name>.json`) within the same directory.
- From a shell/terminal session, navigate to the cinchy.devops.automations directory and execute the following command:
  `dotnet Cinchy.DevOps.Automations.dll "deployment.json"`
- Execute the following command to initiate the upgrade process. Review the proposed changes carefully before confirming with yes to proceed. This operation performs an in-place deployment to update the Kubernetes version without deleting or destroying data:
  `bash create.sh`
Before accepting the proposed changes, verify that they meet your expectations and ensure the protection of your database and other critical resources. This command may create, update, or destroy virtual networks, subnets, AKS/EKS clusters, and node groups. Review all changes thoroughly before proceeding.
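One way to double-check the review step is to save the plan output and scan it for destructive actions before answering yes. This is a sketch only: `destroy_count` is a hypothetical helper, and the grep pattern assumes Terraform's standard "will be destroyed" plan phrasing:

```shell
# Illustrative only: count resources a saved Terraform plan intends to destroy,
# assuming the standard "will be destroyed" phrasing in the plan output.
destroy_count() {
  grep -c 'will be destroyed' "$1" || true   # prints 0 when nothing matches
}

# Usage sketch, run from the cluster directory:
# terraform plan -no-color > plan.txt
# destroy_count plan.txt    # expect 0 for a safe in-place version upgrade
```

A nonzero count does not always mean data loss, but it is a signal to read the plan line by line before proceeding.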