
5.16 (Kubernetes)

Release notes

  • EKS and AKS upgraded from 1.30 to 1.32.
  • Updated values reflected in the deployment.json files:
    • Azure
      • "kubernetes_version": "1.32.3"
      • "orchestrator_version": "1.32.3"
    • AWS
      • "cluster_version": "1.30"
  • RDS Aurora upgraded from 14.5 to 14.15; this change will appear when applying changes for EKS.
    • Note: if Terraform is still pinned to a deprecated version, such as 14.5, the terraform apply step will fail because that version is no longer valid.
  • Terraform module and provider updates
    • Provider version updates:
      • azurerm upgraded from 3.36.0 to 3.112.0 across all affected modules.
    • AKS root module updates:
      • source = "hashicorp/azurerm"
      • version = "> 3.112.0"
      • sku_tier = "Paid" # Changed from "Standard" to "Paid"
  • AKS node pool module updates:
    • source = "hashicorp/azurerm"
    • version = "> 3.112.0"
  • MSSQL root module updates:
    • source = "hashicorp/azurerm"
    • version = "> 3.112.0"
  • Backend configuration (backend.tf)
    • azurerm version = "3.112.0"
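
Taken together, the module updates above amount to a provider pin along the lines of the following sketch (the surrounding block structure is illustrative; the exact files live in your cinchy.terraform repository):

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "> 3.112.0"  # as listed above for the AKS, node pool, and MSSQL modules
    }
  }
}
```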

Important changes

  • Topics in Kafka real-time syncs now have individual consumer groups; this allows for more stable rebalancing when the load increases/decreases. To prevent data loss in your real-time syncs when upgrading to v5.16, we recommend that you:
  1. Navigate to the [Cinchy].[Listener Config] table and disable all listener configs.
  2. Wait until all Kafka messages are processed by the worker.
  3. Perform your Cinchy upgrade.
  4. Re-enable your listener configs.
  • Due to changes in encryption and certificate validation behavior, existing connection strings will be updated (a one-off change) to include Trust Server Certificate=true. This preserves existing behavior, i.e., encryption without certificate validation. If you use a TLS connection with a certificate provisioned on the server, manually remove this directive from the connection string to enable certificate validation. For new connection strings, add this directive to skip certificate validation, or omit it to require validation.
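
As an illustration of the connection-string change (server, database, and credentials below are placeholders, not values from this release), appending the directive looks like:

```shell
# Placeholder connection string; real values come from your environment.
cs='Server=db.example.com;Database=cinchy;User Id=app;Password=secret'

# One-off change applied on upgrade: skip certificate validation to
# preserve pre-5.16 behavior (encryption without validation).
cs="${cs};Trust Server Certificate=true"
echo "$cs"
# prints: Server=db.example.com;Database=cinchy;User Id=app;Password=secret;Trust Server Certificate=true
```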

Prerequisites

info

If you have made custom changes to your deployment file structure, please contact your Support team before you upgrade your environments.

  • Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column. For this upgrade, please download the Cinchy Kubernetes Deployment Template v5.16.x.zip file.

Depending on your current version, you may need to run the upgrade scripts listed in the table below.

Considerations Table

Current Version | Run the 5.2 Upgrade Script | Run the 5.5 Upgrade Script
5.0             | Yes                        | Yes
5.1             | Yes                        | Yes
5.2             | X                          | Yes
5.3             | X                          | Yes
5.4             | X                          | Yes
5.5             | X                          | X
5.6             | X                          | X
5.7             | X                          | X
5.8             | X                          | X
5.9             | X                          | X
5.10            | X                          | X
5.11            | X                          | X
5.12            | X                          | X
5.13            | X                          | X
5.14            | X                          | X
5.15            | X                          | X

Configure to the newest version

Clean existing repositories

  1. Go to your cinchy.argocd repository. Delete the entire existing folder structure except for the .git folder and any custom changes you may have implemented.
  2. Go to your cinchy.kubernetes repository. Delete the entire existing folder structure except for the .git folder.
caution

If the cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file exists and is not commented out, don't change it. Changing this file will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.

  3. Go to your cinchy.terraform repository. Delete the entire existing folder structure except for the .git folder.
  4. Go to your cinchy.devops.automation repository. Delete the entire existing folder structure except for the .git folder and your deployment.json file.
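
One way to perform the cleanup steps above without touching the .git folder is to let git remove its own tracked content. A throwaway demo of the idea (the path /tmp/cinchy.terraform-demo is hypothetical):

```shell
# Demo repo standing in for one of the real repositories.
repo=/tmp/cinchy.terraform-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
echo 'old content' > main.tf
git add . && git -c user.email=demo@example.com -c user.name=demo commit -qm 'old structure'

# Remove every tracked file; the .git folder itself is never touched.
git rm -rq .
ls -A   # only .git remains
```

In the real repositories, remember the exceptions called out above (deployment.json, any custom changes, and the istio argocd-ns.yaml caution) before deleting anything.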

Download k8s template

  1. Open the Kubernetes Deployment Template v5.16.x.zip and place the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automation repositories.

Secrets

  1. If your existing environments were not previously using Azure Key Vault or AWS Secrets Manager on EKS/AKS and you enabled them during this upgrade, ensure the required secrets are created. Otherwise, no action is needed.

Deployment JSON Update

  1. Go to the new aws.json/azure.json files and compare them with your current deployment.json file. All additional fields in the new aws.json/azure.json files should be added to your current deployment.json.
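
A rough way to spot fields present in the new template but missing from your current file (inline samples stand in for the real aws.json/azure.json and deployment.json; a proper JSON diff tool is more reliable than this token-based sketch):

```shell
# Inline stand-ins for the real files.
echo '{"cluster_name":"demo","new_field":true}' > /tmp/azure.json
echo '{"cluster_name":"demo"}' > /tmp/deployment.json

# Crude key extraction: quoted tokens, sorted and de-duplicated.
grep -o '"[a-z_]*"' /tmp/azure.json      | sort -u > /tmp/new_keys
grep -o '"[a-z_]*"' /tmp/deployment.json | sort -u > /tmp/cur_keys

# Lines only in the new template -- candidates to copy into deployment.json.
comm -23 /tmp/new_keys /tmp/cur_keys
# prints: "new_field"
```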

  2. Update the Kubernetes version in your deployment.json. To upgrade AKS/EKS to version 1.32, you must follow the required upgrade sequence by progressing through each minor version sequentially. For example, upgrading from 1.29 would require first upgrading to 1.30, then 1.31, and finally to 1.32.
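
The sequential bump in step 2 is just an edit to the version fields in deployment.json; a sketch of one hop with sed on a sample file (field names taken from the release notes above):

```shell
# Sample file with the two Azure version fields from the release notes.
printf '{ "kubernetes_version": "1.30", "orchestrator_version": "1.30" }\n' > /tmp/deployment.json

# One hop of the sequence 1.30 -> 1.31 -> 1.32; re-run the DevOps
# automation and apply after each hop rather than jumping straight to 1.32.
sed 's/"1.30"/"1.31"/g' /tmp/deployment.json > /tmp/deployment.json.new
mv /tmp/deployment.json.new /tmp/deployment.json
cat /tmp/deployment.json
# prints: { "kubernetes_version": "1.31", "orchestrator_version": "1.31" }
```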

  3. Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:

    dotnet Cinchy.DevOps.Automations.dll "deployment.json"

Update Cinchy Instance

  1. If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  2. Execute the following command to deploy ArgoCD:

    bash deploy_argocd.sh
  3. Validate ArgoCD pods are running.

  4. Execute the following command to deploy/update cluster components and Cinchy components:

    bash deploy_cluster_components.sh
    bash deploy_cinchy_components.sh

Commit Changes

  1. Commit changes to Git for all relevant repositories.
  2. Refresh applications in the ArgoCD console if required.
  3. All users must log out and back in to your Cinchy environment in order for the changes to properly take effect.

Upgrade AWS EKS and Azure AKS

The following methods can be used to upgrade the AWS EKS and Azure AKS versions from 1.27 to 1.32.

  • For AWS, export your credentials; for Azure, run the az login command, if required.
  • To perform terraform operations, the cluster directory must be the working directory during execution.
    • The AWS deployment updates a folder named eks_cluster in the Terraform > AWS directory. Within that directory is a subdirectory with the same name as the created cluster.
    • The Azure deployment updates a folder named aks_cluster in the Terraform > Azure directory. Within that directory is a subdirectory with the same name as the created cluster.
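
The directory convention above, sketched with a hypothetical cluster name (demo-cluster) and a throwaway root under /tmp; terraform operations and create.sh must run from the innermost cluster directory:

```shell
# Hypothetical layout; the real root is your cinchy.terraform repository.
mkdir -p /tmp/terraform/aws/eks_cluster/demo-cluster
mkdir -p /tmp/terraform/azure/aks_cluster/demo-cluster

# AWS: make the cluster subdirectory the working directory.
cd /tmp/terraform/aws/eks_cluster/demo-cluster
# ... export AWS credentials, then run terraform / bash create.sh from here
```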
  1. Go to your cinchy.devops.automations repository and change AKS/EKS version in deployment.json (or <cluster name>.json) within the same directory.

  2. From a shell/terminal, navigate to the cinchy.devops.automations directory location and execute the following command:

    dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  3. Run the command below to start the upgrade process. Before you answer yes, verify that the planned changes match your expectations. This shouldn't delete or destroy any data; it runs an in-place deployment that updates the Kubernetes version.

    bash create.sh
warning

Before accepting the change, verify that it meets your expectations and ensures the protection of your database and any other resources. This command will create, update, or destroy vnet, subnet, AKS cluster, and AKS node groups. Make sure to review the changes before proceeding.