
5.13 (Kubernetes)

v5.13.1+ (Kubernetes)

What's new

  • Service accounts have been shortened from APP_NAME-serviceaccount to APP_NAME-sa to simplify resource naming.
  • Non-Web Cinchy components now communicate with Cinchy Web through the Cinchy Client. Previously, this traffic exited the cluster to the Internet over a public endpoint; it is now managed within the cluster by the Istio Service Mesh.
  • Cinchy Automations uses a CronJob setup that currently functions best without Istio sidecars. Istio is disabled for these jobs to maintain compatibility, with permissive mTLS mode allowing the CronJob to connect to Cinchy Web and IdP components as expected.
  • The AWS CSI driver has been removed from Terraform deployment steps, streamlining the infrastructure setup.
  • Azure SQL Instances are now configured to be private by default, creating a private DNS and private endpoint while disabling Azure resource access.

Upgrade Steps

To upgrade the components, follow the instructions below in the order presented.

Prerequisites

info

If you have made custom changes to your deployment file structure, please contact your Support team before you upgrade your environments.

  • Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column. For this upgrade, please download the Cinchy Kubernetes Deployment Template v5.13.x.zip file.

Depending on your current version, you may need to run one or more upgrade scripts and update your connection string, as outlined in the table below.

If you are upgrading from 5.0-5.3 to 5.13.1+ on an SQL Server Database, you need to make a change to your connectionString if you haven't already done so. Add TrustServerCertificate=True to bypass the certificate chain during validation.

For a Kubernetes deployment, you can add this value in your deployment.json file:

"cinchy_instance_configs": {
  "database_connection_string": "User ID=cinchy;Password=<password>;Host=<db_hostname>;Port=5432;Database=development;Timeout=300;Keepalive=300;TrustServerCertificate=True"
}
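As a quick sanity check before deploying, you can confirm the flag is already present in the connection string. A minimal sketch, assuming the `deployment.json` file name and `database_connection_string` key used in this guide:

```shell
#!/bin/sh
# check_conn_string FILE: succeed only if FILE's connection string already
# carries TrustServerCertificate=True (file and key names are taken from
# this guide and may differ in your deployment).
check_conn_string() {
  grep -q 'TrustServerCertificate=True' "$1"
}

# example:
#   check_conn_string deployment.json || echo "add TrustServerCertificate=True" >&2
```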
| Current Version | Run the 5.2 Upgrade Script | Run the 5.5 Upgrade Script | Connection String Changes (SQL Server DB) |
|---|---|---|---|
| 5.0 | Yes | Yes | Yes |
| 5.1 | Yes | Yes | Yes |
| 5.2 | X | Yes | Yes |
| 5.3 | X | Yes | Yes |
| 5.4 | X | Yes | X |
| 5.5 | X | X | X |
| 5.6 | X | X | X |
| 5.7 | X | X | X |
| 5.8 | X | X | X |
| 5.9 | X | X | X |
| 5.10 | X | X | X |
| 5.11 | X | X | X |
| 5.12 | X | X | X |
| 5.13 | X | X | X |

Configure to the newest version

Clean existing repositories

  1. Go to your cinchy.argocd repository. Delete all existing folder structure except for the .git folder/directory and any custom changes you may have implemented.
  2. Go to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git directory.
caution

If the cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file exists and is not commented out, don't change it. Altering this file can delete your ArgoCD namespace, which would force you to delete everything from Kubernetes and redeploy.

  3. Go to your cinchy.terraform repository. Delete all existing folder structure except for the .git directory.
  4. Go to your cinchy.devops.automation repository. Delete all existing folder structure except for the .git directory and your deployment.json.
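The clean-up steps above can be sketched as a small POSIX shell helper. This is a sketch, not part of the official tooling: it preserves .git and any extra names you pass (such as deployment.json, or custom files you have added), and removes everything else in the repository directory.

```shell
#!/bin/sh
# clean_repo DIR [KEEP...]: remove everything in DIR except .git and any
# additional entry names passed as KEEP arguments (e.g. deployment.json).
clean_repo() {
  dir=$1; shift
  for entry in "$dir"/* "$dir"/.[!.]*; do
    [ -e "$entry" ] || continue          # skip unexpanded glob patterns
    name=$(basename "$entry")
    keep=0
    [ "$name" = ".git" ] && keep=1       # always preserve the git metadata
    for k in "$@"; do
      [ "$name" = "$k" ] && keep=1
    done
    [ "$keep" -eq 0 ] && rm -rf "$entry"
  done
}

# example: clean_repo cinchy.devops.automation deployment.json
```

Review what the helper would delete (e.g. with `ls -A`) before running it against a repository with custom changes.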

Download k8s template

  1. Open the Cinchy Kubernetes Deployment Template v5.13.x.zip and place the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automation repositories.

Secrets

  1. If you have AWS Secrets Manager or Azure Key Vault enabled and haven't already done so, create your secrets.

Deployment JSON Update

  1. Go to the new aws.json/azure.json files and compare them with your current deployment.json file. All additional fields in the new aws.json/azure.json files should be added to your current deployment.json.

  2. Update the Kubernetes version in your deployment.json. To upgrade EKS to a new version, you need to follow an upgrade sequence, installing each incremental version one by one. For example, you might need to upgrade from 1.27 to 1.28, then from 1.28 to 1.29, and finally from 1.29 to 1.30.

  3. Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:

    dotnet Cinchy.DevOps.Automations.dll "deployment.json"
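The incremental Kubernetes upgrade sequence described in step 2 can be computed mechanically. A minimal sketch, assuming single-minor jumps as EKS requires; each printed version corresponds to one deployment.json edit plus one run of the automation:

```shell
#!/bin/sh
# upgrade_path FROM TO: print the minor-version steps needed to go from
# FROM to TO one release at a time (EKS only allows single-minor jumps).
upgrade_path() {
  major=${1%%.*}                 # e.g. "1" from "1.27"
  cur=${1#*.}                    # e.g. "27"
  target=${2#*.}                 # e.g. "30"
  while [ "$cur" -lt "$target" ]; do
    cur=$((cur + 1))
    echo "$major.$cur"
  done
}

# upgrade_path 1.27 1.30 prints 1.28, 1.29, 1.30 - one upgrade pass each
```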

Terraform Updates

If you are using AWS Secrets Manager: perform all the steps in this section.

If you are not using AWS Secrets Manager: only perform steps 1 and 5.

  1. Navigate to your cluster directory:
    1. For AWS deployments: Navigate to cinchy.terraform/aws/eks_cluster/<cluster_name>/new-vpc.tf or cinchy.terraform/aws/eks_cluster/<cluster_name>/existing-vpc.tf based on your current setup.
    2. For Azure deployments: Navigate to cinchy.terraform/azure/aks_cluster/<cluster_name>/new-vpc.tf or cinchy.terraform/azure/aks_cluster/<cluster_name>/existing-vpc.tf based on your current setup.
  2. Set the following values temporarily to false:
    • enable_secrets_store_csi_driver_provider_aws = false
    • enable_secrets_store_csi_driver = false
  3. Run the upgrade process:

    bash create.sh

  4. After successful completion, set enable_secrets_store_csi_driver_provider_aws = true. This ensures that the appropriate CSI driver is installed, even if AWS Secrets Manager is disabled in the deployment JSON.
  5. Run the following:

    bash create.sh
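Toggling the two CSI driver flags by hand between the two create.sh runs is error-prone. A hedged sketch using GNU sed, assuming the flags appear on their own lines in your new-vpc.tf or existing-vpc.tf as shown above:

```shell
#!/bin/sh
# set_tf_flag FILE NAME VALUE: rewrite a boolean assignment such as
# "enable_secrets_store_csi_driver = true" in a .tf file. This assumes
# one flag per line; review the resulting diff before running terraform.
set_tf_flag() {
  sed -i "s/^\([[:space:]]*$2[[:space:]]*=[[:space:]]*\)[a-z]*/\1$3/" "$1"
}

# Phase 1 - disable both flags, then run: bash create.sh
#   set_tf_flag new-vpc.tf enable_secrets_store_csi_driver_provider_aws false
#   set_tf_flag new-vpc.tf enable_secrets_store_csi_driver false
# Phase 2 - re-enable the provider flag, then run create.sh again
#   set_tf_flag new-vpc.tf enable_secrets_store_csi_driver_provider_aws true
```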

Update Cinchy Instance

  1. If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  2. Execute the following command to deploy ArgoCD:

    bash deploy_argocd.sh
  3. Validate ArgoCD pods are running.

  4. Execute the following command to deploy/update cluster components and Cinchy components:

    bash deploy_cluster_components.sh
    bash deploy_cinchy_components.sh
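The "validate ArgoCD pods are running" step can be scripted. A sketch that parses `kubectl get pods` output; the `argocd` namespace name is an assumption and may differ in your cluster:

```shell
#!/bin/sh
# all_pods_running: read `kubectl get pods --no-headers` output on stdin
# and fail if any pod's STATUS column is neither Running nor Completed.
all_pods_running() {
  awk '$3 != "Running" && $3 != "Completed" { bad = 1 } END { exit bad }'
}

# example (namespace is an assumption):
#   kubectl get pods -n argocd --no-headers | all_pods_running && echo "ArgoCD ready"
```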

Commit Changes

  1. Commit changes to Git for all relevant repositories.
  2. Refresh applications in the ArgoCD console if required.
  3. All users must log out and back in to your Cinchy environment in order for the changes to properly take effect.

v5.13 (Kubernetes)

What's New

  • Cinchy v5.13+ has been updated to .NET 8. This change will be reflected automatically upon upgrading your platform.

Upgrade Steps

To upgrade the components, follow the instructions below in the order presented.

Prerequisites

info

If you have made custom changes to your deployment file structure, please contact your Support team before you upgrade your environments.

  • Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column. For this upgrade, please download the Cinchy Kubernetes Deployment Template v5.13.zip file.

Depending on your current version, you may need to run one or more upgrade scripts and update your connection string, as outlined in the table below.

If you are upgrading from 5.0-5.3 to 5.13 on an SQL Server Database, you need to make a change to your connectionString if you haven't already done so. Add TrustServerCertificate=True to bypass the certificate chain during validation.

For a Kubernetes deployment, you can add this value in your deployment.json file:

"cinchy_instance_configs": {
  "database_connection_string": "User ID=cinchy;Password=<password>;Host=<db_hostname>;Port=5432;Database=development;Timeout=300;Keepalive=300;TrustServerCertificate=True"
}
| Current Version | Run the 5.2 Upgrade Script | Run the 5.5 Upgrade Script | Connection String Changes (SQL Server DB) |
|---|---|---|---|
| 5.0 | Yes | Yes | Yes |
| 5.1 | Yes | Yes | Yes |
| 5.2 | X | Yes | Yes |
| 5.3 | X | Yes | Yes |
| 5.4 | X | Yes | X |
| 5.5 | X | X | X |
| 5.6 | X | X | X |
| 5.7 | X | X | X |
| 5.8 | X | X | X |
| 5.9 | X | X | X |
| 5.10 | X | X | X |
| 5.11 | X | X | X |
| 5.12 | X | X | X |

Configure to the newest version

Clean existing repositories

  1. Go to your cinchy.argocd repository. Delete all existing folder structure except for the .git folder/directory and any custom changes you may have implemented.
  2. Go to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git directory.
caution

If the cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file exists and is not commented out, don't change it. Altering this file can delete your ArgoCD namespace, which would force you to delete everything from Kubernetes and redeploy.

  3. Go to your cinchy.terraform repository. Delete all existing folder structure except for the .git directory.
  4. Go to your cinchy.devops.automation repository. Delete all existing folder structure except for the .git directory and your deployment.json.

Download k8s template

  1. Download and open the new Cinchy Kubernetes Deployment Template v5.13.zip file from the Cinchy Releases table and place the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automation repositories.
  2. Go to the new aws.json/azure.json files and compare them with your current deployment.json file. All additional fields in the new aws.json/azure.json files should be added to your current deployment.json.
  3. Update the Kubernetes version in your deployment.json. To upgrade EKS to a new version, you need to follow an upgrade sequence, installing each incremental version one by one. For example, you might need to upgrade from 1.27 to 1.28, then from 1.28 to 1.29, and finally from 1.29 to 1.30.
tip

You may have changed the name of the deployment.json file during your original platform deployment. If so, make sure you use that name wherever deployment.json appears in this document.

Upgrade and redeploy components

  1. Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:

    dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  2. Commit all of your changes (if there were any) in each repository.

  3. If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  4. Execute the following command to deploy ArgoCD:

    bash deploy_argocd.sh
  5. Validate ArgoCD pods are running.

  6. Execute the following command to deploy/update cluster components and Cinchy components:

    bash deploy_cluster_components.sh
    bash deploy_cinchy_components.sh
  7. You might see a couple of ArgoCD apps out of sync. Sync them manually.

  8. All users must log out and back in to your Cinchy environment in order for the changes to properly take effect.

Upgrade AWS EKS and Azure AKS

The following methods can be used to upgrade the AWS EKS (from 1.27 up to 1.30.x) and Azure AKS (from 1.27 to 1.29.x) versions.

  • For AWS, export your credentials; for Azure, run the az login command, if required.
  • To perform terraform operations, the cluster directory must be the working directory during execution.
    • The AWS deployment updates a folder named eks_cluster in the Terraform > AWS directory. Within that directory is a subdirectory with the same name as the created cluster.
    • The Azure deployment updates a folder named aks_cluster within the Terraform > Azure directory. Within that directory is a subdirectory with the same name as the created cluster.
  1. Go to your cinchy.devops.automations repository and change AKS/EKS version in deployment.json (or <cluster name>.json) within the same directory.
  2. From a shell/terminal, navigate to the cinchy.devops.automations directory location and execute the following command:
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  3. Run the command below to start the upgrade process. Verify the planned changes before you answer yes; this shouldn't delete or destroy any data, as it runs an in-place deployment that updates the Kubernetes version.

    bash create.sh
warning

Before accepting the change, verify that it meets your expectations and ensures the protection of your database and any other resources. This command will create, update, or destroy vnet, subnet, AKS cluster, and AKS node groups. Make sure to review the changes before proceeding.
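Since create.sh drives Terraform, one way to act on this warning is to check the plan summary for pending destroys before confirming. A sketch assuming Terraform's standard "Plan: X to add, Y to change, Z to destroy." summary line; if no such line is present it reports nothing, so it complements rather than replaces a manual review:

```shell
#!/bin/sh
# plan_is_safe: read `terraform plan` output on stdin and fail if the
# summary line reports any resources to destroy.
plan_is_safe() {
  awk '/^Plan:/ { if ($8 + 0 > 0) exit 1 }'
}

# example (run inside the cluster directory):
#   terraform plan -no-color | tee plan.txt
#   plan_is_safe < plan.txt || echo "plan destroys resources - review before accepting" >&2
```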