
v5.6 (Kubernetes)

Upgrading on Kubernetes

When it comes time to upgrade your various components, you can do so by following the instructions below.

info

If you have made custom changes to your deployment file structure, please contact your Support team prior to upgrading your environments.

Prerequisites

  • Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column (Image 1). For this upgrade, please download the Cinchy v5.6 k8s-template.zip file.

Depending on your current version, you may need to perform additional steps (see the table below):

If you are upgrading from v5.0-v5.3 to v5.6 on a SQL Server database, you need to update your connectionString if you haven't already done so. Add TrustServerCertificate=True to bypass the certificate chain during validation.

For a Kubernetes deployment, you can add this value in your deployment.json file:

"cinchy_instance_configs": {
"database_connection_string": "User ID=cinchy;Password=<password>;Host=<db_hostname>;Port=5432;Database=development;Timeout=300;Keepalive=300;TrustServerCertificate=True"}

| Current Version | Run the 5.2 Upgrade Script | Run the 5.5 Upgrade Script | Connection String Changes (SQL Server DB) |
| --- | --- | --- | --- |
| 5.0 | Yes | Yes | Yes |
| 5.1 | Yes | Yes | Yes |
| 5.2 | X | Yes | Yes |
| 5.3 | X | Yes | Yes |
| 5.4 | X | Yes | X |
| 5.5 | X | X | X |
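The version-to-action mapping above can be sketched as a small helper; this is illustrative only (not an official Cinchy script), and the function name is made up for the example.

```shell
# Illustrative helper: map a current Cinchy version to the pre-upgrade
# actions listed in the table above.
upgrade_actions() {
  case "$1" in
    5.0|5.1) echo "5.2 upgrade script; 5.5 upgrade script; connection string change" ;;
    5.2|5.3) echo "5.5 upgrade script; connection string change" ;;
    5.4)     echo "5.5 upgrade script" ;;
    5.5)     echo "no pre-upgrade actions required" ;;
    *)       echo "unknown version: $1" >&2; return 1 ;;
  esac
}

upgrade_actions "5.3"
```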

Configuring to the newest version

  1. Navigate to your cinchy.argocd repository. Delete all of the existing folder structure except for the .git directory and any custom changes you may have implemented.
  2. Navigate to your cinchy.kubernetes repository. Delete all of the existing folder structure except for the .git directory.
caution

If you have a cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file and its contents aren't commented out, don't change it. Changing this file will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.

  3. Navigate to your cinchy.terraform repository. Delete all of the existing folder structure except for the .git directory.
  4. Navigate to your cinchy.devops.automation repository. Delete all of the existing folder structure except for the .git directory and your deployment.json.
  5. Open the new Cinchy v5.6 k8s-template.zip file you downloaded from the Cinchy Releases table and check the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automation repositories.
  6. Navigate to the new aws.json/azure.json files and compare them with your current deployment.json file. Add any additional fields found in the new aws.json/azure.json files to your current deployment.json.
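The repository clean-out described above can be scripted along these lines. This is a sketch only (the helper name is illustrative, and rm -rf is destructive), so double-check paths before running it against a real working tree.

```shell
# Illustrative sketch: clear a repository's working tree while keeping .git
# and any extra names you pass in (such as deployment.json). Run carefully.
keep_clean() {
  repo_dir="$1"; shift
  ( cd "$repo_dir" || exit 1
    for entry in * .[!.]*; do
      [ -e "$entry" ] || continue
      case " .git $* " in
        *" $entry "*) ;;            # keep .git and any extra names passed in
        *) rm -rf -- "$entry" ;;    # delete everything else
      esac
    done )
}

# Example: clean cinchy.devops.automation but keep deployment.json
# keep_clean cinchy.devops.automation deployment.json
```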
caution

Note that you may have changed the name of the deployment.json file during your original platform deployment. If so, substitute that name wherever deployment.json appears in this document.
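One way to spot fields present in the new template but missing from your current deployment file is a quick key comparison. The sketch below creates two tiny sample files purely for demonstration; point the comparison at your real aws.json/azure.json and deployment.json instead.

```shell
# Illustrative sketch: list top-level keys in the new template file that are
# missing from your current deployment file. The sample JSON here is made up.
workdir=$(mktemp -d)
cat > "$workdir/aws.json" <<'EOF'
{ "cluster_name": "demo", "eks_persistent_apps_storage_class": "gp3", "brand_new_field": true }
EOF
cat > "$workdir/deployment.json" <<'EOF'
{ "cluster_name": "demo", "eks_persistent_apps_storage_class": "gp2" }
EOF

missing=$(python3 - "$workdir/aws.json" "$workdir/deployment.json" <<'EOF'
import json, sys
new_keys = set(json.load(open(sys.argv[1])))   # keys in the new template
cur_keys = set(json.load(open(sys.argv[2])))   # keys in your current file
print("\n".join(sorted(new_keys - cur_keys)))
EOF
)
echo "fields to add: $missing"
```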

info

Starting in Cinchy v5.4, you can choose between Alpine- and Debian-based image tags for the listener, worker, and connections. Debian tags allow a Kubernetes deployment to connect to a DB2 data source; select them if you plan to leverage a DB2 data sync.

  • When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
    • "5.x.x" - Alpine
    • "5.x.x-debian" - Debian
info

Perform this step only if you are upgrading to 5.6 on a SQL Server database and didn't already make this change in a previous upgrade.

Navigate to your cinchy_instance_configs section > database_connection_string and add the following value to the end of your string: TrustServerCertificate=True

"cinchy_instance_configs": {
"database_connection_string": "User ID=cinchy;Password=<password>;Host=<db_hostname>;Port=5432;Database=development;Timeout=300;Keepalive=300;TrustServerCertificate=True"
}
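Appending the setting can be done by hand, or scripted along these lines. This is a sketch: the helper name is illustrative and the sample string below is shortened.

```shell
# Illustrative sketch: append TrustServerCertificate=True to a connection
# string unless it is already present (safe to run more than once).
add_trust_cert() {
  case "$1" in
    *TrustServerCertificate=True*) printf '%s\n' "$1" ;;
    *) printf '%s;TrustServerCertificate=True\n' "$1" ;;
  esac
}

add_trust_cert "User ID=cinchy;Password=<password>;Timeout=300;Keepalive=300"
```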
  7. Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  8. Commit all of your changes (if there were any) in each repository.
  9. If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD.
    1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
    2. Execute the following command to deploy ArgoCD:
bash deploy_argocd.sh
  10. If there were any changes to the cluster components, execute the following command from the cinchy.argocd repository:
bash deploy_cluster_components.sh
  11. If there were any changes to the Cinchy instance, execute the following command from the cinchy.argocd repository:
bash deploy_cinchy_components.sh
  12. Log in to your ArgoCD application console and refresh the apps to ensure that all changes were picked up.

Appendix A

Template changes (Kubernetes 5.6)

  • The AWS EKS version has been upgraded to support up to v1.24.
  • We've added support for AWS EKS EBS volume encryption. By default, EKS worker nodes will use the gp3 storage class.
    • For existing Cinchy environments, you must keep eks_persistent_apps_storage_class set to gp2 in your DevOps automation aws.json file.
    • If you want to move to gp3 storage, or to gp3 storage with volume encryption, you must delete the existing volumes/PVCs for the Kafka, Redis, OpenSearch, Logging Operator, and Event Listener StatefulSets. ArgoCD will then recreate the resources.
    • If your Kafka cluster pods don't come back up, restart your Kafka operators.
    • You can verify the change by running: kubectl get pvc --all-namespaces
  • The Connections app has changed from StatefulSet to Deployment. The persistence volume has changed to emptyDir.
  • We've modified the replica count from 1 to 2 for istiod and istio ingress.
  • We've disabled Istio injection in the ArgoCD namespace.
    • If this is already enabled on your environment, you may keep it as is: leave the cinchy.kubernetes/cluster_components/servicemesh/istio/istio-injection/argocd-ns.yaml file unchanged, without commenting out its contents.
  • The Istio namespace injection has been removed.
    • If this is already enabled on your environment, keep it as is; removing it will force you to redeploy all of your Kubernetes application components.
  • We've upgraded the AWS Secret Manager CSI Driver to the latest version due to crashing pods.
  • We've added support for the EKS EBS CSI driver in lieu of the in-tree EBS storage plugin.
  • We've changed the EKS Metrics server port number in order to support newer versions of Kubernetes.
  • We've pinned the AWS Terraform provider versions for all components.
  • We've installed the cluster autoscaler from local charts instead of remote charts.
  • The deprecated azurerm_sql_server Terraform resource has been changed to azurerm_mssql_server.
  • The deprecated azurerm_sql_database resource has been changed to azurerm_mssql_database.
  • The deprecated azurerm_sql_failover_group resource has been changed to azurerm_mssql_failover_group.
  • The deprecated azurerm_sql_firewall_rule resource has been changed to azurerm_mssql_firewall_rule.