v5.11 (Kubernetes)
Change Log
- The "cinchy_s3_bucket_access_policy" now adds a prefix of the cluster name for cases where multiple clusters are being made. This is so that they do not interfere with each other.
- Various changes were made to the deployment files and templates in association with the Cinchy Automations platform tool:
  - Cinchy Orchestration Scheduler
  - Cinchy Automation Runner
- The following lines were added to the deployment template:

```json
"image_repo_uris": {
    "orchestration_scheduler_image_repo_uri": " ",
    "orchestration_automationrunner_image_repo_uri": " "
}
```

```json
"cinchy_instance_configs": {
    "orchestration_scheduler_image_tag": " ",
    "orchestration_automationrunner_image_tag": " "
}
```
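For reference, a populated version of these fields might look like the sketch below. The registry URIs and tags are illustrative placeholders only, not values shipped with the release; use the repository and tag that apply to your own environment.

```json
"image_repo_uris": {
    "orchestration_scheduler_image_repo_uri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/cinchy/orchestration-scheduler",
    "orchestration_automationrunner_image_repo_uri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/cinchy/orchestration-automationrunner"
},
"cinchy_instance_configs": {
    "orchestration_scheduler_image_tag": "5.11.0",
    "orchestration_automationrunner_image_tag": "5.11.0"
}
```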
Upgrading on Kubernetes
To upgrade the components, follow the instructions below in the order presented.
Prerequisites
If you have made custom changes to your deployment file structure, please contact your Support team before you upgrade your environments.
- Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column. For this upgrade, please download the Cinchy v5.11 k8s-template.zip file.
Depending on your current version, you may need to:
- Run the 5.2 upgrade script
- Run the 5.5 upgrade script
- Make changes to your connection string

If you are upgrading from 5.0-5.3 to 5.11 on a SQL Server database and haven't already done so, add TrustServerCertificate=True to your connection string to bypass certificate chain validation. For a Kubernetes deployment, you can add this value to the database_connection_string in your deployment.json file:

```json
"cinchy_instance_configs": {
    "database_connection_string": "User ID=cinchy;Password=<password>;Host=<db_hostname>;Port=5432;Database=development;Timeout=300;Keepalive=300;TrustServerCertificate=True"
}
```
| Current Version | Run the 5.2 Upgrade Script | Run the 5.5 Upgrade Script | Connection String Changes (SQL Server DB) |
|---|---|---|---|
| 5.0 | Yes | Yes | Yes |
| 5.1 | Yes | Yes | Yes |
| 5.2 | X | Yes | Yes |
| 5.3 | X | Yes | Yes |
| 5.4 | X | Yes | X |
| 5.5 | X | X | X |
| 5.6 | X | X | X |
| 5.7 | X | X | X |
| 5.8 | X | X | X |
| 5.9 | X | X | X |
| 5.10 | X | X | X |

An X indicates that the step isn't required for that version.
Configure to the newest version
Clean existing repositories
- Go to your cinchy.argocd repository. Delete all existing folder structure except for the .git folder/directory and any custom changes you may have implemented.
- Go to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git folder.

  If you have a cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file and it isn't commented out, don't change it. Changing this file will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.
- Go to your cinchy.terraform repository. Delete all existing folder structure except for the .git folder.
- Go to your cinchy.devops.automations repository. Delete all existing folder structure except for the .git folder and your deployment.json file.
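A minimal shell sketch of this cleanup, assuming a Unix-like shell and that the repositories are checked out side by side; review what will be removed before running anything, and handle cinchy.argocd by hand if it contains custom changes you want to keep:

```bash
# Remove everything except .git in each repository.
for repo in cinchy.kubernetes cinchy.terraform; do
  (cd "$repo" && find . -mindepth 1 -maxdepth 1 ! -name '.git' -exec rm -rf {} +)
done

# Keep deployment.json as well in cinchy.devops.automations.
(cd cinchy.devops.automations && \
  find . -mindepth 1 -maxdepth 1 ! -name '.git' ! -name 'deployment.json' -exec rm -rf {} +)
```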
Download k8s template
- Download and open the new Cinchy v5.11 k8s-template.zip file from the Cinchy Releases table and place the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automations repositories.
- Go to the new aws.json/azure.json files and compare them with your current deployment.json file. Any additional fields in the new aws.json/azure.json files should be added to your current deployment.json (a comparison sketch follows this list).
- Update the Kubernetes version in your deployment.json. To upgrade EKS to a new version, you need to follow an upgrade sequence, installing each incremental version one by one. For example, you might need to upgrade from 1.24 to 1.25, then from 1.25 to 1.26, and finally from 1.26 to 1.27.

You may have changed the name of the deployment.json file during your original platform deployment. If so, make sure that you substitute that name wherever deployment.json appears in this document.
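One way to spot fields that exist in the new template but not in your current file is to compare the flattened key paths of the two JSON files. This is a sketch only, assuming jq is available and that your current file is named deployment.json:

```bash
# List every leaf key path in each file.
jq -r 'paths(scalars) | map(tostring) | join(".")' aws.json | sort > new-template-keys.txt
jq -r 'paths(scalars) | map(tostring) | join(".")' deployment.json | sort > current-keys.txt

# Paths prefixed with "<" exist only in the new template and should be added
# to deployment.json with values appropriate to your environment.
diff new-template-keys.txt current-keys.txt | grep '^<'
```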
Upgrade and redeploy components
- Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:

```
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```

- Commit all of your changes (if there were any) in each repository.
- If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
- Execute the following command to deploy ArgoCD:

```
powershell deploy_argocd.sh
```

- Validate that the ArgoCD pods are running and that ArgoCD has been upgraded to v2.7.6 by accessing the ArgoCD application console (see the verification sketch after this list).
- Execute the following commands to deploy the cluster components and the Cinchy components:

```
powershell deploy_cluster_components.sh
powershell deploy_cinchy_components.sh
```

- You might see a couple of ArgoCD apps out of sync. Sync them manually.
- All users must log out and back in to your Cinchy environment for the changes to take effect.
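If you prefer to verify from the command line instead of the console, the sketch below checks pod status and the server image tag, assuming ArgoCD runs in the standard argocd namespace with the standard argocd-server deployment name:

```bash
# Confirm the ArgoCD pods are running.
kubectl get pods -n argocd

# Inspect the image tag on the argocd-server deployment; it should report v2.7.6.
kubectl -n argocd get deployment argocd-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```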
Upgrade AWS EKS and Azure AKS
To upgrade the AWS EKS or Azure AKS version from 1.24 up to 1.27.x, there are two methods; which one applies depends on the status of the subnet CIDR range (the CIDR is only a blocker for Azure). If required, export your credentials for AWS or run the az login command for Azure.
- Go to your cinchy.devops.automations repository and change the AKS/EKS version in deployment.json (or <cluster name>.json) within the same directory (a version sketch follows this list).
- From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the following command:

```
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```
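As an illustration only, the edit is a single version value in your cluster configuration. The field name below is hypothetical; use whichever field your deployment.json (or <cluster name>.json) actually defines for the cluster version, and step through versions one at a time as described above:

```json
{
    "kubernetes_version": "1.25"
}
```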
AWS - Cinchy.terraform repository structure
The AWS deployment updates a folder named eks_cluster in the Terraform > AWS directory. Within that directory is a subdirectory with the same name as the created cluster.
To perform Terraform operations, the cluster directory must be the working directory during execution.
Azure - Cinchy.terraform repository structure
The Azure deployment updates a folder named aks_cluster within the Terraform > Azure directory. Within that directory is a subdirectory with the same name as the created cluster.
If required, export your credentials for AWS or run the az login command for Azure.
Run the command below to start the upgrade process. Verify the proposed changes before you answer yes; this shouldn't delete or destroy any data, as it runs an in-place deployment that updates the Kubernetes version.

```
powershell create.sh
```

Before accepting the change, verify that it meets your expectations and protects your database and any other resources. This command can create, update, or destroy the vnet, subnet, AKS cluster, and AKS node groups, so review the planned changes carefully before proceeding.
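If you want to preview exactly what will change before running create.sh, a Terraform dry run from the cluster-specific directory is one option. This is a sketch under the assumptions that the directory layout matches the structure described above and that Terraform is installed locally; replace <cluster_name> with your cluster's name and adjust the path to your repository layout:

```bash
# Work from the cluster-specific directory, for example on AWS:
cd cinchy.terraform/aws/eks_cluster/<cluster_name>

# Show what would be created, updated, or destroyed without applying anything.
terraform init
terraform plan
```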