5.19 (Kubernetes)
Release Notes
v5.19 introduces a new platform component and requires changes to your deployment.json configuration:
- Cinchy MCP Server: A new `cinchy-mcp` component is added to the platform, implementing the Model Context Protocol (MCP). See New Component: Cinchy MCP Server for deployment details.
- Database migration: `CinchyUpgrade131` (model version 1.0.131) provisions two built-in OAuth clients (`cinchy_mcp`, `cinchy_mcp_service`) and a new `mcp_api` scope automatically on startup.
If you are upgrading from a version earlier than v5.17.2, this release also carries forward the following infrastructure changes first introduced in v5.17:
- Kafka Cluster: Upgraded to version 4.0.0
- Strimzi Kafka Operator: Upgraded to version 0.49.0
- Amazon Machine Image (AMI): AL2023_x86_64_STANDARD support updated for EKS 1.33
- Hashicorp AWS Provider: Upgraded to version ~> 5.0
- Hashicorp Helm Provider: Resolved breaking-change compatibility with version 3.0 (applies to versions <= 2.17.0)
Breaking Changes
If upgrading from before v5.17.2, the Kafka 4.0.0 upgrade introduces breaking changes that require a complete Kafka cluster redeployment. You must remove the existing Kafka operator and Kafka cluster before proceeding — see Remove Existing Kafka Components.
If you are upgrading from v5.17.2 or later, these breaking changes do not apply.
Supported Kubernetes Versions
- Amazon Elastic Kubernetes Service (EKS): 1.33 and 1.34
- Azure Kubernetes Service (AKS): 1.33.5 and 1.34.5
The patch version (the third number — e.g., .5 in 1.33.5) for AKS may differ from what is listed above. Azure controls when patch releases are made available, and the exact version in your region may be slightly ahead or behind. Use the closest available patch for your target minor version.
Prerequisites
If you have made custom changes to your deployment file structure, contact Support before upgrading your environments.
- Download the latest Cinchy artifacts from the Cinchy Releases Table > Kubernetes Artifacts column. For this upgrade, download `Cinchy Kubernetes Deployment Template v5.19.0.zip`.
Depending on your current version, you may also need to run one or both of the following upgrade scripts:
| Current Version | Run the 5.2 Upgrade Script | Run the 5.5 Upgrade Script |
|---|---|---|
| 5.0 | Yes | Yes |
| 5.1 | Yes | Yes |
| 5.2 | No | Yes |
| 5.3 | No | Yes |
| 5.4 | No | Yes |
| 5.5 | No | No |
| 5.6 | No | No |
| 5.7 | No | No |
| 5.8 | No | No |
| 5.9 | No | No |
| 5.10 | No | No |
| 5.11 | No | No |
| 5.12 | No | No |
| 5.13 | No | No |
| 5.14 | No | No |
| 5.15 | No | No |
| 5.16 | No | No |
| 5.17 | No | No |
| 5.18 | No | No |
| 5.19 | No | No |
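The table above reduces to a simple rule keyed on your current major.minor version. As a convenience, that rule can be scripted; a minimal sketch (the `scripts_needed` helper name is illustrative, not part of the Cinchy tooling):

```shell
# Decide which upgrade scripts to run, mirroring the table above.
scripts_needed() {
  case "$1" in
    5.0|5.1)     echo "run the 5.2 and 5.5 upgrade scripts" ;;
    5.2|5.3|5.4) echo "run the 5.5 upgrade script" ;;
    *)           echo "no upgrade scripts required" ;;
  esac
}

scripts_needed 5.12   # prints: no upgrade scripts required
```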
Configure for the Latest Version
Clean Existing Repositories
- Navigate to your cinchy.argocd repository and remove all existing folder structures except for the `.git` folder and any custom modifications you have made.
- Navigate to your cinchy.kubernetes repository and remove all existing folder structures except for the `.git` file.

  Caution: If `cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml` exists and is not commented out, do not modify it. Doing so will delete your ArgoCD namespace, requiring a full removal of all Kubernetes resources and redeployment.
- Navigate to your cinchy.terraform repository and remove all existing folder structures except for the `.git` file.
- Navigate to your cinchy.devops.automation repository and remove all existing folder structures except for the `.git` file and your deployment.json configuration file.
Extract Kubernetes Template
- Extract `Cinchy Kubernetes Deployment Template v5.19.0.zip` and copy the contents into their respective repositories: cinchy.kubernetes, cinchy.argocd, cinchy.terraform, and cinchy.devops.automation.
Configure Secrets
- If your environments were not previously configured with Azure Key Vault or AWS Secrets Manager and you are enabling them during this upgrade, ensure the required secrets are created. Otherwise, no action is needed.
Update Deployment Configuration
Repeat these steps each time you perform an AKS/EKS version upgrade using the procedure below.
- Compare the new `aws.json`/`azure.json` configuration files with your existing `deployment.json` and incorporate any additional fields, including the new MCP fields described in New Component: Cinchy MCP Server below.
- Update the Kubernetes version in your `deployment.json`. To upgrade AKS/EKS to version 1.33 or 1.34, follow the sequential upgrade path, advancing through each minor version incrementally. For example, upgrading from 1.31 requires sequential upgrades through 1.32, 1.33, and then 1.34.
- Open a terminal from the cinchy.devops.automations directory and run:

  ```
  dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  ```
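The sequential upgrade path described above can be enumerated mechanically. A minimal sketch (the `hops` helper is illustrative, not part of the Cinchy tooling):

```shell
# List the minor-version hops required to go from the current
# Kubernetes version to the target, one minor version at a time.
hops() {
  local v out=""
  for ((v = ${1#1.} + 1; v <= ${2#1.}; v++)); do
    out="$out 1.$v"
  done
  echo "${out# }"
}

hops 1.31 1.34   # prints: 1.32 1.33 1.34
```

Each hop requires its own `deployment.json` update and a full run of the DevOps Automations tool.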
Remove Existing Kafka Components
Skip this section if you are upgrading from v5.17.2 or later. Kafka 4.0.0 was introduced in v5.17.2 and the removal was already performed during that upgrade.
Due to breaking changes in Kafka 4.0.0, you must remove the existing Kafka operator and cluster before proceeding.
This process requires downtime. Plan your upgrade accordingly and perform this step during a maintenance window.
This is a one-time step. You only need to remove the Kafka components once — during your first upgrade that includes Kafka 4.0.0, regardless of how many Kubernetes version hops you perform.
For example, if you are upgrading your Kubernetes cluster from 1.32 → 1.33 → 1.34, perform this Kafka removal only when upgrading to 1.33. When you subsequently upgrade to 1.34, skip this section entirely.
- Remove the Kafka operator:

  ```
  kubectl delete app strimzi-kafka-operator -n argocd
  ```

  Expected output:

  ```
  application.argoproj.io "strimzi-kafka-operator" deleted
  ```
- Remove the Kafka cluster:

  ```
  kubectl delete app kafka-cluster -n argocd
  ```

  Expected output:

  ```
  application.argoproj.io "kafka-cluster" deleted
  ```
- Verify that both `strimzi-kafka-operator` and `kafka-cluster` have been removed from ArgoCD:

  ```
  kubectl get app -n argocd
  ```

  Confirm that neither application appears in the output.
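The final verification can be scripted by scanning the `kubectl get app -n argocd` output for leftovers. A sketch (the `kafka_apps_remaining` helper name is illustrative; pipe the real command's output into it):

```shell
# Print any leftover Kafka application names, or "none remaining".
# Usage: kubectl get app -n argocd | kafka_apps_remaining
kafka_apps_remaining() {
  grep -E 'strimzi-kafka-operator|kafka-cluster' || echo "none remaining"
}
```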
Commit and Deploy
- Commit all changes to Git across the relevant repositories.
- If changes were made to your cinchy.argocd repository, redeploy ArgoCD. Launch a terminal from the root of the cinchy.argocd repository and run:

  ```
  bash deploy_argocd.sh
  ```
- Verify that all ArgoCD pods are running successfully.
- Deploy or update cluster components:

  ```
  bash deploy_cluster_components.sh
  ```

  If you performed the Kafka component removal above, expected output includes:

  ```
  application.argoproj.io/kafka-cluster created
  ...
  application.argoproj.io/strimzi-kafka-operator created
  ```
- Deploy or update Cinchy components:

  ```
  bash deploy_cinchy_components.sh
  ```

  This deploys the new `cinchy-mcp` component alongside all existing platform components.
- Refresh applications in the ArgoCD console as needed.
- All users must log out and log back in to the Cinchy environment for changes to take effect.
Post-Upgrade Verification
Verify MCP Component
- Confirm the MCP pod is running:

  ```
  kubectl get pods -n <cinchy-namespace> -l app=mcp-app
  ```

  Expected output:

  ```
  NAME             READY   STATUS    RESTARTS   AGE
  mcp-app-<hash>   1/1     Running   0          2m
  ```
- Verify the MCP OAuth clients were provisioned by the database migration. In Cinchy, navigate to Cinchy System Tables > Integrated Clients and confirm the following clients exist:
  - `cinchy_mcp` (Authorization Code flow)
  - `cinchy_mcp_service` (Client Credentials flow)
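The pod check can be automated in environment smoke tests. A sketch (the `mcp_pod_running` helper name is illustrative; pipe the `kubectl get pods` output above into it):

```shell
# Report whether any listed MCP pod is in the Running state.
# Usage: kubectl get pods -n <cinchy-namespace> -l app=mcp-app | mcp_pod_running
mcp_pod_running() {
  if grep -q 'Running'; then
    echo "MCP pod is running"
  else
    echo "MCP pod is NOT running"
  fi
}
```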
Verify Kafka Topics (only if upgrading from before v5.17.2)
- Confirm that all Kafka topics were created successfully:

  ```
  kubectl get kafkatopic -A
  ```

  Expected output (all topics in a `Ready` state):

  ```
  NAMESPACE   NAME                                            CLUSTER         PARTITIONS   REPLICATION FACTOR   READY
  kafka       cinchy-dev-cinchyautomationsjobqueue            kafka-cluster   1            3                    True
  kafka       cinchy-dev-connectionsjobcancellationrequests   kafka-cluster   100          3                    True
  kafka       cinchy-dev-connectionsjobqueue                  kafka-cluster   100          3                    True
  kafka       cinchy-dev-datachangenotifications              kafka-cluster   100          3                    True
  kafka       cinchy-dev-realtimedatasynctopic                kafka-cluster   100          3                    True
  ```
- If topics show `Ready: False` or are stuck in a pending state, ArgoCD may have failed to create them due to timing or configuration mismatches between Kafka versions. To resolve:
  1. Identify the problematic topic from the output above.
  2. Delete the topic resource:

     ```
     kubectl delete kafkatopic <topic-name> -n kafka
     ```

     Alternatively, delete it via the ArgoCD UI: navigate to the dashboard, locate the Kafka topic for the affected environment, and delete it. ArgoCD will automatically recreate it with the correct configuration.
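Identifying problem topics by eye gets tedious with many environments. A sketch of a filter for the `kubectl get kafkatopic -A` output above (the `not_ready_topics` helper name is illustrative):

```shell
# Print the NAME of every topic whose READY column (last field)
# is not "True". Usage: kubectl get kafkatopic -A | not_ready_topics
not_ready_topics() {
  awk 'NR > 1 && $NF != "True" { print $2 }'
}
```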
Kafka UI Connectivity Issues
After upgrading, the Kafka UI pod may retain stale connections to the previous cluster configuration, resulting in connection failures or missing topics and messages.
- Verify the status of all Kafka pods:

  ```
  kubectl get pods -n kafka
  ```
- If the Kafka UI cannot connect to the upgraded cluster, restart the pod to establish a fresh connection:

  ```
  kubectl delete pod -n kafka -l app.kubernetes.io/name=kafka-ui
  ```
- Kubernetes will automatically recreate the pod. Verify it is running:

  ```
  kubectl get pods -n kafka -l app.kubernetes.io/name=kafka-ui
  ```
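The recreated pod may take a little while to become Ready. A small retry helper can poll until the check above passes; a sketch (the `wait_for` helper and its usage are illustrative, not part of the Cinchy tooling):

```shell
# Retry an arbitrary shell command until it succeeds or the
# attempt budget runs out. Example:
#   wait_for "kubectl get pods -n kafka -l app.kubernetes.io/name=kafka-ui | grep -q Running"
wait_for() {
  local cmd=$1 tries=${2:-30} i
  for ((i = 0; i < tries; i++)); do
    eval "$cmd" && return 0
    sleep 2
  done
  return 1
}
```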
Upgrade AWS EKS and Azure AKS
The following procedure upgrades AWS EKS or Azure AKS to version 1.33 or 1.34.
Prerequisites
- AWS: Export the required credentials before proceeding.
- Azure: Run `az login` to authenticate, if required.
Procedure
Terraform operations must be run from the cluster directory:
- AWS: the `eks_cluster` folder under Terraform > AWS, in the subdirectory named after the cluster.
- Azure: the `aks_cluster` folder under Terraform > Azure, in the subdirectory named after the cluster.
- Follow the Update Deployment Configuration steps to update and apply your `deployment.json` for the target Kubernetes version.
- Initiate the upgrade. Review the proposed changes carefully, then confirm with `yes` to proceed. This performs an in-place update of the Kubernetes version without deleting or destroying data.

  ```
  bash create.sh
  ```
Before confirming, verify that the proposed changes meet your expectations and protect your database and other critical resources. This command may create, update, or destroy virtual networks, subnets, AKS/EKS clusters, and node groups. Review all changes thoroughly before proceeding.
New Component: Cinchy MCP Server
Deploying the Cinchy MCP Server is optional. If you do not plan to use the MCP Server, you can skip this section and omit the MCP-related fields from your deployment.json.
v5.19 adds cinchy-mcp as a new platform component. Before running the DevOps Automations tool, you must add the following fields to your deployment.json.
deployment.json changes
Under `image_repo_uris`, add:

```json
"mcp_image_repo_uri": "<ecr-registry>/cinchy.mcp"
```

Under each `cinchy_instance_configs.<instance_name>`, add:

```json
"mcp_image_tag": "v5.19.0"
```

Under `cinchy_instance_configs.<instance_name>.scaling_parameters`, add:

```json
"mcp": {
  "mcp_cpu_requests": "200m",
  "mcp_cpu_limits": "1",
  "mcp_memory_requests": "512Mi",
  "mcp_memory_limits": "1Gi"
}
```
Compare the new aws.json / azure.json with your existing deployment.json to identify all new fields. The values above are the recommended defaults.
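Taken together, the additions slot into `deployment.json` roughly as follows. This is a sketch of the overall shape only, using the placeholders above; your actual file contains many more fields:

```json
{
  "image_repo_uris": {
    "mcp_image_repo_uri": "<ecr-registry>/cinchy.mcp"
  },
  "cinchy_instance_configs": {
    "<instance_name>": {
      "mcp_image_tag": "v5.19.0",
      "scaling_parameters": {
        "mcp": {
          "mcp_cpu_requests": "200m",
          "mcp_cpu_limits": "1",
          "mcp_memory_requests": "512Mi",
          "mcp_memory_limits": "1Gi"
        }
      }
    }
  }
}
```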
MCP component details
| Item | Value |
|---|---|
| Container port | 8080 |
| Health endpoint | /health |
| Transport | HTTP |
| HPA | Min 1 / Max 3 replicas, CPU target 50% |
| Session storage | Redis (redis-redis-cluster), enabling session migration across replicas |
| Session affinity | Consistent hash on Mcp-Session-Id header |
Redis session storage is configured automatically using the existing `redis-redis-cluster` cluster secret; no additional configuration is required. Without Redis, sessions are stored in-memory per pod (single replica only).
The MCP server is accessed via two new VirtualService routes added before the web catchall:
| Prefix | Notes |
|---|---|
| `/.well-known/oauth-authorization-server/mcp` | Always at root; no application path prefix applied |
| `/mcp` | Proxied to the MCP container |