
Deploy Kubernetes

Introduction

This page details how to deploy Cinchy on Kubernetes. We recommend, and have documented below, doing so via Terraform and ArgoCD. This setup uses a utility to centralize and streamline your configurations.

The Terraform scripts and instructions provided enable deployment on Azure and AWS cloud environments.

Deployment prerequisites

To install Cinchy v5 on Kubernetes, you must meet the requirements below.

Common prerequisites

Whether installing on Azure or AWS, these common prerequisites are essential:

Git repository setup

  • Create four Git repositories on any Git-supporting platform like GitLab, Azure DevOps, or GitHub. These include:
    • cinchy.terraform: For Terraform configurations.
    • cinchy.argocd: For ArgoCD configurations.
    • cinchy.kubernetes: For cluster and application deployment manifests.
    • cinchy.devops.automations: For maintaining the contents of the above repositories.

Repository artifacts

  • Download and check in the artifacts for these repositories; see Accessing Cinchy Artifacts.
  • Ensure a service account has read/write permissions to these repositories.

Tools

Docker images

  • To use the Cinchy Docker images, pull them here.
  • From Cinchy v5.4, choose between Alpine or Debian image tags (see the example pull commands below):
    • "5.x.x" - Alpine
    • "5.x.x-debian" - Debian (select for DB2 data sync)

Domain and SSL Certificate

  • A single domain is required for accessing various applications.
  • Choose between path-based or subdomain-based routing.
  • Ensure an SSL certificate for the cluster (a wildcard certificate is recommended for subdomain routing); a Self-Signed SSL Option is also available.

Sample routing options

See below for routing options for multiple instances.

Application | Path Based Routing | Subdomain Based Routing
Cinchy 1 (DEV) | domain.com/dev | dev.mydomain.com
Cinchy 2 (QA) | domain.com/qa | qa.mydomain.com
Cinchy 3 (UAT) | domain.com/uat | uat.mydomain.com
ArgoCD | domain.com/argocd | cluster.mydomain.com/argocd
Grafana | domain.com/grafana | cluster.mydomain.com/grafana
OpenSearch | domain.com/dashboard | cluster.mydomain.com/dashboard

AWS requirements for Cinchy v5

Terraform Requirements

  • S3 Bucket: Set up an S3 bucket to store the Terraform state.
  • AWS CLI: Install the AWS CLI on the deployment machine and configure it with the correct profile.
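
As a minimal sketch (the profile name, bucket name, and region are placeholders, and your cinchy.terraform configuration may already define the state backend), preparing these prerequisites from the command line could look like this:

aws configure --profile cinchy-deploy                                                 # set the access key, secret key, and default region for the deployment profile
aws s3 mb s3://<terraform-state-bucket> --region <region> --profile cinchy-deploy     # bucket that will hold the Terraform state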

VPC Options

Using an existing VPC

  • VPC Setup: Ensure the VPC has a suitable range, like a CIDR with /21 for about 2048 IP addresses.
  • Subnets: Create 3 Subnets, one per Availability Zone (AZ), each with sufficient range (e.g., CIDR with /23 for 512 IP addresses).
  • NAT Gateway: Required for private subnets to enable node group registration with the EKS cluster.
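
As a concrete illustration (the VPC ID, CIDR blocks, and Availability Zones below are placeholders), a /21 VPC such as 10.10.0.0/21 can be divided into three /23 subnets, one per AZ:

aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.10.0.0/23 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.10.2.0/23 --availability-zone us-east-1b
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.10.4.0/23 --availability-zone us-east-1c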

Creating a new VPC

  • Resource Provisioning: All necessary resources will be provisioned automatically.
  • vCPU Availability: Verify the "Running On-Demand All Standard" vCPUs limit can support a minimum of 24 vCPUs.
  • IAM User Account: Ensure the account has privileges to create resources in any existing VPC or to create a new VPC.
  • SSL Certificate: Import an SSL certificate into AWS Certificate Manager, or request a new one via AWS Certificate Manager. For importing, prepare the PEM-encoded certificate body and private key. Learn more about importing certificates.
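
For reference, importing a PEM-encoded certificate with the AWS CLI might look like this (file names and region are placeholders):

aws acm import-certificate \
  --certificate fileb://certificate.pem \
  --private-key fileb://private-key.pem \
  --certificate-chain fileb://certificate-chain.pem \
  --region <region>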

EKS prerequisite

For AWS, you must install the eksctl CLI.
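
A minimal sketch of installing and verifying eksctl, assuming Homebrew is available (see the eksctl documentation for other installation methods and platforms):

brew install eksctl   # one installation option; release binaries are also published on GitHub
eksctl version        # confirm the CLI is available on the PATH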

tip

Tips for Success: Ensure consistent region configuration across your SSL Certificate, Terraform bucket, and the deployment.json in the subsequent steps.

Initial configuration

The initial setup involves configuring the deployment.json file. Follow these steps:

Configure the deployment.json File

  1. Access the Repository: Go to your cinchy.devops.automations repository. You'll find aws.json and azure.json files there.
  2. Choose the File: Depending on whether you are deploying to AWS or Azure, select the respective file (aws.json or azure.json). Copy it and rename it to deployment.json (or <cluster name>.json) in the same directory.
  3. Edit the Configuration: The deployment.json file contains infrastructure resource configurations and settings for Cinchy instances. Each configuration property includes comments describing its purpose and instructions for completion.
  4. Configure and Save: Follow the in-file guidance to adjust the properties.
  5. Commit Changes: After configuring, commit and push your changes to the repository.

Best practices
  • Revisiting Configuration: You can return to this step anytime during deployment to update configurations. Re-run through the guide sequentially after any changes.
  • Handling Credentials: The deployment.json requires your repository username and password. For GitHub and similar platforms, using a Personal Access Token is recommended to avoid credential retrieval errors in ArgoCD. Check your credentials in ArgoCD Settings post-deployment. Further information on handling private repositories in ArgoCD can be found here.

Execute cinchy.devops.automations

This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.

  1. From a shell/terminal, navigate to the cinchy.devops.automations directory location and execute the following command:
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  2. If the file created in the "Configure the deployment.json File" step 2 has a name other than deployment.json, replace the reference in the command with the correct file name.

  3. The console output should end with the following message:

Completed successfully

Terraform deployment

The following steps detail how to deploy Terraform on AWS and Azure.

The following section provides details for AWS deployment:

Cloud provider authentication

  1. Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repository.

  2. Run the following commands to authenticate the session:

export AWS_DEFAULT_REGION=REGION
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_ACCESS_KEY

Deploy the infrastructure

Cinchy.terraform repository structure - AWS

Within the Terraform > AWS directory, a new folder named eks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.

To perform terraform operations, the cluster directory must be the working directory during execution. This applies to everything within step 4 of this guide.

  1. Execute the following command to create the cluster:
bash create.sh

  2. Type yes when prompted to apply the terraform changes.

The resource creation process can take about 15 to 20 minutes. At the end of the execution there will be a section with the following header:

Output variables

On AWS, this section will contain the following values: Aurora RDS Server Host, Aurora RDS Password, and the Cinchy S3 bucket access policy ARN.

These variable values are required to update the connection string within the deployment.json file (or equivalent) in the cinchy.devops.automations repository.

AWS SSH keys

  1. The SSH key to connect to the Kubernetes nodes is maintained within the terraform state and can be retrieved by executing the following command:
terraform output -raw private_key
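
The key can then be written to a file and used with ssh. For example (the file name is arbitrary, and ec2-user is an assumption that applies to Amazon Linux based nodes):

terraform output -raw private_key > eks-node-key.pem
chmod 600 eks-node-key.pem
ssh -i eks-node-key.pem ec2-user@<node-private-ip>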

Update the deployment.json

The following section pertains to updating the deployment.json file.

Update the database connection string

  1. Navigate to the deployment.json (created in Configure the JSON deployment file) > cinchy_instance_configs section.
  2. Each object within represents an instance that will be deployed on the cluster. Each instance configuration has a database_connection_string property. This has placeholders for the host name and password that must be updated using output variables from the previous section.
  3. The Cinchy S3 bucket access policy ARN needs to be updated within aws.json against the cinchy_s3_bucket_access_policy.
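
As an illustrative fragment for step 2 only (the property names follow those referenced in this guide, and the placeholder tokens and connection string format in your copy of the file may differ), the updated section could look like this once the host and password placeholders are replaced with the Terraform output values:

"cinchy_instance_configs": {
  "dev": {
    "database_connection_string": "Host=<aurora_rds_server_host>;Database=<database_name>;Username=<username>;Password=<aurora_rds_password>;"
  }
}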

Enable AWS Secret Manager

If enable_aws_secret_manager=true is set in aws.json, the secret files listed below are generated.

Creating AWS Secrets Using the Generated Files

  1. Open AWS Secret Manager.
  2. Select Secrets > Store a new secret > Other type of secret.
  3. Use jmesPath as the Key/value. This is specified in cinchy.kubernetes\platform_components\APP_NAME\APP_NAME-aws-secretmanager-appsettings.yaml. For instance, under jmesPath:, you might see:
    • Path: formssecretconfig. The Secret value should be the content of appsettings.json, config.json, or metadata.xml.

The following table outlines the standard naming conventions for creating AWS Secret Manager secrets for each component:

Key/Value | Secret Value | Secret Name
connectionssecretconfig | config.json content | connections-secret-config-<cinchy_instance_name>
connectionssecretappsettings | appsettings.json content | connections-secret-appsettings-<cinchy_instance_name>
eventlistenersecretappsettings | appsettings.json content | event-listener-secret-appsettings-<cinchy_instance_name>
formssecretconfig | appsettings.json content | forms-secret-config-<cinchy_instance_name>
idpsecretappsettings | appsettings.json content | idp-secret-appsettings-<cinchy_instance_name>
idpsecretmetadata | SSO metadata.xml content | idp-secret-appsettings-<cinchy_instance_name>
maintenanceclisecretappsettings | appsettings.json content | maintenance-cli-secret-appsettings-<cinchy_instance_name>
workersecretappsettings | appsettings.json content | worker-secret-appsettings-<cinchy_instance_name>
websecretappsettings | appsettings.json content | web-secret-appsettings-<cinchy_instance_name>
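
As an alternative to the console steps above, the same key/value secret can be created from the command line. A minimal sketch, assuming jq is installed and using the Connections appsettings entry from the table as an example (file names and the instance name are placeholders):

# Wrap the file content in a JSON object whose key matches the Key/Value column
jq -n --rawfile content appsettings.json '{"connectionssecretappsettings": $content}' > secret-payload.json
# Store it under the naming convention from the table
aws secretsmanager create-secret \
  --name connections-secret-appsettings-<cinchy_instance_name> \
  --secret-string file://secret-payload.json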

Execute cinchy.devops.automations

This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.

  1. From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the following command:
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  2. If the file created in section 3 has a name other than deployment.json, replace the reference in the command with the correct file name.

  3. The console output should end with the following message:

Completed successfully

  4. The updates must be committed to Git before proceeding to the next step.

Connect with kubectl

Update the Kubeconfig

AWS

  1. From a shell/terminal run the following command, replacing <region> and <cluster_name> with the accurate values for those placeholders:
aws eks update-kubeconfig --region <region> --name <cluster_name>

Verify the connection

  1. Verify that the connection has been established and the context is the correct cluster by running the following command:
kubectl config get-contexts

Deploy and access ArgoCD

In this step, you will deploy and access ArgoCD.

Deploy ArgoCD

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
  2. Execute the following command to deploy ArgoCD:
bash deploy_argocd.sh
  3. Monitor the pods within the ArgoCD namespace by running the following command every 30 seconds until they all move into a healthy state:
kubectl get pods -n argocd

Access ArgoCD

  1. Launch a new shell/terminal with the working directory set to the root of the cinchy.argocd repository.
  2. Execute the following command to access ArgoCD:
bash access_argocd.sh

This script creates a port forward using kubectl to enable ArgoCD to be accessed at http://localhost:9090/argocd.

The credentials for ArgoCD's portal are output at the start of the access_argocd script execution in Base64. The Base64 value must be decoded to get the login credentials to use for the http://localhost:9090/argocd endpoint.

Deploy cluster components

In this step, you will deploy your cluster components.

Deploy ArgoCD applications

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
  2. Execute the following command to deploy the cluster components using ArgoCD:
bash deploy_cluster_components.sh
  3. Navigate to ArgoCD at http://localhost:9090/argocd and log in. Wait until all components are healthy (this may take a few minutes).
Tips for Success:
  • If your pods are degraded or fail to sync, refresh or resynchronize your components. You can also delete pods and ArgoCD will automatically spin them back up for you.
  • Check that ArgoCD is pulling from your Git repository by navigating to your Settings.
  • If your components are failing upon attempting to pull an image, refer to your deployment.json to check that each component is set to the correct version number.

Update the DNS

  1. Execute the following command to get the External IP used by the Istio ingress gateway.
kubectl get svc -n istio-system
  2. DNS entries must be created using the External IP for any subdomains / primary domains that will be used, including OpenSearch, Grafana, and ArgoCD.
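
For example, the external address can be read directly from the Istio ingress gateway service (the service name istio-ingressgateway is the common default and may differ in your cluster; on AWS this is typically a load balancer hostname rather than an IP):

kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0]}'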

Access OpenSearch

The default path to access OpenSearch, unless you have configured it otherwise in your deployment.json, is <baseurl>/dashboard

info

The default credentials for accessing OpenSearch are admin/admin. We recommend that you change these credentials the first time you log in to OpenSearch.

To change the default credentials for Cinchy v5.4+, follow the documentation here.

To change the default credentials and/or add new users for all other deployments, follow this documentation or navigate to Settings > Internal Roles in OpenSearch.

OpenSearch retention policy

The JSON snippet below outlines an example retention policy for OpenSearch.

{
  "policy": {
    "policy_id": "2-day-retention",
    "description": "A simple default policy that deletes after 2 days.",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          {
            "state_name": "delete",
            "conditions": {
              "min_index_age": "2d"
            }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [
          {
            "timeout": "1h",
            "retry": {
              "count": 1,
              "backoff": "exponential",
              "delay": "1h"
            },
            "delete": {}
          }
        ],
        "transitions": []
      }
    ],
    "ism_template": [
      {
        "index_patterns": [
          "development*"
        ],
        "priority": 1
      }
    ]
  }
}
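
A policy such as this can be applied, for example, through the OpenSearch ISM API; the host, credentials, and file name below are placeholders for your own cluster URL, login, and saved policy file:

curl -k -u admin:<password> -X PUT "https://<opensearch-host>/_plugins/_ism/policies/2-day-retention" \
  -H "Content-Type: application/json" \
  -d @retention-policy.json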

Access Grafana

The default path to access Grafana, unless you have configured it otherwise in your deployment.json, is <baseurl>/grafana

info

The default username is admin. The default password for accessing Grafana can be found by doing a search of adminPassword within the following path: cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml

We recommend that you change these credentials the first time you access Grafana. You can do so through the admin profile once logged in.
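
For example, the value can be located with a simple search:

grep -n "adminPassword" cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml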

Use MSSQL databases

If you are using MSSQL databases instead of Aurora on AWS, manual database creation is required. You must also create databases for additional Cinchy instances. To do this, follow the steps below:

  1. Run a pod containing SQL Server CLI using the following command:
kubectl run mssql-tools --rm --tty -it --restart="Never" --namespace default --image mcr.microsoft.com/mssql-tools bash
  2. Connect to the database using the SQL Server CLI:
sqlcmd -S <hostname> -U sa -P <password>
  3. After you connect, execute the following SQL command to create a new database:
-- Create a new database
CREATE DATABASE {YourDatabaseName};

Deploy Cinchy components

In this step, you will deploy your Cinchy components.

Deploy ArgoCD application

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
  2. Execute the following command to deploy the Cinchy application components using ArgoCD:
bash deploy_cinchy_components.sh
  3. Navigate to ArgoCD at http://localhost:9090/argocd and log in. Wait until all components are healthy (this may take a few minutes).

  4. You will be able to access ArgoCD through the URL that you configured in your deployment.json, as long as you created a DNS entry for it in step 8.2.

info

You have now finished the deployment steps required for Cinchy. Navigate to your configured domain URL to verify that you can login using the default username (admin) and password (cinchy).

Troubleshooting

  • If ArgoCD Application Sync is stuck waiting for PreSync jobs to complete, you can run the below command to restart the application controller.
kubectl rollout restart sts argocd-application-controller -n argocd