Deploy Kubernetes

Introduction

This page details how to deploy Cinchy on Kubernetes. We recommend, and have documented below, doing this via Terraform and ArgoCD. This setup uses a utility to centralize and streamline your configurations.

The Terraform scripts and instructions provided enable deployment on Azure and AWS cloud environments.

Deployment prerequisites

To install Cinchy v5 on Kubernetes, you must meet the requirements below.

Common prerequisites

Whether installing on Azure or AWS, these common prerequisites are essential:

Git repository setup

  • Create four Git repositories on any Git-supporting platform like GitLab, Azure DevOps, or GitHub. These include:
    • cinchy.terraform: For Terraform configurations.
    • cinchy.argocd: For ArgoCD configurations.
    • cinchy.kubernetes: For cluster and application deployment manifests.
    • cinchy.devops.automations: For maintaining the contents of the above repositories.

Repository artifacts

  • Download and check in the artifacts for these repositories (see Accessing Cinchy Artifacts).
  • Ensure a service account has read/write permissions to these repositories.
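
As an example, checking the downloaded artifacts into one of these repositories could look like the following (a minimal sketch; the Git host, organization, and branch are placeholders for your own values):

# Clone the empty repository created above (URL is a placeholder for your own Git host)
git clone https://<your-git-host>/<your-org>/cinchy.kubernetes.git
cd cinchy.kubernetes
# Copy the extracted Cinchy artifact contents into this directory, then commit and push
git add .
git commit -m "Check in Cinchy artifacts"
git push origin <default-branch>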

Tools

Docker images

Domain and SSL Certificate

  • A single domain is required for accessing various applications.
  • Choose between path-based or subdomain-based routing.
  • Ensure an SSL certificate for the cluster (a wildcard certificate is recommended for subdomain routing). For a self-signed certificate, see the Self-Signed SSL Option.

Sample routing options

See below for routing options for multiple instances.

| Application | Path Based Routing | Subdomain Based Routing |
| --- | --- | --- |
| Cinchy 1 (DEV) | domain.com/dev | dev.mydomain.com |
| Cinchy 2 (QA) | domain.com/qa | qa.mydomain.com |
| Cinchy 3 (UAT) | domain.com/uat | uat.mydomain.com |
| ArgoCD | domain.com/argocd | cluster.mydomain.com/argocd |
| Grafana | domain.com/grafana | cluster.mydomain.com/grafana |
| OpenSearch | domain.com/dashboard | cluster.mydomain.com/dashboard |

AWS requirements for Cinchy v5

Terraform Requirements

  • S3 Bucket: Set up an S3 bucket to store the Terraform state.
  • AWS CLI: Install the AWS CLI on the deployment machine and configure it with the correct profile.
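
For example, configuring the CLI profile and creating the state bucket could look like the following (a minimal sketch; the profile name, bucket name, and region are placeholders for your own values):

# Configure the AWS CLI profile used for the deployment (prompts for access keys and region)
aws configure --profile <deployment-profile>
# Create the S3 bucket that will hold the Terraform state
aws s3 mb s3://<your-terraform-state-bucket> --region <region> --profile <deployment-profile>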

VPC Options

Using an existing VPC

  • VPC Setup: Ensure the VPC has a suitable range, like a CIDR with /21 for about 2048 IP addresses.
  • Subnets: Create 3 Subnets, one per Availability Zone (AZ), each with sufficient range (e.g., CIDR with /23 for 512 IP addresses).
  • NAT Gateway: Required for private subnets to enable node group registration with the EKS cluster.

Creating a new VPC

  • Resource Provisioning: All necessary resources will be provisioned automatically.
  • vCPU Availability: Verify the "Running On-Demand All Standard" vCPUs limit can support a minimum of 24 vCPUs.
  • IAM User Account: Ensure the account has privileges to create resources in any existing VPC or to create a new VPC.
  • SSL Certificate: Import an SSL certificate into AWS Certificate Manager, or request a new one via AWS Certificate Manager. For importing, prepare the PEM-encoded certificate body and private key. Learn more about importing certificates.

EKS prerequisite

For AWS, you must install the eksctl CLI.
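
A quick way to confirm this prerequisite (and your AWS CLI authentication) on the deployment machine is shown below; install eksctl via your package manager or the eksctl releases page if the first command fails.

# Confirm the eksctl CLI is installed and the AWS session is authenticated
eksctl version
aws sts get-caller-identity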

tip

Tips for Success: Ensure consistent region configuration across your SSL Certificate, Terraform bucket, and the deployment.json in the subsequent steps.

Initial configuration

The initial setup involves configuring the deployment.json file. Follow these steps:

Configure the deployment.json File

  1. Access the Repository: Go to your cinchy.devops.automations repository. You'll find aws.json and azure.json files there.
  2. Choose the File: Depending on whether you are deploying to AWS or Azure, select the respective file (aws.json or azure.json). Copy it and rename it to deployment.json (or <cluster name>.json) in the same directory.
  3. Edit the Configuration: The deployment.json file contains infrastructure resource configurations and settings for Cinchy instances. Each configuration property includes comments describing its purpose and instructions for completion.
  4. Configure and Save: Follow the in-file guidance to adjust the properties.
  5. Commit Changes: After configuring, commit and push your changes to the repository.
tip

Tips for Success:

  • Revisiting Configuration: You can return to this step anytime during deployment to update configurations. Re-run through the guide sequentially after any changes.
  • Handling Credentials: The deployment.json requires your repository username and password. For GitHub and similar platforms, using a Personal Access Token is recommended to avoid credential retrieval errors in ArgoCD. Check your credentials in ArgoCD Settings post-deployment. Further information on handling private repositories in ArgoCD can be found here.

Execute cinchy.devops.automations

This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.

  1. From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the following command:
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  2. If the file created in "Configure the deployment.json File" has a name other than deployment.json, replace the reference in the command with the correct file name.

  3. The console output should end with the following message:

Completed successfully

Terraform deployment

The following steps detail how to deploy Terraform on AWS and Azure.

The following section provides details for AWS deployment:

Cloud provider authentication

  1. Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repository.

  2. Run the following commands to authenticate the session:

export AWS_DEFAULT_REGION=REGION
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_ACCESS_KEY

Deploy the infrastructure

Cinchy.terraform repository structure - AWS

Within the Terraform > AWS directory, a new folder named eks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.

To perform terraform operations, the cluster directory must be the working directory during execution. This applies to everything within step 4 of this guide.

  1. Execute the following command to create the cluster:
bash create.sh
tip

Make note of the output variables section.

On AWS, this section will contain the following values: the Aurora RDS Server Host, the Aurora RDS Password, and the Cinchy S3 bucket access policy ARN.

These variable values are required to update the connection string within the deployment.json file (or equivalent) in the cinchy.devops.automations repository.

  2. Type yes when prompted to apply the Terraform changes.

The resource creation process can take about 15 to 20 minutes. At the end of the execution, the output variables section noted in the tip above will be displayed.

AWS SSH keys

  1. The SSH key to connect to the Kubernetes nodes is maintained within the terraform state and can be retrieved by executing the following command:
terraform output -raw private_key
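
If you need the key as a file for an SSH client, you can redirect the output as sketched below (the file name is a placeholder, and the node username and reachability depend on your AMI and network setup):

# Save the private key to a file and restrict its permissions
terraform output -raw private_key > <cluster_name>-key.pem
chmod 600 <cluster_name>-key.pem
# Example connection to a worker node (user and access path depend on your setup)
ssh -i <cluster_name>-key.pem ec2-user@<node-private-ip>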

Update the deployment.json

The following section pertains to updating the deployment.json file.

Update the database connection string

  1. Navigate to the deployment.json (created in Configure the deployment.json File) > cinchy_instance_configs section.
  2. Each object within represents an instance that will be deployed on the cluster. Each instance configuration has a database_connection_string property, with placeholders for the host name and password that must be updated using the output variables from the previous section.
  3. The Cinchy S3 bucket access policy ARN must be updated within aws.json against the cinchy_s3_bucket_access_policy property.
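
The values referenced above come from the Terraform output. A convenient way to list them again is sketched below (run from the cluster directory; the exact output variable names are defined in the cinchy.terraform repository):

# Re-display all Terraform output variables from the last apply
terraform output
# Copy the Aurora RDS host and password into the database_connection_string placeholders,
# and the S3 bucket access policy ARN into cinchy_s3_bucket_access_policy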

Enable AWS Secrets Manager

If enable_aws_secret_manager=true is set in aws.json, the secret files listed below are generated.

Creating AWS Secrets Using the Generated Files

  1. Open AWS Secrets Manager.
  2. Select Secrets > Store a new secret > Other type of secret.

Cinchy Environment Settings

Follow the steps below to store the Cinchy environment settings:

  1. Add the following keys and values:
| Key | Value |
| --- | --- |
| encryptionkey | Your encryption key, set to a random 32-byte value in a 64-character hexadecimal string format. This should only be set on initial environment creation and can't be rotated thereafter. |
| connectionspassword | The connections@cinchy.com service account user password. A unique value should be set for each environment. The password can be rotated per your password policy. |
| workerpassword | The worker@cinchy.com service account user password. A unique value should be set for each environment. The password can be rotated per your password policy. |
| eventlistenerpassword | The eventlistener@cinchy.com service account user password. A unique value should be set for each environment. The password can be rotated per your password policy. |
| maintenancepassword | The maintenance@cinchy.com service account user password. A unique value should be set for each environment. The password can be rotated per your password policy. |
| cinchyautomationspassword | The automations@cinchy.com user account password. A unique value should be set for each environment. The password can be rotated per your password policy. |
  2. Secret Name: cinchy-environment-settings-<cinchy_instance_name>, using your own instance name where indicated.
  3. Update other fields as needed.
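
If you prefer the AWS CLI over the console, an equivalent way to store this secret is sketched below (assuming the key/value pairs above have been saved to a local JSON file; the file name is a placeholder):

# Create the environment-settings secret from a local JSON file of key/value pairs
aws secretsmanager create-secret \
  --name cinchy-environment-settings-<cinchy_instance_name> \
  --secret-string file://<environment-settings>.json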

Additional Secrets

Use the below to create additional secrets, using your own instance name where indicated.

  • The initial Secret Value will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets
| Key | Secret Name |
| --- | --- |
| orchestrationautomationrunnersecretappsettings | orchestration-automationrunner-secret-appsettings-<cinchy_instance_name> |
| orchestrationautomationrunnersecretconfig | orchestration-automationrunner-secret-config-<cinchy_instance_name> |
| orchestrationschedulersecretconfig | orchestration-scheduler-secret-config-<cinchy_instance_name> |
| connectionssecretconfig | connections-secret-config-<cinchy_instance_name> |
| connectionssecretappsettings | connections-secret-appsettings-<cinchy_instance_name> |
| eventlistenersecretappsettings | event-listener-secret-appsettings-<cinchy_instance_name> |
| formssecretconfig | forms-secret-config-<cinchy_instance_name> |
| maintenanceclisecretappsettings | maintenance-cli-secret-appsettings-<cinchy_instance_name> |
| workersecretappsettings | worker-secret-appsettings-<cinchy_instance_name> |
| websecretappsettings | web-secret-appsettings-<cinchy_instance_name> |

If SSO is enabled:

| Key | Secret Value | Secret Name |
| --- | --- | --- |
| idpsecretappsettings | The content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets | idp-secret-appsettings-<cinchy_instance_name> |
| idpsecretmetadata | SSO metadata.xml content | idp-secret-appsettings-<cinchy_instance_name> |

Execute cinchy.devops.automations

This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.

  1. From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the following command:
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  2. If the file created in Configure the deployment.json File has a name other than deployment.json, replace the reference in the command with the correct file name.

  3. The console output should end with the following message:

Completed successfully

  4. The updates must be committed to Git before proceeding to the next step.

Connect with kubectl

Update the Kubeconfig

AWS

  1. From a shell/terminal run the following command, replacing <region> and <cluster_name> with the accurate values for those placeholders:
aws eks update-kubeconfig --region <region> --name <cluster_name>

Verify the connection

  1. Verify that the connection has been established and the context is the correct cluster by running the following command:
kubectl config get-contexts
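
If the new cluster isn't already the active context, you can switch to it and confirm node connectivity:

# Set the active context to the new cluster and confirm the nodes are reachable
kubectl config use-context <context_name_from_previous_output>
kubectl get nodes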

Deploy and access ArgoCD

In this step, you will deploy and access ArgoCD.

Deploy ArgoCD

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
  2. Execute the following command to deploy ArgoCD:
bash deploy_argocd.sh
  3. Monitor the pods within the argocd namespace by running the following command every 30 seconds until they all reach a healthy state:
kubectl get pods -n argocd

Access ArgoCD

  1. Launch a new shell/terminal with the working directory set to the root of the cinchy.argocd repository.
  2. Execute the following command to access ArgoCD:
bash access_argocd.sh

This script creates a port forward using kubectl so that ArgoCD can be accessed at http://localhost:9090/argocd.

The credentials for the ArgoCD portal are output in Base64 at the start of the access_argocd script execution. Decode the Base64 value to get the login credentials for the http://localhost:9090/argocd endpoint.
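
A quick way to decode the Base64 value printed by the script is shown below; alternatively, assuming a standard ArgoCD installation, the initial admin password can be read directly from the cluster.

# Decode the Base64 password printed by access_argocd.sh
echo '<base64_value_from_script_output>' | base64 --decode
# Or read the initial admin secret that a standard ArgoCD install creates
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 --decode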

Deploy cluster components

In this step, you will deploy your cluster components.

Deploy ArgoCD applications

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
  2. Execute the following command to deploy the cluster components using ArgoCD:
bash deploy_cluster_components.sh
  3. Navigate to ArgoCD at http://localhost:9090/argocd and log in. Wait until all components are healthy (this may take a few minutes).

Tips for Success:

  • If your pods are degraded or fail to sync, refresh or resynchronize your components. You can also delete pods and ArgoCD will automatically spin them back up for you.
  • Check that ArgoCD is pulling from your Git repository by navigating to your Settings.
  • If your components are failing upon attempting to pull an image, refer to your deployment.json to check that each component is set to the correct version number.

Update the DNS

  1. Execute the following command to get the External IP used by the Istio ingress gateway:
kubectl get svc -n istio-system
  2. DNS entries must be created using the External IP for any subdomains / primary domains that will be used, including OpenSearch, Grafana, and ArgoCD.
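
Once the records are in place, you can confirm they resolve to the ingress gateway's External IP (the hostnames below follow the sample routing table earlier on this page):

# Verify the DNS records resolve to the Istio ingress gateway's External IP
dig +short dev.mydomain.com
dig +short cluster.mydomain.com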

Access OpenSearch

The default path to access OpenSearch, unless you have configured it otherwise in your deployment.json, is <baseurl>/dashboard.

info

The default credentials for accessing OpenSearch are admin/admin. We recommend that you change these credentials the first time you log in to OpenSearch.

To change the default credentials for Cinchy v5.4+, follow the documentation here.

To change the default credentials and/or add new users for all other deployments, follow this documentation or navigate to Settings > Internal Roles in OpenSearch.

OpenSearch retention policy

The JSON snippet below outlines an example retention policy for OpenSearch.

{
  "policy": {
    "policy_id": "2-day-retention",
    "description": "A simple default policy that deletes after 2 days.",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          {
            "state_name": "delete",
            "conditions": {
              "min_index_age": "2d"
            }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [
          {
            "timeout": "1h",
            "retry": {
              "count": 1,
              "backoff": "exponential",
              "delay": "1h"
            },
            "delete": {}
          }
        ],
        "transitions": []
      }
    ],
    "ism_template": [
      {
        "index_patterns": [
          "development*"
        ],
        "priority": 1
      }
    ]
  }
}
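
One way to apply such a policy is through the Index State Management API (a minimal sketch; the host, credentials, and endpoint prefix depend on your OpenSearch version and configuration, and the JSON above is assumed to be saved locally as retention-policy.json):

# Create the ISM policy from the JSON above
curl -k -u admin:<password> -X PUT \
  "https://<opensearch_host>/_plugins/_ism/policies/2-day-retention" \
  -H 'Content-Type: application/json' \
  -d @retention-policy.json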

Access Grafana

The default path to access Grafana, unless you have configured it otherwise in your deployment.json, is <baseurl>/grafana.

info

The default username is admin. The default password for accessing Grafana can be found by searching for adminPassword in the following file: cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml

We recommend that you change these credentials the first time you access Grafana. You can do so through the admin profile once logged in.
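
Assuming the cinchy.kubernetes repository is checked out locally, a quick way to locate the value mentioned above is:

# Find the default Grafana admin password in the kube-prometheus-stack values file
grep -n "adminPassword" cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml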

Use MSSQL databases

If you are using MSSQL databases instead of Aurora on AWS, manual database creation is required. You must also create databases for additional Cinchy instances. To do this, follow the steps below:

  1. Run a pod containing the SQL Server CLI using the following command:
kubectl run mssql-tools --rm --tty -it --restart="Never" --namespace default --image mcr.microsoft.com/mssql-tools bash
  2. Connect to the database using the SQL Server CLI:
sqlcmd -S <hostname> -U sa -P <password>
  3. After you connect, execute the following SQL command to create a new database:
-- Create a new database
CREATE DATABASE {YourDatabaseName};

Deploy Cinchy components

In this step, you will deploy your Cinchy components.

Deploy ArgoCD application

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
  2. Execute the following command to deploy the Cinchy application components using ArgoCD:
bash deploy_cinchy_components.sh
  3. Navigate to ArgoCD at http://localhost:9090/argocd and log in. Wait until all components are healthy (this may take a few minutes).
  4. You will be able to access ArgoCD through the URL that you configured in your deployment.json, as long as you created a DNS entry for it.
info

You have now finished the deployment steps required for Cinchy. Navigate to your configured domain URL to verify that you can log in using the default username (admin) and password (cinchy).

Troubleshooting

  • If ArgoCD Application Sync is stuck waiting for PreSync jobs to complete, you can run the below command to restart the application controller.
kubectl rollout restart sts argocd-application-controller -n argocd

Appendix A

Other Considerations

  • There is an encryption key (CINCHY_ENCRYPTION_KEY) stored as an environment variable in Kubernetes deployments that is applied when the pod is created.
    • In an IIS deployment, the encryption key is added to the appsettings.json.