Deploy Kubernetes
Introduction
This page details how to deploy Cinchy on Kubernetes. We recommend, and have documented below, doing so via Terraform and ArgoCD. This setup uses a utility to centralize and streamline your configurations.
The Terraform scripts and instructions provided enable deployment on Azure and AWS cloud environments.
Deployment prerequisites
To install Cinchy v5 on Kubernetes, you must meet the requirements below.
Common prerequisites
Whether installing on Azure or AWS, these common prerequisites are essential:
Git repository setup
- Create four Git repositories on any Git-supporting platform, such as GitLab, Azure DevOps, or GitHub:
  - cinchy.terraform: For Terraform configurations.
  - cinchy.argocd: For ArgoCD configurations.
  - cinchy.kubernetes: For cluster and application deployment manifests.
  - cinchy.devops.automations: For maintaining the contents of the above repositories.
Repository artifacts
- Download and check in the artifacts for these repositories. See Accessing Cinchy Artifacts for details.
- Ensure you have a service account with read/write permissions to these repositories.
Tools
- Install these tools on your deployment machine (a quick version check is sketched below):
  - Terraform (v1.2.5+)
    - AWS: Get Started Guide
    - Azure: Get Started Guide
  - kubectl (v1.23.0+)
  - .NET 8.0.x
  - Bash (Git Bash may be used)
  - eksctl CLI (AWS deployments only)
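As an optional sanity check, you can confirm the tools are installed and meet the minimum versions with the standard version commands for each tool:

# Verify the deployment tooling (run only the cloud CLI relevant to your target).
terraform version
kubectl version --client
dotnet --version
git --version
eksctl version
aws --version
az version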
Docker images
- If using the Cinchy Docker images, find out how to access them here.
Domain and SSL Certificate
- A single domain is required for accessing various applications.
- Choose between path-based or subdomain-based routing.
- Ensure an SSL certificate is available for the cluster (a wildcard certificate is recommended for subdomain routing). See the Self-Signed SSL Option; a sample command is also sketched below.
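If you only need a temporary certificate for testing, a minimal self-signed wildcard certificate can be generated as sketched below (assumes OpenSSL 1.1.1+ and the example domain mydomain.com; substitute your own domain):

# Generate a self-signed wildcard certificate and key for testing purposes only.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout wildcard.mydomain.com.key \
  -out wildcard.mydomain.com.crt \
  -subj "/CN=*.mydomain.com" \
  -addext "subjectAltName=DNS:*.mydomain.com,DNS:mydomain.com"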
Sample routing options
See below for routing options for multiple instances.
Application | Path Based Routing | Subdomain Based Routing |
---|---|---|
Cinchy 1 (DEV) | domain.com/dev | dev.mydomain.com |
Cinchy 2 (QA) | domain.com/qa | qa.mydomain.com |
Cinchy 3 (UAT) | domain.com/uat | uat.mydomain.com |
ArgoCD | domain.com/argocd | cluster.mydomain.com/argocd |
Grafana | domain.com/grafana | cluster.mydomain.com/grafana |
OpenSearch | domain.com/dashboard | cluster.mydomain.com/dashboard |
AWS requirements for Cinchy v5
Terraform Requirements
- S3 Bucket: Set up an S3 bucket to store the Terraform state.
- AWS CLI: Install the AWS CLI on the deployment machine and configure it with the correct profile.
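For example, the state bucket and CLI profile could be prepared as follows; the bucket name, region, and profile name are placeholders, and regions other than us-east-1 require the LocationConstraint shown:

# Configure the AWS CLI profile used for this deployment.
aws configure --profile cinchy-deploy
# Create the S3 bucket that will hold the Terraform state.
aws s3api create-bucket \
  --bucket my-cinchy-terraform-state \
  --region ca-central-1 \
  --create-bucket-configuration LocationConstraint=ca-central-1 \
  --profile cinchy-deploy
# Optional: enable versioning so earlier state files can be recovered.
aws s3api put-bucket-versioning \
  --bucket my-cinchy-terraform-state \
  --versioning-configuration Status=Enabled \
  --profile cinchy-deploy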
VPC Options
Using an existing VPC
- VPC Setup: Ensure the VPC has a suitable range, like a CIDR with /21 for about 2048 IP addresses.
- Subnets: Create 3 Subnets, one per Availability Zone (AZ), each with sufficient range (e.g., CIDR with /23 for 512 IP addresses).
- NAT Gateway: Required for private subnets to enable node group registration with the EKS cluster.
Creating a new VPC
- Resource Provisioning: All necessary resources will be provisioned automatically.
- vCPU Availability: Verify the "Running On-Demand All Standard" vCPUs limit can support a minimum of 24 vCPUs.
- IAM User Account: Ensure the account has privileges to create resources in any existing VPC or to create a new VPC.
- SSL Certificate: Import an SSL certificate into AWS Certificate Manager, or request a new one via AWS Certificate Manager. For importing, prepare the PEM-encoded certificate body and private key. Learn more about importing certificates.
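If you are importing an existing certificate, this can also be done from the CLI; the file names below are placeholders for your PEM-encoded certificate body, private key, and optional chain:

# Import a PEM-encoded certificate into AWS Certificate Manager.
aws acm import-certificate \
  --certificate fileb://certificate.pem \
  --private-key fileb://private-key.pem \
  --certificate-chain fileb://certificate-chain.pem \
  --region <region>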
EKS prerequisite
For AWS, you must install the eksctl CLI.
Tips for Success: Ensure consistent region configuration across your SSL Certificate, Terraform bucket, and the deployment.json in the subsequent steps.
Azure requirements for Cinchy v5
Terraform requirements
- Resource Group: Ensure a resource group exists for the Azure Blob Storage that will contain the Terraform state.
- Storage Account and Container: Set up Azure Blob Storage for persisting the Terraform state.
- Azure CLI: Install the Azure CLI on the deployment machine and sign in with the correct profile.
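A minimal sketch for provisioning the Terraform state storage is shown below; the resource group, storage account, container names, and region are placeholders to adjust:

# Sign in, then create the resource group, storage account, and container for the Terraform state.
az login
az group create --name cinchy-terraform-state-rg --location canadacentral
az storage account create --name cinchytfstate --resource-group cinchy-terraform-state-rg --sku Standard_LRS
az storage container create --name tfstate --account-name cinchytfstate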
Resource Group Options
Using an existing Resource Group
- Resource Group Setup: Provision the resource group before deployment.
- Virtual Network (VNet): Create a VNet within the resource group.
- Subnet: Establish a single subnet with sufficient range (For example, a CIDR with /22 for 1024 addresses).
Creating a new Resource Group
- Resource Provisioning: All necessary resources will be provisioned automatically.
- vCPU Quota: Check the quota for "Total Regional vCPUs" and "Standard DSv3 Family vCPUs" (or equivalent) to support a minimum of 24 vCPUs.
- AAD User Account: Ensure the account has privileges to create resources in any existing resource groups or to create a new resource group.
Initial configuration
The initial setup involves configuring the deployment.json file. Follow these steps:
Configure the deployment.json file
- Access the Repository: Go to your cinchy.devops.automations repository. You'll find aws.json and azure.json files there.
- Choose the File: Depending on whether you are deploying to AWS or Azure, select the respective file (aws.json or azure.json). Copy it and rename it to deployment.json (or <cluster name>.json) in the same directory.
- Edit the Configuration: The deployment.json file contains infrastructure resource configurations and settings for Cinchy instances. Each configuration property includes comments describing its purpose and instructions for completion.
- Configure and Save: Follow the in-file guidance to adjust the properties.
- Commit Changes: After configuring, commit and push your changes to the repository.
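Optionally, before committing, you can confirm the edited file is still valid JSON; either standard check below works if jq or Python is available on your machine:

# Validate the deployment.json syntax before committing.
jq empty deployment.json
# or
python -m json.tool deployment.json > /dev/null && echo "deployment.json is valid"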
Tips for Success:
- Revisiting Configuration: You can return to this step anytime during deployment to update configurations. Re-run through the guide sequentially after any changes.
- Handling Credentials: The deployment.json requires your repository username and password. For GitHub and similar platforms, using a Personal Access Token is recommended to avoid credential retrieval errors in ArgoCD. Check your credentials in ArgoCD Settings post-deployment. Further information on handling private repositories in ArgoCD can be found here.
Execute cinchy.devops.automations
This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.
- From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the following command:
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
- If the file created in the Configure the deployment.json file step has a name other than deployment.json, replace the reference in the command with the correct file name.
- The console output should end with the following message:
Completed successfully
Terraform deployment
The following steps detail how to deploy the infrastructure with Terraform on AWS and Azure.
- AWS
- Azure
The following section provides details for AWS deployment:
Cloud provider authentication
- Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repository.
- Run the following commands to authenticate the session:
export AWS_DEFAULT_REGION=REGION
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_ACCESS_KEY
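Optionally, you can confirm the exported credentials resolve to the intended AWS account before running Terraform:

# Verify the authenticated identity and account.
aws sts get-caller-identity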
Deploy the infrastructure
Cinchy.terraform repository structure - AWS
Within the Terraform > AWS directory, a new folder named eks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.
To perform Terraform operations, the cluster directory must be the working directory during execution. This applies to everything within this Terraform deployment step.
- Execute the following command to create the cluster:
bash create.sh
Make note of the output variables section.
On AWS, this section will contain the following values: the Aurora RDS Server Host, the Aurora RDS Password, and the Cinchy S3 bucket access policy ARN.
These variable values are required to update the connection string within the deployment.json file (or equivalent) in the cinchy.devops.automations repository.
- Type yes when prompted to apply the Terraform changes.
The resource creation process can take about 15 to 20 minutes. At the end of the execution, there will be a section containing the output variables noted above.
AWS SSH keys
- The SSH key to connect to the Kubernetes nodes is maintained within the terraform state and can be retrieved by executing the following command:
terraform output -raw private_key
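If you need to use the key, a typical pattern is to write it to a file with restricted permissions; the file name below is only an example:

# Save the node SSH key from the Terraform state and restrict its permissions.
terraform output -raw private_key > eks-node-key.pem
chmod 600 eks-node-key.pem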
Update the deployment.json
The following section pertains to updating the Deployment.json file.
Update the database connection string
- Navigate to the deployment.json (created in Configure the deployment.json file) > cinchy_instance_configs section.
- Each object within represents an instance that will be deployed on the cluster. Each instance configuration has a database_connection_string property. This has placeholders for the host name and password that must be updated using the output variables from the previous section.
- The Cinchy S3 bucket access policy ARN needs to be updated within aws.json against the cinchy_s3_bucket_access_policy property.
Enable AWS Secrets Manager
If enable_aws_secret_manager=true is set in aws.json, the secret files listed below are generated.
Creating AWS Secrets Using the Generated Files
- Open AWS Secrets Manager.
- Select Secrets > Store a new secret > Other type of secret.
Cinchy Environment Settings
Follow the steps below to store the Cinchy environment settings:
- Add the following keys and values:
Key | Value |
---|---|
encryptionkey | Your encryption key, set to a random 32-byte value in a 64-character hexadecimal string format. [This should only be set on initial environment creation, and cannot be rotated thereafter.] |
connectionspassword | The connections@cinchy.com service account user password. A unique value should be set for each environment. Password can be rotated per your password policy. |
workerpassword | The worker@cinchy.com service account user password. A unique value should be set for each environment. Password can be rotated per your password policy. |
eventlistenerpassword | The eventlistener@cinchy.com service account user password. A unique value should be set for each environment. Password can be rotated per your password policy. |
maintenancepassword | The maintenance@cinchy.com service account user password. A unique value should be set for each environment. Password can be rotated per your password policy. |
cinchyautomationspassword | The automations@cinchy.com user account password. A unique value should be set for each environment. Password can be rotated per your password policy. |
- Secret Name: cinchy-environment-settings-<cinchy_instance_name>, using your own instance name where indicated.
- Update other fields as needed.
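If you prefer the CLI to the console, the same secret can be created with the Secrets Manager create-secret command; the JSON file referenced below is a placeholder containing the key/value pairs from the table above:

# Create the environment settings secret from a local JSON file of key/value pairs.
aws secretsmanager create-secret \
  --name cinchy-environment-settings-<cinchy_instance_name> \
  --secret-string file://cinchy-environment-settings.json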
Additional Secrets
Use the below to create additional secrets, using your own instance name where indicated.
- The initial Secret Value will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets
Key | Secret Name |
---|---|
orchestrationautomationrunnersecretappsettings | orchestration-automationrunner-secret-appsettings-<cinchy_instance_name> |
orchestrationautomationrunnersecretconfig | orchestration-automationrunner-secret-config-<cinchy_instance_name> |
orchestrationschedulersecretconfig | orchestration-scheduler-secret-config-<cinchy_instance_name> |
connectionssecretconfig | connections-secret-config-<cinchy_instance_name> |
connectionssecretappsettings | connections-secret-appsettings-<cinchy_instance_name> |
eventlistenersecretappsettings | event-listener-secret-appsettings-<cinchy_instance_name> |
formssecretconfig | forms-secret-config-<cinchy_instance_name> |
maintenanceclisecretappsettings | maintenance-cli-secret-appsettings-<cinchy_instance_name> |
workersecretappsettings | worker-secret-appsettings-<cinchy_instance_name> |
websecretappsettings | web-secret-appsettings-<cinchy_instance_name> |
If SSO is enabled:
Key | Secret Value | Secret Name |
---|---|---|
idpsecretappsettings | The content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets | idp-secret-appsettings-<cinchy_instance_name> |
idpsecretmetadata | SSO metadata.xml content | idp-secret-metadata-<cinchy_instance_name> |
Cloud provider authentication
- Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repository.
- Run the following command and follow the on-screen instructions to authenticate the session:
az login
Deploy the infrastructure
Cinchy.terraform repository structure - Azure
Within the Terraform > Azure directory, a new folder named aks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.
To perform terraform operations, the cluster directory must be the working directory during execution.
- Execute the following command to create the cluster:
bash create.sh
Make note of the output variables.
On Azure, this section will contain a single value: Azure SQL Database Password
These variable values are required to update the connection string within the deployment.json file (or equivalent) in the cinchy.devops.automations repository.
- Type yes when prompted to apply the Terraform changes.
The resource creation process can take about 15 to 20 minutes. At the end of the execution, there will be a section containing the output variables noted above.
Azure SSH keys
- The SSH key is output to the directory containing the cluster terraform configurations.
Update the deployment.json
The following section pertains to updating the Deployment.json file.
Update the database connection string
- Navigate to the deployment.json > cinchy_instance_configs section.
- Each object within represents an instance that will be deployed on the cluster. Each instance configuration has a database_connection_string property. This has placeholders for the host name and password that must be updated using the output variables from the previous section.
For Azure deployments, the host name isn't available as part of the terraform output and instead must be sourced from the Azure Portal.
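If you prefer the CLI to the portal, the host name can typically be read as shown below; the server and resource group names are placeholders for the resources created by Terraform:

# Look up the fully qualified host name of the Azure SQL server.
az sql server show \
  --name <azure_sql_server_name> \
  --resource-group <deployment_resource_group> \
  --query fullyQualifiedDomainName \
  --output tsv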
Update blob storage connection details (Azure)
- Within the deployment.json, the azure_blob_storage_conn_str must be set.
- The in-line comments outline the commands required to source this value from the Azure CLI.
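The in-line comments remain the authoritative reference, but the value is typically retrieved with a command along these lines; the storage account and resource group names are placeholders:

# Retrieve the connection string for the blob storage account.
az storage account show-connection-string \
  --name <storage_account_name> \
  --resource-group <deployment_resource_group> \
  --output tsv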
Enable Azure Key Vault secrets
If key_vault_secrets_provider_enabled=true is set in azure.json, the secret files listed below are generated.
To create your new secrets:
- Navigate to your key vault in the Azure portal.
- Open your Key Vault Settings and select Secrets.
- Select Generate/Import.
- On the Create a Secret screen, choose the following values:
- Upload options: Manual.
- Content Type: JSON
- Name and Value: Choose the secret name and value from the below list, replacing 'cinchy_instance_name' as relevant:
Environment Settings
Follow the steps below to store the Cinchy environment settings:
- Add the following keys and values:
Key | Value |
---|---|
encryptionkey | Your encryption key, set to a random 32-byte value in a 64-character hexadecimal string format. [This should only be set on initial environment creation, and cannot be rotated thereafter.] |
connectionspassword | The connections@cinchy.com service account user password. A unique value should be set for each environment. Password can be rotated per your password policy. |
workerpassword | The worker@cinchy.com service account user password. A unique value should be set for each environment. Password can be rotated per your password policy. |
eventlistenerpassword | The eventlistener@cinchy.com service account user password. A unique value should be set for each environment. Password can be rotated per your password policy. |
maintenancepassword | The maintenance@cinchy.com service account user password. A unique value should be set for each environment. Password can be rotated per your password policy. |
cinchyautomationspassword | The automations@cinchy.com user account password. A unique value should be set for each environment. Password can be rotated per your password policy. |
- Secret Name: cinchy-environment-settings-<cinchy_instance_name>, using your own instance name where indicated.
- Update other fields as needed.
Additional Secrets:
Use the below to create additional secrets, using your own instance name where indicated.
Name | Value |
---|---|
worker-secret-appsettings-<cinchy_instance_name> | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
web-secret-appsettings-<cinchy_instance_name> | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
maintenance-cli-secret-appsettings-<cinchy_instance_name> | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
idp-secret-appsettings-<cinchy_instance_name> | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
forms-secret-config-<cinchy_instance_name> | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
event-listener-secret-appsettings-<cinchy_instance_name> | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
connections-secret-config-<cinchy_instance_name> | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
connections-secret-appsettings-<cinchy_instance_name> | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
idp-secret-metadata-<cinchy_instance_name> (Note: This is an additional secret only required when sso_enabled=true in the azure.json file) | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
orchestration-automationrunner-secret-appsettings-<cinchy_instance_name> | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
orchestration-automationrunner-secret-config-<cinchy_instance_name> | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
orchestration-scheduler-secret-config-<cinchy_instance_name> | The value for the secret will be the content of the relevant JSON of cinchy.kubernetes\environment_kustomizations\<cluster_name>\<cinchy_instance_name>\secrets |
- Leave the other values to their defaults.
- Select Create.
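As an alternative to the portal steps above, each secret can also be created with the Azure CLI; the vault name and file path below are placeholders, and the same pattern applies to every secret name in the table:

# Create (or update) a Key Vault secret from the relevant JSON file.
az keyvault secret set \
  --vault-name <key_vault_name> \
  --name worker-secret-appsettings-<cinchy_instance_name> \
  --file cinchy.kubernetes/environment_kustomizations/<cluster_name>/<cinchy_instance_name>/secrets/<relevant_file>.json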
Execute cinchy.devops.automations
This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.
- From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the following command:
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
- If the file created in the Configure the deployment.json file step has a name other than deployment.json, replace the reference in the command with the correct file name.
- The console output should end with the following message:
Completed successfully
- The updates must be committed to Git before proceeding to the next step.
Connect with kubectl
- AWS
- Azure
Update the Kubeconfig
AWS
- From a shell/terminal, run the following command, replacing <region> and <cluster_name> with the accurate values for those placeholders:
aws eks update-kubeconfig --region <region> --name <cluster_name>
Update the Kubeconfig
Azure
- From a shell/terminal, run the following commands, replacing <subscription_id>, <deployment_resource_group>, and <cluster_name> with the accurate values for those placeholders. These commands, with the values pre-populated, can also be found in the Connect panel of the AKS Cluster in the Azure Portal.
az account set --subscription <subscription_id>
az aks get-credentials --admin --resource-group <deployment_resource_group> --name <cluster_name>
Verify the connection
- Verify that the connection has been established and the context is the correct cluster by running the following command:
kubectl config get-contexts
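You can also confirm the cluster is reachable by listing its nodes:

# The nodes of the new cluster should be listed with a Ready status.
kubectl get nodes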
Deploy and access ArgoCD
In this step, you will deploy and access ArgoCD.
Deploy ArgoCD
- Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
- Execute the following command to deploy ArgoCD:
bash deploy_argocd.sh
- Monitor the pods within the ArgoCD namespace by running the following command every 30 seconds until they all move into a healthy state:
kubectl get pods -n argocd
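Instead of polling manually, you can optionally wait for the pods to become ready:

# Block until all ArgoCD pods report Ready, or time out after 10 minutes.
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=600s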
Access ArgoCD
- Launch a new shell/terminal with the working directory set to the root of the cinchy.argocd repository.
- Execute the following command to access ArgoCD:
bash access_argocd.sh
This script creates a port forward using kubectl so that ArgoCD can be accessed at http://localhost:9090/argocd.
The credentials for ArgoCD's portal are output in Base64 at the start of the access_argocd script execution. The Base64 value must be decoded to get the login credentials for the http://localhost:9090/argocd endpoint.
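For example, a Base64 value printed by the script can be decoded as shown below. In a standard ArgoCD installation the same initial admin password is also stored in the argocd-initial-admin-secret, although the access_argocd output remains the authoritative source:

# Decode a Base64-encoded value printed by the script.
echo '<base64_value>' | base64 -d
# Alternatively, read the initial admin password directly from the cluster (standard ArgoCD installs).
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d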
Deploy cluster components
In this step, you will deploy your cluster components.
Deploy ArgoCD applications
- Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
- Execute the following command to deploy the cluster components using ArgoCD:
bash deploy_cluster_components.sh
- Navigate to ArgoCD at http://localhost:9090/argocd and log in. Wait until all components are healthy (this may take a few minutes).
- If your pods are degraded or have failed to sync, refresh or resynchronize your components. You can also delete pods and ArgoCD will automatically spin them back up for you.
- Check that ArgoCD is pulling from your Git repository by navigating to your Settings.
- If your components are failing upon attempting to pull an image, refer to your deployment.json to check that each component is set to the correct version number.
Update the DNS
- Execute the following command to get the External IP used by the Istio ingress gateway.
kubectl get svc -n istio-system
- DNS entries must be created using the External IP for any subdomains / primary domains that will be used, including OpenSearch, Grafana, and ArgoCD.
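The external address can also be pulled out directly with jsonpath; the service name istio-ingressgateway is the Istio default and may differ in your cluster. AWS load balancers are usually exposed as a host name, Azure load balancers as an IP:

# AWS: the ingress is typically fronted by a load balancer host name.
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# Azure: the ingress is typically exposed as an external IP.
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'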
Access OpenSearch
The default path to access OpenSearch, unless you have configured it otherwise in your deployment.json, is <baseurl>/dashboard.
The default credentials for accessing OpenSearch are admin/admin. We recommend that you change these credentials the first time you log in to OpenSearch.
To change the default credentials for Cinchy v5.4+, follow the documentation here.
To change the default credentials and/or add new users for all other deployments, follow this documentation or navigate to Settings > Internal Roles in OpenSearch.
OpenSearch retention policy
The JSON snippet below outlines an example retention policy for OpenSearch.
{
"policy": {
"policy_id": "2-day-retention",
"description": "A simple default policy that deletes after 2 days.",
"default_state": "hot",
"states": [
{
"name": "hot",
"actions": [],
"transitions": [
{
"state_name": "delete",
"conditions": {
"min_index_age": "2d"
}
}
]
},
{
"name": "delete",
"actions": [
{
"timeout": "1h",
"retry": {
"count": 1,
"backoff": "exponential",
"delay": "1h"
},
"delete": {}
}
],
"transitions": []
}
],
"ism_template": [
{
"index_patterns": [
"development*"
],
"priority": 1
}
]
}
}
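If you prefer to apply a policy like this through the API rather than the OpenSearch Dashboards UI, the Index State Management plugin exposes a policies endpoint; the host, credentials, and file name below are placeholders:

# Create the retention policy via the ISM API.
curl -k -u admin:<password> -X PUT "https://<opensearch_host>/_plugins/_ism/policies/2-day-retention" -H "Content-Type: application/json" -d @retention-policy.json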
Access Grafana
The default path to access Grafana, unless you have configured it otherwise in your deployment.json, is <baseurl>/grafana.
The default username is admin. The default password for accessing Grafana can be found by searching for adminPassword within the following path: cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml
We recommend that you change these credentials the first time you access Grafana. You can do so through the admin profile once logged in.
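A quick way to locate the value is to search the file directly:

# Find the default Grafana admin password in the kube-prometheus-stack values file.
grep -n adminPassword cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml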
Use MSSQL databases
If you are using MSSQL databases instead of Aurora on AWS, manual database creation is required. You must also create databases for additional Cinchy instances. To do this, follow the steps below:
- Run a pod containing SQL Server CLI using the following command:
kubectl run mssql-tools --rm --tty -it --restart="Never" --namespace default --image mcr.microsoft.com/mssql-tools bash
- Connect to the database using the SQL Server CLI:
sqlcmd -S <hostname> -U sa -P <password>
- After you connect, execute the following SQL command to create a new database:
-- Create a new database
CREATE DATABASE {YourDatabaseName};
Deploy Cinchy components
In this step, you will deploy your Cinchy components.
Deploy ArgoCD application
- Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
- Execute the following command to deploy the Cinchy application components using ArgoCD:
bash deploy_cinchy_components.sh
- Navigate to ArgoCD at http://localhost:9090/argocd and log in. Wait until all components are healthy (this may take a few minutes).
- You will be able to access ArgoCD through the URL that you configured in your deployment.json, as long as you created a DNS entry for it.
You have now finished the deployment steps required for Cinchy. Navigate to your configured domain URL to verify that you can log in using the default username (admin) and password (cinchy).
Troubleshooting
- If ArgoCD Application Sync is stuck waiting for PreSync jobs to complete, you can run the below command to restart the application controller.
kubectl rollout restart sts argocd-application-controller -n argocd
Appendix A
Other Considerations
- There is an encryption key (CINCHY_ENCRYPTION_KEY) stored as an environment variable in Kubernetes deployments that is applied when the pod is created.
- In an IIS deployment, the encryption key is added to the appsettings.json.