v5.13 Change Summary
This page provides a comprehensive list of changes made to the Cinchy Platform. Using the tabs below, choose the version of Cinchy you are currently on in order to view what to expect upon upgrading to version 5.13.
- 5.5
- 5.6
- 5.7
- 5.8
- 5.9
- 5.10
- 5.11
v5.5
The following changes were made to the platform between v5.5 and v5.13.
Breaking changes
Deprecation of the k8s.gcr.io Kubernetes Image Repository v5.6
The Kubernetes project hosts its container images in a community-owned registry, registry.k8s.io. On April 3rd, 2023, the legacy k8s.gcr.io registry was deprecated, and no further images for Kubernetes and related subprojects are being pushed to that location; all images are now published to registry.k8s.io instead.
- New Cinchy deployments: this change will be reflected automatically in your installation.
- Existing Cinchy deployments: please follow the instructions outlined in the upgrade guide to ensure your components are pointed to the correct image repository.
You can review the full details on this change on the Kubernetes blog.
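If you want to confirm whether any workloads in your cluster still reference the deprecated registry before upgrading, a check along the following lines can help. This is a minimal sketch, assuming kubectl access to your cluster; adjust namespaces as needed.

```bash
# List every container image currently running in the cluster and flag any
# still pulled from the deprecated k8s.gcr.io registry.
kubectl get pods --all-namespaces \
  -o jsonpath="{range .items[*]}{range .spec.containers[*]}{.image}{'\n'}{end}{end}" \
  | sort -u | grep k8s.gcr.io
```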
Discontinuation of support for 2012 TSQL v5.9
As of version 5.9, Cinchy will cease support for 2012 TSQL. This change aligns with Microsoft's End of Life policy. For further details, refer to the SQL Server 2012 End of Support page.
Removal of GraphQL API (Beta) v5.9
The beta version of our GraphQL API endpoint has been removed. If you have any questions regarding this, please submit a support ticket or email support@cinchy.com.
Personal Access Tokens v5.10
There was an issue affecting Personal Access Tokens (PATs) generated in Cinchy wherein tokens created from v5.7 onwards were incompatible with subsequent versions of the platform. This issue has been resolved; however, please note the following:
- Any tokens created on versions 5.7.x, 5.8.x, and 5.9.x will need to be regenerated.
- "401 Unauthorized" errors may indicate the need to regenerate the token.
- PATs created before 5.7.x and from 5.10 onwards are unaffected.
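If you're unsure whether an existing token is affected, one quick check is to call any PAT-authenticated Cinchy endpoint with it and inspect the status code. This is a sketch using curl; the endpoint path is a placeholder for any authenticated API on your instance.

```bash
# Returns only the HTTP status code; a 401 suggests the PAT needs to be regenerated.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer <personal-access-token>" \
  "<base-url>/api/<any-authenticated-endpoint>"
```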
Update to .NET 8 v5.13
The Cinchy platform was updated to .NET 8, in accordance with Microsoft's .NET support policy. Support for .NET 6 ends on November 12, 2024.
- For customers on Kubernetes: This change will be reflected automatically upon upgrading to Cinchy v5.13+.
- For customers on IIS: The .NET 8 Hosting Bundle must be installed prior to upgrading to Cinchy v5.13+.
General Platform
The following changes pertain to the general platform.
v5.6
- Miscellaneous security fixes.
- General CDC performance optimizations.
- We upgraded our IDP from IdentityServer4 to IdentityServer6 to ensure we're maintaining the highest standard of security for your platform.
- We implemented Istio mTLS support to ensure secure/TLS in-cluster communication of Cinchy components.
- Cinchy v5.8 is compatible with the MySql v8.1.0 driver.
- Cinchy v5.9+ is compatible with the MySql v8.2.0 driver.
- We have updated our third-party libraries:
  - NuGet package updates.
  - Updated Npgsql to version 7.0.7.
  - Upgraded moment.js to 2.29.4.
  - Various other package updates.
Expanded CORS policy for Cinchy Web API endpoints v5.9
Cinchy Web API endpoints now feature a more permissive Cross-Origin Resource Sharing (CORS) policy, allowing requests from all hosts. This update enhances the flexibility and integration capabilities of the Cinchy platform with various web applications.
Make sure to use robust security measures in your applications to mitigate potential cross-origin vulnerabilities.
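To see the relaxed policy in action, you can send a preflight request from an arbitrary origin and inspect the CORS headers in the response. This is a hedged sketch; the endpoint path is a placeholder.

```bash
# Preflight an endpoint from a different origin and check for
# Access-Control-Allow-Origin in the response headers.
curl -i -X OPTIONS "<base-url>/api/<endpoint>" \
  -H "Origin: https://example.com" \
  -H "Access-Control-Request-Method: GET"
```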
Time Zone Updates v5.9
We updated our time zone handling to improve compatibility and user experience. This change affects both PostgreSQL (PGSQL) and Transact-SQL (TSQL) users, with significant changes in options, discontinuation of support for older TSQL versions, and manual time zone migration. Time zone values will be changed and mapped during the upgrade process. In case of mapping failure, the default time zone will be set to Eastern Standard Time (EST). This enhancement does the following:
- PGSQL Time Zone support: PGSQL now offers an expanded range of time zone options. These options may not have direct equivalents in TSQL.
- Discontinuation of TSQL 2012 Support: We're discontinuing support for TSQL 2012. Users must upgrade to a newer version to ensure compatibility with the latest time zone configurations.
- System Properties Update: Time zone settings will continue to be supported in TSQL 2016 and later versions.
Manual Time Zone Migration
Due to differences in time zone naming between TSQL and PGSQL, Cinchy will manually migrate users to a matching time zone. To verify your time zones, you can do the following:
- Personal preferences:
  - All users should check their time zone settings post-migration.
  - For personal settings, select My Profile and set the preferred time zone.
  - For system settings, access the system properties table (ADMIN) and manually copy the PGSQL name into the Value column.
- Database Access Requirements: The Cinchy application must have READ access to the following tables, depending on the database in use:
  - PGSQL: pg_timezone_names
  - TSQL: sys.time_zone_info
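If you have direct database access, you can list the valid time zone names available for mapping. These commands are illustrative only; connection details are omitted.

```bash
# PGSQL: list the time zone names available for mapping.
psql -c "SELECT name FROM pg_timezone_names ORDER BY name;"

# TSQL (2016+): list the available Windows time zone names.
sqlcmd -Q "SELECT name FROM sys.time_zone_info ORDER BY name;"
```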
Integration with AWS and Azure in External Secrets Manager v5.9
With the External Secrets Manager table, Cinchy now offers comprehensive integration capabilities with AWS and Azure. This enhancement allows for streamlined management and integration of external secrets within the Cinchy environment and expands the supported authentication types from AWS and Azure, providing a more versatile approach to managing external secrets.
For AWS, Cinchy now supports the following secret types:
- AWS access keys for IAM users.
- IAM roles.
For Azure, Cinchy now supports the following secret types:
- Managed identities.
- Registered applications.
Introducing Cinchy Automations v5.11
Cinchy Automations is a platform tool that allows users to schedule tasks. To reduce the time and manual effort spent on recurring tasks, you can now tell Cinchy to perform the following automatically:
- Executing queries
- Triggering batch syncs
- Extracting and running a code bundle, which can contain any number of queries or syncs needed to perform a task.
Using the Automations capability, you can also build an automation that performs multiple tasks in sequence (known as "Automation Steps") to allow for more complex use cases.
You can find the full details on this powerful new capability here.
Deployment
The following changes pertain to the deployment process.
v5.7
- We've added support for AWS EKS EBS volume encryption for customers wishing to take advantage of industry-standard AES-256 data encryption without having to build, maintain, and secure their own key management infrastructure. By default, the EKS worker nodes will have a gp3 storage class for new deployments. If you are already running a Cinchy environment, make sure to keep your eks_persistent_apps_storage_class set to gp2 within the DevOps automation aws.json file.
- If you want to move to gp3 storage, or gp3 storage and volume encryption, you will have to delete the existing volumes/PVCs for the Kafka, Redis, OpenSearch, Logging Operator, and Event Listener StatefulSets so that ArgoCD can recreate the proper resources.
- Should your Kafka cluster pods not come back online after deleting the existing volumes/PVCs, restart the Kafka operators. You can verify the change by running the below command:
kubectl get pvc --all-namespaces
APIs
The following changes pertain to Cinchy's APIs.
v5.6
- We've fixed a bug that was causing bearer token authenticated APIs to stop working on insecure HTTP Cinchy environments.
- We've implemented a new API endpoint for the retrieval of your secrets. Using the below endpoint, fill in your <base-url>, <secret-name>, and <domain-name> to retrieve the referenced secret. This endpoint works with Cinchy’s Personal Access Token capability, as well as Access Tokens retrieved from your IDP.
Blank Example:
<base-url>/api/v1.0/secrets-manager/secret?secretName=<secret-name>&domain=<domain-name>
Populated Example:
Cinchy.net/api/v1.0/secrets-manager/secret?secretName=ExampleSecret&domain=Sandbox
The API will return an object in the below format:
{
"secretValue": "password123"
}
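For example, a call using a Personal Access Token might look like the following. This is a sketch with placeholder values.

```bash
# Retrieve a secret using a Personal Access Token; all values are placeholders.
curl -H "Authorization: Bearer <personal-access-token>" \
  "<base-url>/api/v1.0/secrets-manager/secret?secretName=ExampleSecret&domain=Sandbox"
# Expected response: {"secretValue":"password123"}
```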
- We have added two new scopes to the [Cinchy].[Integrated Clients] table: read:all and write:all, which can be used to fine-tune your permission sets. These scopes are found in the “Permitted Scopes” column of the table:
  - read:all = clients can read all data.
  - write:all = clients can read and write all data.
  - Both scopes are automatically assigned to existing integrated clients upon upgrading to v5.10.
- Note: All new Cinchy Web API endpoints that use Bearer Tokens or Cookie Authentication Schemes must have at least one of the new scopes assigned. These endpoints are still currently accessible with the old js_scope; however, this will be deprecated in a future release. You can update the scopes of existing endpoints in the “Permitted Scopes” column of the [Cinchy].[Integrated Clients] table.
- Note: A user with write entitlements will not be able to write when using a client that only has the read:all scope assigned.
- Note: Clients that receive a 403: Forbidden status code error in the logs should make note of this change as a possible cause, and update the permissions accordingly.
- You can now use Personal Access tokens in the following scenarios:
- As authentication when calling the api/v1.0/jobs API endpoint.
- As authentication when calling the api/v1.0/jobs API endpoint as another user.
- Added UTF-8 encoding to the Saved Query API endpoint.
Logging and Troubleshooting
The following changes pertain to error logging and troubleshooting within the platform. Note: Connections-specific changes are featured in the Connections section below.
v5.9
- We improved the error messaging for model loader failures. Before, loading a model with a duplicate name and version in the Models table showed unclear error messages. Users had to check logs to identify the failure cause. The error screen now shows clear, detailed messages. This change makes troubleshooting easier and offers context into model loader failures.
- We integrated Kafka and Redis logs into OpenSearch, offering improved insight and quicker debugging for Change Data Capture (CDC) processes. This enhancement improves issue resolution and streamlines monitoring.
- To enhance the troubleshooting of Cinchy's Angular SDK, errors will now display additional context. Failed call exceptions will contain more useful errors in the data.details property.
Tables
The following changes pertain to tables.
Table Enhancements
v5.7
- We updated the dropdown menus for Link columns to display selected and deleted values at the top of the list so that you don't need to scroll through long lists just to find the ones you've selected.
- The Cinchy platform now comes with a new way to store secrets — the Cinchy Secrets Table. Adhering to Cinchy’s Universal Access Controls, you can use this table as a key vault (such as Azure Key Vault or AWS Secrets Manager) to store sensitive data only accessible to the users or user groups that you give access to. You can use secrets stored in this table anywhere a regular variable can go when configuring data syncs, including but not limited to:
- As part of a connection string;
- Within a REST Header, URL, or Body;
- As an Access Key ID. You can also use it in a Listener Configuration.
- You can now enable change notifications and related features on system tables within the Cinchy domain. Administrators and users now have better visibility into the use and modification of these tables. This includes additions, deletions, or updates to the table data.
- If you are on PostgreSQL, please restart the web application pod to enable change notifications.
- Some tables, such as the Listener State table, are excluded from this feature due to their high-volume nature.
- Change Data Capture (CDC) can't be enabled on tables that aren't versioned, specifically the Listener State table.
- When you enable CDC on a system table, the model loader can't disable it.
- We introduced a new feature that allows members of the Cinchy Builders group to perform truncate table operations. This enhancement enables Builders to effectively manage and manipulate table data. Key features include:
  - Truncate Table Capability: Members of the Cinchy Builders group now have the authority to execute TRUNCATE operations on tables.
  - Design Table Access: To perform a truncate operation, the user must have access to the Design Table of the table they intend to truncate. If the user lacks this access, the system will give an error stating "Design Table Access required to execute Truncate command"
- Selecting a link to a PDF stored in Cinchy via a Link column associated with Cinchy\Files now respects your browser settings and opens the PDF in your browser, if you've set it to do so.
- The minimum length of a text column created by a CSV import is now 500 characters.
- Removed infinite scrolling from tables and in link column dropdowns.
- Tables now have pagination and will show 250 records per page. This affects both the regular table view as well as the tables that populate in the query builder view.
- Link Column dropdowns will display the first 100 records. Users can type in the cell to further filter down the list or search for records beyond the first 100.
- Link Column drop downs will no longer return null values.
- When using the "Sort" capability on a table, you can now specify whether you want the data to return nulls first or last. Note that selecting either of these options will have an impact on the performance of the table. Leaving the option blank/unspecified will mitigate the impact.
- To improve platform performance by limiting the amount of data that must be read, table views will no longer query for columns that are not represented within the context of that specific view.
Table Bug Fixes
v5.6
- We've fixed a “Column doesn’t exist” error that could occur in PostgreSQL deployments when incrementing a column (ex: changing a column data type from number to text).
- We've fixed a bug where table views containing only a single linked column record would appear blank for users with “read-only” permissions.
- We fixed an issue with the behaviour of cached calculated columns when using multi-select data types (Link, Choice, and Hierarchy) with Change Approval enabled. These data types should now work as expected.
- You can now export up to the first 250,000 records from a View using the Export button on a table.
- We fixed the character limit in the Secrets table for Aurora databases. The Secret Value column capacity has increased from 500 to 10,000 characters, ensuring adequate space for storing secret data.
- We resolved an issue in the Collaboration Log Revert function where date-time values in unrelated columns were incorrectly altered.
- We resolved an issue where altering metadata for date columns in PostgreSQL led to exceptions during operations.
- We resolved an issue that caused binary columns to drop when editing the Users and Files system tables. This fix ensures that binary data types are now correctly recognized and retained during table modifications.
- Fixed a bug where the platform was not saving records where changes were made immediately after record creation.
- IS NULL checks with a multiselect field or parameter will now yield the expected result for empty values.
- Adding a filter expression to a Link column via the UI will no longer cause a number conversion error.
DXD
The following changes pertain to Cinchy DXD.
v5.7
We added additional system columns to extend the number of core Cinchy objects that can be managed through DXD 1.7 and higher. The newly supported Cinchy objects are:
- Views (Data Browser)
- Listener Config
- Secrets
- Pre-install Scripts
- Post-install Scripts
- Webhooks
Queries and CQL
The following changes pertain to queries and Cinchy Query Language.
Query and CQL Enhancements
v5.7
- Optimized PostgreSQL query performance when referencing multi-select columns.
- Improved query performance when using a CASE statement on a Link reference.
- We added execute, a new method for UDF extensions. This new query call returns a queryResult object that contains additional information about your result. For more information, see the Cinchy User Defined Functions page.
- The POST endpoint for Saved Queries now automatically serializes hierarchical JSON to text when the content-type is application/json. This update now supports values that are objects or arrays, which eliminates the need for manual serialization and makes it easier for developers to work with Saved Queries (see the sketch after this list).
- We have added the Compress JSON parameter to the Query Builder UI and [Saved Queries] table. JSON compression can:
- Help to reduce the amount of time it takes to query and process data
- Reduce the amount of bandwidth needed to transfer data. This can be especially beneficial for applications that require frequent data updates, such as web applications.
- Reduce the amount of memory needed to store data.
- We have made various enhancements to the Saved Queries table for use cases when your queries are being used as API endpoints. Better management of these queries is possible by way of HTTP methods (GET, POST, PUT, PATCH, DELETE) for distinguishing between types of query operations, Versions for endpoint versioning, and UUIDs for grouping queries. Please review the Queries and Saved Query API pages for further details.
- To gain more control over your query creation, we have added a Cancel button to the query builder. The Cancel/Stop button will appear for the duration of running your query; clicking it will abort the active query and return a "Query execution cancelled" message.
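As referenced above, here is a sketch of posting hierarchical JSON to a Saved Query endpoint. The endpoint path and the @orderDetails parameter are illustrative placeholders, not names from your environment.

```bash
# With Content-Type: application/json, object and array parameter values are
# serialized to text automatically before being passed to the saved query.
curl -X POST "<base-url>/<saved-query-endpoint>" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{"@orderDetails": {"items": [{"sku": "A-100", "qty": 2}], "priority": "high"}}'
```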
Query and CQL Bug Fixes
v5.6
- We've fixed a bug that was causing a “Can’t be Bound” error when you attempted to use an UPDATE query on a multi-select link column as a user with multiple filters active.
- We fixed a bug that was stripping query parameters from Relative URLs if they were being used as the Application URL of an applet. For example, the bug would have stripped out a "q=1" parameter, leaving only an Absolute URL in lieu of a Relative one.
- We fixed a bug in CQL on PostgreSQL that caused the DATEADD function to truncate input dates down to DAY precision. Now, you can expect more accurate date manipulations without losing finer time details.
- We improved messaging in CQL Saved Queries to provide clearer error messages when required parameters are missing in saved queries, aiding in self-debugging.
- Fixed an invalid CQL bug in the Query Editor UI when using FOR JSON PATH while building queries in PGSQL.
Connections
The following changes pertain to data syncs and the Connections experience.
New Features
v5.7
- We added Oracle as a new database type for Polling Events in Connections. Data Polling is a source option first featured in Cinchy v5.4 which uses the Cinchy Event Listener to continuously monitor and sync data entries from your Oracle, SQL Server, or DB2 server into your Cinchy table. This capability makes data polling a much easier, effective, and streamlined process and avoids implementing the complex orchestration logic that was previously necessary.
- We made it simpler to debug invalid credentials in data syncs by adding a "Test Connection" button to the UI for the following sources and destinations:
Name | Supported source | Supported destination |
---|---|---|
Amazon Marketplace | ✅ Yes | 🚫No |
Binary Files | ✅ Yes | N/A |
Copper | ✅ Yes | N/A |
DB2 | ✅ Yes | ✅ Yes |
Delimited File | ✅ Yes | N/A |
Dynamics | ✅ Yes | 🚫No |
Excel | ✅ Yes | N/A |
Fixed Width File | ✅ Yes | N/A |
Kafka Topic | 🚫No | ✅ Yes |
ODBC | ✅ Yes | N/A |
Oracle | ✅ Yes | ✅ Yes |
Parquet | ✅ Yes | N/A |
REST | 🚫No | 🚫No |
Salesforce Object | ✅ Yes | ✅ Yes |
Snowflake | ✅ Yes | ✅ Yes |
SOAP | 🚫No | 🚫No |
MS SQL Server | ✅ Yes | ✅ Yes |
Selecting this button will validate whether your username/password/connection string/etc. are able to connect to your source or destination. If successful, a "Connection Succeeded" popup will appear. If unsuccessful, a "Connection Failed" message will appear, along with the ability to review the associated troubleshooting logs. With this change, you are able to debug access-related data syncs at a more granular level.
v5.8
- Cinchy now supports a new Cinchy event-triggered source: SOAP API. This new feature initiates a SOAP call based on Change Data Capture (CDC) events occurring in Cinchy. The SOAP response then serves as the source for the sync and can be mapped to any destination. For more information, see the SOAP 1.2 (Cinchy Event Triggered) page.
- A new destination type has been added to the Connections Experience. The "File" destination provides the option to sync your data into Amazon S3 or Azure Blob Storage as a delimited file.
- Introducing Kafka Topic Isolation, a feature designed to optimize the performance of designated real-time syncs. Users can assign custom topics to any listener config, essentially creating dedicated queues to 'fast track' the processing of associated data. When configured appropriately, high priority listener configs will benefit from dedicated resources, while lower priority listener configs will continue to share resources. This provides a mechanism to improve the throughput of critical or high volume workloads, while preserving the default behaviour for most workloads. For more detail on Kafka Topic Isolation, please review the documentation here.
Note: This feature does not support SQL Service Broker.
Connections Enhancements
v5.6
- To better enable your business security and permission-based needs, you are now able to run the Connections pod under a service account that uses an AWS IAM (Identity and Access Management) role, which is an IAM identity that you can create to have specific permissions and access to your AWS resources. To set up an AWS IAM role for use in Connections, please review the documentation here.
- You are also able to use AWS IAM roles when syncing S3 file or DynamoDB sources in Connections.
- To increase your data sync security and streamline authentication, we've added support for the use of x.509 certificate authentication for MongoDB Collection Sources, MongoDB (Cinchy Event Triggered) Sources, and MongoDB Targets. This new feature can be accessed directly from the Connections UI when configuring your data sync.
- Continuing to increase our data sync capabilities and features, you can now use @CinchyID as a parameter in post sync scripts when the source is from a Cinchy Event (such as the Event Broker, the Event Triggered REST API, and the Event Triggered MongoDB sources). This means that you can now design post sync scripts that take advantage of the unique CinchyID value of your records.
- We've added a new "Conditional" option for Changed Record Behaviours. When Conditional is selected, you will be able to define the conditions upon which an Update should occur. For instance, you can set your condition such that an update will only occur when a "Status" column is changed to Red, otherwise it will ignore the changed record. This new feature provides more granularity on the type of data being synced into your destination and allows for more detailed use cases.
- We improved the implementation of DataPollingConcurrencyIndex. We also added additional logging in the Data Polling Listener to enhance monitoring.
- When configuring a connection source with text columns, it's possible to specify a JSON content type. This instructs the system to interpret the contents as a JSON object and pass it through as such. This is useful when the target (such as Kafka) supports and expects a JSON object for a specific target column. When setting this option, the value should always be valid JSON. Alternatively, the original, default behaviour of treating text columns as plaintext is unchanged. As plaintext, the contents of the column will be passed through as a string, even if it could be interpreted as JSON.
- We implemented alphabetical sorting for queries in the Connections listener UI RunQuery and Cinchy Query dropdowns. This streamlines navigation and simplifies query selection for users.
- We enhanced the batch processing system to ensure all records in the queue are fully processed before a batch job is marked as complete.
- We've enhanced the validation process for delete synchronization configurations. The system now checks the configuration at the start of the sync, ensuring the ID Column is defined and matches the Dropped Record behavior. This update prevents errors and confusion, leading to a smoother and more intuitive sync operation.
- We have expanded the authentication options available when building a TSQL database connection; including "Active Directory Interactive" in the platform SQL connection string (i.e., the connection string for the database that hosts the Cinchy web/IDP application) will now utilize Active Directory Device Code Flow.
- Cinchy v5.10 is compatible with the MySql v8.3.0 driver.
- The Kafka configuration validation for the Connections WebApi and Worker has been improved such that applications will not start if any Kafka config value is invalid.
- You are now able to configure Cross-Origin Resource Sharing (CORS) for the Connections Experience.
This configuration allows the Connections Web API to be reached by applications running on domains other than the one that hosts your Connections Experience, and is especially useful for building projects/applications on Cinchy.
- This value can be configured in the Connections WebApi > appsettings.json > "AppSettings" field by inputting an array of strings, where each string is a domain. Example:
"AppSettings": {
"CorsOrigins" : ["a.com", "b.com", "c.com"],
}