v5.13 Change Summary
This page provides a comprehensive list of changes made to the Cinchy Platform. Using the tabs below, choose the version of Cinchy you are currently on in order to view what to expect upon upgrading to version 5.13.
- 5.5
- 5.6
- 5.7
- 5.8
- 5.9
- 5.10
- 5.11
v5.5
The following changes were made to the platform between v5.5 and v5.13.
Breaking changes
Deprecation of the k8s.gcr.io Kubernetes Image Repository v5.6
The Kubernetes project runs a community-owned image registry called registry.k8s.io to host its container images. On April 3rd, 2023, the old registry, k8s.gcr.io, was deprecated and no further images for Kubernetes and related subprojects are being pushed to it. All images are now served from the new registry, registry.k8s.io.
For new Cinchy deployments: this change will be automatically reflected in your installation.
For current Cinchy deployments: please follow the instructions outlined in the upgrade guide to ensure your components are pointed to the correct image repository.
You can review the full details on this change on the Kubernetes blog.
Discontinuation of support for 2012 TSQL v5.9
As of version 5.9, Cinchy will cease support for 2012 TSQL. This change aligns with Microsoft's End of Life policy. For further details, refer to the SQL Server 2012 End of Support page.
Removal of GraphQL API (Beta) v5.9
The beta version of our GraphQL API endpoint has been removed. If you have any questions regarding this, please submit a support ticket or email support@cinchy.com.
Personal Access Tokens v5.10
There was an issue affecting Personal Access Tokens (PATs) generated in Cinchy wherein tokens created from v5.7 onwards were incompatible with subsequent versions of the platform. This issue has been resolved; however, please note that:
- Any tokens created on versions 5.7.x, 5.8.x, and 5.9.x will need to be regenerated.
- "401 Unauthorized" errors may indicate the need to regenerate the token.
- PATs created before 5.7.x and from 5.10 onwards are unaffected.
Update to .NET 8 v5.13
The Cinchy platform was updated to .NET 8, in accordance with Microsoft's .NET support policy. Support for .NET 6 ends on November 12, 2024.
- For customers on Kubernetes: This change will be reflected automatically upon upgrading to Cinchy v5.13+.
- For customers on IIS: The following must be installed prior to upgrading to Cinchy v5.13+:
General Platform
The following changes pertain to the general platform.
v5.6
- Miscellaneous security fixes.
- General CDC performance optimizations.
- We upgraded our IDP from IdentityServer4 to IdentityServer6 to ensure we're maintaining the highest standard of security for your platform.
- We implemented Istio mTLS support to ensure secure/TLS in-cluster communication of Cinchy components.
- Cinchy v5.8 is compatible with the MySql v8.1.0 driver.
- Cinchy v5.9+ is compatible with the MySql v8.2.0 driver.
- We have updated our third-party libraries:
  - Updated Npgsql to version 7.0.7.
  - Upgraded moment.js to 2.29.4.
  - Various other NuGet package updates.
Expanded CORS policy for Cinchy Web API endpoints v5.9
Cinchy Web API endpoints now feature a more permissive Cross-Origin Resource Sharing (CORS) policy, allowing requests from all hosts. This update enhances the flexibility and integration capabilities of the Cinchy platform with various web applications.
Make sure to use robust security measures in your applications to mitigate potential cross-origin vulnerabilities.
Time Zone Updates v5.9
We updated our time zone handling to improve compatibility and user experience. This change affects both PostgreSQL (PGSQL) and Transact-SQL (TSQL) users, with significant changes in options, discontinuation of support for older TSQL versions, and manual time zone migration. Time zone values will be changed and mapped during the upgrade process. In case of mapping failure, the default time zone will be set to Eastern Standard Time (EST). This enhancement does the following:
- PGSQL Time Zone support: PGSQL now offers an expanded range of time zone options. These options may not have direct equivalents in TSQL.
- Discontinuation of TSQL 2012 Support: We're discontinuing support for TSQL 2012. Users must upgrade to a newer version to ensure compatibility with the latest time zone configurations.
- System Properties Update: Time zone settings will continue to be supported in TSQL 2016 and later versions.
Manual Time Zone Migration
Due to differences in time zone naming between TSQL and PGSQL, Cinchy will manually migrate users to a matching time zone. To verify your time zones, you can do the following:
- Personal preferences:
  - All users should check their time zone settings post-migration.
  - For personal settings, select My Profile and set the preferred time zone.
  - For system settings, access the system properties table (ADMIN) and manually copy the PGSQL name into the Value column.
- Database Access Requirements: The Cinchy application must have READ access to the following tables, depending on the database in use:
  - PGSQL: pg_timezone_names
  - TSQL: sys.time_zone_info
Integration with AWS and Azure in External Secrets Manager v5.9
With the External Secrets Manager table, Cinchy now offers comprehensive integration capabilities with AWS and Azure. This enhancement allows for streamlined management and integration of external secrets within the Cinchy environment and expands the supported authentication types from AWS and Azure, providing a more versatile approach to managing external secrets.
For AWS, Cinchy now supports the following secret types:
- AWS access keys for IAM users.
- IAM roles.
For Azure, Cinchy now supports the following secret types:
- Managed identities.
- Registered applications.
Introducing Cinchy Automations v5.11
Cinchy Automations is a platform tool that allows users to schedule tasks. To reduce the time and manual effort spent on recurring tasks, you can now tell Cinchy to perform the following automatically:
- Execute queries
- Trigger batch syncs
- Extract and run a code bundle, which can contain any number of queries or syncs needed to perform a task.
Using the Automations capability, you can also build an automation that performs multiple tasks in sequence (known as "Automation Steps") to allow for more complex use cases.
You can find the full details on this powerful new capability here.
Deployment
The following changes pertain to the deployment process.
v5.7
- We've added support for AWS EKS EBS volume encryption for customers wishing to take advantage of industry-standard AES-256 data encryption without having to build, maintain, and secure their own key management infrastructure. By default, the EKS worker nodes will have a gp3 storage class for new deployments. If you are already running a Cinchy environment, make sure to keep your eks_persistent_apps_storage_class set to gp2 within the DevOps automation aws.json file.
- If you want to move to gp3 storage, or to gp3 storage with volume encryption, you will have to delete the existing volumes/PVCs for Kafka, Redis, OpenSearch, Logging Operator, and Event Listener with StatefulSets so that ArgoCD can recreate the proper resources.
- Should your Kafka cluster pods not come back online after deleting the existing volumes/PVCs, restart the Kafka operators. You can verify the change by running the below command:
kubectl get pvc --all-namespaces
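For reference, below is a minimal sketch of the relevant aws.json entry. The key name comes from the note above; the surrounding file structure is abbreviated and assumed, so check your actual DevOps automation files before editing.
// Illustrative excerpt from the DevOps automation aws.json; keep "gp2"
// to retain existing volumes, or use "gp3" for new deployments.
{
  "eks_persistent_apps_storage_class": "gp2"
}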
APIs
The following changes pertain to Cinchy's APIs.
v5.6
- We've fixed a bug that was causing bearer token authenticated APIs to stop working on insecure HTTP Cinchy environments.
- We've implemented a new API endpoint for the retrieval of your secrets. Using the below endpoint, fill in your <base-url>, <secret-name>, and <domain-name> to retrieve the referenced secret. This endpoint works with Cinchy's Personal Access Token capability, as well as Access Tokens retrieved from your IDP.
Blank Example:
<base-url>/api/v1.0/secrets-manager/secret?secretName=<secret-name>&domain=<domain-name>
Populated Example:
Cinchy.net/api/v1.0/secrets-manager/secret?secretName=ExampleSecret&domain=Sandbox
The API will return an object in the below format:
{
"secretValue": "password123"
}
- We have added two new scopes to the [Cinchy].[Integrated Clients] table: read:all and write:all, which can be used to fine-tune your permission sets. These scopes are found in the "Permitted Scopes" column of the table:
  - read:all = clients can read all data.
  - write:all = clients can read and write all data.
  - Both scopes are automatically assigned to existing integrated clients upon upgrading to v5.10.
- Note: All new Cinchy Web API endpoints that use Bearer Tokens or Cookie Authentication Schemes must have at least one of the new scopes assigned. These endpoints are still currently accessible with the old js_scope, however this will be deprecated in a future release. You can update the scopes of existing endpoints in the “Permitted Scopes” column of the [Cinchy].[Integrated Clients] table.
- Note: A user with write entitlements will not be able to write when using a client that only has the read:all scope assigned.
- Note: Clients that receive a 403: Forbidden status code error in the logs should make note of this change as a possible cause, and update the permissions accordingly.
- You can now use Personal Access Tokens in the following scenarios:
- As authentication when calling the api/v1.0/jobs API endpoint.
- As authentication when calling the api/v1.0/jobs API endpoint as another user.
- Added UTF-8 encoding to the Saved Query API endpoint.
Logging and Troubleshooting
The following changes pertain to error logging and troubleshooting within the platform. Note: Connections-specific changes are featured in the Connections section below.
v5.9
- We improved the error messaging for model loader failures. Before, loading a model with a duplicate name and version in the Models table showed unclear error messages. Users had to check logs to identify the failure cause. The error screen now shows clear, detailed messages. This change makes troubleshooting easier and offers context into model loader failures.
- We integrated Kafka and Redis logs into OpenSearch, offering improved insight and quicker debugging for Change Data Capture (CDC) processes. This enhancement improves issue resolution and streamlines monitoring.
- To enhance the troubleshooting of Cinchy's Angular SDK, errors will now display additional context. Failed call exceptions will contain more useful errors in the data.details property.
Tables
The following changes pertain to tables.
Table Enhancements
v5.7
- We updated the dropdown menus for Link columns to display selected and deleted values at the top of the list so that you don't need to scroll through long lists just to find the ones you've selected.
- The Cinchy platform now comes with a new way to store secrets: the Cinchy Secrets Table. Adhering to Cinchy's Universal Access Controls, you can use this table as a key vault (similar to Azure Key Vault or AWS Secrets Manager) to store sensitive data accessible only to the users or user groups you give access to. You can use secrets stored in this table anywhere a regular variable can go when configuring data syncs, including but not limited to:
- As part of a connection string;
- Within a REST Header, URL, or Body;
- As an Access Key ID. You can also use it in a Listener Configuration.
- You can now enable change notifications and related features on system tables within the Cinchy domain. Administrators and users now have better visibility into the use and modification of these tables. This includes additions, deletions, or updates to the table data.
- If you are on PostgreSQL, please restart the web application pod to enable change notifications.
- Some tables, such as the Listener State table, are excluded from this feature due to their high-volume nature.
- Change Data Capture (CDC) can't be enabled on tables that aren't versioned, specifically the Listener State table.
- When you enable CDC on a system table, the model loader can't disable it.
- We introduced a new feature that allows members of the Cinchy Builders group
to perform truncate table operations. This enhancement enables Builders to
effectively manage and manipulate table data. Key features include:
- Truncate Table Capability: Members of the Cinchy Builders group now have the authority to execute TRUNCATE operations on tables.
- Design Table Access: To perform a truncate operation, the user must have access to the Design Table of the table they intend to truncate. If the user lacks this access, the system will give an error stating "Design Table Access required to execute Truncate command".
- Selecting a link to a PDF stored in Cinchy via a Link column associated with Cinchy\Files now respects your browser settings and opens the PDF in your browser, if you've set it to do so.
- The minimum length of a text column created by a CSV import is now 500 characters.
- Removed infinite scrolling from tables and in link column dropdowns.
- Tables now have pagination and will show 250 records per page. This affects both the regular table view as well as the tables that populate in the query builder view.
- Link Column dropdowns will display the first 100 records. Users can type in the cell to further filter down the list or search for records beyond the first 100.
- Link Column dropdowns will no longer return null values.
- When using the "Sort" capability on a table, you can now specify whether you want the data to return nulls first or last. Note that selecting either of these options will have an impact on the performance of the table. Leaving the option blank/unspecified will mitigate the impact.
- To improve platform performance by limiting the amount of data that must be read, table views will no longer query for columns that are not represented within the context of that specific view.
Table Bug Fixes
v5.6
- We've fixed a "Column doesn't exist" error that could occur in PostgreSQL deployments when incrementing a column (ex: changing a column data type from number to text).
- We've fixed a bug where table views containing only a single linked column record would appear blank for users with “read-only” permissions.
- We fixed an issue with the behaviour of cached calculated columns when using multi-select data types (Link, Choice, and Hierarchy) with Change Approval enabled. These data types should now work as expected.
- You can now export up to the first 250,000 records from a View using the Export button on a table.
- We fixed the character limit in the Secrets table for Aurora databases. The Secret Value column capacity has increased from 500 to 10,000 characters, ensuring adequate space for storing secret data.
- We resolved an issue in the Collaboration Log Revert function where date-time values in unrelated columns were incorrectly altered.
- We resolved an issue where altering metadata for date columns in PostgreSQL led to exceptions during operations.
- We resolved an issue that caused binary columns to drop when editing the Users and Files system tables. This fix ensures that binary data types are now correctly recognized and retained during table modifications.
- Fixed a bug where the platform was not saving records when changes were made immediately after record creation.
- IS NULL checks with a multi-select field or parameter will now yield the expected result for empty values.
- Adding a filter expression to a Link column via the UI will no longer cause a number conversion error.
DXD
The following changes pertain to Cinchy DXD.
v5.7
We added additional system columns to extend the number of core Cinchy objects that can be managed through DXD 1.7 and higher. The newly supported Cinchy objects are:
- Views (Data Browser)
- Listener Config
- Secrets
- Pre-install Scripts
- Post-install Scripts
- Webhooks
Queries and CQL
The following changes pertain to queries and Cinchy Query Language.
Query and CQL Enhancements
v5.7
- Optimized PostgreSQL query performance when referencing multi-select columns.
- Improved query performance when using a CASE statement on a Link reference.
- We added execute, a new method for UDF extensions. This new query call returns a queryResult object that contains additional information about your result. For more information, see the Cinchy User Defined Functions page.
- The POST endpoint for Saved Queries now automatically serializes hierarchical JSON to text when the content-type is application/json. This update supports values that are objects or arrays, eliminating the need for manual serialization and making it easier for developers to work with Saved Queries.
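As an illustration, here is a hypothetical request body for such a POST call. The parameter name @payload and the body shape are assumptions for this sketch rather than documented API details; the point is that object and array values no longer need to be serialized by hand.
// Hypothetical POST body sent with Content-Type: application/json.
// "@payload" is an assumed saved-query parameter; its object value is
// automatically serialized to text before the query executes.
{
  "@payload": {
    "orderId": 1042,
    "items": ["widget-a", "widget-b"]
  }
}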
- We have added the Compress JSON parameter to the Query Builder UI and [Saved Queries] table. JSON compression can:
- Help to reduce the amount of time it takes to query and process data
- Reduce the amount of bandwidth needed to transfer data. This can be especially beneficial for applications that require frequent data updates, such as web applications.
- Reduce the amount of memory needed to store data.
- We have made various enhancements to the Saved Queries table for use cases when your queries are being used as API endpoints. Better management of these queries is possible by way of HTTP methods (GET, POST, PUT, PATCH, DELETE) for distinguishing between types of query operations, Versions for endpoint versioning, and UUIDs for grouping queries. Please review the Queries and Saved Query API pages for further details.
- To gain more control over your query creation, we have added a Cancel button to the query builder. The Cancel/Stop button appears while your query is running; clicking it will abort the active query and return a "Query execution cancelled" message.
Query and CQL Bug Fixes
v5.6
- We've fixed a bug that was causing a "Can't be Bound" error when you attempted to use an UPDATE query on a multi-select link column as a user with multiple filters active.
- We fixed a bug that was stripping query parameters from Relative URLs if they were being used as the Application URL of applets. For example, the bug would have stripped the "q=1" parameter from such a URL, leaving an Absolute URL in lieu of a Relative one.
- We fixed a bug in CQL on PostgreSQL that caused the DATEADD function to truncate input dates down to DAY precision. Now, you can expect more accurate date manipulations without losing finer time details.
- We improved messaging in CQL Saved Queries to provide clearer error messages when required parameters are missing in saved queries, aiding in self-debugging.
- Fixed an invalid CQL bug in the Query Editor UI when using FOR JSON PATH while building queries in PGSQL.
Connections
The following changes pertain to data syncs and the Connections experience.
New Features
v5.7
- We added Oracle as a new database type for Polling Events in Connections. Data Polling is a source option, first featured in Cinchy v5.4, which uses the Cinchy Event Listener to continuously monitor and sync data entries from your Oracle, SQL Server, or DB2 server into your Cinchy table. This capability makes data polling a much easier, more effective, and streamlined process and avoids the complex orchestration logic that was previously necessary.
- We made it simpler to debug invalid credentials in data syncs by adding a "Test Connection" button to the UI for the following sources and destinations:
| Name | Supported source | Supported destination |
|---|---|---|
| Amazon Marketplace | ✅ Yes | 🚫 No |
| Binary Files | ✅ Yes | N/A |
| Copper | ✅ Yes | N/A |
| DB2 | ✅ Yes | ✅ Yes |
| Delimited File | ✅ Yes | N/A |
| Dynamics | ✅ Yes | 🚫 No |
| Excel | ✅ Yes | N/A |
| Fixed Width File | ✅ Yes | N/A |
| Kafka Topic | 🚫 No | ✅ Yes |
| ODBC | ✅ Yes | N/A |
| Oracle | ✅ Yes | ✅ Yes |
| Parquet | ✅ Yes | N/A |
| REST | 🚫 No | 🚫 No |
| Salesforce Object | ✅ Yes | ✅ Yes |
| Snowflake | ✅ Yes | ✅ Yes |
| SOAP | 🚫 No | 🚫 No |
| MS SQL Server | ✅ Yes | ✅ Yes |
Selecting this button will validate whether your username/password/connection string/etc. can successfully connect to your source or destination. If successful, a "Connection Succeeded" popup will appear. If unsuccessful, a "Connection Failed" message will appear, along with the ability to review the associated troubleshooting logs. With this change, you can debug access-related data sync issues at a more granular level.
v5.8
- Cinchy now supports a new Cinchy event-triggered source: SOAP API. This new feature initiates a SOAP call based on Change Data Capture (CDC) events occurring in Cinchy. The SOAP response then serves as the source for the sync and can be mapped to any destination. For more information, see the SOAP 1.2 (Cinchy Event Triggered) page.
- A new destination type has been added to the Connections Experience. The "File" destination provides the option to sync your data into Amazon S3 or Azure Blob Storage as a delimited file.
- Introducing Kafka Topic Isolation, a feature designed to optimize the performance of designated real-time syncs. Users can assign custom topics to any listener config, essentially creating dedicated queues to 'fast track' the processing of associated data. When configured appropriately, high priority listener configs will benefit from dedicated resources, while lower priority listener configs will continue to share resources. This provides a mechanism to improve the throughput of critical or high volume workloads, while preserving the default behaviour for most workloads. For more detail on Kafka Topic Isolation, please review the documentation here.
Note: This feature does not support SQL Service Broker.
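As a rough sketch only: a listener config might assign its dedicated queue along the lines below. The attribute name topicName is a placeholder invented for this example, not a documented key; confirm the actual configuration against the Kafka Topic Isolation documentation linked above.
// Illustrative only. "topicName" is a hypothetical attribute standing in
// for whatever key the Kafka Topic Isolation documentation specifies.
{
  "topicName": "high-priority-orders-sync"
}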
Connections Enhancements
v5.6
- To better enable your business security and permission-based needs, you are now able to run the Connections pod under a service account that uses an AWS IAM (Identity and Access Management) role, which is an IAM identity that you can create to have specific permissions and access to your AWS resources. To set up an AWS IAM role for use in Connections, please review the documentation here.
- You are also able to use AWS IAM roles when syncing S3 file or DynamoDB sources in Connections.
- To increase your data sync security and streamline authentication, we've added support for the use of x.509 certificate authentication for MongoDB Collection Sources, MongoDB (Cinchy Event Triggered) Sources, and MongoDB Targets. This new feature can be accessed directly from the Connections UI when configuring your data sync.
- Continuing to increase our data sync capabilities and features, you can now use @CinchyID as a parameter in post sync scripts when the source is from a Cinchy Event (such as the Event Broker, the Event Triggered REST API, and the Event Triggered MongoDB sources). This means that you can now design post sync scripts that take advantage of the unique CinchyID value of your records.
- We've added a new "Conditional" option for Changed Record Behaviours. When Conditional is selected, you will be able to define the conditions upon which an Update should occur. For instance, you can set your condition such that an update will only occur when a "Status" column is changed to Red, otherwise it will ignore the changed record. This new feature provides more granularity on the type of data being synced into your destination and allows for more detailed use cases.
- We improved the implementation of DataPollingConcurrencyIndex. We also added additional logging in the Data Polling Listener to enhance monitoring.
- When configuring a connection source with text columns, it's possible to specify a JSON content type. This instructs the system to interpret the contents as a JSON object and pass it through as such. This is useful when the target (such as Kafka) supports and expects a JSON object for a specific target column. When setting this option, the value should always be valid JSON. Alternatively, the original, default behaviour of treating text columns as plaintext is unchanged. As plaintext, the contents of the column will be passed through as a string, even if it could be interpreted as JSON.
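For example (the column name and values here are illustrative), a source text column containing {"a": 1} would reach a JSON-aware target such as Kafka in one of two shapes depending on this setting:
// With the JSON content type set, the contents pass through as an object:
{ "myColumn": { "a": 1 } }
// With the default plaintext behaviour, they pass through as a string:
{ "myColumn": "{\"a\": 1}" }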
- We implemented alphabetical sorting for queries in the Connections listener UI RunQuery and Cinchy Query dropdowns. This streamlines navigation and simplifies query selection for users.
- We enhanced the batch processing system to ensure all records in the queue are fully processed before a batch job is marked as complete.
- We've enhanced the validation process for delete synchronization configurations. The system now checks the configuration at the start of the sync, ensuring the ID Column is defined and matches the Dropped Record behavior. This update prevents errors and confusion, leading to a smoother and more intuitive sync operation.
- We have expanded the authentication options available when building a TSQL database connection; including "Active Directory Interactive" in the platform SQL connection string (i.e., the database that hosts the Cinchy web/IdP application) will now utilize Active Directory Device Code Flow.
- Cinchy v5.10 is compatible with the MySql v8.3.0 driver.
- The Kafka configuration validation for the Connections WebApi and Worker has been improved such that applications will not start if any Kafka config value is invalid.
- You are now able to configure Cross-Origin Resource Sharing (CORS) for the Connections Experience.
This configuration allows the Connections Web API to be reachable by applications running on domains other than the one hosting your Connections Experience, and is especially useful for building projects/applications on Cinchy.
- This value can be configured in the Connections WebApi > appsettings.json > "AppSettings" field by inputting an array of strings, where each string is a domain. Example:
"AppSettings": {
  "CorsOrigins": ["a.com", "b.com", "c.com"]
}
Troubleshooting Enhancements
v5.7
- To help simplify and streamline the Connections experience, you are now able to view the output for each job by clicking on the Output button located in the Jobs tab of the UI after you run a sync. This links to the Execution Log table with a filter set for your specific sync, which can help you reach your execution-related data quicker and easier than before.
- We now log the full REST Target HTTP response in the data sync Execution Errors table to provide you with more detailed information about your job. This replaces the original log that only contained the HTTP response status code.
- We added a warning to the Schema sections of multiple Sources to mitigate issues
due to mismatched column order. This warns users that the column order in the
schema must match the source/destination configuration. The changes affect the
following data sources:
- LDAP
- Excel
- Binary
- Fixed Width
- Cinchy Query
- Error messages that pop up in the Connections Experience will provide additional information that will be more useful for troubleshooting.
- SQL database read performance logging in Connections now reports a single entry per batch, making the results easier to interpret than the previous fixed-size intervals (which may not have corresponded directly with batch activity).
- Performance and error analysis is easier to accomplish with the addition of logging for job execution parameters in data syncs. After starting a Batch job, you can navigate to Connections.WebApi logs in OpenSearch on Cinchy v5.0+, or the admin panel on Cinchy IIS, and search for an "Executing Job with parameters" message to see which parameters the job is executing with. (Note that the log location will depend on where you set up your log storage upon deployment.)
Example log:
{"@t":"2024-01-09T21:33:05.5223771Z","@mt":"Executing Job with parameters: {Reconcile Data}; {Degree Of Parallelism}; {Batch Size}; {Retrieval Batch Size}","Reconcile Data":true,"Degree Of Parallelism":2,"Batch Size":4000,"Retrieval Batch Size":3000,"SourceContext":"Cinchy.Connections.WebApi.Services.BatchDataSyncExecutor","ExecutionId":336,"DataSyncContextLogSink":336,"QueuedJobsProcessorId":"4"}
UI Enhancements
v5.6
- To better communicate the relationship between the Source and any required Listener Configurations, we've added help text for event-based sources to the Source step of a connection. This text explains when a listener configuration is required as part of the sync.
- Record behaviour is now presented via radio buttons so that you can see and select options quicker and easier than ever before.
- For simpler real-time sync setups, the Cinchy Event Broker has a new Listener section. This section assists in creating topic JSON for listener configurations, eliminating the need to manually set up topic JSON in the Listener Config table. Refer to the Cinchy Broker Event source page for details on topic JSON fields.
- We've introduced the ability to dismiss most modals using the Escape key. This enhancement provides a more convenient and user-friendly interaction experience.
- We've made significant improvements to the Load Metadata sources and destinations, enhancing user experience:
- The Load Metadata modal no longer appears automatically when selecting a relevant source or destination.
- The availability of the Load Metadata button is conditional on filling out parameters in the Connection section.
- Clicking the Load Metadata button now directly takes you to metadata columns, skipping the interstitial modal.
- In the Schema section, all columns are now collapsed by default. Manually added columns maintain an expanded view.
- To assist sharing and collaboration on connections, we've introduced unique URLs for all saved connections. Each connection now possesses a unique URL that can be shared with other platform users. This URL links directly to the saved configuration.
- We've streamlined the destination setup process for data syncs. When selecting a Source other than Cinchy, the destination is now automatically set as Cinchy Table. This enhancement speeds up the creation of data syncs.
- Included descriptive explanations in various sections, such as Mapping, Schema, and Sync Behaviour, to provide comprehensive guidance during data sync configuration.
- Grouped Sources by type, distinguishing between Batch and Event categories.
- Implemented alphabetical sorting for improved accessibility and ease of locating connections.
- Added clarifying text throughout the interface for smoother navigation and configuration, fostering a more user-friendly experience.
- Standardized language used in file-based connectors across all Sources.
- Adjusted terminology for clarity and consistency:
- Renamed Sync Behaviour tab to Sync Actions.
- Replaced Parameters with Variables.
- Changed "Sync Pattern" to Sync Strategy in the Sync Actions tab.
- Updated Column Mappings to Mappings in the Destination tab.
- Substituted Access Token with API Key in the Copper Source, aligning with Copper's documentation language.
- Reorganized the process steps, moving the "Permissions" step within the "Info" tab.
- Eliminated the following fields for a more focused interface:
- Source > Cinchy Table > Model
- Info > Version
- The API Response Format field has been removed from the REST Source configuration. This change reflects that the only supported response format is JSON.
- Expanded the width and height of source, destination, and connections drop-down menus to ensure visibility, even on screens with varying sizes.
- Streamlined the organization of file-based source fields for greater efficiency.
- Replaced drop-down menus with radio buttons for the following options:
- Sync Strategy
- Source Schema Data Types
- Source Schema "Add Column"
- As we continue to enhance our Connections Experience offerings, you can now configure your listener for real-time syncs directly in the UI without having to navigate to a separate table. For any event triggered sync source, (CDC, REST API, Kafka Topic, MongoDB Event, Polling Event, Salesforce Platform Event, and Salesforce Push Topic), there is now the option to input your configurations directly from the Source tab in the Connections Experience. Any configuration you populate via the UI will be automatically reflected back into the Listener Config table of your platform. You are able to set the:
- Topic JSON
- Connections Attributes
- Auto Offset Reset
- Listener Status (Enabled/Disabled)
- We added a Listener section to the MongoDB Collection (Cinchy Event Triggered) and REST API (Cinchy Event Triggered) Sources. You can now manage the event trigger within the Connections UI. This reduces the complexity of managing the Listener Config table.
- You can now use drop-down menus for selecting Cinchy tables and queries for both Cinchy sources and destinations. This feature replaces the previous method, where users had to manually type in their selections.
- We added links next to any Cinchy Tables that are referenced in the UI. These links directly open the respective table, making navigation more seamless.
- We improved the user experience for header row settings for delimited files. The
following improvements have been added.
- Use Header Row Checkbox: Controls visibility of column names and Load Metadata button.
- Schema Columns Warning: Informs users about column order when header row is disabled.
- Modal Warning: Explains schema column reset when disabling header row.
- Header Record Row Number: Specifies row to use as header.
- Connections UI now includes several new elements to improve the monitoring and
control of listener statuses:
- A toggle switch to display the listener's current status.
- A direct link to a filtered view of records in the Execution Errors table where errors have occurred.
- An indicator of the listener's running state, which can be Disabled, Starting, Running, or Failed.
- A message is displayed when the listener isn't active and has failed, providing information on possible next steps.
- We've made enhancements to the UI for certain dropdown menus in Connections.
- Type ahead style dropdowns: We changed the table and query dropdowns to type ahead style, aligning with the existing Source and Destination dropdowns for a smoother UI.
- Uniform dropdown heights: We adjusted the Destination dropdown to match the Source dropdown in height, ensuring a consistent and visually appealing UI.
- Alphabetical Query sorting: We implemented alphabetical sorting for queries in the dropdown list.
- Consistent navigation links: We added navigation links next to the Table and Queries dropdowns for a uniform and intuitive user experience.
- To improve the user experience of building and running data syncs, we have added a "description" field to the Info section in the Connections experience. This field has a 500 character limit.
- When configuring or viewing a data sync created by a user with a different permission set than yours, you may run into a case where the sync involves tables/queries you don't have access to. Previously, the table/query dropdowns in these cases would appear blank; now they will populate with the names of those objects. Note that:
- You won’t be able to change the associated schema of a table/query you cannot access. Some fields may appear as disabled (ex: data mappings).
- You can still modify and save other aspects of the sync.
- Event-based syncs can be enabled and run as usual. Batch syncs must be run as a user with the correct permissions. Remember that you can run a job as another user if you have the credentials for that user.
- Spend less time searching and more time building: You are now able to use the "Models" dropdown field in the Connections UI to quickly select tables scoped within the respective model.
Source and Destination Enhancements
- We've expanded on our Cinchy Event Triggered data sync source features (REST API and MongoDB), allowing you more freedom to utilize your data. You now have the ability to reference attributes of the CDC Event in your calculated columns. (Note that syncs making use of this must limit their batch size to 1.)
- A new configurable property, QueueWriteConcurrencyIndex, was added to the MongoDB Event Listener. This property allows only a certain number of threads to concurrently send messages to the queue, which provides more consistent batching by the worker and reduces batching errors. The default number of threads is set to 12. To configure this property, navigate to the appSettings.json > QueueWriteConcurrencyIndex: <numberOfThreads>. This index is shared across all listener configs, meaning that if it's set to 1, only one listener config will be pushing messages to the queue at a single moment in time.
- We also added a new optional property to the MongoDB Listener Topic, changeStreamSettings.batchSize, which is a configurable way to set your own batch size on the MongoDB Change Stream Listener.
{
"database": "",
"collection": "",
"changeStreamSettings": {
"pipelineStages": [],
"batchSize": "1000"
}
}
- We added a new configurable property, DataPollingConcurrencyIndex, to the Data Polling Event Listener. This property allows only a certain number of threads to run queries against the source database, which works to reduce the load against the database. The default number of threads is set to 12. To configure this property, navigate to your appSettings.json deployment file > "DataPollingConcurrencyIndex": <numberOfThreads>.
- We added a new configurable property, QueueWriteConcurrencyIndex, to the Data Polling Event Listener. This property allows only a certain number of threads to concurrently send messages to the queue, which provides more consistent batching by the worker and reduces batching errors. The default number of threads is set to 12. To configure this property, navigate to your appSettings.json deployment file > "QueueWriteConcurrencyIndex": <numberOfThreads>. Note that this index is shared across all listener configs, meaning that if it's set to 1, only one listener config will be pushing messages to the queue at a single moment in time.
- We added a new mandatory property, CursorConfiguration.CursorColumnDataType, to the Listener Topic for the Data Polling Event. This change was made in tandem with an update that ensures the database query always moves the offset, regardless of whether the query returned any records. This helps ensure that the performance of the source database isn't weighed down by constantly running heavy queries on a wide range of records when the queries return no data. The value of this mandatory property must match the column type of the source database system for proper casting of parameters.
- We added a new configurable property, CursorConfiguration.Distinct, to the Listener Topic for the Data Polling Event. This property is a true/false Boolean that, when set to true, applies a distinct clause on your query to avoid duplicate records.
// App Settings JSON Example
// Example of the new configurable properties: DataPollingConcurrencyIndex (set to "1") and QueueWriteConcurrencyIndex (set to "1")
"AppSettings": {
"GetNewListenerConfigsInterval": "",
"StateFileWriteDelaySeconds": "",
"KafkaClientConfig": {
"BootstrapServers": ""
},
"KafkaRealtimeDatasyncTopic": "",
"KafkaJobCancellationTopic": "",
"DataPollingConcurrencyIndex": 1,
"QueueWriteConcurrencyIndex": 1
}
// Listener Config Topic Example
// Example of the new mandatory CursorColumnDataType property, which below is set to "int", and "Distinct", below set to "true".
{
"CursorConfiguration": {
"FromClause": "",
"CursorColumn": "",
"BatchSize": "",
"FilterCondition": "",
"Columns": [],
"Distinct": "true"
"CursorColumnDataType" : "int"
},
"Delay": ""
}
- For REST API, SOAP 1.2, Kafka Topic, Platform Event, and Parquet sources, we added a new "Conditional" option for source filters in the Connections UI. Similar to how the "Conditional Changed Record Behaviour" capability works, once selected you will be able to define the conditions upon which data is pulled into your source via the filter. After data is pulled from the source, the new conditional UI filters down the set of returned records to the ones that match the defined conditions.
- You can now pull specific data from REST API response headers using .NET regex capture groups. This feature gives you more control and flexibility in collecting the data you need when using REST API destinations.
- We implemented a significant enhancement to the read performance of Oracle data sources. This improvement targets scenarios involving tables with a large number of columns or large-sized columns, as well as networks experiencing higher latency.
- For file-based syncs, we've added "Registered Application" as an authentication mechanism for Azure Blob Storage. This is an addition to S3 support for file-based syncs.
- We've expanded the conditional filtering capabilities
introduced in Cinchy v5.7. This
enhancement is now available on the following sources:
- Kafka
- SOAP (Cinchy Event Triggered)
- REST API (Cinchy Event Triggered)
- We enhanced our data polling source with the introduction of {NEXTOFFSET} and {MAXOFFSET} keywords. These features streamline data synchronization by optimizing search ranges within queries, improving performance and efficiency. {MAXOFFSET} fetches the highest column value from a query, while {NEXTOFFSET} retrieves the largest column value from prior queries, essential for effective batch processing. For further details on using these new features in data polling, please visit the {MAXOFFSET} and {NEXTOFFSET} section on our Data Polling documentation page (a sketch is shown below).
- You can now execute post-sync scripts when performing Change Data Capture (CDC) operations with Kafka Topics as the destination. This enhancement enables developers to implement custom actions after data synchronization, such as setting status flags or recording timestamps. This allows for more flexible post-operation scripting in CDC workflows.
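Below is a hedged sketch of the {NEXTOFFSET} and {MAXOFFSET} keywords in use, reusing the CursorConfiguration fields from the listener topic example earlier in this section. The table and column names are illustrative; confirm exact usage on the Data Polling documentation page.
// Illustrative listener topic excerpt: {NEXTOFFSET} and {MAXOFFSET}
// bound the cursor column so each poll only scans rows beyond the
// last processed offset.
{
  "CursorConfiguration": {
    "FromClause": "[Orders]",
    "CursorColumn": "Id",
    "BatchSize": "1000",
    "FilterCondition": "Id > {NEXTOFFSET} AND Id <= {MAXOFFSET}",
    "Columns": [],
    "Distinct": "false",
    "CursorColumnDataType": "int"
  },
  "Delay": "10"
}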
- Real-time sync improvements for listener operational status: This enhancement improves the data synchronization feature in Cinchy. The Enabled/Disabled setting now more effectively controls the start and stop of data synchronization. Key enhancements include:
- Lifecycle Phases: The synchronization process now clearly follows distinct phases: Starting, Running, Failed, and Disabled. This structured approach enhances monitoring and debugging capabilities.
- Automatic Retry Mechanism: In the Failed state, due to synchronization errors, the system logs detailed error messages and remains in the Enabled state. It automatically retries synchronization every 60 seconds until you set the status to Disabled.
- Automatic Disable Feature: The system now intelligently sets itself to Disabled under two specific conditions:
- Detection of an invalid configuration (such as erroneous Topic JSON).
- Validation error during synchronization (such as a missing mandatory field in the Topic JSON).
- When selecting columns in a Cinchy Event Broker or Cinchy Table source, four additional system columns have been added: Replaced, Rejection Comments, Changeset, and Change Summary.
- When configuring Kafka as a sync destination, you can now use a '@COLUMN' custom formula to enable a dynamic Topic. For further information on this, please review the Kafka Topic documentation.
- To help reduce possible system load, there is a new user-configurable limit on how many CDC event queries may run concurrently.
  - The "CdcQueryConcurrencyIndex" value defaults to 5 concurrent queries and can be configured in the Event Listener AppSettings.json. 5 is suitable for many environments (a sketch follows below).
  - If the load associated with Change Notifications is impacting system performance, consider lowering this value to prioritize other work, at the expense of change processing. Alternatively, provision faster hardware.
  - If Change Notification processing is taking longer than desired, consider increasing this number to allow more concurrent processing, depending on the capacity of your particular system.
- The Snowflake Driver was updated from 2.1.5 to 3.1.0, in part to allow for the use of a PrivateLink as the Connection String when using Snowflake as a source.
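A minimal sketch of the CdcQueryConcurrencyIndex setting described above, assuming it sits alongside the other keys in the Event Listener's AppSettings block shown earlier in this section (all other keys omitted):
// Event Listener appsettings.json excerpt; 5 is the documented default
// number of concurrent CDC event queries.
"AppSettings": {
  "CdcQueryConcurrencyIndex": 5
}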
- The retention period for messages on user-defined Kafka topics was set to 24 hours in order to match system-defined topics. This helps mitigate data duplication in cases where offsets were deleted before the messages in the topic.
- Improved the performance of permissions checks for the Connections experience.
- The above change also fixes a bug that could prevent certain users from downloading the data sync error logs.
- When configuring a data sync using the Salesforce Object source, the source filter section will appear in the UI as intended.
Connections Bug Fixes
v5.6
- We've fixed a bug where the Listener Configuration message for a data sync using the MongoDB Event source would return as "running" after it was disabled during an exception event; the message will now correctly return an error in this case.
- We've fixed a bug that was preventing DELETE actions from occurring when Change Approvals were enabled on a CDC source.
- In continuing to provide useful troubleshooting tools, we've fixed a bug that was preventing dead messages from appearing in the Execution Errors table when errors occurred during the open connection phase of a target. This error may have also occurred when a MongoDB target had a connection string pointing to a non-existent port/server.
- We've fixed a bug that was preventing Action Type column values of "Delete" from working with REST API target Delta syncs.
- We've fixed a data sync issue preventing users from using environment variables or other parameters in connection strings.
- We've fixed a bug in the Polling Event data sync where records would fail with a “unique constraint violation” if both an insert and an update statement happened at nearly the same time. To implement this fix, you need to add the “messageKeyExpression” parameter to your listener config when using the Polling Event as a source.
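Below is a hedged sketch of that listener config addition. The exact placement within the config and the expression value are assumptions for illustration; "id" stands in for a column that uniquely identifies the record so that near-simultaneous insert and update events for the same row are keyed together.
// Illustrative Polling Event listener config excerpt; "id" is an
// assumed unique key column.
{
  "messageKeyExpression": "id"
}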
- We've fixed a bug that was causing data syncs to fail when doing platform event inserts of any type into Salesforce targets.
- We've fixed a bug where using the ID Column in a Snowflake target sync would prevent insert and update operations from working.
- We've fixed a bug where attempting to sync documents using a UUID (Universally Unique IDentifier) as a source ID in a MongoDB Event Triggered batch sync would result in a blank UUID value when saved to a Cinchy table.
- We've fixed an issue relating to the .NET 6 upgrade that was causing the Event Listener and Worker to not start as a service on IIS in v5.4+ deployments.
- We fixed a bug where the UUID/ObjectId in a MongoDB Change Stream Sourced data sync wasn't being serialized into text format. If you have any MongoDB Stream Sourced syncs currently utilizing the UUID/ObjectId, you may need to adjust accordingly when referencing the columns with those data types.
// Previous UUID/ObjectIDs would have been serialized as the below:
{
"_id": ObjectId('644054f5f88104157fa9428e'),
"uuid": UUID('ca8a3df8-b029-43ed-a691-634f7f0605f6')
}
// They will now serialize into text format like this:
{
"_id": "644054f5f88104157fa9428e",
"uuid": "ca8a3df8-b029-43ed-a691-634f7f0605f6"
}
- We fixed a bug where setting a user’s time zone to UTC (Coordinated Universal Time) would result in no data being returned in any tables.
- We fixed a bug where the Sync GUID of Saved Queries transferred over via DXD would null out.
- We fixed a bug affecting the MongoDB Event Listener wherein the “auto offset reset” functionality would not work as anticipated when set to earliest.
- We fixed a bug where failed jobs would return errors for logs that haven't yet been created. Log files now correctly search for only the relevant logs for the failed job.
- We fixed an issue in the data configuration table where the IF field for the Delimited File > Conditional Calculated Column wasn't displaying correctly.
- We resolved an issue where using multiple parameters while configuring data syncs could result in parsing and execution errors.
- We fixed a bug preventing calculated columns from working in MongoDB targets for data syncs.
- We fixed a bug where users were prompted to restore unsaved changes for a new connection when no configuration changes to a data sync were made.
- We fixed a bug that was causing the platform to fail upon initializing when a System User had been added to any user group (such as the Connections or Admin groups).
- We fixed a bug where passing an encrypted value to a variable used in a field encrypted by the connections UI would cause the sync to fail. You can now use variables with either encrypted or plaintext values.
- We fixed a bug where using the "Delta" sync strategy led to duplicating existing records in some destinations before inserting the new rows of data.
- We resolved an issue where the Load Metadata button was failing to connect to DB2 databases when fetching schema information.
- We fixed an issue where the Mapping UI would disappear in the Destination Section for Cinchy Event Broker to MongoDB Collection syncs, where Sync Actions were set to Delta.
- We fixed an issue where system columns like Created By, Created, Modified By, Modified, Deleted, and Deleted By weren't appearing in the topic columns dropdown in the Listener UI.
- We fixed a bug where the model loader failed to update when you added a description to a calculated column. The table now saves correctly when making changes to calculated columns.
- We fixed an issue that prevented table selection from the drop-down in Cinchy Event Broker's listener configuration.
- We resolved an issue where the Lookup() function in the Filter field for Cinchy Tables wasn't behaving as expected.
- We restored the default timeout setting for HttpClient to over 100 seconds.
- We fixed an issue where the UI failed to display Batch Data Sync results and instead showed a generic exception message. The jobs tab in the UI now opens without any API failure appearing in the browser's network console.
- We resolved an issue that caused large batch delta syncs to fail in Cinchy.
- We fixed an issue where Cinchy CDC Delete events weren't sent to the destination using Delta. For example, Deletes and Approved Deletes now successfully insert records into Kafka when deleted from a Cinchy table.
- We fixed the issue of concurrent updates failing due to a Primary Key (PK) violation on the History table by adding a retry mechanism. This fix aims to make Cinchy more robust when making concurrent updates.
- We resolved an issue where the Cinchy destination would still be queried during a delta sync.
- We fixed an issue with data syncs that would fail on executed queries that returned large numbers of records on Cinchy Table destinations.
- We modified the Data Polling mechanism to enhance the reliability of message delivery to Kafka.
- We fixed an issue to ensure that Destination mappings in dropdowns now display the alias instead of the original column name.
- We resolved an issue where dropdowns weren't correctly loading data due to user permissions on system tables. This fix, involving an API change, ensures that dropdown data reflects appropriate user access levels.
- We resolved an issue where the Query dropdown wasn't populating when you selected RunQuery in Connections listener UI.
- We resolved a rendering issue in the Connections listener UI, where line breaks in the topic JSON were causing display problems.
- We resolved a security issue that removes the logging of Connection Attributes in the Event Listener.
- We added a retry mechanism to address a transient connection issue for PostgreSQL databases, where listeners in the production environment encountered errors due to invalid client_encoding and TimeZone parameters. The update enhances connection stability and reliability.
- We increased the request limit size for the Connections Job API, enabling the processing of larger files without encountering size restrictions.
- We fixed an issue in batch synchronization (CSV to table) where data was incorrectly updated on subsequent syncs without any actual changes. This fix ensures data integrity and accurate update behavior in the synchronization process.
- We fixed an issue where line breaks in the Listener Topic JSON would cause the Listener UI to not display any settings. Cinchy now removes any formatting from the Topic column of the Listener Config table.
- We resolved an issue where CDC to ADO.net syncs weren't using a single sync
for all operations. The following changes have been made:
- Sync Key Flexibility: Any standard Cinchy data type can be used as a sync key or ID field.
- ID and Sync Key Compatibility: Setting both to the same field won't cause failure.
- Unified Sync Operations: Insert, Update, and Deletes work in the same sync when specified.
- Auto offset Reset: Consistent behavior for all settings.
- Error Messaging: Clear error messages for missing operation info.
- We fixed a regression issue with the DB2 connector.
- We fixed a visual instability within the Filter text field.
- We resolved an issue where existing sync configurations in Cinchy Event Broker incorrectly displayed empty query dropdowns.
- We fixed an issue where data syncs from pipe-delimited files failed to process text fields containing quotes.
- We fixed a bug that was causing the unsaved changes dialog to be displayed in scenarios where there were no unsaved changes.
- We resolved an issue in changelog tables where updates that weren't batched and timeouts occurred during large record set processing. This fix ensures efficient handling of cache callbacks across all nodes.
- We resolved an issue where the order of multi-selects affected reconciliation in Connections.
- We have fixed a bug that was causing some data syncs to Cinchy Tables to unnecessarily update multi-select values in the destination. This fix reduces monitoring noise and prevents collaboration log and database size bloat.
- We fixed a bug where using 0 parameters in the ExecuteCQL API would incorrectly modify the API Models.
- We fixed a bug where the Publish Data Change Notifications setting was not being respected during table model loads.
- "Unique constraint" errors will no longer be thrown during parallel batch syncs that contained calculated columns.
- Fixed a bug that was creating duplicate target records when both inserting data in parallel and throwing transient errors that would trigger retries.
- We have addressed the possible out-of-memory errors that could arise during a data sync when "caching linked data values".
- Unnecessary collaboration log bloat will no longer occur due to the presence of parameters (variables) in a Data Sync XML.
- The Connections experience will no longer incorrectly start up if a Redis connection string is not provided. The following error will be thrown to provide troubleshooting assistance: "The Redis connection string is missing. Check your application configuration to ensure this value is provided."
- We fixed a bug where LEFT and RIGHT CQL functions were not working as expected.
- We fixed a bug that was preventing queries with User Defined Functions from executing due to a misalignment between the parser and the application initialization.
- Erased data will be filtered properly again on Left & Right Joined Table References.
- We fixed the following form bugs:
- A bug that prevented new records from being added to multiple child forms in the same view before the parent form was saved.
- A bug that duplicated newly-added records in a child form table if they were edited before the parent form was saved.
- Logging into the Forms application from a direct link in a fresh session resulted in a blank screen.
- The Active Jobs tab in the Connections UI will correctly show the currently running jobs.
- Fixed a bug that was preventing the Update and Delete actions from working in Batch Delta Syncs.
  - Additionally, if an invalid Action Type column value is provided when configuring a Delta Sync, the Connection logs will now contain more detailed warning messages. These log messages will include information about the record with the incorrect action type. For example: "Invalid sync action type ActionTypeValue in column ActionTypeColumnName".
  - Note: Valid sync action types are 'Insert', 'Update', and 'Delete'. Anything else is invalid.
- Fixed an issue where the listener/worker would extract the wrong IdP URL during the simultaneous startup of Cinchy Web and the listener/worker.
- Long-running batch jobs will no longer intermittently fail to complete.
- Fixed an authentication error caused by using Basic Auth with the SOAP 1.2 source connector.
- Fixed a bug that was causing data syncs to fail when syncing linked columns to a Cinchy Table target.
Forms
v5.6
- A child form that has a Link column reference to a parent record now auto populates with the parent record's identity.
- A space has now been added between multi-select values when displaying a record in an embedded child table.
- Negative numbers can now be entered into Number type inputs on forms.
- We consolidated all actions into a single menu for easier navigation.
- We moved Create new record into the single menu and renamed it to Create.
- We added an option to copy the record link (URL) to the clipboard.
- We changed Back to Table View to View Record in Table.
- To improve the user experience and make interacting with forms easier, we made the Forms action bar always visible when you scroll through a form.
- We updated the URL to accurately match the record currently displayed when you switch records from the records dropdown menu.
- We added a warning message in child forms when essential columns like "Child Form Link Field" or both "Child Form Parent ID" and "Child Form Link ID" are missing, as they're needed for proper functionality.
- You'll now get a prompt to save if you have unsaved changes in a form.
- We've added the ability to export a Form PDF in landscape mode.
- When loading a Form, the sidebar navigation will now correctly highlight the appropriate/currently selected section.
Forms Bug Fixes
v5.6
- We've fixed an issue where updated file attachments on a form would fail to save.
- We fixed a bug where child record tables within a form would display data differently when exported to a PDF.
- We fixed an issue where the first load of an applet wouldn't render sections that require Cinchy data until you refreshed the page.
- We fixed an issue where raw HTML was being displayed instead of HTML hyperlinks.
- We fixed a bug that prevented a form from loading if you deleted an associated child form.
- We fixed an issue with the record dropdown search where inputs of more than 30 characters caused a failure to match.
- We resolved a bug that prevented saving Date values in child forms during creation and editing.
- We fixed a bug where the Add… link in the forms sidebar failed to load the correct form in the modal.
- We fixed an issue where multi-select columns linked to large tables didn't display selected values and allowed accidental overwriting of existing selections.
- We fixed an issue where creating new records in Forms failed if a text field contained a single quote, ensuring successful record creation regardless of text field content.
- We fixed a bug where child forms weren't saved due to multi-select columns getting their values set to empty if they weren't changed by the user.
- The column filter in the [Cinchy].[Form Fields] table will now filter correctly when creating a new record.
- Selecting a record in the "Search Records" dropdown will update the page and URL to the newly selected record.
- Fixed a bug that was causing a record lookup error due to an "invalid trim".
v5.6
The following changes were made to the platform between v5.6 and v5.13
Breaking changes
Discontinuation of support for 2012 TSQL v5.9
As of version 5.9, Cinchy will cease support for 2012 TSQL. This change aligns with Microsoft's End of Life policy. For further details, refer to the SQL Server 2012 End of Support page.
Removal of GraphQL API (Beta) v5.9
The beta version of our GraphQL API endpoint has been removed. If you have any questions regarding this, please submit a support ticket or email support@cinchy.com.
Personal Access Tokens v5.10
There was an issue affecting Personal Access Tokens (PATs) generated in Cinchy wherein tokens created from v5.7 onwards were incompatible with subsequent versions of the platform. This issue has been resolved, however please note that:
- Any tokens created on versions 5.7.x, 5.8.x, and 5.9.x will need to be regenerated.
- "401 Unauthorized" errors may indicate the need to regenerate the token.
- PATs created before 5.7.x and from 5.10 onwards are unaffected.
Update to .NET 8 v5.13
The Cinchy platform was updated to .NET 8, in accordance with Microsoft's .NET support policy. Support for .NET 6 ends on November 12, 2024.
- For customers on Kubernetes: This change will be reflected automatically upon upgrading to Cinchy v5.13+.
- For customers on IIS: The following must be installed prior to upgrading to Cinchy v5.13+:
General Platform
The following changes pertain to the general platform.
v5.7
- We upgraded our IDP from IdentityServer4 to IdentityServer6 to ensure we're maintaining the highest standard of security for your platform.
- We implemented Istio mTLS support to ensure secure/TLS in-cluster communication of Cinchy components.
- Cinchy v5.8 is compatible with the MySql v8.1.0 driver.
- Cinchy v5.9+ is compatible with the MySql v8.2.0 driver.
- We have updated our third-party libraries:
  - NuGet package updates.
  - Updated Npgsql to version 7.0.7.
  - Upgraded moment.js to 2.29.4.
  - Various package updates.
Expanded CORS policy for Cinchy Web API endpoints v5.9
Cinchy Web API endpoints now feature a more permissive Cross-Origin Resource Sharing (CORS) policy, allowing requests from all hosts. This update enhances the flexibility and integration capabilities of the Cinchy platform with various web applications.
Make sure to use robust security measures in your applications to mitigate potential cross-origin vulnerabilities.
Time Zone Updates v5.9
We updated our time zone handling to improve compatibility and user experience. This change affects both PostgreSQL (PGSQL) and Transact-SQL (TSQL) users, with significant changes in options, discontinuation of support for older TSQL versions, and manual time zone migration. Time zone values will be changed and mapped during the upgrade process. In case of mapping failure, the default time zone will be set to Eastern Standard Time (EST). This enhancement does the following:
- PGSQL Time Zone support: PGSQL now offers an expanded range of time zone options. These options may not have direct equivalents in TSQL.
- Discontinuation of TSQL 2012 Support: We're discontinuing support for TSQL 2012. Users must upgrade to a newer version to ensure compatibility with the latest time zone configurations.
- System Properties Update: Time zone settings will continue to be supported in TSQL 2016 and later versions.
Manual Time zone migration
Due to differences in time zone naming between TSQL and PGSQL, Cinchy will manually migrate users to a matching time zone. To verify your time zones, you can do the following:
- Personal preferences:
  - All users should check their time zone settings post-migration.
  - For personal settings, select My Profile and set the preferred time zone.
  - For system settings, access the system properties table (ADMIN), manually copying the PGSQL name into the Value column.
- Database Access Requirements: The Cinchy application must have READ access to the following tables, depending on the database in use (sample lookup queries follow this list):
  - PGSQL: pg_timezone_names
  - TSQL: sys.time_zone_info
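If you want to verify the available time zone names yourself, you can query these system views directly. A minimal sketch, assuming standard PostgreSQL and SQL Server 2016+ catalog views:
-- PGSQL: list valid time zone names
SELECT name, utc_offset FROM pg_timezone_names ORDER BY name;
-- TSQL: list valid time zone names
SELECT name, current_utc_offset FROM sys.time_zone_info ORDER BY name;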
Integration with AWS and Azure in External Secrets Manager v5.9
With the External Secrets Manager table, Cinchy now offers comprehensive integration capabilities with AWS and Azure. This enhancement allows for streamlined management and integration of external secrets within the Cinchy environment and expands the supported authentication types from AWS and Azure, providing a more versatile approach to managing external secrets.
For AWS, Cinchy now supports the following secret types:
- AWS access keys for IAM users.
- IAM roles.
For Azure, Cinchy now supports the following secret types:
- Managed identities.
- Registered applications.
Introducing Cinchy Automations v5.11
Cinchy Automations is a platform tool that allows users to schedule tasks. To reduce the time and manual effort spent on recurring tasks, you can now tell Cinchy to perform the following automatically:
- Executing queries
- Triggering batch syncs
- Extracting and running a code bundle, which can contain any number of queries or syncs needed to perform a task.
Using the Automations capability, you can also build an automation that performs multiple tasks in sequence (known as "Automation Steps") to allow for more complex use cases.
You can find the full details on this powerful new capability here.
APIs
The following changes pertain to Cinchy's APIs.
v5.7
- We've implemented a new API endpoint for the retrieval of your secrets. Using the below endpoint, fill in your <base-url>, <secret-name>, and <domain-name> to retrieve the referenced secret. This endpoint works with Cinchy's Personal Access Token capability, as well as Access Tokens retrieved from your IDP.
Blank Example:
<base-url>/api/v1.0/secrets-manager/secret?secretName=<secret-name>&domain=<domain-name>
Populated Example:
Cinchy.net/api/v1.0/secrets-manager/secret?secretName=ExampleSecret&domain=Sandbox
The API will return an object in the below format:
{
"secretValue": "password123"
}
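For illustration, the raw HTTP request for the populated example above might look like the following sketch; the standard Bearer authorization header is assumed when authenticating with a Personal Access Token, and the token value is a placeholder:
GET <base-url>/api/v1.0/secrets-manager/secret?secretName=ExampleSecret&domain=Sandbox HTTP/1.1
Authorization: Bearer <personal-access-token>
Accept: application/json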
- We have added two new scopes to the [Cinchy].[Integrated Clients] table: read:all and write:all, which can be used to fine-tune your permission sets. These scopes are found in the "Permitted Scopes" column of the table:
  - read:all = clients can read all data.
  - write:all = clients can read and write all data.
  - Both scopes are automatically assigned to existing integrated clients upon upgrading to v5.10.
  - Note: All new Cinchy Web API endpoints that use Bearer Tokens or Cookie Authentication Schemes must have at least one of the new scopes assigned. These endpoints are still currently accessible with the old js_scope; however, this will be deprecated in a future release. You can update the scopes of existing endpoints in the "Permitted Scopes" column of the [Cinchy].[Integrated Clients] table.
  - Note: A user with write entitlements will not be able to write when using a client that only has the read:all scope assigned.
  - Note: Clients that receive a 403: Forbidden status code error in the logs should make note of this change as a possible cause, and update the permissions accordingly.
- You can now use Personal Access tokens in the following scenarios:
- As authentication when calling the api/v1.0/jobs API endpoint.
- As authentication when calling the api/v1.0/jobs API endpoint as another user.
- Added UTF-8 encoding to the Saved Query API endpoint.
Logging and Troubleshooting
The following changes pertain to error logging and troubleshooting within the platform. Note: Connections-specific changes are featured in the Connections section below.
v5.9
- We improved the error messaging for model loader failures. Before, loading a model with a duplicate name and version in the Models table showed unclear error messages. Users had to check logs to identify the failure cause. The error screen now shows clear, detailed messages. This change makes troubleshooting easier and offers context into model loader failures.
- We integrated Kafka and Redis logs into OpenSearch, offering improved insight and quicker debugging for Change Data Capture (CDC) processes. This enhancement improves issue resolution and streamlines monitoring.
- To enhance the troubleshooting of Cinchy's Angular SDK, errors will now display additional context. Failed call exceptions will contain more useful errors in the data.details property.
Tables
The following changes pertain to tables.
Table Enhancements
v5.7
- We updated the dropdown menus for Link columns to display selected and deleted values at the top of the list so that you don't need to scroll through long lists just to find the ones you've selected.
- The Cinchy platform now comes with a new way to store secrets — the Cinchy Secrets Table. Adhering to Cinchy’s Universal Access Controls, you can use this table as a key vault (such as Azure Key Vault or AWS Secrets Manager) to store sensitive data only accessible to the users or user groups that you give access to. You can use secrets stored in this table anywhere a regular variable can go when configuring data syncs, including but not limited to:
- As part of a connection string;
- Within a REST Header, URL, or Body;
- As an Access Key ID. You can also use it in a Listener Configuration.
- You can now enable change notifications and related features on system tables within the Cinchy domain. Administrators and users now have better visibility into the use and modification of these tables. This includes additions, deletions, or updates to the table data.
- If you are on PostgreSQL, please restart the web application pod to enable change notifications.
- Some tables, such as the Listener State table, are excluded from this feature due to their high-volume nature.
- Change Data Capture (CDC) can't be enabled on tables that aren't versioned, specifically the Listener State table.
- When you enable CDC on a system table, the model loader can't disable it.
- We introduced a new feature that allows members of the Cinchy Builders group to perform truncate table operations. This enhancement enables Builders to effectively manage and manipulate table data. Key features include:
  - Truncate Table Capability: Members of the Cinchy Builders group now have the authority to execute TRUNCATE operations on tables.
  - Design Table Access: To perform a truncate operation, the user must have access to the Design Table of the table they intend to truncate. If the user lacks this access, the system will give an error stating "Design Table Access required to execute Truncate command".
- Selecting a link to a PDF stored in Cinchy via a Link column associated with Cinchy\Files now respects your browser settings and opens the PDF in your browser, if you've set it to do so.
- The minimum length of a text column created by a CSV import is now 500 characters.
- Removed infinite scrolling from tables and in link column dropdowns.
- Tables now have pagination and will show 250 records per page. This affects both the regular table view as well as the tables that populate in the query builder view.
- Link Column dropdowns will display the first 100 records. Users can type in the cell to further filter down the list or search for records beyond the first 100.
- Link Column drop downs will no longer return null values.
- When using the "Sort" capability on a table, you can now specify whether you want the data to return nulls first or last. Note that selecting either of these options will have an impact on the performance of the table. Leaving the option blank/unspecified will mitigate the impact.
- To improve platform performance by limiting the amount of data that must be read, table views will no longer query for columns that are not represented within the context of that specific view.
Table Bug Fixes
v5.7
- We fixed an issue with the behaviour of cached calculated columns when using multi-select data types (Link, Choice, and Hierarchy) with Change Approval enabled. These data types should now work as expected.
- You can now export up to the first 250,000 records from a View using the Export button on a table.
- We fixed the character limit in the Secrets table for Aurora databases. The Secret Value column capacity has increased from 500 to 10,000 characters, ensuring adequate space for storing secret data.
- We resolved an issue in the Collaboration Log Revert function where date-time values in unrelated columns were incorrectly altered.
- We resolved an issue where altering metadata for date columns in PostgreSQL led to exceptions during operations.
- We resolved an issue that caused binary columns to drop when editing the Users and Files system tables. This fix ensures that binary data types are now correctly recognized and retained during table modifications.
- Fixed a bug where the platform was not saving records where changes were made immediately after record creation.
- IS NULL checks with a multiselect field or parameter will now yield the expected result for empty values.
- Adding a filter expression to a Link column via the UI will no longer cause a number conversion error.
DXD
The following changes pertain to Cinchy DXD.
v5.7
We added additional system columns to extend the number of core Cinchy objects that can be managed through DXD 1.7 and higher. The newly supported Cinchy objects are:
- Views (Data Browser)
- Listener Config
- Secrets
- Pre-install Scripts
- Post-install Scripts
- Webhooks
Queries and CQL
The following changes pertain to queries and Cinchy Query Language.
Query and CQL Enhancements
v5.7
- Optimized PostgreSQL query performance when referencing multi-select columns.
- Improved query performance when using a CASE statement on a Link reference.
- We added execute, a new method for UDF extensions. This new query call returns a queryResult object that contains additional information about your result. For more information, see the Cinchy User Defined Functions page.
- The POST endpoint for Saved Queries now automatically serializes hierarchical JSON to text when the content-type is application/json. This update now supports values that are objects or arrays. This eliminates the need for manual serialization and makes it easier for developers to work with Saved Queries.
- We have added the Compress JSON parameter to the Query Builder UI and [Saved Queries] table. JSON compression can:
- Help to reduce the amount of time it takes to query and process data
- Reduce the amount of bandwidth needed to transfer data. This can be especially beneficial for applications that require frequent data updates, such as web applications.
- Reduce the amount of memory needed to store data.
- We have made various enhancements to the Saved Queries table for use cases when your queries are being used as API endpoints. Better management of these queries is possible by way of HTTP methods (GET, POST, PUT, PATCH, DELETE) for distinguishing between types of query operations, Versions for endpoint versioning, and UUIDs for grouping queries. Please review the Queries and Saved Query API pages for further details.
- To gain more control over your query creation, we have added a Cancel button to the query builder. The Cancel/Stop button will appear for the duration of running your query; clicking it will abort the active query and return a "Query execution cancelled" message.
Query and CQL Bug Fixes
v5.7
- We fixed a bug that was stripping query parameters from Relative URLs if they were being used as the Application URL of an applet. For example, the bug would have stripped out the "q=1" parameter, leaving only an Absolute URL in lieu of a Relative one.
- We fixed a bug in CQL on PostgreSQL that caused the DATEADD function to truncate input dates down to DAY precision. Now, you can expect more accurate date manipulations without losing finer time details.
- We improved messaging in CQL Saved Queries to provide clearer error messages when required parameters are missing in saved queries, aiding in self-debugging.
- Fixed an invalid CQL bug in the Query Editor UI when using FOR JSON PATH while building queries in PGSQL.
Connections
The following changes pertain to data syncs and the Connections experience.
New Features
v5.7
- We added Oracle as a new database type for Polling Events in Connections. Data Polling is a source option first featured in Cinchy v5.4 which uses the Cinchy Event Listener to continuously monitor and sync data entries from your Oracle, SQL Server, or DB2 server into your Cinchy table. This capability makes data polling a much easier, effective, and streamlined process and avoids implementing the complex orchestration logic that was previously necessary.
- We made it simpler to debug invalid credentials in data syncs by adding a "Test Connection" button to the UI for the following sources and destinations:
| Name | Supported source | Supported destination |
|---|---|---|
| Amazon Marketplace | ✅ Yes | 🚫 No |
| Binary Files | ✅ Yes | N/A |
| Copper | ✅ Yes | N/A |
| DB2 | ✅ Yes | ✅ Yes |
| Delimited File | ✅ Yes | N/A |
| Dynamics | ✅ Yes | 🚫 No |
| Excel | ✅ Yes | N/A |
| Fixed Width File | ✅ Yes | N/A |
| Kafka Topic | 🚫 No | ✅ Yes |
| ODBC | ✅ Yes | N/A |
| Oracle | ✅ Yes | ✅ Yes |
| Parquet | ✅ Yes | N/A |
| REST | 🚫 No | 🚫 No |
| Salesforce Object | ✅ Yes | ✅ Yes |
| Snowflake | ✅ Yes | ✅ Yes |
| SOAP | 🚫 No | 🚫 No |
| MS SQL Server | ✅ Yes | ✅ Yes |
Selecting this button will validate whether your username/password/connection string/etc. are able to connect to your source or destination. If successful, a "Connection Succeeded" popup will appear. If unsuccessful, a "Connection Failed" message will appear, along with the ability to review the associated troubleshooting logs. With this change, you are able to debug access-related data syncs at a more granular level.
v5.8
- Cinchy now supports a new Cinchy event-triggered source: SOAP API. This new feature initiates a SOAP call based on Change Data Capture (CDC) events occurring in Cinchy. The SOAP response then serves as the source for the sync and can be mapped to any destination. For more information, see the SOAP 1.2 (Cinchy Event Triggered) page.
- A new destination type has been added to the Connections Experience. The "File" destination provides the option to sync your data into Amazon S3 or Azure Blob Storage as a delimited file.
- Introducing Kafka Topic Isolation, a feature designed to optimize the performance of designated real-time syncs. Users can assign custom topics to any listener config, essentially creating dedicated queues to 'fast track' the processing of associated data. When configured appropriately, high priority listener configs will benefit from dedicated resources, while lower priority listener configs will continue to share resources. This provides a mechanism to improve the throughput of critical or high volume workloads, while preserving the default behaviour for most workloads. For more detail on Kafka Topic Isolation, please review the documentation here.
Note: This feature does not support SQL Service Broker.
Connections Enhancements
v5.8
- We improved the implementation of DataPollingConcurrencyIndex. We also added additional logging in the Data Polling Listener to enhance monitoring.
- When configuring a connection source with text columns, it's possible to specify a JSON content type. This instructs the system to interpret the contents as a JSON object and pass it through as such. This is useful when the target (such as Kafka) supports and expects a JSON object for a specific target column. When setting this option, the value should always be valid JSON. Alternatively, the original, default behaviour of treating text columns as plaintext is unchanged. As plaintext, the contents of the column will be passed through as a string, even if it could be interpreted as JSON.
- We implemented alphabetical sorting for queries in the Connections listener UI RunQuery and Cinchy Query dropdowns. This streamlines navigation and simplifies query selection for users.
- We enhanced the batch processing system to ensure all records in the queue are fully processed before a batch job is marked as complete.
- We've enhanced the validation process for delete synchronization configurations. The system now checks the configuration at the start of the sync, ensuring the ID Column is defined and matches the Dropped Record behavior. This update prevents errors and confusion, leading to a smoother and more intuitive sync operation.
- We have expanded the authentication options available when building a TSQL database connection: including "Active Directory Interactive" in the platform SQL connection string (i.e., the database that hosts the Cinchy web/IdP application) will now utilize Active Directory Device Code Flow.
- Cinchy v5.10 is compatible with the MySql v8.3.0 driver.
- The Kafka configuration validation for the Connections WebApi and Worker has been improved such that applications will not start if any Kafka config value is invalid.
- You are now able to configure Cross-Origin Resource Sharing (CORS) for the Connections Experience. This configuration allows the Connections Web API to become reachable by applications running on domains other than the one that hosts your Connections Experience, and is especially useful for building projects/applications on Cinchy.
  - This value can be configured in the Connections WebApi > appsettings.json > "AppSettings" field by inputting an array of strings, where each string is a domain. Example:
"AppSettings": {
  "CorsOrigins": ["a.com", "b.com", "c.com"]
}
Troubleshooting Enhancements
v5.7
- To help simplify and streamline the Connections experience, you are now able to view the output for each job by clicking on the Output button located in the Jobs tab of the UI after you run a sync. This links to the Execution Log table with a filter set for your specific sync, which can help you reach your execution related data quicker and easier than before.
- We now log the full REST Target HTTP response in the data sync Execution Errors table to provide you with more detailed information about your job. This replaces the original log that only contained the HTTP response status code.
- We added a warning to the Schema sections of multiple Sources to mitigate issues due to mismatched column order. This warns users that the column order in the schema must match the source/destination configuration. The changes affect the following data sources:
  - LDAP
  - Excel
  - Binary
  - Fixed Width
  - Cinchy Query
- Error messages that pop up in the Connections Experience will provide additional information that will be more useful for troubleshooting.
- SQL database read performance logging in Connections now reports a single entry per batch, making the results easier to interpret than the previous fixed-sized intervals (which may not have corresponded directly with batch activity).
- Performance and error analysis is easier to accomplish with the addition of logging for Job execution parameters in data syncs. After starting a Batch job, you can navigate to Connections.WebApi logs in OpenSearch on Cinchy v5.0+, or the admin panel on Cinchy IIS, and search for an "Executing Job with parameters" message to see which parameters the job is executing with. (Note that the log location will depend on where you set up your log storage upon deployment.)
Example log:
{"@t":"2024-01-09T21:33:05.5223771Z","@mt":"Executing Job with parameters: {Reconcile Data}; {Degree Of Parallelism}; {Batch Size}; {Retrieval Batch Size}","Reconcile Data":true,"Degree Of Parallelism":2,"Batch Size":4000,"Retrieval Batch Size":3000,"SourceContext":"Cinchy.Connections.WebApi.Services.BatchDataSyncExecutor","ExecutionId":336,"DataSyncContextLogSink":336,"QueuedJobsProcessorId":"4"}
UI Enhancements
v5.7
- For simpler real-time sync setups, the Cinchy Event Broker has a new Listener section. This section assists in creating topic JSON for listener configurations, eliminating the need to manually set up topic JSON in the Listener Config table. Refer to the Cinchy Event Broker source page for details on topic JSON fields.
- We've introduced the ability to dismiss most modals using the Escape key. This enhancement provides a more convenient and user-friendly interaction experience.
- We've made significant improvements to the Load Metadata sources and destinations, enhancing user experience:
- The Load Metadata modal no longer appears automatically when selecting a relevant source or destination.
- The availability of the Load Metadata button is conditional on filling out parameters in the Connection section.
- Clicking the Load Metadata button now directly takes you to metadata columns, skipping the interstitial modal.
- In the Schema section, all columns are now collapsed by default. Manually added columns maintain an expanded view.
- To assist sharing and collaboration on connections, we've introduced unique URLs for all saved connections. Each connection now possesses a unique URL that can be shared with other platform users. This URL links directly to the saved configuration.
- We've streamlined the destination setup process for data syncs. When selecting a Source other than Cinchy, the destination is now automatically set as Cinchy Table. This enhancement speeds up the creation of data syncs.
- Included descriptive explanations in various sections, such as Mapping, Schema, and Sync Behaviour, to provide comprehensive guidance during data sync configuration.
- Grouped Sources by type, distinguishing between Batch and Event categories.
- Implemented alphabetical sorting for improved accessibility and ease of locating connections.
- Added clarifying text throughout the interface for smoother navigation and configuration, fostering a more user-friendly experience.
- Standardized language used in file-based connectors across all Sources.
- Adjusted terminology for clarity and consistency:
- Renamed Sync Behaviour tab to Sync Actions.
- Replaced Parameters with Variables.
- Changed "Sync Pattern" to Sync Strategy in the Sync Actions tab.
- Updated Column Mappings to Mappings in the Destination tab.
- Substituted Access Token with API Key in the Copper Source, aligning with Copper's documentation language.
- Reorganized the process steps, moving the "Permissions" step within the "Info" tab.
- Eliminated the following fields for a more focused interface:
- Source > Cinchy Table > Model
- Info > Version
- The API Response Format field has been removed from the REST Source configuration. This change reflects that the only supported response format is JSON.
- Expanded the width and height of source, destination, and connections drop-down menus to ensure visibility, even on screens with varying sizes.
- Streamlined the organization of file-based source fields for greater efficiency.
- Replaced drop-down menus with radio buttons for the following options:
- Sync Strategy
- Source Schema Data Types
- Source Schema "Add Column"
- As we continue to enhance our Connections Experience offerings, you can now configure your listener for real-time syncs directly in the UI without having to navigate to a separate table. For any event-triggered sync source (CDC, REST API, Kafka Topic, MongoDB Event, Polling Event, Salesforce Platform Event, and Salesforce Push Topic), there is now the option to input your configurations directly from the Source tab in the Connections Experience. Any configuration you populate via the UI will be automatically reflected back into the Listener Config table of your platform. You are able to set the:
- Topic JSON
- Connections Attributes
- Auto Offset Reset
- Listener Status (Enabled/Disabled)
- We added a Listener section to the MongoDB Collection (Cinchy Event Triggered) and REST API (Cinchy Event Triggered) Sources. You can now manage the event trigger within the Connections UI. This reduces the complexity of managing the Listener Config table.
- You can now use drop-down menus for selecting Cinchy tables and queries for both Cinchy sources and destinations. This feature replaces the previous method, where users had to manually type in their selections.
- We added links next to any Cinchy Tables that are referenced in the UI. These links directly open the respective table, making navigation more seamless.
- We improved the user experience for header row settings for delimited files. The following improvements have been added:
- Use Header Row Checkbox: Controls visibility of column names and Load Metadata button.
- Schema Columns Warning: Informs users about column order when header row is disabled.
- Modal Warning: Explains schema column reset when disabling header row.
- Header Record Row Number: Specifies row to use as header.
- Connections UI now includes several new elements to improve the monitoring and control of listener statuses:
- A toggle switch to display the listener's current status.
- A direct link to a filtered view of records in the Execution Errors table where errors have occurred.
- An indicator of the listener's running state, which can be Disabled, Starting, Running, or Failed.
- A message is displayed when the listener isn't active and has failed, providing information on possible next steps.
- We've made enhancements to the UI for certain dropdown menus in Connections.
- Type ahead style dropdowns: We changed the table and query dropdowns to type ahead style, aligning with the existing Source and Destination dropdowns for a smoother UI.
- Uniform dropdown heights: We adjusted the Destination dropdown to match the Source dropdown in height, ensuring a consistent and visually appealing UI.
- Alphabetical Query sorting: We implemented alphabetical sorting for queries in the dropdown list.
- Consistent navigation links: We added navigation links next to the Table and Queries dropdowns for a uniform and intuitive user experience.
- To improve the user experience of building and running data syncs, we have added a "description" field to the Info section in the Connections experience. This field has a 500 character limit.
- When configuring or viewing a data sync created by a user with a different permission set than you, you may run into a case where the sync involves tables/queries you do not have access to. Previously, the table/query dropdowns in these cases would appear blank; however, they will now populate with the names of those objects. Note that:
- You won’t be able to change the associated schema of a table/query you cannot access. Some fields may appear as disabled (ex: data mappings).
- You can still modify and save other aspects of the sync.
- Event-based syncs can be enabled and run as usual. Batch syncs must be run as a user with the correct permissions. Remember that you can run a job as another user if you have the credentials for that user.
- Spend less time searching and more time building: You are now able to use the "Models" dropdown field in the Connections UI to quickly select tables scoped within the respective model.
Source and Destination Enhancements
v5.7
- A new configurable property, QueueWriteConcurrencyIndex, was added to the MongoDB Event Listener. This property allows only a certain number of threads to be concurrently sending messages to the queue, which works to provide more consistent batching by the worker and reduce your batching errors. The default number of threads is set to 12. To configure this property, navigate to the appSettings.json > QueueWriteConcurrencyIndex: <numberOfThreads>. This index is shared across all listener configs, meaning that if it's set to 1, only one listener config will be pushing messages to the queue at a single moment in time.
- We also added a new optional property to the MongoDB Listener Topic, changeStreamSettings.batchSize, which provides a configurable way to set your own batch size on the MongoDB Change Stream Listener.
{
"database": "",
"collection": "",
"changeStreamSettings": {
"pipelineStages": [],
"batchSize": "1000"
}
}
- We added a new configurable property, DataPollingConcurrencyIndex, to the Data Polling Event Listener. This property allows only a certain number of threads to run queries against the source database, which works to reduce the load against the database. The default number of threads is set to 12. To configure this property, navigate to your appSettings.json deployment file > "DataPollingConcurrencyIndex": <numberOfThreads>.
- We added a new configurable property, QueueWriteConcurrencyIndex, to the Data Polling Event Listener. This property allows only a certain number of threads to be concurrently sending messages to the queue, which works to provide more consistent batching by the worker and reduce your batching errors. The default number of threads is set to 12. To configure this property, navigate to your appSettings.json deployment file > "QueueWriteConcurrencyIndex": <numberOfThreads>. Note that this index is shared across all listener configs, meaning that if it's set to 1, only one listener config will be pushing messages to the queue at a single moment in time.
- We added a new mandatory property, CursorConfiguration.CursorColumnDataType, to the Listener Topic for the Data Polling Event. This change was made in tandem with an update that ensures the database query always moves the offset, regardless of whether the query returned any records. This helps to ensure that the performance of the source database isn't weighed down by constantly running heavy queries on a wide range of records when the queries return no data. The value of this mandatory property must match the column type of the source database system for proper casting of parameters.
- We added a new configurable property, CursorConfiguration.Distinct, to the Listener Topic for the Data Polling Event. This property is a true/false Boolean type that, when set to true, applies a distinct clause on your query to avoid any duplicate records.
// App Settings JSON Example
// Example of the new configurable properties: DataPollingConcurrencyIndex (set to "1") and QueueWriteConcurrencyIndex (set to "1")
"AppSettings": {
"GetNewListenerConfigsInterval": "",
"StateFileWriteDelaySeconds": "",
"KafkaClientConfig": {
"BootstrapServers": ""
},
"KafkaRealtimeDatasyncTopic": "",
"KafkaJobCancellationTopic": "",
"DataPollingConcurrencyIndex": 1,
"QueueWriteConcurrencyIndex": 1
}
// Listener Config Topic Example
// Example of the new mandatory CursorColumnDataType property, which below is set to "int", and "Distinct", below set to "true".
{
"CursorConfiguration": {
"FromClause": "",
"CursorColumn": "",
"BatchSize": "",
"FilterCondition": "",
"Columns": [],
"Distinct": "true"
"CursorColumnDataType" : "int"
},
"Delay": ""
}
- For REST API, SOAP 1.2, Kafka Topic, Platform Event, and Parquet sources, we added a new "Conditional" option for source filters in the Connections UI. Similar to how the "Conditional Changed Record Behaviour" capability works, once selected you will be able to define the conditions upon which data is pulled into your source via the filter. After data is pulled from the source, the new conditional UI filters down the set of returned records to ones that match the defined conditions.
- You can now pull specific data from REST API response headers using .NET regex capture groups. This feature gives you more control and flexibility in collecting the data you need when using REST API destinations.
- We implemented a significant enhancement to the read performance of Oracle data sources. This improvement targets scenarios involving tables with a large number of columns or large-sized columns, as well as networks experiencing higher latency.
- For file-based syncs, we've added "Registered Application" as an authentication mechanism for Azure Blob Storage. This is an addition to S3 support for file-based syncs.
- We've expanded the conditional filtering capabilities introduced in Cinchy v5.7. This enhancement is now available on the following sources:
  - Kafka
  - SOAP (Cinchy Event Triggered)
  - REST API (Cinchy Event Triggered)
- We enhanced our data polling source with the introduction of {NEXTOFFSET} and {MAXOFFSET} keywords. These features streamline data synchronization by optimizing search ranges within queries, improving performance and efficiency. {MAXOFFSET} fetches the highest column value from a query, while {NEXTOFFSET} retrieves the largest column value from prior queries, essential for effective batch processing. For further details on using these new features in data polling, please visit the {MAXOFFSET} and {NEXTOFFSET} section on our Data Polling documentation page. A brief illustrative sketch follows at the end of this list.
- You can now execute post-sync scripts when performing Change Data Capture (CDC) operations with Kafka Topics as the destination. This enhancement enables developers to implement custom actions after data synchronization, such as setting status flags or recording timestamps. This allows for more flexible post-operation scripting in CDC workflows.
- Real-time sync improvements for listener operational status: This enhancement improves the data synchronization feature in Cinchy. The Enabled/Disabled setting now more effectively controls the start and stop of data synchronization. Key enhancements include:
- Lifecycle Phases: The synchronization process now clearly follows distinct phases: Starting, Running, Failed, and Disabled. This structured approach enhances monitoring and debugging capabilities.
- Automatic Retry Mechanism: In the Failed state, due to synchronization errors, the system logs detailed error messages and remains in the Enabled state. It automatically retries synchronization every 60 seconds until you set the status to Disabled.
- Automatic Disable Feature: The system now intelligently sets itself to Disabled under two specific conditions:
- Detection of an invalid configuration (such as erroneous Topic JSON).
- Validation error during synchronization (such as a missing mandatory field in the Topic JSON).
- When selecting columns in a Cinchy Event Broker or Cinchy Table source, four additional system columns have been added: Replaced, Rejection Comments, Changeset, and Change Summary.
- When configuring Kafka as a sync destination, you can now use a '@COLUMN' custom formula to enable a dynamic Topic. For further information on this, please review the Kafka Topic documentation.
- To help reduce possible system load, there is a new user-configurable limit on how many CDC event queries may run concurrently.
  - The "CdcQueryConcurrencyIndex" value is defaulted to 5 concurrent queries and can be configured in the Event Listener AppSettings.json. 5 is suitable for many environments.
  - If the load associated with Change Notifications is impacting system performance, consider lowering this value to prioritize other work, at the expense of change processing. Alternatively, provision faster hardware.
  - If Change Notification processing is taking longer than desired, consider increasing this number to allow more concurrent processing, depending on the capacities of your particular system.
- The Snowflake Driver was updated from 2.1.5 to 3.1.0, in part to allow for the use of a PrivateLink as the Connection String when using Snowflake as a source.
- The retention period for messages on user-defined Kafka topics was set to 24 hours in order to match system-defined topics. This helps mitigate data duplication in cases where offsets were deleted before the messages in the topic.
- Improved the performance of permissions checks for the Connections experience.
- The above change also fixes a bug that could prevent certain users from downloading the data sync error logs.
- When configuring a data sync using the Salesforce Object source, the source filter section will appear in the UI as intended.
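As a minimal sketch of the {NEXTOFFSET}/{MAXOFFSET} keywords mentioned above, the hypothetical Data Polling listener topic below reuses the CursorConfiguration fields from the earlier example; the ORDERS table and ORDER_ID column are placeholders, and the exact fields supported may vary by version, so consult the Data Polling documentation page:
// Hypothetical listener topic using the offset keywords in the FilterCondition
{
  "CursorConfiguration": {
    "FromClause": "ORDERS",
    "CursorColumn": "ORDER_ID",
    "BatchSize": "1000",
    "FilterCondition": "ORDER_ID > {NEXTOFFSET} AND ORDER_ID <= {MAXOFFSET}",
    "Columns": [],
    "CursorColumnDataType": "int"
  },
  "Delay": ""
}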
Connections Bug Fixes
v5.7
- We fixed a bug where the UUID/ObjectId in a MongoDB Change Stream Sourced data sync wasn't being serialized into text format. If you have any MongoDB Stream Sourced syncs currently utilizing the UUID/ObjectId, you may need to adjust accordingly when referencing the columns with those data types.
// Previous UUID/ObjectIDs would have been serialized as the below:
{
"_id": ObjectId('644054f5f88104157fa9428e'),
"uuid": UUID('ca8a3df8-b029-43ed-a691-634f7f0605f6')
}
// They will now serialize into text format like this:
{
"_id": "644054f5f88104157fa9428e",
"uuid": "ca8a3df8-b029-43ed-a691-634f7f0605f6"
}
- We fixed a bug where setting a user’s time zone to UTC (Coordinated Universal Time) would result in no data being returned in any tables.
- We fixed a bug where the Sync GUID of Saved Queries transferred over via DXD would null out.
- We fixed a bug affecting the MongoDB Event Listener wherein the “auto offset reset” functionality would not work as anticipated when set to earliest.
- We fixed a bug where failed jobs would return errors for logs that haven't yet been created. Log files now correctly search for only the relevant logs for the failed job.
- We fixed an issue in the data configuration table where the IF field for the Delimited File > Conditional Calculated Column wasn't displaying correctly.
- We resolved an issue where using multiple parameters while configuring data syncs could result in parsing and execution errors.
- We fixed a bug preventing calculated columns from working in MongoDB targets for data syncs.
- We fixed a bug where users were prompted to restore unsaved changes for a new connection when no configuration changes to a data sync were made.
- We fixed a bug that was causing the platform to fail upon initializing when a System User had been added to any user group (such as the Connections or Admin groups).
- We fixed a bug where passing an encrypted value to a variable used in a field encrypted by the connections UI would cause the sync to fail. You can now use variables with either encrypted or plaintext values.
- We fixed a bug where using the "Delta" sync strategy led to duplicating existing records in some destinations before inserting the new rows of data.
- We resolved an issue where the Load Metadata button was failing to connect to DB2 databases when fetching schema information.
- We fixed an issue where the Mapping UI would disappear in the Destination Section for Cinchy Event Broker to MongoDB Collection syncs, where Sync Actions were set to Delta.
- We fixed an issue where system columns like Created By, Created, Modified By, Modified, Deleted, and Deleted By weren't appearing in the topic columns dropdown in the Listener UI.
- We fixed a bug where the model loader failed to update when you added a description to a calculated column. The table now saves correctly when making changes to calculated columns.
- We fixed an issue that prevented table selection from the drop-down in Cinchy Event Broker's listener configuration.
- We resolved an issue where the Lookup() function in the Filter field for Cinchy Tables wasn't behaving as expected.
- We restored the default timeout setting for HttpClient to over 100 seconds.
to over 100 seconds. - We fixed an issue where the UI failed to display Batch Data Sync results and instead showed a generic exception message. The jobs tab in the UI now opens without any API failure appearing in the browser's network console.
- We resolved an issue that caused large batch delta syncs to fail in Cinchy.
- We fixed an issue where Cinchy CDC Delete events weren't sent to the destination using Delta. For example, Deletes and Approved Deletes now successfully insert records into Kafka when deleted from a Cinchy table.
- We fixed the issue of concurrent updates failing due to a Primary Key (PK) violation on the History table by adding a retry mechanism. This fix aims to make Cinchy more robust when making concurrent updates.
- We resolved an issue where the Cinchy destination would still be queried during a delta sync.
- We fixed an issue with data syncs that would fail on executed queries that returned large numbers of records on Cinchy Table destinations.
- We modified the Data Polling mechanism to enhance the reliability of message delivery to Kafka.
- We fixed an issue so that Destination mappings in dropdowns now display the alias instead of the original column name.
- We resolved an issue where dropdowns weren't correctly loading data due to user permissions on system tables. This fix, involving an API change, ensures that dropdown data reflects appropriate user access levels.
- We resolved an issue where the Query dropdown wasn't populating when you selected RunQuery in Connections listener UI.
- We resolved a rendering issue in the Connections listener UI, where line breaks in the topic JSON were causing display problems.
- We resolved a security issue that removes the logging of Connection Attributes in the Event Listener.
- We added a retry mechanism to address a transient connection issue for PostgreSQL databases, where listeners in the production environment encountered errors due to invalid client_encoding and TimeZone parameters. The update enhances connection stability and reliability.
- We increased the request limit size for the Connections Job API, enabling the processing of larger files without encountering size restrictions.
- We fixed an issue in batch synchronization (CSV to table) where data was incorrectly updated on subsequent syncs without any actual changes. This fix ensures data integrity and accurate update behavior in the synchronization process.
- We fixed an issue where line breaks in the Listener Topic JSON would cause the Listener UI to not display any settings. Cinchy now removes any formatting from the Topic column of the Listener Config table.
- We resolved an issue where CDC to ADO.net syncs weren't using a single sync for all operations. The following changes have been made:
- Sync Key Flexibility: Any standard Cinchy data type can be used as a sync key or ID field.
- ID and Sync Key Compatibility: Setting both to the same field won't cause failure.
- Unified Sync Operations: Insert, Update, and Deletes work in the same sync when specified.
- Auto offset Reset: Consistent behavior for all settings.
- Error Messaging: Clear error messages for missing operation info.
- We fixed a regression issue with the DB2 connector.
- We fixed a visual instability within the Filter text field.
- We resolved an issue where existing sync configurations in Cinchy Event Broker incorrectly displayed empty query dropdowns.
- We fixed an issue where data syncs from pipe-delimited files failed to process text fields containing quotes.
- We fixed a bug that was causing the unsaved changes dialog to be displayed in scenarios where there were no unsaved changes.
- We resolved an issue in changelog tables where updates weren't batched and timeouts occurred during large record set processing. This fix ensures efficient handling of cache callbacks across all nodes.
- We resolved an issue where the order of multi-selects affected reconciliation in Connections.
- We increased the request limit size for the Connections Job API, enabling the processing of larger files without encountering size restrictions.
- We have fixed a bug that was causing some data syncs to Cinchy Tables to unnecessarily update multi-select values in the destination. This fix reduces monitoring noise and prevents collaboration log and database size bloat.
- We fixed a bug where using 0 parameters in the ExecuteCQL API would incorrectly modify the API Models.
- We fixed a bug where the Publish Data Change Notifications setting was not being respected during table model loads.
- "Unique constraint" errors will no longer be thrown during parallel batch syncs that contained calculated columns.
- Fixed a bug that was creating duplicate target records when both inserting data in parallel and throwing transition errors that would trigger retries.
- We have addressed the possible out-of-memory errors that could arise during a data sync when "caching linked data values".
- Unnecessary collaboration log bloat will no longer occur due to the presence of parameters (variables) in a Data Sync XML.
- The Connections experience will no longer incorrectly start up if a Redis connection string is not provided. The following error will be thrown to provide troubleshooting assistance: "The Redis connection string is missing. Check your application configuration to ensure this value is provided."
- We fixed a bug where LEFT and RIGHT CQL functions were not working as expected.
- We fixed a bug that was preventing queries with User Defined Functions from executing due to a misalignment between the parser and the application initialization.
- Erased data will be filtered properly again on Left & Right Joined Table References.
- We fixed the following form bugs:
- A bug that prevented new records from being added to multiple child forms in the same view before the parent form was saved.
- A bug that duplicated newly-added records in a child form table if they were edited before the parent form was saved.
- A bug where logging into the Forms application from a direct link in a fresh session resulted in a blank screen.
- The Active Jobs tab in the Connections UI will correctly show the currently running jobs.
- Fixed a bug that was preventing the Update and Delete actions from working in Batch Delta Syncs.
  - Additionally, if an invalid Action Type column value is provided when configuring a Delta Sync, the Connection logs will now contain more detailed warning messages. These log messages will include information about the record with the incorrect action type. For example: "Invalid sync action type ActionTypeValue in column ActionTypeColumnName"
  - Note: Valid sync action types are 'Insert', 'Update', and 'Delete'. Anything else is invalid.
- Fixed an issue where the listener/worker would extract the wrong IdP URL during the simultaneous startup of Cinchy Web and the listener/worker.
- Long-running batch jobs will no longer intermittently fail to complete.
- Fixed an authentication error caused by using Basic Auth with the SOAP 1.2 source connector.
- Fixed a bug that was causing data syncs to fail when syncing linked columns to a Cinchy Table target.
Forms
v5.7
- We consolidated all actions into a single menu for easier navigation.
- We moved Create new record into the single menu and renamed it to Create.
- We added an option to copy the record link (URL) to the clipboard.
- We changed Back to Table View to View Record in Table.
- To improve the user experience and make interacting with forms easier, we made the Forms action bar always visible when you scroll through a form.
- We updated the URL to accurately match the record currently displayed when you switch records from the records dropdown menu.
- We added a warning message in child forms when essential columns like "Child Form Link Field" or both "Child Form Parent ID" and "Child Form Link ID" are missing, as they're needed for proper functionality.
- You'll now get a prompt to save if you have unsaved changes in a form.
- We've added the ability to export a Form PDF in landscape mode.
- When loading a Form, the sidebar navigation will now correctly highlight the appropriate/currently selected section.
Forms Bug Fixes
v5.7
- We fixed a bug where child record tables within a form would display data differently when exported to a PDF.
- We fixed an issue where the first load of an applet wouldn't render sections that require Cinchy data until you refreshed the page.
- We fixed an issue where raw HTML was being displayed instead of HTML hyperlinks.
- We fixed a bug that prevented a form from loading if you deleted an associated child form.
- We fixed an issue with the record dropdown search where inputs of more than 30 characters caused a failure to match.
- We resolved a bug that prevented saving Date values in child forms during creation and editing.
- We fixed a bug where the Add… link in the forms sidebar failed to load the correct form in the modal.
- We fixed an issue where multi-select columns linked to large tables didn't display selected values and allowed accidental overwriting of existing selections.
- We fixed an issue where creating new records in Forms failed if a text field contained a single quote, ensuring successful record creation regardless of text field content.
- We fixed a bug where child forms weren't saved due to multi-select columns getting their values set to empty if they weren't changed by the user.
- The column filter in the [Cinchy].[Form Fields] table will now filter correctly when creating a new record.
- Selecting a record in the "Search Records" dropdown will update the page and URL to the newly selected record.
- Fixed a bug that was causing a record lookup error due to an "invalid trim".
v5.7
The following changes were made to the platform between v5.7 and v5.13
Breaking changes
Discontinuation of support for 2012 TSQL v5.9
As of version 5.9, Cinchy will cease support for 2012 TSQL. This change aligns with Microsoft's End of Life policy. For further details, refer to the SQL Server 2012 End of Support page.
Removal of GraphQL API (Beta) v5.9
The beta version of our GraphQL API endpoint has been removed. If you have any questions regarding this, please submit a support ticket or email support@cinchy.com.
Personal Access Tokens v5.10
There was an issue affecting Personal Access Tokens (PATs) generated in Cinchy wherein tokens created from v5.7 onwards were incompatible with subsequent versions of the platform. This issue has been resolved, however please note that:
- Any tokens created on versions 5.7.x, 5.8.x, and 5.9.x will need to be regenerated.
- "401 Unauthorized" errors may indicate the need to regenerate the token.
- PATs created before 5.7.x and from 5.10 onwards are unaffected.
Update to .NET 8 v5.13
The Cinchy platform was updated to .NET 8, in accordance with Microsoft's .NET support policy. Support for .NET 6 ends on November 12, 2024.
- For customers on Kubernetes: This change will be reflected automatically upon upgrading to Cinchy v5.13+.
- For customers on IIS: The following must be installed prior to upgrading to Cinchy v5.13+:
General Platform
The following changes pertain to the general platform.
v5.8
- Cinchy v5.8 is compatible with the MySql v8.1.0 driver.
- Cinchy v5.9+ is compatible with the MySql v8.2.0 driver.
- We have updated our third-party libraries:
- NuGet package updates.
- Updated Npgsql to version 7.0.7.
- Upgraded moment.js to 2.29.4.
- Various package updates
Expanded CORS policy for Cinchy Web API endpoints v5.9
Cinchy Web API endpoints now feature a more permissive Cross-Origin Resource Sharing (CORS) policy, allowing requests from all hosts. This update enhances the flexibility and integration capabilities of the Cinchy platform with various web applications.
Make sure to use robust security measures in your applications to mitigate potential cross-origin vulnerabilities.
Time Zone Updates v5.9
We updated our time zone handling to improve compatibility and user experience. This change affects both PostgreSQL (PGSQL) and Transact-SQL (TSQL) users, with significant changes in options, discontinuation of support for older TSQL versions, and manual time zone migration. Time zone values will be changed and mapped during the upgrade process. In case of mapping failure, the default time zone will be set to Eastern Standard Time (EST). This enhancement does the following:
- PGSQL Time Zone support:
  - PGSQL now offers an expanded range of time zone options. These options may not have direct equivalents in TSQL.
- Discontinuation of TSQL 2012 Support:
  - We're discontinuing support for TSQL 2012. Users must upgrade to a newer version to ensure compatibility with the latest time zone configurations.
- System Properties Update:
  - Time zone settings will continue to be supported in TSQL 2016 and later versions.
Manual Time zone migration
Due to differences in time zone naming between TSQL and PGSQL, Cinchy will manually migrate users to a matching time zone. To verify your time zones, you can do the following:
- Personal preferences:
  - All users should check their time zone settings post-migration.
  - For personal settings, select My Profile and set the preferred time zone.
  - For system settings, access the system properties table (ADMIN), manually copying the PGSQL name into the Value column.
- Database Access Requirements: The Cinchy application must have application READ access to the following tables, depending on the database in use:
  - PGSQL: `pg_timezone_names`
  - TSQL: `sys.time_zone_info`
Integration with AWS and Azure in External Secrets Manager v5.9
With the External Secrets Manager table, Cinchy now offers comprehensive integration capabilities with AWS and Azure. This enhancement allows for streamlined management and integration of external secrets within the Cinchy environment and expands the supported authentication types from AWS and Azure, providing a more versatile approach to managing external secrets.
For AWS, Cinchy now supports the following secret types:
- AWS access keys for IAM users.
- IAM roles.
For Azure, Cinchy now supports the following secret types:
- Managed identities.
- Registered applications.
Introducing Cinchy Automations v5.11
Cinchy Automations is a platform tool that allows users to schedule tasks. To reduce the time and manual effort spent on recurring tasks, you can now tell Cinchy to perform the following automatically:
- Executing queries
- Triggering batch syncs
- Extracting and running a code bundle, which can contain any number of queries or syncs needed to perform a task.
Using the Automations capability, you can also build an automation that performs multiple tasks in sequence (known as "Automation Steps") to allow for more complex use cases.
You can find the full details on this powerful new capability here.
APIs
The following changes pertain to Cinchy's APIs.
v5.10
- We have added two new scopes to the [Cinchy].[Integrated Clients] table: read:all and write:all, which can be used to fine-tune your permission sets. These scopes are found in the "Permitted Scopes" column of the table (see the token sketch after this list):
  - read:all = clients can read all data.
  - write:all = clients can read and write all data.
- Both scopes are automatically assigned to existing integrated clients upon upgrading to v5.10.
- Note: All new Cinchy Web API endpoints that use Bearer Tokens or Cookie Authentication Schemes must have at least one of the new scopes assigned. These endpoints are still currently accessible with the old js_scope; however, this will be deprecated in a future release. You can update the scopes of existing endpoints in the "Permitted Scopes" column of the [Cinchy].[Integrated Clients] table.
- Note: A user with write entitlements will not be able to write when using a client that only has the read:all scope assigned.
- Note: Clients that receive a 403: Forbidden status code error in the logs should make note of this change as a possible cause, and update the permissions accordingly.
- You can now use Personal Access Tokens in the following scenarios:
  - As authentication when calling the api/v1.0/jobs API endpoint.
  - As authentication when calling the api/v1.0/jobs API endpoint as another user.
- Added UTF-8 encoding to the Saved Query API endpoint.
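To make the scope behaviour above concrete, here is an illustrative OAuth2 token response (standard RFC 6749 fields; the token value is a placeholder, and this is not a Cinchy-specific payload). A client whose "Permitted Scopes" column grants only read:all receives a token limited to reads, which is why write attempts through that client fail even for users with write entitlements:

```json
{
  "access_token": "eyJhbGciOi...",
  "token_type": "Bearer",
  "expires_in": 3600,
  "scope": "read:all"
}
```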
Logging and Troubleshooting
The following changes pertain to error logging and troubleshooting within the platform. Note: Connections-specific changes are featured in the Connections section below.
v5.9
- We improved the error messaging for model loader failures. Previously, loading a model with a duplicate name and version in the Models table showed unclear error messages, and users had to check logs to identify the failure cause. The error screen now shows clear, detailed messages. This change makes troubleshooting easier and offers context into model loader failures.
- We integrated Kafka and Redis logs into OpenSearch, offering improved insight and quicker debugging for Change Data Capture (CDC) processes. This enhancement improves issue resolution and streamlines monitoring.
- To enhance the troubleshooting of Cinchy's Angular SDK, errors will now display additional context. Failed call exceptions will contain more useful errors in the `data.details` property.
Tables
The following changes pertain to tables.
Table Enhancements
v5.9
- You can now enable change notifications and related features on system tables within the Cinchy domain. Administrators and users now have better visibility into the use and modification of these tables, including additions, deletions, or updates to the table data.
  - If you are on PostgreSQL, please restart the web application pod to enable change notifications.
  - Some tables, such as the Listener State table, are excluded from this feature due to their high-volume nature.
  - Change Data Capture (CDC) can't be enabled on tables that aren't versioned, specifically the Listener State table.
  - When you enable CDC on a system table, the model loader can't disable it.
- We introduced a new feature that allows members of the Cinchy Builders group to perform truncate table operations. This enhancement enables Builders to effectively manage and manipulate table data. Key features include:
  - Truncate Table Capability: Members of the Cinchy Builders group now have the authority to execute TRUNCATE operations on tables.
  - Design Table Access: To perform a truncate operation, the user must have access to the Design Table of the table they intend to truncate. If the user lacks this access, the system will give an error stating `Design Table Access required to execute Truncate command`.
- Selecting a link to a PDF stored in Cinchy via a Link column associated with `Cinchy\Files` now respects your browser settings and opens the PDF in your browser, if you've set it to do so.
- The minimum length of a text column created by a CSV import is now 500 characters.
- Removed infinite scrolling from tables and in link column dropdowns.
- Tables now have pagination and will show 250 records per page. This affects both the regular table view as well as the tables that populate in the query builder view.
- Link Column dropdowns will display the first 100 records. Users can type in the cell to further filter down the list or search for records beyond the first 100.
- Link Column dropdowns will no longer return null values.
- When using the "Sort" capability on a table, you can now specify whether you want the data to return nulls first or last. Note that selecting either of these options will have an impact on the performance of the table. Leaving the option blank/unspecified will mitigate the impact.
- To improve platform performance by limiting the amount of data that must be read, table views will no longer query for columns that are not represented within the context of that specific view.
Table Bug Fixes
v5.9
- You can now export up to the first 250,000 records from a View using the Export button on a table.
- We fixed the character limit in the Secrets table for Aurora databases. The Secret Value column capacity has increased from 500 to 10,000 characters, ensuring adequate space for storing secret data.
- We resolved an issue in the Collaboration Log Revert function where date-time values in unrelated columns were incorrectly altered.
- We resolved an issue where altering metadata for date columns in PostgreSQL led to exceptions during operations.
- We resolved an issue that caused binary columns to drop when editing the Users and Files system tables. This fix ensures that binary data types are now correctly recognized and retained during table modifications.
- Fixed a bug where the platform was not saving records when changes were made immediately after record creation.
- `IS NULL` checks with a multiselect field or parameter will now yield the expected result for empty values.
- Adding a filter expression to a Link column via the UI will no longer cause a number conversion error.
Queries and CQL
The following changes pertain to queries and Cinchy Query Language.
Query and CQL Enhancements
v5.8
- The POST endpoint for Saved Queries now automatically serializes hierarchical JSON to text when the content-type is `application/json`. This update supports values that are objects or arrays, eliminating the need for manual serialization and making it easier for developers to work with Saved Queries (see the sketch after this list).
- We have added the Compress JSON parameter to the Query Builder UI and [Saved Queries] table. JSON compression can:
- Help to reduce the amount of time it takes to query and process data
- Reduce the amount of bandwidth needed to transfer data. This can be especially beneficial for applications that require frequent data updates, such as web applications.
- Reduce the amount of memory needed to store data.
- We have made various enhancements to the Saved Queries table for use cases when your queries are being used as API endpoints. Better management of these queries is possible by way of HTTP methods (GET, POST, PUT, PATCH, DELETE) for distinguishing between types of query operations, Versions for endpoint versioning, and UUIDs for grouping queries. Please review the Queries and Saved Query API pages for further details.
- To gain more control over your query creation, we have added a Cancel button to the query builder. The Cancel/Stop button will appear for the duration of running your query; clicking it will abort the active query and return a "Query execution cancelled" message.
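A minimal sketch of the hierarchical-serialization behaviour described in the first bullet above, assuming a Saved Query exposed as a POST endpoint with a parameter named @payload (the parameter name and value shape are hypothetical). With content-type `application/json`, object and array values such as these are now serialized to text automatically, with no manual serialization step:

```json
{
  "@payload": {
    "customer": { "id": 42 },
    "tags": ["priority", "west-region"]
  }
}
```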
Query and CQL Bug Fixes
v5.8
- We fixed a bug in CQL on PostgreSQL that caused the `DATEADD` function to truncate input dates down to `DAY` precision. Now, you can expect more accurate date manipulations without losing finer time details.
- We improved messaging in CQL Saved Queries to provide clearer error messages when required parameters are missing in saved queries, aiding in self-debugging.
- Fixed an invalid CQL bug in the Query Editor UI when using FOR JSON PATH to build queries in PGSQL.
Connections
The following changes pertain to data syncs and the Connections experience.
New Features
v5.8
- Cinchy now supports a new Cinchy event-triggered source: SOAP API. This new feature initiates a SOAP call based on Change Data Capture (CDC) events occurring in Cinchy. The SOAP response then serves as the source for the sync and can be mapped to any destination. For more information, see the SOAP 1.2 (Cinchy Event Triggered) page.
- A new destination type has been added to the Connections Experience. The "File" destination provides the option to sync your data into Amazon S3 or Azure Blob Storage as a delimited file.
- Introducing Kafka Topic Isolation, a feature designed to optimize the performance of designated real-time syncs. Users can assign custom topics to any listener config, essentially creating dedicated queues to 'fast track' the processing of associated data. When configured appropriately, high priority listener configs will benefit from dedicated resources, while lower priority listener configs will continue to share resources. This provides a mechanism to improve the throughput of critical or high volume workloads, while preserving the default behaviour for most workloads. For more detail on Kafka Topic Isolation, please review the documentation here.
Note: This feature does not support SQL Service Broker.
Connections Enhancements
v5.8
- We improved the implementation of `DataPollingConcurrencyIndex` and added additional logging in the Data Polling Listener to enhance monitoring.
- When configuring a connection source with text columns, it's possible to specify a JSON content type. This instructs the system to interpret the contents as a JSON object and pass it through as such, which is useful when the target (such as Kafka) supports and expects a JSON object for a specific target column. When setting this option, the value should always be valid JSON. The original, default behaviour of treating text columns as plaintext is unchanged: as plaintext, the contents of the column are passed through as a string, even if they could be interpreted as JSON (see the illustration after this list).
- We implemented alphabetical sorting for queries in the Connections listener UI RunQuery and Cinchy Query dropdowns. This streamlines navigation and simplifies query selection for users.
- We enhanced the batch processing system to ensure all records in the queue are fully processed before a batch job is marked as complete.
- We've enhanced the validation process for delete synchronization configurations. The system now checks the configuration at the start of the sync, ensuring the ID Column is defined and matches the Dropped Record behavior. This update prevents errors and confusion, leading to a smoother and more intuitive sync operation.
- We have expanded the authentication options available when building a TSQL database connection; including "Active Directory Interactive" in the platform SQL connection string (i.e., the database that hosts the Cinchy web/IdP application) will now utilize Active Directory Device Code Flow.
- Cinchy v5.10 is compatible with the MySql v8.3.0 driver.
- The Kafka configuration validation for the Connections WebApi and Worker has been improved such that applications will not start if any Kafka config value is invalid.
- You are now able to configure Cross-Origin Resource Sharing (CORS) for the Connections Experience. This configuration allows the Connections Web API to be reached by applications running on domains other than the one hosting your Connections Experience, and is especially useful for building projects/applications on Cinchy.
- This value can be configured in the Connections WebApi > appsettings.json > "AppSettings" field by inputting an array of strings, where each string is a domain. Example:
"AppSettings": {
  "CorsOrigins": ["a.com", "b.com", "c.com"]
}
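To illustrate the JSON content type option described above, the snippet below contrasts how the same stored text could arrive at a JSON-aware target such as Kafka. The keys are hypothetical labels for the two behaviours, not configuration fields: the first value shows the default plaintext passthrough (a string), the second the JSON content type passthrough (an object):

```json
{
  "asPlaintextDefault": "{\"id\": 42, \"status\": \"shipped\"}",
  "withJsonContentType": { "id": 42, "status": "shipped" }
}
```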
Troubleshooting Enhancements
v5.8- We added a warning to the Schema sections of multiple Sources to mitigate issues
due to mismatched column order. This warns users that the column order in the
schema must match the source/destination configuration. The changes affect the
following data sources:
- LDAP
- Excel
- Binary
- Fixed Width
- Cinchy Query
- Error messages that pop up in the Connections Experience will provide additional information that will be more useful for troubleshooting.
- SQL database read performance logging in Connections now reports a single entry per batch, making the results easier to interpret than the previous fixed-sized intervals (which may not have corresponded directly with batch activity).
- Performance and error analysis is easier to accomplish with the addition of logging for Job execution parameters in data syncs. After starting a Batch job, you can navigate to Connections.WebApi logs in OpenSearch on Cinchy v5.0+, or the admin panel on Cinchy IIS, and search for an "Executing Job with parameters" message to see which parameters the job is executing with. (Note that the log location will depend on where you set up your log storage upon deployment.)
Example log:
{"@t":"2024-01-09T21:33:05.5223771Z","@mt":"Executing Job with parameters: {Reconcile Data}; {Degree Of Parallelism}; {Batch Size}; {Retrieval Batch Size}","Reconcile Data":true,"Degree Of Parallelism":2,"Batch Size":4000,"Retrieval Batch Size":3000,"SourceContext":"Cinchy.Connections.WebApi.Services.BatchDataSyncExecutor","ExecutionId":336,"DataSyncContextLogSink":336,"QueuedJobsProcessorId":"4"}
UI Enhancements
v5.8
- We added a Listener section to the MongoDB Collection (Cinchy Event Triggered) and REST API (Cinchy Event Triggered) Sources. You can now manage the event trigger within the Connections UI. This reduces the complexity of managing the Listener Config table.
- You can now use drop-down menus for selecting Cinchy tables and queries for both Cinchy sources and destinations. This feature replaces the previous method, where users had to manually type in their selections.
- We added links next to any Cinchy Tables that are referenced in the UI. These links directly open the respective table, making navigation more seamless.
- We improved the user experience for header row settings for delimited files. The
following improvements have been added.
- Use Header Row Checkbox: Controls visibility of column names and Load Metadata button.
- Schema Columns Warning: Informs users about column order when header row is disabled.
- Modal Warning: Explains schema column reset when disabling header row.
- Header Record Row Number: Specifies row to use as header.
- Connections UI now includes several new elements to improve the monitoring and
control of listener statuses:
- A toggle switch to display the listener's current status.
- A direct link to a filtered view of records in the Execution Errors table where errors have occurred.
- An indicator of the listener's running state, which can be Disabled, Starting, Running, or Failed.
- A message is displayed when the listener isn't active and has failed, providing information on possible next steps.
- We've made enhancements to the UI for certain dropdown menus in Connections.
- Type ahead style dropdowns: We changed the table and query dropdowns to type ahead style, aligning with the existing Source and Destination dropdowns for a smoother UI.
- Uniform dropdown heights: We adjusted the Destination dropdown to match the Source dropdown in height, ensuring a consistent and visually appealing UI.
- Alphabetical Query sorting: We implemented alphabetical sorting for queries in the dropdown list.
- Consistent navigation links: We added navigation links next to the Table and Queries dropdowns for a uniform and intuitive user experience.
- To improve the user experience of building and running data syncs, we have added a "description" field to the Info section in the Connections experience. This field has a 500 character limit.
- When configuring or viewing a data sync created by a user with a different permission set than yours, you may run into a case where the sync involves tables/queries you do not have access to. Previously, the table/query dropdowns in these cases would appear blank; however, they will now populate with the names of those objects. Note that:
- You won’t be able to change the associated schema of a table/query you cannot access. Some fields may appear as disabled (ex: data mappings).
- You can still modify and save other aspects of the sync.
- Event-based syncs can be enabled and run as usual. Batch syncs must be run as a user with the correct permissions. Remember that you can run a job as another user if you have the credentials for that user.
- Spend less time searching and more time building: You are now able to use the "Models" dropdown field in the Connections UI to quickly select tables scoped within the respective model.
Source and Destination Enhancements
v5.8
- You can now pull specific data from REST API response headers using .NET regex capture groups; for example, a hypothetical pattern such as `req-(?<requestId>\d+)` would capture the numeric portion of a header value into a `requestId` group. This feature gives you more control and flexibility in collecting the data you need when using REST API destinations.
- We implemented a significant enhancement to the read performance of Oracle data sources. This improvement targets scenarios involving tables with a large number of columns or large-sized columns, as well as networks experiencing higher latency.
- For file-based syncs, we've added "Registered Application" as an authentication mechanism for Azure Blob Storage. This is an addition to S3 support for file-based syncs.
- We've expanded the conditional filtering capabilities introduced in Cinchy v5.7. This enhancement is now available on the following sources:
  - Kafka
  - SOAP (Cinchy Event Triggered)
  - REST API (Cinchy Event Triggered)
- We enhanced our data polling source with the introduction of `{NEXTOFFSET}` and `{MAXOFFSET}` keywords. These features streamline data synchronization by optimizing search ranges within queries, improving performance and efficiency. `{MAXOFFSET}` fetches the highest column value from a query, while `{NEXTOFFSET}` retrieves the largest column value from prior queries, essential for effective batch processing. For further details on using these new features in data polling, please visit the `{MAXOFFSET}` and `{NEXTOFFSET}` section on our Data Polling documentation page.
- You can now execute post-sync scripts when performing Change Data Capture (CDC) operations with Kafka Topics as the destination. This enhancement enables developers to implement custom actions after data synchronization, such as setting status flags or recording timestamps. This allows for more flexible post-operation scripting in CDC workflows.
- Real-time sync improvements for listener operational status: This enhancement improves the data synchronization feature in Cinchy. The Enabled/Disabled setting now more effectively controls the start and stop of data synchronization. Key enhancements include:
- Lifecycle Phases: The synchronization process now clearly follows distinct phases: Starting, Running, Failed, and Disabled. This structured approach enhances monitoring and debugging capabilities.
- Automatic Retry Mechanism: In the Failed state, due to synchronization errors, the system logs detailed error messages and remains in the Enabled state. It automatically retries synchronization every 60 seconds until you set the status to Disabled.
- Automatic Disable Feature: The system now intelligently sets itself to Disabled under two specific conditions:
- Detection of an invalid configuration (such as erroneous Topic JSON).
- Validation error during synchronization (such as a missing mandatory field in the Topic JSON).
- When selecting columns in a Cinchy Event Broker or Cinchy Table source, four additional system columns have been added: Replaced, Rejection Comments, Changeset, and Change Summary.
- When configuring Kafka as a sync destination, you can now use a '@COLUMN' custom formula to enable a dynamic Topic. For further information on this, please review the Kafka Topic documentation.
- To help reduce possible system load, there is a new user-configurable limit on how many CDC event queries may run concurrently (see the sketch after this list).
  - The `CdcQueryConcurrencyIndex` value defaults to 5 concurrent queries and can be configured in the Event Listener AppSettings.json; 5 is suitable for many environments.
  - If the load associated with Change Notifications is impacting system performance, consider lowering this value to prioritize other work, at the expense of change processing. Alternatively, provision faster hardware.
  - If Change Notification processing is taking longer than desired, consider increasing this number to allow more concurrent processing, depending on the capacities of your particular system.
- The Snowflake driver was updated from 2.1.5 to 3.1.0, in part to allow for the use of a PrivateLink as the Connection String when using Snowflake as a source.
- The retention period for messages on user-defined Kafka topics was set to 24 hours in order to match system-defined topics. This helps mitigate data duplication in cases where offsets were deleted before the messages in the topic.
- Improved the performance of permissions checks for the Connections experience.
- The above change also fixes a bug that could prevent certain users from downloading the data sync error logs.
- When configuring a data sync using the Salesforce Object source, the source filter section will appear in the UI as intended.
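A minimal sketch of the CDC query concurrency setting referenced earlier in this list. Only the key name, its location in the Event Listener's AppSettings.json, and the default of 5 come from the release note; the surrounding structure is illustrative, and the comments rely on .NET's configuration reader tolerating // comments in appsettings files:

```json
{
  "AppSettings": {
    // Maximum number of CDC event queries that may run concurrently (default: 5).
    // Lower it to reduce the load of change processing; raise it for faster processing.
    "CdcQueryConcurrencyIndex": 5
  }
}
```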
Connections Bug Fixes
v5.8
- We resolved an issue where the Load Metadata button was failing to connect to DB2 databases when fetching schema information.
- We fixed an issue where the Mapping UI would disappear in the Destination Section for Cinchy Event Broker to MongoDB Collection syncs, where Sync Actions were set to Delta.
- We fixed an issue where system columns like Created By, Created, Modified By, Modified, Deleted, and Deleted By weren't appearing in the topic columns dropdown in the Listener UI.
- We fixed a bug where the model loader failed to update when you added a description to a calculated column. The table now saves correctly when making changes to calculated columns.
- We fixed an issue that prevented table selection from the drop-down in Cinchy Event Broker's listener configuration.
- We resolved an issue where the `Lookup()` function in the Filter field for Cinchy Tables wasn't behaving as expected.
- We restored the default timeout setting for `HttpClient` to over 100 seconds.
- We fixed an issue where the UI failed to display Batch Data Sync results and instead showed a generic exception message. The jobs tab in the UI now opens without any API failure appearing in the browser's network console.
- We resolved an issue that caused large batch delta syncs to fail in Cinchy.
- We fixed an issue where Cinchy CDC Delete events weren't sent to the destination using Delta. For example, Deletes and Approved Deletes now successfully insert records into Kafka when deleted from a Cinchy table.
- We fixed the issue of concurrent updates failing due to a Primary Key (PK) violation on the History table by adding a retry mechanism. This fix aims to make Cinchy more robust when making concurrent updates.
- We resolved an issue where the Cinchy destination would still be queried during a delta sync.
- We fixed an issue with data syncs that would fail on executed queries that returned large numbers of records on Cinchy Table destinations.
- We modified the Data Polling mechanism to enhance the reliability of message delivery to Kafka.
- We fixed an issue so that Destination mappings in dropdowns now display the alias instead of the original column name.
- We resolved an issue where dropdowns weren't correctly loading data due to user permissions on system tables. This fix, involving an API change, ensures that dropdown data reflects appropriate user access levels.
- We resolved an issue where the Query dropdown wasn't populating when you selected RunQuery in Connections listener UI.
- We resolved a rendering issue in the Connections listener UI, where line breaks in the topic JSON were causing display problems.
- We resolved a security issue that removes the logging of Connection Attributes in the Event Listener.
- We added a retry mechanism to address a transient connection issue for PostgreSQL databases, where listeners in the production environment encountered errors due to invalid `client_encoding` and `TimeZone` parameters. The update enhances connection stability and reliability.
- We increased the request limit size for the Connections Job API, enabling the processing of larger files without encountering size restrictions.
- We fixed an issue in batch synchronization (CSV to table) where data was incorrectly updated on subsequent syncs without any actual changes. This fix ensures data integrity and accurate update behavior in the synchronization process.
- We fixed an issue where line breaks in the Listener Topic JSON would cause the Listener UI to not display any settings. Cinchy now removes any formatting from the Topic column of the Listener Config table.
- We resolved an issue where CDC to ADO.net syncs weren't using a single sync
for all operations. The following changes have been made:
- Sync Key Flexibility: Any standard Cinchy data type can be used as a sync key or ID field.
- ID and Sync Key Compatibility: Setting both to the same field won't cause failure.
- Unified Sync Operations: Insert, Update, and Deletes work in the same sync when specified.
- Auto offset Reset: Consistent behavior for all settings.
- Error Messaging: Clear error messages for missing operation info.
- We fixed a regression issue with the DB2 connector.
- We fixed a visual instability within the Filter text field.
- We resolved an issue where existing sync configurations in Cinchy Event Broker incorrectly displayed empty query dropdowns.
- We fixed an issue where data syncs from pipe-delimited files failed to process text fields containing quotes.
- We fixed a bug that was causing the unsaved changes dialog to be displayed in scenarios where there were no unsaved changes.
- We resolved an issue in changelog tables where updates that weren't batched and timeouts occurred during large record set processing. This fix ensures efficient handling of cache callbacks across all nodes.
- We resolved an issue where the order of multi-selects affected reconciliation in Connections.
- We have fixed a bug that was causing some data syncs to Cinchy Tables to unnecessarily update multi-select values in the destination. This fix reduces monitoring noise and prevents collaboration log and database size bloat.
- We fixed a bug where using 0 parameters in the ExecuteCQL API would incorrectly modify the API Models.
- We fixed a bug where the Publish Data Change Notifications setting was not being respected during table model loads.
- "Unique constraint" errors will no longer be thrown during parallel batch syncs that contained calculated columns.
- Fixed a bug that was creating duplicate target records when data was inserted in parallel and transient errors triggered retries.
- We have addressed the possible out-of-memory errors that could arise during a data sync when "caching linked data values".
- Unnecessary collaboration log bloat will no longer occur due to the presence of parameters (variables) in a Data Sync XML.
- The Connections experience will no longer incorrectly start up if a Redis connection string is not provided. The following error will be thrown to provide troubleshooting assistance: "The Redis connection string is missing. Check your application configuration to ensure this value is provided."
- We fixed a bug where LEFT and RIGHT CQL functions were not working as expected.
- We fixed a bug that was preventing queries with User Defined Functions from executing due to a misalignment between the parser and the application initialization.
- Erased data will be filtered properly again on Left & Right Joined Table References.
- We fixed the following form bugs:
- A bug that prevented new records from being added to multiple child forms in the same view before the parent form was saved.
- A bug that duplicated newly-added records in a child form table if they were edited before the parent form was saved.
- Logging into the Forms application from a direct link in a fresh session resulted in a blank screen.
- The Active Jobs tab in the Connections UI will correctly show the currently running jobs.
- Fixed a bug that was preventing the `Update` and `Delete` actions from working in Batch Delta Syncs.
  - Additionally, if an invalid Action Type column value is provided when configuring a Delta Sync, the Connection logs will now contain more detailed warning messages, including information about the record with the incorrect action type. For example: "Invalid sync action type `ActionTypeValue` in column `ActionTypeColumnName`".
  - Note: Valid sync action types are `Insert`, `Update`, and `Delete`. Anything else is invalid.
- Fixed an issue where the listener/worker would extract the wrong IdP URL during the simultaneous startup of Cinchy Web and the listener/worker.
- Long running batch jobs will no longer intermittently fail to complete.
- Fixed an authentication error caused by using Basic Auth with the SOAP 1.2 source connector.
- Fixed a bug that was causing data syncs to fail when syncing linked columns to a Cinchy Table target.
Forms
v5.9
- We've added the ability to export a Form PDF in landscape mode.
- When loading a Form, the sidebar navigation will now correctly highlight the appropriate/currently selected section.
Forms Bug Fixes
v5.9
- We resolved a bug that prevented saving Date values in child forms during creation and editing.
- We fixed a bug where the Add… link in the forms sidebar failed to load the correct form in the modal.
- We fixed an issue where multi-select columns linked to large tables didn't display selected values and allowed accidental overwriting of existing selections.
- We fixed an issue where creating new records in Forms failed if a text field contained a single quote, ensuring successful record creation regardless of text field content.
- We fixed a bug where child forms weren't saved due to multi-select columns getting their values set to empty if they weren't changed by the user.
- The column filter in the [Cinchy].[Form Fields] table will now filter correctly when creating a new record.
- Selecting a record in the "Search Records" dropdown will update the page and URL to the newly selected record.
- Fixed a bug that was causing a record lookup error due to an "invalid trim".
v5.8
The following changes were made to the platform between v5.8 and v5.13
Breaking changes
Discontinuation of support for 2012 TSQL v5.9
As of version 5.9, Cinchy will cease support for 2012 TSQL. This change aligns with Microsoft's End of Life policy. For further details, refer to the SQL Server 2012 End of Support page.
Removal of GraphQL API (Beta) v5.9
The beta version of our GraphQL API endpoint has been removed. If you have any questions regarding this, please submit a support ticket or email support@cinchy.com.
Personal Access Tokens v5.10
There was an issue affecting Personal Access Tokens (PATs) generated in Cinchy wherein tokens created from v5.7 onwards were incompatible with subsequent versions of the platform. This issue has been resolved, however please note that:
- Any tokens created on versions 5.7.x, 5.8.x, and 5.9.x will need to be regenerated.
- "401 Unauthorized" errors may indicate the need to regenerate the token.
- PATs created before 5.7.x and from 5.10 onwards are unaffected.
Update to .NET 8 v5.13
The Cinchy platform was updated to .NET 8, in accordance with Microsoft's .NET support policy. Support for .NET 6 ends on November 12, 2024.
- For customers on Kubernetes: This change will be reflected automatically upon upgrading to Cinchy v5.13+.
- For customers on IIS: The following must be installed prior to upgrading to Cinchy v5.13+: