Parquet

Overview

Apache Parquet is a file format designed to support fast data processing for complex data, with several notable characteristics:

1. Columnar: Unlike row-based formats such as CSV or Avro, Apache Parquet is column-oriented – meaning the values of each table column are stored next to each other, rather than those of each record.

2. Open-source: Parquet is free to use and open source under the Apache License, and is compatible with most Hadoop data processing frameworks. To quote the project website, “Apache Parquet is… available to any project… regardless of the choice of data processing framework, data model, or programming language.”

3. Self-describing: In addition to data, a Parquet file contains metadata including schema and structure. Each file stores both the data and the standards used for accessing each record – making it easier to decouple services that write, store, and read Parquet files.
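The columnar idea above can be sketched in a few lines of Python. This is an illustrative layout comparison only (not Parquet's actual on-disk encoding), using made-up employee records:

```python
# Hypothetical employee records, as in the example use case below.
rows = [
    {"id": 1, "name": "Ada", "dept": "Eng"},
    {"id": 2, "name": "Grace", "dept": "Eng"},
    {"id": 3, "name": "Alan", "dept": "Ops"},
]

# Row-oriented layout (CSV/Avro-style): all values of one record sit together.
row_layout = [list(r.values()) for r in rows]

# Column-oriented layout (Parquet-style): all values of one column sit
# together, so a reader can scan a single column without touching the rest.
column_layout = {key: [r[key] for r in rows] for key in rows[0]}

print(column_layout["dept"])  # ['Eng', 'Eng', 'Ops']
```

Scanning `column_layout["dept"]` touches only the department values, which is why columnar formats speed up queries that read a few columns from wide tables.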

Example use case

You have a Parquet file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.

:::tip

The Parquet source supports batch syncs.

:::

Info tab

You can find the parameters in the Info tab below (Image 1).

Values

| Parameter | Description | Example |
| --- | --- | --- |
| Title | Mandatory. Input a name for your data sync. | Employee Sync |
| Variables | Optional. Review our documentation on Variables here for more information about this field. When uploading a local file, set this to the file path. | Since we're doing a local upload, we use `@Filepath` |
| Permissions | Data syncs are role-based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory. | |

Image 1: The Info Tab

Source tab

The following table outlines the mandatory and optional parameters on the Source tab, which define your data sync source and how it functions.

:::tip Registered Applications

For information on setting up registered applications for S3 or Azure, please see the Registered Applications page.

:::

| Parameter | Description | Example |
| --- | --- | --- |
| (Sync) Source | Mandatory. Select your source from the drop-down menu. | Parquet |
| Source | The location of the source file. Supports local upload, Amazon S3, and Azure Blob Storage with various authentication methods. | Local |
| Row Group Size | Mandatory. The size of the Parquet row groups. Review the documentation here for more on row group sizing. | The recommended disk block/row group/file size is 512 to 1024 MB on HDFS. |
| Path | Mandatory. The path to the source file. Local upload requires a Variable in the Info tab. | `@Filepath` |
| Auth Type | Defines the authentication type. Supports "Access Key" and "IAM role". Additional setup is required. | |
| Test Connection | Use to verify credentials. A "Connection Successful" pop-up appears if the connection succeeds. | |

Next steps