How to load your data from KDB+ to Parquet File

Pipes allows you to automatically replicate your KDB+ data into a Parquet File on your defined schedule.
Load all your data from KDB+ to Parquet File to get instant access to your KDB+ data, always kept up to date on the schedule you define.
With ready-to-use connectors and pre-built data schemas for more than 200 APIs and web services, you can build data pipelines in minutes and without any coding.
1

Connect to Parquet File

This will be the destination of all data pipelines you build. Besides Parquet File, Pipes supports the most used relational databases in the cloud and on-premises.
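Under the hood, a Parquet destination is simply one or more files written in the Parquet format. As an illustration of the underlying technique only (not the Pipes product itself), here is a minimal sketch of writing a small table to a Parquet file with pyarrow; the column names and output path are assumptions made for this example.

```python
# Minimal sketch: writing a table to a Parquet file with pyarrow.
# The columns and the output path are illustrative assumptions.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "sym": ["AAPL", "MSFT", "AAPL"],
    "price": [189.5, 411.2, 189.7],
})

# Write the table as a Snappy-compressed Parquet file (Snappy is the default codec).
pq.write_table(table, "trades.parquet", compression="snappy")
```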
2

Connect to KDB+

Just enter your credentials to give Pipes access to the KDB+ API. Pipes can then retrieve your data from KDB+.
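Behind the scenes, connecting to KDB+ means opening an IPC connection to a running q process with a host, port, and credentials. The sketch below shows the general technique with the pykx library; the host, port, and credentials are placeholder assumptions, and this is not Pipes' internal connector.

```python
# Minimal sketch of a KDB+ (q) IPC connection using pykx.
# Host, port, and credentials are placeholders for this example.
import pykx as kx

conn = kx.SyncQConnection(
    host="localhost",   # address of the q process
    port=5000,          # port the q process listens on
    username="user",    # credentials, if the process requires them
    password="secret",
)

# Run a simple q expression to verify the connection works.
print(conn("2+2"))  # -> 4

conn.close()
```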
3

Create a data pipeline from KDB+ to Parquet File

Pipes lets you select the data from KDB+ you want to have in Parquet File. This pipeline will run automatically on your defined schedule!
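Conceptually, each pipeline run pulls the selected rows from KDB+ and rewrites them as a Parquet file. The sketch below illustrates this end to end with pykx and pyarrow; the trade table, the query, the connection details, and the output path are assumptions for the example, and in practice the scheduling is handled by Pipes rather than by a script like this.

```python
# Minimal end-to-end sketch: query KDB+, convert to pandas, write Parquet.
# Table name, query, connection details, and output path are assumptions.
import pykx as kx
import pyarrow as pa
import pyarrow.parquet as pq

# Open an IPC connection to the q process holding the data.
conn = kx.SyncQConnection(host="localhost", port=5000)

# Select the KDB+ data to replicate (here: columns of a 'trade' table)
# and convert the result to a pandas DataFrame.
df = conn("select time, sym, price, size from trade").pd()
conn.close()

# Convert the DataFrame to an Arrow table and write it out as Parquet.
pq.write_table(pa.Table.from_pandas(df), "trade.parquet", compression="snappy")
```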

About KDB+

kdb+ is a column-oriented, relational time series database (TSDB) with in-memory (IMDB) capabilities. The database is typically used in high-frequency trading (HFT) to store, analyze, process, and retrieve large data sets at high speed. Financial institutions use kdb+ to analyze time series data such as stock exchange data.
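For instance, a typical time series analysis in kdb+'s q language aggregates trade data by symbol and time bucket. The sketch below runs such a query from Python against an assumed trade table; the connection details and table schema are illustrative only.

```python
# Illustrative time series aggregation against an assumed 'trade' table.
# Connection details and schema are assumptions for this example.
import pykx as kx

conn = kx.SyncQConnection(host="localhost", port=5000)

# q query: average price and total size per symbol in 5-minute buckets.
bars = conn("select avg price, sum size by sym, 5 xbar time.minute from trade").pd()
print(bars.head())

conn.close()
```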

About Parquet File

Apache Parquet is an open-source, column-oriented data storage format from the Apache Hadoop ecosystem. It is comparable to other columnar storage formats available in Hadoop, such as RCFile and Optimized RCFile (ORC), and is compatible with most data processing frameworks in the Hadoop environment. It provides efficient data compression and encoding schemes with improved performance for processing complex data in large volumes.
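Because Parquet is columnar, readers can load only the columns they need and each column is compressed independently. A small sketch with pyarrow illustrates this; the file name and columns are assumptions carried over from the earlier examples.

```python
# Sketch: column pruning and metadata inspection with pyarrow.
# The file 'trade.parquet' and its columns are illustrative assumptions.
import pyarrow.parquet as pq

# Read only two columns instead of the whole table; Parquet's columnar
# layout means the remaining columns are never read from disk.
prices = pq.read_table("trade.parquet", columns=["sym", "price"])
print(prices.num_rows, prices.schema)

# Inspect the file's metadata, including the compression codec of a column chunk.
meta = pq.ParquetFile("trade.parquet").metadata
print(meta.row_group(0).column(0).compression)
```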

Your benefits with Pipes

Get central access to all your data

Access data from 200+ data sources with our ready-to-use connectors and replicate it to your central data warehouse.

Automate your data workflows

Stop manually extracting data and automate your data integration without any coding. We maintain all pipelines for you and cover all API changes!

Enable data-driven decision-making

Empower everyone in your company with consistent and standardized data, automate data delivery and measure KPIs across different systems.