If anyone is interested in contrasting this with InnoDB (MySQL’s default engine), Jeremy Cole has an outstanding blog series [0] going into incredible detail.

[0]: https://blog.jcole.us/innodb/




Apache Arrow Columnar Format: https://arrow.apache.org/docs/format/Columnar.html :

> The Arrow columnar format includes a language-agnostic in-memory data structure specification, metadata serialization, and a protocol for serialization and generic data transport. This document is intended to provide adequate detail to create a new implementation of the columnar format without the aid of an existing implementation. We utilize Google’s Flatbuffers project for metadata serialization, so it will be necessary to refer to the project’s Flatbuffers protocol definition files while reading this document. The columnar format has some key features:

> Data adjacency for sequential access (scans)

> O(1) (constant-time) random access

> SIMD and vectorization-friendly

> Relocatable without “pointer swizzling”, allowing for true zero-copy access in shared memory
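For example, the zero-copy property means an Arrow IPC file can be memory-mapped and read without deserialization. A minimal pyarrow sketch (the path and data here are illustrative):

    import pyarrow as pa

    # Write a small table to an Arrow IPC file.
    table = pa.table({"id": [1, 2, 3], "val": [0.1, 0.2, 0.3]})
    with pa.OSFile("/tmp/demo.arrow", "wb") as sink:
        with pa.ipc.new_file(sink, table.schema) as writer:
            writer.write_table(table)

    # Memory-map it back in; the record batches reference the mapped
    # buffers directly, so no copy or pointer swizzling happens on read.
    with pa.memory_map("/tmp/demo.arrow", "r") as source:
        loaded = pa.ipc.open_file(source).read_all()
    print(loaded.column("val"))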

Are the major SQL databases' file formats already SIMD-optimized and zero-copy across TCP/IP?

Arrow doesn't do full or partial indexes.

Apache Arrow supports the Feather and Parquet on-disk file formats. Feather is on-disk Arrow IPC, now with LZ4 compression by default or optionally ZSTD.
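A sketch of writing Feather v2 with pyarrow (table contents and path are illustrative):

    import pyarrow as pa
    import pyarrow.feather as feather

    table = pa.table({"id": [1, 2, 3], "name": ["a", "b", "c"]})
    # Feather v2 is the Arrow IPC file format on disk; LZ4 is the
    # default compression, or pass "zstd" explicitly.
    feather.write_feather(table, "/tmp/demo.feather", compression="zstd")
    roundtrip = feather.read_table("/tmp/demo.feather")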

Some databases support Parquet as their on-disk flat-file format, i.e. the files over which a DBMS process (like PostgreSQL or MySQL) provides a logged, permissioned, and cached query interface with query planning.

IIUC, with Parquet it's possible both to query tables offline, as plain files on disk with ordinary tools, and to query them online through a persistent server process with tunable parameters that can also centrally enforce schema and referential integrity.
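As one concrete illustration of the offline path, an embedded engine such as DuckDB (my choice here, not something the thread prescribes) can run SQL directly against a Parquet file on disk with no server process:

    import duckdb

    # Ad-hoc query against a Parquet file in place; the path is illustrative.
    con = duckdb.connect()
    rows = con.execute(
        "SELECT name, count(*) AS n FROM '/tmp/demo.parquet' GROUP BY name"
    ).fetchall()

The online path would instead route the same table through a long-running DBMS process that layers logging, permissions, caching, and query planning on top.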

From https://stackoverflow.com/questions/48083405/what-are-the-di... :

> Parquet format is designed for long-term storage, where Arrow is more intended for short term or ephemeral storage

> Parquet is more expensive to write than Feather as it features more layers of encoding and compression. Feather is unmodified raw columnar Arrow memory. We will probably add simple compression to Feather in the future.

> Due to dictionary encoding, RLE encoding, and data page compression, Parquet files will often be much smaller than Feather files

> Parquet is a standard storage format for analytics that's supported by many different systems: Spark, Hive, Impala, various AWS services, in future by BigQuery, etc. So if you are doing analytics, Parquet is a good option as a reference storage format for query by multiple systems
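A quick way to see that size difference on repetitive data, using pyarrow (a sketch, not a benchmark; actual sizes depend on the data):

    import os
    import pyarrow as pa
    import pyarrow.feather as feather
    import pyarrow.parquet as pq

    # Highly repetitive column: Parquet's dictionary/RLE encoding
    # should shrink this far more than raw columnar Feather.
    table = pa.table({"city": ["NYC", "LA", "NYC", "NYC"] * 250_000})
    feather.write_feather(table, "/tmp/t.feather", compression="uncompressed")
    pq.write_table(table, "/tmp/t.parquet")
    print(os.path.getsize("/tmp/t.feather"), os.path.getsize("/tmp/t.parquet"))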

Those systems index Parquet. Can they also index Feather IPC files, which an application might already be journaling/logging and checkpointing?

Edit: What are some of the DLT (distributed ledger) solutions for indexing, given a consensus-controlled message spec designed for synchronization?

- cosmos/iavl: a Merkleized AVL+ tree (a balanced search tree with Merkle hashes and snapshots to prevent tampering and enable synchronization; a toy sketch of the node hashing follows this list) https://github.com/cosmos/iavl/blob/master/docs/overview.md

- google/trillian has Merkle-hashed edges between rows in table order, but is centralized: https://github.com/google/trillian

- "EVM Query Language: SQL-Like Language for Ethereum" (2024) https://news.ycombinator.com/item?id=41124567 : [...]



