cattledb
Timeseries Store on BigTable
A high-performance time-series data store for BigTable
1 star
3 watching
4 forks
Language: Python
Last commit: about 2 years ago
Linked from 1 awesome list
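To give a feel for the kind of workload cattledb targets, the sketch below shows what writing and reading a single time-series point on Bigtable can look like with the official google-cloud-bigtable Python client. This is not cattledb's actual API or schema; the project, instance, table, column family, and row-key layout are illustrative assumptions only.

```python
from datetime import datetime, timezone

from google.cloud import bigtable

# Illustrative names only; cattledb's real schema and API may differ.
PROJECT_ID = "my-project"
INSTANCE_ID = "my-instance"
TABLE_ID = "timeseries"
COLUMN_FAMILY = "data"

client = bigtable.Client(project=PROJECT_ID, admin=False)
table = client.instance(INSTANCE_ID).table(TABLE_ID)

# A common Bigtable pattern for time series: encode the series id and a
# time bucket into the row key so points for one series stay contiguous.
ts = datetime(2023, 5, 1, 12, 0, tzinfo=timezone.utc)
row_key = f"sensor-42#temperature#{ts:%Y%m%d%H}".encode()

# Write one data point into the hourly bucket.
row = table.direct_row(row_key)
row.set_cell(COLUMN_FAMILY, b"value", str(21.5).encode(), timestamp=ts)
row.commit()

# Read back all points stored under that bucket's row key.
got = table.read_row(row_key)
for cell in got.cells[COLUMN_FAMILY][b"value"]:
    print(cell.timestamp, cell.value.decode())
```

The row-key design (series id plus coarse time bucket) is the standard Bigtable approach for keeping reads of one series to a small, contiguous row range; it is shown here as a general pattern rather than as cattledb's implementation.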
Related projects:
Repository | Description | Stars |
---|---|---|
opennms/newts | A distributed time-series data store built on top of Apache Cassandra, optimized for high throughput and efficient storage and retrieval. | 194 |
srotya/sidewinder | A fast and scalable time-series database for storing and analyzing time-series data at scale | 25 |
kairosdb/kairosdb | A fast and scalable time series database solution built on top of Cassandra | 1,740 |
oetiker/rrdtool-1.x | A tool for efficiently logging time-series data in fixed-size round-robin databases and rendering it as graphs | 1,019 |
artesiawater/hydropandas | A Python package for analyzing and writing hydrological timeseries data | 58 |
gnocchixyz/gnocchi | A time series database designed to store and index large amounts of aggregated data efficiently. | 303 |
machine-w/crown | A lightweight ORM library for TDengine time series databases | 35 |
alpacahq/marketstore | A high-performance database designed to efficiently store and manage large volumes of financial time-series data | 1,890 |
ankane/rollup | A Ruby library that provides a simple way to roll up time-series data in Rails applications | 313 |
naomijub/wooridb | A general-purpose time-series database with schemaless key-value storage and its own query syntax inspired by SPARQL. | 131 |
netflix/atlas | A high-performance in-memory time series data management system designed for big data analytics and business intelligence applications. | 3,459 |
unit-io/unitdb | A high-performance time-series database designed to handle IoT and real-time analytics applications. | 120 |
florents-tselai/warcdb | A library for storing and querying web crawl data in a compact, easily sharable format. | 397 |
wrobstory/mcflyin | An API for performing common timeseries transformations on data | 86 |
pinusdb/pinusdb | A time-series database designed for small to medium-sized data sets with simple and efficient design goals. | 115 |