I have a similar issue. I've been using Feather/Parquet files for storage and just using pandas for analysis. There is an issue where the initial load/convert essentially doubles memory usage, since the data gets copied into a pandas DataFrame. This can be avoided if you use a Feather file and follow Arrow's recommendations for a zero-copy conversion (no NaNs/nulls).
I think it's a bit more flexible than using CLI tools, since you can set a time index and query specific time slices fairly easily.
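For example, the time-slice querying could look like this (made-up hourly data, but the partial-string indexing is standard pandas):

```python
import pandas as pd

# Build a DataFrame with a DatetimeIndex: 4 days of hourly values.
idx = pd.date_range("2024-01-01", periods=96, freq="h")
df = pd.DataFrame({"value": range(96)}, index=idx)

# Partial-string indexing: grab a whole day, or an explicit window.
day = df.loc["2024-01-02"]                               # all 24 rows for Jan 2
window = df.loc["2024-01-01 06:00":"2024-01-01 09:00"]   # inclusive slice

print(len(day), len(window))
```

Doing the equivalent with grep/awk over CSVs means parsing timestamps by hand every time.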