For those interested in time series libraries: we are developing Darts [1], which focuses on making it easy & straightforward to build and use forecasting models. Out of the box it contains traditional models (such as ARIMA) as well as recent deep learning ones (like N-Beats). It also makes it easy to train models on multiple time series (potentially scaling to large datasets), as well as on multivariate series (i.e., series made up of multiple dimensions). It will soon support probabilistic forecasts as well.
This is supported, but only by neural-network models, which are fit using SGD and hence don't naturally require the whole dataset in memory. Other models like ARIMA do need the full series loaded in memory.
The models in Darts that work on multiple time series accept a Sequence[TimeSeries] in their fit() method. These sequences can either be plain lists (fully in memory, the simplest option) or, when needed, a custom Sequence that for example lazily loads series from disk via its __getitem__() method (somewhat similar to what PyTorch Datasets do).
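To illustrate the lazy-loading idea, here is a minimal sketch of such a sequence. The `LazySeriesSequence` name, the CSV-on-disk storage, and the plain-list return type are my own assumptions for this sketch (a real implementation would construct `TimeSeries` objects in `__getitem__`), not part of the Darts API:

```python
import csv
from collections.abc import Sequence


class LazySeriesSequence(Sequence):
    """Hypothetical Sequence that loads one series from disk per access.

    Only file paths are kept in memory; each series is read from its
    CSV file only when indexed, so arbitrarily many series can be
    iterated without loading them all at once.
    """

    def __init__(self, paths):
        self._paths = list(paths)

    def __len__(self):
        return len(self._paths)

    def __getitem__(self, idx):
        # Only the requested file is read; nothing else is held in memory.
        with open(self._paths[idx], newline="") as f:
            return [float(row[0]) for row in csv.reader(f)]
```

Because it subclasses `collections.abc.Sequence`, such an object can be iterated and indexed anywhere a list of series is accepted.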
If you need even more control, for instance because you have a single very long series that doesn't fit in memory, you can implement your own Darts "TrainingDataset". In that case you control exactly how your series is sliced.
Edit: I realised this only answers the first sentence of your comment ;) For now there's no mechanism for scaling to multiple machines beyond what PyTorch already offers. AFAIK it's reasonably easy to scale to multiple GPUs on one machine, but I'm not sure how it would scale across several machines; we haven't had to try that yet. (Note that a single CPU can actually handle training deep models on tens of thousands of time series, similar to the M4 competition, in a fairly reasonable time.)
Both - some models are wrapped (like ARIMA & ETS around statsmodels, and Prophet around fbprophet) and others we write ourselves (RNNs, TCNs, N-Beats, ...). Basically we take a pragmatic approach here: we do whatever works best to make a given model usable in Darts.
The time series feature (TSFeature) extraction module in Kats can produce 65 features with clear statistical definitions, which can be incorporated in most machine learning (ML) models...
I'd be curious about the performance of these. A time series featurization library I've liked the look of but haven't used for real is catch22: https://github.com/chlubba/catch22
In particular I like catch22's methodology:
catch22 is a collection of 22 time-series [features that] are a high-performing subset of the over 7000 features in hctsa. Features were selected based on their classification performance across a collection of 93 real-world time-series classification problems...
There is also "tsfresh" [1] in the same domain that does «Automatic extraction of 100s of features». It filters the most useful features according to the given task, I quote: «This filtering procedure evaluates the explaining power and importance of each characteristic for the regression or classification tasks at hand.»
What are some suggested online courses for learning about multivariate time series forecasting? My skill level: OK with university-level biometrics, but that was 10+ years ago, and I'm self-taught in Python for web apps and automating GIS tasks.
Good question. I've been working on this too, iterating through YouTube and Medium tutorials and working through all the notebooks I can find. The best examples I've found use LSTMs for deep learning and vector autoregression (VAR) for classical statistical forecasting.
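To get a feel for what VAR actually does, here is a from-scratch sketch of fitting a VAR(1) model by least squares. This is not any library's API (in practice you'd reach for something like statsmodels); the function names are illustrative:

```python
import numpy as np


def fit_var1(series):
    """Fit a VAR(1) model x_t = A @ x_{t-1} + e_t by least squares.

    series: (T, k) array of T observations of a k-dimensional series.
    Returns the (k, k) coefficient matrix A.
    """
    X = series[:-1]  # regressors: x_0 .. x_{T-2}
    Y = series[1:]   # targets:    x_1 .. x_{T-1}
    # lstsq solves X @ B ~ Y row-wise, i.e. B = A^T, so transpose back.
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B.T


def forecast_var1(A, last_obs, steps):
    """Roll the fitted recursion forward to produce multi-step forecasts."""
    out, x = [], last_obs
    for _ in range(steps):
        x = A @ x
        out.append(x)
    return np.array(out)
```

The same recursive-forecast idea underlies the library implementations; they additionally handle intercepts, higher lag orders, and confidence intervals.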
What are some ways to deal with large volumes of variable-length timeseries for real-time predictions? The best solutions I've tried myself all hinge on windowed-feature extraction or LSTMs. It generally works, but starts to fall apart when you're squeezed for data.
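Windowed-feature extraction, as mentioned above, can be sketched in a few lines. The function name, window width, step size, and choice of per-window features here are all my own assumptions for illustration:

```python
import numpy as np


def window_features(series, width, step):
    """Slide a fixed-size window over a series and emit per-window features.

    Returns an (n_windows, 3) array with the mean, standard deviation,
    and last-minus-first value (a crude slope proxy) of each window.
    Variable-length series simply yield different numbers of windows,
    each with the same fixed feature dimension.
    """
    feats = []
    for start in range(0, len(series) - width + 1, step):
        w = np.asarray(series[start:start + width], dtype=float)
        feats.append([w.mean(), w.std(), w[-1] - w[0]])
    return np.array(feats)
```

For real-time use, the same computation can be applied incrementally to only the newest window as observations arrive.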
It seems that almost everywhere you look, every example deals with just one timeseries. However, since these methods are much more "statistical" in nature, they can actually make meaningful predictions from a single sample.
I would say manual feature extraction? Your custom extraction can reduce the variable-length inputs to a uniform dimension (the same number of features for every input), which can then be fed to almost any algorithm.
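As a toy sketch of that reduction: whatever its length, every series below maps to the same four numbers, so any downstream classifier sees a fixed-width input. The particular features chosen here (mean, standard deviation, mean absolute first difference, length) are just examples:

```python
import math


def summarize(series):
    """Reduce a variable-length series to a fixed-size feature vector.

    Returns [mean, std, mean_abs_diff, length] regardless of input
    length, so heterogeneous series become a uniform feature table.
    """
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    # Mean absolute first difference captures how "jumpy" the series is.
    diff = sum(abs(b - a) for a, b in zip(series, series[1:])) / max(n - 1, 1)
    return [mean, math.sqrt(var), diff, float(n)]
```

Libraries like tsfresh or catch22 automate exactly this step, computing many such summaries at once.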
These automatic extractions are indeed very statistical in nature, but for some datasets domain insight is more valuable and gives more usable features (in my opinion). I've found quite a few datasets where manual features + gradient-boosted trees give better results than automated statistical methods. Often combinations give the best results :)
Maybe look up panel data and repeated experiments. Those techniques are applied when the data is "tabular": there are often relatively few observations along any individual time axis, but many instances of the experiment. It's a branch of linear forecasting (least squares), but it's tailored, for example, to biological experiments where you have several sets of results - related but maybe not performed in the same lab - which you want to amalgamate.
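The simplest pooled-panel idea can be sketched as stacking all the short experiments into one least-squares fit. This toy version estimates a single shared slope and intercept (no per-experiment effects); the function name and setup are illustrative only:

```python
import numpy as np


def pooled_fit(experiments):
    """Pool several short (x, y) experiments into one least-squares line.

    experiments: list of (x, y) pairs of 1-D arrays, one pair per lab/run.
    All observations are stacked and a shared slope and intercept are
    estimated at once - the simplest way to amalgamate related runs.
    """
    xs = np.concatenate([np.asarray(x, dtype=float) for x, _ in experiments])
    ys = np.concatenate([np.asarray(y, dtype=float) for _, y in experiments])
    # Design matrix with a column of ones for the intercept.
    X = np.column_stack([xs, np.ones_like(xs)])
    (slope, intercept), *_ = np.linalg.lstsq(X, ys, rcond=None)
    return slope, intercept
```

Proper panel methods extend this with per-experiment (fixed or random) effects so that lab-to-lab differences don't bias the shared estimate.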
Kats looks like a useful library, but I'm a bit surprised to see they're not enabling parallel execution for the numba kernels. Surely FB must have time-series data large enough that they'd see performance benefits from parallelism in these functions?
Based on the example, it looks like this is a framework that incorporates Prophet as one way to build time series models and takes things a few steps further.
[1] https://github.com/unit8co/darts/