I assume they meant to say "you are encouraged to use DVC to run your model and experiment pipeline". They want to encourage you to do this because they are trying to build a business around being a data science ops ecosystem. But the truth is that DVC is not a great tool for running "experiments" that search over a parameter space. It could be improved in that regard, but that's just not what I use it for, nor what I recommend it to other people for.
However, it's fantastic for tracking artifacts throughout a project that have been generated by other means, for keeping those artifacts tightly in sync with Git, and for making it easy to share those artifacts without forcing people to re-run expensive pipelines.
Last I checked, it wasn't easy to use something like Optuna to do hyperparameter tuning with Hydra/DVC.
Ideally I'd like the tool I use for data versioning (DVC/git-lfs/git-annex) to be orthogonal to the one I use for hyperparameter sweeping (DVC/Optuna/SageMaker Experiments), to the one I use for configuration management (DVC/Hydra/plain YAML), and to the one I use for experiment DAG management (DVC/Makefile).
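For the DAG-management role specifically, a DVC pipeline is just a `dvc.yaml` file of stages with declared inputs and outputs, much like Makefile targets with content-addressed artifacts. A minimal sketch (the stage names, scripts, and paths here are hypothetical):

```yaml
stages:
  prepare:
    cmd: python prepare.py        # hypothetical preprocessing script
    deps:
      - prepare.py
      - data/raw.csv
    outs:
      - data/prepared.csv
  train:
    cmd: python train.py          # hypothetical training script
    deps:
      - train.py
      - data/prepared.csv
    outs:
      - models/model.pkl
```

`dvc repro` then re-runs only the stages whose dependencies changed, which is exactly the role `make` plays, except the outputs are also versioned and shareable.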
Optuna is becoming very popular in the data-science/deep-learning ecosystem at the moment. It would be great to see more composable tools, rather than having to go all-in on a given ecosystem.
Love the work that DVC is doing to tackle these difficult problems, though!
Big +1 about composability and orthogonality. I don't want one "do it all" tool, I want a collection of small tools that interoperate nicely. Like how you can use Airflow and DBT together, but neither tool really tries to do what the other one does (not that Airflow is "small", but still).