
Great points. Besides performance, a centralized coordinator with a distributed data plane is also better for the operability of schedulers. Some examples: rolling out new features in the scheduler, tracing scheduling behavior and decisions, and deploying configuration changes.

Even with a centralized scheduler, it should be possible to create a DevEx that uses decorators to author workflows easily.
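To make the idea concrete, here is a minimal sketch of what a decorator-based authoring layer over a centralized scheduler could look like. All names here (`task`, `TASK_REGISTRY`) are hypothetical illustrations, not Indexify's actual API:

```python
from functools import wraps

# Hypothetical registry mapping task names to functions. In a real
# system the client would serialize calls and submit them to the
# centralized scheduler; here we run them inline to show the DevEx.
TASK_REGISTRY = {}

def task(fn):
    """Register a function as a schedulable task."""
    TASK_REGISTRY[fn.__name__] = fn

    @wraps(fn)
    def submit(*args, **kwargs):
        # Placeholder for "enqueue with the scheduler and await result".
        return fn(*args, **kwargs)

    return submit

@task
def extract_text(doc: str) -> str:
    # Stand-in for real document parsing.
    return doc.upper()

@task
def chunk(text: str, size: int = 4) -> list:
    return [text[i:i + size] for i in range(0, len(text), size)]

# Authoring a workflow is just composing decorated Python functions.
chunks = chunk(extract_text("hello world"))
```

The point is that the scheduling topology (centralized vs. distributed) is invisible to the workflow author; the decorator boundary is where calls get turned into scheduled tasks.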

We are doing that with Indexify (https://github.com/tensorlakeai/indexify) for authoring data-intensive workflows that process unstructured data (documents, videos, etc.). It's like Spark, but uses Python instead of Scala/SQL/UDFs. Indexify's scheduler is centralized and uses RocksDB under the hood for persistence. Long term, we are moving to a hybrid storage system: S3 for less frequently updated data, and SSD as a read cache and for frequently updated data (ongoing tasks).
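The hybrid storage idea is essentially a tiered read path. A rough sketch of the shape of it (illustrative only, not Indexify's implementation):

```python
# Illustrative tiered store: SSD cache in front of an S3-like blob
# store. Writes for ongoing tasks land on SSD; reads fall back to the
# blob store and warm the cache. A dict stands in for each tier.
class TieredStore:
    def __init__(self, blob_store: dict):
        self.ssd_cache = {}           # hot: ongoing tasks, frequent reads
        self.blob_store = blob_store  # cold: less frequently updated data

    def get(self, key: str):
        if key in self.ssd_cache:
            return self.ssd_cache[key]       # fast path
        value = self.blob_store[key]         # slow path (S3 round trip)
        self.ssd_cache[key] = value          # warm the read cache
        return value

    def put(self, key: str, value):
        # A background flush to the blob store is omitted here.
        self.ssd_cache[key] = value

store = TieredStore({"job-1": "state"})
first = store.get("job-1")   # cache miss, served from blob store
```

The win is that the scheduler's hot working set (in-flight task state) stays on fast local media, while the bulk of rarely-touched state lives in cheap object storage.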

The scheduler’s latency for scheduling new tasks is consistently under 800 microseconds on SSDs.

This is how schedulers with a proven track record in production, such as Borg and HashiCorp Nomad, have traditionally been designed. There are many ways a centralized scheduler can scale out beyond a single machine. One approach is parallel scheduling across shards of jobs and node pools, then linearizing and deduplicating conflicting placements during writes.
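A toy sketch of that scale-out approach, assuming hash-based sharding and a naive round-robin placement policy (not Borg's or Nomad's actual code):

```python
from concurrent.futures import ThreadPoolExecutor

# Shard jobs by hash, schedule each shard in parallel, then linearize
# the proposed placements and deduplicate conflicts in a single pass.
NUM_SHARDS = 4

def shard_of(job_id: str) -> int:
    return hash(job_id) % NUM_SHARDS

def schedule_shard(jobs, nodes):
    # Naive placement policy: round-robin jobs onto nodes.
    return [(job, nodes[i % len(nodes)]) for i, job in enumerate(jobs)]

def schedule(jobs, nodes):
    shards = [[] for _ in range(NUM_SHARDS)]
    for job in jobs:
        shards[shard_of(job)].append(job)

    # Parallel scheduling phase: each shard is scheduled independently.
    with ThreadPoolExecutor(NUM_SHARDS) as pool:
        proposals = pool.map(schedule_shard, shards, [nodes] * NUM_SHARDS)

    # Linearization phase: apply proposals in a fixed order, dropping
    # any job that was already placed (the conflict-resolution step).
    committed, seen = [], set()
    for placement in proposals:
        for job, node in placement:
            if job not in seen:
                seen.add(job)
                committed.append((job, node))
    return committed
```

Because conflict resolution happens in one serialized write path, the scheduler keeps a single consistent view of cluster state while the expensive placement work fans out across shards.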

Love DBOS and Hatchet! Cheering for you @jedberg and @abelanger :-)

Disclosure: I am the founder of Tensorlake, and I worked on HashiCorp Nomad and Apache Mesos in the past.



