As someone who built a pure python validation library[0] that's much faster than pydantic (~1.5x - 12x depending on the benchmark), I have to say that this whole focus on Rust seems premature. There's clearly a lot of room for pydantic to optimize its Python implementation.
Beyond that, Rust seems like a great fit for tooling (e.g. ruff), but for a library used at runtime, it seems a little odd for a validation library (which can expect to receive any kind of legal Python data) to also be constrained by a separate set of data types that are legal in Rust.
I agree that pydantic could have been faster while still being written in Python.
The argument for Rust:
1. If I'm going to rewrite, why not go the whole hog and do it in Rust, and thereby get a 20x improvement rather than 2x?
2. By using Rust we can add more customisation with virtually no performance impact; with Python that's not the case.
Of course we could make Pydantic faster by removing features, but that would be very disappointing for existing users.
As mentioned by other commenters, your comment about "constrained" does not apply.
> If I'm going to rewrite, why not go the whole hog and do it in Rust
We use black at work. One of the challenges with it is that it doesn't play very nicely with pandas, specifically pandas' abundant use of []. So we forked it and maintain a trivial patch that treats '[' the same as '.', and everybody's happy.
What was maybe 15 minutes of work for me to get everybody's buy-in on using a formatter would not have been so quick or easy if black had been written in Rust: either we'd maintain our own repo of binary wheels, or all our devs would need to include Rust in their build tooling.
I'm not invested in the argument one way or the other, just wanted to note that having the stack be accessible to easy modification by any user is itself a feature and one some people (including me in general, not so much in this particular case) derive a lot of benefit from.
This is so on point, but it's already the case with any package written in C.
I feel like there’s such a strong push towards having rust backends for python packages that you might have to learn it to become a decent python developer…and I think I might be ok with that.
For the price of having one dev on your team understand Rust, you can keep using Python as a top-performing language.
We’ve got Ruff, Pydantic and Rye (experimental?) just off the top of my head being written in rust. It seems like that’s where we’re heading as a community.
Because now you are on your own, watching the community from afar. You have taken the Drupal path, and the people who could help you are busy on their own Rust-paved paths, not getting help themselves.
Strange how it turned out that way; at the last convention everyone agreed that the best tool for Python is Rust... The silent majority was not there.
At least do it in Nim; a Python dev can catch up quickly. Optimization kills resilience.
I really appreciate your transparency: "I am the one writing this open source library, and I think it will be more fun to do it this way."
Have fun! I truly hope it pays the returns you hope it will as well.
Naysayers: you're welcome to fork the old python version. If the rust version is a nightmare for the ecosystem, I'm sure someone will do that.
While I agree that there are ways to write a faster validation library in python, there are also benefits to moving the logic to native code.
msgspec[1] is another parsing/validation library, written in C. It's on average 50-80x faster than pydantic for parsing and validating JSON [2]. This speedup is only possible because we make use of native code, letting us parse JSON directly and efficiently into the proper python types, removing any unnecessary allocations.
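To make that concrete, here's a minimal sketch using msgspec's public API (the `User` schema here is made up for illustration, not from any real benchmark):

```python
import msgspec

# Schema declared as a msgspec Struct (illustrative field names).
class User(msgspec.Struct):
    name: str
    age: int

# decode() parses the JSON bytes directly into a typed User instance,
# validating as it parses; no intermediate dict is allocated.
user = msgspec.json.decode(b'{"name": "alice", "age": 30}', type=User)
print(user)  # User(name='alice', age=30)
```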
It's my understanding that pydantic V2 currently doesn't do this (they still have some unnecessary intermediate allocations during parsing), but having the validation logic already in compiled code makes integrating this with the parser theoretically possible later on. With the logic in python this efficiency gain wouldn't be possible.
Definitely true. I've just soured on the POV that native code is the first thing one should reach for. I was surprised that it only took a few days of optimization to make my validation library significantly faster than pydantic, when pydantic was already largely compiled via Cython.
If you're interested in both efficiency and maintainability, I think you need to start by optimizing the language of origin. It seems to me that with pydantic, the choice has consistently been to jump to compilation (cython, now rust) without much attempt at optimizing within Python.
I'm not super-familiar with how things are being done on an issue-to-issue / line-to-line basis, but I see this Rust effort taking something like a year or more, when my intuition is that some simpler speedups in Python could have landed in a matter of days or weeks (which is not to say they would be of the same magnitude of performance gains).
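To illustrate the kind of pure-Python speedup I mean (an illustrative sketch, not koda-validate's or pydantic's actual internals): do the schema interpretation once up front and return a closure, so the per-call hot path stays tiny.

```python
from typing import Any, Callable

def compile_int_validator(min_value: int) -> Callable[[Any], int]:
    # All schema interpretation happens once, here, at "compile" time.
    def validate(value: Any) -> int:
        # Hot path: two cheap checks per call, no dict lookups,
        # no re-reading the schema.
        if type(value) is int and value >= min_value:
            return value
        raise ValueError(f"expected int >= {min_value}, got {value!r}")
    return validate

validate_age = compile_int_validator(0)  # pay the setup cost once
validate_age(30)                         # repeated calls stay fast
```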
Two things may preclude optimization in pure Python when producing a library for the general public. Having a nice, ergonomic interface is one; keeping things backwards-compatible is another.
I also wrote a pure python validation library [0] that is much faster than pydantic. It also handles unions correctly (unlike pydantic).
Pydantic v2 is indeed much faster than any pure Python implementation I've seen, but it also introduces some bugs. And on PyPy it is as slow as it ever was, because it falls back to Python code.
I wrote mine because nothing else existed at the time, but whenever I've had to use pydantic I've found it quirky, with strange opinions about types that aren't shared by type checkers. Using it with mypy (despite the extension) is neither easy nor especially useful.
Eh, smart unions… you're welcome for that idea, which comes from my project :)
Of course there was an incompatible API change there: the smart-union parameter got removed, and it's now impossible to obtain the old (and completely wrong) behaviour. I'm sure someone relies on that.
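For anyone who hasn't hit it, a minimal sketch of the v1 behaviour in question (left-to-right coercion through the union, unless the v1 smart_union config flag is enabled):

```python
from typing import Union
from pydantic import BaseModel  # pydantic v1 semantics

class M(BaseModel):
    x: Union[int, str]

# v1 tries union members left to right, coercing as it goes, so the
# string "1" comes back as the int 1 rather than staying a str.
print(M(x="1").x)  # 1  (an int, unless Config.smart_union = True)
```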
> to also be constrained by a separate set of data types that are legal in Rust.
This isn't really how writing Rust/Python interop works. You tend to have opaque handles you call Python methods on. Here's a decent example I found skimming the code.
> it seems a little odd for a validation library (which can expect to receive any kind of legal Python data) to also be constrained by a separate set of data types that are legal in Rust.
That... makes no sense? Rust can interact with Python objects, there is no "constrained".
In the sense of using escape hatches back to Python, that's true. My main point is that, from a complexity standpoint, why do Python -> Rust -> Python when there's still a lot of room to run in just Python?
Personally, I think it's great to have many projects solving the same problem and pushing each other further. Although the differences between the faster validation libraries are small, the older ones were quite slow. This will save unnecessary CPU cycles, making it eco-friendly. And now the bar will be even higher with a Rust version, which is really great.
[0] Maat is 2.5 times faster than Pydantic on their own benchmark, as stated in its README.
[0]: https://github.com/keithasaurus/koda-validate