
This is simply not true. So sad.

We removed benchmarks from the docs entirely once the rule of "only show benchmarks against comparably popular or more popular libraries" no longer made sense; maintaining benchmarks against many hobby packages was obviously going to become burdensome.

Please show me a sensible benchmark where your library is faster than pydantic.



> This is simply not true. So sad.

Ah, sorry. So, purely by coincidence, pydantic happened to be slower than every other library that had a PR to be added to the benchmark, but that was not the reason those PRs were rejected.

Better now?

> Please show me a sensible benchmark where your library is faster than pydantic.

    $ python3 perftest/realistic\ union\ of\ objects\ as\ namedtuple.py --pydantic
    (1.2192879340145737, 1.2595951650291681)
    $ python3 perftest/realistic\ union\ of\ objects\ as\ namedtuple.py --typedload
    (1.0874736839905381, 1.114147917018272)
I'm not a math genius, but I'm fairly sure that 1.08 is less than 1.21.
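
For anyone curious what these numbers measure, below is a minimal sketch of such a two-way micro-benchmark, assuming a tagged union of NamedTuples as the payload. It is not the actual perftest script from the typedload repository; the data shape, the payload size, and the meaning of the two printed floats are all guesses. typedload.load and pydantic v2's TypeAdapter.validate_python are real entry points of the two libraries.

    # Hypothetical sketch; the real perftest script may differ in data
    # shape, size, and timing scheme.
    import sys
    import timeit
    from typing import List, Literal, NamedTuple, Union

    import typedload                  # real API: typedload.load(value, type_)
    from pydantic import TypeAdapter  # real API: pydantic v2 generic validator

    class Point(NamedTuple):
        kind: Literal['point']        # tag so the union can be discriminated
        x: int
        y: int

    class Label(NamedTuple):
        kind: Literal['label']
        text: str

    Target = List[Union[Point, Label]]

    # A "realistic" payload: a long mixed list of tagged dicts (size is a guess).
    data = [
        {'kind': 'point', 'x': i, 'y': -i} if i % 2
        else {'kind': 'label', 'text': str(i)}
        for i in range(500_000)
    ]

    if '--pydantic' in sys.argv:
        adapter = TypeAdapter(Target)
        def run():
            adapter.validate_python(data)
    else:  # e.g. --typedload
        def run():
            typedload.load(data, Target)

    # Report the two best wall-clock times of five runs; whether the real
    # script prints (min, avg) or two other statistics is an assumption.
    times = sorted(timeit.repeat(run, number=1, repeat=5))
    print((times[0], times[1]))

As the rest of the thread shows, which library wins a benchmark like this can depend on the hardware it runs on.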

So much for your invitation to be gentle and cooperative :D (https://github.com/ltworf/typedload/pull/422)


¯\_(ツ)_/¯ - I get different results; see the PR.


It seems you're running on Apple hardware. I really can't reproduce this since I don't own any, and unless I get it as a gift I never will.

Anyway, hardly any server code runs on Apple hardware, so winning benchmarks only on Apple isn't that important, I think.



