
The modern way is to use @cache which is just lru_cache(maxsize=None).
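A minimal sketch of the equivalence: `@cache` (added in Python 3.9) is just an unbounded cache, the same as `lru_cache(maxsize=None)`.

```python
from functools import cache

@cache  # equivalent to @lru_cache(maxsize=None): unbounded, no eviction
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed in linear time thanks to memoization
```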



lru_cache has the benefit of being explicit about the replacement policy. So I find it more appropriate in cases where I want a size limit on the cache, which, for me, is almost always.

I’m not sure what use cases you see caches in, but any long-running process should likely have a size limit on its cache.
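A small sketch of the bounded behavior: with a `maxsize`, `lru_cache` evicts the least recently used entry once the cache is full, so memory stays bounded in a long-running process.

```python
from functools import lru_cache

@lru_cache(maxsize=2)  # keep only the 2 most recently used results
def square(n: int) -> int:
    return n * n

square(1)
square(2)
square(3)  # cache is full, so the entry for 1 (least recently used) is evicted

print(square.cache_info().currsize)  # 2: the cache never grows past maxsize
```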


It doesn't sit right with me that they still don't offer an option to deep-copy the results. Using it with a function that returns a mutable object means that the "cached" value can actually be modified.


That depends on the semantics. If the function returns the Person object with a particular name (obtained from an expensive DB query), then you want the mutated Person returned, not the now-stale value that was returned on the previous call to the function.


Because deep copying is always a bug. If your type semantically requires a deep copy, implement plain copy such that it actually does a deep copy.
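A sketch of what that suggestion might look like (the `Config` class and its nested-dict field are hypothetical): a type whose semantics require independent copies can implement `__copy__` so that a plain `copy.copy()` already does the deep copy.

```python
import copy

class Config:
    """Hypothetical type whose plain copy is, by design, a deep copy."""

    def __init__(self, options: dict):
        self.options = options

    def __copy__(self):
        # copy.copy() on this type deliberately deep-copies, because
        # sharing the nested dict would violate the type's semantics.
        return Config(copy.deepcopy(self.options))

original = Config({"retries": {"max": 3}})
clone = copy.copy(original)
clone.options["retries"]["max"] = 5

print(original.options["retries"]["max"])  # still 3: the copies are independent
```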



