PEP 412 makes __dict__s more memory efficient than they were before, but not more efficient than no __dict__, which is the point of __slots__. The following program demonstrates the difference. Note that it lowers the available address space to 1GB so that memory exhaustion occurs sooner, and thus only works on UNIX-like systems that provide the resource module.
import resource
import sys

class WithoutSlots:
    def __init__(self, a, b):
        self.a = a
        self.b = b

class WithSlots:
    __slots__ = ('a', 'b')

    def __init__(self, a, b):
        self.a = a
        self.b = b

# Cap the address space at 1 GB so MemoryError is raised sooner.
resource.setrlimit(resource.RLIMIT_AS, (1024 ** 3, 1024 ** 3))
cls = WithSlots if sys.argv[1:] == ['slots'] else WithoutSlots
count, instances = 0, []
while True:
    try:
        instances.append(cls(1, 2))
    except MemoryError:
        break
count = len(instances)
del instances  # free the list so the print below can allocate
print(cls, count)
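For a quicker sanity check that doesn't require exhausting memory: a slotted instance simply has no per-instance __dict__ at all, which is where the savings come from. A minimal sketch (class names are mine, not from the program above):

```python
class NoSlots:
    def __init__(self):
        self.a, self.b = 1, 2

class Slots:
    __slots__ = ('a', 'b')

    def __init__(self):
        self.a, self.b = 1, 2

# Attribute access works the same either way...
assert Slots().a == NoSlots().a == 1

# ...but only the unslotted instance carries a __dict__.
assert hasattr(NoSlots(), '__dict__')
assert not hasattr(Slots(), '__dict__')
```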
That's a silly example. If you're making billions of integers, use NumPy. If it's just one pass, use a generator. If you're making lots of objects with more interesting attributes, the attribute storage will overwhelm the difference the instance dicts make.
My point was not that __slots__ does nothing, but that there are more important things to worry about.
Suppose I want to run algorithms on large arrays of 2D points while maximizing readability. I want to store the x and y coordinates using Python integers so I don't have to worry about overflow errors, but I expect that most of the time the numbers will be small and this is "just in case".
I claim that in this case, __slots__ is exactly the right thing to worry about.
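Something like this, I mean (the class name and operations are just an illustration, not anything from upthread): millions of small, uniform objects with exactly two attributes, where the per-instance __dict__ would otherwise dominate the memory footprint.

```python
class Point:
    """A 2D point with arbitrary-precision integer coordinates."""
    __slots__ = ('x', 'y')

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __add__(self, other):
        # Python ints never overflow, so this is safe "just in case".
        return Point(self.x + other.x, self.y + other.y)

# A large array of points: only the slot values are stored per instance.
points = [Point(i, 2 * i) for i in range(1_000_000)]
total = Point(0, 0)
for p in points[:10]:
    total = total + p
print(total.x, total.y)  # 45 90
```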
It's hard for me to imagine that situation coming up, but yes, __slots__ does indeed have a purpose.
BTW, have you considered using the complex type to handle that for you? It's 2D, and ints should be safe in float representation (exact up to 2**53). If it overflows, it'll crash nicely rather than silently giving wrong answers.
https://morepypy.blogspot.com/2010/11/efficiently-implementi...