I would like to create a class that can be used in `in` statements, where the membership condition is passed to the object in __init__. An example:
class Set:
    def __init__(self, contains):
        self.__contains__ = contains  # or setattr; doesn't matter

top = Set(lambda _: True)
bottom = Set(lambda _: False)
The problem with this is that 3 in top raises TypeError: argument of type 'Set' is not iterable, even though top.__contains__(3) returns True as expected.
What’s more, if I modify the code as such:
class Set:
    def __init__(self, contains):
        self.__contains__ = contains

    def __contains__(self, x):
        return False

top = Set(lambda _: True)
then 3 in top returns False, whereas top.__contains__(3) still returns True, as expected.
What is happening here? I am on Python 3.9.2.
(Note: the same happens with other methods that are part of the data model, such as __gt__, __eq__ , etc.)
Solution:
That’s because magic methods are looked up on the class, not the instance. The interpreter circumvents the usual attribute-getting mechanisms when performing "overloadable" operations.
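To make the lookup rule concrete, here is a minimal sketch reproducing the question's first example: the instance attribute is callable directly, but the `in` operator effectively performs type(top).__contains__(top, 3), which does not exist on the class:

```python
class Set:
    def __init__(self, contains):
        # Stored on the instance, so the interpreter never sees it
        # when evaluating the `in` operator.
        self.__contains__ = contains

top = Set(lambda _: True)

# Calling the instance attribute directly works fine.
print(top.__contains__(3))  # True

# But `in` looks the method up on the type, bypassing the instance,
# so Python falls back to iteration and fails.
try:
    3 in top
except TypeError as e:
    print(e)  # argument of type 'Set' is not iterable
```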
It seems to be this way because of how it was originally implemented in CPython, for example because of how type slots work (not the __slots__ slots; that's a different thing): how +, *, or any other operator behaves on a value is decided by its class, not on a per-instance basis.
There's a performance benefit to this: looking up a dunder method on the instance could involve a dictionary lookup or, worse, dynamic computation via __getattr__/__getattribute__. However, I don't know whether this is the main reason it is this way.
I wasn’t able to find a detailed written description, but there’s a talk by Armin Ronacher on YouTube going quite in depth on this.