It's not totally crazy, in that I see it all the time, but it's one of the two most common things I've found that make Python code difficult to reason about.[0] After all, if you open a DB connection in __init__() -- how do you close it? This isn't C++, where we can tie cleanup to a destructor. I've run into so many Python codebases that do this and leak tons of unclosed connections as a result.
A much cleaner way (IMO) to do this is to use context managers with explicit lifecycles, so something like this:

    with create_db_client('localhost', 5432) as db_client:  # port 3306 if you're a degenerate
        db_client.do_thing_that_requires_connection(...)
This gives you type safety and connection safety, has minimal boilerplate for client code, and ensures the connection is created and disposed of properly. Obviously in larger codebases there are more nuances, and you might want to implement a `typing.Protocol` for `_DbClient` so you can pass it around, but IMO the general idea is much better than initializing a connection to a DB, ZeroMQ socket, gRPC client, etc. in __init__.
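A minimal sketch of what `create_db_client` might look like, built on `contextlib.contextmanager` (the `_DbClient` internals here are hypothetical, and sqlite3 stands in for a real networked driver):

```python
import contextlib
import sqlite3  # stand-in for a real DB driver


class _DbClient:
    """Hypothetical wrapper: operations are only reachable through an open connection."""

    def __init__(self, conn: sqlite3.Connection) -> None:
        self._conn = conn

    def do_thing_that_requires_connection(self, query: str):
        return self._conn.execute(query).fetchall()


@contextlib.contextmanager
def create_db_client(host: str, port: int):
    # A real implementation would use host/port; sqlite3 is just illustrative.
    conn = sqlite3.connect(":memory:")
    try:
        yield _DbClient(conn)
    finally:
        conn.close()  # runs even if the body raises


with create_db_client("localhost", 5432) as db_client:
    rows = db_client.do_thing_that_requires_connection("SELECT 1")
```

Client code never sees the raw connection, so there's no way to forget the `close()` call: the `finally` block owns it.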
[0] The second is performing "heavy", potentially failing operations outside of functions and classes, which can cause failures when importing modules.
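To illustrate that footnote, here's a hedged sketch: the commented-out line is the anti-pattern (work at import time), and `get_config` is a hypothetical lazy replacement using `functools.lru_cache` so the heavy step runs on first call instead:

```python
# Anti-pattern: merely importing this module would try to connect, so an
# unreachable DB breaks every importer at import time:
#
#     conn = connect_db("localhost", 5432)  # runs on `import mymodule`
#
# Safer: defer the work until it's actually needed.
import functools


@functools.lru_cache(maxsize=1)
def get_config() -> dict:
    # Placeholder for a "heavy", potentially failing operation (opening a
    # connection, reading a large file); it now runs on first call and is
    # cached for subsequent callers.
    return {"host": "localhost", "port": 5432}
```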
I get the points everyone is making and they make sense, but sometimes you need persistent connections. Opening and closing connections constantly like that can cause issues.
There's nothing about the contextmanager approach that says you're opening and closing any more or less frequently than an __init__ approach with a separate `close()` method. You're just statically ensuring that 1) the close method gets called, and 2) database operations can only happen on an open connection (or, notably, a connection that we expect to be open, since something external to the system may have closed it in the meantime).
Besides, persistent connections are a bit orthogonal, since in practice you should be using a connection pool, which most Python DB libs provide out of the box. In either case the semantics are the same: open/close becomes lease from/return to pool.
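A toy sketch of those lease/return semantics (the `ConnectionPool` here is hypothetical, built on `queue.Queue` with sqlite3 as a stand-in driver; real libraries ship their own, more robust pools):

```python
import contextlib
import queue
import sqlite3  # stand-in for a real driver


class ConnectionPool:
    """Toy pool: connections persist for the pool's lifetime; clients lease and return them."""

    def __init__(self, size: int) -> None:
        self._idle: queue.Queue = queue.Queue()
        for _ in range(size):
            self._idle.put(sqlite3.connect(":memory:", check_same_thread=False))

    @contextlib.contextmanager
    def connection(self):
        conn = self._idle.get()  # lease from pool (blocks if exhausted)
        try:
            yield conn
        finally:
            self._idle.put(conn)  # return to pool; the connection stays open


pool = ConnectionPool(size=2)
with pool.connection() as conn:
    conn.execute("SELECT 1")
```

Same context-manager shape as before, but `__exit__` returns the connection instead of closing it, so the persistence concern and the lifecycle concern don't conflict.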
Nope, do not ever do this; it will not do what you want. You have no idea _when_ it will be called. It can get called at shutdown, when the entire runtime environment is in the process of being torn down, meaning that nothing actually works anymore.
C++ destructors are deterministic. Relying on a nondeterministic GC call to run __del__ is not good code.
Also worth noting that the Python spec does not say __del__ must be called, only that it may be called after all references are deleted. So, no, you can't tie it to __del__.
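A small CPython-specific demonstration of that nondeterminism (the `Conn` class is illustrative): put the object in a reference cycle and `del` no longer triggers __del__ at all until a collector pass happens to run.

```python
import gc


class Conn:
    closed = False

    def __del__(self):
        Conn.closed = True  # pretend this closes a DB connection


gc.disable()          # make the timing visible: no automatic collections
c = Conn()
c.self_ref = c        # reference cycle keeps the refcount nonzero
del c
# __del__ has NOT run: the cycle keeps the object alive...
assert Conn.closed is False
gc.enable()
gc.collect()          # ...until some collector pass finally finds it
assert Conn.closed is True
```

In a real program you don't control when (or on some interpreters, whether) that pass happens, which is exactly why cleanup belongs in a context manager rather than __del__.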