I ask this with the presumption that there is an answer, but the linked web page does not provide it: Why create this library?
I won't deny I'm also asking this because I don't personally see a reason... after all, I wouldn't ask if it were obvious to me. But I really don't understand what this is good for: binding Erlang to a Python library that is itself bound to underlying C and assembler code seems to add nothing but overhead to accessing the Python, with no corresponding advantage I can see; for instance, it isn't obvious that you gain anything from Erlang's multiprocessor capabilities this way. It seems like, even on Erlang's terms, you'd be better off writing a Python program that does your task with numpy or scipy and having Erlang pipe whatever data it may be in possession of to that Python process in some more conventional fashion.
But I'm interested in whatever answer there may be. (Seriously.)
But then, you may be thinking: I like supervision trees. I have long-running processes that I want to be managed per the rules I establish. I want to run lots of jobs in parallel on my 64-core box. I want to run jobs in parallel over the network on 64 of my 64-core boxes. Python’s the right tool for the jobs, but I wish I could manage them with Erlang.
If you're going to be handling large amounts of data, then Disco is what you'd want to use, not this.
However, if you're spending your day in an LFE REPL and want to be able to parallelize computation out to multiple Python instances, then this tool will be for you ("will be" because parallelization is in the queue: https://github.com/lfex/py/issues/38).
As things stand right now (without parallelization), this library means that Erlang, Elixir, LFE, Joxa, etc., hackers don't have to context switch out of their preferred mode into another language, but can call into Python from the comfort of their regular daily routine. (I know Erlang hackers who refuse to fire up a Python interpreter...)
In other words, this is very much like an IPython for the Erlang world (where the ZeroMQ messaging architecture of IPython isn't needed, since that all comes for free in Erlang).
Also, CloudI (http://cloudi.org) provides supervisor functionality for Python services, to help keep services written in Python fault-tolerant. There have been no problems handling large quantities of data there.
We have a team of data scientists that pretty much exclusively use Python and a team of server devs that nearly exclusively use Erlang.
I could see a use for this, though I will say we've pretty much made separate services and these services communicate (in order of urgency) - via HTTP, Rabbit, or by rolling HBase tables on some schedule.
Because of that decoupling, this seems less necessary, but I could certainly see a place for it.