
I haven't tried Rust yet, but I've been building libraries in Ruby, Node, and Python that call into a shared Go core, and my experience has been that the best approach is to simply compile static executable binaries for every platform, then call out to them in each language via stdin/stdout. I tried cgo, .so files, and the like, but this was a lot more trouble and had issues on both Windows and Alpine-flavored Linuxes.

Is there some issue with this approach that I'm missing? Is the additional process overhead really enough that it's worth bending over backwards to avoid it?
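
To make the pattern concrete, here's a rough sketch of the one-shot approach. It's written in Go purely for illustration (in my setup the callers are Ruby/Node/Python), and the `mycore` binary name and its JSON-over-stdin request are made up:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // One process per call: spawn the static core binary, feed it a
        // request on stdin, and read the reply from stdout.
        cmd := exec.Command("./mycore")
        cmd.Stdin = bytes.NewBufferString(`{"op":"parse","input":"hello"}`)

        var out bytes.Buffer
        cmd.Stdout = &out

        if err := cmd.Run(); err != nil {
            fmt.Println("core binary failed:", err)
            return
        }
        fmt.Print("core binary replied: ", out.String())
    }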




As others have mentioned, it's too slow for what the author was trying to achieve. He had a section where he ruled out cgo for performance reasons, so if that's too slow, spawning a child process will be much slower. That may not matter for your use case, but the author is clearly aiming for almost no overhead considering how easy it is to use cgo to call into Rust.
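
For a sense of scale, calling into Rust through cgo looks roughly like this on the Go side. This is a sketch only: the `mycore` library name and the `add_numbers` symbol are invented, and the Rust crate is assumed to export the function with #[no_mangle] pub extern "C":

    package main

    /*
    #cgo LDFLAGS: -L./target/release -lmycore
    #include <stdint.h>

    // Assumed to be exported by the Rust crate as:
    //   #[no_mangle] pub extern "C" fn add_numbers(a: i64, b: i64) -> i64
    int64_t add_numbers(int64_t a, int64_t b);
    */
    import "C"

    import "fmt"

    func main() {
        // The per-call overhead of cgo is a small fixed cost, far below
        // what fork/exec-ing a child process per request would cost.
        fmt.Println(int64(C.add_numbers(2, 3)))
    }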

The thing that should probably be said is that the difficulty is all on the Go side. Rust doesn't have any of the clumsiness that Go has when interacting with other languages. It's fully fluent in the lingua franca of FFI, the C ABI. If you were integrating your Ruby, Node, or Python code with Rust instead of Go, there are nice libraries [1][2][3] that make it simple, easy, and very low-overhead.

For users of these scripting languages, Rust is a nice tool to keep in your back pocket to pull out in the rare cases that you're not getting the performance you need. It means being able to choose your tools based solely on developer ergonomics and existing team knowledge, knowing that in the rare cases that you do need to do something computationally intensive, you can drop to Rust, push everything through a Rayon parallel iterator, write the performance-sensitive logic, and push the result back. It's also really useful in environments like Lambda/Cloud Functions that only support those scripting languages, since those environments tend to charge based on memory and CPU time, and Rust makes it easy to get by with a minimum of both.

[1] https://github.com/tildeio/helix (Ruby)

[2] https://github.com/neon-bindings/neon (Node)

[3] https://github.com/pyo3/pyo3 (Python)


Well, the stated goal was to use Rust for small hotspots. These hotspots could take very little time each but be called a lot, so the overhead of creating a process or communicating with another process can add up quickly (think of a function called in a tight inner loop).


That sounds like it could be a nightmare for error handling! How do you work with that? And aren't a great many performance benefits lost when you restrict Go to stdout?

I would think it would be better overall to just create a local http server in Go and use that instead. Or sockets if you're feeling up to it.
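
Something along these lines is what I had in mind -- a rough sketch, where the /process path and port 8123 are arbitrary placeholders:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // A long-running Go process answering local HTTP requests.
        http.HandleFunc("/process", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprint(w, `{"ok":true}`)
        })
        log.Fatal(http.ListenAndServe("127.0.0.1:8123", nil))
    }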


The only performance issue with stdout is when you write to a console and the console has to render it. For example, printing 100000 lines of `Hello World!` to stdout takes 2.134 seconds when output to a console on my computer, 43 milliseconds when redirected to `nul` (the Windows equivalent of > /dev/null), and 276 milliseconds when redirected to a file. So saying stdout is a performance issue is just FUD. Now, there are still issues with error handling, but those can be solved by implementing a message protocol over stdin/stdout. There's also a cost associated with launching the binary, but the same protocol lets one running instance serve multiple requests, so that cost is only paid once.
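
For example, a long-running worker speaking newline-delimited JSON over stdin/stdout might look something like this (the framing and field names here are placeholders, not a real protocol):

    package main

    import (
        "bufio"
        "encoding/json"
        "os"
        "strings"
    )

    // One JSON object per line in each direction.
    type request struct {
        Op    string `json:"op"`
        Input string `json:"input"`
    }

    type response struct {
        Ok     bool   `json:"ok"`
        Result string `json:"result,omitempty"`
        Error  string `json:"error,omitempty"`
    }

    func main() {
        in := bufio.NewScanner(os.Stdin)
        out := json.NewEncoder(os.Stdout)

        // A single running instance serves many requests, so the cost of
        // launching the binary is paid only once.
        for in.Scan() {
            var req request
            if err := json.Unmarshal(in.Bytes(), &req); err != nil {
                out.Encode(response{Ok: false, Error: err.Error()})
                continue
            }
            // Placeholder "work"; errors ride back in the same envelope.
            out.Encode(response{Ok: true, Result: strings.ToUpper(req.Input)})
        }
    }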


The executable has really simple output--it either works and outputs json, or doesn't and outputs nothing--so there's no difficulty with error handling. I guess I can understand wanting tighter integration for more complex scenarios though... an http server is an interesting idea, but could you run into issues with ports being restricted on production servers?

I'm not seeing any performance issues with stdout, but I'm also not writing much data.


> but could you run into issues with ports being restricted on production servers

Sorry, what? Just make the port configurable?


It was a question (note the question mark)... I don't see the need for snark.

Anyway, for my purposes, this wouldn't work, since the executable is embedded in libraries that are meant to run anywhere without any configuration. But yeah I could see that being fine under other circumstances I guess.


No snark. Just puzzled. You can still just make it configurable. Or just pick a random port and communicate it to your child process.
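
The random-port version is only a few lines in Go -- listening on port 0 makes the OS pick a free port, which the process can then print for the parent to read (sketch only; the handler is a placeholder):

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/http"
    )

    func main() {
        // Port 0 asks the OS for any free port, sidestepping whatever
        // ports are restricted or already taken on the host.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            log.Fatal(err)
        }
        // Print the chosen port so the parent can read it from stdout
        // and know where to send requests.
        fmt.Println(ln.Addr().(*net.TCPAddr).Port)

        log.Fatal(http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprint(w, "ok")
        })))
    }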


You might be interested in this plugin framework for Go that communicates over stdin/stdout: https://github.com/natefinch/pie



