
I've been working on Red Candle, a Ruby gem that runs LLMs (Llama, Mistral, Gemma, Phi) directly in your Ruby process through Rust bindings built on the candle crate from Hugging Face. No Python, no separate server process - just FFI with Metal/CUDA acceleration.

It's been useful for adding AI features to Rails apps without the operational overhead of running and deploying a separate inference service. Would love feedback from anyone working with LLMs in Ruby.

GitHub: https://github.com/assaydepot/red-candle
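For anyone curious what in-process inference looks like, here's a rough sketch. The method names (`Candle::LLM.from_pretrained`, `generate`) and the model ID are assumptions for illustration, not a verified API - check the README for the real interface. It also guards the require so the script degrades gracefully when the gem isn't installed:

```ruby
# Hypothetical usage sketch. Candle::LLM.from_pretrained and #generate are
# assumed method names; the TinyLlama model ID is just an example. See the
# red-candle README for the actual API.
llm = nil
begin
  require "candle" # red-candle's require name

  # Downloads the model from Hugging Face on first use, then runs it
  # in-process on Metal/CUDA when available.
  llm = Candle::LLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
  puts llm.generate("Write a haiku about Ruby")
rescue LoadError
  # No external service to fall back to: the model runs (or doesn't) right here.
  puts "red-candle not installed; run: gem install red-candle"
end
```

Because the model lives in the Ruby process, there's no HTTP round trip or serialization between your app and the model - the trade-off being that your app's memory footprint now includes the weights.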


