
Ruby and Rails really seem to be going through a renaissance lately.

- The pickaxe book, Programming Ruby, has a new edition this year covering Ruby 3.3

- The Rails Way is being updated for Rails 8, is available in pre-release, and will have two companion books

- A new title, Rails Scales, is being published by PragProg and is available in pre-release now

- YJIT has made Ruby fast. Like, _FAST_

- Rails has a bunch of new features that cover the "missing middle" levels of success

- Ruby has a bunch of new and new-ish features like Data (an immutable Struct), pattern matching, Fibers and Ractors, and more (a quick sketch of the first two follows below).
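
For anyone who hasn't touched Ruby in a while, a minimal sketch of two of those features, Data and pattern matching (assumes Ruby 3.2+; the Point class is just made up for illustration):

    # Data defines a small, immutable value object.
    Point = Data.define(:x, :y)

    point = Point.new(x: 1, y: 2)
    point.frozen?     # => true
    point.with(x: 0)  # returns a new Point; the original is untouched

    # Pattern matching can destructure it.
    case point
    in Point(x: 0, y:)
      puts "on the y axis at #{y}"
    in Point(x:, y:)
      puts "at (#{x}, #{y})"
    end
    # prints "at (1, 2)"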

I had largely moved on from Ruby into the world of front-end applications by 2013, but now I'm excited about the prospects of diving back in.


I'm optimistic about Ruby's async story with the work Samuel Williams has been doing. https://github.com/socketry/falcon is the tip of the iceberg; it's built on top of https://github.com/socketry/protocol-http2 and a lot of other repos at https://github.com/socketry.

It's inspiring others in the community to think of interesting applications, like using the HTML slot API to stream responses into HTML without JS. https://x.com/joeldrapper/status/1841984952407110037

I know other frameworks have had asynchronous IO support forever, but it's finally coming to Ruby in a form that seems like it will stick around and be well supported.
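
For a taste of what that stack looks like from application code, here's a minimal sketch using the async and async-http gems from socketry (my reading of the API from those repos; the URLs are placeholders):

    require "async"
    require "async/http/internet"

    Async do
      internet = Async::HTTP::Internet.new

      # Each inner Async block runs in its own fiber; both requests
      # overlap on a single thread via the fiber scheduler.
      tasks = ["https://example.com", "https://example.org"].map do |url|
        Async { internet.get(url).read }
      end

      puts tasks.map(&:wait).map(&:bytesize).inspect
    ensure
      internet&.close
    end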


My only concern is that none of his work is being picked up by Rails. As a matter of fact, it isn't just SW's work; for the whole async story on Ruby, it seems neither Fiber nor Ractor has reached any mass adoption.

So first it's a bit annoying to read this when I busted my ass for several weeks to refactor Active Record to make it more compatible with the Fiber use case in 7.2.

But also there is very little benefit to it for the vast majority of Rails applications out there.

Unless you are doing micro-services or something like that, your typical Rails application isn't IO-heavy enough to run more than 2, perhaps 3, threads before contending.

So the overwhelming majority of Rails applications wouldn't see any benefit from being served via falcon, quite the opposite.

Async is great for enabling use cases where people previously would have had to reach for Node, but Rails will never be a good solution for that. If you want to do some sort of light proxy or websocket notification thing, Rails isn't a good fit; you want something much lighter.


I will be curious to see what JRuby's fiber implementation is like in their upcoming version; currently I work on JRuby Rails deployments with dozens of threads across hundreds of nodes. There are definitely some learning curves and tuning necessary when you have 40 threads in contention over a particular shared resource vs 3-5 on regular CRuby.

Wouldn't fibers work well for ActionCable applications? Lots of connections being kept alive, with sparse activity on them?

Yes. But just for the Action Cable parts, as in you'd deploy Action Cable standalone with falcon, and then keep Puma or whatever for the purely transactional requests.

If you don't, you'll notice that your Action Cable latency will be all over the place when a transactional request comes in.

It's acceptable for "hobby" use, but if you try to provide a good user experience with reasonable latency, you can't just colocate Action Cable and the rest of Rails in a single process.
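
For anyone wanting to try that split, a minimal sketch of a standalone cable rackup file along the lines of what the Rails guides describe (the path and names are just placeholders):

    # cable/config.ru -- serve only Action Cable from its own process
    # (e.g. with falcon), and keep Puma for the transactional requests.
    require_relative "../config/environment"
    Rails.application.eager_load!

    run ActionCable.server

You'd then point config.action_cable.url at that separate endpoint so the main app's pages open their websocket connections against it.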


Wow this sounds very smart.

Do you have any tutorial somewhere on how to use Falcon for this? I am getting some strange errors that would probably be covered in a basic tutorial already.


There are various PRs where Fiber adapters are making their way into the Rails stack. Rails 8 added a ton of support for Fibers, with the exception of Action Cable. There's a PR open for that, which I assume will land sometime soon.

Rails has been really slow to pick up async and HTTP/2. They don't know it yet, but Falcon and all the async libraries Samuel is working on will probably be a huge thing 1-2 years out, when more people find out it means less infra has to be deployed to production environments. Right now folks are happy to deploy without Redis™ with the Solid stack, but a lot of that won't be needed if proper async support is baked into Rails.

There have been a lot of Fiber features committed into the Ruby language that I barely understand, but they have improved all of these async libraries over the past few years. That's finally starting to bear some fruit for people like myself who don't really understand all of those details, but understand the benefits.

It will happen, but these things tend to play out more slowly in Ruby, which is a feature or a bug depending on how you look at it.


> They don't know it yet,

This is so condescending... We know perfectly well about the pros and cons of the fiber scheduler.

It's a very useful stack, but people, and you in particular, really need to stop selling it like it's the best thing since sliced bread. Async isn't a good fit for everything, and it's certainly not a good fit for the overwhelming majority of Rails applications.


I've heard from lots of folks in the Rails community that getting http/2 and streaming into Rails has been a slow and tedious process. I'm not saying it's going to be "a good fit for everything"; what I am saying is that it will be nice when we can run IO-bound workloads in Rails without feeling like a fish out of water.

"it's certainly not a good fit for the overwhelming majority of Rails applications".

In my experience, most web applications are terminating HTTP connections from clients, then reaching out over a network to database servers, etc. to do work. This is very much IO-bound, so I'm not sure how this wouldn't be a good fit for most Rails applications.


> getting http/2 and streaming into Rails has been a slow and tedious process

Bringing HTTP/2 all the way to the Rails process doesn't bring anything to the table. You're much better off terminating HTTP/2 or 3 with SSL at the LB.

> terminating HTTP connections from clients, then reaching out over a network to database servers, etc. to do work. This is very much IO-bound

It absolutely isn't unless your data access is really messed up (badly indexed queries or tons of N+1).

Even if you are just serializing the data you got from the DB down into JSON with little to no transformation, you'll likely end up spending more than 50% of the time doing CPU work.

Look at all the reports of YJIT speeding up Rails applications by 15 to 30%. If Rails apps were truly IO-bound like people claim, YJIT would have nothing to speed up.

Even if your app is 90% IO, you can run Puma with 10 threads and already suffer from contention. Async makes sense when you'd need more than a dozen threads or so; before that it doesn't make a substantial difference. Like it would be great to use for Action Cable, but that's it.


Are there truly people out there that are terminating HTTP/2 at the application level? That's really quite surprising for anything serving production traffic.

> In my experience, most web applications are terminating HTTP connections from clients, then reaching out over a network to database servers, etc. to do work. This is very much IO-bound, so I'm not sure how this wouldn't be a good fit for most Rails applications.

Most Rails applications are deployed using a multi-threaded application server such as Puma. A thread processes a single request, and when it encounters IO (or calls out to a C function) it gives up its hold of the GVL so another thread can run. You can use 100% of your resources this way without the added complexity of parallelism within a single request.
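
A rough illustration of that model with plain threads (not Puma itself; the URLs are just placeholders):

    require "net/http"

    urls = %w[https://example.com https://example.org https://example.net]

    threads = urls.map do |url|
      Thread.new do
        # Net::HTTP blocks on the socket here; the thread releases the GVL,
        # so the other threads keep running Ruby code in the meantime.
        Net::HTTP.get(URI(url)).bytesize
      end
    end

    puts threads.map(&:value).inspect  # body sizes, fetched concurrently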


That's not completely accurate. Rails 7.2 added fiber support.

Only Action Cable still doesn't fully support Falcon using HTTP/2, but that's coming soon as well.


My assumption is that’s due to the use case benefits for it.

More concurrency is not always ideal, especially if you're not in an environment that guarantees you won't have negative impacts or runaway processes (BEAM languages, etc).

Rails projects are typically so hardwired to use a background queue like Sidekiq that it becomes very natural to delegate most use cases to the queue.
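
A minimal sketch of that "hand it to the queue" habit (the class, model, and client names are made up; assumes Sidekiq 6.3+, where the include is Sidekiq::Job):

    class SyncExternalApiJob
      include Sidekiq::Job

      def perform(record_id)
        record = SomeRecord.find(record_id)  # hypothetical model
        ExternalApiClient.sync(record)       # hypothetical client; the slow IO lives here
      end
    end

    # In the request cycle, enqueue instead of doing the slow work inline:
    SyncExternalApiJob.perform_async(record.id)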


> More concurrency is not always ideal

Is this due to increased memory usage? Does the same apply to Sidekiq if it was powered by Fibers?


Really depends on the job. But generally, yes the same applies to Sidekiq. I think there is a queue for Ruby called Sneakers that uses Fibers?

If you're making API calls out to external systems, you can use all of the concurrency that you want because the outside systems are doing all of the work.

If you're making queries to your database, depending on the efficiency of the query you could stress your database without any real improvement to the overall response time.

If you're doing memory intensive work on the same system then it can create a potential issue for the server and garbage collection.

If you're doing CPU intensive work, you risk starving other concurrent processes from using the CPU (infinite loops, etc).

Something like the BEAM is set up for this. Each process has its own heap that's immediately reclaimed when the process ends, without a global garbage collector. The BEAM scheduler actively intervenes to switch which process is executing after a certain amount of CPU time, meaning that an infinite loop or other intensive process wouldn't negatively impact anything else...only itself. It's one of the reasons it typically doesn't perform as well in a straight-line benchmark too.

Even on the BEAM you still have to be cautious of stressing the DB, but really you have to worry about that no matter what type of system you're on.


I would argue it's not a comeback; it was always the "king" of web dev.

Seriously, other projects can use its success as a reference for implementation.

And I say this as a front end dev.


As a Rails dev from 2011-2018, having returned over the past year, it def seemed there was an exodus in or around 2015.

Part of it was due to the rise of SPAs and Rails' difficulty working with those (webpacker, anyone?), part due to poor perception of Rails 4, part due to newer options such as Elixir/Phoenix or Golang rising in popularity for backend work, part due to many of the leaders such as Yehuda Katz moving on.

Also watching from afar on HN, it seems like Rails 7 release was perceived as a sort of comeback, with quite a few articles posted in recent years praising the framework for a return to relevance.


Tried Golang and also used Phoenix for a massive project which went well. But we had problems onboarding new folks into it, and some junior and even senior engineers went bonkers trying to get their heads around FP and the Elixir language in general. I would say it worked great for me, but the problems and the learning curve for others on my team made it feel like Elixir creates that gap for teams in general.

Go is good, but again I only tried it long ago and can't comment on what it is today. I loved Ruby but I find it hard to go back to it after my experience with Elixir and TypeScript. I was hoping Crystal would go to great lengths, but that doesn't seem to be the case at all.


You do need to set some rules when onboarding people into an Elixir application. Not everything needs to be a GenServer, and please don't endlessly nest Enum.map calls.

> part due to newer options such as Elixir/Phoenix or Golang rising in popularity for backend work

I suspect Django and Laravel have taken a chunk of the market as like for like replacements.


Doubtful experienced devs moved to them. Attended many user groups and conferences during those years and both were seen as “lesser than”. Not unusual to see pot shots taken in presentation slides.

Elixir/Phoenix were embraced with excitement due to Jose’s connection to the community and the ruby like syntax.


I also noticed an elitism from other devs when it comes to Rails devs. I literally heard on multiple occasions "we don't hire Rails devs here!" followed by a laugh.

Of course it was tongue in cheek, if the candidate is amazing yes they're a hire.

But it spoke to a reputation that Rails devs had seemingly received. I think because prior to JS/Node, it was Rails that offered newbies the fastest path into web dev.

I don't believe this is the reason for any sort of exodus, but the negative perception may be partly a reason for devs choosing other frameworks.


> - YJIT has made Ruby fast. Like, _FAST_

Then I pray with all my heart that GitLab moves to it, because that's one of the major complaints I hear from folks who use their web interface. Even while I was visiting the site to look up whether their main repo had a .devcontainer folder in it, I could just watch all the stupid ajax-y shit spin endlessly in what is conceptually a static table (sure, it changes per commit, but they're not a stock ticker platform).

OT1H, I know, I know, "MRs welcome," but OTOH as a Ruby outsider, getting the dev environment up for contributing to them has been a lifelong struggle. Now that RubyMine has support for .devcontainer, maybe it'll work better.


I'm not saying GitLab is poorly designed, but a poorly designed website will be slow on the fastest of languages or frameworks. It's not necessarily a Rails or Ruby problem here.

>Then I pray with all my heart that GitLab moves to it,

YJIT does make Ruby fast, but that doesn't mean it makes _Rails_ fast (yet). At least don't expect multiple-times improvements.

Hopefully in the future.


There are plenty of reports of YJIT lowering real-world Rails applications' latency by 15-30%.

There are also plenty of perfectly snappy Rails applications out there. People need to stop blaming the framework...
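
For anyone unsure whether their deployment is actually getting those wins, a quick sanity check (assumes Ruby 3.2+, where YJIT ships with CRuby):

    # Enable YJIT with the --yjit flag or RUBY_YJIT_ENABLE=1 in the environment,
    # then verify from inside the app (e.g. a Rails console):
    puts RubyVM::YJIT.enabled?  # => true when YJIT is on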


Well, a lot of those pages have a Vue application running on them.

GitHub has always been fast for me.

I have wondered about that, and I'd guess the thing you (and the sibling comments) are pointing out, about how GH seems to take an html-fragments approach versus GL opting for the more traditional(?) AJAX one, probably does matter a great deal to the end-user experience.

Regrettably, since GH doesn't develop in the open, it's hard to have an apples-to-apples comparison between the two in order to know whether GH has some really, really tuned RoR setup, or whether it literally is just a matter of "don't be cute" when sprinkling every JS trope known to man upon your web UI, and whether, if GL one day cared about their page load times, they, too, could be quick without needing a RoR or RVM upgrade.


I totally suggest diving back in! I'm doing the same. Day job is all frontend, but messing around with Rails again in my own time is reminding me of a much more productive and interesting era of my career. I was lucky enough to be near-ish to Rails World this year, was surprised how many other attendees were in that exact same position.

I'm a long time Ruby/Rails web developer. I stick with it because it has worked well for me. It just keeps getting better every year.

> - YJIT has made Ruby fast. Like, _FAST_

Curious, I tried to look for some benchmarks, which still seem to show Node often being 3-10x faster

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


I'm not saying you're wrong, but this feeling seems to pop up whenever a major RoR version releases. Now maybe RoR usage has been trending up, but I'm not necessarily seeing it.



People are pushing back on “micro services and SPA everywhere” fads probably.

I think there's a new and healthy rivalry between Ruby, Python, and JS for web backends!

- Ruby and Rails now have all of the things mentioned above and more. I do have concerns that Rails will evolve in directions where bundled frontends have less official support, with the continued centralization of 37signals/DHH [0] and their controversial removal of TypeScript from Turbo [1] (and of bundling in general for Hey), but it's such a large community that there will be continued momentum in all directions.

- Python used to be the choice if you expected to do both data processing/machine learning/NLP and web backends in a centralized codebase with a single language. And Django is still a world-class solution there, with gevent + asyncio + forthcoming developments on GIL-less Python all contributing towards making Django a highly performant and parallel framework. That said, with much of an app's data processing complexity often best offloaded towards LLM-powered solutions that have dedicated APIs, and both Ruby [2] and Node having bindings to https://pola.rs/ as an alternative to Pandas, it's no longer the only solution.

- And on the JS end, frameworks that enable full-stack batteries-included admin-out-of-the-box development like https://redwoodjs.com/ and https://www.prisma.io/nextjs + e.g. https://next-admin.js.org/ continue to evolve. Nowadays, if you're building a complex web application from scratch, Prisma provides all the escape hatches you'd need, so that you can build entirely in JS/TS and have the facilities you'd expect from Rails or Django.

I'm really excited that all three communities are continuing to push the boundaries of what's possible; it's amazing to see.

[0] https://news.ycombinator.com/item?id=30600746 [1] https://news.ycombinator.com/item?id=37405565 [2] https://github.com/ankane/ruby-polars


Interesting that both Ruby and Python are on the JIT path. Less is more.

What do you mean?

Do you have any benchmarks to share on YJIT?


> YJIT has made Ruby fast. Like, _FAST_

Sadly, a lot of people still live in the Ruby 1.x era and think it's slow.





Does this case's effect on CDL mean that a library could still buy a huge stack of ultra-cheap eBook readers, load each one up with their one copy of a given book, and then lend out the physical readers?


Presumably not, because the same copies would be created. This wasn't a case that hinged on DRM or content protection. IA was making copies, lots of copies, and that's an action governed by copyright law; it's right there in the name.

All that aside: if you have 1:1 physical books anyways, what is the reader accomplishing here? Just loan out the book.


No, but I suspect the licensing on the ebooks already forbids transferring the physical reader the book is on to another person.


As of about a year ago, Team Fortress 2 bots were so pervasive that they would often team up to use the "vote kick" functionality to kick non-bot players from public games.


> As of about a year ago,

This started about 5 years ago and continues today. They aimbot, spam both voice and text chat, dodge bullets, can fly, and hijack any vote to ensure their continued survival. If you can't get an invite to a private lobby you may as well play a different game: either one that has anticheat, or something singleplayer. If there was a silver bullet that made online play nice and respected your computer, someone would have shot it by now.


If you hadn't heard, there was a recent clean up and botting has become basically non existent in TF2 now. It went from completely unplayable to being the experience everyone once had. Don't know how long it will last but you can hop on and enjoy it while it lasts.


Ok that's pretty hilarious, but what's the point? Is this just like playing battlebots for TF2 bot authors? (But if it was, couldn't they set up a purpose-built server for that?)


>couldn’t they set up a purpose-built server for that?

Yes, but the bots have to play on the official servers to get random drops of collectible hats. These items can be sold for a lot of money


No. More like a denial of service. The bot hosters do it out of pure hatred for TF2


They annoy other players and intentionally make them pay for bot protection services.


It’s not that they want a human-free environment or something. It’s all about causing grief. So it’d be targeted at a chosen victim probably to annoy the fuck out of them. Cheater has a button that can kick you out.


Those “burn spots” are almost certainly from a fungal disease, not from some magnifying glass effect. https://s3.wp.wsu.edu/uploads/sites/403/2015/03/leaf-scorch....


This magnifying glass effect is a pervasive and dangerous (to thirsty plants) garden myth. Don’t let sunshine stop you from watering a plant that’s suffering from lack of water. https://s3.wp.wsu.edu/uploads/sites/403/2015/03/leaf-scorch....


Thanks for busting that myth. Foliar feeding isn't about hydration state; it involves chemicals and surfactants, and it's recommended to do it in the morning or at night. According to this AL extension office, it can cause phytotoxicity (leaf burn) at high leaf temps (probably because of the higher uptake rate of the chemicals): https://www.aces.edu/blog/topics/lawn-garden/foliar-feeding-...


Right here on Hacker News, actually. https://news.ycombinator.com/item?id=16509058


FiLMiC Pro, another pro camera app for iPhone, also existed for years [0] and sold with the one-time-purchase business model. They're now a subscription [1] and owned by Bending Spoons / Evernote. (Plus, more bad news [2]...)

[0]: http://web.archive.org/web/20111130073123/http://www.filmicp...

[1]: https://www.cined.com/filmic-pro-is-joining-forces-with-bend...

[2]: https://www.theverge.com/2023/12/3/23986187/filmic-staff-lai...


That kind of proves that one time sales can be a successful business model, doesn't it?

They were a successful business employing 23 people after more than a decade on the one time sales model!


No one said it’s not possible to be successful with it

