This reflects my experience working with Ruby over the years as well. I find it a perfectly pleasant language, but as with many other scripting languages, things get difficult as a project increases in size, complexity, and number of contributors.
Types are a great way to remove certain classes of issues. It's my hope that newer versions of Ruby push the gradual typing features I've been hearing about into common use. The productivity gains from preventing all those silly nil and type errors will be enormous.
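For concreteness, the class of bug being described looks something like this sketch (hypothetical example code): a lookup silently returns nil, and the error only surfaces later, far from where the nil was produced.

```ruby
# A lookup that returns nil on a miss -- the classic silent nil that
# a gradual type checker could flag at the call site.
def find_user(users, id)
  users.find { |u| u[:id] == id }  # returns nil when no user matches
end

users = [{ id: 1, name: "Ada" }]

puts find_user(users, 1)[:name]  # happy path works: Ada

begin
  find_user(users, 2)[:name]     # miss: NoMethodError on nil
rescue NoMethodError => e
  puts "boom: #{e.class}"
end
```

A checker that knows `find_user` can return nil would force the caller to handle the miss before ever running the code.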
In a broad and abstract sense (and IMHO), Ruby has a substantial core- and community-wide commitment to that pleasantness, and a track record of achieving it.
I usually compare and contrast with Python's principles, the Zen of Python, which are rather explicitly about the code itself; Ruby, instead, has "developer happiness" and "Ruby is nice so we are nice". These show up in many, many small ways throughout both the code ecosystem and the human community (RubyConf!).
There are of course concrete things to point to, but other comments have already done so, and this bit is a personal favorite :)
For a lot of people it's the syntax that makes Ruby pleasant. As evidence of this, I refer to other comments claiming languages like Crystal or Elixir are alternative (or better?) versions of Ruby, even though the only thing they have in common is their syntax.
I don't know if it's fair to just declare that semantics are the important bit.
I suspect that for many users, much of what they like about Ruby is the syntax and core library, and to the degree that Crystal follows that, it will provide much of the same delight that Ruby does.
What semantics is Crystal lacking? Obviously it's not a 1:1 replacement, as the ecosystem is different, gem management (shards) is different, etc. Crystal also has union types and concurrency.
For people who want Ruby-like syntax plus performance/concurrency, and who are willing to deal with a different ecosystem and doing more by hand, Crystal is a nice choice.
Local variables, method calls, fundamentals like that behave entirely differently. Take the Ruby specification test suite and try to run it on Crystal, even after adding typing and other minor changes, and see how far you get.
Your criticism does not seem rigorously considered. Toy programs can often simply be renamed from .rb to .cr and be compiled and run as Crystal code. Crystal is not, however, trying to be a drop-in replacement: among other things, an enforced type system is not a minor change. Compilation is not a minor change to a language either. It's valid to say that you don't like the tradeoffs (and of course ideally you would have a full understanding of what those are), but it's incorrect to suggest that these languages are not extremely similar.
My argument is: if you swapped out Crystal's syntax so it didn't look like Ruby, would anyone think "this is like Ruby"? I don't think so. Super fundamental parts of the language semantics, like method dispatch rules, are completely different between Ruby and Crystal.
> Your criticism does not seem rigorously considered.
I'm a major contributor to the Ruby specification, and I've got a decade of full-time experience in writing about and implementing Ruby and its semantics, so I'm not just doing a drive-by comment.
It still has a very similar feel to Ruby. A lot of the standard library is a very close match to Ruby. And the metaprogramming bits are possible to simulate with macros.
I find this discussion often goes to weird extremes: "it's static Ruby" versus "it's almost nothing like Ruby". I'd go with: looks - the same; feel - very familiar; inner workings - you can find ways around the differences.
Suffice it to say that I don't think the details of message passing are what most people care about with Ruby. I can see how your experience would lead you to believe otherwise.
I strongly disagree. The message passing semantics are a pretty core part of why you can write pretty complex metaprogramming in Ruby and, without writing a lot of code, make a lot of stuff happen. And even somebody who "doesn't care" about that is critically indebted to it--because that's how Rails happens.
This is not always for the best, of course, but I reach for Ruby pretty consistently when I need to do that sort of thing--often in a dynamic programming context or where I'm binding a lot of state to present a straightforward DSL to an end user (which Sorbet helps with quite a lot, too). Sometimes, for practical reasons, I'll do that sort of work in TypeScript, but the result usually has a lot more sandpaper to it.
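The message-passing style being described can be sketched with `method_missing`, the hook at the heart of a lot of Ruby DSLs (an illustrative toy, not code from any particular library):

```ruby
# A tiny config DSL built on message passing: method_missing
# intercepts undefined messages and turns them into stored settings,
# so no `host=` or `port=` methods are ever explicitly defined.
class Config
  def initialize
    @settings = {}
  end

  def method_missing(name, *args)
    if name.to_s.end_with?("=")
      @settings[name.to_s.chomp("=").to_sym] = args.first
    elsif @settings.key?(name)
      @settings[name]
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    name.to_s.end_with?("=") || @settings.key?(name) || super
  end
end

c = Config.new
c.host = "localhost"  # intercepted, stored
c.port = 5432
puts c.host           # intercepted, retrieved: localhost
```

Crystal's compile-time dispatch can simulate some of this with macros, but resolving messages at runtime like this is exactly the semantics being discussed.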
Crystal... just isn't that. At all. It's a fine language for what it is; it just doesn't offer anything that Ruby, TypeScript, Kotlin, and Rust don't, so I have no use for it. But it's definitely not Ruby, and it doesn't even smell like Ruby.
Crystal has local variables. I didn't mean to imply that Ruby and Crystal are 1:1 drop-in replacements, but there are a great many similarities. Enough for me, as a Ruby/Rails dev, to start on Crystal projects with relatively little issue once I learned a bit about the standard library and the ecosystem.
Ruby's way of handling types is abhorrent compared to Crystal's, though. Sorbet and RBS are unfortunate systems tacked on after people realized that type systems are actually really good and not all that verbose.
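For reference, the RBS side of that is a separate signature file that shadows the Ruby class rather than annotating it inline (a sketch; the `User` class and its methods are hypothetical):

```rbs
# user.rbs -- signatures live apart from the .rb implementation
class User
  attr_reader name: String

  def initialize: (name: String) -> void
  def greeting: () -> String
end
```

Whether the separate-file design is a feature (no syntax changes to Ruby itself) or a drawback (annotations drift from the code) is much of what this disagreement is about.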
The block syntax is a wonderful way to build and chain flexible iterative logic, the object model is fairly straightforward and the standard library strikes a good balance between functionality and brevity. Once you start to pick up on ruby idioms, the language is pretty fun to use IMO.
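A small illustration of that block-chaining style (arbitrary example data):

```ruby
# Filter, transform, and order a collection in one readable chain,
# with each step taking a block.
words = %w[ruby rails crystal elixir rust]

result = words
  .select { |w| w.length > 4 }  # keep the longer words
  .map(&:capitalize)            # transform each element
  .sort                         # order the survivors

puts result.inspect  # => ["Crystal", "Elixir", "Rails"]
```

Each step reads left to right, and the intermediate logic lives inline in the blocks rather than in named helper functions.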
What does Ruby offer that makes it so "pleasant" to use?
For me, it fits my mental model of approaching problems quite well. I tend to write a lot of smaller programs for small tasks and when it comes to general chucking data around, Ruby isn't fast, but I can go from problem to solution with as little extraneous nonsense as possible.
Python is also quite good at this, but I find it a bit less consistent and tend to question my approach far more unless there's a library that already does the thing I want to do (more often the case with Python, though, to be fair).
Python and Ruby have similar conciseness-to-readability ratios for me, but Ruby adds slightly better writability into the mix.
That's a very good question. I think in the old days people would have said "Ruby offers the most flexible metaprogramming available in a popular scripting language." Back around 2005 there was a major flamewar between Lisp advocates and Ruby advocates.
But nowadays, when I want beautiful meta-programming, I use Clojure.
I think Ruby survives now because of Rails. And Rails is a major technology, so that is a valid reason for Ruby to survive. But it is good to be clear where the advantage is coming from: the framework and not the language.
Ruby has fallen behind; if you want great metaprogramming, you can get all the joy, plus much better speed, from something like Clojure. That also gives you access to the vast ecosystem of Java libraries.
But Rails is a different story. For jumpstarting a greenfield API or CRUD app, there is nothing quite as good as Rails. Django and Symfony and other frameworks have tried to imitate Rails, but Rails is still way out in front, with better tools, and the marriage of Rails tools with Ruby's advanced meta-programming is something difficult to replicate elsewhere.
If your company needs a custom CRM or CMS, Rails remains the best starting place.
Initially? Not a whole lot. The point though, is that you used ruby/rails to rapidly build a startup, and then you have a successful business that is _stuck with a massive ruby/rails monolith_. This is where you start extracting services, or refactoring around engines, or some other strategy - doing that involves a huge amount of careful refactoring, and Sorbet can really help.
Satisfying static types AND 100% code coverage is a significant burden.
If you're intrigued by this approach, or perhaps by a similar approach with TypeScript, I highly recommend checking out a strongly, statically typed language at some point. I've been coding in Elm and a little in Haskell, and I find the type checking so thorough and exhaustive that I only need to write very few tests to get strong guarantees. This was a very pleasant change from Rails, where you're encouraged to aim for 100% code coverage and, as such, I would spend well over 40% of my development time writing tests.
It's a shame the article ends with "So as usual, consider not writing Ruby, but if you do, ...". That's worthless advice and many people, myself included, have Ruby as their most beloved programming language.
Just because you disagree with it doesn't mean it's worthless advice.
I always feel a fond flutter whenever I see Ruby (which I used to write, and really enjoyed), but I absolutely wouldn't start a new project with it, for practical reasons. It's slow, and it can turn into a big ball of mud as applications scale (there are no named imports, for crying out loud; everything is just in a global namespace with side effects everywhere), etc.
Certainly there are still people/projects where, upon consideration, Ruby is still the best choice (e.g., a small Rails shop has a standard CRUD app to build quickly that will likely never scale to be huge). But you should still _consider_ not using Ruby.
I read this argument a lot from C/C++ devs in Rust discussions: they say you're just not a good developer, and that they never make mistakes. But I don't mind being a dumb developer; I like having an app that I know works. Even if you're a pro who never makes a mistake, your team surely will. It puts more burden on the devs when there are tools that can make your life easier.
After that, if you still choose Ruby over a language that gives you more guarantees and safety, then that's your choice.
That's not the same argument at all. "know language X" != "be smart".
I am specifically calling out people that _don't know Ruby_ and complain about the language being this or that, when really their frustrations come from them being inexperienced in the language.
It is like never having used a hammer, gripping it close by the head, and complaining that it takes a lot of effort to nail things. Blame's not on the hammer, bud.
This might happen with every language. I don't know. I'm a Ruby developer and have seen this countless times. Also, none of this equates to saying that ruby is perfect. It is not.
Most devs aren't learning Ruby anymore, so if true (I'm skeptical) that sounds like it could be a problem for many organizations (perhaps not yours).
Hiring only people who are already experts in a given technology shrinks your talent pool in an already-tight market, and training is expensive (time and/or money).
I'm not sure your premise is correct. Hired.com's 2022 report says Ruby is one of the most in-demand skills.[1]
In any case, I'm not saying Ruby is high-barrier or anything. It isn't! It's totally viable to hire non-Rubyists and build them up; I've seen it many times. Ruby is a simple, forgiving language with fantastic documentation and an excellent community. It's pretty ideal for newcomers, really. Training pays off.
That doesn't mean it's without downsides or that they'd make the same choice today, with the options we have available now.
For example, they probably spend an absolute fortune on cloud costs, especially CI. Node, Go, Kotlin, Elixir or maybe even async+jit'd Python might all be 2-10x cheaper to run.
But I bet there are other choices that would have been worse. I often enjoyed working with Ruby at Stripe, especially once Sorbet came along.
In fairness, all major companies that use a language extensively tend to invest in all aspects of its well-being. Facebook did with PHP; Google hired Guido (and a bunch of other Python maintainers) and now backs Go.
If your business depends on having a well-maintained platform, then it makes sense to invest in it. Shopify should be commended for investing in Ruby. As a former contributor to Rails and the Ruby ecosystem in general, IMO I would still choose Ruby for certain kinds of work. I write Go mostly these days, and parsing random JSON, for example, is a major PITA (as it would be in many other static languages).
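The Ruby side of that comparison is a couple of lines (illustrative payload): drilling into ad-hoc JSON with no predeclared structs, the kind of task that tends to need `interface{}` gymnastics or throwaway struct types in Go.

```ruby
require "json"

# Parse arbitrary JSON into plain hashes/arrays and drill in directly.
payload = '{"user": {"name": "Ada", "roles": ["admin", "dev"]}}'
data = JSON.parse(payload)

puts data.dig("user", "roles", 0)  # => admin
```

The trade-off, of course, is that nothing checks the shape of `data` until runtime, which is the other half of this whole thread.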
Thanks, I had indeed got my S-companies confused. I must've been thinking of Shopify's nix work, and put them in a mental bucket of "does advanced stuff".
Another way to put it would be that Ruby is so productive for the average developer at Shopify that they can afford to pay PLT experts to work on the tooling.
> there's no named imports for crying out loud, everything is just in a global namespace with side-effects everywhere
Yeah, this is a significant flaw. I wish it had a simple module system like JavaScript's require function, which just returns a normal object containing functions and data.
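You can approximate that style in today's Ruby; here's a sketch of a require-like loader that evaluates a file inside a fresh anonymous module and hands the module back, instead of dumping definitions into the global namespace (the `load_module` helper and the demo file are hypothetical, not how Ruby's require actually behaves):

```ruby
require "tempfile"

# Evaluate a file's source in its own anonymous module and return it,
# so nothing leaks into the global namespace.
def load_module(path)
  mod = Module.new
  mod.module_eval(File.read(path), path)
  mod
end

# Demonstrate with a throwaway "library" file:
lib = Tempfile.new(["mathlib", ".rb"])
lib.write(<<~RUBY)
  extend self

  def double(x)
    x * 2
  end
RUBY
lib.close

mathlib = load_module(lib.path)
puts mathlib.double(21)  # => 42
```

It's not a real fix (constants, globals, and monkey patches still escape), but it shows the "require returns an object" shape the comment is asking for.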
Comparing a trivial <500 LoC program between languages doesn't tell you anything that's useful other than the terseness of the syntax. You might as well chain unix utilities together at that point.
Maintaining a 5k+ LoC Java/C#/Go/Rust/Crystal codebase is orders of magnitude simpler than standard Ruby. Sorbet/RBS bridge that gap now, but they're a PITA to use compared to a natively implemented type system. With Rust/Go I know I get a simple binary at the end to copy over. Rust unfortunately still has slow compilation times compared to Go, but I really can't stand error handling in Go compared to Rust.
That said, "ruthlessly productive" is an apt description. I just don't want to maintain a large Rails codebase again without Sorbet/RBS. I'm hoping Phoenix/Elixir or something in Rust catches on.
Any language with metaprogramming like operator overloading so you can create an ergonomic ORM, basically.
For such a language to be reasonably productive you want to be able to overload "mystruct.myvar" to not simply grab "myvar" from memory, but smartly fetch it from a remote database, cache it, etc.
It will never be perfect (blah blah impedance mismatch), but a proper ORM is so much more productive and readable than writing crap like `Manager(mystruct).GetAttr("myvar")...` or bespoke SQL composition.
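A toy sketch of the idea: reading `record.myvar` doesn't grab a plain instance variable; `method_missing` lazily "fetches" the value (here from a stand-in hash playing the role of the remote database) and caches it. Real ORMs like ActiveRecord do something similar, far more elaborately; everything here is hypothetical illustration.

```ruby
class Record
  FAKE_DB = { "myvar" => 42 }  # stand-in for a remote database row

  def initialize
    @cache = {}
    @fetches = 0  # count round-trips to the "database"
  end

  attr_reader :fetches

  def method_missing(name, *args)
    col = name.to_s
    return super unless FAKE_DB.key?(col)

    @cache.fetch(col) do          # only "fetch" on a cache miss
      @fetches += 1
      @cache[col] = FAKE_DB[col]
    end
  end

  def respond_to_missing?(name, include_private = false)
    FAKE_DB.key?(name.to_s) || super
  end
end

r = Record.new
puts r.myvar   # => 42 (fetched from the "db")
puts r.myvar   # => 42 (served from cache)
puts r.fetches # => 1
```

Without overloadable attribute access, all of this has to surface in the API as explicit getter calls, which is the `GetAttr("myvar")` ergonomics being complained about.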
+1. After working in Elixir for a long time, I missed the simple Ruby way of doing things and found myself running in circles to troubleshoot some basic 3rd-party libraries. Going back to Ruby for my next side project.
Having used both Ruby and Elixir, this is the exact opposite of my experience. I would say Elixir/Erlang is one of the best languages from an observability/debugging perspective. Immutability, per-process heaps, the ability to trace any module/function call in production (recon), remote shells, etc. make it trivial to debug most issues. I once found a memory leak from a 3rd-party library in production in less than 30 minutes. Just looking at the process list sorted by memory will tell you which library is leaking (in the majority of cases), as most processes have a single purpose.
Agreed that debugging is easier, but in my case I ended up doing a lot of housekeeping for things that are easy in Ruby. This time I'm only concerned with speed of development, and Ruby, to me, is the winner.
Not disagreeing, but the nuance is that it's productive for writing new code/features. It feels really counterproductive once you have a large codebase/team and you need to refactor existing apps.
Spent 5 years at a Rails shop and it's crazy how much engineering effort was spent on keeping this app going. Adding typing seems like a nice step to help here.
Yeah, I disagree with "ruthlessly productive" as a blanket statement because of this. I tend to find that a huge chunk of the time I save at write-time in Ruby (Rails)/Python/etc., I end up repaying at either runtime (nil exceptions) or read/explain-to-other-dev/refactor times, sometimes in multiplicative form (chasing down why something became nil, or a string, or an elephant, but only if the ORM did X, Y, and Z to the DB response, etc. gets ridiculous quickly)
Well, the go one would compile to an easily deployable binary and actually be able to, you know, count a whole lot of words quickly. But the ruby one is much more terse and quicker to write.
I say we go back to awk and get the best of both worlds:
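Something like the classic awk word-frequency one-liner (a sketch; naive whitespace splitting, no case folding):

```shell
# Count word occurrences and print them most-frequent first.
echo "the cat and the hat" |
  awk '{ for (i = 1; i <= NF; i++) count[$i]++ }
       END { for (w in count) print count[w], w }' |
  sort -rn
```

Terse like the Ruby version, with decades-old C speed underneath.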
Yeah the go one looks terrible but then you take a look at it and it's just creating slices, looping over stuff and appending stuff to the aforementioned slices. The syntax is much less expressive, therefore very verbose, but it's basically doing the same thing, without the sugar of being able to inline some of the logic.
I've not written Ruby since about 2006, except for a stint in a Rails codebase which did nothing to give me any faith. How do you go about, e.g., extracting a group of fields and renaming them? Or reordering function arguments? Is there a static tool that can do this, or do you have to do some kind of text search and hope you got them all?
There's basically no tool for doing that, aside from relying on tests to tell you a callsite is now incorrect. "find and replace" is... okay, up to a point, but obviously that has its limitations (especially once you start metaprogramming).
That being said, I still love Ruby. I have all sorts of little tools written in it that would have been a pain to do in another language.
Coming from a Clojure background which is … different. But I understand the enthusiasm around similar language features.
But to borrow words from another comment, I find Kotlin "ruthlessly productive". Apart from the lambdas, functions, data classes, and immutability, it's the ability to quickly and correctly refactor that makes me feel like I can work at speed and try stuff out.
I wish I could find Kotlin as ruthlessly productive as Ruby, but... I'm having a hard time doing so. It's got nothing to do with the language itself - at work, I really enjoy Kotlin.
Here's where Ruby shines for me: when I'm whipping up a script, I can toss a `Bundler.setup(:default)` at the top of the script, `vim mything.rb`, `$> ruby mything.rb` (or `./mything.rb` if I hashbang at the top), and I'm off to the races.
With Kotlin, it feels like I need to set up a project in IntelliJ, `$> ./gradlew run`, wait 10 seconds for the whole thing to compile, and finally my thing is running.
Is there a streamlined way to run a Kotlin script without building a whole jar, from the command line? I know this is generally not as easy with compiled languages. The D programming language is a notable example that has a "D script" mode (hashbang with `rdmd`), which quickly compiles+runs in a single step.
Rails is frankly a double-edged sword, though. While it helped propel Ruby, and is a great framework, it also seems to have permanently marked the language with a notion that it's simply a vehicle for Rails.
Ruby is probably one of the more coherently designed general purpose programming languages (in my opinion, much more so than Python, a language that dominates the industry) but doesn't seem to get much use in that domain, which is a shame.
About a decade ago I had a social media marketing platform (Wildfire) part of which was a special case web page builder. We scaled our engineering team in many different ways, one of the many ways was to also contract external teams.
One such team wrote a book based on code they wrote for us, Objects on Rails. The code was fine for a time, but we were increasingly serving traffic from YouTube, which was a very popular platform for us then.
The code in question could not scale in a cost-effective manner for traffic coming from YouTube. In fact, it was stressing our ability to maintain a rational number of HAProxy POPs; we'd have needed to massively change our operational architecture if the root problem couldn't be fixed. It took a while to identify, but eventually we found the exhibits abstraction to be the root of a number of significant and leaky performance problems (runtime object extension blowing all the method caches).
I refactored out all of the uses of this code, replacing them with simple in-place conditionals in all cases. This dropped somewhere around 15kloc, and modified another 5k or so (I think these may be under-estimates, but I no longer have access to the code, and it's been a decade). We were able to deploy that patch with very few changes. The teams had done a good job of writing extensive end to end tests. It also helped that removing the abstraction resulted in far simpler code paths.
I didn't need types to do this safely, the tests were sufficient. This is not to be a downer on typing extensions, but I think their advantages are often over-sold.
Sorry, I don't buy it. Tests might be sufficient, but they cannot detect a new issue. They can only prove that a well known one is not there. Strict typing and compile time enforcement will guarantee that a whole class of errors and incompatibilities will not occur.
The organization that owned the code at the time was Google, and there were multiple teams contributing to that code base concurrently with the patch in question.
Because the technique it uses is "clever" and at the same time terrible. It only works if you have 100% code coverage in automatic or monkey testing. The key quote from the paper:

"as the program is exercised, it converges towards a correctly refactored program."
This occurs because the refactoring engine uses runtime data to discover the correct types and thus correct refactorings.
It is certainly not the same robustness in refactoring that you get with a statically typed language. I don't want my refactoring to "converge". I want it done.
The key quote from the paper is — "We do not attempt to automatically refactor code."
"A very large Smalltalk application was developed at Cargill to support the operation of grain elevators and the associated commodity trading activities. The Smalltalk client application has 385 windows and over 5,000 classes. About 2,000 classes in this application interacted with an early (circa 1993) data access framework. The framework dynamically performed a mapping of object attributes to data table columns.
Analysis showed that although dynamic look up consumed 40% of the client execution time, it was unnecessary.
A new data layer interface was developed that required the business class to provide the object attribute to column mapping in an explicitly coded method. Testing showed that this interface was orders of magnitude faster. The issue was how to change the 2,100 business class users of the data layer.
A large application under development cannot freeze code while a transformation of an interface is constructed and tested. We had to construct and test the transformations in a parallel branch of the code repository from the main development stream. When the transformation was fully tested, then it was applied to the main code stream in a single operation.
Less than 35 bugs were found in the 17,100 changes. All of the bugs were quickly resolved in a three-week period.
If the changes were done manually we estimate that it would have taken 8,500 hours, compared with 235 hours to develop the transformation rules.
The task was completed in 3% of the expected time by using Rewrite Rules. This is an improvement by a factor of 36."
from “Transformation of an application data layer” Will Loew-Blosser OOPSLA 2002
> This occurs because the refactoring engine uses runtime data to discover the correct types and thus correct refactorings.
In Smalltalk, it's not like other languages where you have a bunch of text files and "build" from there with static types... it has to do it at runtime because that's all there is!
Types are definitely very useful for complex refactoring. However, I disagree with the author that it wasn’t possible before types. Combining a healthy amount of unit tests along with some integration tests makes complex refactoring totally possible even without types.
Tests can almost by definition give stronger guarantees than a type system, since the former is Turing complete while the latter is (almost always) not. This means that, for any type, it’s possible to replicate the check done by the compiler at compile-time in a unit test but not vice versa.
However, the ergonomics of this is worse than types, I’d argue, since a type system offers a much simpler way to assert basic stuff at compile-time (e.g. the presence of certain properties of an object). Also, type annotations are inline instead of being defined in a separate source file, which aids in code readability.
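What "replicating the check in a test" looks like in practice (illustrative code): the property a static checker asserts once, universally over the function's domain, becomes a per-example runtime assertion.

```ruby
# The function whose "type" we want to check.
def add(a, b)
  a + b
end

# A hand-rolled "type test": it only covers the inputs we thought of,
# whereas `def add(a: Integer, b: Integer) -> Integer` would cover all.
[[1, 2], [-5, 5], [0, 0]].each do |a, b|
  result = add(a, b)
  raise "expected Integer, got #{result.class}" unless result.is_a?(Integer)
end
puts "type property held for 3 sampled inputs"
```

This is the ergonomics gap in miniature: the assertion lives far from the definition, and it only examines the samples you enumerate.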
> Tests can almost by definition give stronger guarantees than a type system,
False.
> This means that, for any type, it’s possible to replicate the check done by the compiler at compile-time in a unit test but not vice versa.
False. I mean, Turing completeness does mean you could replicate any type guarantee in code, but that code would generally not be a unit test; guarantees provided by type systems are universally quantified, to confirm that with testing would require testing every combination of values in the function’s domain (as determined by its parameter types.)
The code would instead be a static typechecker that consumed the untyped source code plus a representation of the type assertions to be checked and where they attach to the untyped source code, and then did exactly the same things that any other static typechecker would do.
That is... standard optional static typecheckers for dynamic languages. (Though those usually also support inline type assertions, perhaps via comments in languages without specific syntactic support, e.g., Python 2.x)
You can approximate universally quantified guarantees in a testing regime with property-based testing, but that's at best probabilistic rather than actual universal quantification.
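A minimal hand-rolled property test in that spirit (a sketch; real tools like QuickCheck or Ruby's propcheck gems add shrinking and smarter generators): instead of proving `reverse(reverse(xs)) == xs` for all arrays, we sample random inputs.

```ruby
# Probabilistic evidence for a universally quantified claim:
# reversing twice is the identity, checked on 100 random arrays.
100.times do
  xs = Array.new(rand(0..10)) { rand(-50..50) }
  raise "property violated for #{xs.inspect}" unless xs.reverse.reverse == xs
end
puts "property held on 100 random samples"
```

No number of passing samples amounts to a proof, which is exactly the "probabilistic rather than actual universal quantification" point.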
I get what the author is saying, but static analysis aside, I would do all I can to avoid a 178-file update. I get that Sorbet allows them to do this with higher confidence, but even in a compiled language (which performs a similar role to Sorbet), that much change is asking for trouble.
If making a single update touches that many source files, then they may want to take a look at their code organization and architecture as that is a whole heck of a lot!
The most productive I have ever been as a software engineer was in a smaller company where roughly all of the company's product and library code was in a monorepo, and we had a reasonable culture of writing automated tests (although not at 100% branch coverage). Sometimes you realize the abstraction in a core library used by all the product code is wrong or limiting, and if your test suite gives you confidence that you'll catch any potential regressions, you can rework the library with a breaking change and fix everything that transitively depends on it in a single atomic commit.
The rub for me here is that making one change in a core library ideally wouldn't touch 100+ other source files. This feels like a separation-of-concerns issue in which a module is being overused.
Perhaps I'm just in the wrong headspace, as I have never had to work in a monorepo fashion. That being said just because your code is in a monorepo, does that directly mean that you aren't versioning your libraries?
To my understanding, most package management tooling in most ecosystems allows for versioning. With that said, wouldn't you slowly roll out the version to the downstream projects, monorepo or not?
> just because your code is in a monorepo, does that directly mean that you aren't versioning your libraries?
If you are living the monorepo trunk-based development dream, you version all of your product and library code with the commit of the enclosing monorepo. The product code contained in monorepo commit X depends on the versions of the libraries also contained in that same monorepo commit X. Maybe another way to say it is that library dependencies are resolved to whatever is checked out in version control, without going through an abstraction layer of versioned package releases pushed to some package repository.
> To my understanding most package management tooling in most echo systems allow for versioning
Correct. But that doesn't mean adopting a decentralised package-based versioning strategy is the most productive way for a team to operate!
> With that said, wouldn't you slowly roll out the version to the downstream projects Monorepo or not?
Perhaps! I can think of some arguments why you might prefer a gradual rollout: to reduce effort and split the work into smaller pieces that can be delivered independently, to reduce risk of the change breaking one of N products it touches, forcing the entire change to be rolled back.
But on the other hand, you don't have to -- you can choose to do the refactor atomically, which is not a choice you have if the product and library code is scattered across N different source control systems that depend on each other through versioned package releases.
If you are working in a monorepo & all your internal library dependencies are fulfilled by the coupled library version in the monorepo commit checkout, not decoupled through versioned package releases, then you would need to use different techniques to allow flexibility of some products to depend on version V1 of a library at the same time as other products depend on version V2. The most obvious one is creating a copy of the entire V1 library, giving it a new name, making the V2 change, checking it in to the monorepo as a sibling library, then rewriting some of the products to depend on V2. See also https://trunkbaseddevelopment.com/branch-by-abstraction/
Refactoring doesn’t just mean optimizing the inside of an implementation.
With a monorepo and good test coverage, I can improve the signature of a function on some library, and use a full-featured IDE to confidently update the usage myself without making it a 10-ticket task that spans 12 teams and 18 meetings.
> This feels like a separation of concerns issue in which a module is being over used.
It's not always modules. Libraries and patterns and frameworks and so on, they will come and they will go. Sometimes you want to just change something across the whole codebase, and having complete type and test coverage over a codebase ensures that you can do so fearlessly.
> wouldn't you slowly roll out the version to the downstream projects
No need. The upside of doing so is that you generally prevent breakage, but at the cost of having to support multiple older versions of everything. In a monorepo, you can change your codebase from one language to another in a single commit, and everything should work just fine. (Speaking from experience.)
> having complete type and test coverage over a codebase ensure that you can do so fearlessly.
Here's a bit of hard-won wisdom: if you have a company codebase in a monorepo where the different components (products, libraries) have a fairly uniform level of quality standards, and there is a uniformly high level of test automation and tooling to detect regressions and enable confidence when making sweeping changes, it is incredibly productive.
however, if you have a company codebase in a monorepo where some regions of the code have wildly different quality standards, and test automation may be patchy or missing from some components, the lack of flexibility and coupling caused by how a monorepo resolves internal dependencies can produce a miserable experience. low-quality code without a good automated test suite or other tools to detect regressions needs to be able to pin versions of libraries, or some other mechanism to decouple from the rate of change of the high-quality components.
E.g., a single developer may be the sole person allocated to a project to build a prototype to try to win a new customer, so they may be bashing out a lot of lower-quality code (often for a good business reason) without much peer review or test automation. If that lower-quality codebase is in the monorepo and depends on core libraries in the same monorepo, you get a situation where core library developers expect to be able to make breaking changes and, if the test suite is green, merge; but the low-quality prototype codebase doesn't have any tests and just gets increasingly broken (the consultingware prototype's test suite is always vacuously green). Or conversely, the developer trying to get their consulting project over the line might end up telling the core library developers that they're not allowed to make breaking changes, as it keeps pulling the rug out from underneath business-critical prototype delivery, and then you're in a situation where it's no longer possible to refactor the core libraries.
I've completed multi-hundred file refactorings without major issues both manually (long live grep) as well as automated (yay golang and / or jetbrains). Do not fear, just be careful and break it into digestible byte-sized pieces.
Lines of code and number of files are as meaningless as nearly any other metric of complexity. I'm inclined to say that changing 178 files probably means the files are too small.
Having learned some Ruby over the years, and seeing how elegant Elixir is, leads me to a question: is Elixir's type system strong enough to make refactoring worry-free (e.g., compared to Ruby)?
Did you come to Nim from a Ruby background? I've been using Crystal and am very happy with it but would be interested to know what the jump to Nim might be like.