There are several "we upgraded Rails, it was huge, risky, and took months to years" blog posts from medium to large companies. I personally take this as a warning against using Rails. Ruby is one of the most dangerous dynamic languages to refactor, I don't see how struggling to do it for over a year is a selling point of the framework. It also feels counter to Rails's mantra of delivering value fast with little effort, until you need to upgrade, then you have months of no business value delivery and need to bring in experts to help.
Let me get this straight: you're arguing that a 10-year-old, top-100-in-the-world website taking 4 full-time engineers 18 months to upgrade its core framework two major versions is some sort of massive failure, and that this failure would be solved with static types?
Also, you're saying that Github hasn't added any new features in the last 18 months?
I'm not sure any other technology stack would have fared much better.
Consider that 6 years, 2 months, and 20 days passed between Rails 3.2 and Rails 5.2. That's quite a bit of time for the framework to evolve. Then factor in the customizations from several non-framework dependencies and those added by GitHub.
This is an incredible achievement no matter how you slice it.
Yes, four FTE engineers taking 18 months to upgrade across two major versions indicates a massive problem, but not necessarily with Rails or Ruby. That's a cost of $1.5M give or take, just on the engineers, not including the opportunity cost in new feature development or paying other equally important tech debt.
$7 billion dollar company, $1.5M cost? $7 BILLION. Your orders of magnitude are waaaaayyyyy off.
This is the inverse of survivor bias, in that you are retroactively applying "best practice" at the wrong scale. What gets you from $0 to $7B may hurt you at $7B. Heck, may hurt you way earlier than that.
However, and YMMV on what problem you want to solve, saving $1.5M (heck, let's 10X it and call it $15M) when you're worth $7B isn't the problem I'd personally be concerned with.
React 16 was a perfectly smooth upgrade for a massive part of the web ecosystem. It took 1 engineer to bump the version number and to test. Maybe a week or two at most to check that nothing was broken.
The React core team updates tens of thousands of Facebook's components every time they ship an update; I expect they'll continue to have uniquely smooth upgrades while that's the case.
It is a massive failure, and it would be solved by static types.
I could take a big, unmaintained, 10-year-old Haskell codebase and upgrade it to the newest compiler and libraries in a couple of days at most (and it would most likely work on the first try after it compiles).
Or perhaps it’s survivorship bias. Companies that used Rails are around today with big, dated codebases because Rails adequately served the needs of their growing businesses.
It didn't take GitHub as a whole a year to go from 3.2 to 5.2. It took a small team within GitHub working with everyone to upgrade from 3.2 to 5.2, with most of the changes introducing business value in the form of security and technical-debt fixes (as many issues as I have with Rails, and I have many, deprecations aren't willy-nilly and generally have great reasoning behind them).
I'd love to know what other frameworks would be easier. It's all relative. I've been involved in upgrades of a lot of statically typed projects, in many languages, and... it's all the same. Major upgrades are always a pain in the ass.
It depends on scale. It took us at a previous company over a year to update from Java 6 to Java 8 (technically 7, but 8 was released before we got done with the upgrade so we jumped to 8), and that didn't involve any major new frameworks. Where I'm at today I don't even want to think about upgrading from Elasticsearch 5 to 6+. Terabytes of data indexed each day plus way too many consumers. I think web frameworks are easy to upgrade compared to databases or schema, at least when you're faced with distributed architectures.
For what it's worth, I'd be wary of putting too much weight on these types of articles when it comes to judging how difficult Rails upgrades are.
Rails was hugely popular for years (and still is, in a lot of ways). There are countless articles about it and it's been used for a ton of projects. There are a lot of internal Rails apps built on earlier versions that are owned by companies with either limited in-house development resources, or none at all; in which case, it's easy for decision-makers on the business side to push off updates (assuming they even know about them). "That doesn't sound like a big deal, we'll just do it for the next one." A bit of time passes, and then you're two major releases behind and you're looking at a serious effort. Or maybe it's developers who make that decision for what are likely valid reasons in the short term. Upgrades across multiple major releases aren't exactly uncommon because of that, and there are a lot of articles detailing them, blog posts discussing or complaining about them, questions on SO, etc. as a result.
For the most part, though, I don't think that Ruby or RoR is uniquely difficult to upgrade compared to other frameworks. I've handled upgrades across versions that have gone ridiculously smoothly, and some not so smoothly.
According to the article, the upgrade took 1 year for 3.2 > 4.2, and then just 5 months for 4.2 > 5.2. The project began with one FTE, and some volunteers, and expanded to 4 FTEs. But the author says almost none of them had ever done a Rails upgrade before.
I'm curious what you think would be a better framework for GitHub to have used, one that would've allowed for easier, speedier point upgrades. Rails likely was a big advantage (as it usually is) in the initial stages. Are you seriously expecting it to be just as smooth when the site experiences exponential user and feature growth? That moving from Rails 3 to 5 was doable, with what sounds like a small team and no massive service disruption, seems like a very strong argument that Rails can still be effective in a company's middle-age years.
ASP.NET MVC as uncool as that might be. IMO this is as close to the benefits of Rails you can get in a statically typed language. And it really doesn't take very long in developing a project for static typing to start saving you time either. IDEs can just be a lot more intelligent with static types, and it can be a big help in the readability of code without fully understanding the broader context.
The upgrade to .NET Core is probably worse than a Rails upgrade, though it's not really the same thing, as .NET Framework will continue to be updated for a while. Switching to Core is really only necessary if running on Linux servers is a big win for you.
Migrating an ASP.Net MVC app from .Net Framework to .Net Core isn't really an upgrade, as both frameworks are continually updated.
The migration is a pain, but just upgrading from MVC 4 to 5 wasn't painful.
I am sure 2 to 5 would have been a nightmare, especially if you were using the deprecated Microsoft JavaScript libraries, and needed to replace them with their jQuery alternatives.
> ASP.NET MVC as uncool as that might be. IMO this is as close to the benefits of Rails you can get in a statically typed language.
I agree .NET MVC is as close as you get to something like Rails with regards to productivity in a statically typed, enterprisy language.
But using .NET/C# at GitHub would still have ended up with a significantly larger codebase, which means more code to maintain, and therefore in all likelihood more bugs.
Of course, I'm only saying that would be my pick today. I personally dislike working with dynamic languages, but I would say Rails was their best option at the time.
I use Rails, but also Elixir, and in the past Java, .Net, and many other tech stacks.
In my experience, all large or very large app upgrades (and I've done quite a few) are complicated in one way or another, no matter the stack. Technical debt stacks up in subtle ways (dependencies become obsolete, a specific feature used the framework in unusual ways, stuff can be rewritten with newer built-in framework features, etc.).
I don't see how this article would give Rails bad publicity, personally; I'll add that the advice they provide is pretty much what I would recommend for any tech stack too.
I've done upgrades on absolutely massive codebases for java to JDK7 and JDK8 and have never had anything but minor issues. Very rarely does something break backwards compatibility.
I think you're throwing the baby out with the bathwater here. GitHub has an enormous codebase where they've done a lot of work digging around in and modifying Rails under the hood. I think their situation calls for it and they have the internal talent to successfully "go off the rails" a bit. But it makes the upgrade path difficult. This is par for the course. Can you think of a technology/framework that would fare better than Rails in this case? I mean, you'd be in the same boat with Django, Spring, [your framework of choice], right?
You would not be in the same boat in a dependently typed language.
In such a language, any change on framework update would cause compiler errors if the framework's type constraints didn't match what your code expected.
As a result, upgrading in a dependently typed language is simply a matter of fixing compiler errors, and then it's upgraded.
For non-dependently-typed languages that take advantage of the type system, it's still significantly easier, though you probably will have to do a little more than just make sure it compiles.
> As a result, upgrading in a dependently typed
> language is simply a matter of fixing compiler
> errors, and then it's upgraded.
This is very naive, and is probably hilarious to a lot of people who've been through upgrade hell in a dependently typed language. A few (and then some) major reasons:
1. It's really the subtle runtime behavior changes that bite you. The ones that a compiler doesn't help you with. (This is not a Rails-specific thing; ask Unity or OpenGL etc. devs)
2. A lot of the pain of upgrading a project (Rails or otherwise) is dependency hell. You upgrade the framework, but some of your dependencies haven't been updated and don't work with the newer framework version. This is true whether it's a dynamic or compiled app.
3. It's certainly true that in a strongly typed language, these sorts of trivial problems would be caught at compile time, and that's an advantage. However, it's not exactly rocket science to catch these in a Rails app. Assuming your test suite is anywhere near adequate, it's going to spit out a comprehensive list of these problems just like a compiler would, albeit not as instantly.
3a. Rails is pretty good about documenting these breaking identifier changes between versions. They don't exactly sneak up on you, unless you get drunk one night and decide to upgrade your enterprise Rails app without looking at the release notes.
3b. Rails is also quite conscientious about loudly announcing to you, via log messages, when you use functionality that is deprecated and targeted for removal. Assuming you're not willfully ignoring these (i.e., drunken late-night upgrade bender?), they don't typically catch one by surprise.
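The deprecation mechanism is easy to sketch without Rails itself. Here's a plain-Ruby approximation of how a deprecation shim works; the `Deprecation` module and `User` class are invented for illustration (Rails does this through ActiveSupport's deprecation machinery):

```ruby
# Plain-Ruby sketch of a deprecation shim: the old method keeps working,
# but loudly warns in the logs so the removal never sneaks up on you.
# (Hypothetical names, not Rails API.)
module Deprecation
  def self.warn(old_name, new_name)
    Kernel.warn "DEPRECATION WARNING: ##{old_name} is deprecated and will " \
                "be removed in the next major version; use ##{new_name} instead."
  end
end

class User
  def name
    "octocat"
  end

  # Old API kept alive for a transition period, with a warning on every call.
  def login
    Deprecation.warn(:login, :name)
    name
  end
end

puts User.new.login  # still works; the warning goes to stderr
```

Callers keep functioning across the minor releases, and the log noise is the to-do list for the next major upgrade.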
I think you're right that my response is idealistic, tinged with naivety. Sure, with a sufficiently smart type system you can model subtle things like OpenGL changes, but "a sufficiently smart type system" is obviously impractical at best.
I agree that dependency hell and getting the versions to line up right is equally hard.
However, I disagree that 3 is a good argument.
> these sorts of trivial problems would be caught at compile time and that's certainly an advantage. However it's not exactly rocket science to catch these in a Rails app
Actually, it kinda is. To model the constraints you create in a dependently typed language, you have to create a set of tests and checks in your dynamically typed language which are basically the equivalent of a full dependent type system.
Creating an ad-hoc human-enforced type-system and test suite is incredibly hard and I can't think of a single large project written in a dynamically typed language that adequately does this.
Regardless of how good the documentation and testing and warnings in rails are, it's not a replacement for a full type system, and the only way to get those benefits is to implement a poor ad-hoc type system in your methods and tests.
Yeah, big upgrades are always hard. Good type systems make them less scary and have less chance of breaking stuff, which honestly is the most important thing... But after you've finished getting through the stuff that all languages share (hardware sucks, dependencies suck, etc), a dependently typed language will be a matter of fixing compiler errors, not watching percentages of 500s in prod and crossing your fingers.
> But after you've finished getting through the stuff
> that all languages share (hardware sucks, dependencies
> suck, etc), a dependently typed language will be a
> matter of fixing compiler errors, not watching
> percentages of 500s in prod and crossing your fingers.
I am speaking from very direct experience here.
I was involved in a Rails 3.x --> Rails 5.x upgrade of one of the larger Rails monoliths in the world and the trivial sorts of things a compiler can catch were... well, also pretty trivial in our upgrade path. Just not quite as trivial as they'd be with a compiled language (nobody's denying they have the edge here)
> To model the constraints you create in a dependently typed
> language, you have to create a set of tests and checks in
> your dynamically typed language which are basically the
> equivilant of a full dependent-type-system.
No, that's not how you do it.
You don't "model the constraints" explicitly. You write integration tests, same as you'd do in any sort of language. If MethodA in Class1 is passing the wrong stuff to MethodB in Class2, your integration tests will fail. At least, assuming you've got proper coverage.
But even in a statically typed, compiled language you have to write that test anyway, right? Because you need to make sure that code path actually works and that MethodA is getting the correct response from MethodB.
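The point fits in a tiny sketch (hypothetical `Invoice` and `Formatter` classes, invented here just to illustrate): an integration-style test that exercises the real call path catches a wrong argument much like a compiler would, only at test time.

```ruby
# Invoice calls Formatter through the real code path, so one
# integration-style assertion covers the contract between them.
class Formatter
  def format(amount_in_cents)
    raise TypeError, "expected Integer cents" unless amount_in_cents.is_a?(Integer)
    "$%.2f" % (amount_in_cents / 100.0)
  end
end

class Invoice
  def initialize(formatter)
    @formatter = formatter
  end

  def total
    @formatter.format(1999)  # pass "19.99" here instead and the check below fails
  end
end

# Bare-bones stand-in for an integration test:
result = Invoice.new(Formatter.new).total
raise "integration test failed, got #{result.inspect}" unless result == "$19.99"
```

The failure surfaces at the seam between the two classes, which is exactly what a framework upgrade tends to break.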
There are definitely advantages to strongly typed, compiled languages! To be honest, after a few years in Ruby land, I'm ready to GTFO and go back to something a little more static. But Ruby's not the nightmare you describe it as.
Maybe, but what dependently typed language has the stability, documentation, framework and overall development ergonomics that would let you create something like Github with the resources and the time they did?
> until you need to upgrade, then you have months of no business value delivery and need to bring...
This is not the case in my experience. I've upgraded pretty decent-sized apps (hundreds of models, lines of routes, etc.), and in my experience it would take a couple hours a day spread out over a few days a month, and then I was done (for versions 3-4 and 4-5; never done 3-5).
I would say most of the problem is making sure everyone on the team keeps all functionality as-is. It can be tempting for team members to refactor as they go, but this becomes a huge time sink. Anyway, that's my experience with Rails, but I have no other frameworks to compare it to.
Has anyone migrated a massive app from some PHP Framework like Symfony or from a java framework like Play, or any framework with a large code base?
I have had to upgrade massive systems that were not done with any framework and full of one-off solutions with in-house developed libraries and it was an absolute nightmare, but I'm sure this depends on the language and team. However, in general I think an open-source library used by millions or even hundreds of people is going to have better documentation, bug coverage, support, etc. than something done in house, just IMHO.
So I guess my question would be, what does the alternative look like?
And yet, when Ruby is compared with other languages with regards to bugs in an actual study [1] it does better than many statically typed languages (e.g. Java, C#, Go, Typescript).
Add to that that a Ruby code base will be significantly smaller than a codebase in most statically typed languages. That means less code to maintain, and probably fewer bugs.
This is disingenuous. Quoting the abstract:
> Language design does have a significant, but modest effect on software quality. Most notably, it does appear that disallowing type confusion is modestly better than allowing it, and among functional languages, static typing is also somewhat better than dynamic typing. We also find that functional languages are somewhat better than procedural languages.
The "statically typed" languages dragging those numbers down are probably C and C++ (I say probably because they're the ones with high bug counts in the data), which have other issues that push bug counts up: C is hardly even typed, and both have manual memory management.
Also, there's no control for commit frequency. Some people put everything in one commit, while others commit every line change. The Rails Tutorial even recommends the latter.
Lastly, Scala and Haskell killed in this study, as far as raw numbers go. But it doesn't seem significant.
I'll stick with subjective evaluations for now. This is just too hard to measure.
I am simply referring to the result data from the study. I fail to see how that is disingenuous.
You say Scala and Haskell killed it in the study, and you are right, they were the third and second best language respectively with regards to low rates of bugs. Perhaps you also happened to notice (but failed to mention) what language did best of all: Clojure, a dynamically typed language.
I think the point is that all else being equal, static typing is better. But obviously all else is not very equal at all, and so in practice you can have a very well-designed dynamic language beating static ones on this metric.
Note, in particular, that there's a high confidence, true, but the claim is "picking language X reduces the chances for bugs by a tiny bit." To quote the abstract:
"It is worth noting that these modest effects arising from language design are overwhelmingly dominated by the process factors such as project size, team size, and commit size. However, we hasten to caution the reader that even these modest effects might quite possibly be due to other, intangible process factors, e.g., the preference of certain personality types for functional, static and strongly typed languages."
Personally I like statically typed languages due to playing nicer with autocompletion and in-editor documentation. Every time people make claims about "upgrades being done when project compiles" I die a little inside.
Refactoring is the biggest reason to prefer static typing for me. I just love the ability to right-click -> "Rename", and know for sure that this is done correctly throughout the entire codebase, even if it's megabytes of convoluted code.
Yes, modern IDEs for dynamic languages can also do this 95% of the time via type inference. The problem is that you never know if this time it's going to be the other 5%. And dynamism tends to encourage clever hacks that make code less verbose, but also make it especially hard for any sort of automated tool to figure out - and those can lurk in corners people don't even remember are there.
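A contrived Ruby example of the kind of "clever hack" that falls into that 5%: when the method name is assembled at runtime, a rename tool has no literal identifier to find.

```ruby
# Dynamic dispatch built from strings: an automated rename of #total_price
# would miss the send call below, because the full method name never
# appears anywhere in the source. (Contrived Cart class for illustration.)
class Cart
  def total_price
    1999
  end
end

attribute = "total"  # in real code: a config value, DB column name, request param...
puts Cart.new.send("#{attribute}_price")  # => 1999, invisible to rename tools
```

Nothing is wrong with this code today, but it silently pins the old name in place, and the breakage only shows up at runtime after the rename.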
I disagree with this slightly. I've been using Ruby for a long, long time, and Ruby does give you the tools to refactor safely; few people actually know how to use them, though, and there aren't many high-level tools aimed at doing safe large-scale refactoring. In Ruby you can trace calls and do very powerful introspection, which you can use to verify that things like delegation happened properly.
It also doesn't help that most codebases use ActiveRecord or something similar in every complex class and wind up significantly increasing the interface width and ancestor depth of their code. The point is, I think the language does a pretty good job supporting developers, but there are a lot of bad practices still in use and recommended. Can't fault the language because people are writing shit code.
The “problem” with Ruby is that you basically don't know whether code is valid until you execute it, whereas a statically typed language will enforce a lot of things during compilation and simply refuse to compile the program if an API is used in the wrong way: calling private functions, referencing undefined symbols (simple typos), wrong number of arguments, passing a string where an integer is expected, etc.
Additionally access control is limited in Ruby, which makes it difficult to release a library and have the language enforce that people do not rely on things which are implementation details subject to change.
You're right, all of those things are possible. What I'm arguing is that, given you have a reasonably good test suite and are aware of which patterns are dangerous, you can refactor fairly confidently with the tools given.
For example, it's possible to fetch a class's entire interface before and after a refactor and validate that it is the same. It's possible to dynamically wrap every method you're refactoring, trace the calls, and type-check them. And if you extend that idea, you can output this data to files and perform static analysis. Sure, it all relies on you having some safe execution context to get this information, but Ruby probably has the best testing tools of any language, and many projects have great test coverage.
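As a sketch of that before/after interface check (invented `Account` class, plain Ruby introspection):

```ruby
# Snapshot a class's public interface before a refactor, then diff it
# afterwards to prove nothing public was dropped or renamed by accident.
class Account
  def deposit(amount); end
  def withdraw(amount); end
end

before = Account.public_instance_methods(false).sort

# ...refactor happens here; internals change, but only private ones...
class Account
  private

  def audit(entry); end  # new private helper: invisible to the public interface
end

after = Account.public_instance_methods(false).sort

raise "public interface changed!" unless before == after
puts "interface unchanged: #{after.inspect}"
```

Run the same snapshot in CI around a big refactor and you get a cheap, mechanical guarantee that callers elsewhere in the monolith still see the same methods.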
For library owners, I agree with you - there's no hope. But for application developers maintaining their monoliths with nothing depending on them there's a lot you can do to ensure safe refactoring
It's definitely important, but we (Python shop) have "good" (85%) test coverage and we still see 500s in prod every day because of things a type checker could trivially catch. And this is just in the course of normal operation; this isn't even a migration. Having extensive experience with both Go and Python, I would conservatively estimate that Go requires ~30% fewer tests than untyped Python for the same confidence. That's more than 30% time savings; not only are you not writing 30% of the tests, but that's 30% fewer tests to have to maintain. Of course these aren't the only considerations--for certain tasks Python may be faster to develop with (although I think people forget about things like deployment, tooling, dependency management, performance requirements, etc when they make their estimations).
Yes, we all know the metric is easily gamed but no one at our org is trying to game the metric. We are paid to build a product, not to boost the metric.
It's not about gaming the metric, it's just that the metric doesn't mean very much in the first place. Running a coverage tool during tests won't show you edge cases you forgot to handle in the code under test, it will show you code that's not tested at all. That can sometimes be useful for pointing out blind spots, but you shouldn't derive any confidence in the tests from a high coverage score, even if the people who worked on the project had the best intentions.
Coverage tools could only measure quality of a test suite if you're assuming that either the code is perfect or that the existing tests cover (logically) everything about what they test. Without either of those guarantees, it doesn't tell you anything very meaningful, as you discovered.
The metric is meaningful; I think you're misinterpreting it. To your point, 100% coverage doesn't mean you've eliminated all bugs, but it does mean that your codebase almost certainly has a lower bug yield than the codebase with 50% coverage (assuming no one has gamed the metric).
If you really think that the metric is meaningless and useless for deriving confidence, then you are necessarily asserting that code bases with 100% coverage have indistinguishable bug yields compared to those with 50%, 5%, or even 0% coverage. A claim like this is too extraordinary to be believed without considerable evidence.
I guess it's useful for deriving a baseline level of confidence, like a low coverage score is a red flag, and an increasing coverage score probably corresponds to increasing test coverage, but my issue is that 100% coverage doesn't mean anything about the correctness of the code in absolute terms (unless "gaming the metric" includes not thinking of every edge case, i.e., that we're assuming the existing test suite is perfect). If you're working on a poorly tested codebase, it's a useful relative metric of your progress in testing what already exists, but unless you're assuming the code is already correct, that doesn't mean anything more than that. If you wanted to derive, say, the confidence that you won't see 500s daily in production from a metric, then line coverage isn't an effective one to use for that; the tests you write that give you that kind of confidence don't really help your coverage score. The parts of the codebase in the most urgent need of tests for getting that kind of confidence in place will most of the time be ones that already have good coverage; think of how TDD works, even if you're not doing TDD.

I could agree that a high coverage score is a prerequisite for having confidence that your test suite is comprehensive (in general), but that's such a low bar, it's like saying a full bath is a prerequisite for a nice house; just knowing that shouldn't do much to convince you it's a mansion.
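A toy example of why line coverage over-promises (hypothetical `unit_price` method, plain Ruby): a single test executes every line of the method, so a coverage tool reads 100%, yet an edge case still blows up.

```ruby
# Every line below is executed by the single assertion at the bottom, so a
# line-coverage tool reports 100% for this method. The divide-by-zero edge
# case is never exercised and stays invisible to the metric.
def unit_price(total_cents, quantity)
  per_item = total_cents / quantity
  per_item > 100 ? per_item - 10 : per_item
end

raise "test failed" unless unit_price(300, 2) == 140  # suite is green, coverage 100%
# unit_price(300, 0)  # ZeroDivisionError lurking despite the perfect score
```

This is the line-versus-branch-versus-input distinction in miniature: executing a line once says nothing about the inputs that line will see in production.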
I noticed something similar with Elixir with regards to the number of tests we needed, thanks to the compiler, even though it has late binding and is dynamically typed. A "best of both worlds" exists out there.
TypeScript and Python 3 support gradual typing, so code can be optionally annotated with types. This makes refactoring easier, while still benefitting from how you can quickly sketch out new code dynamically and only annotate it once things start to solidify.
Personally, I'm a fan of very strong, static type systems, so I would prefer to annotate all the things, but I understand other people have different views.
From what I understand, everywhere that you would need to write a type, TypeScript allows you to either not declare a type at all, or put "any" and avoid declaring a specific type. I think you can even go so far as to rename a ".js" file as ".ts" and it's fine, which is a great example of "gradual typing", and I think this all makes TypeScript a dynamic language.
If you don't specify types at all, is that really opting out? This is valid TypeScript code:
function foo(a, b) {
  if (a != 3) {
    return a + b;
  } else {
    return "hello";
  }
}

console.log(foo(2, 5)); // prints 7
console.log(foo(3, 5)); // prints "hello"
This is not like C#, where you would be required to specify that every type is "dynamic". Here, you can optionally choose to specify that the types are "any", but it's still gradual typing.
EDIT: you said "implicit opt-out"... which sounds like a synonym for "opt-in". I don't think "implicit opt-out" is a term.
The "implicit opt-out" is noImplicitAny[1]. That code of yours won't compile with this opt-out enabled. Edit: the "dynamic opt-in" is `any` (and is `dynamic` in C#).
If it was really not a statically typed language, I'd expect this code to at least run. But it detects a type failure at compilation time despite the lack of any declarations.
function foo() {
  return 1;
}

let x = foo().substring(1, 2);
It still compiles the code to JavaScript even with the error, and you can still run it, which makes this more of a warning than anything else.
That warning based on type inference seems like a positive feature... because there's no situation in which that code is right. You can add `: any` to the function declaration and it will stop complaining, but I wouldn't contend that the language is not dynamic because it encourages you not to make a mistake.
The distinction between "warning" and "error" is highly arbitrary anyway. In C++, you also get a warning rather than an error if you, say, return a reference to a local, or read from an uninitialized local. You still get a binary, and it still runs. But these are 100% bugs, which is why most projects enable treat-warnings-as-errors for a lot of that stuff in C++. I'd imagine it's common with TypeScript, as well.
Nowadays the situation with PHP refactoring is quite good: PHP now allows type hints, and if you keep type-hinting variables with PHPDoc-style comments, the IDE refactors code really well.
Can you name a framework where upgrading a very large (several hundred thousand LOC or more) application across 6+ years, two major versions, and multiple minor versions is not a significant undertaking?
FWIW at my former employer we had a huge Rails monolith with something like 500K+ lines of code. On top of that, our genius architects had split it up into a very nonstandard Rails architecture.
We hired these folks (no affiliation, other than that I used to work for a company that hired them) and they did a solid job. They blogged fairly extensively about each incremental upgrade and the problems they encountered:
I would take that warning with a grain of salt. You're talking about the main underlying framework upgrade from 4 years back to today. Take any MVC framework that powers your entire system with a 4 year upgrade gap and you'll end up with the same type of debt.
Also, you're measuring effort in "X number of months", but as the article states, it started as a hobby side project for a few engineers. There is no notion of how much effort it actually represented. Heck, I could take 5 years to upgrade from Angular 1 to Angular 2 if I put in 30 secs per day...
I would actually advocate for a framework that's past its prime/hype period over any newly untested hyped framework any day.
If they had good test coverage and had stayed on top of updates, this would be a non-issue. Going across several major versions of any language/framework is going to be painful.
I've done major upgrades from 3.2 to 4.1/4.2, and from there to 5.1 followed by 5.2 - the only time we had to put in a little work was when moving from 3 to 4.
When moving from 4 to 5, we relied on simple smoke tests and unit tests, and had no major issues or bugs. The biggest effort was to make sure all of the application and environment configurations were up to date and using all the new settings introduced etc.
My very subjective opinion is that either most of these code bases are low on quality (meaning they are harder to maintain in general), too tightly coupled with Rails itself (models stuffed full of logic, instead of using plain ruby objects for logic and keeping ActiveRecord for persistence level logic), or engineers are just too scared to make changes to the codebase - which again is perhaps a combination of bad test coverage and bad code quality.
Either way, the stories of upgrading major versions being a huge undertaking always make me scratch my head and wonder what we're doing wrong if it's easy for us.
And inb4 someone claims our apps are just small and simple - we run about 12 Rails applications in production in various sizes, about half of them being relatively large.
You can refactor with confidence in Ruby if you have a good test suite. Just do it incrementally and test it well along the way, and also (when using Rails) address any deprecation notices you find along the way and you'll be fine.
I've been using Rails since late 2005, and in my last job upgraded a few Rails apps that hadn't been touched since 2008 or so.
The problem I have with statements like this is that they apply to every large framework, not just Rails: ASP, Django, Zend, Cocoa, etc. A combination of libraries is bound to be hard to update two major versions later, when methods and variables have been deprecated.
It depends on where you're coming from. Going to Rails 3.2 from a previous version is painful but upgrading from 3 to 4 or even 5 is considerably less dangerous.
I think it depends on whether "took months to years" means there was a team of several programmers working on it full-time for that period, or whether it just takes a couple of people working on and off and most of the time is waiting to see whether the logs are showing any problems.
(And time under the "took the opportunity to clean up technical debt" heading shouldn't really count.)
Assuming you have proper test coverage, upgrading even a “big” app isn’t that hard. The problem is when companies neglect the value of automated testing until they actually need it.
1. You're citing a specific anecdote (some people... java1.5) and trying to generalize. What matters is not some people, but the average case, which "some people" will not tell you.
2. "easy to upgrade" is not being argued; "easier in general" is. Just because it's easier to upgrade in a statically typed language doesn't make it easy, just easier than for a dynamically typed one.
In effect, you're saying "there are people using statically typed languages who didn't update, so it must not be easy to update".
A statement that makes a similar fallacious jump is: "There are some people who still type slowly on computers so I can't see how anyone could claim typing on computers is generally faster than typing on typewriters".
Anyway, the fact that the compiler catches more errors at compile-time means it should be obvious that it's easier to upgrade a statically typed language.
If I have a Ruby method user.get_id which used to return an int but now returns a UUID in a new version of the framework, in a statically typed language my code just won't compile on the new framework until I handle that, regardless of test coverage; whereas in Ruby, I'll need test coverage of that path, or to read the upgrade notes, or something.
There are valid arguments to be had about dynamic vs static typing, but whether it's safer/easier to perform an upgrade of a library/framework is not an argument that dynamic typing can win easily.
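To make the get_id scenario concrete (hypothetical modules standing in for two framework versions; the UUID value is invented):

```ruby
# Two stand-ins for framework versions: the return type of get_id changes
# from Integer to a UUID String. The caller silently assumes Integer, and
# nothing complains until the code path actually runs.
module FrameworkV1
  def self.get_id
    42  # old behavior: Integer
  end
end

module FrameworkV2
  def self.get_id
    "3f2a77e0-0000-0000-0000-000000000000"  # new behavior: UUID string
  end
end

def next_id(framework)
  framework.get_id + 1  # silently assumes an Integer comes back
end

puts next_id(FrameworkV1)  # => 43
begin
  next_id(FrameworkV2)     # fails only at runtime, on this exact call
rescue TypeError => e
  puts "caught at runtime, not compile time: #{e.message}"
end
```

In a statically typed language the second call site is a compile error the moment you bump the framework version; here it is a production incident waiting on an untested code path.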
If you miss something, the language shouldn't let it compile. Obviously the more backwards compatibility a language desires, the less likely this is to happen.
Much of your comment history is hijacking HN threads to complain about Ruby and Rails. Not sure what terrible things you've seen in your days as a Ruby programmer, but it might be worth it to try and talk to someone about it and get it out and learn to deal with your trauma.
Personal attacks will get you banned here. Please don't do this again.
If you think someone is using HN abusively, you should email us at hn@ycombinator.com so we can investigate. Attacking them in the comments is not cool, and being personally nasty is of course a bannable offense.
I feel this is comparing apples to oranges. Computing platforms supporting a new runtime is less about porting existing code and more like adding a new feature to the codebase. You really don't see many "we ported our app from framework/language version X to Y" articles anywhere, except maybe for Python 2/3, and that generally only happens for one version bump over a long period. Ruby (and particularly Rails) is really not doing as well as some of the other players in this area.