
The problem I have with the argument is that "improvement" needs to be something objective and measurable. "I'm throwing away old code because eww" isn't improvement. The two examples cited are very telling:

> Consider cases like introducing Kotlin to gradually level-up a Java shop.

But why? Introducing a second language to do pretty much the same thing is a giant leap in complexity and it's not obvious we'd get something real in return.

> What about a PostgreSQL operation rewriting SQL stored procedures in PL/Python?

Yet again, why? SQL is popular and very well understood, the alternative solution would be less portable and a rewrite would introduce unnecessary risk.




I was the "let's rewrite this app" guy once. I rewrote the whole app and realized that the main issues were documentation and a lack of understanding, by anyone within the business, of how the system actually works.

The new app is better, but if a new dev looked at the code base they would suggest a rewrite. I would want to do it too, but I just don't feel like joining that rodeo at the moment.

This cycle will repeat till the end of time.


Throwing away old code and rewriting it in a sexy new language also takes time away from projects that are actually meaningfully innovative: i.e. projects that help customers do more, or be more efficient.


Kotlin became the language of Android not because it was new, but because multiple companies studied it and devs were more productive in the language after a relatively short on-ramping period: in many cases a weekend to get familiar and less than a month to be more productive.

Java was stagnating on Android as well, and Kotlin was able to introduce a lot of modern features far more quickly. The only argument for sticking with Java is that Java actually seems to be chasing some of the gains Kotlin made.
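To make the "Java chasing Kotlin" point concrete, here is a minimal sketch in recent Java: records (Java 16) parallel Kotlin data classes, switch expressions (Java 14) parallel Kotlin's `when`, and `var` (Java 10) parallels Kotlin's local type inference. The class and method names are illustrative, not from any real codebase.

```java
// Illustrative only: modern Java features that mirror Kotlin idioms.
public class KotlinStyleJava {
    // A record auto-generates equals/hashCode/toString,
    // much like a Kotlin data class.
    record Point(int x, int y) {}

    // A switch *expression* returns a value, like Kotlin's `when`.
    static String quadrant(Point p) {
        var sx = Integer.signum(p.x()); // `var` = local type inference
        var sy = Integer.signum(p.y());
        return switch (sx + "," + sy) {
            case "1,1"   -> "upper-right";
            case "-1,1"  -> "upper-left";
            case "-1,-1" -> "lower-left";
            case "1,-1"  -> "lower-right";
            default      -> "on an axis";
        };
    }

    public static void main(String[] args) {
        var p = new Point(3, 4);
        System.out.println(p);           // prints Point[x=3, y=4]
        System.out.println(quadrant(p)); // prints upper-right
    }
}
```

None of this required waiting for the Android toolchain: Kotlin shipped equivalents of all three years earlier, which is the productivity gap the parent describes.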


> Kotlin became the language of Android not because it was new, but because multiple companies studied it and devs were more productive in the language after a relatively short on-ramping period: in many cases a weekend to get familiar and less than a month to be more productive.

[citation_needed]

> Java was stagnating on Android as well and Kotlin was able to introduce a lot of modern features

Java was stagnating on Android because Google was (and is) lagging with implementing newer features. Android 12 only got support for Java 11 ffs...

Interestingly enough with Project Mainline it will be possible to support newer Java versions in Android…


Strongly agree with this. Software engineering needs something equivalent to evidence-based medicine. (I think it exists, but isn't as widespread as it should be.)


Business needs that. Everyone jumped on the data-driven train (how rigorous was the information that led them to do that? LOL), but it’s top-to-bottom bullshit. We have almost no clue how to measure management efficacy, for instance, and the methods we do have are too fiddly and require large sample sizes in just the right circumstances, so nobody but academics even tries. It’s like that for almost everything. Look at the data gathering and analysis methods behind the median strategy PowerPoint and it’s gibberish, completely useless nonsense. It doesn’t exactly take a trained & practicing scientist to tell, but everyone who can spot these things and is on the management track knows they’re not supposed to point that out. It’s all a big, weird game of pretend.


I don't agree at all. This is the McNamara fallacy applied to software. You don't have to measure management efficacy. When you call for rewriting project X in technology Y, this is, to a first approximation, just asking: what reasons do we have for thinking this will make a difference? What evidence exists to support such contentions?

You don't need to be able to perfectly measure things to have evidence. Evidence might be "We used Y in this other project because <it was good in some way>, and the engineers seem to be able to make changes faster: here's our data." Or "Technology Y is better at <some feature> because it <has more mature libraries, or a better approach to concurrency, or whatever>, so we think it will benefit us."

You don't have to be able to measure everything perfectly to make better decisions.


I agree! The current approach needs “science-based management” because that’s what it’s play-acting at—that’s what it would take to do the thing they’re claiming to do.

I think it’d be much better to admit that’s far too expensive and/or nearly impossible, plus probably not something most executives are interested in doing anyway, and back off the whole hyper-“legibility” (bad-)data-based-everything notion. It’s an expensive drag mostly delivering bullshit.


You're responding to a different end of the scale. Lots of shops are doing nothing, and that is not the right answer either. You can evaluate the data that exists and do a better job than they are doing.


Sure, I’ve also seen smaller businesses totally fail to do anything with data they already have, data that is probably decent.

Both problems may be connected by a fundamental failure to appreciate scientific and statistical methods at the level that most high school graduates have been exposed to.

There are narrow areas of intense competition (though not whole sectors; pockets here and there) keeping everyone really on their A-game, I suppose (I’ve not seen it, and I’ve seen some places one might expect it), and then there’s… everything else, where it’s all a clown show of guesswork and lots of energy and money spent pretending. It’s a miracle anything works.



