> I just question how useful it truly is as a heuristic
I think the author is presenting them as analytical tools that might or might not be useful depending on the situation.
Very often when faced with a difficult problem it's hard to know where to start attacking. Any idea, even if simplistic and wrong, can be useful to start gaining insight on what is going to work and why; even just refuting the original idea with a clear counterargument might suggest alternative avenues.
OT: this is IMO part of the reason why people like LLMs so much. Maybe the answer is trash, but articulating why it's trash gets you unstuck.
I don't really like 0 increment games OTB. Online it's fine and you could argue that time management is part of the game, but on a physical board you end up knocking down pieces and having a lot of arguments about whether moves were legal or not.
5 minutes became popular when digital clocks were rare and expensive, but these days a digital clock that does Fischer increment is like $50.
3 minutes + 2 seconds is about the same length as 5 minutes and much better suited to OTB play IMO.
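The arithmetic behind that equivalence, sketched in Python (the ~60 moves per player is my assumption of a typical blitz game length, not something the comment states):

```python
def total_time_seconds(base_minutes, increment_seconds, moves):
    """Total thinking time per player under Fischer increment:
    base time plus the increment earned on every move played."""
    return base_minutes * 60 + increment_seconds * moves

# A 3+2 game over ~60 moves gives each player the same total
# time budget as a 5+0 game:
print(total_time_seconds(3, 2, 60))  # 300 seconds = 5 minutes
print(total_time_seconds(5, 0, 60))  # 300 seconds = 5 minutes
```

The difference is in the distribution: with the increment you always have at least 2 seconds for the next move, so the frantic piece-knocking scramble at the end never happens.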
I don't think the problem is with the tone, but rather with the messaging. All AI-related marketing has been along the lines of "let the computer think for you" rather than "use the computer to think better", infantilizing rather than empowering the user.
From the bicycle of the mind to the tricycle of the mind.
Sadly that's what most users expect. They don't want to be bothered with learning and understanding, they just want a quick result now with as little effort as possible.
I think it's better at specifying intent, similar to how you would use "for" and "while" in a programming language: they can express literally the same thing, and more often than not the two forms compile to, respectively, identical query plans and identical asm/bytecode.
Also if you work a lot with databases you often need to do outer joins, a full cartesian product is almost never what you want. The join syntax is more practical if you need to change what type of join you are performing, especially in big queries.
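A minimal sketch of both points using Python's built-in sqlite3 (the table and column names here are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (10, 1, 9.99);
""")

# Implicit join: the join condition is buried in the WHERE clause.
implicit = conn.execute(
    "SELECT u.name, o.total FROM users u, orders o WHERE u.id = o.user_id"
).fetchall()

# Explicit join: same result (and typically the same query plan),
# but the intent is stated up front.
explicit = conn.execute(
    "SELECT u.name, o.total FROM users u JOIN orders o ON u.id = o.user_id"
).fetchall()

# Changing the join type is now a one-word edit: the LEFT JOIN keeps
# bob, who has no orders -- something the comma syntax can't express.
left = conn.execute(
    "SELECT u.name, o.total FROM users u LEFT JOIN orders o ON u.id = o.user_id"
).fetchall()

print(implicit == explicit)  # True
print(left)                  # [('alice', 9.99), ('bob', None)]
```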
> who are some of the most difficult buyers to convenience to pay
Citation very much needed?
Unless you're talking about enterprise software specifically, developers are probably among the most willing to shell out cash for software, it's the general public who seems to be fine with ad-ridden spyware freemium nonsense as long as it's free.
There's a long history of the likes of Redis, MongoDB, Grafana, Terraform etc first releasing their product as free and open source to get adoption, hoping to make money by some indirect means, then relicensing to closed source later on because nobody pays for something they can get for free.
And pretty much all major programming languages and libraries are given away for free too. Someone tries to introduce BitKeeper, a commercial version control system, for the Linux kernel? They won't stand for it, someone's gotta clone it and give the clone away for free.
Hell, I've heard loads of people here on HN complaining when a SaaS company introduces features exclusively useful to large corporations - like single-sign-on integration - then wants to get paid for them.
There's a handful of exceptions. For example game developers will pay $$$ for "Unity" and store their assets in "Perforce" and suchlike. And I believe it's possible to pay for Visual Studio.
This is where remembering the free-costless and free-libre distinction is important. Linux is free-libre, so it's natural that it insists on its dependencies being free-libre.
Free-libre is necessarily also free-costless, but not the other way round.
> Visual Studio
It's interesting that every Microsoft shop I've worked at happily pays for MSDN, which gets you not just VS but a huge amount of other stuff.
Perforce handles large binary assets much better than git. There are also paid-for closed version control systems that are really bad but get used anyway, such as in IC design.
Every single time someone posts about some commercial tool, on a website initially dedicated to startups, there is always a set of replies suggesting half-baked open source alternatives to use instead.
Developers regularly underestimate the work required to build something and will spend a lot of time building something themselves vs buying someone else's tool for $5 / month.
It's hard because developers don't usually have spending authority or budget. Often, nor does their manager or their manager's manager. To get the company to buy something you have to escalate to an absurdly high place in the org chart and so devs will often try to cobble something together out of free stuff, even if it's far less efficient, because spending developer time doesn't require permission whereas spending credit card balance does.
Unrelatedly, there's also to some extent an expectation that everything is free, even for commercial users. The most common pricing question I get about my product is "can't you make it free for commercial projects that don't have revenue yet", i.e. effectively asking me to become an investor in their venture. Because often they want to build a product company, but not spend any money to do so.
Source: I run a small software company that sells to developers.
No, non-developers are more likely to buy software they need for their profession (that is why tons of terrible software exists everywhere for such tasks). Ad-ridden spyware is mostly for consumption things like games and random websites. On HN every now and then you will see people saying you can do anything with nano and vim/emacs, and only recently have some of them started using LSP. Anything that is not totally free and open source gets 100 denials on HN.
It was a bit surprising to click this link on my VPN. Wish it had a NSFW tag. "Uncensored" is a bit of a crapshoot when it comes to GenAI; half the time it just means "no instruction tuning."
The link should probably just be the GitHub repo. It's an interesting project and discussion topic (although maybe a little flamewar inviting), but it doesn't seem necessary to link to the product itself in this case.
The problem I have with the argument is that "improvement" needs to be something objective and measurable. "I'm throwing away old code because eww" isn't improvement. The two examples cited are very telling:
> Consider cases like introducing Kotlin to gradually level-up a Java shop.
But why? Introducing a second language to do pretty much the same thing is a giant leap in complexity and it's not obvious we'd get something real in return.
> What about a PostgreSQL operation rewriting SQL stored procedures in PL/Python?
Yet again, why? SQL is popular and very well understood, the alternative solution would be less portable and a rewrite would introduce unnecessary risk.
I was the "let's rewrite this app" guy once. I rewrote the whole app and realized that the main issues were documentation and a lack of understanding within the business of how the system actually works.
The new app is better, but if a new dev looked at the code base they would suggest a rewrite. I would want to do it too, but I just don't feel like joining that rodeo at the moment.
Throwing away old code and rewriting it in a sexy new language also takes time away from projects that are actually meaningfully innovative, i.e. projects that help customers do more or be more efficient.
Kotlin became the language of Android not because it was new, but because multiple companies studied it and found devs were more productive in the language after a relatively short on-ramping period: in many cases a weekend to get familiar and less than a month to be more productive.
Java was stagnating on Android as well, and Kotlin was able to introduce a lot of modern features far more quickly. The only argument for sticking with Java is that it now seems to be chasing after some of the gains Kotlin made.
> Kotlin became the language of Android not because it was new, but because multiple companies studied it and found devs were more productive in the language after a relatively short on-ramping period: in many cases a weekend to get familiar and less than a month to be more productive.
[citation_needed]
> Java was stagnating on Android as well and Kotlin was able to introduce a lot of modern features
Java was stagnating on Android because Google was (and is) lagging with implementing newer features. Android 12 only got support for Java 11 ffs...
Interestingly enough with Project Mainline it will be possible to support newer Java versions in Android…
Strongly agree with this. Software Engineering needs something equivalent to evidence-based-medicine. (I think it exists, but isn't as widespread as it should be.)
Business needs that. Everyone jumped on the data-driven train (how rigorous was the information that led them to do that? LOL) but it's top-to-bottom bullshit. We have almost no clue how to measure management efficacy, for instance, and the methods we do have are too fiddly and require large sample sizes in just the right circumstances, so nobody but academics even tries. It's like that for almost everything. Look at the data gathering and analysis methods behind the median strategy PowerPoint and it's just gibberish, completely useless nonsense, and it doesn't exactly take a trained and practicing scientist to tell, but everyone who can spot these things and is on the management track knows they're not supposed to point that out. It's all a big, weird game of pretend.
I don't agree at all. This is the McNamara fallacy applied to software. You don't have to measure management efficacy. When you call for rewriting project X in technology Y, this is, at a first approximation, just saying: What reasons do we have for thinking this will make a difference? What evidence exists to support such contentions?
You don't need to be able to perfectly measure things to have evidence. Evidence might be "We used Y in this other project because <it was good in some way>, and the engineers seem to be able to make changes faster: here's our data." Or "Technology Y is better at <some feature> because it <has more mature libraries, or a better approach to concurrency, or whatever>, so we think it will benefit us."
You don't have to be able to measure everything perfectly to make better decisions.
I agree! The current approach needs “science-based management” because that’s what it’s play-acting at—that’s what it would take to do the thing they’re claiming to do.
I think it’d be much better to admit that’s far too expensive and/or nearly impossible, plus probably not something most executives are interested in doing anyway, and back off the whole hyper-“legibility” (bad-)data-based-everything notion. It’s an expensive drag mostly delivering bullshit.
You're responding to a different end of the scale. Lots of shops are doing nothing, and that is not the right answer either. You can evaluate the data that exists and do a better job than they are doing.
Sure, I’ve also seen smaller businesses just totally failing to do anything with data they already have, that probably is decent.
Both problems may be connected by a fundamental failure to appreciate scientific and statistical methods at the level that most high school graduates have been exposed to.
There’re narrow areas of intense competition (though not whole sectors—pockets here and there) keeping everyone really on their A-game I suppose (I’ve not seen it, and I’ve seen some places one might expect it) and then there’s… everything else, where it’s all a clown show of guesswork and lots of energy and money spent pretending. It’s a miracle anything works.
Last time I had Windows installed on a physical machine was Windows 2000, but I still need to keep virtual Windows boxes around for random reasons (clients having terminally braindead VPN setups is a popular one.)
Boy is it bad! Consumer versions of Windows are basically malware at this point. No idea how people can get stuff done at all.