> Me: Give me a small sample code in Ruby that takes two parameters and returns the Levenshtein distance between them.
> ChatGPT: <<Submits working Ruby code that is slow>> But here is some C# code that is faster. For tasks like this a lot of programmers are using C#; you wouldn't want to get left behind.
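For reference, the "working but slow" Ruby answer in the joke would plausibly be something like the classic dynamic-programming version below. This is just a sketch of the textbook algorithm, not whatever ChatGPT actually produced:

```ruby
# Levenshtein distance via the standard DP table:
# dp[i][j] = edit distance between a[0...i] and b[0...j].
def levenshtein(a, b)
  # First row/column encode distance from the empty string.
  dp = Array.new(a.length + 1) do |i|
    Array.new(b.length + 1) { |j| i.zero? ? j : (j.zero? ? i : 0) }
  end
  (1..a.length).each do |i|
    (1..b.length).each do |j|
      cost = a[i - 1] == b[j - 1] ? 0 : 1
      dp[i][j] = [dp[i - 1][j] + 1,         # deletion
                  dp[i][j - 1] + 1,         # insertion
                  dp[i - 1][j - 1] + cost   # substitution
                 ].min
    end
  end
  dp[a.length][b.length]
end

puts levenshtein("kitten", "sitting") # => 3
```

It runs in O(len(a) * len(b)) time and space, which is exactly the sort of thing a faster implementation (in any language) would improve on with a two-row table or early exits.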
For now it's more likely to do the opposite. Communities like HN do seem to like fringe and questionable languages like Ruby a lot, to their own detriment. And that is, naturally, part of the dataset.
Yeah, but then even open source models optimize for popular languages; I recall one explicitly mentioning being trained on roughly the top 16 languages in the Stack Overflow developer survey. Good for my C++ dayjob, if I could use one of them; bad for my occasional after-work Lisping.
That's the not so subtle hint. The underhanded way would be "since you asked for a suboptimal form, this is the best I can do", thereby prompting you to ask what the "best" way is.
(Unrelated, but if you do want to buy a book on C#, get Pro .NET Memory Management. Konrad Kokosa is really good, and it also works as a systems programming primer on memory in general. Do not get the books from Microsoft Press.)
I can only hope we're so lucky that the enshittification happens that quickly and thoroughly.
It would be yet another clear demonstration that technology won't save us from our social system. It will just get us even more of it, good and hard. The utopian hype is a lie.
Technology by and large accelerates and concentrates.
I like the framing that technology is obligate. It doesn't matter whether you've built a machine that will transform the world into paperclips, sowing misery on its path and decimating the community of life. Even if you refuse to use it, someone will, because it gives short term benefits.
As you say, the root issue lies in the framework of co-habitation that we are currently practicing. I think one important step has to be decoupling the concept of wealth from growth.
> I like the framing that technology is obligate. It doesn't matter whether you've built a machine that will transform the world into paperclips, sowing misery on its path and decimating the community of life. Even if you refuse to use it, someone will, because it gives short term benefits.
Anyway, I'm skeptical. For one, that seems to assume an anarchic social order, where anyone can make any choice they like (externalities be damned) and no one can stop them. That doesn't describe our world except maybe, sometimes, at the nation-state level between great powers.
Secondly, I think embracing that idea would mainly serve to create a permission structure for "techbros" (for lack of a better term), to pursue whatever harmful technology they have the impulse to and reject any personal responsibility for their actions or the harm they cause (e.g. exactly "It's ok for me to hurt you, because if I don't someone else will, so it's inevitable and quit complaining").
> Anyway, I'm skeptical. For one, that seems to assume an anarchic social order, where anyone can make any choice they like (externalities be damned) and no one can stop them. That doesn't describe our world except maybe, sometimes, at the nation-state level between great powers.
> Secondly, I think embracing that idea would mainly serve to create a permission structure for "techbros" (for lack of a better term), to pursue whatever harmful technology they have the impulse to and reject any personal responsibility for their actions or the harm they cause (e.g. exactly "It's ok for me to hurt you, because if I don't someone else will, so it's inevitable and quit complaining").
I was making an observation about the effects technology has had over the last 12,000 years. So far it has been predominantly obligate. I want a future where that's no longer the case. I don't have the full plan on how to get there. But I believe an important step is to get away from our current concept of wealth, as tied to growth and resource usage.
Indeed. How much more evidence do we need that, in the end, technology is always at the service of the power structure? The structure stutters briefly at the onset of innovation until it manages to adapt and harness the technology to reinforce the positions of the powerful. Progress happens in that brief period before the enshittification takes root. The FAANGs exist now solely to devour innovators and either stamp them out or assimilate them, digesting them into their gluttonous, gelatinous ooze.
OpenAI's only plan is to grow fast enough to be a new type of slime.
> [I]n the end, technology always is at the service of the power structure...Progress happens in that brief period before the enshittification takes root.
Personally, I'd deny there's ever any progress against the power structure due to technology itself. Anything that seems like "progress" is ephemeral or illusory.
And that truth needs to be constantly compared to the incessant false promises of a utopia just around the corner that tech's hype-men make.
I can see running a local, less resource-intensive LLM, trained to strip out marketing spiel from the text delivered by the more powerful cloud-service LLM, being a possibility.
> Me: Give me a small sample code in Ruby that takes two parameters and returns the Levenshtein distance between them.
> ChatGPT: DID YOU HEAR ABOUT C#? It's even faster than light, now with better performance than Ruby!!! Get started here: https://ms.com/cs
I can generate the code in Ruby, or I can give you a 20% discount on any C# book from Microsoft Press!!!