Hacker News

Would this still hold true, in your opinion, if models like O3 become super cheap and a bit better over time? I don't know much about the AI space, but as a vanilla backend dev I also worry about the future :)



I was helping a relative still in college with a project, and I was struck by how lackadaisical they were about cut-and-pasting huge chunks of code from ChatGPT into whatever module they were building, without thinking about why, or what it does, or where it fits, as long as it works. It doesn't help that it's all relatively same-looking JavaScript, so frontend and backend are kinda mixed together. The troubleshooting help I provided was basically untangling the mess by going from first principles and figuring out what goes where. I can tell you I did not feel threatened by the AI there at all; if anything I felt bad for the juniors, and I suspect this is what we old people are going to end up having to support very soon.


I had a similar experience recently.

Not sure how accurate these numbers are, but on https://openrouter.ai/ the most-used "apps" can basically auto-accept generated code and apply it to the project. I was recently looking at top performers on https://www.swebench.com/ and noticed OpenHands does basically the same thing, or something similar. I think the trend is going to get much worse, and I don't think Moore's Law is going to save us from the resulting chaos.


AI slop-fixing consultants, at your service! There's hope for us veterans yet. ;)


It will be kinda like modern furniture vs. “old human-written code.”

So many people won’t be able to get paid for the quality of their work, but the people who buy the cheaper product will get what they pay for.


Let's see how O3 pans out in practice before we start setting it as the standard for the future.


Mamba-ish models are the breakthrough to cheap inference, if they pan out. Calling it a dead end already is just silly.


We know that OpenAI is very good at at least one thing: generating hype. When Sora was announced, everyone thought it would be revolutionary. Look at how it turned out in production. Same when they started floating rumours that they had some AGI prototype in their labs.

They are the Tesla of the IT world: overpromise and underdeliver.


It's a brilliant marketing model. Humans are inherently highly interested in anything which could be a threat to their well-being. Everything they put out is a tacit promise that the viewer will soon be economically valueless.


I hope people will come to the realisation that we have created a good plagiarizer at best. The "intelligence" originates from the human beings who created the training data for these LLMs. The hype will die when reality hits.


Hype is very interesting. The concept of Hyperstition describes fictions that make themselves real. In this sense, hype is an essential part of capitalism:

"Capitalization is [...] indistinguishable from a commercialization of potentials, through which modern history is slanted (teleoplexically) in the direction of ever greater virtualization, operationalizing science fiction scenarios as integral components of production systems." [0]

"Within capitalist futures markets, the non-actual has effective currency. It is not an "imaginary" but an integral part of the virtual body of capital, an operationalized realization of the future." [1]

This corresponds to the idea that virtual is opposed to actual, not real.

[0] https://retrochronic.com/#teleoplexy-12

[1] https://retrochronic.com/#on-accelerate-2b


Religion too. Those who are told a prophecy is to come have a lot of incentive to fulfill it. Human belief systems are strange and interesting because (IMO) of the entanglement of beliefs with identity.


Generally speaking, I think it would. I’m open to being wrong. There is a non-trivial amount of hype around O3, and while it would certainly be interesting if it were cheap, I don’t think it would address the important gaps in how current AI recognizes and uses context.

For example, I have little to no expectation that it will handle software architecture well. Especially refactoring legacy code, where two enormous contexts need to be held in mind at once.


I'm really curious about something, and would love for an OpenAI subscriber to weigh in here.

What is the jump to O1 like, compared to GPT4/Claude 3.5? I distinctly remember the same (if not even greater) buzz around the announcement of O1, but I don't hear people singing its praises in practice these days.


I gave up on GPT-4/Claude 3.5 about 6 months ago as not very helpful; they produced plausible but wrong code.

Having the o3-mini model available to me, on the other hand, I'm very impressed with its fast, succinct, correct answers while tooling around in zsh on my Mac: what things are called, why they exist, why MacPorts is installing db48, etc. It still fails to write simple bash one-liners. (I wanted to pipe the output of ffmpeg to a column of --enabled-features and it just couldn't do it.)
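For reference, one way such a one-liner could look (a sketch; assumes ffmpeg is on your PATH, and that by --enabled-features I mean the --enable-* build flags ffmpeg prints in its "configuration:" line):

```shell
# Split ffmpeg's version/configuration output on spaces and keep
# only the --enable-* build flags, one per line.
ffmpeg -version 2>/dev/null | tr ' ' '\n' | grep -- '--enable'
```

The `--` before the pattern stops grep from treating `--enable` as an option.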

It's a very helpful rubber duck, though still not going to suffice as an agent; I think it's worth a subscription. I wanted to do everything local and self-hosted, and briefly owned a $3000 Mac Studio to run llama3.3-70B, but it was only as good as GPT-4 and too slow to be useful, so I returned it. In that context even $200/mo is relatively cheap.


I don't know how to code in any meaningful way. I work at a company where the bureaucracy is so thick that it is easier to use a web scraper to port a client's website blog than to just move the files over. GPT-4 couldn't write me a working scraper to do what I needed. o1 did it with minimal prodding. It then suggested and wrote me an ffmpeg front-end to handle certain repetitive tasks with client videos, again with no problem. GPT-4 would often miss the mark and then write bad code when presented with such challenges.


Try Claude. I get even better code results.


O1 is fine.


No degree of cheapness will be able to offset the “creates more work for me rather than less” part.



