I actually don't have doubts that LLMs are quite good at writing software.
The problem for me is one of practicality. If, after hundreds of lines of AI-written code, I notice some sort of issue (regarding scale, security, formatting, logic, etc.), I'm basically forced to start over.
We all know that reading code is way less pleasant than writing it. So, for me, LLMs are most useful for writing code that I know is going to be correct without my having to go back through it. For example, basic tRPC CRUD functions.
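For what it's worth, "basic CRUD" here means the kind of boilerplate a tRPC procedure would just wrap. A minimal sketch, with an illustrative in-memory store (the `Todo` type and function names are made up for the example, not a real tRPC router):

```typescript
// Hypothetical in-memory CRUD logic of the sort a tRPC router's
// query/mutation procedures would delegate to.
type Todo = { id: number; title: string };

const todos: Todo[] = [];
let nextId = 1;

function createTodo(title: string): Todo {
  const todo = { id: nextId++, title };
  todos.push(todo);
  return todo;
}

function getTodo(id: number): Todo | undefined {
  return todos.find((t) => t.id === id);
}

function updateTodo(id: number, title: string): Todo | undefined {
  const todo = getTodo(id);
  if (todo) todo.title = title;
  return todo;
}

function deleteTodo(id: number): boolean {
  const index = todos.findIndex((t) => t.id === id);
  if (index === -1) return false;
  todos.splice(index, 1);
  return true;
}
```

Code this mechanical is exactly where review overhead is lowest, which is the point: there's very little for the model to get subtly wrong.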
I'm a firm believer that if you want to start a tech company, at least one of the founders has to have a technical background. Even if you outsource all the work, you need to be able to ask the right questions related to security.
It's not just that this database was accessible via the internet. It was all public data. Storing people's IDs in a public database is just... wow.
But now we have amazing vibe coding tools that mean that you don’t need to be technical or whatever - you can just deliver results. After all, the best LinkedIn influencers and founders don’t care about how something is delivered, just what.
Yeah, we’d finally, nearly, got to the point of realizing that treating IT and security and such as simply a cost centre to be minimised wasn’t leading to optimal security outcomes - only to throw it all away again.
Tech background isn’t sufficient. They need to have security background. Some of the worst people I’ve met with respect to security have been technical enough to have the wrong level of confidence.
Doctors need to study 5 to 8 years and pass rigorous exams
Attorneys the same
Structural architects and engineers the same
We have a couple of decades more until we lock tech up the same way. Up until now it was all fun and games, but from here on tech will be everywhere and will be load-bearing.
Totally agree. I'm a big fan of neovim but didn't find a good AI solution that compared to Cursor. Even though I miss some of my neovim plugins, Cursor + Vim plugin is pretty hard to beat.
And it's cheap. Imagine I told you that you could have direct access to every PhD in the world and they would respond to all of your questions instantly... for $20/mo. Mind-blowing stuff and people still complain.
It gives me better answers on most things than my actual PhD friends do. So... yeah?
The funny thing is, it's somewhat less useful for certain business stuff, because so much of that is private within corporations. But the stuff you learn in a PhD program is all published. And it's pretty phenomenal at distilling that information to answer any specific question.
Multiple times a week I run into an obscure term or concept in a paper that isn't defined well or doesn't seem to make sense. I ask AI and it explains it to me in a minute. Yes, it's basically exactly like asking a PhD.
The AI is optimized for producing text that sounds like it makes sense and is helpful.
This is not a guarantee that the text it produces is a correct explanation of the thing you are asking about. It’s a mental trick like a psychic reading tea leaves.
And they do. They stand on the fact that they save time, raise productivity, and assist in learning. That's the merit.
Demanding absolute perfection as the only measure of merit is bonkers. And if that's the standard you hold everything in your life to, you must be pretty disappointed with the world...
None of my comments say I’m demanding perfection. That’s a straw man, reducing my position to an absurdity so it can be easily dismissed.
LLMs have not improved my productivity. When I have tried to use them, they have been a net negative. There are many other people who report a similar experience.
> This is not a guarantee that the text it produces is a correct explanation
A guarantee of correctness is perfection. I don't know how else to take it.
Not all jobs or tasks are helped by LLMs. That's fine. But many are, and hugely.
You dismissed it for everyone as "a mental trick like a psychic reading tea leaves". Implying it has no value for anyone.
Your words.
That's just wrong.
Now you say it doesn't have value for you and for some other people. That's fine. But that's not what you were saying above. That's not what I was responding to.
"But the stuff you learn in a PhD program is all published." - What? This is the kind of misunderstanding of knowledge that AI boosters present that drives me insane.
And your last sentences conflate a PhD with a Google search or even a dictionary lookup. I mean, c'mon!
I'm not talking about learning practical skills like research and teaching, or laboratory skills. I'm talking about the factual knowledge. Academia is built on open publishing. Do you disagree?
And the things I'm looking up just can't be found in Google or a dictionary. It's something defined in some random paper from 1987, further developed by someone else in 1998, that the author didn't cite.
And something that led you to that paper would be wonderful, but instead you have been disconnected from the social side of scholarship and forced to take the AI "at its word".
I've also seen AI just completely make up nonsense out of nowhere as recently as last week.
Huh? Nobody's forcing me to "take the AI at its word". It's the easiest thing to verify.
And I've got enough of the social side of scholarship already. Professors don't need me emailing them with questions, and I don't need to wait days for replies that may or may not come.
You literally ask it for the paper(s) and author(s) associated, put them into Google Scholar, and go read them. If it hallucinates a paper title, Scholar will usually find the relevant work(s) anyways because the author and title are close enough. If those fail, you Google some of the terms in the explanation, which is generally much more successful than Googling the original query. If you can't find anything at all, then it was probably a total hallucination, and you try the prompt a different way. That probably happens less than 1% of the time, however.
I mean, it's all just kind of common sense how to use an LLM.
It’s not actually cheap, just subsidized. Becoming reliant on it now virtually guarantees you will have a tough decision to make later when profitability is actually important.
Walk into any coffee shop or office and I can guarantee that you'll see several people actively typing into ChatGPT or Claude. If it was so useless, four years on, why would people be bothering with it?
I don't think you can even be bullish or bearish about this tech. It's here and it's changing pretty much every sector you can think of. It would be like saying you're not bullish about the Internet.
I honestly can't imagine life without one of these tools. I have a subscription to pretty much all of them because I get so excited to try out new models.
What does this mean? You don't get to write off the difference between your "target price" and actual sale price.
And a reminder that companies always do better if they make more money; there's no point in purposeful losses (unless you're getting a side benefit like goodwill from charity).
I think, but am not sure, the point they're trying to make is that hospitals and insurance companies can "charge" really high prices and then forgive those high prices in exchange for a tax break?
That's not at all how it works so they don't have any idea what they're talking about. This is like when people say businesses can "write it off on their taxes". Only people who don't know what that really means say it.
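To make the arithmetic concrete, here's a minimal sketch of what "writing it off" actually does (the flat 21% rate is an assumption for illustration): a deduction reduces taxable income, so the tax saved is only rate × amount, never the full amount, and you can't deduct revenue you never actually booked.

```typescript
// Illustrative numbers only: a deduction lowers taxable income,
// so the tax saved equals rate * deduction, not the deduction itself.
const assumedCorporateRate = 0.21; // hypothetical flat rate

function taxSavedByDeduction(deduction: number, rate: number): number {
  return deduction * rate;
}

// "Writing off" a $1,000 expense at a 21% rate saves $210 in tax;
// the business is still out the remaining $790.
const saved = taxSavedByDeduction(1000, assumedCorporateRate);
```

So a forgiven "target price" markup can't turn into a tax windfall: the markup was never income in the first place, and even a real deduction only claws back a fraction of the amount.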