This is insane. We have created the greatest tool in human history and people are complaining. I can use it to help me code, fix modeling issues as I learn CAD, troubleshoot my two-stroke leaf-blower engine, and consistently walk me through complex LeetCode algorithms. It literally knows everything and people still complain.
It isn’t even close to being the greatest tool in human history. This type of misunderstanding and hyperbole is exactly why people are tired of, bored with, and frustrated by it.
The uncomfortable truth is that AI is the world’s greatest con man. The tools and hype around them have created an environment where AI is incredibly effective at fooling people into thinking it is knowledgeable and helpful, even when it isn’t. And the people it is fooling aren’t knowledgeable enough in the topics being described to realize they’re being conned, and even when they realize they’ve been conned, they’re too proud to admit it.
This is exactly why you see people who are deeply knowledgeable in certain areas pointing out that AI is fallible, while people like CEOs, who lack actual technical depth in those topics, praise it. They know just enough to think they know what “good” looks like, but not enough to realize when the “good” output is just lipstick on a pig.
What is the greatest tool in human history in your opinion?
I think it's too early to call whether AI is the answer to that question, but I think it could be. Yes, LLMs are terrible in all kinds of ways, but there's clearly something there that's of great value. I use it all day every day as a staff-level engineer, and it's making me much better and faster. I can see glimmers of intelligence there, and if we're on a road that delivers human-level intelligence in the next decade, it's difficult to see what else would qualify as the greatest tool humanity has ever invented.
It’s not hype when it’s released and used for concrete tasks. Some are hyping future potential sure. But GP is hyped about how he can use it NOW. Which I agree is very cool.
The human still needs to think, of course. But, I can get to my answer or my primary source using a tool faster than a typical search engine. That's a super power, when used right!
The jump in productivity we had with the world wide web and search engines was several orders of magnitude higher than what you have right now with LLMs, yet I don't remember a single person back in the 2000s calling Google "the greatest tool in human history".
Almost sixty years after ELIZA, chatbots still seem to provoke a very strong emotional reaction in some folks.
People want to remain valuable and this tool takes that away. As long as you still find meaningful ways to contribute, all is good. But this says nothing about all the skills mastered that have been rendered effectively useless. And in time, as this tool gets better, it could rob you of the agency to change your environment.
Maybe the tool knows nothing.
But it allows me to learn niche things often much faster than via a web browser. So it has value for me.
I think there are a lot of dangers and problems with it, and frankly I’d probably be happier if it had never been invented. But even then I can still see the value it has.
A tool that constantly generates incorrect information, lacks any real awareness or internal state, and doesn’t even recognize its own mistakes, even when you explicitly point them out, is, frankly, pretty useless.
Ever had this conversation with ChatGPT?
- ChatGPT: Here's my solution!
- You: This is wrong, you need to do X.
- ChatGPT: You're right! My solution was wrong because [repeats what you said]. Here’s my revised answer!
- You: This is still wrong. I said do X.
- ChatGPT: Understood! This clarifies: [still gets it wrong].
Or worse, you can trick it:
- ChatGPT: X + Y = Z (which is actually correct)
- You: No, X + Y = Q (which is false)
- ChatGPT: You're right, X + Y = Q is correct because...
I guess it's useful for generating boilerplate code or text, but even then, it often makes mistakes.
This. A precise text auto-completer. Without reasoning or cognitive processes whatsoever, just a very marketable illusion of them. Despite the lies, a great tool.
As with any tool, it takes knowledge and responsibility. Just like the Unix chainsaw.
So would the greatest tool in human history in your mind be something that is used to plagiarise most content in the world and then output correct-30%-of-the-time slop? Or is there another definition you would use?
I'm struggling to think how this could even be in the top 10 tools in human history.
As a counterpoint, if I were to be teleported naked onto an abandoned island 10,000 years ago and could bring one "tool" with me, a solar-powered terminal with an LLM would be my #1 pick. An able-bodied and resourceful individual equipped with an LLM could accomplish far, far more than with any other tool I can think of.
The Internet was the greatest tool in human history and it has led to all sorts of issues. Misinformation at amazing scale that has undone much of the social progress made in the 20th century and 2000s. It has driven the greatest wealth disparities ever seen. It has become a harmful addiction for millions of people.
That's not to say the Internet hasn't caused good things to happen, but to ignore the bad things is counterproductive. Maybe it's ok to slow down and take a step back to make sure we're not doing more bad than good.
And it's cheap. Imagine I told you that you could have direct access to every PhD in the world and they would respond to all of your questions instantly... for $20/mo. Mind-blowing stuff and people still complain.
It gives me better answers on most things than my actual PhD friends do. So... yeah?
The funny thing is, it's somewhat less useful for certain business stuff, because so much of that is private within corporations. But the stuff you learn in a PhD program is all published. And it's pretty phenomenal at distilling that information to answer any specific question.
Multiple times a week I run into an obscure term or concept in a paper that isn't defined well or doesn't seem to make sense. I ask AI and it explains it to me in a minute. Yes, it's basically exactly like asking a PhD.
The AI is optimized for producing text that sounds like it makes sense and is helpful.
This is not a guarantee that the text it produces is a correct explanation of the thing you are asking about. It’s a mental trick like a psychic reading tea leaves.
And they do. They stand on the fact that they save time, raise productivity, and assist in learning. That's the merit.
Demanding absolute perfection as the only measure of merit is bonkers. And if that's the standard you hold everything in your life to, you must be pretty disappointed with the world...
None of my comments say I’m demanding perfection. That’s a fallacy that reduces my position to an absurdity so it can be easily dismissed.
LLMs have not improved my productivity. When I have tried to use them, they have been a net negative. There are many other people who report a similar experience.
> This is not a guarantee that the text it produces is a correct explanation
A guarantee of correctness is perfection. I don't know how else to take it.
Not all jobs or tasks are helped by LLMs. That's fine. But many are, and hugely.
You dismissed it for everyone as "a mental trick like a psychic reading tea leaves". Implying it has no value for anyone.
Your words.
That's just wrong.
Now you say it doesn't have value for you and for some other people. That's fine. But that's not what you were saying above. That's not what I was responding to.
"But the stuff you learn in a PhD program is all published." - What? This is the kind of misunderstanding of knowledge that AI boosters present that drives me insane.
And your last sentences conflate a PhD with a Google search or even a dictionary lookup. I mean, c'mon!
I'm not talking about learning practical skills like research and teaching, or laboratory skills. I'm talking about the factual knowledge. Academia is built on open publishing. Do you disagree?
And the things I'm looking up just can't be found in Google or a dictionary. It's something defined in some random paper from 1987, further developed by someone else in 1998, that the author didn't cite.
And something that led you to that paper would be wonderful, but instead you have been disconnected from the social side of scholarship and forced to take the AI "at its word".
I've also seen AI just completely make up nonsense out of nowhere as recently as last week.
Huh? Nobody's forcing me to "take the AI at its word". It's the easiest thing to verify.
And I've got enough of the social side of scholarship already. Professors don't need me emailing them with questions, and I don't need to wait days for replies that may or may not come.
You literally ask it for the paper(s) and author(s) associated, put them into Google Scholar, and go read them. If it hallucinates a paper title, Scholar will usually find the relevant work(s) anyway because the author and title are close enough. If that fails, you Google some of the terms in the explanation, which is generally much more successful than Googling the original query. If you can't find anything at all, then it was probably a total hallucination, and you try the prompt a different way. That probably happens less than 1% of the time, however.
I mean, it's all just kind of common sense how to use an LLM.
It’s not actually cheap, just subsidized. Becoming reliant on it now virtually guarantees you will have a tough decision to make later when profitability is actually important.