
That is the claim made by AI proponents every time it fails, such as with self-driving cars: humans make mistakes too. Humans make math mistakes, but I wouldn't be satisfied with a calculator that makes them.

ChatGPT is a tool; its value depends on how much I can trust it. Humans are not tools.

> experts in their field tend to inject their own biases and experiential preferences when answering questions in depth.

Another typical argument - everyone makes mistakes, therefore my mistakes aren't relevant. Everyone can do math, but there's a big difference between my math and Timothy Gowers's. Everyone lies and everyone tells the truth at times, but the meaningful difference is one of degree - some people lie all the time, with major consequences, take no responsibility, and cause lots of harm. That's different from a person committed to integrity.



To speak as a proponent, it's not about the er... "relative relevance" so much as the utility.

There are things about a chat model that you can't say about humans - for example, it's not really ethical to keep a human stuffed in your pocket to be your personal assistant at your whim.

I think one of the things folks struggle with in grokking the value of these models is that we're really used to tools being like you say: reliable, doing one thing - as though there are only two states of work, perfect and useless. There are other patterns for interacting with information, and this puts what we used to need humans for within reach, so we can do new things with it. Stuff like:

- brainstorming
- rubber duck debugging
- casually discussing a topic
- exploring ideas / knowledge
- study groups (as in, having other semi-knowledgeable entities around to bounce ideas off of, ask questions of, etc.)

When it comes to self-driving cars, well, that's a bit of a different story, and really more a discussion about ethics, law, and standards. I, and others like those you speak of, are of the opinion that the expectation for autonomous vehicles is a bit high given the rates of human failure, but there are plenty of arguments to be made that automating and scaling a thing means you should hold it to a higher standard anyway. I don't think there's a correct answer on this one - it's complex enough to be a matter of opinion. You mention the potential for harm, and certainly that applies here.

I'm less worried about ChatGPT being wrong - it's much less likely to flatten me at an intersection.


> I think one of the things folks struggle with in grokking the value of these models is that we're really used to tools being like you say: reliable, doing one thing - as though there are only two states of work, perfect and useless. There are other patterns for interacting with information, and this puts what we used to need humans for within reach, so we can do new things with it.

Maybe, but look at it this way: Do you work in business? If so, step back and reread that - it seems a lot like a salesperson finding a roundabout way to say, 'my product doesn't actually work'.


It’s either useful or it isn’t. Comparing AI to either human intelligence or rules-based computing tools is incoherent. Fucking stop it! What we are really talking about are the pros and cons of experiential, tacit knowledge. Humans can do this. Humans can also compute sums. Computers are really good at computing sums. It turns out they can work with experiential knowledge as well. Whodathunk.

What we should be saying is this: there will always be benefits of experiential knowledge and there will always be faults with experiential knowledge, regardless of man vs. machine.



