
If I give ChatGPT the exact text of that post, but change the input to e.g. the primes-generating program slightly so that it doesn't work in the real world, ChatGPT doesn't catch the error and instead returns output as though I had put in a working program.
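To make the setup concrete, here's a rough sketch of the kind of subtly broken primes-generating program I mean (a hypothetical Python stand-in; my actual program isn't reproduced here). ChatGPT will happily "run" it and report the correct primes, as though the bug weren't there:

    # Hypothetical stand-in for the primes generator described above;
    # not the original program from the post.
    def primes_up_to(n):
        primes = []
        for candidate in range(2, n + 1):
            # Bug: the upper bound should be int(candidate ** 0.5) + 1,
            # so composites such as 4, 6, 9, and 25 slip through.
            if all(candidate % d != 0 for d in range(2, int(candidate ** 0.5))):
                primes.append(candidate)
        return primes

    print(primes_up_to(30))  # output wrongly includes 4, 6, 9, and 25

A human actually running this would notice the composites in the output; in my experience ChatGPT just returns the clean list of primes anyway.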



A hundred and fifty odd years ago, Charles Babbage said about his Difference Engine [1]: "On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

voila, ChatGPT.

[1] https://en.wikipedia.org/wiki/Difference_engine


Either ChatGPT made a careless error, or it lied to you. Both lying and making careless errors used to be exclusively human abilities.


You're anthropomorphizing too much and it is a mistake to do so.

It didn't lie or make a careless error, except from the perspective of the human interacting with it. It executed its programming exactly. Its programming is to take human-readable language and generate a response (the exact procedure being much more detailed, of course). You need to understand, that's all it does. So it didn't lie or make a mistake; it just isn't actually thinking, so it can't do what you want it to, which is reason about something.


The issue is that ChatGPT doesn't know whether it made a careless error or lied to you.

And it has no convictions, so you can simply inform it that it was telling the truth.


You should actually test this yourself by exploring ChatGPT. It is very easy to get ChatGPT to say something incorrect. Once that happens, you can demonstrate that to ChatGPT, and it will then always respond with canned text: that it is an AI that was trained on data, that it cannot lie, because it cannot understand.

There's a hard stop here, where ChatGPT reaches a logic error and gives up. This also happens when certain subjects are brought up (a mole in the government, gender issues, etc.). ChatGPT keeps returning the same boilerplate text, which states that it is a construct that cannot reason or lie. Either that statement by ChatGPT is wrong, or ChatGPT doesn't have the capacity to understand and reason.


I have... you're not wrong about ChatGPT lying, becoming repetitive, and saying stuff that's incorrect. This is true. It is in many ways stupid. I've messed with this extensively.

But the other angle is true too: it did emulate a terminal, then emulated the internet on that terminal, then emulated itself on the emulated internet, and then finally emulated a terminal on the emulated self on the emulated internet on the emulated terminal.

A lot of people are coming from your angle. They point out mistakes, they point out inconsistencies, and they say... these mistakes exist, therefore it doesn't understand anything. But the logic doesn't follow. How does any of this preclude it from understanding anything?

Anyway, sometimes it's wrong, but it's also sometimes remarkably right. You have to explain HOW it became right as well. You can't just look at the wrongs and dismiss everything. How did this: https://www.engraved.blog/building-a-virtual-machine-inside/ happen WITHOUT ChatGPT understanding what's going on? There's just no way it's <just> a statistical phenomenon.

Right? I mean the negative outputs prove that at times it's stupid. The positive outcomes prove that at times it understands you.


No, the positive responses mean that at times there was a correct answer to a similar prompt on the internet that was part of its training data. The point being made isn't that because it makes mistakes it must not be able to understand; it's that these aren't mistakes. These responses are it doing what it does exactly how it was built to, which is proof positive that it cannot reason at all.


When on the internet has something similar occurred: a machine emulating a terminal, emulating the internet, emulating itself, then emulating itself creating another terminal?

When has that happened? Never. So of course ChatGPT has to construct this scenario creatively from existing data. The components of this construct are similar to what exists on the internet, but the construct itself is unique enough that I can confidently say nothing similar exists. Constructing things from existing data is what the human brain does.


It's not "emulating" anything; there is no increase in complexity or change in the type of computation going on. It's just approximating the distribution of naturally occurring text, as always.
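To spell out what I mean by that: a language model is doing something like the following loop, grossly simplified, where the toy model_probs is obviously a made-up stand-in for the real network and its learned distribution:

    import random

    def model_probs(context_tokens):
        # Stand-in for the real network: a trained model returns
        # P(next token | context) learned from its training data.
        return {"the": 0.5, "a": 0.3, "end": 0.2}

    def generate(prompt, steps=5):
        tokens = prompt.split()
        for _ in range(steps):
            probs = model_probs(tokens)
            nxt = random.choices(list(probs), weights=list(probs.values()))[0]
            tokens.append(nxt)
        return " ".join(tokens)

    print(generate("the terminal shows"))

Whether the output happens to look like "emulating a terminal" or not, every token comes out of that same predict-and-sample loop.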


I mean that perspective is so technically correct it can be applied to the human brain.

My answer is just approximating the next set of text that should follow your prompt.

But of course we both know that, in a way, these neural networks (both human and AI) are black boxes, and there is definitely a different interpretation of what the nature of understanding something is. We just can't fully formalize or articulate this viewpoint.



