We do have Python tutorials and SDKs showing how to use our service for ... geocoding, the actual service we provide.
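For anyone curious, genuine usage looks roughly like this - a minimal sketch assuming the opencage Python package, with a placeholder API key and example address:

    from opencage.geocoder import OpenCageGeocode

    # forward geocode a free-form address into coordinates
    geocoder = OpenCageGeocode('YOUR-API-KEY')  # placeholder key
    results = geocoder.geocode('82 Clerkenwell Road, London')
    print(results[0]['geometry']['lat'], results[0]['geometry']['lng'])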
I wrote the post mainly to have a page I can point people to when they ask why "it isn't working". Rather than take the user through a tour of past posts, I need something simple they will hopefully read. But fair point, I can add a link to last year's post about the erroneous YouTube tutorials as well.
What I think you can't appreciate is the difference in scale. A faulty YouTube video drives a few users our way. In the last few weeks ChatGPT has been sending us several orders of magnitude more frustrated sign-ups.
I get frustrated at the number of things ChatGPT gets blamed for that aren't its fault. It is completely understandable that if there are repos out on GitHub like the one for Phomber[1], ChatGPT would find that code and have no idea that it was phoney. Suggesting that ChatGPT just made this up out of thin air, when you know it didn't, is not very responsible.
You are blaming the victim. OpenAI is to be blamed.
They know what they are doing. They provide something that sounds over-confident about anything it says, knowing full well that it can't actually know whether what it generated is accurate, because it is designed to generate plausible sentences using statistics and probabilities, not verified facts from a database. On top of that, they trained it on an uncontrolled set of texts (though IIUC even a set of verified texts would not be enough; nothing guarantees that an LM would produce correct answers). And they provide it to the general population, which doesn't always understand very well how it works and, above all, its limitations. Including developers. Few people actually understand this technology, including myself.
Inevitably, it was going to end up causing issues.
This post factually presents a problematic situation for its authors. How ChatGPT works or how it can end up producing wrong results is irrelevant to the post's authors' problem. It just does, and it causes trouble because of the way OpenAI decided to handle things.
And it's not "fair enough, because this false stuff can be found on the internet".
OpenAI is providing a language model that has some understanding of the world (in order to do that language model thing) and, in some situations, is surprisingly accurate when used as a knowledge base.
The key thing is that it isn't a knowledge base. It doesn't claim to have correct information about the world. It has the ability to translate a question in natural language into what would look like an answer in natural language - but that answer isn't necessarily correct, because it's about the language rather than the knowledge.
People misusing the LLM as a knowledge base are at fault just as a person misusing a CD tray as a cup holder is at fault if it doesn't work correctly as a cup holder.
Phomber is not the best example. Ed contacted the developer of that tool over a year ago about the issue, asking them to remove the mentions of OpenCage, and as far as I can see the author removed it: https://github.com/s41r4j/phomber/issues/4