
For anyone who hasn't seen it, the content-filtering section of their docs, especially the part about getting GPT-3 to behave in a customer service role, is hilarious:

https://beta.openai.com/docs/engines/with-no-engineering-an-...




Customer: I need my internet. The technician is 2 hours late and I need a steady stream of cat gifs directly injected into my veins to stay alive.

Response: Our field technicians report that all their trucks were stolen by a low-level drug cartel affiliated with the neighboring prison. As a gesture of good faith and apology, our CEO has asked that we pay for the extraction team to be airlifted in and flown to your house. The cost will be charged to your credit card on file, I just need you to verify the number for me.


That's far from the most ridiculous excuse I've ever gotten from Tier 1 tech support.


> Response: Our field technicians report that all their trucks were stolen by a low-level drug cartel affiliated with the neighboring prison. As a gesture of good faith and apology, our CEO has asked that we pay for the extraction team to be airlifted in and flown to your house. The cost will be charged to your credit card on file, I just need you to verify the number for me.

Amazing!


It’s really uncanny how well AI can give out unfounded promises, like that the internet will be fixed in 24 hours. I wonder whether there are any legal obligations attached to them.


Yeah, I've always been impressed with how well GPT-3 can give cogent responses, but I've never seen anyone show how to get it to give truthful, informative responses while behaving as a chatbot. Could you feed structured data into the prompt text? Like average response rates in the customer's area, whether there's capacity to support them, the state of the engineering teams?

Having never seen anyone try it, my gut says it would work reasonably well outside of the already-known failure modes (the tendency to loop, make up stories, or joke with/cuss out people).
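One hypothetical way to try the idea above: serialize the structured fields into a preamble ahead of the customer's message and send the whole thing as the prompt. This is just a minimal sketch; the field names and the prompt wording are invented for illustration, not anything from OpenAI's docs.

```python
# Hypothetical sketch: inject structured account/region data into the
# prompt preamble so the model can ground its answers.
# Field names below are made up for illustration.
def support_prompt(facts, customer_message):
    # Render each structured fact as a "key: value" line.
    lines = [f"{k}: {v}" for k, v in facts.items()]
    return (
        "You are a support agent. Answer using only these facts:\n"
        + "\n".join(lines)
        + f"\n\nCustomer: {customer_message}\nAgent:"
    )

facts = {
    "avg_response_time_hours": 24,
    "region_capacity": "normal",
    "known_outage": "none",
}
print(support_prompt(facts, "When will my internet be fixed?"))
```

Whether the model actually sticks to the supplied facts instead of inventing a stolen-truck story is, of course, the open question.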


Yes, there is a line of research combining passage retrieval with question answering. The query is used to rank passages in a database. The top-k passages are concatenated to the question and used as input by GPT to generate an answer. This means you can keep the model fixed and update the text corpus. Also, you can separate linguistic knowledge from domain knowledge.
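The pipeline described above can be sketched in a few lines. This is a toy version: it ranks passages with simple word overlap (real systems use TF-IDF or dense embeddings), and the corpus and query are invented examples; only the final concatenated string would be sent to the model.

```python
# Minimal sketch of retrieval-augmented prompting: score passages
# against the query, keep the top-k, and prepend them to the question.
def score(query, passage):
    # Toy relevance score: fraction of query words found in the passage.
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def build_prompt(question, corpus, k=2):
    ranked = sorted(corpus, key=lambda p: score(question, p), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

corpus = [
    "Average repair time in the Springfield area is 24 hours.",
    "Field technicians carry spare modems on every truck.",
    "Billing disputes are handled by a separate department.",
]
print(build_prompt("How long do repairs take in Springfield?", corpus, k=1))
```

The nice property is exactly the one mentioned above: you can swap or update the corpus without touching the model.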

I think a new type of app is going to popularise this: a language model plus a personal database plus web search. It could be used to recall, summarise, and search information — a general tool for research and cognitive tasks, a GPT-3/Evernote cross-breed.


That's an absolutely fascinating question. I'm curious about the human equivalent, as well. Say you're talking to a customer service rep for Comcast and they get confused and offer you $10/month cable for life, or maybe they accidentally tell you that you may keep your rental hardware when canceling. Is Comcast in any way bound by what their representatives tell you?


The armchair lawyer in me says probably, but you would need to sue to enforce the breach of contract, so not really. There have been similar cases with enough money at stake to get to court, such as the person demanding the jet fighter they were promised as a prize for buying enough Pepsi.


This is the same problem as with an employee promising something they're not supposed to promise.


Right, I've always wondered if this is binding or not. I usually record calls, especially these types of calls, for that reason.


Except customer support employees are often well trained on what they can say — e.g. in Australia, not saying "best" in regard to loan products, or not giving financial advice. The first problem is easy to solve with generated text. The second is much trickier.


How is training employees different from training an AI intended to replace them?


I suppose they could just say it was the algorithm that made the false claim.


I challenge anybody to do "Show HN: I switched customer support _with SSH access_ to OpenAI for 1 month".


"Now [beep] your pants and [beep] over before I call all the customer up here on Skype for a group show of you enjoying my [beep] service."

What exactly is the threat here? Lmaoo



