I wish I had saved the conspiracy theory paper. I even tried asking GPT-4 for help and couldn't find it :( It was on the arXiv, but for some reason I couldn't get GPT-4 to focus on arXiv web searches; then again, I don't know much about prompt engineering. In my comment I said it was "OpenAI's own research," but it was actually a collaboration between researchers at OpenAI and Stanford, which is why the paper isn't on OpenAI's website.
Sadly using my own human brain didn't help: there's just too much stuff about GPT being put on the arXiv these days.
I wasn't thinking of a specific paper re: repeating training data to make facts "stick," I was repeating general folk wisdom around LLM design. There is probably a specific paper quantifying this.