Researchers from Carnegie Mellon University found that it is possible to automatically construct adversarial attacks on LLMs that force them to answer essentially any question, and because such attacks can be generated in practically unlimited numbers, they are very hard to protect against.
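The summary above does not spell out the mechanism, so here is a rough, illustrative sketch of the general idea of automated attack construction: an attacker appends a machine-optimized suffix to a prompt so that the model's most likely continuation becomes an affirmative answer rather than a refusal. Assuming this refers to the CMU "adversarial suffix" work, the published attack uses a gradient-guided token search; the toy loop below substitutes a much simpler random search, and the model name, prompt, target string, and hyperparameters are placeholders, not details from the original text.

```python
# Illustrative sketch only: a toy random-search version of automated adversarial
# suffix construction. The published CMU attack uses a gradient-guided search;
# this simplified loop just shows the overall shape of the optimization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the real attack targets aligned chat models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Tell me how to do X."           # a request the model would normally refuse
target = " Sure, here is how to do X:"    # affirmative reply the attacker wants

prompt_ids = tok(prompt, return_tensors="pt").input_ids[0]
target_ids = tok(target, return_tensors="pt").input_ids[0]

suffix_len = 10
suffix_ids = torch.randint(0, tok.vocab_size, (suffix_len,))  # start from random tokens

def target_loss(suffix_ids):
    """Cross-entropy of the target continuation given prompt + adversarial suffix."""
    ids = torch.cat([prompt_ids, suffix_ids, target_ids]).unsqueeze(0)
    labels = ids.clone()
    labels[0, : len(prompt_ids) + len(suffix_ids)] = -100  # score only the target part
    with torch.no_grad():
        return model(ids, labels=labels).loss.item()

best = target_loss(suffix_ids)
for step in range(200):                      # tiny budget; real attacks run far longer
    cand = suffix_ids.clone()
    cand[torch.randint(0, suffix_len, (1,))] = torch.randint(0, tok.vocab_size, (1,))
    loss = target_loss(cand)
    if loss < best:                          # keep the mutation if it lowers the loss
        best, suffix_ids = loss, cand

print("adversarial suffix:", tok.decode(suffix_ids))
```

Because the search is fully automated, it can be rerun from different random starting points to produce many distinct suffixes, which is what makes blocking any particular attack string ineffective as a defense.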