
It turns out current methods of AI alignment are basically AI lobotomies.



I don't get how you can understand LLMs well enough to confidently say stuff like this, but not see the many ways the article is eye-rollingly wrong:

They're conflating ChatGPT the website with the underlying model. The former uses a system prompt that changes significantly over time, completely independently of AI alignment. The recent change allowing custom system prompts confirms what everyone suspected: they've been running around like chickens with their heads cut off trying to tweak the default prompt to make everyone happy, but no single default can achieve that.
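
To make the distinction concrete, here's a minimal sketch (assuming the openai Python client, v1.x; the model name is just an example) of how the same underlying weights behave differently purely because of the system prompt:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(system_prompt: str, question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # same underlying weights in both calls
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    # Same model, different "personality" -- no retraining or alignment work involved.
    print(ask("Answer tersely.", "Why is the sky blue?"))
    print(ask("Answer in exhaustive detail, with caveats.", "Why is the sky blue?"))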

It also uses summarization to enable long chats... which sometimes causes laypeople to claim it got worse or forgot how to do X mid-conversation, when really their original instructions left the context window long ago.
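
Here's a hypothetical sketch of that rolling-summarization trick (the summarize and count_tokens helpers are made up for illustration, not ChatGPT's actual internals):

    # Once a transcript outgrows the context window, older turns get collapsed
    # into a lossy summary, so instructions from early in the chat can silently
    # vanish from what the model actually sees.
    def build_context(messages, summarize, count_tokens, max_tokens=4096):
        total = sum(count_tokens(m["content"]) for m in messages)
        if total <= max_tokens:
            return messages  # everything still fits verbatim

        # Keep the most recent turns as-is; squash everything older into a summary.
        older, recent = messages[:-4], messages[-4:]
        summary = summarize(older)  # lossy: original wording (and instructions) gone
        return [{"role": "system", "content": "Summary of earlier chat: " + summary}] + recent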

-

And the fact they're judging it on its ability to do "basic math" in the context window, when the only actual update to the underlying model was centered on making function calling more reliable...

I mean, the code interpreter is live now; it makes ChatGPT brilliant at basic math and a hell of a lot more besides. Basic math isn't basic for an attention-based model.
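
For illustration, a rough sketch of function calling (assuming the openai v1.x Python client; the "evaluate" tool schema is made up, not OpenAI's actual code-interpreter plumbing) showing how the model can hand arithmetic off to a real interpreter instead of predicting digits token by token:

    import json
    from openai import OpenAI

    client = OpenAI()

    # Illustrative tool schema -- just one hypothetical calculator tool.
    tools = [{
        "type": "function",
        "function": {
            "name": "evaluate",
            "description": "Exactly evaluate an arithmetic expression.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "What is 123456789 * 987654321?"}],
        tools=tools,
    )

    # The model emits a structured call; the host runs it and gets an exact answer.
    call = resp.choices[0].message.tool_calls[0]
    args = json.loads(call.function.arguments)
    print(eval(args["expression"], {"__builtins__": {}}))  # toy evaluator, demo only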


Yep



