Do you agree or disagree that the authors should advertise their paper with what they know to be true, which (because the paper does not support it at all) does not include describing the model as purposefully misleading interlocutors?
That is the discussion we are having here. I can’t tell what these comments have to do with that discussion, so maybe let’s try to get back to where we were.
Are you going to quibble over the definition of "purposefully"? Is your issue that you think "faking" is a term only usable for intelligent systems, and the LLMs don't meet that level for you? I'm not sure I understand your issue or why my responses are so unacceptable.
Ian, I am willing to continue if you are getting something out of it, but you don't seem to want to answer the question I have, and the questions you have seem (to me) trivially answerable from what I've written in the response tree to my original comment. So I'm not sure that you are.
I don't think it's worth continuing, then. I have previously gotten into long discussions only for a follow-up to be something like "it can't have purpose, it's a machine". I wanted to check things before responding, in case a small wording issue was making the whole exchange pointless. You have not been as clear in your comments as perhaps you think.
To be clear, I think "faking" is a perfectly fine term to use regardless of whether you think these things are reasoning or merely pretending to do so.
I'm not sure whether you have an issue there, or whether you agree on that but don't think they're faking alignment, or whether this is about the interview and other things said (I have been responding about "faking"), or whether you have a more interesting issue with how well you think the paper supports faking alignment.