Most of alignment is about getting the AI model to be useful - ensuring that if you ask it to do something, it will actually do that thing.
A completely unaligned model would be virtually useless.
How is trying to distinguish morals from values not philosophical nit-picking?
EDIT: The above question is dumb, because somehow my brain inserted something like “Getting beyond the …” at the beginning of the parent, which…yeah.