Doesn't the last layer output one hidden state per token, i.e. a variable number of vectors depending on sequence length? It'd take a bit of hacking (e.g. pooling over tokens) to collapse that into a single semantic vector.
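The pooling hack might look something like this (a numpy sketch, not an actual model call — the shapes stand in for an LLM's final-layer hidden states):

```python
import numpy as np

def mean_pool(hidden_states, attention_mask):
    """Collapse per-token hidden states into one fixed-size vector.

    hidden_states: (seq_len, dim) - one vector per token from the last layer.
    attention_mask: (seq_len,) of 0/1 - zeros mark padding tokens to ignore.
    """
    mask = attention_mask[:, None].astype(hidden_states.dtype)
    # Sum only the real tokens, then divide by how many there were.
    return (hidden_states * mask).sum(axis=0) / mask.sum()

# Two "sentences" of different lengths both pool down to a dim-8 vector.
v_short = mean_pool(np.random.rand(5, 8), np.ones(5))
v_long = mean_pool(np.random.rand(12, 8), np.ones(12))
```

Whatever the input length, both outputs end up with shape `(8,)` - that's the fixed-size vector you'd then compare with cosine similarity.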
Additionally, that vector is trained to predict the next token, not to encode semantic similarity. I'd assume models trained specifically for semantic similarity would outperform it (I haven't bothered comparing the two myself, but MMTEB seems to imply as much).
At that point, it seems quite reasonable to just pass the sentence into an embedding model.