I don't think that's a fair characterization. If a user asks a company to stop using their data, machine unlearning lets the company comply without retraining its models from scratch.
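For context on why full retraining can be avoided: one common approach is sharded training (as in SISA-style exact unlearning), where the training data is split into shards, each shard trains its own sub-model, and deleting a user's data only requires retraining the one shard that contained it. A minimal sketch, with a toy "model" (a per-shard mean) standing in for real sub-models; all names here are illustrative:

```python
# Toy sketch of exact unlearning via sharding (SISA-style).
# Assumption: the "model" for each shard is just the mean of its
# data points; real systems train one sub-model per shard and
# ensemble their predictions.

def train_shard(shard):
    """Toy sub-model: the mean of the shard's data points."""
    return sum(shard) / len(shard)

def train_all(shards):
    """Train one sub-model per shard."""
    return [train_shard(s) for s in shards]

def unlearn(shards, models, shard_idx, value):
    """Remove one user's data point, then retrain only its shard."""
    shards[shard_idx] = [x for x in shards[shard_idx] if x != value]
    models[shard_idx] = train_shard(shards[shard_idx])  # other shards untouched
    return models

shards = [[1.0, 2.0], [3.0, 5.0]]
models = train_all(shards)            # [1.5, 4.0]
models = unlearn(shards, models, 1, 5.0)
print(models)                         # [1.5, 3.0] -- only shard 1 was retrained
```

The cost of a deletion request scales with one shard's size rather than the whole dataset, which is the practical point being made above.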
If company X wants its model to say or not say Y for ideological reasons, it isn't stopping anyone from saying anything; it's deciding what its own model says. The fact that I don't go around screaming nasty things about some group doesn't make me against free speech.
It's censorship to try to stop people from producing models as they see fit.