All this is technically correct, but it also means this technology is absolutely not ready to be used for anything remotely involving humans or end user data.
A model's ability to unlearn information, or to configure its training environment so that something is never learned in the first place, is not exactly the same as "oops, we accidentally logged your IP".
A company is liable even if it has accidentally retained, or failed to delete, personal information. That's why we have a lot of standards and compliance regulations to ensure a bare minimum of practices and checks are performed. There is also the Cyber Resilience Act coming up.
If your tool is used by/for humans, you need beyond 100% certitude about exactly what happens with their data and how it can be deleted and updated.
You can never even get to 100% certainty, let alone "beyond" that.
Google can't even get 100% certainty that they have, e.g., deleted a photo you uploaded. No AI involved. They can get an impressive number of 9s in their 99.9...%, but never 100%.
So this complaint, when taken to the absolute the way you want to take it, says nothing about machine learning at all. It's far too general.