I'm not sure there's a good way to work around this - the alternative is to respect only the robots.txt as it was when the snapshot was taken, at which point, once a confidential page is in the archive, you can't (easily) remove it again.
Perhaps access to archived pages should only be blocked when the ia_archiver user agent specifically is denied in the robots.txt. That way archived pages aren't inadvertently blocked by a generic robots.txt that denies everything (as sometimes happens with parked domains), but there's still a way to deny the Wayback Machine if you really need to.
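To illustrate, the distinction is between a blanket deny-all of the kind a parked domain might serve:

    User-agent: *
    Disallow: /

and an entry that names the Internet Archive's crawler explicitly, which under the scheme suggested above would still opt the site out of the Wayback Machine:

    User-agent: ia_archiver
    Disallow: /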
Well, then everybody would claim the same right and you'd have to maintain a full list of bots and keep it up to date. That doesn't sound scalable; * should still mean "everybody".
robots.txt only tells bots not to crawl the website. It doesn't say anything about indexing or archival of pages.
It would be perfectly possible for the Wayback Machine to respect the robots.txt for crawling - so it wouldn't fetch or archive any new pages - while keeping pages that have already been archived accessible unless a specific user agent has been denied.
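As a rough sketch of how those two decisions could be kept separate (purely illustrative - the function names may_crawl and explicitly_blocks_ia are made up, and this is not how the Wayback Machine is actually implemented), the crawl-time check obeys robots.txt like any other crawler, while the playback check only treats the site as opted out when ia_archiver is named explicitly:

    import urllib.robotparser

    IA_AGENT = "ia_archiver"

    def may_crawl(robots_txt_url, page_url):
        # Crawl-time decision: obey robots.txt exactly as any well-behaved
        # crawler would, so no *new* snapshots of disallowed pages are taken.
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(robots_txt_url)
        rp.read()
        return rp.can_fetch(IA_AGENT, page_url)

    def explicitly_blocks_ia(robots_txt):
        # Playback decision (the behaviour proposed above): hide existing
        # snapshots only if ia_archiver is named explicitly, so a blanket
        # "User-agent: *" deny-all on a parked domain is not enough.
        # Deliberately simplified: ignores Allow lines and path matching.
        agents, in_rules = [], False
        for raw in robots_txt.splitlines():
            line = raw.split("#", 1)[0].strip()
            if not line:
                continue
            field, _, value = line.partition(":")
            field, value = field.strip().lower(), value.strip()
            if field == "user-agent":
                if in_rules:               # a new agent group starts here
                    agents, in_rules = [], False
                agents.append(value.lower())
            elif field == "disallow":
                in_rules = True
                if value and IA_AGENT in agents:
                    return True
        return False

With this logic, explicitly_blocks_ia("User-agent: *\nDisallow: /") returns False, while a group that names ia_archiver with a Disallow returns True.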
I would say that the default behavior should be to respect the robots.txt as it was at the time of the snapshot, and only to remove accidentally archived pages that were never intended to be public.
Sadly, that requires a bit of human intervention.
The Internet Archive wouldn't work if its crawler weren't fully automated; you can't handle the whole internet in a way that requires human intervention.
The problem with manual deletion via human intervention is that, if you do it outside of robots.txt, you then need to verify the identity of the site owner, which makes it much more complicated and costly.
Maybe robots.txt could have clauses that say whether to apply entries retroactively. True, domain parkers could enable it, but I don't think most would, since it's extra work for no benefit - the point is usually not to erase history but to protect the current site.
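Purely as a thought experiment, such a clause might look something like this - the Retroactive line is a made-up extension that no crawler or archive currently understands:

    User-agent: ia_archiver
    Disallow: /private/
    # hypothetical, non-standard directive: also hide existing snapshots
    Retroactive: true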