This seems to be based on the assumption that the only knowledge worth anything relates to physicality and testability in the "real world", and that language itself is therefore rather useless. Ironically, that strikes me as exactly the kind of intellectual self-deception he accuses the "high-brow 4chan" people of.
I think this is a fair assumption if your concern is about the impacts on the “real world”. An ASI which wreaks havoc in a simulation is not really cause for concern.
An artificial intelligence that somehow managed to stay inside the internet and wreck it would still be a gigantic problem for humanity, simply because it would be very hard to get rid of.
Assuming the "usual" cliches that the AI would be very smart, it could get rich (e.g. by playing the stock market, robbing banks, or just creating some good products) and then use its money to block humanity from creating some sort of "internet 2" - by which I mean a network simply without this entity in it.
Also, given how much we depend on the internet and how easy smartphones make it to spy on us... a "war" between this entity and humanity would be a problem.
The AI wouldn't even need to build a robot / terminator to connect from one network to the other. It could just pay someone to make the link.
If we could convince people that whatever an AI says must not be acted on in the real world, there wouldn't be a problem.
I remember when risk-skeptics used to ask incredulously how it was supposed to magic itself off its servers. Still see a bit of that today, despite all the AI models that have been leaked by humans one way or another.
In this hypothetical, the infinitely intelligent super AI, knowing that what it says must not be acted upon, could say exactly the right thing to get you to do what it really wanted you to do anyway. I'm thinking of that scene in Doctor Who where the Doctor takes down the Prime Minister with six words.
That feels like a Maxwell's Demon kind of infinitely intelligent to me.
I recognise this might be a failure of imagination on my part, given how many times I've seen people say "no AI can possibly do XYZ" even when an AI has already done XYZ, but based on what I see, this is extrapolating beyond what I'm comfortable anticipating.
The character of The Doctor can be excused here, not only for being fictional, but also for having a time machine and knowing how the universe is supposed to unfold.
We're well into Maxwell's Demon thought-experiment-grade territory here. An ASI that dooms the human race is absolutely the same sort of intellectual faffing about that Maxwell proposed in 1867 with his thought experiment, though it wasn't referred to as a demon until later, by Lord Kelvin, in 1874. It wouldn't be until the early 1970s that the Unix daemon would come about.
If you want to look at successes: corn (albeit with some modifications) and domesticated animals have also been really successful at making sure their DNA reproduces.
Crops, pets, and livestock are symbiotic with us; they don't hurt us. The things I listed harm their host; they had to be in that category to make the point that harming us doesn't require high IQ. The harms we suffer from corn very much count as our own fault.
That would only follow if the knowledge we have today weren't enough for an ASI to do anything, and furthermore only if there were no way for the ASI to leverage (i.e. manipulate) humans or existing systems to get access to "the real world". Neither of those assumptions seems realistic to me.