Why not? A major problem with AI apocalypse arguments is that they are quite vague and their terms are poorly defined. One thing AI apocalypse believers often talk about is the danger of self-improving AI. But I think AI won't be able to self-improve until it is good enough to know and care about context. Self-improvement will require such a comprehensive understanding of the goal that the AI will need to understand these concepts.
You might note that my argument is pure speculation without much basis in evidence. This is intentional, because all the arguments I've seen expounding on the AI apocalypse are equally speculative.
It'll see, but it won't care. See http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/