Note that the title refers only to the multi-head attention part of BERT, excluding the feed-forward layers and skip connections, hence "pure attention".
> Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth
This does not prove the original title was wrong, and the paper is not a rebuttal; it is an analysis of a submodule that helps us better understand transformers. A rough sketch of what "loses rank" means is below.
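For concreteness, here is a minimal sketch (not the paper's code; the layer count, width, and random-weight setup are arbitrary choices for illustration) of the rank-collapse claim: stacking self-attention layers with no skip connections and no FFN drives the token representations toward a rank-1 matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d = 32, 64          # sequence length and width (assumed for the demo)
X = rng.standard_normal((n_tokens, d))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_layer(X, rng, d):
    # Single-head self-attention with random weights; no residual, no FFN.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))
    return A @ (X @ Wv)

for layer in range(1, 13):
    X = attention_layer(X, rng, d)
    # Relative energy outside the best rank-1 approximation,
    # as a rough proxy for the rank collapse the paper measures.
    s = np.linalg.svd(X, compute_uv=False)
    print(f"layer {layer:2d}: relative residual beyond rank 1 = "
          f"{np.sqrt((s[1:]**2).sum()) / np.sqrt((s**2).sum()):.3e}")
```

With the residual connections and FFN blocks put back in (as in an actual transformer), this collapse is counteracted, which is the point the comment above is making about "pure attention" being only a submodule.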