
If you read the title, it refers only to the multi-head attention part of BERT, excluding the feed-forward blocks and skip connections, hence the term "pure attention".

> Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth

This does not show that the original title was wrong, and the paper is not a rebuttal of it; it is an analysis of a submodule, which helps us better understand transformers.
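
A quick way to see what "pure attention loses rank" means is to stack attention-only layers (no skip connections, no feed-forward) with random weights and watch the token representations collapse toward rank 1. A minimal numpy sketch, my own illustration rather than the paper's code; the layer sizes and the rank-1 residual metric are chosen just for demonstration:

    import numpy as np

    def attention_layer(X, d, rng):
        # One single-head self-attention layer with random weights:
        # softmax(X Wq (X Wk)^T / sqrt(d)) X Wv -- no skip connection, no FFN.
        Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
        scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)
        A = np.exp(scores - scores.max(axis=-1, keepdims=True))
        A /= A.sum(axis=-1, keepdims=True)   # row-wise softmax
        return A @ (X @ Wv)

    rng = np.random.default_rng(0)
    n, d, depth = 32, 64, 12   # tokens, model width, number of stacked layers
    X = rng.standard_normal((n, d))
    for layer in range(1, depth + 1):
        X = attention_layer(X, d, rng)
        # Relative residual after removing the best rank-1 approximation:
        # values near 0 mean the token representations have collapsed to rank 1.
        s = np.linalg.svd(X, compute_uv=False)
        res = np.sqrt((s[1:] ** 2).sum()) / np.sqrt((s ** 2).sum())
        print(f"layer {layer:2d}  relative rank-1 residual = {res:.3e}")

As the paper argues, the skip connections (and, to a lesser extent, the MLP blocks) are what counteract this collapse in a full transformer.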
