Hacker News
DeepSeek's Multi-Head Latent Attention (liorsinai.github.io)
4 points by the_origami_fox 5 months ago | 1 comment
fspeech | 5 months ago
Matrix absorption is unnecessary. What is needed is that the order of multiplication associates toward the direction of the absorption. This, together with the modified RoPE, is what makes the caching work.
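A minimal NumPy sketch of the associativity point, with RoPE omitted. The dimensions and the names (W_Q, W_UK, the latent cache C) are illustrative assumptions, not taken from the article or paper; it only shows that the same attention score is obtained whether you materialize full keys, re-associate the products toward the cached latent, or precompute the absorbed matrix offline.

    import numpy as np

    d_model, d_head, d_latent, seq_len = 64, 16, 8, 10
    rng = np.random.default_rng(0)

    W_Q  = rng.normal(size=(d_model, d_head))    # query projection (hypothetical shapes)
    W_UK = rng.normal(size=(d_latent, d_head))   # key up-projection from the latent
    x    = rng.normal(size=(1, d_model))         # current token's hidden state
    C    = rng.normal(size=(seq_len, d_latent))  # cached latent vectors (only this needs caching)

    q = x @ W_Q                                  # (1, d_head)

    # Ordering 1: materialize full keys from the latent, then score.
    scores_keys = q @ (C @ W_UK).T               # (1, seq_len)

    # Ordering 2: associate toward the latent and score against the cache directly.
    scores_latent = (q @ W_UK.T) @ C.T           # (1, seq_len)

    # Ordering 3: precompute ("absorb") W_Q @ W_UK.T offline; same result.
    W_absorbed = W_Q @ W_UK.T                    # (d_model, d_latent)
    scores_absorbed = (x @ W_absorbed) @ C.T     # (1, seq_len)

    assert np.allclose(scores_keys, scores_latent)
    assert np.allclose(scores_latent, scores_absorbed)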