On: Llama 3.1 405B now runs at 969 tokens/s on Cerebra...

icelancer | 9 months ago
Groq is legitimate. Cerebras so far doesn't scale (wide) nearly as well as Groq. We'll see how it goes.
hendler | 9 months ago
Google TPUs, Amazon, a YC-funded ASIC/FPGA company, and a Chinese company all have custom hardware that might scale well too.
throwawaymaths | 9 months ago
How exactly does Groq scale wide well? Last I heard it took 9 racks (!) to run Llama 2 70B, which is why they throttle your requests.
pama | 9 months ago
Well, Cerebras pretty much needs a data center to simply fit the 405B model for inference.
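
A rough back-of-envelope sketch of the memory math behind that claim, assuming FP16 weights (2 bytes per parameter) and roughly 44 GB of on-wafer SRAM per Cerebras WSE-3 wafer; treat both numbers as illustrative assumptions rather than vendor figures:

    # Back-of-envelope: weight memory of a 405B-parameter model vs. on-wafer SRAM.
    # Assumptions (illustrative, not vendor numbers): FP16 weights (2 bytes/param),
    # ~44 GB of on-chip SRAM per Cerebras WSE-3 wafer.
    params = 405e9
    bytes_per_param = 2                            # FP16
    weight_gb = params * bytes_per_param / 1e9     # ~810 GB for the weights alone
    sram_per_wafer_gb = 44
    wafers_needed = weight_gb / sram_per_wafer_gb  # ~18-19 wafers, before KV cache
    print(f"{weight_gb:.0f} GB of weights -> ~{wafers_needed:.0f} wafer-scale systems")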
throwawaymaths | 9 months ago
I guess this just shows the insanity of venture-led AI hardware hype and shady startup messaging practices.