Hacker News

This was announced as part of their second day of "12 Days of AI": https://www.youtube.com/watch?v=fMJMhBFa_Gc



They're searching for enterprise customers before they become a commodity.


Llama 3.3 is insanely good and can be run on a Mac mini with 64GB of RAM for $2k USD.

OpenAI is screwed.

(As an aside: it's very interesting that Google went closed source and objectively lost the race, while Meta went open and is the real threat to OpenAI.)


I haven't tried it on my $150 2080ti, but I know someone running it on a 3060 and it's not that horrible. Wild times.

Those M4 Macs with shared RAM definitely seem to be the best way to go for this, though.


> OpenAI is screwed.

They are for multiple reasons, not the least of which is:

https://www.wheresyoured.at/subprimeai/


With 64GB you only get a lower quality quantized version.
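To see why 64GB forces quantization, here's a rough back-of-envelope sketch (my own numbers, not from the thread), assuming roughly 70B parameters for Llama 3.3 and counting weights only; KV cache, activations, and the OS take additional memory:

```python
# Approximate memory footprint of Llama 3.3 70B weights at different
# precisions. Weights only -- KV cache and activations add more on top.
PARAMS = 70e9  # approximate parameter count


def weight_gb(bits_per_param: float) -> float:
    """GB needed to hold the weights alone at the given precision."""
    return PARAMS * bits_per_param / 8 / 1e9


for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    print(f"{name}: ~{weight_gb(bits):.0f} GB")
# FP16: ~140 GB  (doesn't fit in 64GB)
# Q8:   ~70 GB   (still doesn't fit)
# Q4:   ~35 GB   (fits, with room for KV cache)
```

So on a 64GB machine even an 8-bit quant is too large for the 70B model; a ~4-bit quant is what actually runs.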


That’s the one I’m using. So far it’s quite good: not only did Llama give a better result when I gave it and Claude the same programming problem, but when I showed that result to Claude, it also said the Llama approach was better.

Claude is already better than GPT on average at coding, so yeah, bad news for OpenAI as Llama is now potentially better at coding.

Of course Meta has a proprietary training set of extremely high quality code, so if they are using that, I’d expect them to have vastly superior performance: FAANG production code is better training data than dogshit Stack Overflow answers to CS homework problems.

I really think whatever boost OpenAI gets from their shadow CoT loop is nominal at best, while the 2x+ compute it requires forces them to increase prices an absurd amount.

It’s business 101: they just won’t make the revenue to cover those extra tokens, and they are now competing against free. The economics do not suggest OpenAI has a path to survival without major breakthroughs in performance AND efficiency.


That's great to hear. I just want to make sure you're aware that you're not getting the 100% FP16 experience. I guess at 8-bit it's still pretty much the same.


This was obvious even before the Microsoft deal got penned.



