Hacker News

>Time to first token measured with an 8K-token prompt using a 14-billion parameter model with 4-bit quantization

Oh dear, 14B and a 4-bit quant? There are going to be a lot of embarrassed programmers who need to explain to their engineering managers why their MacBook can't reasonably run LLMs like they said it could. (This already happened at my Fortune 20 company lol)



I don't really get why people are smack-talking this; are there other laptops available that can do better?


Wrong question. If you sell a 6k€ machine "for AI", then you are judged on your own merits.

Replies like "but, but other laptops" are very weak attempts at deflection.


at 6k you can get 128 GB of RAM, so you can use bigger models


My 2023 Nvidia 3060 laptop I spent $700 on?


you can't run models that are bigger than 16GB, not comparable.


sure you can. system RAM will be your limiter here.
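For the size question, a back-of-the-envelope sketch (illustrative numbers only; real usage adds KV cache, activation buffers, and runtime overhead on top) shows why quantization and RAM dominate here:

```python
# Rough memory footprint of a dense LLM's weights. Hypothetical
# helper for illustration, not from any particular runtime.

def model_weight_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate size of the weights alone, in GB."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# The benchmark's 14B model at 4-bit quantization:
print(model_weight_gb(14, 4))   # ~7.0 GB of weights

# A 70B model at 4-bit won't fit in 16 GB:
print(model_weight_gb(70, 4))   # ~35.0 GB of weights
```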


it's too slow for usable inference though.


Slow and can’t run are different things, but I get your point.


Nope, but other manufacturers don't claim that their hardware "can run AI".


I wonder if Apple foresees locally running LLMs becoming sufficiently useful.


It won't handle serious tasks, but I have Gemma 3 installed on my M2 Mac and it is good for most of my needs, especially for data I don't want a corporation getting its hands on.


What kind of tasks are you using it for? I haven't really found any uses for small models.


I run Qwen 3.5 30B MoE and it's reasonable at most tasks I would use a local model for, including summarizing things. For instance, I auto-update all my toolchains in the background when I log in, and when that finishes I use my local model to summarize everything that was updated, plus any errors or issues, on the next prompt render. It's quite nice because everything stays updated, I know what's been updated, and I'm immediately aware of issues. I also use it for a variety of "auto correct" tasks, "give me the command for" requests, summarizing a man page and explaining X, and a bunch of tasks where I would rather not copy and paste, etc.
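A minimal sketch of that update-summary step. The endpoint and model name are assumptions on my part (it assumes an Ollama server on its default port, one common way to run Qwen-family models locally), not the commenter's actual setup:

```python
# Sketch: wrap raw toolchain-update output in a summarization prompt
# and send it to a local model server. The Ollama endpoint and the
# model tag below are assumed for illustration; adjust for your runtime.
import json
import urllib.request

def build_summary_prompt(update_log: str) -> str:
    """Wrap a raw update log in a summarization instruction."""
    return (
        "Summarize the following toolchain update log. "
        "List what was updated and call out any errors or warnings:\n\n"
        + update_log
    )

def summarize_locally(update_log: str, model: str = "qwen2.5:14b") -> str:
    """POST the prompt to a local server (assumed Ollama API)."""
    payload = json.dumps({
        "model": model,
        "prompt": build_summary_prompt(update_log),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Hook `summarize_locally` to whatever writes your update log and print the result on the next shell prompt.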


Nothing like coding, just relatively basic stuff. Idk, it's hard to explain, but I use AI so frequently for work that I have a sense for what it is capable of.


Which size Gemma are you using?

I should clarify that by small I mean in the 3-8B range. I haven't tested the 14-30B ones, my experience is only about the smaller ones.

In my experience, small models are not good for coding (except very basic tasks), and they're not good for general knowledge. So the only purpose I could see for them would be tasks where they're given the information, i.e. summarization or RAG.

But in my summarization experiments, they consistently misunderstood the information given to them. They constantly made basic errors and failed to understand the text.

So having eliminated programming, general knowledge, summarization, and, by extension, RAG (because if you can't understand the information, then you can't do RAG either, by definition) -- I have eliminated all the use cases that I had in mind!

That would leave very basic tasks like classification or keywords, but I think there they would be in the awkward middle ground of being disappointing relative to big LLMs for many tasks, and cumbersome relative to small specialized models which can run fast and cheap and be fine tuned.


They do! "You're holding it wrong."


This wasn’t a statement about capability. It’s just a detail about what model they used to compare the speed of two chips for this purpose. You want a bigger model, run a bigger model.


Yeah no it didn’t. If you have a fully specced-out M3/M4 MacBook with enough memory, you're running pretty decent models locally already. But no one is using local models anyway.


I run a local model daily. I have it making tickets when certain emails come in, and I made a small button I can click to approve ticket creation. It follows my instructions and has a nice chain-of-thought process trained in. Local LLMs are starting to become very useful. Not OpenClaw crap.


How much VRAM are you running to fit both a capable model and everything else the device needs to run?


> Yeah no it didn’t

What is "it" and what didn't it do?


If your company can afford a fully specced-out M3/M4 MacBook, then it can also afford cloud AI costs.


Perhaps, but sending everything to the cloud might get them in (very expensive) trouble. Depending on who we are talking about, of course.


cost isn't even close to the main motivating factor for my context


With OpenClaw and powerful local models like Kimi 2.5, these specs make a lot of sense.


I'm not sure what model I'd trust locally with anything meaningful in OpenClaw. The smaller/simpler the model, the greater the chance of fluff answers.


GPT-OSS-120 works well.


K2.5 isn't remotely a local model


Technically you can get most MoE models to execute locally because RAM requirements are limited to the active experts' activations (which are on the order of active param size), everything else can be either mmap'd in (the read-only params) or cheaply swapped out (the KV cache, which grows linearly per generated token and is usually small). But that gives you absolutely terrible performance because almost everything is being bottlenecked by storage transfer bandwidth. So good performance is really a matter of "how much more do you have than just that bare minimum?"
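A rough estimate of that storage bottleneck. The numbers are illustrative assumptions (roughly 32B active params at 4-bit, i.e. ~16 GB streamed per token in the worst case), not measurements of any specific model:

```python
# Back-of-the-envelope: when only the active experts fit in RAM,
# each token may need the next experts' weights pulled over the
# storage link, so decode speed is roughly bounded by
# bandwidth / bytes-moved-per-token. Illustrative numbers only.

def tokens_per_sec(active_params_billions: float,
                   bits_per_weight: int,
                   bandwidth_gb_per_sec: float) -> float:
    """Upper bound on decode speed when every token streams its
    active-expert weights over the given link."""
    bytes_per_token = active_params_billions * 1e9 * bits_per_weight / 8
    return bandwidth_gb_per_sec * 1e9 / bytes_per_token

# ~32B active params at 4-bit = 16 GB moved per token.
print(tokens_per_sec(32, 4, 7.0))    # NVMe-class SSD: well under 1 tok/s
print(tokens_per_sec(32, 4, 400.0))  # fast unified memory: ~25 tok/s
```

The two calls show the gap the comment describes: the same model goes from unusable to interactive purely on memory bandwidth.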


Oh sure it is! I’ve helped set up an AI cluster rack with four K2.5s.

With some custom tooling, we built our own local enterprise setup:

- Support ticketing system
- Custom chat support powered by our trained software-support model
- Resolved repository with detailed step-by-step instructions
- User-created reports and queries
- Natural language-driven report generation (my favorite: no more dragging filters into the builder; our (Secret) local model handles it for clients)
- In-application tools (C#/SQL/ASP.NET) to support users directly, since our software runs on-site and offline due to PPI
- A cool repair tool: an import/export "support file packet patcher" that lets us push fixes live to all clients or target niche cases

Qwen3 with LoRA fine-tuning is also incredible; we're already seeing great results training our own models.

There’s a growing group pushing K2.5s to run on consumer PCs (with 32GB RAM + at least 9GB VRAM) — and it’s looking very promising. If this works, we’ll be retooling everything: our apps and in-house programs. Exciting times ahead!


of course it's not remotely local: remote and local are literally antonyms


You can totally run it locally. If you have 500GB of RAM.



