
As is now traditional for new LLM releases, I used Qwen 3 (32B, run via Ollama on a Mac) to summarize this Hacker News conversation about itself - run at the point when it hit 112 comments.

The results were kind of fascinating, because it appeared to confuse my system prompt (telling it to summarize the conversation) with the various questions asked in the thread itself, which it then tried to answer.

I don't think it did a great job of the task, but it's still interesting to see its "thinking" process here: https://gist.github.com/simonw/313cec720dc4690b1520e5be3c944...



One person on Reddit claimed the first unsloth release was buggy - if you used that, maybe you can retry with the fixed version?


It was - Unsloth put a message up on their HF page for a while saying to only use the Q6 quants and larger. I'm not sure to what extent this affected prediction accuracy, though.


I think this was only regarding the chat template that was provided in the metadata (this was also broken in the official release). However, I doubt that this would impact this test, as most inference frameworks will just error if provided with a broken template.


This sounds like a task where you wouldn't want to use the 'thinking' mode


I also have a benchmark that I'm using for my nanoagent[1] controllers.

Qwen3 is impressive in some aspects but it thinks too much!

Qwen3-0.6b is showing even better performance than Llama 3.2 3b... but it is 6x slower.

The results are similar to Gemma3 4b, but the latter is 5x faster on Apple M3 hardware. So maybe the utility is running better models in cases where memory is the limiting factor, such as on Nvidia GPUs?

[1] github.com/hbbio/nanoagent


What's cool about these models is that you can tweak the thinking process, all the way down to "no thinking". It may not be available in your inference engine, though.


Now it is, thanks for the suggestion. Qwen3 4b seems to be the best default model for typical steps.

https://github.com/hbbio/nanoagent/pull/1


Feel free to add a PR :)

What is the parameter?


Just add "/no_think" in your prompt.

https://qwenlm.github.io/blog/qwen3/#advanced-usages
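
For example, straight from the Ollama CLI (a minimal sketch - the model tag and prompt are just illustrative):

  ollama run qwen3:32b 'Summarize this thread in three bullet points. /no_think'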


Hah, and now we can't summarize this thread any more because your comment will turn thinking off!


FWIW, their readme states /nothink - and that's what works for me.

> /think and /nothink instructions: Use those words in the system or user message to signify whether Qwen3 should think. In multi-turn conversations, the latest instruction is followed.

https://github.com/QwenLM/Qwen3/blob/main/README.md
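
Per that readme, the multi-turn behaviour should look roughly like this against Ollama's chat API (a sketch with made-up messages - the trailing /think should win over the earlier /nothink):

  curl http://localhost:11434/api/chat -d '{
    "model": "qwen3:4b",
    "messages": [
      {"role": "user", "content": "Is 127 prime? /nothink"},
      {"role": "assistant", "content": "Yes, 127 is prime."},
      {"role": "user", "content": "Walk through the check step by step. /think"}
    ]
  }'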


Thanks, /nothink works!

So, Qwen3 1.7b is about the same speed as Gemma3 4b and just slightly worse, which is pretty impressive.

Qwen3 4b passes all 200 tests and is much faster than Mistral Small 3.1 24b or Gemma3 27b.


Thanks!

Turns out "just" is not the word here. My benchmark is built from conversations, where there is a SystemMessage and some structured content in a UserMessage.

But Qwen3 seems to ignore /no_think when appended to the SystemMessage. I can try adding it to the structured content, but that would be a bit weird. It would have been better to have a "think" parameter, like temperature.
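
Reduced to Ollama's chat API, the difference looks something like this (a hypothetical sketch, not my actual nanoagent code - the payloads are illustrative):

  # Ignored: /no_think inside the system message
  curl http://localhost:11434/api/chat -d '{
    "model": "qwen3:4b",
    "messages": [
      {"role": "system", "content": "You are a step controller. /no_think"},
      {"role": "user", "content": "{\"action\": \"route\"}"}
    ]
  }'

  # Honored: /no_think appended to the user message instead
  curl http://localhost:11434/api/chat -d '{
    "model": "qwen3:4b",
    "messages": [
      {"role": "system", "content": "You are a step controller."},
      {"role": "user", "content": "{\"action\": \"route\"} /no_think"}
    ]
  }'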


o1-preview had this same issue! You'd give it a long conversation to summarize, and if the conversation ended with a question, o1-preview would answer that question instead, completely ignoring your instructions.

Generally unimpressed with Qwen3 on my own personal set of problems.


Aren't all Qwen models known to perform poorly with system prompts, though?


I hadn't heard that, but it would certainly explain why the model made a mess of this task.

Tried it again like this, using a regular prompt rather than a system prompt (with the https://github.com/simonw/llm-hacker-news plugin for the hn: prefix):

  llm -f hn:43825900 \
  'Summarize the themes of the opinions expressed here.
  For each theme, output a markdown header.
  Include direct "quotations" (with author attribution) where appropriate.
  You MUST quote directly from users when crediting them, with double quotes.
  Fix HTML entities. Output markdown. Go long. Include a section of quotes that illustrate opinions uncommon in the rest of the piece' \
  -m qwen3:32b
This worked much better! https://gist.github.com/simonw/3b7dbb2432814ebc8615304756395...


Wow, it hallucinates quotes a lot!


Seems to truncate the input to only 2048 tokens.


Oops! That's an Ollama default setting. You can fix that by increasing the num_ctx setting - I'll try running this again.

The num_predict setting controls output size.
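
Something like this against Ollama's HTTP API should do it (a sketch - the context and output sizes here are illustrative, not necessarily what I'll use):

  curl http://localhost:11434/api/generate -d '{
    "model": "qwen3:32b",
    "prompt": "Summarize the themes of the opinions expressed here...",
    "options": {
      "num_ctx": 32768,
      "num_predict": 4096
    }
  }'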


Qwen does decently; DeepSeek doesn't like system prompts. For Qwen you really have to play with the parameters.



