Unless you've got a CPU with AI-specific accelerators and unified memory, I doubt you're going to find that.
I can't imagine any model under 7B parameters is useful, and even with dual-channel DDR5-6400 RAM (which I think is about 102 GB/s) and 8-bit quantization, you could only generate around 15 tokens/sec, and that's assuming your CPU can actually process that fast. Memory bandwidth could easily be the bottleneck.
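For what it's worth, here's the arithmetic behind that estimate as a quick sketch. The 102.4 GB/s and 8-bit figures are just the ones above; the assumption that every generated token requires streaming all model weights from RAM once is mine:

```python
# Rough ceiling on CPU token generation rate when memory bandwidth is the limit.
# Assumptions: every token streams all weights from RAM once;
# dual-channel DDR5-6400 = 2 channels * 8 bytes * 6400 MT/s = 102.4 GB/s.
def max_tokens_per_sec(params_billion, bits_per_weight, bandwidth_gb_s=102.4):
    bytes_per_token = params_billion * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

print(max_tokens_per_sec(7, 8))   # ~14.6 tok/s for a 7B model at 8-bit
print(max_tokens_per_sec(7, 4))   # ~29 tok/s at 4-bit
```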
EDIT: If I have something wrong, I'd rather be corrected so I'm not spreading incorrect information than be silently downvoted.
deepseek-1b, qwen2.5-coder:1.5b, and starcoder2-3b are all pretty fast on CPU due to their small size. You're not going to be able to have conversations with them or ask them to perform transformations on your code, but autocomplete should work well.
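If it helps, here's a minimal sketch of running one of those small models for completion on CPU. The runtime (llama-cpp-python), the GGUF file name, and the parameters are my assumptions; any of the models above would work the same way:

```python
# Minimal CPU completion sketch using llama-cpp-python (assumed runtime).
from llama_cpp import Llama

# Hypothetical local GGUF file; swap in whichever small coder model you pulled.
llm = Llama(model_path="starcoder2-3b-Q4_K_M.gguf", n_ctx=2048, n_threads=8)

# Plain prefix completion; this is the kind of autocomplete small models handle well.
out = llm("def fibonacci(n):\n", max_tokens=48, stop=["\n\n"], echo=False)
print(out["choices"][0]["text"])
```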
You should definitely be able to run 7B at q6_k, and that might be outperformed by 15B with a sub-4bpw imatrix quant; iQ3_M should fit into your VRAM. (I personally wouldn't bother with sub-4bpw quants on models under ~70B parameters.)
Though if it all works great for you, there's no reason to mess with it. But if you want to tinker, you can absolutely run larger models at smaller quant sizes; q6_k is basically indistinguishable from fp16, so there's no real downside.
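To put rough numbers on the VRAM point above, here's a back-of-the-envelope sketch. The bits-per-weight figures are my approximations for each quant type, and real GGUF files add overhead for embeddings, scales, and the KV cache, so treat these as lower bounds:

```python
# Approximate weight footprint at different quant levels (lower bound, no KV cache).
def weight_gb(params_billion, bits_per_weight):
    return params_billion * bits_per_weight / 8

for name, bpw in [("Q6_K", 6.56), ("Q4_K_M", 4.85), ("IQ3_M", 3.66)]:
    print(f"7B  @ {name}: {weight_gb(7, bpw):.1f} GB   "
          f"15B @ {name}: {weight_gb(15, bpw):.1f} GB")
# 7B at Q6_K lands around 5.7 GB, while 15B at IQ3_M is roughly 6.9 GB,
# which is why the larger model at a smaller quant can fit in similar VRAM.
```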