SirMaster's comments | Hacker News

That doesn’t make sense. You can’t lock the car if your key is inside?

So a bad person can just open your door and attack you because you can’t lock your door when your key is inside?

My Camry has in-car fob detection, and I can definitely lock the car while the fob is inside.


It won't let you press the button on the handle to lock it if the key is inside and you're not; that prevents you from locking the keys in the car. Mine does the same: the car beeps 3 times if I try to lock it from outside while the key is inside.

If you're also inside, you just press the lock button in the car and it'll lock just fine.


I meant lock the car from the outside, using the door handle.

Thanks for the clarification. Wild to me that I didn't know this external lock button was a thing (my car's 16 years old... but I drive a rental a couple of times a year...).

I'm getting 20 tokens/sec on the 120B model with a 5060Ti 16GB and a regular desktop Ryzen 7800X3D with 64GB of DDR5-6000.

Wow, that's not bad. It's strange; for me it's much, much slower on a Radeon Pro VII (also 16GB, with 1TB/s of memory bandwidth!) and a Ryzen 5 5600, also with 64GB. It's basically unworkably slow. Also, I only see 100% CPU when I check ollama ps; the GPU isn't being used at all :( It's also counterproductive because the model is just too large for 64GB.

I wonder what makes it work so well on yours! My CPU isn't much slower, and my GPU is probably faster.


AMD basically decided they wanted to focus on HPC and data center customers rather than consumers, and so GPGPU driver support for consumer cards has been non-existent or terrible[1].

[1]: https://github.com/ROCm/ROCm/discussions/3893


The Radeon Pro VII is not a consumer card though, and it works well with ROCm. It even has datacenter "grade" HBM2 memory that most Nvidia cards don't have. Official support has since been dropped, but ROCm still works fine. It's nearly as fast in Ollama as my 4090 (which I don't use for AI regularly, but I play with it sometimes).

You don't really need it to fit entirely in VRAM, thanks to the efficient MoE architecture and llama.cpp's ability to offload part of the model to system RAM.

The 120B runs at 20 tokens/sec on my 5060Ti 16GB with 64GB of system RAM. Now personally I find 20 tokens/sec quite usable, but for some it may not be enough.
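In case it helps anyone reproduce this, here's a minimal sketch of partial GPU offload using llama-cpp-python; the GGUF file name and layer count are assumptions, not my exact setup, so tune n_gpu_layers to whatever fits in your VRAM:

```python
# Minimal sketch: split a large MoE model between 16GB of VRAM and system RAM.
# The model file name and layer count are placeholders, not a known-good config.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-120b-Q4_K_M.gguf",  # hypothetical quantized GGUF file
    n_gpu_layers=20,  # layers that fit in VRAM; the rest run on the CPU from RAM
    n_ctx=8192,       # context window
)

out = llm("Explain MoE inference in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```

LM Studio and Ollama expose the same idea as a "GPU layers" / num_gpu setting; the VRAM-vs-RAM tradeoff is the same either way.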


I have a similar setup but with 32 GB of RAM. Do you partly offload the model to RAM? Do you use LMStudio or other to achieve this? Thanks!

So it's a "mistake" to choose to be Amish, for example?

At what % of retouching a photo with an AI retouching tool does it become generation, though?

But why would they if they don't have competition on price without a tariff?

Why wouldn’t they have competition?

Even if you assume perfect competition, costs like tariffs can be passed back to producers.

Imagine a demand curve and a supply curve. From the perspective of a producer outside the country, the tariff effectively shifts the demand curve but doesn't affect supply. That leads to a lower price at equilibrium.

Of course, from the perspective of the consumer it's the opposite: the supply curve shifts, which leads to a higher price at equilibrium.

Both happen simultaneously; who pays most of the tariff depends on the elasticities of supply and demand.
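A toy linear model makes the incidence split concrete (all numbers below are made up for illustration; the slopes b and d stand in for how inelastic demand and supply are):

```python
# Toy linear supply/demand model showing how a per-unit tariff splits
# between a higher consumer price and a lower producer price.
a, b = 100.0, 1.0  # demand curve: P_consumer = a - b*Q
c, d = 20.0, 1.0   # supply curve: P_producer = c + d*Q
t = 10.0           # per-unit tariff wedge: P_consumer = P_producer + t

q = (a - c - t) / (b + d)   # equilibrium quantity with the tariff
p_consumer = a - b * q
p_producer = p_consumer - t

q0 = (a - c) / (b + d)      # no-tariff equilibrium
p0 = a - b * q0

print(f"consumer pays {p_consumer - p0:.2f} more")      # t*b/(b+d) -> 5.00
print(f"producer receives {p0 - p_producer:.2f} less")  # t*d/(b+d) -> 5.00
```

With equal slopes the burden splits 50/50; make supply steeper (less elastic relative to demand) and the producer side absorbs more of the tariff, as the comment above says.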


I doubt it. The only video games I play are competitive games like DotA 2, Counter-Strike 2, Call of Duty, Rainbow Six Siege, etc. I don't really see how this competes with or replaces that at all.

Just paste the pic into an LLM and ask for a data format.
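For example, here's a minimal sketch with the OpenAI Python client (the file name and model choice are assumptions; any vision-capable model and client works the same way):

```python
# Minimal sketch: send an image to a vision-capable LLM and ask for CSV back.
# File name and model are placeholders; requires OPENAI_API_KEY in the environment.
import base64
from openai import OpenAI

client = OpenAI()

with open("chart.png", "rb") as f:  # hypothetical screenshot/photo of the data
    b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Extract the data in this image as CSV."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```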

Yes, it's always been the case. I installed Proxmox 3.4 (based on Debian 7) this way originally and have been upgrading ever since with no issues.

Why would you need to upgrade to Wipr 3 if Wipr 2 does what you want and need?

Ad blocking is an arms race, and your ad blocker needs continuous updates to keep working on large sites.

Adblockers don’t just keep working forever.
