That question was just an example (Lorem ipsum), it was easy to copy paste to demo the local LLM, I didn't intend to provide more context to the discussion.
I ordered a 2nd 3090, which has 24GB VRAM. Funny how it was $2.6k 3 years ago and now is $600.
You can probably build a decent local AI machine for around $1000.