There are many options and opinions out there. What is currently the recommended approach for running an LLM locally (e.g., on my 3090 with 24GB)? Are the options ‘idiot proof’ yet?
Given you're running a 3090 with 24GB, go with Oobabooga/Sillytavern. And don't come here for advice on this stuff; go to https://www.reddit.com/r/LocalLLaMA/ instead. They usually have a "best current local model" thread pinned, and you're less likely to get incorrect advice.
Did for me just now, although as of a week or two ago Reddit has been blocking many of my attempts to access it through a VPN, old.reddit.com or not. I usually need to reconnect about 3 or 4 times before a page will load.
VPNs were being blocked over the weekend (I checked after seeing https://news.ycombinator.com/item?id=39883747), but as of today, within the last hour or so, it seems to work again for both old.reddit.com and regular reddit.com, so something changed.
RFC: Is there some straightforward way to use a Pi-hole like setup to 302 redirect `reddit.com` to `old.reddit.com`?
I'm sure there are myriad browser extensions that will do it at the DOM level, but that's such a heavy-handed solution, and also lol I'm not putting an extension on the cartesian product of all my browsers on all my machines in the service of dis-enshittifying one once-beloved social network.
I doubt you could do this at the DNS level. Reddit probably loads static assets from `reddit.com`.
You could perhaps intercept the HTTP requests, but that would require decrypting SSL, so you'd have to MITM yourself if you wanted to do it at the network level.
You'd point the DNS entry for `reddit.com` at some device that intercepts the traffic. That device would redirect the HTTP request to `old.reddit.com` where applicable; requests for static assets would be passed through to the real `reddit.com`. I don't know how things like HSTS and certificate pinning fit into an idea like this.
Doing this at the device level is probably easiest. I've used Redirector [0] in the past and it works well.
I found this extension that makes it even easier, and you can configure it to go to either old.reddit.com or new.reddit.com (instead of going to the third-generation, even worse sh.reddit.com) without having to configure Redirector:
https://chromewebstore.google.com/detail/ui-changer-for-redd...
I guess you have a webserver there on your pihole?
Create a vhost for reddit.com, add reddit.com to your hosts file pointing at the webserver (or set up a stub in the DNS to the vhost webserver), and do a redirect to old.reddit.com on your webserver vhost.
This is probably the easiest (a simple HTTP server and a PHP file) and most reliable (HTTPS can be redirected to HTTP to avoid any SSL headaches) way to accomplish what the OP wants.
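If you don't want to stand up a full webserver, a few lines of Python can play the vhost role. This is just a sketch, assuming your Pi-hole/hosts entry already points reddit.com at the box running it and that plain HTTP on port 80 is acceptable:

```python
# Minimal redirect "vhost" sketch: answer any request that lands here with a
# 302 to old.reddit.com, preserving the path. Assumes DNS (Pi-hole/hosts file)
# already resolves reddit.com to this machine; HTTPS requests will not hit it.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)
        self.send_header("Location", "https://old.reddit.com" + self.path)
        self.end_headers()

    do_HEAD = do_GET  # HEAD requests get the same redirect without a body

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 80), RedirectHandler).serve_forever()
```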
That doesn't seem to be the case, at least right now. There's no pinned post as far as I can tell, and searching the sub for `best model` returns mostly questions.
The local models should be colored differently and have licenses/etc that should tip you off. You might need to scroll down the list to see one though.
If you are looking for a downloadable app as an alternative to Oobabooga / Sillytavern, we built https://github.com/transformerlab/transformerlab-app as a way to easily download, interact with, and even train local LLMs.
I wouldn't say maximalism so much as that Oobabooga was designed to emulate Automatic1111: it's a good enough tool by itself, but it also has a strong extension ecosystem, and it can be used from other tools as a web service. It's not great at any one thing, but it's "good enough" pretty much everywhere.
Oobabooga is the closest thing we have to maximalism. It exposes by far the largest number of parameters/settings/backends compared to all the others.
My main point is that the world yearns for a proper "Photoshop for text", and no one has even tried to make this (the closest is Oobabooga). None of the VC-backed competitors are even close to the mark on what they should be doing here.
Huggingface has many, many fine-tunes of Llama from different people, which don't ask for that.
The requirement has always been more of a fig leaf than anything else. It allows Facebook to say everyone who downloaded from them agreed to some legal/ethics nonsense. Like all click-through license agreements nobody reads it; it's there so Facebook can pretend to be shocked when misuse of the model comes to light.
This repo doesn't seem to say anything about what it does with anything you pass it. There's no privacy policy or anything? It being open source doesn't necessarily mean it isn't passing all my data somewhere. I didn't see anything in the repo stating that everything definitely stays local.
I would be much more concerned if it had a privacy policy, to the point where just having one means I probably wouldn't use it. That is not common practice for Free Software that runs on your machine. The only network operations Ollama performs are for managing LLMs (i.e., downloading Mistral from their server).
> I didn't see anything about this in the repo that everything definitely stays local.
If this is your concern, I'd encourage you to read the code yourself. If you find it meets the bar you're expecting, then I'd suggest you submit a PR which updates the README to answer your question.
I run the Docker container locally. As far as I can tell, it doesn't call home or anything (from reading the source and from opensnitch). It is just cgo-wrapped llama.cpp that provides an HTTP API. It CAN fetch models from their library, but you can just as easily load in your own GGUF-formatted Llama models. They implement a Docker-like layers mechanism for model configuration that is pretty useful.
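For reference, hitting that HTTP API is about as simple as it gets. A hedged sketch, assuming the default port 11434 and a model you've already pulled ("mistral" is just a placeholder):

```python
# Hedged sketch of a call against Ollama's HTTP API; assumes the server is on
# the default port 11434 and that the "mistral" model has already been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Why is the sky blue?", "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```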
A privacy policy isn't typical for code that you build and run yourself on your own machine; it's something that services publish to explain what they will do with the data you send them.
Some will do it anyway for PR/marketing, but if the creator does not interact with, have access to, or collect your data they have no obligation to have a privacy policy.
We run coding assistance models on MacBook Pros locally, so here is my experience:
On the hardware side, I recommend an Apple M1 / M2 / M3 with at least 400GB/s memory bandwidth. For local coding assistance this is perfect for 7B or 33B models.
We also run a Mac Studio with a bigger model (70B), an M2 Ultra with 192GB RAM, as a chat server. It's pretty fast. Here we use Open WebUI as the interface.
Software-wise, Ollama is OK, as most IDE plugins can work with it now. I personally don't like the Go code they have. Also, some key features I would need are missing and just never get done, even though multiple people have submitted PRs for some of them.
LM Studio is better overall, both as a server and as a chat interface.
I can also recommend the CodeGPT plugin for JetBrains products and the Continue plugin for VSCode.
As a chat server UI, as I mentioned, Open WebUI works great; I also use it with Together AI as a backend.
An M2 Ultra with 192GB isn't cheap. Did you have it lying around for whatever reason, or do you have some very solid business case for running the model locally/on-prem like that?
Or maybe I'm just working in cash poor environments...
Edit: also, can you do training / finetuning on an m2 like that?
We already had some around as build agents.
We don't plan to do any fine-tuning or training, so we did not explore this at all. However, I don't think it would be a viable option.
You can chat with the models directly in the same way you can chat with GPT 3.5.
Many of the opensource tools that run these models let you also edit the system prompt, which lets you tweak their personality.
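As a concrete illustration of tweaking the personality via the system prompt: most local servers (Ollama, LM Studio, text-generation-webui) also expose an OpenAI-compatible endpoint. A hedged sketch, where the URL, port, and model name are assumptions you'd adjust for your own setup:

```python
# Hedged sketch: swap the "system" message to change the model's personality.
# The base_url assumes Ollama's OpenAI-compatible endpoint; LM Studio and
# others use a different port, and "mistral" is a placeholder model name.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")
reply = client.chat.completions.create(
    model="mistral",
    messages=[
        {"role": "system", "content": "You are a grumpy pirate. Answer in one sentence."},
        {"role": "user", "content": "What's the weather like today?"},
    ],
)
print(reply.choices[0].message.content)
```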
The more advanced tools let you train them, but most of the time, people are downloading pre-existing models and using them directly.
If you are training models, it depends what you are doing. Fine-tuning an existing pre-trained model still requires a decent number of examples, but you can often do a lot with, say, 1,000 examples in a dataset.
If you are training a large model completely from scratch, then, yes, you need tons of data and very few people are doing that on their local machines.
+1 on these questions.
Can I run a local LLM that will, for example,
- visit specified URLs and collect tabular data into csv format?
- ingest a series of documents on a topic and answer questions about it
- ingest all my PDF/MD/Word docs and answer questions about them?
Some of the tools offer a path to doing tool use (fetching URLs and doing things with them) or RAG (searching your documents). I think Oobabooga https://github.com/oobabooga/text-generation-webui offers the latter through plugins.
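If you'd rather see how small the "ingest documents and answer questions" loop really is, here's a hedged sketch of bare-bones RAG, using sentence-transformers for embeddings and a local Ollama endpoint for generation (the model names, chunks, and endpoint are all assumptions):

```python
# Hedged RAG sketch: embed document chunks, retrieve the closest ones to a
# question, and stuff them into a local model's prompt. Model names and the
# Ollama endpoint are assumptions; swap in whatever you actually run.
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "The warranty covers parts and labor for two years.",
    "Returns are accepted within 30 days with a receipt.",
    "The device ships with a USB-C cable and no charger.",
]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

question = "How long is the warranty?"
q_vec = embedder.encode([question], normalize_embeddings=True)[0]

# Vectors are normalized, so a dot product is cosine similarity.
top = np.argsort(chunk_vecs @ q_vec)[::-1][:2]
context = "\n".join(chunks[i] for i in top)

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": prompt, "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```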
Gpt4all and Simon Willison's llm Python tool are a nice way to get started, even on a modest laptop. A 14" MacBook Pro with 16GB goes a long way with most of the 7B models; anything larger and you need more RAM.
If you want a code-centric interface, it's pretty easy to use LangChain with local models. For example, you can specify a model from Hugging Face and it will download and run it locally.
I like LangChain, but it can get complex for use cases beyond a simple "give the LLM a string, get a string back". I've found myself spending more time in the LangChain docs than working on my actual idea/problem. However, it's still a very good framework and they've done an amazing job IMO.
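For what it's worth, the "specify a model from Hugging Face and it downloads and runs locally" path looks roughly like this; treat the import path, class name, and model as assumptions, since LangChain's layout moves around between releases:

```python
# Hedged LangChain sketch: HuggingFacePipeline downloads the model locally and
# runs it through transformers. gpt2 is only a tiny placeholder for a smoke
# test; substitute a real instruct model that fits your hardware.
from langchain_community.llms import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 40},
)
print(llm.invoke("Local LLMs are useful because"))
```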
edit: "Are options ‘idiot proof’ yet?" - from my limited experience, Ollama is about as straightforward as it gets.
FWIW I use Ollama and Open WebUI. Ollama uses one of my RTX A6000 GPUs. I also use Ollama on my M3 MacBook Pro. Open WebUI even supports multimodal models.
If you're broke, get a refurbished/used Nvidia P40. Same amount of VRAM as a 3090, and between 4 and 10 times cheaper depending on how cheap you can find a 3090.
Granted, it's slower of course, but it's the best bang for your buck on VRAM, so you can run larger models than a smaller but faster card might be able to. (Not an expert.)
Edit: if using it in a desktop tower, you'll need to cool it somehow. I'm using a 3D-printed fan thingy, but some people have figured out how to use a 1080 Ti APU cooler with it too.
If you're able to purchase a separate GPU, the most popular option is to get an NVIDIA RTX3090 or RTX4090.
Apple M2 or M3 Macs are becoming a viable option because of MLX https://github.com/ml-explore/mlx . If you are getting an M-series Mac for LLMs, I'd recommend getting something with 24GB or more of RAM.
You don't need MLX for this. Ollama, which is based on llama.cpp, is GPU-accelerated on a Mac. In particular, it has better performance on quantized models. MLX can be used for, e.g., fine-tuning; it's a bit faster than PyTorch for that.
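That said, if you do want to try MLX for inference, the mlx-lm package keeps it short. A hedged sketch where the model repo is an assumption (any MLX-converted model from Hugging Face should work):

```python
# Hedged mlx-lm sketch for Apple silicon; the model repo below is an
# assumption, pick any MLX-converted model from the mlx-community org.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")
print(generate(model, tokenizer, prompt="Write a haiku about local LLMs.", max_tokens=100))
```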
I still feel that llamafiles[1] are the easiest way to do this, on most architectures. It's basically just running a binary with a few command-line options, seems pretty close to what you describe you want.
Jumping on this as a fellow idiot to ask for suggestions on having a local LLM generate a summary from a transcript with multiple speakers. How important is it that the transcript is well formatted (diarization, etc.) first? My attempts have failed miserably thus far.
My experience with more traditional (non-Whisper-based) diarization & transcription is that it's heavily dependent on how well the audio is isolated. In a perfect scenario (one speaker per audio channel, well-placed mics) you'll potentially see some value from it. If you potentially have scenarios where the speaker's audio is being mixed with other sounds or music, they'll often be flagged as an additional speaker (so, speaker 1 might also be speaker 7 and 9) - which can make for a less useful summary.
You're saying that diarization quality is dependent on speaker isolation? I should have clarified: to my knowledge Whisper does not perform that step, and whether I need to do diarization is what I'm trying to figure out. (Probably I do.) Pyannote.audio has been suggested, but I ran into some weird dependency thing I didn't feel like troubleshooting late last night, so I have not been successful in using it yet.
I created a tool called Together Gift It because my family was sending an insane amount of gift related group texts during the holidays. My favorite one was when someone included the gift recipient in the group text about what gift we were getting for the person.
Together Gift It solves the problem the way you’d think: with AI. Just kidding. It solves the problem by keeping everything in one place. No more group texts. There are wish lists and everything you’d want around that type of thing. There is also AI.
Ollama is the easiest. For coding, use VSCode Continue extension and point it to ollama server.
The thing to watch out for (if you have disposable income) is the new RTX 5090. Rumors are floating around that they are going to have 48GB of VRAM per card, and even if not, the memory bandwidth is going to be a lot faster. People who are on 4090s or 3090s doing ML are going to move to those, so you can pick up a second 3090 for cheap, at which point you can load higher-parameter models. However, you will have to learn the Hugging Face Accelerate library to support multi-GPU inference (not hard, just some reading and trial/error).
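For what it's worth, the multi-GPU part is mostly `device_map="auto"` these days. A hedged sketch where the model name is an assumption and Accelerate decides how to shard the weights across whatever GPUs it finds:

```python
# Hedged sketch: transformers + Accelerate will shard a model across multiple
# GPUs via device_map="auto". The model name is an assumption; pick one that
# actually fits your combined VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.float16
)

inputs = tokenizer("The best budget GPU for local LLMs is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```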
I love it. It is easy to install and containerized.
Its API is great if you want to integrate it with your code editor or create your own applications.
I have written a blog [1] on the process of deployment and integration with neovim and vscode.
I also created an application [2] to chat with LLMs by adding the context of a PDF document.
Update: I would like to add that because the API is simple and Ollama is now available on Windows I don’t have to share my GPU between multiple VMs to interact with it.
I use it on my 2015 MacBook Pro. It's amazing how quickly you can get set up; kudos to the authors. It's a dog in terms of response time for questions, but that's expected with my machine configuration.
Also, they have Python and (less relevant to me) JavaScript libraries, so I assume you don't have to go through LangChain anymore.
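A hedged sketch of what the Python library looks like (the model name is a placeholder for whatever you've pulled):

```python
# Hedged sketch of the official ollama Python package; "mistral" is an
# assumption, use whatever model you have already pulled.
import ollama

response = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "Explain quantization in one sentence."}],
)
print(response["message"]["content"])
```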
If you're writing something that will run on someone's local machine I think we're at the point where you can start building with the assumption that they'll have a local, fast, decent LLM.
> If you're writing something that will run on someone's local machine I think we're at the point where you can start building with the assumption that they'll have a local, fast, decent LLM.
I don't believe that at all. I don't have any kind of local LLM. My mother doesn't, either. Nor does my sister. My girlfriend? Nope.
Hugging Face Transformers is your best bet. It's pretty straightforward and has solid docs, but you'll need to get your hands dirty a bit with setup and configs.
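The minimal Transformers path is only a few lines. A hedged sketch where the model name is an assumption and will be downloaded from Hugging Face on first run:

```python
# Hedged sketch of the plain transformers pipeline route; the model name is an
# assumption and will be downloaded from Hugging Face on first run.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    device_map="auto",
)
print(generator("Running LLMs locally is", max_new_tokens=40)[0]["generated_text"])
```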
For squeezing every bit of performance out of your GPU, check out ONNX or TensorRT. They're not exactly plug-and-play, but they're getting easier to use.
And yeah, Docker can make life a bit easier by handling most of the setup mess for you. Just pull a container and you're more or less good to go.
It's not quite "idiot-proof" yet, but it's getting there. Just be ready to troubleshoot and tinker a bit.
The current HYPERN // MODERN // AI builds are using flox 1.0.2 to install llama.cpp. The default local model is dolphin-8x7b at ‘Q4_KM’ quantization (it lacks defaults for Linux/NVIDIA, but that's coming soon and it works, you just have to configure it manually; Mac gets more love because that's what myself and the other main contributors have).
flox will also install properly accelerated torch/transformers/sentence-transformers/diffusers/etc: they were kind enough to give me a preview of their soon-to-be-released SDXL environment suite (please don't hold them to my "soon", I just know it looks close to me). So you can do all the modern image stuff, pretty much up to whatever is on HuggingFace.
I don't have the time I need to be emphasizing this, but the last piece before I open source this is a halfway decent sketch I've got of a binary replacement/complement for the OpenAI-compatible JSON/HTTP protocol everyone is using now.
I have incomplete bindings to whisper.cpp and llama.cpp for those modalities, and when it’s good enough I hope the bud.build people will accept it as a donation to the community managed ConnectRPC project suite.
We’re really close to a plausible shot at open standards on this before NVIDIA or someone totally locks down the protocol via the RT stuff.
edit: I almost forgot to mention. We have decent support for multi-vendor, mostly in practice courtesy of the excellent ‘gptel’, though both nvim and VSCode are planned for out-of-the-box support too.
The gap is opening up a bit again between the best closed and best open models.
This is speculation, but I strongly believe the current Opus API-accessible build is more than a point release; it's a fundamental capability increase (though it has a weird BPE truncation issue that could just be a beta bug, but it could hint at something deeper).
It can produce verbatim artifacts from hundreds of thousands of tokens ago and restart from any branch in the context, takes dramatically longer when it needs to go deep, and claims it's accessing a sophisticated memory hierarchy. Personally, I've never been slack-jawed with amazement at anything in AI except my first night with SD and this thing.