Ollama releases Python and JavaScript Libraries (ollama.ai)
607 points by adamhowell on Jan 25, 2024 | 149 comments



Are these libraries for connecting to an ollama service that the user has already installed or do they work without the user installing anything? Sorry for not checking the code but maybe someone has the same question here.

I looked at using ollama when I started making FreeChat [0] but couldn't figure out a way to make it work without asking the user to install it first (think I asked in your discord at the time). I wanted FreeChat to be 1-click install from the mac app store so I ended up bundling the llama.cpp server instead which it runs on localhost for inference. At some point I'd love to swap it out for ollama and take advantage of all the cool model pulling stuff you guys have done, I just need it to be embeddable.

My ideal setup would be importing an ollama package in swift which would start the server if the user doesn't already have it running. I know this is just js and python to start but a dev can dream :)

Either way, congrats on the release!

[0]: https://github.com/psugihara/FreeChat


On the subject of installing Ollama, I found it to be a frustrating and user-hostile experience. I instead recommend the much more user-friendly LLM[0] by Simon Willison.

The problems with Ollama include:

* Ollama silently adds a login item with no way to opt out: <https://github.com/jmorganca/ollama/issues/162>

* Ollama spawns at least four processes, some persistently in the background: 1 x Ollama application, 1 x `ollama` server component, 2 x Ollama Helper

* Ollama provides no information at install time about what directories will be created or where models will be downloaded.

* Ollama prompts users to install the `ollama` CLI tool, with admin access required, with no way to cancel, and with no way to even quit the application at that point. Ollama provides no clarity about what is actually happening during this step: all it is doing is symlinking `/Applications/Ollama.app/Contents/Resources/ollama` to `/usr/local/bin/`

The worst part is that not only is none of this explained at install time, but the project README doesn’t tell you any of this information either. Potential users deserve to know what will happen on first launch, but when a PR arrived to at least provide that clarification in the README, Ollama maintainers summarily closed that PR and still have not rectified the aforementioned UX problems.

As an open source maintainer myself, I understand and appreciate that Ollama developers volunteer their time and energy into the project, and they can run it as they see fit. So I intend no disrespect. But these problems, and a seeming unwillingness to prioritize their resolution, caused me to delete Ollama from my system entirely.

As I said above, I think LLM[0] by Simon Willison is an excellent and user-friendly alternative.

[0]: https://llm.datasette.io/


I think it boils down to a level of oblivious disrespect for the user, going by the points you raised about ollama. I am sure it's completely unintentional on their devs' part, simply not prioritising the important parts, which might be a little boring for them to spend time on, but to be taken seriously as a professional product I would expect more. Just because other apps may not have the same standards around full disclosure, it shouldn't be normalised if you want to be fully respected by other devs as well as the general public. After all, other devs who appreciate good standards will also be likely to promote a product for free (which you did for LLM[0]), so why waste the promotion opportunity when doing things right results in even better code and disclosure?


I don’t have any direct criticisms of anyone in particular, but the other thing is that any rational person spending this much time and money probably tries to think of a business plan. So vendor lock-in creeps into our ideas, even unintentionally. Nothing wrong with doing it intentionally, per se.

We all tell ourselves it’s value-add, but come on, there’s always an element of “we’ll make ourselves a one-stop shop!”

So for example, I think the idea of modelfiles is sound. Like dockerfiles, cool! But beyond the surface it’s totally bespoke and incompatible with everything else we came up with last year.

Bespoke has its connotation for reasons. Last I checked, the tokenizer fails on whitespace sometimes. Which is fine except for “why did you make us all learn a new file format and make an improvised blunderbuss to parse it!?”

(Heh. Two spaces between the verb and the args gave me a most perplexing error after copy/pasting a file path).


You don’t sound like the kind of user ollama was meant to serve. What you are describing is pretty typical of macOS applications. You were looking for more of a traditional Linux style command line process or a Python library. Looks like you found what you were after, but I would imagine that your definition of user friendly is not really what most people understand it to mean.


Respectfully, I disagree. Not OP, but this “installer” isn’t a standard macOS installer. With a standard installer I can pick the “show files” menu option and see what’s being installed and where. This is home rolled and does what arguably could be considered shady dark patterns. When Zoom and Dropbox did similar things, they were rightly called out, as should this.


I don't know what the fuss is about. Right below the shell-script curl thingy (I do think this approach should die) there are manual instructions: https://github.com/ollama/ollama/blob/main/docs/linux.md. You can do it by hand, and the code is also there. And if you don't feel like tweaking code, you can simply shift it into a Docker container (or containers) and export the ports and some folders to persist the downloaded models.

That they are not advertising it is not uncommon. OpenWhisper from when OpenAI was open did the same thing. Being a linux-user and all, we have ways to find those folders :D .


> I don't know what the fuss is about.

The post I was replying to is specifically about MacOS. I specifically reference MacOS. You then link to Linux docs. ;)


My bad. (sorry, I cannot help it) Goes to show, though: if you want freedom and things the way you want to have them, you are just using the wrong OS :D


Oh, hey, suddenly I’ve just been transported back to the early 2000s.


I partially agree. My only issue with them is that the documentation is a little more hidden than I'd like.

Their install is basically a tl;dr of an installer. That's great!

It'd be nice if it also pointed me directly to a readme with specific instructions on service management, config directories, storage directories, and where the history file is stored.


nix-shell makes most of this go away, except the ollama files will still be in `~/.ollama` which you can delete at any time.

  nix-shell -p ollama
in two tmux windows, then

  ollama serve 
in one and

  ollama run llama2 
in the other.

Exit and all the users, processes etc, go away.

https://search.nixos.org/packages?channel=23.11&show=ollama&...


The Linux binary (pre-built or packaged by your distro) is just a CLI. The Mac binary instead also contains a desktop app.

I agree with OP that this is very confusing. The fact the Mac OS installation comes with a desktop app is not documented anywhere at all! The only way you can discover this is by downloading the Mac binary.


Is this any different from

    brew install ollama


"User hostile experience" is complete hyperbole and disrespectful to the efforts of the maintainers of this excellent library.


It's not hyperbole when he listed multiple examples and issues which clearly highlight why he calls it that.

I don't think there was anything hyperbolic or disrespectful in that post at all. If I was a maintainer there and someone put in the effort to list out the specific issues like that I would be very happy for the feedback.

People need to stop seeing negative feedback as some sort of slight against them. It's not. Any feedback should be seen as a gift, negative or positive alike. We live in a massive attention-competition world, so to get anyone to spend the time to use, test and go out of their way to even write out in detail their feedback on something you provide is free information. Not just free information, but free analysis.

I really wish we could all understand and empathize with the fact that frustration with software has nothing to do with the maintainers or devs unless they are directly targeted.

You could say possibly that the overall tone of the post was "disrespectful" because of its negativity, but I think receiving that kind of post which ties together not just the issues in some bland objective manner but highlights appropriately the biggest pain points and how they're pain points in context of a workflow is incredibly useful.

I am constantly pushing and begging for this feedback on my work, so to get this for free is a gift.


“User-hostile” is not a term to use when you intend on giving useful constructive criticism. User-hostile is an accusation. I’m sorry that you’re so desperate for people to give you feedback that you’ll stoop down to the level of engaging with people who obviously seek to complain first and improve second, but…come on, your position as someone that ‘makes things’ is FAR from unique in this community. I think that a lot of people here understand the value of feedback, and the possible negative and positive attributes of feedback. It’s fair to point out this quite valid negative attribute.


What I said is an utterly factual statement: I found the experience to be user-hostile. You might have a different experience, and I will not deny you your experience even in the face of your clearly-stated intention to deny me mine.

Moreover, I already conveyed my understanding of and appreciation for the work open-source maintainers do, and I outright said above that I intend no disrespect.


“I have the right to say what I say” is a disingenuous and thought-terminating argument. What you are essentially saying is that you purely intended for this to be your soapbox, that you see your role here as dispensing your wisdom, and not to have any actual conversation other than one where people agree with you.

GP can just as easily say that they have a right to their opinion that your classification of the experience is invalid. “Yeah but I was talking about how I felt!” just doesn’t pass the smell test. Mature people can have their mind changed and can see when they were being a little over the top. Your “I felt this way, so I will always feel this way, and there’s nothing you can do to stop me” attitude is not a hill worth dying on.


It's very, very, very annoying how much some people are tripping over themselves to pretend a llama.cpp wrapper is some gift of love from saints to the hoi polloi. Y'all need to chill. It's good work and good. It's not great or the best thing ever or particularly high on either simple user friendliness or power user friendly. It's young. Let it breathe. Let people speak.


What troubles me is how many projects are using ollama. I can't stand that I have to create a model file for every model using ollama. I have a terabyte of models that are mostly GGUF, which is somewhere around 70 models of various sizes. I rotate in and out of new versions constantly. GGUF is a ~container~ that already has most of the information needed to run the models! I felt like I was taking crazy pills when so many projects started using it for their backend.

Text-generation-webui is leagues ahead in terms of plug and play. Just load the model and it will get you within 98% of what you need to run any model from HF. Making adjustments to generation settings, prompt and more is done with a nice GUI that is easily saved for future use.

Using llama.cpp is also very easy. It takes seconds to build on my Windows computer with cmake. Compiling llama.cpp with different parameters for older/newer/non-existent GPUs is very, very simple... even on Windows, even for a guy that codes in Python 97% of the time and doesn't really know a thing about C++. The examples folder in llama.cpp is a gold mine of cool things to run, and they get packaged up into *.exe files for dead simple use.


Thank you for sharing, it's sooooo rare to get signal amongst noise here re: LLMs.

I'm really, really surprised to hear this:

- I only committed in a big way to local a week ago. TL;DR: Stable LM 3B doing RAG meant my every-platform app needed to integrate local finally.

- Frankly didn't hear of Ollama till I told someone about Nitro a couple weeks back and they celebrated that they didn't have to use Ollama anymore.

- I can't even imagine what the case for another container would be.

- I'm very appreciative of anyone doing work. No shade on Ollama.

- But I don't understand the seemingly strong uptake to it if it's the case you need to go get special formatted models for it. There's other GUIs, so it can't be because it's a GUI. Maybe it's the blend of GUI + OpenAI API server? Any idea?? There's clearly some product-market fit here* but I'm at as complete a loss as you.

* maybe not? HN has weird voting behavior lately and this got to like #3 with 0 comments last night, then it sorta stays there once it has momentum.

- p.s. hear hear on the examples folder. 4 days, that's it, from 0 to on Mac / iOS / Windows / Android / Linux. I'm shocked how many other Dart projects kinda just threw something together quick for one or two platforms and just...ran with it. At half-speed of what they could have. All you have to do is pattern after the examples to get the speed. Wrestling with Flutter FFI...I understand avoiding lol. Last 4 days were hell. https://github.com/Telosnex/fllama


"It's not great or the best thing ever or particularly high on either *simple user friendliness* or power user friendly."

But there are multiple reports in this thread about how easy of an install it was. I'm adding my own in. It was super simple.

It was way easier than installing Automatic1111. It's easier than building llama.cpp.

SnowLprd had some good points for power users although I think he was overly critical in his phrasing. But what's got y'all tripping thinking this is hard?


Indeed, I thought the user experience was great. Simple way to download, install and start: everything just worked.


Big fan of Simon Willison's `llm`[1] client. We did something similar recently with our multi-modal inference server that can be called directly from the `llm` CLI (c.f. "Serving LLMs on a budget" [2]). There's also `ospeak` [3] which we'll probably try to integrate to talk to your LLM from console. Great to see tools that radically simplify the developer-experience for local LLMs/foundation models.

[1] https://github.com/simonw/llm

[2] https://docs.nos.run/docs/blog/serving-llms-on-a-budget.html...

[3] https://github.com/simonw/ospeak


I agree that alternative is good, but if you want to try ollama without the user experience drawbacks, install via homebrew.


There's also a docker container (that I can recommend): https://hub.docker.com/r/ollama/ollama


I got the same feeling. I think it’s generally bad practice to ask a user for their admin password without a good rationale as to why you’re asking, particularly if it’s non-obvious. It’s the ‘trust me bro’ approach to security: even if this is a trustworthy app, it encourages the behaviour of just going ahead and entering your password and not asking too many questions.

The install on Linux is the same. You’re essentially encouraged to just

    curl https://ollama.ai/install.sh | sh
which is generally a terrible idea. Of course you can read the script, but that misses the point: that’s clearly not the intended behaviour.

As other commenters have said, it is convenient. Sure.


We really need to kill this meme. All the “pipe to shell” trick really did from a security perspective is lay bare to some naive people the pre-existing risks involved in running third-party code. I recall some secondary ‘exploits’ around having sh execute something different to what you’d see if you just inspected the script yourself, by way of serving different content, or some HTML/CSS wizardry to have you copy out something unexpected, or whatever. But really, modern-day Linux is less and less about ‘just’ installing packages from your first-party OS package manager’s repositories. Beyond that, piping a downloaded script to your shell is just a different way of being as insecure as most people already are anyway.


https://github.com/ollama/ollama/blob/main/docs/linux.md

They have manual install instructions if you are so inclined.


Just for connecting to an existing service: https://github.com/ollama/ollama-python/blob/main/ollama/_cl...


For the client API it's pretty clear:

    from ollama import Client
    client = Client(host='http://localhost:11434')

But I don't quite get how the example in "Usage" can work:

    import ollama
    response = ollama.chat(model='llama2', messages=[
        {
            'role': 'user',
            'content': 'Why is the sky blue?',
        },
    ])
    print(response['message']['content'])
Since there is no parameter for host and/or port.


Once you have a custom `client` you can use it in place of `ollama`. For example:

  client = Client(host='http://my.ollama.host:11434')
  response = client.chat(model='llama2', messages=[...])


Thanks. I don't have the service installed on my computer right now, but I assume the former works because it defaults to a host (localhost) and port number that are also the defaults for the ollama service?


Exactly that. The client host options default to those values: https://github.com/ollama/ollama-python/blob/main/ollama/_cl...

Also overrideable with OLLAMA_HOST env var. The default imported functions are then based off of a no-arg constructed client https://github.com/ollama/ollama-python/blob/main/ollama/__i...

    # ollama-python/ollama/__init__.py
    _client = Client()

    generate = _client.generate
    chat = _client.chat
    embeddings = _client.embeddings
    ...
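
So, for example, a minimal sketch that points those module-level functions at a hypothetical remote host. Since the default client is constructed at import time, the environment variable has to be set before `ollama` is first imported:

    import os

    # Hypothetical host; set OLLAMA_HOST before the first `import ollama`,
    # because the module-level default client is built at import time.
    os.environ["OLLAMA_HOST"] = "http://my.ollama.host:11434"

    import ollama

    response = ollama.chat(
        model="llama2",
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
    )
    print(response["message"]["content"])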


I used the Ollama docker image to integrate with Gait Analyzer[1], a self-hosted gait analysis tool; all I had to do was set up the docker compose file.

I was able to get the setup done with a single script for the end user and I used langchain to interact with Ollama.

[1] https://github.com/abishekmuthian/gaitanalyzer


I posted about the Python library a few hours after release. Great experience: easy, fast, and works well.

I created a gist with a quick and dirty way of generating a dataset for fine-tuning the Mistral model using Instruction Format on a given topic: https://gist.github.com/ivanfioravanti/bcacc48ef68b02e9b7a40...


How does this fine-tuning work? I can see that you are loading a train.jsonl file and some instructions, but is the output model generated, or is this some kind of new way of training the models?


The gist is only to create the dataset, not to fine-tune.


What's your observations about finetunes - are they really useful for anything practical? :)


Does Ollama support fine-tuning? I assume not. (Not asking about fine-tuned models, which I know they support.)


Can we use it in the cloud, or do I gotta download it locally? It might not work on my 2015 MacBook with 8GB RAM.


Gist isn't an acronym, it's a word. (e.g. "get the gist of things")


An off-topic question: is there such a thing as a "small-ish language model"? A model that you could simply give instructions / "capabilities" to, which a user can interact with. Almost like Siri-level of intelligence.

Imagine you have an API-endpoint where you can set the level of some lights and you give the chat a system prompt explaining how to build the JSON body of the request, and the user can prompt it with stuff like "Turn off all the lights" or "Make it bright in the bedroom" etc.

How low could the memory consumption of such a model be? We don't need to store who the first kaiser of Germany was, "just" enough to kinda map human speech onto available API's.


There are "smaller" models, for example tinyllama 1.1B (tiny seems like an exaggeration). PHI2 is 2.7B parameters. I can't name a 500M parameter model but there is probably one.

The problem is they are all still broadly trained and so they end up being Jack of all trades master of none. You'd have to fine tune them if you want them good at some narrow task and other than code completion I don't know that anyone has done that.

If you want to generate json or other structured output, there is Outlines https://github.com/outlines-dev/outlines that constrains the output to match a regex so it guarantees e.g. the model will generate a valid API call, although it could still be nonsense if the model doesn't understand, it will just match the regex. There are other similar tools around. I believe llama.cpp also has something built in that will constrain the output to some grammar.
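
To give a sense of what grammar-constrained output looks like, here is a rough sketch using the llama-cpp-python bindings (the model path, grammar and prompt are made up for illustration, not something from this thread):

    # Constrain llama.cpp output with a GBNF grammar via llama-cpp-python.
    from llama_cpp import Llama, LlamaGrammar

    # Only admits objects like {"room": "bedroom", "level": 0}
    grammar = LlamaGrammar.from_string(r'''
    root   ::= "{" ws "\"room\"" ws ":" ws string ws "," ws "\"level\"" ws ":" ws number ws "}"
    string ::= "\"" [a-zA-Z ]* "\""
    number ::= [0-9]+
    ws     ::= [ \t\n]*
    ''')

    llm = Llama(model_path="./phi-2.Q4_K_M.gguf")  # any local GGUF file
    out = llm(
        "Turn off all the lights in the bedroom.\nJSON command:",
        grammar=grammar,
        max_tokens=64,
    )
    print(out["choices"][0]["text"])

As noted above, the model can still emit nonsense that happens to match the grammar if it doesn't understand the request.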


https://pypi.org/project/languagemodels/ can load some small models, but forming JSON reliably seems to require a larger-ish model (or fine-tuning)

Aside: I expect Apple will do exactly what you're proposing and that's why they're exposing more APIs for system apps


Not really. You can use small models for tasks like text classification etc (traditional NLP) and those run on pretty much anything. We're talking about BERT-like models, like DistilBERT for example.

Now, models that have "reasoning" as an emergent property... I haven't seen anything under 3B that's capable of making anything useful. The smallest I've seen is litellama, and while it's not 100% useless, it's really just an experiment.

Also, everything requires new and/or expensive hardware. For a GPU you're really looking at about 1k€ minimum for something decent for running models. CPU inference is way slower, and forget about anything without AVX (preferably AVX2).

I try models on my old ThinkPad x260 with 8GB RAM, which is perfectly capable for developing stuff and those small task-oriented models I've told you about, but even though I've tried everything under the sun, with quantization etc, it's safe to say you can only run decent LLMs at a decent inference speed with expensive hardware right now.

Now, if you want tasks like language detection, classifying text into categories, very basic question answering, etc, then go on HuggingFace and try yourself; you'll be capable of running most models on modest hardware.

In fact, I have a website (https://github.com/iagovar/cometocoruna/tree/main) where I'm using a small Flask server in my data pipeline to extract event information from text blobs I get scraping sites. That runs every day on an old Atom + 4GB RAM laptop that I use as a server.

Experts in the field say that might change (somewhat) with mamba models, but I can't really say more.

I've been playing with the idea of dumping some money into hardware. But I'm 36, unemployed and just got into coding about 1.5 years ago, so until I secure some income I don't want to hit my savings hard; this is not the US where I can land a job easily (Junior looking for a job, just in case someone here needs one).


Speaking of, I imagine Alexa, Siri, etc. should now be replaced by LLMs? Or were they already implemented using LLMs?


Exactly this. I have yet to run into a "small" model that is of good enough (GPT-3) quality.


Not directly related to what Ollama aims to achieve. But, I’ll ask nevertheless.

Local LLMs are great! But, it would be more useful once we can _easily_ throw our own data for them to use as reference or even as a source of truth. This is where it opens doors that a closed system like OpenAI cannot - I’m never going to upload some data to ChatGPT for them to train on.

Could Ollama make it easier and standardize the way to add documents to local LLMs?

I’m not talking about uploading one image or model and asking a question about it. I’m referring to pointing a repository of 1000 text files and asking LLMs questions based on their contents.


For now RAG is the best “hack” to achieve this at very low cost since it doesn’t require any fine tuning

I’ve implemented a RAG library if you’re ever interested but they are a dime a dozen now :)

https://www.github.com/jerpint/buster


Sounds like Retrieval Augmented Generation. This is the technique used by e.g. most customized chatbots.


I don’t know if Ollama can do this but https://gpt4all.io/ can.


Basically, I want to do what this product does, but locally with a model running on Ollama. https://www.zenfetch.com/


Hey - Akash from Zenfetch here. We’ve actually tested some of our features with local models and have found that they significantly underperform compared to hosted models. With that said, we are actively working on new approaches to offer a local version of Zenfetch.

In the meantime, we do have agreements in place with all of our AI providers to ensure none of our users' information is used for training or any other purpose. Hope that helps!


Hey. Congratulations on your product. I’m guessing it will be greatly useful for your target audience.

I don’t have a serious need that I’d think worth paying for. So, I’m probably not in your target. I wanted to do this for a personal use case.

Throw all my personal documents at a local model and ask very personal questions like “the investment I made on that thing in 2010, how did I do against this other thing?” Or “from my online activity, when did I start focusing on this X tech?” Or even “find me that receipt/invoice from that ebike I purchased in 2021 and the insurance I took out on it”.

There is no way I’m taking the promise of a cloud product and upload all my personal documents to it. Hence my ask about the ability to do this locally - slowly is perfectly fine for my cheap need :-)


Makes a lot of sense. This might work for your use case: https://khoj.dev/. It's local, free, and open-source.


Interactive smart knowledge bases is such a massively cool direction for LLMs. I’ve seen Chat with RTX at the NVIDIA preview at CES and it’s mindblowingly simple and cool to use. I believe that interactive search in limited domains is gonna be massive for LLMs


Ooh, I want this too.


Llama_index basically does that. There are even some tutorials using Streamlit that create a UI around it for you.


> I’m never going to upload some data to ChatGPT for them to train on.

If you use the API, they do not train on it.

(However, that doesn't mean they don't retain it for a while).

As others have said, RAG is probably the way to go - although I don't know how well RAG performs on local LLMs.


Data is the new oil.

You can be 100% sure that OpenAI will do whatever they want whenever they want with any and every little bit of data that you upload to them.

With GPTs and their Embeddings endpoint, they encourage you to upload your own data en masse.


There are two main ways to "add documents to LLMs": using documents in retrieval augmented generation (RAG), and training/finetuning models. I believe you can use RAG with Ollama; however, Ollama doesn't do the training of models.


You can "use RAG" with Ollama, in the sense that you can put RAG chunks into a completion prompt.

To index documents for RAG, Ollama also offers an embedding endpoint where you can use LLM models to generate embeddings; however, AFAIK that is very inefficient. You'd usually want to use a much smaller embedding model like JINA v2[0], which is currently not supported by Ollama[1].

[0]: https://huggingface.co/jinaai/jina-embeddings-v2-base-en

[1]: https://github.com/ollama/ollama/issues/327
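
To make the "put RAG chunks into a completion prompt" part concrete, here is a rough sketch with the Python client (the chunking, the brute-force cosine search and the choice of llama2 as the embedding model are all simplifications for illustration):

    import ollama

    docs = [
        "Ollama exposes a local REST API on port 11434.",
        "GGUF is a file format for quantized models.",
    ]

    def embed(text):
        # Ollama's embedding endpoint; a dedicated embedding model would
        # normally be preferable to a full LLM here.
        return ollama.embeddings(model="llama2", prompt=text)["embedding"]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

    index = [(doc, embed(doc)) for doc in docs]

    question = "What port does Ollama listen on?"
    q_vec = embed(question)
    best_doc = max(index, key=lambda pair: cosine(q_vec, pair[1]))[0]

    # Stuff the best-matching chunk into the completion prompt.
    answer = ollama.generate(
        model="llama2",
        prompt=f"Context: {best_doc}\n\nQuestion: {question}\nAnswer:",
    )
    print(answer["response"])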


Maybe take a look at this? https://github.com/imartinez/privateGPT

It's meant to do exactly what you want. I've had mixed results.


Used ollama as part of a bash pipeline for a tiny throwaway app.

It blocks until there is something on the mic, then sends the wav to whisper.cpp, which then sends it to llama which picks out a structured "remind me" object from it, which gets saved to a text file.


I made something pretty similar over winter break so I could have something read books to me. ... Then it turned into a prompting mechanism of course! It uses Whisper, Ollama, and TTS from CoquiAI. It's written in shell and should hopefully be "Posix-compliant", but it does use zenity from Ubuntu; not sure how widely used zenity is.

https://github.com/jcmccormick/runtts


Would you share that code? I'm not familiar with using the mic in Linux, but interested to do something similar!


I'd also be really interested in seeing this


I love the ollama project. Having a local llm running as a service makes sense to me. It works really well for my use.

I’ll give this Python library a try. I’ve been wanting to try some fine tuning with LLMs in the loop experiments.


Noob question, probably being asked in the wrong place: is there any way to find out the minimum system requirements for running `ollama run` with different models?


I have an 11th-gen Intel CPU with 64GB RAM and I can run most of the big models slowly... so it's partly a question of what you can put up with.


On my 32G M2 Pro Mac, I can run up to about 30B models using 4 bit quantization. It is fast unless I am generating a lot of text. If I ask a 30B model to generate 5 pages of text it can take over 1 minute. Running smaller models like Mistral 7B is very fast.

Install Ollama from https://ollama.ai and experiment with it using the command line interface. I mostly use Ollama’s local API from Common Lisp or Racket - so simple to do.

EDIT: if you only have 8G RAM, try some of the 3B models. I suggest using at least 4 bit quantization.


Check out this guide for some recommendations: https://www.hardware-corner.net/guides/computer-to-run-llama...

You can easily experiment with smaller models, for example, Mistral 7B or Phi-2 on M1/M2/M3 processors. With more memory, you can run larger models, and better memory bandwidth (M2 Ultra vs. M2 base model) means improved performance (tokens/second).


They have a high-level summary of RAM requirements for each model's parameter size, and how much storage each model uses, on their GitHub: https://github.com/ollama/ollama#model-library


The rule of thumb I have used is to check the model size: if it fits into your GPU's VRAM then it will run nicely.

I have not run into a llama that won't run, but if it doesn't fit into my GPU you have to count seconds per token instead of tokens per second.
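
As a rough back-of-the-envelope version of that rule (the overhead term is a guess and grows with context length):

    # Approximate VRAM needed: parameters * bits-per-weight / 8, plus headroom
    # for the KV cache and runtime buffers.
    params_billion = 7        # e.g. a 7B model
    bits_per_weight = 4       # 4-bit quantization
    overhead_gb = 1.5         # assumed KV cache + runtime overhead

    weights_gb = params_billion * bits_per_weight / 8   # ~3.5 GB
    print(f"~{weights_gb + overhead_gb:.1f} GB VRAM")   # ~5.0 GB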


Llama2 7b and Mistral 7b run at about 8 tk/s on my Mac Pro, which is usable if you're not in a hurry.


Thank you so much everyone, all the help was really needed and useful : )


I run ollama on my steamdeck. It's a bit slow but can run most 7b models.


I posted about my awesome experiences using Ollama a few months ago: https://news.ycombinator.com/item?id=37662915. Ollama is definitely the easiest way to run LLMs locally, and that means it’s the best building block for applications that need to use inference. It’s like how Docker made it so any application can execute something kinda portably kinda safely on any machine. With Ollama, any application can run LLM inference on any machine.

Since that post, we shipped experimental support in our product for Ollama-based local inference. We had to write our own client in TypeScript but will probably be able to switch to this instead.


Could you maybe compare it to llama.cpp?

All it took for me to get going is `make` and I basically have it working locally as a console app.


Ollama is built around llama.cpp, but it automatically handles templating the chat requests to the format each model expects, and it automatically loads and unloads models on demand based on which model an API client is requesting. Ollama also handles downloading and caching models (including quantized models), so you just request them by name.

Recently, it got better (though maybe not perfect yet) at calculating how many layers of any model will fit onto the GPU, letting you get the best performance without a bunch of tedious trial and error.

Similar to Dockerfiles, ollama offers Modelfiles that you can use to tweak the existing library of models (the parameters and such), or import gguf files directly if you find a model that isn’t in the library.
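
For anyone who hasn't seen one, a minimal Modelfile looks something like this (the file name and values here are made up for illustration):

    # Modelfile: import a local GGUF and tweak a few parameters
    FROM ./mistral-7b-instruct.Q4_K_M.gguf
    PARAMETER temperature 0.2
    PARAMETER num_ctx 4096
    SYSTEM "You are a terse assistant."

You then build and run it with `ollama create mymodel -f Modelfile` and `ollama run mymodel`.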

Ollama is the best way I’ve found to use LLMs locally. I’m not sure how well it would fare for multiuser scenarios, but there are probably better model servers for that anyways.

Running “make” on llama.cpp is really only the first step. It’s not comparable.


This is interesting. I wouldn't have given the project a deeper look without this information. The lander is ambiguous. My immediate takeaway was, "Here's yet another front end promising ease of use."


I had similar feelings but last week finally tried it in WSL2.

Literally two shell commands and a largish download later I was chatting with mixtral on an aging 1070 at a positively surprising tokens/s (almost reading speed, kinda like the first chatgpt). Felt like magic.


For me, the critical thing was that ollama got the GPU offload for Mixtral right on a single 4090, where vLLM consistently failed with out of memory issues.

It's annoying that it seems to have its own model cache, but I can live with that.


vLLM doesn't support quantized models at this time so you need 2x 4090 to run Mixtral.

llama.cpp supports quantized models so that makes sense, ollama must have picked a quantized model to make it fit?


Eh? The docs say vLLM supports both gptq and awq quantization. Not that it matters now I'm out of the gate, it just surprised me that it didn't work.

I'm currently running nous-hermes2-mixtral:8x7b-dpo-q4_K_M with ollama, and it's offloaded 28 of 33 layers to the GPU with nothing else running on the card. Genuinely don't know whether it's better to go for a harsher quantisation or a smaller base model at this point - it's about 20 tokens per second but the latency is annoying.


[flagged]


> comes with a heavy runtime (node or python)

Ollama does not come with (or require) node or python. It is written in Go. If you are writing a node or python app, then the official clients being announced here could be useful, but they are not runtimes, and they are not required to use ollama. This very fundamental mistake in your message indicates to me that you haven’t researched ollama enough. If you’re going to criticize something, it is good to research it more.

> does not expose the full capability of llama.cpp

As far as I’ve been able to tell, Ollama also exposes effectively everything llama.cpp offers. Maybe my use cases with llama.cpp weren’t advanced enough? Please feel free to list what is actually missing. Ollama allows you to deeply customize the parameters of models being served.

I already acknowledged that ollama was not a solution for every situation. For running on your own desktop, it is great. If you’re trying to deploy a multiuser LLM server, you probably want something else. If you’re trying to build a downloadable application, you probably want something else.


How much of a performance overhead does this runtime add, anyway? Each request to a model eats so much GPU time for the actual text generation that the cost of processing the request and response strings, even in a slow, garbage-collected language, seems negligible in terms of latency.


For me the big deal with Ollama is the ease of instantly setting up a local inference API. I've got a beefy machine with a GPU downstairs, but Ollama allows me to easily use it from a Raspberry Pi on the main floor.


In my experience the award for easiest to run locally goes to llamafile models: https://github.com/Mozilla-Ocho/llamafile.


Also one feature request - if the library (or another related library) could also transparently spin up a local Ollama instance if the user doesn’t have one already. “Transparent-on-demand-Ollama” or something.


I have been working on something similar to that in Msty [1]. I haven’t announced the app anywhere (including to my friends), as I’ve got a few things in the pipeline that I want to get out first :)

[1]: https://msty.app


That gets into process management which can get dicey, but I agree, a "daemonless" mode could be really interesting
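
A naive sketch of the idea (error handling, readiness polling and shutdown are all glossed over; it just probes the default port and spawns `ollama serve` if nothing answers):

    import subprocess, time
    import httpx

    def ensure_ollama(host="http://localhost:11434"):
        """Return a spawned server process, or None if one is already running."""
        try:
            httpx.get(host)                     # a running server answers on /
            return None
        except httpx.ConnectError:
            proc = subprocess.Popen(["ollama", "serve"])
            time.sleep(1)                       # crude; real code should poll until ready
            return proc                         # caller must eventually terminate it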


I'd like to see a comparison to nitro https://github.com/janhq/nitro which has been fantastic for running a local LLM.


> Ollama is definitely the easiest way to run LLMs locally

Nitro outstripped them: a 3 MB executable with an OpenAI-compatible HTTP server and persistent model loading


Persistent model loading will be possible with: https://github.com/ollama/ollama/pull/2146 – sorry it isn't yet! More to come on filesize and API improvements


I just wanted to say thank you for being communicative and approachable and nice.


Who cares about executable size when the models are measured in gigabytes lol. I would prefer a Go/Node/Python/etc server for an HTTP service even at 10x the size over some guy's bespoke C++ any day of the week. Also, measuring the size of an executable after zipping is a nonsense benchmark in and of itself


Not some guy, agree on zip, disagree entirely with tone of the comment (what exactly separates ollama from those same exact hyperbolic descriptions?)


So cool! I have been using Ollama for weeks now and I just love it! Easiest way to run local LLMs; we are actually embedding them into our product right now and super excited about it!


I am using ollama as LLM server + ollama-webui as chat app server. Great UI


What's the product?


I used this half a year ago; loved the UX, but it was not possible to accelerate the workloads using an AMD GPU. How's the support for AMD GPUs under Ollama today?


Hi, I'm one of the maintainers on Ollama. We are working on supporting ROCm in the official releases.

If you do build from source, it should work (Instructions below):

https://github.com/ollama/ollama/blob/main/docs/development....

The reason why it's not in released builds is because we are still testing ROCm.


I'm using it on an AMD GPU with the clblast backend.


Unfortunately "AMD" and "easy" are mutually exclusive right now.

You can be a linux/python dev and set up rocm.

Or you can run llama.cpp's very slow OpenCL backend, but with easy setup.

Or you can run MLC's very fast Vulkan backend, but with no model splitting and medium-hard setup.


I'm a huge fan of Ollama. Really like how easy it makes local LLM + neovim https://github.com/David-Kunz/gen.nvim


This should make it easier to integrate with things like Vanna.ai, which was on HN recently.

There are a bunch of methods that need to be implemented for it to work, but then the usual OpenAI bits can be switched out for anything else, e.g. see the code stub in https://vanna.ai/docs/bigquery-other-llm-vannadb.html

Looking forward to more remixes for other tools too.


Why does this feel like an exercise in the high-priesting of coding? Shouldn't a Python library have everything necessary and work out of the box?


What I hate about ollama is that it makes server configuration a PITA. ollama relies on llama.cpp to run GGUF models but while llama.cpp can keep the model in memory using `mlock` (helpful to reduce inference times), ollama simply won't let you do that:

https://github.com/ollama/ollama/issues/1536

Not to mention, they hide all the server configs in favor of their own "sane defaults".


Sorry this isn't easier!

You can enable mlock manually in the /api/generate and /api/chat endpoints by specifying the "use_mlock" option:

    {"options": {"use_mlock": true}}

Many other server configurations are also available there: https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
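
With the new Python client, the same options dict should presumably be passable like this (the model name is just an example):

    import ollama

    # Assumed equivalent of {"options": {"use_mlock": true}} on /api/generate
    response = ollama.generate(
        model="llama2",
        prompt="Why is the sky blue?",
        options={"use_mlock": True},
    )
    print(response["response"])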


I think a FAQ with answers to these kinds of questions could be useful for users.


I love Ollama's simplicity in downloading and consuming different models with its REST API. I've never used it in a "production" environment; does anyone know how Ollama performs there, or is it better to move to something like vLLM for that?


The performance will probably be similar as long as you remember to tune the settings listed here: https://github.com/ollama/ollama/blob/main/docs/api.md

Try to, for example, set 'num_gpu' to 99 and 'use_mlock' to true.


They all probably already use elements of deep learning, but are very likely trained in a supervised way to output structured data (i.e. actions).


+1


Does Ollama support GBNF grammars?


No, but it does support JSON formatting.
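
For example, a minimal sketch with the Python client, assuming it exposes the API's `format` field (the prompt is illustrative):

    import ollama

    # JSON mode constrains the output to valid JSON; the prompt should still
    # describe the shape you want.
    response = ollama.chat(
        model="llama2",
        format="json",
        messages=[{"role": "user",
                   "content": "List three primary colors as JSON under the key 'colors'."}],
    )
    print(response["message"]["content"])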


API wise, it looks very similar to the OpenAI python SDK but not quite the same. I was hoping I could swap out one client for another. Can anyone confirm they’re intentionally using an incompatible interface?


There is an issue for this: [1]. I think it's more of a priority issue.

[1] https://github.com/ollama/ollama/issues/305


Same question here. Ollama is fantastic as it makes it very easy to run models locally, But if you already have a lot of code that processes OpenAI API responses (with retry, streaming, async, caching etc), it would be nice to be able to simply switch the API client to Ollama, without having to have a whole other branch of code that handles Ollama API responses. One way to do an easy switch is using the litellm library as a go-between but it’s not ideal.

For an OpenAI compatible API my current favorite method is to spin up models using oobabooga TGW. Your OpenAI API code then works seamlessly by simply switching out the api_base to the ooba endpoint. Regarding chat formatting, even ooba’s Mistral formatting has issues[1] so I am doing my own in Langroid using HuggingFace tokenizer.apply_chat_template [2]

[1] https://github.com/oobabooga/text-generation-webui/issues/53...

[2] https://github.com/langroid/langroid/blob/main/langroid/lang...

Related question - I assume ollama auto detects and applies the right chat formatting template for a model?


I've built exactly this if you want to give it a try: https://github.com/lhenault/simpleAI


This is going to make my current project a million times easier. Nice.


I love ollama. The engine underneath is llama.cpp, and they have the first version of self-extend about to be merged into main, so with any luck it will be available in ollama soon too!


A lot of the new models coming out are long context anyway. Check out Yi, InternLM and Mixtral.

Also, you really want to wait until flash attention is merged before using mega context with llama.cpp. The 8 bit KV cache would be ideal too.


Is anyone using this as an api behind a multi user web application? Or does it need to be fed off of a message queue or something to basically keep it single threaded?


What model format does ollama use? Or is one constrained to the handful of preselected models they list?


You can import GGUF, PyTorch or safetensors models into Ollama. I'll caveat that there are current limitations to some model architectures

https://github.com/ollama/ollama/blob/main/docs/import.md


Thanks!


Wow, I guess I wouldn’t have thought there would be GPU support. What’s the mechanism for this?


Via llama.cpp's GPU support.


`ollama serve` exposes an API you can query with fetch. Why the need for a library?


ollama feels like llama.cpp with extra undesired complexities. It feels like the former project is desperately trying to differentiate and monetize while the latter is where all the things that matter happens.


Literally wrote an Ollama wrapper class last week. Doh!


If you're using TypeScript I highly recommend modelfusion https://modelfusion.dev/guide/

It is far more robust, integrates with any LLM local or hosted, supports multi-modal, retries, structure parsing using zod and more.


This looks really nice but it’s good to point out that this project can use the Ollama HTTP API or any other API, but does not run models itself. So not a replacement to Ollama, but rather to the Ollama npm. Perhaps that was obvious because the post is about that, but I briefly thought this could run models too.


What is the benefit?

Ollama already exposes REST API that you can query with whatever language (or you know, just using curl) - why do I want to use Python or JS?


What’s the benefit of abstracting something?


There is a reason why "leftpad" is followed by "incident".


That one doesn’t have to write the glue code around your HTTP client library?


Feels pretty bad to install dependency just so you can avoid making a HTTP request.



The Rust+Wasm stack provides a strong alternative to Python in AI inference.

* Lightweight. Total runtime size is 30MB, as opposed to 4GB for Python and 350MB for Ollama.

* Fast. Full native speed on GPUs.

* Portable. Single cross-platform binary on different CPUs, GPUs and OSes.

* Secure. Sandboxed and isolated execution on untrusted devices.

* Modern languages for inference apps.

* Container-ready. Supported in Docker, containerd, Podman, and Kubernetes.

* OpenAI compatible. Seamlessly integrate into the OpenAI tooling ecosystem.

Give it a try --- https://www.secondstate.io/articles/wasm-runtime-agi/


Interesting. But the gguf file for llama2 is 4.78 GB in size.

For ollama, llama2:7b is 3.8 GB. See: https://ollama.ai/library/llama2/tags. Still, I see ollama requires less RAM to run llama 2.


Why would anyone downvote this? There is nothing against HN rules and the comment itself is adding new and relevant information.


From the HN Guidelines:

“Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.”

That user almost exclusively links to what appears to be their own product, which is self promotion. They also do it without clarifying their involvement, which could come across as astroturfing.

Self promotion sometimes (not all the time) is fine, but it should also be clearly stated as such. Doing it in a thread about a competing product is not ideal. If it came up naturally, that would be different from just interjecting a sales pitch.

I haven’t downvoted them, but I came close.


Thanks Ollama


I wish JS libraries would stop using default exports. They are not ergonomic as soon as you want to export one more thing in your package, which includes types, so all but the most trivial package requires multiple exports.

Just use a sensibly named export, you were going to write a "how to use" code snippet for the top of your readme anyway.

Also means that all of the code snippets your users send you will be immediately sensible, even without them having to include their import statements (assuming they don't use "as" renaming, which only makes sense when there's conflicts anyway)



