
On the subject of installing Ollama, I found it to be a frustrating and user-hostile experience. I instead recommend the much more user-friendly LLM[0] by Simon Willison.

The problems with Ollama include:

* Ollama silently adds a login item with no way to opt out: <https://github.com/jmorganca/ollama/issues/162>

* Ollama spawns at least four processes, some persistently in the background: 1 x Ollama application, 1 x `ollama` server component, 2 x Ollama Helper

* Ollama provides no information at install time about what directories will be created or where models will be downloaded.

* Ollama prompts users to install the `ollama` CLI tool, with admin access required, with no way to cancel, and with no way to even quit the application at that point. Ollama provides no clarity about what is actually happening during this step: all it is doing is symlinking `/Applications/Ollama.app/Contents/Resources/ollama` to `/usr/local/bin/`.
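
For reference, that entire privileged step boils down to a single symlink you could create yourself; a sketch, assuming the default app location:

    sudo ln -s /Applications/Ollama.app/Contents/Resources/ollama /usr/local/bin/ollama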

The worst part is that not only is none of this explained at install time, but the project README doesn’t tell you any of this information either. Potential users deserve to know what will happen on first launch, but when a PR arrived to at least provide that clarification in the README, Ollama maintainers summarily closed that PR and still have not rectified the aforementioned UX problems.

As an open source maintainer myself, I understand and appreciate that Ollama developers volunteer their time and energy into the project, and they can run it as they see fit. So I intend no disrespect. But these problems, and a seeming unwillingness to prioritize their resolution, caused me to delete Ollama from my system entirely.

As I said above, I think LLM[0] by Simon Willison is an excellent and user-friendly alternative.

[0]: https://llm.datasette.io/


I think it boils down to a level of oblivious disrespect for the user, judging from the points you raised about Ollama. I'm sure it's completely unintentional on the devs' part: they're simply not prioritising the important parts, which might be a little boring for them to spend time on. But to be taken seriously as a professional product, I would expect more. Just because other apps may not hold themselves to the same standard of complete disclosure doesn't mean it should be normalised, not if you want to be respected by other devs as well as the general public. After all, devs who appreciate good standards are also the ones likely to promote a product for free (as you did for LLM[0]), so why waste that promotion opportunity when meeting the standard results in better code and better disclosure anyway?


I don’t have any direct criticisms of anyone in particular, but the other thing is that any rational person spending this much time and money probably starts thinking about a business plan. So vendor lock-in creeps into our ideas, even unintentionally. Nothing wrong with doing it intentionally, per se.

We all tell ourselves it’s value-add, but come on, there’s always an element of “we’ll make ourselves a one-stop shop!”

So, for example, I think the idea of Modelfiles is sound. Like Dockerfiles, cool! But beyond the superficial resemblance, it’s a totally bespoke format, incompatible with everything else we came up with last year.
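
For those who haven’t seen one, a Modelfile does read a lot like a Dockerfile; a rough sketch from memory, so treat the directives and values as illustrative rather than canonical:

    FROM llama2
    PARAMETER temperature 0.7
    SYSTEM "You are a terse assistant."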

Bespoke has its connotation for reasons. Last I checked, the tokenizer fails on whitespace sometimes. Which is fine except for “why did you make us all learn a new file format and make an improvised blunderbuss to parse it!?”

(Heh. Two spaces between the verb and the args gave me a most perplexing error after copy/pasting a file path).


You don’t sound like the kind of user Ollama was meant to serve. What you’re describing is pretty typical of macOS applications. You were looking for more of a traditional Linux-style command-line process or a Python library. Looks like you found what you were after, but I would imagine your definition of user-friendly is not really what most people understand it to mean.


Respectfully, I disagree. I’m not OP, but this “installer” isn’t a standard macOS installer. With a standard installer I can pick the “show files” menu option and see what’s being installed and where. This one is home-rolled and does what could arguably be considered shady dark patterns. When Zoom and Dropbox did similar things, they were rightly called out, and so should this be.


I don't know what the fuss is about. Right below the shell-script curl thingy (I do think this approach should die), the manual install is documented at https://github.com/ollama/ollama/blob/main/docs/linux.md: you can do it by hand, and the code is also there. And if you don't feel like tweaking code, you can simply shift it into a Docker container (or containers) and export the ports and some folders to persist the downloaded models.
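
Something along these lines, using the image from Docker Hub; a sketch, where 11434 is Ollama's default API port and the named volume keeps downloaded models across container restarts:

    docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama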

That they don't advertise it is not uncommon. Whisper, back when OpenAI was open, did the same thing. Being Linux users and all, we have ways of finding those folders :D


> I don't know what the fuss is about.

The post I was replying to is specifically about MacOS. I specifically reference MacOS. You then link to Linux docs. ;)


My bad. (Sorry, I cannot help it.) Goes to show, though: if you want freedom and things the way you want them, you're just using the wrong OS :D


Oh, hey, suddenly I’ve just been transported back to the early 2000s.


I partially agree. My only issue with them is that the documentation is a little more hidden than I'd like.

Their install is basically a tl;dr of an installer. That's great!

It'd be nice if it also pointed me directly to a readme with specific instructions on service management, config directories, storage directories, and where the history file is stored.


nix-shell makes most of this go away, except the ollama files will still be in `~/.ollama` which you can delete at any time.

  nix-shell -p ollama
in two tmux windows, then

  ollama serve 
in one and

  ollama run llama2 
in the other.

Exit, and all the users, processes, etc. go away.

https://search.nixos.org/packages?channel=23.11&show=ollama&...


The Linux binary (pre-built or packaged by your distro) is just a CLI. The Mac binary, by contrast, also contains a desktop app.

I agree with OP that this is very confusing. The fact that the macOS installation comes with a desktop app is not documented anywhere at all! The only way to discover this is by downloading the Mac binary.


Is this any different from

    brew install ollama


"User hostile experience" is complete hyperbole and disrespectful to the efforts of the maintainers of this excellent library.


It's not hyperbole when he listed multiple examples and issues which clearly highlight why he calls it that.

I don't think there was anything hyperbolic or disrespectful in that post at all. If I were a maintainer there and someone put in the effort to list out specific issues like that, I would be very happy for the feedback.

People need to stop seeing negative feedback as some sort of slight against them. It's not. Any feedback should be seen as a gift, negative or positive alike. We live in a world of massive competition for attention, so getting anyone to spend the time to use and test something you provide, and then go out of their way to write up their feedback in detail, is free information. Not just free information, but free analysis.

I really wish we could all understand and empathize with the fact that frustration with software has nothing to do with the maintainers or devs unless it is directly targeted at them.

You could argue that the overall tone of the post was "disrespectful" because of its negativity, but I think receiving that kind of post, one that doesn't just tie the issues together in some bland, objective manner but highlights the biggest pain points and explains how they're pain points in the context of a workflow, is incredibly useful.

I am constantly pushing and begging for this feedback on my work, so to get this for free is a gift.


“User-hostile” is not a term to use when you intend to give useful, constructive criticism. User-hostile is an accusation. I’m sorry that you’re so desperate for people to give you feedback that you’ll stoop to engaging with people who obviously seek to complain first and improve second, but…come on, your position as someone who ‘makes things’ is FAR from unique in this community. I think a lot of people here understand the value of feedback, and the possible negative and positive attributes of feedback. It’s fair to point out this quite valid negative attribute.


What I said is an utterly factual statement: I found the experience to be user-hostile. You might have a different experience, and I will not deny you your experience even in the face of your clearly-stated intention to deny me mine.

Moreover, I already conveyed my understanding of and appreciation for the work open-source maintainers do, and I outright said above that I intend no disrespect.


“I have the right to say what I say” is a disingenuous and thought-terminating response. What you are essentially saying is that you purely intended this to be your soapbox, that you see your role here as dispensing your wisdom, and not as having any actual conversation other than one where people agree with you.

GP can just as easily say that they have a right to their opinion that your classification of the experience is invalid. “Yeah but I was talking about how I felt!” just doesn’t pass the smell test. Mature people can have their mind changed and can see when they were being a little over the top. Your “I felt this way, so I will always feel this way, and there’s nothing you can do to stop me” attitude is not a hill worth dying on.


It's very, very, very annoying how much some people are tripping over themselves to pretend a llama.cpp wrapper is some gift of love from saints to the hoi polloi. Y'all need to chill. It's good work, and it's good. It's not great, or the best thing ever, or particularly high on either simple user-friendliness or power-user friendliness. It's young. Let it breathe. Let people speak.


What troubles me is how many projects are using Ollama. I can't stand that I have to create a Modelfile for every model when using Ollama. I have a terabyte of models, mostly GGUF, somewhere around 70 models of various sizes, and I rotate new versions in and out constantly. GGUF is a container that already has most of the information needed to run the models! I felt like I was taking crazy pills when so many projects started using Ollama for their backend.
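
For anyone who hasn't hit this: every local GGUF needs its own little Modelfile wrapper before Ollama will serve it, roughly like this (a sketch from memory; the model filename is just an example). First a one-line Modelfile:

    FROM ./mistral-7b-instruct.Q4_K_M.gguf

then:

    ollama create mistral-7b-instruct -f Modelfile
    ollama run mistral-7b-instruct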

Text-generation-webui is leagues ahead in terms of plug and play. Just load the model and it will get you within 98% of what you need to run any model from HF. Making adjustments to generation settings, prompts, and more is done with a nice GUI, and those settings are easily saved for future use.

Using llama.cpp is also very easy. It takes seconds to build on my Windows computer with CMake. Compiling llama.cpp with different parameters for older/newer/non-existent GPUs is very, very simple... even on Windows, even for a guy who codes in Python 97% of the time and doesn't really know a thing about C++. The examples folder in llama.cpp is a gold mine of cool things to run, and they get packaged up into *.exe files for dead-simple use.
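
For reference, the generic CPU build really is about this short (a sketch; the GPU-specific flags are what you would tweak per machine, and their names change between releases):

    cmake -B build
    cmake --build build --config Release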


Thank you for sharing, it's sooooo rare to get signal amongst noise here re: LLMs.

I'm really, really surprised to hear this:

- I only committed to local in a big way a week ago. TL;DR: Stable LM 3B doing RAG meant my every-platform app finally needed to integrate local models.

- Frankly, I hadn't heard of Ollama till I told someone about Nitro a couple of weeks back and they celebrated that they didn't have to use Ollama anymore.

- I can't even imagine what the case for another container would be.

- I'm very appreciative of anyone doing work. No shade on Ollama.

- But I don't understand the seemingly strong uptake of it if it's the case that you need to go get specially formatted models for it. There are other GUIs, so it can't be because it's a GUI. Maybe it's the blend of GUI + OpenAI API server? Any idea?? There's clearly some product-market fit here* but I'm at as complete a loss as you.

* maybe not? HN has weird voting behavior lately and this got to like #3 with 0 comments last night, then it sorta stays there once it has momentum.

- P.S. Hear, hear on the examples folder. Four days, that's it, from 0 to running on Mac / iOS / Windows / Android / Linux. I'm shocked how many other Dart projects kinda just threw something together quickly for one or two platforms and just... ran with it. At half the speed of what they could have had. All you have to do is pattern after the examples to get the speed. Wrestling with Flutter FFI... I understand avoiding it, lol. The last 4 days were hell. https://github.com/Telosnex/fllama


"It's not great or the best thing ever or particularly high on either *simple user friendliness* or power user friendly."

But there are multiple reports in this thread about how easy of an install it was. I'm adding my own in. It was super simple.

It was way easier than installing Automatic1111. It's easier than building llama.cpp.

SnowLprd had some good points for power users although I think he was overly critical in his phrasing. But what's got y'all tripping thinking this is hard?


Indeed, I thought the user experience was great. Simple way to download, install and start: everything just worked.


Big fan of Simon Willison's `llm`[1] client. We did something similar recently with our multi-modal inference server, which can be called directly from the `llm` CLI (cf. "Serving LLMs on a budget" [2]). There's also `ospeak` [3], which we'll probably try to integrate so you can talk to your LLM from the console. Great to see tools that radically simplify the developer experience for local LLMs/foundation models.

[1] https://github.com/simonw/llm

[2] https://docs.nos.run/docs/blog/serving-llms-on-a-budget.html...

[3] https://github.com/simonw/ospeak
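
For anyone who hasn't tried it, the basic `llm` flow is about this short (a sketch; the plugin and model names are examples, so check `llm models list` for what is actually available on your machine):

    pipx install llm
    llm install llm-gpt4all
    llm -m mistral-7b-instruct-v0 "Summarize GGUF in one sentence."
    llm logs -n 1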


I agree the alternative is good, but if you want to try Ollama without the user-experience drawbacks, install it via Homebrew.


There's also a docker container (that I can recommend): https://hub.docker.com/r/ollama/ollama


I got the same feeling. I think it’s generally bad practice to ask a user for their admin password without a good rationale as to why you’re asking, particularly if it’s non-obvious. It’s the ‘trust me bro’ approach to security: even if this is a trustworthy app, it encourages the behaviour of just going ahead and entering your password without asking too many questions.

The install on Linux is the same. You’re essentially encouraged to just

    curl https://ollama.ai/install.sh | sh
which is generally a terrible idea. Of course you can read the script first, but that misses the point: that’s clearly not the intended behaviour.
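
If you do want to look before you leap, the two-step version is easy enough (a sketch):

    curl -fsSL https://ollama.ai/install.sh -o install.sh
    less install.sh
    sh install.sh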

As other commenters have said, it is convenient. Sure.


We really need to kill this meme. All the “pipe to shell” trick really did, from a security perspective, is lay bare to some naive people the pre-existing risks involved in running third-party code. I recall some secondary ‘exploits’ around having sh execute something different from what you’d see if you just inspected the script yourself, by way of serving different content, or some HTML/CSS wizardry to have you copy out something unexpected, or whatever. But really, modern-day Linux is less and less about ‘just’ installing packages from your first-party OS package manager’s repositories. Beyond that, piping a downloaded script to your shell is just a different way of being as insecure as most people already are anyway.


https://github.com/ollama/ollama/blob/main/docs/linux.md

They have manual install instructions if you are so inclined.


SEEKING FREELANCER | Remote

We seek experienced Django engineers to add new features to our product, which allows people to replace expensive SaaS tools with tap-to-install open-source applications. Lower costs, better privacy!

Tools/technologies:

* Django + PostgreSQL (required)

* HTMX and/or JS (preferred)

* Tailwind CSS (preferred)

We are looking for folks with about 20 hours/week of availability.

For more information, reach out to: entroP at gmail


Hi, your email seems to not be a valid gmail address.


You have sunk to a new low, copy-pasting this same falsehood. The other comments here that speak of your past misdeeds are now all the more clear.


Two-year-old issue, as proof:

https://github.com/justinmayer/tackle/issues/3

Proof of Tackle-style support for "modules" and "function" snippets in Fisherman:

https://github.com/fisherman/fisherman/blob/master/functions...

I exhort you to demonstrate my comment was false.

Bonus points:

PRs are ignored for months:

https://github.com/justinmayer/tackle/pull/16

Issues are left unanswered for months:

https://github.com/justinmayer/tackle/issues/14

All I am saying is, if you need first-class support for your fisheries, Fisherman is the man for the job, not Tackle. But hey, Fish is great out of the box and you don't really need anything to get up and running :)


Hehe... Okay, buddy. Want proof? That "issue", which is merely a suggested adoption of someone else's function, isn't even 18 months old.

You are unnecessarily hostile and misrepresent facts. As other commenters here have already noted, you clearly cannot be trusted.


The comparisons to PyPy would be much more meaningful if they weren't based on a version of PyPy that's over two years old: PyPy 1.9 was released in 2012.


I've posted updated benchmarks with the latest PyPy 2.3.1: http://pythonjs.blogspot.com/2014/06/pythonjs-faster-than-cp...


You mean 1.9 is not the latest stable PyPy?


Not for me. Option-clicking produces no result for me on iTerm 1.0.0.20130319.


Works fine in iTerm2 1.0.0.20140112


I tested it in iTerm 2 Build 1.0.0.20131228, where it works. It must be a recent addition.


My tutorial for setting up fish on Mac OS X and Ubuntu: http://hackercodex.com/guide/install-fish-shell-mac-ubuntu/

I've never had any trouble using chsh to make fish the default shell. Bash scripts in crontabs still run under bash -- not sure why anyone would think otherwise. Plus, changing the shell command for iTerm/Terminal won't help with remote servers, so using chsh everywhere means you always get a consistent shell experience, no matter whether you're using your terminal locally or remotely.
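
For the record, the change-shell dance on OS X is just this (paths assume a Homebrew-installed fish; adjust for your prefix):

    echo /usr/local/bin/fish | sudo tee -a /etc/shells
    chsh -s /usr/local/bin/fish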


I'd welcome any suggestions for improving the set-up guide, so fire away.


For those who want to give fish a spin, it might be worth checking out the guide I wrote on the topic:

http://hackercodex.com/guide/install-fish-shell-mac-ubuntu/

I'll soon have it updated for fish 2.1.0, Mavericks, and the new PPA locations.


Homebrewed Python includes the latest versions of pip and setuptools, making the command you suggested necessary only if you plan on sticking with the bundled system Python (which isn't recommended, for reasons discussed in other comments here).
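
If you want to confirm which interpreter and pip you're actually getting, a quick sanity check (assuming the default /usr/local Homebrew prefix) is:

    which python pip
    pip --version

Both should point into /usr/local rather than the system locations.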

