ph1lw's comments | Hacker News

I'm 29 and honestly skeptical that life expectancy will keep rising the way it's assumed to. With all the toxins, stress, rising heat deaths and crap in our environment, will most of us even be healthy enough to work until 70? Could it be 75 by the time I approach retirement? I think it's important to keep up some positive prospects for our generation; otherwise I could imagine, e.g., a higher suicide rate, which would then also shrink the workforce.


Shouldn't we ban the word "disposable"?

"being able to get rid of by throwing away or giving or selling to someone else."

Most items labeled as disposable don't just disappear. They disappear from our minds, but not from the planet.

The word gives a false impression of simplicity: it makes complex waste-management issues seem simple, absolving consumers of responsibility.


Unfortunately, no BYO LLM. Brave supports bringing your own LLM, e.g. through Ollama.

Besides that, I'm using the AI Summary Helper plugin for Chromium-based browsers (https://philffm.github.io/ai-summary-helper/), which also allows using Ollama (or OpenAI / Mistral), asking questions about articles, and inserting summaries right into the DOM (perfect for hoarding articles / forwarding them to Kindle).
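The core idea is roughly this (a minimal sketch, not the plugin's actual code; the model name and prompt are placeholders):

    // Sketch: summarize the current article with a local Ollama model
    // and insert the result into the page. Assumes Ollama is running
    // on its default port (11434).
    async function insertSummary(): Promise<void> {
      const article =
        document.querySelector<HTMLElement>("article") ?? document.body;
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "llama3.1:8b", // any locally pulled model
          prompt: `Summarize this article:\n\n${article.innerText}`,
          stream: false,
        }),
      });
      const { response } = await res.json();
      const box = document.createElement("div");
      box.textContent = response;
      article.prepend(box); // summary lands right in the DOM
    }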


Sort of funny that there are no local options, considering that Mozilla even funds llamafile. Hopefully they allow some API integration; if they're using standard OpenAI API calls, it should be easy to enable swapping the endpoint.
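Swapping the endpoint would look something like this against any OpenAI-compatible local server (a sketch; the port and model name depend on your setup, llamafile defaults shown):

    import OpenAI from "openai";

    // Point a standard OpenAI client at a local server instead.
    // llamafile serves an OpenAI-compatible API on port 8080 by default;
    // Ollama exposes one at http://localhost:11434/v1.
    const client = new OpenAI({
      baseURL: "http://localhost:8080/v1",
      apiKey: "sk-no-key-required", // local servers ignore the key
    });

    const completion = await client.chat.completions.create({
      model: "LLaMA_CPP", // placeholder; depends on the loaded model
      messages: [{ role: "user", content: "Summarize this page in two sentences." }],
    });
    console.log(completion.choices[0].message.content);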

Also, while it's nice to have a service option for those without any spare compute, I think the model choice is a bit of a shame, considering that even in the 7B class, models like Llama 3.1 8B, Qwen 2.5 7B, Tulu 3 8B, and Falcon 3 7B all clearly outclass Mistral 7B (Mistral 7B is also very bad at multilingual tasks, and is particularly inefficient at multilingual tokenization).

The current best fully open-weight (Apache 2.0 or similar) small models are probably OLMo 2 7B, Qwen 2.5 7B, Granite 3.1 8B, and Mistral Nemo Instruct (12B).

There's also a recently launched "GPU-Poor" Chat Arena for those interested in scoping out some of the smaller models (not a lot of ratings yet, so it's very noisy; take it with a grain of salt): https://huggingface.co/spaces/k-mktr/gpu-poor-llm-arena


> Brave supports bringing your own LLM, e.g. through Ollama

It's a shame Brave is so far ahead of the game but no one seems to notice.


I run Brave at home and a local LLM on the same machine, and didn't know this. I guess I'll be playing around this weekend.


This would be a good moment to raise taxes on airlines to bring them in line with rail travel. Demand would go down, and they would adjust their additional charges.


I had been using Anytype for some time, but a big disadvantage is that it requires a standalone app, which isn't possible on a corporate laptop. What I use instead is VSCode with Foam; a colleague uses Dendron. This has the advantage that I can also use GitHub Copilot or local LLMs for autocompletion. Imho the combo is better for pure writing and building quick connections between documents.


Related: The Frankfurt apple wine tram. https://de.wikipedia.org/wiki/Ebbelwei-Expre%C3%9F

A local standup comedian keeps delivering the joke that this thing has been in operation for 50 years but only has 30 Google reviews, with 3.5 stars on average. I mean, who writes reviews while intoxicated on apple wine?

Every city should have one of them!

