Cool, looks like text highlighting is a new addition in 2.10. There aren't any examples of this on the demo site, but can it capture the highlighted text snippets and show them on the link details page? That would help me quickly recall why I saved the link without opening the original link and re-reading the page. I haven't really seen this in other tools (or maybe I just haven't looked hard enough), except Memex.
Great product! Does it handle special metadata like https://mymind.com/ does, e.g. showing prices directly in the UI if the saved link is a product in a shop? If not, things like that would be a great addition!
Side note: when a website advertising a product does a bad job of optimising page loading, that's usually a red flag for me; yes, that website has noticeable jitter when scrolling up and down, even though it _only_ loads around ~70MB worth of assets initially.
I'd be interested to hear your thoughts on having a PWA vs regular mobile apps since it looks like you started with a PWA, but are moving to regular apps. Is that just a demand / eyeballs thing or were there technical reasons?
Ahh, yes, you can reduce it to names with a lot of columns. In my personal ideal, I'd love to store a short name for a link and have no boxes. Personally, I've always wanted links to work like the tag cloud in Pinboard, and to have a page that combines multiple tags/categories.
I'd also love a separation of human tags and AI tags (even by base or stem), just in case they provided radically different views, but both were useful.
EDIT:
Just took a quick look at the documentation: is there a native or supported distinction between links that are plain bookmarks and links that are more content/articles/resources?
So there are different ways it archives a webpage.
It currently stores the full webpage as a single HTML file, a screenshot, a PDF, and a read-it-later view.
Aside from that, you can also send the webpages to the Wayback Machine to take a snapshot.
To archive pages behind a login or paywall, you can use the browser extension, which captures an image of the webpage in the browser and sends it to the server.
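For the Wayback Machine part of the above, the public "Save Page Now" endpoint can be triggered with a plain GET request. A minimal sketch (the endpoint is the well-known `web.archive.org/save/` URL; how the app actually calls it is not shown in the thread):

```python
from urllib.parse import quote

# Wayback Machine "Save Page Now" endpoint; a GET to
# SAVE_ENDPOINT + <page URL> asks the archive to take a fresh snapshot.
SAVE_ENDPOINT = "https://web.archive.org/save/"


def wayback_save_url(page_url: str) -> str:
    """Build the Save Page Now URL for a page, percent-encoding anything
    that is not URL structure (scheme separators, slashes, query chars)."""
    return SAVE_ENDPOINT + quote(page_url, safe=":/?&=")
```

Requesting the returned URL (e.g. with `urllib.request.urlopen`) is enough to queue a snapshot; authenticated use of the SPN API adds rate limits and status polling on top of this.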
> To archive pages behind a login or paywall, you can use the browser extension, which captures an image of the webpage in the browser and sends it to the server.
It'd be awesome to integrate this with the SingleFile extension, which captures any webpage into a self-contained HTML file (with JS, CSS, etc, inlined).
How difficult would it be to import an existing list of links/tags? Also, if I were using a hosted version, would I be able to, e.g., insert/retrieve files via an API call?
I ask because I currently use Readwise, but I have a local script that syncs the Reader files to a local DB, which then feeds into some custom agent flows I have running on the side.
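That kind of sync loop can be sketched in a few lines. This is a hypothetical example, not the actual Readwise or app API: the endpoint URL, the `items`/`nextCursor` response shape, and the field names are all placeholders standing in for whatever the real API returns.

```python
import json
import sqlite3
import urllib.request

API_URL = "https://example.com/api/v1/links"  # placeholder, not a real endpoint
API_TOKEN = "YOUR-TOKEN"                      # placeholder token


def fetch_page(cursor=None):
    """Fetch one page from a hypothetical paginated JSON API."""
    url = API_URL + (f"?cursor={cursor}" if cursor else "")
    req = urllib.request.Request(url, headers={"Authorization": f"Token {API_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        # assumed shape: {"items": [...], "nextCursor": "..." or null}
        return json.load(resp)


def upsert_links(conn, items):
    """Idempotently mirror link records into a local SQLite table,
    so re-running the sync never duplicates rows."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS links (id TEXT PRIMARY KEY, url TEXT, title TEXT, tags TEXT)"
    )
    conn.executemany(
        "INSERT INTO links (id, url, title, tags) VALUES (?, ?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET url=excluded.url, title=excluded.title, tags=excluded.tags",
        [(i["id"], i["url"], i.get("title", ""), ",".join(i.get("tags", []))) for i in items],
    )
    conn.commit()


def sync(conn):
    """Walk the paginated feed until the cursor runs out."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        upsert_links(conn, page["items"])
        cursor = page.get("nextCursor")
        if not cursor:
            break
```

With a local mirror like this, the downstream agent flows only ever read SQLite and never hit the remote API directly.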
- Does the web front end support themes? It’s a trivial thing but based on the screenshots, various things about the default theme bug me and it would be nice to be able to change those without a user style extension.
- Does it have an API that would allow development of a native desktop front end?
A question arose for me though: if the AI tagging is self-hostable as well, how taxing is it on the hardware? What would the minimum viable hardware be?
I took a look at this... and you use the Ollama API behind the scenes?? Why not use an OpenAI compatible endpoint like the rest of the industry?
Locking it to Ollama is stupid. Ollama is just a wrapper around llama.cpp anyway. Literally everyone else running LLMs locally exposes an OpenAI-compatible API endpoint: llama.cpp, vLLM (which is what the inference providers use; I know the DeepSeek API servers run it behind the scenes), LM Studio (for the casual people), and so on. Not to mention OpenAI, Google, Anthropic, DeepSeek, OpenRouter, etc. all mainly use (or at least fully support, in the case of Google) an OpenAI-compatible endpoint.
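For what it's worth, the practical upshot of an OpenAI-compatible endpoint is that switching backends is just a base-URL change, since all of them accept the same `/chat/completions` request body. A sketch (the ports listed are the common defaults for each server, not guaranteed for any given setup; `my-model` is a placeholder model name):

```python
import json

# Common default base URLs for local OpenAI-compatible servers
# (defaults may differ per setup; these are just the usual ports).
BACKENDS = {
    "llama.cpp": "http://localhost:8080/v1",
    "vllm": "http://localhost:8000/v1",
    "lm-studio": "http://localhost:1234/v1",
    "ollama-openai-compat": "http://localhost:11434/v1",
}


def chat_request(base_url, model, messages):
    """Build the URL and JSON body for a POST to <base_url>/chat/completions.

    The body is the standard OpenAI chat-completions schema, which is
    exactly why swapping backends only means changing the base URL.
    """
    return (
        f"{base_url}/chat/completions",
        json.dumps({"model": model, "messages": messages}),
    )


url, body = chat_request(
    BACKENDS["vllm"], "my-model",
    [{"role": "user", "content": "Suggest tags for this bookmark."}],
)
```

Any OpenAI-compatible client library works the same way: point it at a different `base_url` and keep the rest of the code unchanged.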
If you don't like this free and open-source software that was shared, you can luckily change it yourself… or, if it doesn't support your favorite option, you can also just ignore it. No need to call someone's work or choices stupid.
Strong disagree. Just because something is free and open source does not make it good. Call a spade a spade.
Ollama is a piece of shit software that basically stole the work of llama.cpp, locks down its GGUF files so they cannot be used by other software on your machine, misleads users by hiding information (like what quant you are using, who produced the GGUF, etc.), created its own API endpoint to lock in users instead of using a standard OpenAI-compatible API, and has more problems besides.
It's like they looked at all the bad walled garden things Apple does and took it as a todo list.
That's an absolutely terrible defense. Ignorance is not an excuse, try telling that to a police officer.
Plus, certain people are held to a higher standard. It's not like I'm expecting a random person on the street to know about Ollama, but someone building AI software is expected to research what they are using and do their due diligence. To plead ignorance is to assert incompetence at best and negligence at worst.
Some key features of the app (at the moment):
- Text highlighting
- Full page archival
- Full content search
- Optional local AI tagging
- Sync with browser (using Floccus)
- Collaborative
Also, for anyone wondering, all features from the cloud plan are available to self-hosted users :)