Obsidian is free for individual use; the $50/yr tier is a commercial license.
They also have a $4/month sync product (sync across devices with e2e encryption), but you can use iCloud, Google Drive, etc. too.
In Obsidian, you can even sync to GitHub or Dropbox with community plugins, so price-wise it is free.
Also, Obsidian has much better search (via a community plugin), which is lacking in NotesHub.
+1 on the third-party search. The Quick Switcher plugin lets you bring up a hotkeyed modal that searches across all note titles, headings, subheadings, and tags in a single fast interface.
A really interesting feature would be the ability to publish to your own host. The publishing aspect is the one thing that has me seriously contemplating Obsidian, but I'm so deep into GitBook and GitHub that I haven't been able to justify the cost-benefit calculation.
Regarding the app being in the store for Mac and iOS: I stopped using Obsidian when they removed it from the Mac App Store and kept it only on iOS.
I need the sandbox. For business it's a no-brainer: allow some apps from the store, grant the right permissions, done. And for me personally, I don't use anything that doesn't come from the store, even if I can download the app freely from the project page; a few bucks for the sandbox and peace of mind is worth it to me.
I donated to Obsidian because I liked the project in general, but I dislike the way they distribute the app on every platform outside of iOS, e.g. a snap with --classic, which renders the attempt to sandbox it useless.
Edit ---
Reading some comments, it's pretty obvious that a lot of people even install third-party plugins on an app that is about taking personal notes. It's refreshing to see how much people care about cybersecurity and their personal and business notes.
Thanks for your recommendation! I just ran llamafile for the first time with a custom prompt on my Windows machine (i5-13600KF, RX 6600) and found that it performed extremely slowly and wasn't as smart as ChatGPT. It doesn't seem suitable for productive writing. Did I do something wrong, or is there a way to improve its writing performance?
Local models are definitely not as smart as ChatGPT but you can get pretty close! I'd consider them to be about a year behind in terms of performance compared to hosted models, which is not surprising considering the resource constraints.
I've found that you can get faster performance by choosing a smaller model and/or a smaller quantization. You can use other models with llamafile as well; they have some prebuilt ones.
RAM and what GPU you have are the big determinants of how fast it will run and how smart a model you can run. Larger models need a lot of RAM and GPU memory to avoid significant slowdown, because inference is much faster when the entire model fits in memory. Small models range from 3-8 gigabytes, but a 70B-parameter model will be 30-50 gigabytes at a quantization like Q4_K_S. While not as good as top commercial models like ChatGPT, local models are still quite capable, and I like that there are also uncensored/abliterated models like Dolphin.