Show HN: ChatCBT – AI-powered cognitive behavioral therapist for Obsidian (github.com/clairefro)
69 points by marjipan200 on Dec 2, 2023 | 22 comments
ChatCBT is an AI-powered cognitive behavioral therapist for your local Obsidian notes.

You have the choice to use OpenAI, or a 100% local model with Ollama for total data privacy.
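
For anyone curious what "OpenAI or local Ollama" means mechanically, here is a rough TypeScript sketch of the general pattern - not ChatCBT's actual code, and the model names are placeholders:

    // Rough sketch only - not the plugin's actual implementation. Model names are placeholders.
    type Msg = { role: "system" | "user" | "assistant"; content: string };

    async function chatTurn(messages: Msg[], useOllama: boolean, openAiKey?: string): Promise<string> {
      if (useOllama) {
        // Local Ollama chat endpoint: the conversation never leaves your machine.
        const res = await fetch("http://localhost:11434/api/chat", {
          method: "POST",
          body: JSON.stringify({ model: "mistral", messages, stream: false }),
        });
        return (await res.json()).message.content;
      }
      // Hosted OpenAI chat completions endpoint: requires an API key.
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json", Authorization: `Bearer ${openAiKey}` },
        body: JSON.stringify({ model: "gpt-3.5-turbo", messages }),
      });
      return (await res.json()).choices[0].message.content;
    }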

When you're done with your conversation, ChatCBT can automatically summarize the chat into a table listing your negative beliefs, emotions, categories of negative thinking, and reframed thoughts. This way you can start to recognize patterns in your thinking and begin to rewire your reactions to disturbing circumstances.
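
To illustrate, a summary table in the saved markdown note might look something like the following (the columns and wording here are made up for illustration, not necessarily the plugin's exact output):

    | Negative belief             | Emotion | Category of negative thinking | Reframed thought                                        |
    | --------------------------- | ------- | ----------------------------- | ------------------------------------------------------- |
    | "I bombed the presentation" | Shame   | All-or-nothing thinking       | "A few rough slides don't erase the parts that landed"  |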

Conversations are stored in markdown files on your local machine, ensuring privacy and portability while giving you the freedom to organize your sessions as you please. You could easily share these files with a therapist.

I built this for myself when I noticed that the chat help I was getting from my therapist between sessions was essentially coaching that didn't require much context beyond the immediate situation and emotions. That felt like a particularly good use case for LLMs.

ChatCBT has been pretty effective at helping me talk myself through spiraling episodes of negative thinking. I've been able to get back on my horse faster, and it's convenient that it's available 24/7 and 5000x cheaper than a therapy session (or free if using Ollama). That's why I'd like to share it - curious if it helps anyone else.

It's under review to become an Obsidian community plugin, but in the meantime it's available now via git clone (see the readme). Happy for feedback!



I think LLMs have a lot of therapeutic potential, but I am worried about how easily they can cause harm in this area. CBT is the gold standard treatment for many problems, but applying it correctly requires some nuance and an accurate diagnosis of the problem.

Consider a teenager who is fixated on the idea that they're a "bad person" because they're having homosexual thoughts. You might think they need positive affirmations or exercises to challenge their fears with evidence. But these symptoms are sometimes a manifestation of OCD. If so, using CBT to "argue away" the fear could end up reinforcing the OCD cycle and causing more and more self-doubt. For this person, the better treatment would be a different CBT tool: exposure. It may seem odd, but they would be better served with exercises such as repeatedly thinking sexual thoughts and then telling themselves, "I might be a horrible person because of this." (The purpose is to desensitize them to the idea - eventually, they will just get bored.) Needless to say, this type of treatment needs to be implemented with care.

I think it's beyond the capabilities of LLMs to reliably distinguish problems like this. So then, I think the systems have to be designed so that their output is at least harmless for all people, and that sounds really hard.


That's an interesting point, but I'll reiterate that this tool is explicitly described as not being a replacement for therapy.

> I think LLMs have a lot of therapeutic potential, but I am worried about how easily they can cause harm in this area

At the end of the day, no one can be 100% responsible for someone else. The agency lies with the patient. Even human therapists come with their own beliefs that seep into their practice, which while well-intentioned may not align with the goals of the patient. It is up to the patient to understand their own values and decide whether therapy with a given therapist/therapy method is effective in improving their life.

In your example above, I'm not sure the harm risk is so high. In fact, with exposure specifically, since LLMs tend to regurgitate the same flavor of response over and over, they might actually deliver the same desensitizing effect.

I fall into the OCD thinking category, and I've found LLMs helpful precisely because they have endless patience (whereas I feel the need to hold back after a certain number of times talking to friends or my therapist about the same issue).

The best one can do is be upfront about the aims and design of the tool, and let people decide.


Those are all great points.

I hope my comment didn’t come across as a criticism of your project. I was really just going off on a tangent - I’ve heard a number of people discuss building LLM therapist bots, and so I’ve had these concerns in the back of my mind.

Your project looks awesome, and I think “assisted journaling” may be one of the best ways to frame this technology’s potential. I especially like how the app tends to follow up with questions that lead the user to continue exploring their feelings (rather than offering opinions or conclusions). Also: kudos on the clever name.


Seems irresponsible to advertise this to anyone who doesn't have an understanding of deep learning methods. A big part of therapy is simply getting patients out of the house and interacting with another human being for a time. LLMs could speak identically and still have failed at providing that. Hence, not good to suggest it as a cheaper/free option, in my opinion.

Having said that, people tend to try to make these a subscription service, while this indeed appears to be entirely free, ignoring OpenAI costs. Still, I think if you're not careful someone might get hurt, or might get worse care than they otherwise would have, because of the appeal of lower prices.


I understand that line of thought, but feel differently. Maybe this could be worded more clearly, but it's stated in the README that ChatCBT is not intended to be a replacement for professional therapy or human interaction, but rather a supplement. ChatCBT is just a journaling tool, similar to the CBT worksheets therapists give patients for their emotional toolkits, but more interactive, to help you get out of your head a bit.

Frankly, not everyone can afford therapy - monetarily, or in terms of the emotional availability of the friends and family around them. I agree with you that human interaction is important. But journaling is also a form of therapy.

I don't intend to monetize this, and would prefer it be a free tool that helps people to not need it anymore.


> I understand that line of thought, but feel differently. Maybe this could be worded more clearly

I think changing the name and the general "stated goal" (AI-powered CBT) would go a long way toward reducing confusion and improving outcomes/expectations. "Cognitive-behavioral journaling" might be a step closer to what you're actually doing, and a step away from handing someone in a very fragile state the tools to worsen that state (whether that is your goal or not).

In particular I'm reminded that people like Blake Lemoine exist, who struggle far more than I or the typical HN'er when it comes to understanding that these LLMs are indeed very much _not_ alive, thinking, sentient, or emotional. Someone who is already experiencing delusional thinking may actually be advised to go nowhere near an LLM, in much the same way we now know that social media can actively harm one's mental health.

Mostly I think people can and will use tools like this because it's their right to. I just think the "pitch" (I know you aren't monetizing) is strange and risky.

To your point about those who can't afford therapy - I don't see how that justifies using an experimental tool simply because it is cheaper. Even if you were a qualified health professional (there are many trying to make these types of apps), you really do need to go through the red tape that is government oversight and consensus within the field of psychiatry. That will take years.


On the other hand, if a tool like this can help, it would be bad not to use it. I have been developing my own life assistant using GPT-4, and I've found its therapeutic effect amazing for getting through self-doubt and making decisions in life without regrets. I also do therapy, and I think the assistant is actually a lot more powerful than a therapist. It can understand the technical challenges and problems I face much better than a therapist, who can mostly only give me generic advice, whereas with an LLM I can discuss my problems daily, down to the most intricate details. I am so glad something like this exists now.

I know nothing else that has been so effective in combating overthinking and unnecessary self-doubt.

And I have usually been disappointed and frustrated with real therapy.


Noted. I can update the README to emphasize that it is a journaling tool.

edit - done!


Thanks!


The benefit here is that you can at least unpack stuff you aren't comfortable talking to a therapist about. The "it's free" thing isn't the real benefit.

The other important thing to look at is that mental healthcare in the US is still heavily stigmatized. There are professions that actively punish people who seek help, like the military or civil aviation - a pilot with 35 years in and a few years away from retirement just won't, for example. A 17-year-old kid whose family all served in the military, and who wants to do the same, will probably be advised against seeking therapy because certain labels lock you out of the career path. And life insurance can and will deny you coverage if you have conditions like anxiety or depression.

Even if you can afford it, you can end up risking your own livelihood (career options) or that of those you care about (a stay-at-home spouse who needs the insurance if something happens to you).

Therapy is important, and I wholeheartedly agree with you: people should go see a real human and get out of the house. It's the healthiest option, and least risky from a health perspective. But even if an LLM is worse than a real licensed therapist and just helps you "talk" about things, I can understand why someone might feel that is the only realistic option under the poor system we live in.


[flagged]


I think they raised an important point, and I appreciate the call-out to make the aim clearer.


Go away, troll.


It doesn't matter how you dress up the comments above; the intention is clear.


Yeah you’re right I was jealous. How did you figure it out?


Ask ChatCBT


How do you prevent OpenAI's model from telling you to talk to a licensed therapist anytime you say anything slightly negative?


I can't make it work through the plugin; I get an "Invalid initialization vector" error, which seems like a Node issue. The Ollama server is up and running, and I can curl it and prompt it through the CLI, no problem.
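
(Guessing at where that message comes from: Node's crypto module throws exactly "Invalid initialization vector" when a cipher is created with an IV of the wrong length - e.g. if the plugin encrypts a stored API key. A minimal repro of the error, unrelated to the plugin's actual code:)

    // Node throws "Invalid initialization vector" when the IV length doesn't match the cipher
    import * as crypto from "crypto";

    const key = crypto.randomBytes(32);
    const badIv = crypto.randomBytes(8); // aes-256-cbc expects a 16-byte IV
    crypto.createCipheriv("aes-256-cbc", key, badIv); // throws: Invalid initialization vector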


Strange - are you seeing the error in Obsidian? Happy to investigate if you open an issue.


Yes, it pops up in the top left; I don't have a GitHub account, and there is nothing in the server log though (just my manual curl). It's like Obsidian doesn't even try. The plugin is running too.


Thanks - I'll try to reproduce


This is really cool, well done on shipping, great naming too :)


Another day, another great AI tool!



