Not the OP, but I use chatblade [0] on the CLI, chatgpt-next-web [1] as a web GUI, and quivr [2] for multimodal stuff (files/images/audio/video). ATM everything goes through an Azure OpenAI endpoint, but I'd love to run an LLM locally.
At work we're always told to use the Azure one because our CIO is elbow deep in Microsoft's ass.
But I wonder... one funny thing is that everyone at Microsoft refers to OpenAI as an 'acquisition', whereas as far as I know they only hold a <50% stake?
If it's useful, I recently open-sourced a simple GPT-4 TUI [1], which I use many times a day. And since this is a Kagi post, I might as well plug my (extremely simple) Kagi CLI [2], which uses their FastGPT API under the hood.
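For anyone curious what "uses their FastGPT API under the hood" looks like, here's a minimal sketch of building a FastGPT request in Python. The endpoint URL and the `Bot` token auth scheme are my assumptions based on Kagi's published HTTP API, not something from the linked CLI; check Kagi's docs before relying on them.

```python
import json
import urllib.request

# Assumed endpoint for Kagi's FastGPT API (verify against Kagi's docs).
KAGI_FASTGPT_URL = "https://kagi.com/api/v0/fastgpt"

def fastgpt_request(query: str, token: str) -> urllib.request.Request:
    """Build (but don't send) a FastGPT request for the given query."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        KAGI_FASTGPT_URL,
        data=body,
        headers={
            # Kagi API keys are (assumed to be) passed as a 'Bot' token.
            "Authorization": f"Bot {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Actually sending it would be:
#   with urllib.request.urlopen(fastgpt_request("what is kagi?", token)) as resp:
#       answer = json.load(resp)
req = fastgpt_request("what is kagi?", "YOUR_API_KEY")
```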
I mainly use the ChatGPT bot for Matrix. I've tried some other frontends that are more like ChatGPT web, but I keep going back to Matrix because it's just so handy. I have all my other chats in there too, through bridges.
I use the OpenAI playground because I'm paranoid that third-party frontends will steal my API keys, and I don't have enough time to audit the code or set up firewall rules.