Don’t sleep on Codex-CLI + gpt-5. While the Codex-CLI scaffolding is far behind CC, the gpt-5 code seems solid from what I’ve seen (you can adjust thinking level using /model).
It’s a very smart move for DeepSeek to put out an Anthropic-compatible API, as Kimi-K2 and GLM-4.5 did (puzzled as to why Qwen didn’t do the same). You can set up a simple function in your .zshrc to run Claude Code with these models:
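For example, a sketch of such a function, assuming DeepSeek's Anthropic-compatible base URL and Claude Code's standard env vars (verify both against the current docs before use):

```shell
# In ~/.zshrc: wrap Claude Code so it talks to DeepSeek's
# Anthropic-compatible endpoint instead of Anthropic's API.
# Assumes DEEPSEEK_API_KEY is set in your environment.
ds_claude() {
  ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic" \
  ANTHROPIC_AUTH_TOKEN="${DEEPSEEK_API_KEY}" \
  ANTHROPIC_MODEL="deepseek-chat" \
  claude "$@"
}
```

Then `ds_claude` launches Claude Code pointed at DeepSeek, and any extra arguments pass through to `claude` as usual.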
Wow thanks! I just ran into my Claude Code session limit about an hour ago, tried the method you linked, and added 10 CNY to a DeepSeek API account. An hour later I've got 7.77 CNY left and have used 3.3 million tokens.
I'm not confident enough to say it's as good as claude opus or even sonnet, but it seems not bad!
I did run into an API error when my context exceeded DeepSeek's 128k window and had to manually compact the context.
VoiceInk (one-time payment) and WisprFlow (subscription) are currently my fav dictation apps. I just looked at Whispering and have to say VoiceInk is far superior to Whispering in terms of UX and clarity of settings, so I think VoiceInk deserves at least as much attention. Several things make a huge difference in dictation apps, besides the obvious speed and accuracy:
- allow flexible recording toggle shortcuts
- show a visual icon with waves etc showing recording
- how the clipboard is handled during recording (does it copy to clipboard? does it clear it after text output?)
VoiceInk is nearly there in terms of good behavior on these dimensions, and I hope to ditch my Wispr Flow sub soon.
I recently dove into tmux just to be able to combine it with Claude Code (CC): allowing CC to watch and interact with a CLI application in a separate pane. A nice feature of tmux is that it is scriptable, i.e., it allows programmatically sending keystrokes to a specific pane. So I built this little tool "tmux-cli": a convenient, safe wrapper around tmux (it prevents self-killing, has built-in delays for the Enter key, etc.) that lets CC spawn another pane, launch a CLI script, and actually interact with it. This gives CC some interesting abilities: interact with CLI scripts expecting user input; spawn another instance of CC and give it a task (like sub-agents but fully visible); launch a CLI script under a debugger like Pdb and step through it; launch UI servers and use Puppeteer MCP to check the browser.
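Under the hood this is just tmux's `send-keys` / `capture-pane` scripting. A minimal illustration of those raw primitives (not tmux-cli's actual interface):

```shell
# Spawn a detached session, type a command into its pane,
# wait, then read back what the pane shows.
tmux new-session -d -s ccdemo                           # detached session/pane
tmux send-keys -t ccdemo 'echo hello-from-pane' Enter   # send keystrokes
sleep 1                                                 # give it time to run
out=$(tmux capture-pane -t ccdemo -p)                   # read pane contents
tmux kill-session -t ccdemo
echo "$out"
```

A wrapper layers safety on top of these calls: target-pane validation so an agent can't kill its own pane, and pauses between sending text and Enter so interactive programs have time to react.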
Speaking of prefix-key binding -- I find all control-key combos painful. I use the UHK split keyboard, and set mod-space as the prefix key which is very ergonomic.
I find it very effective to use a good STT/dictation app since giving sufficient detailed context to CC is very important, and it becomes tedious to type all of that.
I’ve experimented with several dictation apps, including Superwhisper, and I’ve settled on Wispr Flow. I’m very picky about having good keyboard shortcuts for hands-free dictation (meaning a good shortcut to toggle recording on and off), and of course accuracy and speed. Wispr Flow fits all my needs for now, but I’d love to switch to a local-only app and ditch the $15/mo sub :)
Agreed. CC lets you attempt things that you wouldn’t have dared to try. For example here are two things I recently added to the Langroid LLM agent framework with CC help:
Nice collapsible HTML logs of agent conversations (inspired by Mario Zechner’s Claude-trace), which took a couple hours of iterations, involving HTML/js/CSS:
A migration from Pydantic-v1 to v2, which took around 7 hours of iterations (would have taken a week at least if I even tried it manually and still probably wouldn’t have been as bullet-proof):
Beyond just running CLI commands, you can have CC interact with them. E.g., I built this little tool that gives CC a tmux-cli command (a convenience wrapper around tmux) that lets it interact with and monitor CLI applications:
For example this lets CC spawn another CC instance and give it a task (way better than the built-in spawn-and-let-go black box), or interact with CLI scripts that expect user input, or use debuggers like Pdb for token-efficient debugging and code-understanding, etc.
I really wish Qwen3 folks put up an Anthropic-compatible API like the Kimi and GLM/Zai folks cleverly did — this makes their models trivially usable in Claude Code, via this dead-simple setup:
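The setup amounts to pointing Claude Code's Anthropic env vars at the provider's endpoint. A sketch of hypothetical `~/.zshrc` helpers; the base URLs below are from memory of the Moonshot and Z.ai docs and should be verified:

```shell
# Hypothetical wrappers: run Claude Code against Kimi or GLM via
# their Anthropic-compatible endpoints. Assumes MOONSHOT_API_KEY
# and ZAI_API_KEY are set; check each provider's docs for the
# current base URL.
kimi_claude() {
  ANTHROPIC_BASE_URL="https://api.moonshot.ai/anthropic" \
  ANTHROPIC_AUTH_TOKEN="${MOONSHOT_API_KEY}" \
  claude "$@"
}

glm_claude() {
  ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic" \
  ANTHROPIC_AUTH_TOKEN="${ZAI_API_KEY}" \
  claude "$@"
}
```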