I personally would like to be able to "fix" the thinking when asking these models for help with more complex, subjective problems, like design decisions. Since those kinds of solutions are belief-based rather than fact-based, it's important to be able to adjust the beliefs in the middle of the reasoning step and then re-run it or generate new output.
Most people do this now by engineering long-winded, instruction-heavy prompts, but that approach presupposes you already know the output you want before you ask for it. It's not very freeform.
If you run one of the distill versions in something like LM Studio, the thinking is very easy to edit. The replies from those models aren't half as good as the full R1's, but they're still remarkably better than anything I've run locally before.
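For anyone who'd rather script this than edit by hand, here's a minimal sketch of the idea: capture the reasoning from a first run, splice in an edited belief, and let the model continue from there via a raw completion. It assumes LM Studio's OpenAI-compatible server is running on its default port (1234) and that the loaded R1-distill wraps its reasoning in <think> tags. The chat-template tokens and stop string below are assumptions; check your model card for the real template.

```python
import requests

# Assumption: LM Studio's OpenAI-compatible server on its default port.
BASE_URL = "http://localhost:1234/v1/completions"

question = "Should this settings page use tabs or one long scrolling form?"

# Reasoning captured from a first run, truncated at the point where the
# model adopted a belief we want to change.
original_thinking = (
    "Tabs hide options from the user, so a long scrolling form is "
    "probably safer here."
)

# Hand-edited replacement for that belief.
edited_thinking = (
    "Tabs hide options, but this page has six distinct groups of "
    "settings, so grouping them under tabs likely reduces overwhelm."
)

# Splice the edited reasoning into the raw prompt so the model continues
# "thinking" from the revised belief. The turn tokens here are a guess at
# the distill's chat template, not the confirmed format.
prompt = f"<|User|>{question}<|Assistant|><think>\n{edited_thinking}"

resp = requests.post(
    BASE_URL,
    json={
        "prompt": prompt,
        "max_tokens": 1024,
        "temperature": 0.6,
        "stop": ["<|User|>"],  # assumption: halt before a new user turn
    },
    timeout=120,
)
resp.raise_for_status()

# The completion picks up mid-<think>, so you get the rest of the revised
# reasoning plus the final answer.
print(resp.json()["choices"][0]["text"])
```

The same loop works for the "re-run" part: re-truncate, re-edit, and post again until the reasoning reflects the beliefs you actually hold.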