There's a lesser-known trick described in the OpenAI Cookbook: you can linearly transform a generic embedding space (like OpenAI's) to better fit your use case with a well-tuned matrix multiply.
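To give a flavor of the idea, here's a minimal sketch in PyTorch. The synthetic pairs, dimensions, and hyperparameters are mine for illustration, not the cookbook's; the core move is the same, though: learn a matrix `M` so that `embedding @ M` pulls labeled similar pairs together and pushes dissimilar pairs apart.

```python
import torch

# Toy stand-ins for real data; in practice these would be OpenAI embeddings of
# labeled pairs (1 = similar, -1 = dissimilar).
torch.manual_seed(0)
dim = 64
a = torch.randn(200, dim)                                # first item of each pair
b = torch.randn(200, dim)                                # second item of each pair
labels = torch.randint(0, 2, (200,)).float() * 2 - 1     # -1 or 1

# The "customized" embedding is just e @ M. Starting M at the identity means
# training begins from the untouched embedding space.
M = torch.eye(dim, requires_grad=True)
opt = torch.optim.Adam([M], lr=0.01)

for step in range(500):
    sims = torch.nn.functional.cosine_similarity(a @ M, b @ M)
    loss = torch.mean((sims - labels) ** 2)  # push cosine similarity toward the labels
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")
```

Once trained, you apply `M` to every embedding before doing search or classification as usual.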
I ported the cookbook technique into a standalone hardware-accelerated webapp, and added some UX to guide the process along. There are some examples on the homepage for out-of-the-box fun. With 30 seconds of training, I was able to improve performance on the first example by 15%.