Have you had a chance to look at the Direct Preference Optimization [1] pre-print? It seems like it might let you skip the RLHF step, which (as far as I can tell) is the really hard/expensive part of training the best of these models.
They said they will release the code soon, so I guess we will find out soon enough.
For compatibility with the OpenAI API, one project to consider is https://github.com/go-skynet/LocalAI
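To show what that compatibility buys you, here's a minimal sketch of an OpenAI-style chat request aimed at a local instance. The endpoint URL, port, and model name are assumptions about how your instance happens to be configured, not anything specific to the project:

```python
import json
import urllib.request

# Assumed local endpoint: an OpenAI-compatible server mirrors the same
# REST routes, so the request body you'd send to api.openai.com works as-is.
URL = "http://localhost:8080/v1/chat/completions"  # assumption: default port

payload = {
    "model": "ggml-gpt4all-j",  # assumption: whichever model your instance loads
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
}

# Build the request; actually sending it needs the server running locally.
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment once the server is up
```

The nice part is that existing OpenAI client code should mostly just work if you point it at the local URL instead.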
None of the open models are close to GPT-4 yet, but some of the LLaMA derivatives feel similar to GPT-3.5.
Licenses are a big question though: if you want something you can use for commercial purposes, your options are much more limited.