Depends on your use case. If you're doing pure classification, then encoder-only models like DeBERTa might get you better performance at a fraction of the size (so cheaper inference).
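For example, here's a rough sketch of what that looks like with Hugging Face transformers. The checkpoint name is real, but the rest is illustrative; the classification head is randomly initialized until you fine-tune it on your own labels:

```python
# Minimal sketch: sequence classification with DeBERTa via transformers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "microsoft/deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=2 is a placeholder; set it to your task's label count.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("This product is great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()  # meaningless until the head is fine-tuned
```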
But if you need text generation and are ok with a 7B+ parameter model, Llama 2 or one of its derivatives is what I'd strongly recommend. The community around it is much larger than any of the alternatives, so the tooling is better, and it's at or near the state of the art on most evals against other similarly sized open models.
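A quick sketch of running it through transformers (note the checkpoint is gated, so you'd need to accept Meta's license on the Hub first; `device_map="auto"` also assumes you have accelerate installed):

```python
# Minimal sketch: text generation with Llama 2 7B chat.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "meta-llama/Llama-2-7b-chat-hf"  # gated; requires license acceptance
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain retrieval augmented generation in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```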
If you're comfortable sharing more details of the task you're trying to do I might be able to give more specific advice.
It depends a lot on what you're trying to do. If you have a focused use case and a clear idea of the kind of fine-tuning you want, you can probably get away with one of the smaller models.
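If you do go the fine-tuning route, parameter-efficient methods like LoRA keep the cost down even further. A minimal sketch with the peft library (GPT-2 here is just a stand-in for whatever small model you pick):

```python
# Minimal sketch: parameter-efficient fine-tuning setup with LoRA via peft.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in small model
config = LoraConfig(
    r=8, lora_alpha=16, target_modules=["c_attn"], lora_dropout=0.05
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a tiny fraction of weights train
```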
Another thing worth looking into is Retrieval Augmented Generation (RAG). I don't see it in wide use yet, but it may turn out to be more useful than fine-tuning in a lot of situations.
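The core idea is simple enough to sketch in a few lines: embed your documents, retrieve the closest match to the query, and stuff it into the prompt. A toy example with sentence-transformers (the embedding model is just one common choice; swap in whatever retriever you like):

```python
# Toy RAG sketch: retrieve the most relevant doc, then build a grounded prompt.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support is available Monday through Friday, 9am to 5pm.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(docs, convert_to_tensor=True)

question = "How long do I have to return an item?"
q_embedding = embedder.encode(question, convert_to_tensor=True)
best = util.cos_sim(q_embedding, doc_embeddings).argmax().item()

# Prepend the retrieved context; any generative model can consume this prompt.
prompt = f"Context: {docs[best]}\n\nQuestion: {question}\nAnswer:"
print(prompt)
```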