If you're using it commercially, you're probably deploying it on a server where you aren't limited to 24GB and can just run Llama 2 70B.
The majority of people who want to run it locally on 24GB either want roleplay (so non-commercial) or code (for which you have CodeLlama).