I don't have access to 175B for comparison. In a vacuum, 30B isn't very good. Roughly in the neighborhood of GPT-NeoX-20B, I think, which is to say not great: it repeats itself easily and has a tenuous relationship with the topic. It's still much better than anything I could run locally before now.