OP here. 100% agreed with your analysis. Thanks for chiming in. Coming from the Postgres world, this was very counterintuitive for me. I am still not convinced a database should charge thousands of dollars just because an index (cluster) is missing. It could either create the index automatically or warn the user up front that the query will be expensive or slow without one.
We use BQ quite extensively; there are a number of billing tuning options that are not well documented.
1. For some workloads it makes sense to move from billing on scanned TB of data to capacity-based pricing (billed on CPU/slot time per query). This can be done through the commitments page in the UI.
2. There is an option to have storage billed on logical or physical bytes. If your data has a lot of duplication (enums, customer IDs, etc.), physical billing can be a much better option, since it is based on compressed size. Last I looked this was only available as a CLI setting on the dataset, and you may need to ask Google to include you in their beta.
Switching lowered our storage billing by 30%.
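For anyone wanting to try this: below is a rough sketch of how I'd check whether physical billing would pay off, and how to flip the setting. The `INFORMATION_SCHEMA.TABLE_STORAGE` columns and the `--storage_billing_model` flag are from my memory of the GCP docs, and `my_project`/`my_dataset` are placeholders, so verify against the current documentation before relying on it:

```shell
# Compare logical vs physical bytes per table to estimate your compression ratio.
# (Assumes your data lives in the US multi-region; adjust the region qualifier.)
bq query --use_legacy_sql=false '
SELECT
  table_name,
  total_logical_bytes,
  total_physical_bytes,
  SAFE_DIVIDE(total_logical_bytes, total_physical_bytes) AS compression_ratio
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE
WHERE table_schema = "my_dataset"
ORDER BY compression_ratio DESC'

# Switch the dataset to physical-bytes storage billing:
bq update --storage_billing_model=PHYSICAL my_project:my_dataset
```

Keep in mind physical bytes are priced higher per GB than logical bytes (roughly 2x, if I recall correctly), so it only wins when your compression ratio is above that price ratio.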
I try to keep an eye on the GCP release notes to find things like the physical vs. logical billing option.