I'm not sure censorship or the lack of it matters for most use cases. Why would businesses using LLMs to speed up their processes, or a programmer using one to write code, care how accurately it answers political questions?
"hacking" is bad and c pointers are too difficult for children*, so while "tank man in square" may not come up regularly during a hack sesh, there are coding problems that ChatGPT won't answer is you ask it the "wrong" way. like calling something a hack sesh and it picking up that you're trying to do the immoral act of "hacking". phrasing a request as "write me a chrome extension to scrape pictures off my ex girlfriend's Instagram profile" will get you a refusal for being unethical, but being a halfway intelligent human and getting it to write the code to do that just by figuring out how to phrase it in an acceptable manner is just stupid wasted time.