Even within the statistics community, there's a spectrum of quick-and-dirty vs fully rigorous. People with the ability and inclination to be fully rigorous often get treated as pedants and perfectionists (in the bad sense).
I often get that with business partners. "The data says <likely X, but with caveats/nuance/uncertainty/under certain assumptions we can't justify>" to which they respond "Can we just say X?" Or "can we get numbers on Y to support a presentation on Z?" when Y seems to support Z, but actually you can't draw that connection, so it's misleading.
Stuff like this happens because people treat extra rigor as pedantry and are comfortable making supporting assumptions that aren't supported by data. The people making fraudulent requests aren't aware that they're fraudulent (usually). In my experience, they just think they're being practical.
"Picking battles" is one way to describe my counterpoint: caveat exactly as much as needed so that a proper decision can be made with the risks understood.
If you want to query your whole team for a joint lunch location but coworker X is out, it's still appropriate to say (assuming the team has more members than just you and X) that you asked the team and decided on lunch place Y. It's not rigorous (X is left out), but it's still accurate.
This is very different from, say, regulatory or securities reporting where ambiguity is not appropriate.