OK, if what you were doing there was using "Bernoulli distribution" to mean "Bernoulli trial", then I stand corrected. But that's different from the binomial distribution, which is the more common thing to discuss, and what I was assuming you were talking about.
> I thought your whole point was that OPs code was not a valid way to sample from a binomial distribution?
The code OP posted was just taking the ratio of two binomial random variables. It's not "sampling from a binomial", except (perhaps) in the sense that each of those random variables was the result of independent coin flips.
We really need to be more precise in our terminology here. "Sampling from a distribution" can mean a lot of things. Based on the sibling comments, it seems like they were trying (?) to sample from the binomial CDF.
Setting this aside, my high-level point was that OP's calculation doesn't have anything to do with error distributions.
> We really need to be more precise in our terminology here. "Sampling from a distribution" can mean a lot of things.
I know it to mean only one thing: generating a value in such a way that, if the process were repeated, the generated values would follow the given distribution. How exactly this is done is irrelevant, as long as the resulting distribution is correct. See also https://en.wikipedia.org/wiki/Pseudo-random_number_sampling.
> [...] it seems like they were trying (?) to sample from the binomial CDF.
This is technically unclear terminology: you cannot actually sample from a CDF. But it is clear that you are referring to inverse transform sampling (https://en.wikipedia.org/wiki/Inverse_transform_sampling), where you draw a sample from a uniform distribution and map it through the target distribution's CDF (its inverse, to be precise) to obtain a sample from that non-uniform distribution.
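If that is what was meant, here is a minimal sketch of inverse transform sampling for a binomial. The function name and parameters are mine, not from the thread, and it assumes 0 < p < 1 and a moderate n (the (1-p)^n term underflows for huge n):

```python
import random

def binomial_via_inverse_transform(n, p):
    """Draw one Binomial(n, p) sample by inverse transform sampling:
    draw u ~ Uniform(0, 1) and return the smallest k whose cumulative
    probability reaches u. Assumes 0 < p < 1."""
    u = random.random()
    pmf = (1.0 - p) ** n          # P(X = 0)
    cdf = pmf
    k = 0
    while u > cdf and k < n:
        # Recurrence: P(X = k+1) = P(X = k) * (n - k) / (k + 1) * p / (1 - p)
        pmf *= (n - k) / (k + 1) * p / (1.0 - p)
        k += 1
        cdf += pmf
    return k
```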
> The code OP posted was just taking the ratio of two binomial random variables. It's not "sampling from a binomial", except (perhaps) in the sense that each of those random variables was the result of independent coin flips.
Once again: since the binomial distribution is the distribution of the number of successes in a series of independent coin flips, performing a series of independent coin flips and counting the successes is a perfectly valid way of sampling from the binomial distribution.
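In code, that is just counting successes over n flips. A minimal sketch (n and p are placeholders, not values from the article):

```python
import random

def binomial_via_coin_flips(n, p):
    """Draw one Binomial(n, p) sample directly from its definition:
    the number of successes in n independent flips, each succeeding
    with probability p."""
    return sum(random.random() < p for _ in range(n))

# e.g. binomial_via_coin_flips(1000, 0.5)
```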
> Based on the sibling comments, it seems like they were trying (?) to sample from the binomial CDF.
As they explain in the sibling comment, they generated two samples from the binomial distribution and compared them to each other the same way the original authors did. What they achieve by this is sampling from the same random variable that the original authors were implicitly sampling from. They then took multiple samples from that variable to get a feel for its distribution and confirm their original point: that 3% is not an uncommonly big value under that distribution.
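My reading of that procedure, as a sketch with made-up parameters (I don't know the n, p, or repetition count OP actually used, and I'm assuming the comparison was the ratio described upthread):

```python
import random

def flip_count(n, p):
    # One Binomial(n, p) sample: successes in n independent coin flips.
    return sum(random.random() < p for _ in range(n))

# Placeholder parameters; OP's actual values are not given in this thread.
n, p, reps = 1000, 0.5, 10_000

# Sample the implicit random variable many times: the ratio of two
# independent binomial counts, mimicking the comparison the original
# authors performed once.
ratios = [flip_count(n, p) / flip_count(n, p) for _ in range(reps)]

# How often does the ratio deviate from 1 by 3% or more? A sizeable
# fraction would support the point that 3% is not an uncommonly big value.
share = sum(abs(r - 1) >= 0.03 for r in ratios) / reps
print(f"deviations of at least 3%: {share:.1%}")
```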
> Setting this aside, my high-level point was OPs calculation doesn't have anything to do with error distributions.
So I don't quite know what you mean by "error distribution". I assume you mean the distribution of the random variable that had the value of 3% in the article? If so, then OP's calculation does – as explained – have a lot to do with that distribution. It does not calculate that distribution, but it samples from it, which is a useful way to get a feel for a distribution without having to do any fancy mathematics or research.