I myself am a fairly recent convert to Bayesian statistics, for the simple reason that I was trained in frequentist statistics, had used them extensively in the past, and had no experience with the Bayesian approach. Once you take the time to master the basic tools, it becomes quite straightforward to use. I am currently away from my computer and resources, which makes it difficult to suggest any. As a somewhat shameless plug, you could check the https://www.frontiersin.org/articles/10.3389/fpsyg.2020.0094... paper and the related R package https://cran.r-project.org/web/packages/bayes4psy/index.html and GitHub repository https://github.com/bstatcomp/bayes4psy, which were made to be accessible to users with frequentist statistics experience.
To brutally simplify the distinction: with frequentist statistics and testing, you are asking whether, based on the results, you can reject the hypothesis that there is no difference between two conditions (e.g., A and B in A/B testing). Broadly, the p-value gives you the probability of observing data at least as extreme as yours if A and B really were sampled from the same distribution. If that probability is very low, you reject the null hypothesis and claim a statistically significant difference between the two conditions.
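To make that concrete, here is a minimal frequentist version in R, with made-up counts (120/1000 conversions under A, 100/1000 under B) used purely for illustration:

    # Hypothetical conversion counts, not real data.
    conv <- c(120, 100)
    n    <- c(1000, 1000)

    # Two-sample test of equal proportions; the p-value is the chance of
    # seeing a difference at least this large if A and B truly shared one
    # conversion rate (the null hypothesis).
    prop.test(conv, n)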
In comparison, with Bayesian statistics you can estimate the probability of a specific hypothesis, e.g., the hypothesis that A is better than B. You start with a prior belief in your hypothesis (the prior) and then compute the posterior probability, which is the prior adjusted for the additional empirical evidence you have collected. The results can help you address a number of questions. For instance: (i) what is the probability that, in general, A leads to better results than B? Or, related but substantially different: (ii) what is the probability that in any specific case you have a higher chance of success using A than using B? To illustrate the difference: the probability that men in general are taller than women approaches 100%. However, if you randomly pick one man and one woman, the probability that the man will be taller than the woman is substantially lower.
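Here is a rough R sketch of both questions, reusing the made-up counts from above for (i) and assumed height figures for (ii); none of these numbers come from real data:

    set.seed(42)

    # With a flat Beta(1, 1) prior, the posterior of each conversion rate
    # is simply Beta(1 + successes, 1 + failures).
    post_a <- rbeta(1e5, 1 + 120, 1 + 880)
    post_b <- rbeta(1e5, 1 + 100, 1 + 900)

    # (i) Probability that A's underlying rate is higher than B's:
    mean(post_a > post_b)

    # (ii) The individual-level question, via the height example; rough
    # figures assumed for illustration: men ~ N(175, 7) cm, women ~ N(162, 7) cm.
    men   <- rnorm(1e5, 175, 7)
    women <- rnorm(1e5, 162, 7)
    mean(men > women)   # roughly 0.9, well short of "approaches 100%"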
In your A/B testing, if the cost of A is higher, addressing question (ii) would be more informative than question (i). You can be quite sure that A is in general better than B; however, is the difference big enough to offset the higher cost?
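As a sketch of how you might fold cost into this, reusing the posterior draws above, with a value per conversion and an extra per-user cost for A that are both purely assumed:

    value_per_conv <- 10
    extra_cost_a   <- 0.05

    # Probability that A's uplift pays for its extra cost:
    mean((post_a - post_b) * value_per_conv > extra_cost_a)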
Related to that, in Bayesian statistics you can define a Region of Practical Equivalence (ROPE): in short, a difference between A and B that could be due to measurement error, or that would be of no practical use. You can then check in what proportion of cases the difference falls within the ROPE. If that proportion is high enough (e.g., 90%), you can conclude that in practice it makes no difference whether you use A or B. In frequentist terms, this lets you confirm a null hypothesis, something a standard frequentist significance test cannot do.
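A minimal ROPE check on the same posterior draws might look like this, with the one-percentage-point threshold being purely an assumption:

    # Suppose rate differences smaller than one percentage point are
    # practically irrelevant.
    rope <- 0.01

    # Share of posterior mass inside the ROPE; if this is high enough
    # (e.g. > 0.9), treat A and B as practically equivalent.
    mean(abs(post_a - post_b) < rope)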
Regarding priors, which another person has mentioned: if you have no specific reason to believe beforehand that A might be better than B or vice versa, you can use a relatively uninformative prior, basically saying, “I don’t really have a clue which might be better.” So the issue of priors should not discourage you from using Bayesian statistics.
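For example, with the counts above, a flat prior and a mildly informative one (both choices assumed for illustration) end up with nearly identical posteriors once the data come in:

    # Flat Beta(1, 1) prior vs. a mildly informative Beta(2, 18) prior
    # centred near a 10% rate. With 1000 observations per arm the data
    # dominate and the two posteriors nearly coincide.
    flat <- rbeta(1e5, 1 + 120, 1 + 880)
    info <- rbeta(1e5, 2 + 120, 18 + 880)
    c(mean(flat), mean(info))   # ~0.121 vs ~0.120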