Statistics is about finding a model of the world that we can trust. A model, in this context, is something that makes predictions about the world, and we trust it when we expect its predictions to be largely correct, or at least more correct than those of any other model we have.
Frequentists and Bayesians disagree on the processes for building and evaluating models. Their techniques are often complementary and are, in current and historical practice, almost always used together by professionals. I would call them two sides of the same coin, although some take philosophical perspectives which are more dogmatic.
The theoretical foundation that divides them (confusingly, it is called Bayes' Law) states that there is a relationship between "the probability of seeing some event when a model is true" and "the probability of a model being true given that some event happened". In short, Frequentists tend to build their procedures on the first notion and Bayesians on the second.
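For concreteness, this relationship can be written out directly (here $M$ stands for a model and $D$ for the observed data; the symbols are just a convenient shorthand):

$$P(M \mid D) = \frac{P(D \mid M)\,P(M)}{P(D)}$$

The factor $P(D \mid M)$ on the right is the likelihood that Frequentist procedures are built around; the left-hand side $P(M \mid D)$ is the posterior that Bayesians target, and getting to it requires supplying a prior $P(M)$.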
In practice, Frequentists build their models with whatever tools they like, often including basic optimization tools for picking the best set of "parameters" for a model. They then evaluate the performance of these models by asking "how unlikely was reality, given that this model is true?" and rejecting models that fail to predict what actually happened.
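A minimal sketch of that workflow, on hypothetical coin-flip data (the counts, the fair-coin null, and the use of SciPy are assumptions of this example, not anything prescribed by the Frequentist school itself):

```python
from scipy import stats

# Hypothetical data: 100 coin flips, 61 of which came up heads.
n, heads = 100, 61

# Model building: the maximum-likelihood "parameter" for the probability
# of heads is simply the sample proportion.
p_hat = heads / n

# Model evaluation: how unlikely is a result at least this extreme if the
# fair-coin model (p = 0.5) were true?  A two-sided binomial p-value asks
# exactly that question; reject the fair-coin model if it is tiny.
p_value = min(1.0, 2 * min(stats.binom.cdf(heads, n, 0.5),
                           stats.binom.sf(heads - 1, n, 0.5)))

print(f"MLE of p: {p_hat:.2f}, p-value under the fair-coin model: {p_value:.3f}")
```

The two steps mirror the paragraph above: an optimization (here trivial) picks the parameter, and the p-value measures how surprising the data would be if a competing model were true.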
Bayesian methods are more fixed in form but also more dramatic in how they construct models. They tend to create vast models with many moving parts using what's known as a "generative story". This is workable because they use the data they observe to compute a "probability of truth" for every possible configuration of their model. That distribution is itself a final result, since it lets someone ask "how much more likely is model A to be true than model B?", though Bayesians will also, at this point, use optimization techniques to find "the most probable model".
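Here is the Bayesian counterpart on the same hypothetical data, sketched with a simple grid approximation (the flat prior and the grid are choices made for this illustration; real Bayesian models are usually far richer):

```python
import numpy as np
from scipy import stats

n, heads = 100, 61                        # same hypothetical coin-flip data

# Generative story: draw p from a prior, then draw heads ~ Binomial(n, p).
p_grid = np.linspace(0.001, 0.999, 999)   # the candidate "configurations" of the model
prior = np.ones_like(p_grid)              # flat prior over p

# Posterior is proportional to likelihood times prior, normalized over the grid.
likelihood = stats.binom.pmf(heads, n, p_grid)
posterior = likelihood * prior
posterior /= posterior.sum()

# "How much more likely is model A than model B?"  e.g. p = 0.6 versus p = 0.5.
ratio = posterior[np.argmin(np.abs(p_grid - 0.6))] / posterior[np.argmin(np.abs(p_grid - 0.5))]

# "The most probable model" (the MAP estimate) falls out by optimization --
# here just an argmax over the grid.
p_map = p_grid[np.argmax(posterior)]

print(f"posterior(p=0.6) / posterior(p=0.5) = {ratio:.1f}, MAP estimate of p = {p_map:.2f}")
```

The full posterior supports model-comparison questions directly, and the MAP estimate is the optimization step mentioned above.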
In many cases these two approaches arrive at the same place. When they do not, they raise interesting questions about what we really mean when we say we "trust a model", and this leads to endless discussion. It's also often the case that "avowed Frequentists" have historically used Bayesian methods to discover their basic models and then evaluated those models in a Frequentist fashion for publication (Fisher was known to do this). This arose because, at a certain point in statistical history, Bayesian methods were not socially acceptable. Finally, it's a pretty good idea for Bayesians to evaluate their models in Frequentist terms as well, in order to have more ways to discuss how their models perform.
Probably the last and most practical difference between the two is that Frequentist methods are often built with their time and space complexity in mind. Frequentists are more likely to evaluate the performance of several extremely simple estimation techniques and pick the best. The Bayesian process nearly always results in an integration problem that is extraordinarily difficult to evaluate and requires modern computers to get results out of. That said, Frequentists often arrive at their best results via "strokes of genius", while Bayesians can usually chug through any modeling problem and arrive at a decent model, the obstacle again being purely computational.
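To make the computational point concrete, here is a toy contrast (the normal data, the Cauchy prior, and the sample sizes are assumptions of this sketch): the Frequentist estimate of a mean is a one-line formula, while the Bayesian posterior mean is an integral that, outside of conjugate special cases, has to be approximated numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=50)   # hypothetical observations

# Frequentist: the estimator is a closed-form, one-line computation.
mean_freq = data.mean()

# Bayesian: the posterior mean of mu is an integral over all candidate values.
# With a non-conjugate prior (a Cauchy prior, chosen for this sketch) there is
# no closed form, so approximate the integral by drawing from the prior and
# weighting each draw by its likelihood (a crude Monte Carlo integration).
mu_draws = rng.standard_cauchy(100_000)
log_lik = -0.5 * ((data[None, :] - mu_draws[:, None]) ** 2).sum(axis=1)
weights = np.exp(log_lik - log_lik.max())        # stabilized likelihood weights
mean_bayes = (weights * mu_draws).sum() / weights.sum()

print(f"closed-form sample mean: {mean_freq:.3f}, "
      f"Monte Carlo posterior mean: {mean_bayes:.3f}")
```

The two answers land in roughly the same place here, but the second one needs a hundred thousand samples to get there, which is the cost described above.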
I'd also like to point out that the choice of one method over the other has a huge policy impact. Since Frequentist and Bayesian methods disagree about how we should construct and evaluate models, almost all of scientific practice is affected. In practice it seems that good science can arise from either method, but the duality leads to a great deal of policy confusion, since things taken as holy by many scientific practitioners are opened to question when you introduce a complementary method.
You can also think about this in terms of guarantees. Frequentist methods can give you confidence in the long-run performance of a procedure, for example by controlling the Type I error rate, the familywise error rate, or the false discovery rate. In other words: this procedure will be wrong no more than a fraction $\alpha$ of the time. Bayesian methods don't really give you that, but they may give you a more coherent summary of the state of the world right now.
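A small simulation shows what that long-run guarantee means (the normal model, sample size, and 95% level here are choices made purely for illustration): build an interval from each of many repeated experiments and count how often it covers the true value.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu, n, trials = 0.0, 30, 10_000
covered = 0

for _ in range(trials):
    sample = rng.normal(true_mu, 1.0, size=n)
    half_width = 1.96 * sample.std(ddof=1) / np.sqrt(n)   # approximate 95% interval
    covered += (sample.mean() - half_width <= true_mu <= sample.mean() + half_width)

print(f"coverage over {trials:,} repetitions: {covered / trials:.3f}")   # roughly 0.95
```

No single interval is "95% likely to contain the truth"; the guarantee is about the procedure's behavior over many repetitions, which is exactly what the Bayesian summary trades away in exchange for a statement about the world right now.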