While beginning calculus students often pick up derivatives and integrals (and the associated formulas) easily, the delta-epsilon definitions of limits and continuity are a well-known stumbling block for many. I've been told the difficulty stems from this being the first place math beginners really encounter nested quantifiers: (forall epsilon)(exists delta)(...). In logic, though, nested quantifiers are fundamental. I don't know what happens if someone tries to study logic without first having studied calculus. Maybe it's a good idea, but few people do it that way.
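For reference, here is the definition with the quantifiers spelled out (the standard statement, not specific to any one textbook):

```latex
% lim_{x -> a} f(x) = L means:
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :
\quad 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

The alternation matters: delta may depend on epsilon, and that dependency structure is exactly what beginners struggle to track.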
> I don't know what happens if someone tries to study logic without first having studied calculus.
When I was in college, the Philosophy department offered this course. It was considered an easy way to get a general education math credit without needing to be good at math. It was a really enjoyable course[0] that put me on the path to becoming a computer programmer. It occasionally comes in handy[1].
Delta-epsilon is just an annoying, unenlightening technicality, not the essence of real analysis. Surreal numbers (infinitesimals) solve the problem more elegantly.
To each his own, but epsilon-delta is my go-to example of formalizing an intuitive concept ("gets closer and closer"), which is a high-level mathematical skill.
The intuition and the formalism are presented together (at least, they should be!). To learn the role of epsilon and delta, the student needs to jump back and forth, finding the correspondences between equations and the motivation. This is a skill that needs practice; this was one of the first places I found the equations dense enough that I couldn't just "swallow them whole".
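A minimal worked example of that correspondence (my example, not the parent's): proving lim_{x→2} 3x = 6.

```latex
% Claim: \lim_{x \to 2} 3x = 6.
% Given \varepsilon > 0, choose \delta = \varepsilon / 3. Then:
0 < |x - 2| < \delta \implies |3x - 6| = 3\,|x - 2| < 3\delta = \varepsilon
```

The "jump back and forth" here is seeing that the choice δ = ε/3 is reverse-engineered from the final inequality.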
(The earliest I remember is the quadratic formula, which I first painfully memorized as technical trivia. It took me a couple of years to grasp that it was completing the square in general form. Switching between the general and the specific is another skill you develop.)
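For anyone who also memorized it as trivia, the derivation really is just completing the square on the general equation:

```latex
ax^2 + bx + c = 0, \quad a \neq 0
\implies x^2 + \frac{b}{a}x = -\frac{c}{a}
\implies \left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2}
\implies x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```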
Surreal analysis is sort of a thing but it is quite far out there (e.g. you can have transfinite series instead of merely infinite ones). Maybe you meant nonstandard analysis (NSA), which is real analysis done with infinitesimals, but the machinery justifying it is way outside of what you'd see in even a theory-oriented intro calculus class. There was an intro calculus text (Keisler, 1976) that used infinitesimals and NSA. I don't know how it dealt with constructing them though.
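For the curious, the NSA flavor of differentiation: the derivative is the standard part ("st") of a difference quotient taken at an infinitesimal increment, e.g.

```latex
f'(x) = \operatorname{st}\!\left(\frac{f(x + \varepsilon) - f(x)}{\varepsilon}\right),
\quad \varepsilon \text{ infinitesimal}
% For f(x) = x^2:
\operatorname{st}\!\left(\frac{(x+\varepsilon)^2 - x^2}{\varepsilon}\right)
= \operatorname{st}(2x + \varepsilon) = 2x
```

The machinery referred to above is what justifies st and the existence of such ε (ultrafilters and the transfer principle in the usual construction).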
The problem is that epsilon-delta arguments have very little practical use outside of theoretical proofs in pure mathematics. Even in cutting-edge CS/statistics fields like high-level machine learning, most of the calculus used consists of existing formalisms from multidimensional statistics and perhaps differential equations. Aside from Jensen's inequality and the mean value theorem, I have never seen any truly useful epsilon-delta proofs in any of the ML papers with significant impact. It's perhaps mentioned once in passing when teaching gradient descent to grad students.
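For reference, the two results mentioned (standard statements):

```latex
% Jensen's inequality, for convex f:
f(\mathbb{E}[X]) \le \mathbb{E}[f(X)]
% Mean value theorem, for f continuous on [a,b], differentiable on (a,b):
\exists\, c \in (a, b) : \; f'(c) = \frac{f(b) - f(a)}{b - a}
```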
> Even in cutting-edge CS/statistics fields like high-level machine learning, most of the calculus used consists of existing formalisms from multidimensional statistics and perhaps differential equations.
If you mean experimental work, then sure, that's like laboratory chemistry: you run code and write up what you observe. If you're trying to prove theorems, you have to understand the epsilon-delta machinery even if your proofs don't use it directly. It can be somewhat abstracted away by the statistics and differential-equations theorems you mention, but it is still there. Anyway, the difficulty melts away once you have seen enough math to deal with the statistics and differential equations, have some grasp of high-dimensional geometry, etc. It's all part of "how to think mathematically" rather than some particular weird device that one studies and forgets.
I agree, and including delta-epsilon proofs in calculus 1 seemed like a way for the curriculum authors to feel good that they were “teaching proof techniques” to these students, when in reality they were doing no such thing. I later did an MS in math, and loved the proofs, including delta-epsilon proofs…after taking a one-semester intro-to-proofs class that focused just on practicing logic and basic proof techniques.
If you want to do "exact" computation with real numbers (meaning, be able to rigorously bound your results), you just can't avoid epsilon-delta reasoning. That's quite practical, even though in most applied settings we just rely on floating-point approximations and deal with numerical round-off errors in a rather ad hoc way.
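A toy illustration of what "rigorously bound your results" can look like in code: interval arithmetic, where every value is an enclosing interval rather than a point. This is a minimal sketch (the class and method names are my own, and it ignores outward rounding of the endpoints, which a real implementation must handle):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed interval [lo, hi] guaranteed to contain the true value."""
    lo: float
    hi: float

    def __add__(self, other):
        # The sum of two intervals encloses the sum of any of their members.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds come from the four endpoint products.
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def contains(self, x):
        return self.lo <= x <= self.hi

# Enclose pi, then compute with guaranteed bounds at every step:
pi = Interval(3.14159, 3.14160)
two_pi = pi + pi          # encloses 2*pi
pi_squared = pi * pi      # encloses pi**2
```

The epsilon-delta connection: shrinking the input interval (delta) shrinks the output interval (epsilon), and continuity is exactly the guarantee that you can always do so.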