"For example, we teach students in high school that if the product of two polynomials is zero, then to solve we set each one separately equal to zero. Yet this does not hold with nonzero numbers. For example, working in polynomials with real coefficients, we know that f(x) * g(x)=0 implies either f(x) = 0 or g(x) = 0. Yet it is not the case that if f(x) * g(x) = 4, then either f(x) = 2 or g(x) = 2."
Does this really require knowing abstract algebra? It seems obvious to anyone doing any sort of multiplication that if the output is 0, then one of the variables/functions has to be 0; if the output is nonzero, then the variables/functions can be anything but 0.
> Does this really require knowing abstract algebra? It seems obvious to anyone doing any sort of multiplication that if the output is 0, then one of the variables/functions has to be 0
No, this is only true when you are doing multiplication in an integral domain.
The mod should be part of the signature of the ring, so you are not looking at pure multiplication. Then the domain would still be integral. Why wouldn't it be?
An integral domain is any commutative ring with no zero divisors (a and b are zero divisors if ab = 0 but a != 0 and b != 0). In this case 2*2 = 4 = 0 (mod 4), so 2 is a zero divisor, and therefore Z/4Z = Z_4 is not an integral domain.
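A quick sanity check of the Z/4Z example, sketched in Python (representing elements as remainders):

```python
# 2 is nonzero mod 4, yet 2 * 2 = 4 = 0 (mod 4) -- a zero divisor.
assert 2 % 4 != 0
assert (2 * 2) % 4 == 0

# By contrast, Z/5Z (5 prime) has no zero divisors: the product of
# any two nonzero residues is nonzero mod 5.
for a in range(1, 5):
    for b in range(1, 5):
        assert (a * b) % 5 != 0
```

This is why Z/pZ for prime p is an integral domain (in fact a field), while Z/4Z is not.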
I teach math at a community college. Your question is not so simple to answer. Much of mathematical teaching involves lying and not justifying statements. The details are often way more complicated than the idea.
It is "obvious" in the real numbers that if you multiply two numbers and get 0 then one of them must be zero. I doubt you could prove this. It's obvious simply because you are used to it being true. But it is not true for all algebraic systems. The algebraic structure of all 2x2 matrices can be viewed as an extension of the real number system, and in the set of 2x2 matrices you can multiply two matrices, neither of which is 0, and get 0.
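A minimal sketch of the matrix phenomenon, using plain Python lists as 2x2 matrices:

```python
# Two 2x2 matrices, neither of which is the zero matrix, whose
# product is the zero matrix.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, 0]]  # nonzero
B = [[0, 0], [0, 1]]  # nonzero
assert matmul(A, B) == [[0, 0], [0, 0]]  # yet A * B = 0
```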
One of the goals of abstract algebra is to understand under what conditions certain properties hold in an algebraic system. To truly understand these things requires the oft-mentioned mathematical maturity. But to get to the point of gaining this maturity requires just accepting that what you've been told is true is indeed true.
We tell students in Calculus I that the function 1/x is discontinuous at 0. There's a break in the graph there. But, in reality, it is meaningless to talk about a function being continuous (or not being continuous) at a number not in the domain of the function. Indeed, in the standard subspace topology the function 1/x from R-{0} to R is continuous. But this nuance is way too complicated to get across to students in Calculus I so we fudge things a bit. This happens a lot at lower levels of math.
EDIT: So my point is that if your goal is to truly understand things then yes, Abstract Algebra is necessary. If your goal is to be operationally functional in working with polynomials over the real numbers then it isn't.
> It is "obvious" in the real numbers that if you multiply two numbers and get 0 then one of them must be zero. I doubt you could prove this.
Theorem: If x and y are reals and xy = 0, then at least one of x and y is zero.
To see this, assume x and y are both nonzero. Divide both sides of xy=0 by x (this is valid because x is nonzero). Then y=0; contradiction. Therefore, at least one of x and y is zero.
First you'd need that the integers are an integral domain and then build up to the fact that the reals are a field. At least that's how I'd go about it. Maybe one can start with the reals themselves. Here's the proof that the integers have no zero divisors:
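A standard sketch (assuming only basic facts about ordering and absolute values of integers) runs:

```
Claim: if a and b are integers with ab = 0, then a = 0 or b = 0.

Sketch: suppose a != 0 and b != 0. Then |a| >= 1 and |b| >= 1,
so |ab| = |a| * |b| >= 1, and in particular ab != 0 -- contradicting
ab = 0. So at least one of a, b must be 0.
```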
> Are you sure that the assumptions that your proof relies on are more fundamental than what you are trying to prove?
Yes, at least to the extent that "fundamental-ness" can be defined. It is a well established, and probably universal, convention that the axioms allowing division by non-0 elements (that is to say, allowing multiplication by arbitrary elements, and multiplicative inversion of non-0 elements) are taken as part of the definition of a field, and that the closure of the set of non-0 elements of a field under multiplication is taken as a theorem about fields.
I believe the point the person you were responding to was getting at is that the person's proof assumes that the reals are a field. It's not a convention that cancellation holds for fields. It's a theorem one proves about integral domains. That a field is closed under multiplication is not a theorem; it's part of the definition of being a ring.
> It's not a convention that cancellation holds for fields. It's a theorem one proves about integral domains. That a field is closed under multiplication is not a theorem; it's part of the definition of being a ring.
Yes, that's what I said. My point was that, between
(A) making an axiom the right to divide in a field by non-0 numbers, and proving the closure of non-0 numbers in a field under multiplication,
and
(B) just making an axiom the closure of non-0 numbers in a field under multiplication,
it is only convention (rooted in the deeper convention of not making an additional axiom out of something we could prove from existing ones) that we choose (A) instead of (B). (Also notice that I was talking about the closure under multiplication of the set of non-0 numbers, which is (usually) not part of the definition, rather than of the entire set, which is indeed always part of the definition.)
I studied applied mathematics, but it took me ages to shake off the (false) intuition about “Calculus” and “infinitesimals” engendered in me at high school. Sure, it worked, but learning “differentiation from first principles” with the limit taken as δx → 0 just by cancelling out did an immeasurable amount of damage to my ability to absorb the formal Weierstrass formulation in terms of limits.
On a more serious note, you can understand most, if not all, of Calculus by saying that dx=0.0001, and that A ~= B if they don't differ by more than, say, 0.01 (say, that's the instrument error).
Then you get your limits, FTC, and so on, and verify the results with a four-function calculator.
The mental effort you have to make here is that things on the LHS of ~= are "actual" values, and on the RHS are "measured" values, and that ~= is not an equivalence relation.
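A sketch of that view in Python, with dx = 0.0001 and an "instrument error" of 0.01:

```python
import math

# "Derivatives" are just difference quotients with dx = 0.0001,
# and ~= means "agrees to within the instrument error of 0.01".
dx = 0.0001
tol = 0.01

def approx_deriv(f, x):
    return (f(x + dx) - f(x)) / dx

# d/dx sin(x) at x = 0 is cos(0) = 1; the "measured" value agrees:
assert abs(approx_deriv(math.sin, 0.0) - 1.0) < tol
# d/dx x^2 at x = 3 is 6:
assert abs(approx_deriv(lambda x: x * x, 3.0) - 6.0) < tol
```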
On a yet more serious note, learning about differential forms will help justify some of that high-school notation.
On a philosophical note, Weierstrass is not the end-all of Calculus. Neither Newton nor Leibniz did it that way. Some argue that adding rigor obscured the essence (hence the non-standard-analysis flavor of the approach above).
Yeah, I’m aware of the numerics... but I’m also aware that if you do the same process backwards (naive numerical integration), for example when simulating planetary motion, you get into ridiculous situations where the collective momentum of a closed system rises exponentially after a “close flyby”. This is precisely the kind of situation where the “false intuition” created by these shallow teachings causes the most harm.
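You don't even need planets to reproduce that artifact; a sketch using forward (explicit) Euler on a plain harmonic oscillator, whose exact solution conserves energy, shows the naive integrator pumping energy in at every step:

```python
# Forward Euler on x'' = -x. The true solution conserves the
# "energy" E = (x^2 + v^2) / 2, but each Euler step multiplies
# E by exactly (1 + dt^2), so it grows exponentially.
x, v, dt = 1.0, 0.0, 0.01
e_start = 0.5 * (x * x + v * v)
for _ in range(10_000):
    x, v = x + dt * v, v - dt * x
e_end = 0.5 * (x * x + v * v)
assert e_end > 1.5 * e_start  # energy has "risen" out of nowhere
```

(After 10,000 steps the energy has grown by roughly a factor of e, even though nothing in the physics supplies any.)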
I don't think it's harmful - without stumbling onto an example like that, it's hard to justify why we need solid foundations and exact solutions.
On the other hand, very brute numerics work for an awful lot of scenarios - that's why epsilon-delta came centuries after Calculus was invented.
For instance, "first-order optics" and "third-order optics" arise from chopping off the Taylor series after the 1st and 3rd terms, respectively. And it works! In many places, 1st-order approximations are just good enough. A lot of scenarios are inherently stable.
So I don't think the intuition you build up is wrong - it just has a scope. There's nearly always a place for counter-examples where "things work the way you think they should" wouldn't apply, however you think about things :)
On a philosophical note, continuity is a human construct - down there, things seem to be discrete, just with a very small step size. Continuity models them pretty well, until it doesn't - but that doesn't mean the intuition you build up is wrong. Just limited in scope.
I see what you are saying, but to each his own: I’m one of those kids (there's one in every mechanics class) who grows irate at the sin(θ) ≈ θ approximation in the pendulum solution and then wastes weeks using the full series expansion, only to find out the results differ only in the third or fourth decimal place. The thing is, I emerged from the experience thinking to myself, “that was a useful learning moment”.
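For what it's worth, that agreement is easy to check numerically; a quick sketch:

```python
import math

# For pendulum-sized angles, sin(theta) and theta agree to the
# third or fourth decimal place.
for theta in [0.05, 0.1, 0.15]:  # radians, roughly 3-9 degrees
    assert abs(math.sin(theta) - theta) < 1e-3

# At 0.1 rad the error is about 1.7e-4 -- invisible on a real pendulum.
assert abs(math.sin(0.1) - 0.1) < 2e-4
```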
> Does this really require knowing abstract algebra? It seems obvious to anyone doing any sort of multiplication that if the output is 0, then one of the variables/functions has to be 0; if the output is nonzero, then the variables/functions can be anything but 0.
Since you mention them specifically, it's not true for functions: multiply the function that is 1 for positive numbers and 0 elsewhere, by the function that is 1 for negative numbers and 0 elsewhere. (One can even produce continuous, or even smooth, examples with only a little more work.) A more traditional example is that it's not true for matrices: multiply the matrix ( ( 0 1 ) ( 0 0 ) ) by itself. (I just noticed sedeki https://news.ycombinator.com/item?id=15860883 pointed this out a few minutes earlier, noting that, for example, neither 2 nor 3 is congruent to 0 modulo 6, but their product is.) What I mean to say is: it often doesn't require knowing abstract algebra to think that things are obvious, but it may sometimes require knowing abstract algebra to figure out whether obvious things are true.
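Both counterexamples can be checked directly; a small sketch in Python (with the functions encoded as lambdas and the matrix as nested lists):

```python
# Two nonzero functions whose pointwise product is the zero function:
f = lambda x: 1 if x > 0 else 0   # nonzero: f(1) == 1
g = lambda x: 1 if x < 0 else 0   # nonzero: g(-1) == 1
assert f(1) == 1 and g(-1) == 1
assert all(f(x) * g(x) == 0 for x in range(-10, 11))

# The nonzero matrix ((0 1) (0 0)) squares to the zero matrix:
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

N = [[0, 1], [0, 0]]
assert matmul(N, N) == [[0, 0], [0, 0]]
```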
(Also, the last sentence you quote:
> Yet it is not the case that if f(x) * g(x) = 4, then either f(x) = 2 or g(x) = 2.
is, I would say, the important operational point. My students, especially in calculus, love to use this style of reasoning, even when specifically told that it doesn't work—although sometimes they change it (usually to conclude that f(x) = 4 or g(x) = 4). As Twain might have approximately said, it's not what's obvious that you don't know that gets you; it's what's obvious that ain't so.)
Regarding your students, well, I don't dispute that there are students who think that - I just don't think that, say, a teacher knowing more about abstract algebra would be able to explain it better.
In abstract algebra those would be rings with zero divisors. But all my early algebra education was in integral domains (rings without zero divisors, like the integers) or fields, which are even nicer.
I was talking mainly about the simple case presented above.
x*y can be 0 (mod 6) with x and y nonzero, but I don't think it takes knowing abstract algebra and a deep knowledge of modular arithmetic and axioms to figure that out. I hope math teachers who don't know abstract algebra know that!
Reading this I thought the same thing. Then I remembered one of my high school teachers telling the class that Bertrand Russell wrote a multi-volume book with the goal to prove that 1+1 = 2. At that point I realized that you don't need to work in abstract algebra in the high school mathematics curriculum, but rather you need teachers who have a deep understanding of mathematics. Unfortunately given economic reality that's hard to achieve in the present day.
You only need multiple volumes to prove 1 + 1 = 2 if you start with first-order logic. If you start with the rules of basic arithmetic, it takes less than a page.
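As an illustration of the "less than a page" claim - a toy Peano-style sketch (my own encoding, not anything from Russell): numbers are built from a zero Z and a successor S, addition is defined by the two usual recursion rules, and 1 + 1 = 2 falls out immediately.

```python
# Toy Peano arithmetic: a number is Z (zero) or S(n) (successor of n).
Z = ()                      # zero
def S(n): return (n,)       # successor

def add(m, n):
    # m + 0 = m;  m + S(k) = S(m + k)
    return m if n == Z else S(add(m, n[0]))

one, two = S(Z), S(S(Z))
assert add(one, one) == two  # 1 + 1 = 2
```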
There is no fundamental difference when changing your starting assumptions, other than that one set of assumptions might prove more things than the other. From the perspective of a single proof, either is equally good. We axiomatically know basic arithmetic to be true, just as we axiomatically know first-order logic to be true.
Boy, I didn't realize I needed to set the explicit "irony flag" on that post! The idea is that my math teacher was making a math joke, stimulating his students' curiosity to think about deep things like "what does it mean to add and how do you prove things that seem intuitive" and informing the class that people have written serious and long books on the foundations of mathematics. None of my kids' math teachers (so far) have made any math jokes...
Well, I know there are entire books written to codify basic math. I don't think knowing that, or say reading and understanding Russell's book, would help teach a child that 1+1 = 2.
You don’t need to teach a child that 1+1=2, they already know it’s true. You just need to teach them what the symbols mean. Every child knows that you can put two rocks together.
Yep! And I really wish this concept (of bringing together all of those properties, and exploring some of the resulting structures) was taught (specifically including the introduction of the term "group") in high school algebra. I think that would reduce a lot of the friction between learning secondary and post-secondary mathematics.
It _is_ taught to 16-year-olds in Australia, if they elect to take Math C (Math A/B is compulsory, where Math A is remedial). Everyone hated it and wondered what the point was (including me - it was only when I did a course on abstract algebra in university and learned about quotient groups that I was converted).
Also covered in Math C are the computational parts of linear algebra (e.g. Gaussian elimination, Cramer's rule).
I took the International Baccalaureate (IB) programme in my final two years of high school (September 1997-June 1999), and our Maths Higher unit included “Group Theory”.
Great. Teach kids some rote manipulations. Things they will have forgotten if they later stumble upon some math. Or will have to relearn in college math.
Will the same mistake happen with the learning to code in school movement? Or will there be a sensible connection between abstract computer science and high school coding?
I think the learning to code movement is a general waste of time. Sure, offer the class if a child is interested, but shoveling kids through a mandatory Scratch or Python course? Are we really supposed to believe that learning to recreate Pong in Python is a better use of time than something like auto skills?
Computer literacy (not just how to use the Office suite) and very basic IT (passwords, security culture, generalized troubleshooting, basic networking, etc.) would take a child much further as a general purpose class. Think about every time a family member has called and complained that they've lost internet connectivity. It turns out they unplugged something or flipped a hardware switch. Teaching students these basic parts of troubleshooting would be a boon.
I agree that as shown these are rote abstractions without motivation, and I wouldn't see their use. It's much easier (for me) to initially learn some rote rules for solving a problem (algebraic manipulation of the reals) based on examples. Then this group discussion could be used to show that there is something more general. Then you need to show that the generalization is useful... somewhere?
However, it is difficult to motivate even simple algebra (word problems, anyone?) in high school as something other than a game (and perhaps that's enough). It's when moving to the next level (e.g. analytic geometry with sin/cos) that complex numbers become useful. There you could usefully show that there are 2 different systems of math (groups) and that an algebra that allows sqrt(-1) = i has different solutions than the reals do. Linear algebra and matrix inversion are another case where the introduction of group theory might make sense... and where you can create more generic versions of commutativity/inverses etc. I think you can get from linear algebra to the linkage between LFSRs (Linear Feedback Shift Registers) and their characteristic polynomials by showing the groups are the same, but that's almost certainly college level.
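To make the LFSR remark concrete, here is a toy sketch of my own (not from the curricula discussed): a 4-bit Fibonacci LFSR whose taps correspond to a primitive degree-4 polynomial over GF(2), which is exactly what makes its period maximal.

```python
# 4-bit Fibonacci LFSR. Because the feedback taps correspond to a
# primitive degree-4 characteristic polynomial over GF(2), the
# register cycles through all 2^4 - 1 = 15 nonzero states.
state = 0b0001
seen = set()
for _ in range(15):
    seen.add(state)
    bit = (state ^ (state >> 1)) & 1   # XOR of the two tap bits
    state = (state >> 1) | (bit << 3)  # shift right, feed back at MSB
assert len(seen) == 15                 # every nonzero 4-bit state appears
assert state == 0b0001                 # and the cycle closes after 15 steps
```

A non-primitive choice of taps would break the cycle into shorter orbits, which is the group-theoretic point.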
From there, using it to solve a physical problem in Chemistry (e.g. IR spectroscopy) or Materials (e.g. X-ray crystallography) is the next step. You get familiar with one thing (or see physical analogies) and then step to the next, but showing utility and providing intuition along the way is important to most.
For some people it's all just a game. I find those people are best at the math... they don't need physical motivation or purpose.