I may be biased as I am a trained Mathematician, but I always feel that when someone says "Math is Hard", it is because they had bad teachers.
Math is easy if you build up from fundamentals. It's not like physics education, where you say "but let's delete everything before because it had an oversimplifying assumption"; rather, if you build your knowledge entirely sequentially from things you know or assume, you build up a toolbag that applies literally everywhere.
So math isn't hard. Learning random bits of math out of context is hard. Climb the ladder once, you have it for life.
The one constant I observed in most parts of my mathematics journey (math major in college, software engineering & computer science at university) was a lack of understanding, by the person doing the math teaching, that not everyone will be able to follow along if steps in the ladder are missing.
Words and phrases like 'it is obvious', 'clearly', and 'as can be seen' should be avoided when teaching someone a subject as abstract as mathematics; inevitably, you are not fully realising the size of the gap in knowledge between you and your students, and such statements can leave them feeling frustrated.
To a certain point, I guess. Most people hit a wall of abstraction at some point, either because the abstraction is too hard or because the abstraction stops being relevant so the person loses drive to learn. For me, the wall is model theory and the second course of abstract algebra. They are both too hard and too abstract for me to push through.
I found these to be the points where the abstractions being learned today are only precursors for abstractions that will be learned tomorrow. Another way to put it: it's the stage where you're learning to make tools that are themselves only used to make other tools, not used to get results outside of the domain of tool making.
These stages have no apparent relevance outside of math, and if your style of memory formation depends on making many inferential links to laterally associated concepts, more so than making a few direct links between vertically associated concepts, it can be rough going. A lot of it feels like following memorized pirate-treasure-map directions in the dark, with no sense of what obstacles you're working around, or even of the general direction of the treasure to give you a sense of bearing and progress.
I remember memorizing multiplication tables in school.
I learned that 3 x 9 = 27. You just had to memorize that, right? Well, then I realized that if 3 x 10 = 30, then 3 x 9 must be one fewer '3' added together by the multiplication; take one '3' out of the sum you use for 3 x 10, and you get 30 - 3 = 27.
That means I didn't really need to memorize 3 x 9, I needed the above simple rule in addition to the fact that n x 10 is always what you get when you take the digit 'n' and add a 0 after it.
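Spelled out, the whole trick is just one application of the distributive law:

```latex
3 \times 9 = 3 \times (10 - 1) = 3 \times 10 - 3 \times 1 = 30 - 3 = 27
```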
So learning multiplication tables was hard, until I learned the rule of looking for an easier-to-remember result and then adding or subtracting something to it. Of course I also had to understand that multiplication is really just repeated addition.
My teacher never taught me this trick, just told us to recite the multiplication tables in our heads again and again. But after doing that for some time, I figured out the above trick myself.
Learning math beyond multiplication is hard if you cannot multiply numbers in your head, because lots of math presentations assume that of course you know that 3 x 9 = 27, or something similar. It is not just about understanding the concepts; it's about being able to perform calculations in your head. Otherwise you cannot follow the explanations of new concepts. Even though we have pocket calculators, we still need to be able to do calculations in our heads to understand new topics in math.
So, learning 3 x 9 is not hard AFTER you have learned n * 10, and this trick. I assume something like that happens in the minds of mathematicians: they already know a lot of math, which makes it easier to understand new results. And to learn what n * 10 is, you had to learn 1 x 10, 2 x 10, 3 x 10, etc., and then see the pattern in there.
Learning something is easy if you already know lots of related stuff. So it's not about learning more and more difficult things; it is just about learning more and more related things. It is about having more and more (learned) data in your head.
I assume that is also why LLMs work so well: They have lots of data.
In summary: Learning math is not "difficult", it is tedious.
> So, learning 3 x 9 is not hard AFTER you have learned n * 10, and this trick.
The tricky thing here is that you have a limited amount of working memory, energy, and focus.
To do well at math you need:
- practice at being focused and confronting things that are hard
- an understanding of the problem space you are facing and how your tools work
- enough stuff memorized so that you don't have to context switch too much
You can have some missing pieces in the third area and do okay. But for a lot of students, needing to context switch to do simple arithmetic throws them off. I encounter students who can do any step of a problem, and can even describe the steps of what to do, but when I watch them thunk down to arithmetic and struggle, they aren't able to find their place again and they make mistakes.
Most students are better served by getting their multiplication tables firmly committed to memory; perhaps a mnemonic or a simple algorithm for multiplying by 9 helps them get there. But you still don't want to be leaning on that when you're trying to factor a quadratic or cancel things in fractions or whatever.
(Seeing patterns, and learning why the pattern works is perhaps more valuable than multiplication tables... but that doesn't mean you don't need the multiplication tables.)
Good point about working memory. And you are right: if it is in memory, you can read material that assumes you know it and just glide through without stopping.
For me, tricks like the above were a backup solution; after using one a few times, it became obvious that 9 x 3 == 27. Indelible. In some cases it was like "it can only be 27 OR 26", and then I would use the trick to figure out which.
But whether you use a simple trick and a trivial calculation, or don't have to do that at all, the point is the same: it should not take much thinking, which would cause you to lose your focus and train of thought, as you say.
I probably had crummy teachers in some places, but my experience was that math up through linear algebra made sense and wasn't all that bad, while calculus was a huge bag of "if it looks kinda like this, try this thing, and if the result looks kinda right it probably worked; if not, try this other thing", such that I could never form a framework for it in my head. It also didn't help when teachers of other subjects would say "oh, this is much easier and more straightforward with calculus", even though calculus wasn't a prerequisite, and proceed to explain concepts only with calculus that half or more of the class had never learned. One of these days I need to find a way to learn it the right way.
In hindsight, that's because high school calculus doesn't teach you how things work and just teaches you a bag of tricks so you can grind through problems. There's a certain number of tricks you should know, i.e., you should be able to take some simple integrals and derivatives, but in higher math you run into complicated things where the tricks don't work or don't exist. Some of the tricks are actually really useful, but you have to fully internalize where they come from; e.g., integration by parts just comes from rearranging the product rule, and, if you know that, you can apply it to more exotic derivatives.
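That rearrangement, written out (a one-line sketch of the standard derivation, integrating both sides of the product rule):

```latex
(uv)' = u'v + uv' \quad\Longrightarrow\quad \int u\,v'\,dx = uv - \int u'\,v\,dx
```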
I did well in HS calculus but struggled in college math because the bag of tricks approach doesn’t work there. It took a lot of effort for me to undo the bad habits I learned from K-12 math and learn the good stuff, but it paid off.
Also, it's well known that professional mathematicians eventually come to hate certain kinds of math. There's the classic divide between analysts (those who do calculus-type stuff) and algebraists (those who do things like group theory; linear algebra goes here). You don't have to like it all, and something you don't appreciate the first time you see it, you may enjoy later.
This assumes that climbing the ladder comes easy. To me at least, it doesn't. It requires tedious labor, and lots of repetition for every single step. That's the main difference I keep noticing between me and people who say they like math or find it easy. They just look at each step of the ladder once, and immediately "get" it, sometimes even skipping steps. In contrast, I need to repeatedly step up and down the ladder multiple times, until I can take the next step.
My theory is that people who like math have a reward system that responds well to gaining an understanding of empirical concepts. I have that, and it does drive me to keep studying math. Not that I find it easy, though; I don't think I'm able to skip steps, and I often have to repeat things I've already done before they sink in. The difference is that I find this process enjoyable, so I don't mind spending the time.
If I can compare to another activity: I've always wanted to be an artist as well, and have spent quite a bit of time trying to build up the skills. The problem, if I'm honest with myself, is that I just don't enjoy the process of creative expression; it doesn't trigger any reward system that means anything to me. I wish it did, but there's just nothing there. It was a hard pill to swallow, but I realized I like the idea of being an artist, but I don't enjoy the process. Hence my ultimately crummy artwork!
Sorry, I realize I'm talking about myself more than you, but I hope it's of some help. The point I hope it makes is that everyone has a different personality, and from that, different reward systems. It sounds to me like yours doesn't align with math, and that's fine. I wouldn't force yourself to study something you don't love, at least if it's optional self-study. Find subjects that you love learning, and the results will come naturally.
> The difference is that I find this process enjoyable, so I don't mind spending the time.
This is definitely the difference for at least some of the people out there, however...
Imagine, however, that you do enjoy it at the start, so you move on from topic Y to topic Y+1, then to Y+2. But then you find that you no longer understand Y, and you need Y when you are trying to learn Y+3, so you study both Y+3 and Y; now your progress on Y+3 has slowed down.
Really, your goal was to get to Y+7, because that is where you can start breaking new ground and contributing, but as you try Y+4 and Y+5 the gains stop and maybe even reverse. You are now on a learning treadmill (perhaps sometimes falling off and having to restart, too), redoing Y+1 through Y+5 and not moving forward. Often it is possible to find a trick/skill/simplification/etc. to continue moving forward and get to Y+6 or Y+7.
How long would you find the process fun on that treadmill, though? I think it is common not to find covering the same ground over and over fun, or to never make it to the point where you are part of a peer group in which you can contribute. An understandable result is that those people invest elsewhere, where they see better returns.
I'll bite, I think this is mostly bias. Strong evidence against this is that the average IQ of a mathematics undergraduate is, like, 125 or so, compared to ~115 for the average college graduate - that is just way too sizeable a difference to be explained by chance.
Math really does seem just plain hard for a great many people. From having done some math on the inside, it seems to me that it also gets harder with each point downward in IQ, and at a faster rate than most other valuable things in life.
IMHO when people say 'math is hard' they mean 'it takes more work than other subjects to be good at' - you're either a prodigy or you grind problems until you get the intuition. With easier subjects you can usually talk everyone, and yourself, into thinking you know them, or perhaps the ratio of memorization to practice is skewed more towards memorization. Maths is practice, practice, practice and then some more practice - blood, sweat and tears.
I’m not even sure it takes more work than, say, getting good with language. Hell, it might take less!
I think the main difference is that practicing language is far more rewarding for most people than practicing math. They also have way more opportunities to practice it naturally, without even intending to do so.
What do you mean by "trained mathematician"? I ask because I always think of mathematicians as simply people who do research in mathematics; if they don't, they aren't mathematicians. So there is no need to add "trained" - what would an "untrained mathematician" be, anyway?
I have seen some people claim on their Twitter/blog that they are "trained mathematicians", but I cannot find a single published contribution of theirs in mathematics.
And everyone I know who does research in math seems to agree that "math is hard".
Sure, but it's like saying "not dying of dehydration in a desert is easy. Just drink water!". Where do you find the water?
To clarify my terrible analogy: where do you find a curriculum that tells you exactly what to learn, and in what order? When you don't know math, you can't even tell if your ladder is missing steps.
> I may be biased as I am a trained Mathematician, but I always feel that when someone says "Math is Hard", it is because they had bad teachers.
You're biased.
I've had excellent teachers, math was - and still is - hard. Especially when you get into the more complex stuff. Not everybody is as gifted at math as you are.
Completely disagree. The problem with (at least) math is that you want a teacher who is not super good at math, but who still knows the material they are teaching. When you get taught by a brilliant math-whiz teacher, they skip over the stuff that is obvious to them but probably crucial for mere mortals.
I have had teachers who just blew past the simple stuff because they didn't care about it and focused on the interesting hard stuff, which left a lot of people behind, and also actually good teachers who focused on the "easy stuff" to build a strong foundation before moving on to the harder stuff.
Which I still stand by. Math is easy for people that are good at math, computer programming is easy for people that are good at programming etc. For the rest of the world those things are not so easy, even if they do have good teachers. To assume that everybody can be equally good at math or computer programming is denying reality. I am a pretty good teacher and have found that some kids take to this stuff like fish to water and for others it is a serious effort with everything else being more or less constant. It would be great if we could identify that one single factor of 'the teacher' as the root cause of all of the trouble but unfortunately that's an oversimplification. Sure, there are bad teachers, and some of those are really good at math themselves. But that's just a fraction of the problem.
"Bad teacher" often strikes me as a face saving excuse. Not that having a bad teacher will not make it harder to learn, but there seem to be a lot of bad maths teachers out there, if I go by how many times I heard that.
I mean it's okay to be bad at something. I sucked in history class, and I'm not blaming the teachers. I simply had zero interest in it as a teenager, unlike maths and physics.
Programming in dependent types with univalence (Homotopy Type Theory) is an awesome way to see this realized.
The typing statement has to be proven by realizing the isomorphism demanded by substitution. You are, more than anything, directly proving what you claim in the type. And since the proof and the program coincide here, lowering the body of the definition to a concrete set of instructions (possibly machine code, or just something abstract in a virtual machine like STG) is execution of your proof! The constructive world is really nice. I hope the future builds here, and that dependent types with univalence are made easier and more efficient.
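To make that concrete, here is a minimal sketch in Lean 4 (my own illustrative names; plain dependent types, no univalence): the type of `append` states an arithmetic claim about lengths, and the definition only compiles because the type checker can verify that claim.

```lean
-- Illustrative sketch: a vector whose length is part of its type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- The return type claims the result has length n + m; accepting this
-- definition is the type checker verifying that claim, case by case.
def Vec.append : Vec α m → Vec α n → Vec α (n + m)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (xs.append ys)
```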
For dependent types, I would look at Idris [1]. Adding univalence in a satisfying way is, I think, still somewhat of a research question (I could be wrong, and if anyone has additional insight I would be interested to hear it); see, e.g., this thread about univalence in Coq [2]. There are some implementations in Cubical Type Theory, but I am not sure what the state of the art is there [3].
Assume that regular software has, on average, one bug every 50 lines. (All numbers made up on the spot, or your money back.) Let's suppose that Idris can reduce that to absolutely zero. And let's suppose that totally-working software is worth twice as much as the buggy-but-still-mostly-working slop we get today.
But Idris is harder to write. Not just a bit harder. I'd guess that it's maybe 10 times as hard to write as JavaScript. So we'd get better software, but only 1/10 as much of it. Take your ten favorite web applications or phone apps: you would have only one of them, but that one would never crash. Most people won't make that trade. Most companies that produce software won't make it either, because they know their customers won't.
Well, you say, what about safety-critical software? What about, say, airplane flight control software? Surely in that environment, producing correct software matters more than producing it quickly, right?
Yes, but also you're in the world of real-time embedded systems. Speed matters, but also provably correct timing. Can you prove that your software meets the timing requirements in all cases, if you wrote it in Idris? I believe that is, at best, an unsolved problem. So what they do is they write in carefully chosen subsets of C++ or Rust, and with a careful eye on the timing (and with the help of tools).
I've been dabbling with Idris and Agda and Coq. I think I'm pretty much settling on Agda, because I can appeal to Haskell for help. It's tough finding things that aren't just proofs; actually running a program isn't hard, there just don't seem to be many people who do it. I've got some toy projects in mind, and I'm going to lean hard on https://github.com/gallais/aGdaREP (grep in Agda). I can't tell you if it's ten times harder - that seems high. It's different, sure. I'm having a tougher time than with, say, Prolog. But most of the bumps and bruises are from lack of guidance around, uh, stuff.
So given that context, it doesn't sound too tough to add a cost to the type for each operation, function call, whatever, and have the type checker count up the cost of each call. Then you'd have a real proof that you're under some threshold. I wouldn't put the Agda runtime on a flight control computer. But I think I could write a compiler now, for something like a microcontroller, that would count costs up (or spend down a time budget, it doesn't matter).
A more sophisticated computer would be way, way harder to model while staying resource efficient. But if you modeled it as "everything's a cache miss" and don't mind a bunch of no-ops all the time, that would be a pretty straightforward adaptation of the microcontroller approach.
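A minimal sketch of that cost-counting idea in Lean 4, assuming a crude "one unit per primitive operation" model (all names here are hypothetical, not from any existing library):

```lean
-- Hypothetical sketch: values carry a static cost bound in their type.
structure Costed (cost : Nat) (α : Type) where
  run : α

-- Charge one unit for a primitive operation.
def tick (a : α) : Costed 1 α := ⟨a⟩

-- Sequencing adds the bounds; the type checker does the bookkeeping.
def andThen (x : Costed m α) (f : α → Costed n β) : Costed (m + n) β :=
  ⟨(f x.run).run⟩

-- The stated budget (2) is checked at compile time, since 1 + 1 reduces to 2.
def twoOps : Costed 2 Nat := andThen (tick 1) (fun x => tick (x + 1))
```

A real flight-computer argument would of course need a faithful machine model behind each `tick`, but the counting itself comes for free from the type checker.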
I would recommend trying Lean 4, because I think it is better suited to programming. Lean has a Rust-like toolchain manager; a build system (cf. `.agda-lib`); much more developed tactics (including `termination_by`/`decreasing_by`); more libraries (mathlib, and some experimental programming-oriented libraries for sockets, web, games, unicode...); common use of typeclasses in stdlib/mathlib; `unsafe` per declaration (cf. per module in Agda); sound opaque functions (which must have a nonempty return type) used for `partial` and FFI; "unchained" do-notation (early `return`, imperative loops with `break`/`continue`, `let mut`; see the sketch below); and easier (more powerful?) metaprogramming and syntax extensions. And in Agda you can't even use Haskell's type constructors with type classes (e.g. monad-polymorphic functions), which makes it more difficult to write bindings to Haskell libraries than it is to write bindings to C libraries in Lean.
There are features in Agda/Idris (and probably Coq, about which I sadly know almost nothing) that are absent from Lean and are useful when programming (coinduction, Setω, more powerful `mutual`, explicit multiplicity, cubical? etc.), but I'd say the need for them is less common.
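As a small illustration of the "unchained" do-notation mentioned above (a sketch; the function and its names are my own, not from any library):

```lean
-- Sketch: early return and mutable locals in Lean 4's do-notation.
def firstOverLimit (xs : List Nat) (limit : Nat) : Option Nat := Id.run do
  let mut total := 0
  for x in xs do
    total := total + x
    if total > limit then
      return some x  -- early return out of the whole do-block
  return none
```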
IntelliJ Arend probably has the most comprehensive support for HoTT among the proof systems: https://arend-lang.github.io/ . Not a lot in the way of tutorials, though; just the official documentation.
Damn. One of the most powerful experiences of my life was working for someone with ALS. They could only communicate via moving and blinking their eyes slightly as I cued them. This person pivoted their career and was actively researching the disease locking them into their brain.
One day they played a trick on me (I was basically a human autocomplete; I had taken some graduate courses in their field and was able to help finish the proposal they were writing). They made a joke in the Word document. It made me laugh, and as I was looking into their eyes I could see the joy that brought them. I cried so much that night.
It really, really gets to me. Every time I saw a new eye-tracker device or anything similar, I would email their spouse. I fuckin hate that disease, but I'm amazed at the human spirit I've seen it reveal.
My mom was a nurse who spent a significant portion of her career caring for people with neurodegenerative disorders. It's a very difficult thing to be in proximity of.
I remember one specific patient who was a child; his parents were going to extreme lengths to have him live like a normal kid. I forget the specific disease, but for the last few years of his life he had no functioning nervous system and was kept alive by machines until the parents could bring themselves to let him go.
I think about these situations a lot and yeah -- it brings out a range of strong emotions.
It’s extremely cruel but as you saw people can be resilient. The artist TransFatty made a very moving, sad, brilliant, and even funny documentary about his experience being diagnosed with ALS.
Ugh, the first paragraph kicks off with a missing bit of information: Min Nan has at least 5 subvarieties. In my spouse's, it's pronounced "de", not "te" (and Min Nan is her first language). She laughed and said there's no way the people in Xiamen said "te", nor does it occur in any of the subvarieties she is familiar with within coastal Fujian, where she spent the first 25 years of her life.
So, a failed look at archaic mis-transliterations, kind of reminiscent of the entire theme of the article: the example map is built on an argument from a bad assumption (a historical error as a pretext for argument).
"de"/"te" as in English, or "de"/"te" as in pinyin? She and the article could well both be right if she's talking about the pinyin pronunciation and the article isn't.
As was common for Chinese romanization systems from before the 20th century, it used 't' for unaspirated /t/ (for which we would use 'd' today in systems like Pinyin for Mandarin or Peng'im for Teochew), and 'th' for aspirated /tʰ/ (for which we would use 't' today in systems like Pinyin or Peng'im).
This isn't an archaic mis-transliteration; it's just an alternative transliteration strategy. In many languages that primarily use the Latin alphabet, the phonemes associated with 't' and 'd' are /t/ and /d/, distinguished primarily by voicing rather than aspiration (where aspiration is allophonic), so it's logical that the creators of earlier romanization systems focused on preserving that voicing distinction, even if it's less common today for a variety of reasons.
While I do not doubt your spouse, this is beside the point. The point is that Min Nan pronounces the first phoneme of the word for "tea" as a dental stop, while other Chinese variants/languages realize the same phoneme as a dental affricate.
I do not know if the article's author/cartographer ever studied linguistics or phonetics, but this is the main takeaway from the map for me, a linguist: the pattern is the message, not the somewhat imprecise data points.
The new result (with explicit link to the arXiv as well as the author's home page) is linked in the fourth paragraph, and it only appears that far down because the first three paragraphs very efficiently provide background on the problem and recent results. The whole thing is an excellent general-audiences article explaining a complex theoretical result with illustrations and accessible links to all the relevant source material. I'm really glad we have Quanta and I'm not sure how this reporting could have been handled better.
EDIT: I was going to write a snark-ish comment that someone would eventually complain about the post title only to refresh a second later and see that someone changed the title already.
“Major algorithmic goal” was completely fine and says something quite different from “new, faster”.
Also, according to HN’s guidelines, the post title shouldn’t have changed in this situation:
“Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.”
The original title was not misleading or linkbait.
Wish I saw this write-up in like 2009.
The terminology in cohomology was pretty opaque to me until I (much later) learned the concepts via a backwards mapping from deeper applications of algebraic geometry. I would have learned it much more easily if I had understood this earlier.
I don't think "learning deeper applications of algebraic geometry" has anything to do with "deep learning" in a machine learning context.
My best guess is that OP had a course about rather abstract homological algebra, which he only grokked after learning about applications in algebraic geometry, which were "deeper" in some sense.
Yeah, I chuckle every time people rediscover Hilbert's functional analysis and the idea of an orthogonal basis for function spaces (it goes well beyond Hilbert's wildest dreams too!!).
Also, re: the person who said no to you... wave mechanics is written directly in Hilbert's language...
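For anyone who wants the one-line version of what keeps getting rediscovered: in a Hilbert space with an orthonormal basis {φ_n}, every element expands as

```latex
f = \sum_n \langle f, \varphi_n \rangle \, \varphi_n
```

which is exactly the pattern behind Fourier series and its many reinventions.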
> Math is easy if you build up from fundamentals. It's not like physics education, where you say "but let's delete everything before because it had an oversimplifying assumption"; rather, if you build your knowledge entirely sequentially from things you know or assume, you build up a toolbag that applies literally everywhere.
> So math isn't hard. Learning random bits of math out of context is hard. Climb the ladder once, you have it for life.
Hopefully for this person that sticks.